A recurrent neural network, implemented as a torch::nn_module, designed to predict generalized Pareto distribution (GPD) parameters under sequential dependence.

Usage

Recurrent_GPD_net(
  type = c("lstm", "gru"),
  nb_input_features,
  hidden_size,
  num_layers = 1,
  dropout = 0,
  shape_fixed = FALSE,
  device = default_device()
)

Arguments

type

the type of recurrent architecture, one of "lstm" (default) or "gru".

nb_input_features

the input size (i.e. the number of features).

hidden_size

the dimension of the hidden latent state variables in the recurrent network.

num_layers

the number of recurrent layers.

dropout

dropout probability applied before each hidden layer, for regularization during training.

shape_fixed

whether the shape estimate is held fixed, i.e. does not depend on the covariates (logical).

device

a torch::torch_device() for an internal constant vector. Defaults to default_device().

Details

The constructor allows specifying:

  • type: the type of recurrent architecture, one of "lstm" (default) or "gru".

  • nb_input_features: the input size (i.e. the number of features).

  • hidden_size: the dimension of the hidden latent state variables in the recurrent network.

  • num_layers: the number of recurrent layers.

  • dropout: dropout probability applied before each hidden layer, for regularization during training.

  • shape_fixed: whether the shape estimate is held fixed, i.e. does not depend on the covariates (logical).

  • device: a torch::torch_device() for an internal constant vector. Defaults to default_device().
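
A minimal usage sketch. This assumes the torch R package is available and that, as is conventional for torch recurrent layers, the module's forward pass accepts a batch of sequences shaped (batch, timesteps, features); the input here is random data for illustration only, and the exact form of the returned GPD parameters should be checked against the package reference:

```r
library(torch)

# Hypothetical example: a GRU-based network for 5 input features,
# a 16-dimensional hidden state, and 2 stacked recurrent layers.
net <- Recurrent_GPD_net(
  type = "gru",
  nb_input_features = 5,
  hidden_size = 16,
  num_layers = 2,
  dropout = 0.1,
  shape_fixed = TRUE
)

# A batch of 32 sequences of length 10 with 5 features each
# (random illustrative data, assumed shape: batch x timesteps x features).
x <- torch_randn(32, 10, 5)

# Forward pass; the output parameterizes the GPD (scale and shape).
params <- net(x)
```

With shape_fixed = TRUE, the shape parameter is modeled as a covariate-independent constant (the "internal constant vector" the device argument refers to), while the scale still varies with the input sequence.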