A parameter-separated self-normalizing network (or multi-layer perceptron) as a torch::nn_module, designed for generalized Pareto distribution (GPD) parameter prediction.

Usage

Separated_GPD_SNN(
  D_in,
  Hidden_vect_scale = c(64, 64, 64),
  Hidden_vect_shape = c(5, 3),
  p_drop = 0.01
)

Arguments

D_in

the input size (i.e. the number of features).

Hidden_vect_scale

a vector of integers whose length determines the number of layers in the sub-network for the scale parameter and whose entries give the number of neurons in each successive layer.

Hidden_vect_shape

a vector of integers whose length determines the number of layers in the sub-network for the shape parameter and whose entries give the number of neurons in each successive layer.

p_drop

the dropout probability of the alpha-dropout applied before each hidden layer, used for regularization during training.

Details

The constructor allows specifying:

  • D_in: the input size (i.e. the number of features),

  • Hidden_vect_scale: a vector of integers whose length determines the number of layers in the sub-network for the scale parameter and whose entries give the number of neurons in each successive layer,

  • Hidden_vect_shape: a vector of integers whose length determines the number of layers in the sub-network for the shape parameter and whose entries give the number of neurons in each successive layer,

  • p_drop: the dropout probability of the alpha-dropout applied before each hidden layer, for regularization during training.
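For intuition, a minimal sketch of what such a parameter-separated SNN could look like in torch for R is given below: two independent SELU stacks with alpha-dropout, one per GPD parameter. This is an illustration, not the package's actual implementation; the single-output heads, the exp() transform keeping the scale positive, and the helper name snn_stack are assumptions.

library(torch)

# Build one self-normalizing stack: alpha-dropout + linear + SELU per hidden
# layer, followed by a single-output linear head (head size is an assumption).
snn_stack <- function(d_in, hidden, p_drop) {
  sizes <- c(d_in, hidden)
  layers <- list()
  for (i in seq_along(hidden)) {
    layers <- c(layers, list(
      nn_alpha_dropout(p = p_drop),       # alpha-dropout before each hidden layer
      nn_linear(sizes[i], sizes[i + 1]),
      nn_selu()                           # SELU activation (self-normalizing)
    ))
  }
  layers <- c(layers, list(nn_linear(sizes[length(sizes)], 1)))
  do.call(nn_sequential, layers)
}

sketch_gpd_snn <- nn_module(
  "sketch_gpd_snn",
  initialize = function(D_in, Hidden_vect_scale = c(64, 64, 64),
                        Hidden_vect_shape = c(5, 3), p_drop = 0.01) {
    # one sub-network per GPD parameter, hence "parameter-separated"
    self$scale_net <- snn_stack(D_in, Hidden_vect_scale, p_drop)
    self$shape_net <- snn_stack(D_in, Hidden_vect_shape, p_drop)
  },
  forward = function(x) {
    # exp() keeps the predicted GPD scale strictly positive (an assumed
    # output parameterization, not stated on this page)
    torch_cat(list(torch_exp(self$scale_net(x)), self$shape_net(x)), dim = 2)
  }
)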

References

Gunter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter. Self-Normalizing Neural Networks. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
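
Examples

A hypothetical usage sketch, assuming the constructor is exported as shown in Usage and that the resulting module maps a feature matrix to the two GPD parameters per observation:

library(torch)

net <- Separated_GPD_SNN(D_in = 10)  # default hidden sizes c(64, 64, 64) and c(5, 3)

x <- torch_randn(32, 10)  # a batch of 32 observations with 10 features
net$train()               # enable alpha-dropout during training
out <- net(x)             # predicted GPD scale and shape parameters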