A fully-connected self-normalizing network (or multi-layer perceptron) as a torch::nn_module, designed for generalized Pareto distribution (GPD) parameter prediction.

Usage

FC_GPD_SNN(D_in, Hidden_vect = c(64, 64, 64), p_drop = 0.01)

Arguments

D_in

the input size (i.e. the number of features),

Hidden_vect

a vector of integers whose length determines the number of hidden layers in the neural network and whose entries give the number of neurons in each corresponding successive layer,

p_drop

probability parameter for the alpha-dropout applied before each hidden layer for regularization during training.

Details

The constructor allows specifying:

  • D_in: the input size (i.e. the number of features),

  • Hidden_vect: a vector of integers whose length determines the number of hidden layers in the neural network and whose entries give the number of neurons in each corresponding successive layer,

  • p_drop: probability parameter for the alpha-dropout applied before each hidden layer for regularization during training.
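
A minimal usage sketch, assuming the package providing FC_GPD_SNN and the torch package are installed. The input and batch sizes here are illustrative, and the assumption that the module outputs two values per observation (the GPD scale and shape parameters) is inferred from the description above, not stated on this page.

```r
library(torch)

# Construct the self-normalizing network for inputs with 10 features,
# using the default architecture of three hidden layers of 64 neurons
# and an alpha-dropout probability of 0.01.
net <- FC_GPD_SNN(D_in = 10, Hidden_vect = c(64, 64, 64), p_drop = 0.01)

# Forward pass on a batch of 32 observations with 10 features each.
# Presumably, each row of the output holds the predicted GPD parameters
# for the corresponding observation.
X <- torch_randn(32, 10)
params <- net(X)
```

Alpha-dropout (used here instead of standard dropout) preserves the zero mean and unit variance that SELU activations maintain, which is what makes the self-normalizing property of Klambauer et al. (2017) hold under regularization.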

References

Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter (2017). Self-Normalizing Neural Networks. Advances in Neural Information Processing Systems 30 (NIPS 2017).