netket.nn.blocks.MLP#
- class netket.nn.blocks.MLP[source]#
Bases: Module
A Multi-Layer Perceptron with hidden layers.
This combines multiple dense layers and activation functions into a single object. It separates the output layer from the hidden layers, since the output layer typically has a different form. One can specify the activation function for each layer individually. The size of the hidden dimensions can be provided as a number, or as a factor relative to the input size (similar to the RBM). The default model is a single linear layer without activations.
Forms a common building block for models such as PauliNet (continuous).
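Below is a minimal usage sketch (not part of the original docs): it builds an MLP with two hidden layers and applies it to a dummy batch. The hidden layer widths and the output_dim field are illustrative assumptions; since MLP is a flax.linen Module, the standard init/apply pattern applies.

```python
import jax
import jax.numpy as jnp
import netket as nk

# Illustrative sketch: two hidden layers of widths 16 and 8, the default
# gelu activation after each hidden layer, and a linear output layer.
model = nk.nn.blocks.MLP(
    output_dim=1,         # assumed field: width of the (linear) output layer
    hidden_dims=(16, 8),  # hidden layer sizes, excluding the output layer
)

x = jnp.ones((4, 10))  # dummy batch: 4 samples with 10 features each
params = model.init(jax.random.PRNGKey(0), x)
y = model.apply(params, x)  # shape (4, 1)
```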
- Attributes
- hidden_dims: tuple[int, ...] | int | None = None#
The size of the hidden layers, excluding the output layer.
- hidden_dims_alpha: tuple[int, ...] | int | None = None#
The size of the hidden layers, provided as a multiple of the input size. One must choose to specify either this or the hidden_dims keyword argument.
- output_activation: Callable | None = None#
The nonlinear activation at the output layer. If None is provided, the output layer will be essentially linear.
- precision: Precision | None = None#
Numerical precision of the computation; see jax.lax.Precision for details.
- use_hidden_bias: bool = True#
If True, uses a bias in the hidden layers.
- hidden_activations: Any | None = <function gelu>#
The nonlinear activation function after each hidden layer. Can be provided as a single activation, in which case the same activation is used for every layer.
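As a sketch of how the attributes above combine (hedged: the tuple form of hidden_activations, with one activation per hidden layer, is inferred from the class description; all sizes are illustrative):

```python
import jax
import jax.numpy as jnp
import netket as nk

# Hidden widths given relative to the input size: 2x and 1x the number of
# input features, with a different activation after each hidden layer.
model = nk.nn.blocks.MLP(
    hidden_dims_alpha=(2, 1),                       # instead of hidden_dims
    hidden_activations=(jax.nn.tanh, jax.nn.gelu),  # assumed per-layer tuple form
    use_hidden_bias=True,                           # default: bias in the hidden layers
    output_activation=None,                         # default: essentially linear output
)

x = jnp.ones((4, 10))  # input size 10 -> hidden widths (20, 10)
params = model.init(jax.random.PRNGKey(0), x)
y = model.apply(params, x)
```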