netket.nn.blocks.MLP#

class netket.nn.blocks.MLP[source]#

Bases: Module

A Multi-Layer Perceptron with hidden layers.

This combines multiple dense layers and activation functions into a single object. The output layer is kept separate from the hidden layers, since it typically has a different form. One can specify a different activation function for each layer. The sizes of the hidden dimensions can be provided as absolute numbers, or as factors relative to the input size (similar to the RBM). The default model is a single linear layer without activations.

Forms a common building block for models such as PauliNet (continuous).
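
For example, a minimal sketch of constructing and evaluating an MLP (the layer sizes, activation, and input shape below are illustrative, not defaults):

```python
import jax
import jax.numpy as jnp
import netket as nk

# Two hidden layers of sizes 16 and 8, tanh after each, linear output of size 1.
model = nk.nn.blocks.MLP(
    output_dim=1,
    hidden_dims=(16, 8),
    hidden_activations=jnp.tanh,  # a single callable is reused for every hidden layer
)

x = jnp.ones((4, 10))  # a batch of 4 inputs of size 10
params = model.init(jax.random.PRNGKey(0), x)
y = model.apply(params, x)  # expected shape: (4, 1)
```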

Attributes
hidden_dims: Union[int, tuple[int, ...], None] = None#

The size of the hidden layers, excluding the output layer.

hidden_dims_alpha: Union[int, tuple[int, ...], None] = None#

The size of the hidden layers, provided as a multiple of the input size. One must specify either this or the hidden_dims keyword argument, but not both.
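
As a sketch of these semantics (assuming each factor multiplies the input size), the following two constructions should be equivalent for inputs of size 10:

```python
import netket as nk

# Assumption: with inputs of size 10, the factors (2, 1) yield hidden layers
# of sizes 20 and 10, matching the explicit specification below.
model_a = nk.nn.blocks.MLP(output_dim=1, hidden_dims_alpha=(2, 1))
model_b = nk.nn.blocks.MLP(output_dim=1, hidden_dims=(20, 10))
```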

output_activation: Optional[Callable] = None#

The nonlinear activation applied at the output layer. If None, the output layer is linear.

output_dim: int = 1#

The output dimension.

precision: Optional[Precision] = None#

Numerical precision of the computation; see jax.lax.Precision for details.

use_hidden_bias: bool = True#

If True, a bias is used in the hidden layers.

use_output_bias: bool = True#

If True, a bias is added to the output layer.

hidden_activations: Union[Callable, tuple[Callable, ...], None]#

The nonlinear activation function applied after each hidden layer. Can be provided as a single activation, in which case the same activation is used after every hidden layer.
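
Per-layer activations can be passed as a tuple with one entry per hidden layer (a sketch; the particular activations are illustrative):

```python
import jax
import jax.numpy as jnp
import netket as nk

# One activation per hidden layer: gelu after the first, tanh after the second.
model = nk.nn.blocks.MLP(
    output_dim=1,
    hidden_dims=(32, 16),
    hidden_activations=(jax.nn.gelu, jnp.tanh),
)
```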

kernel_init: Callable[[Any, Sequence[int], Any], Union[ndarray, Array]]#

Initializer for the Dense layer matrix.

bias_init: Callable[[Any, Sequence[int], Any], Union[ndarray, Array]]#

Initializer for the biases.
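
Custom initializers can be passed using standard JAX initializers (a sketch; the stddev value is illustrative):

```python
import jax
import netket as nk

# Normal-distributed kernels and zero biases for all dense layers (values illustrative).
model = nk.nn.blocks.MLP(
    output_dim=1,
    hidden_dims=(16,),
    kernel_init=jax.nn.initializers.normal(stddev=0.01),
    bias_init=jax.nn.initializers.zeros,
)
```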

Methods
__call__(input)[source]#

Applies the MLP to input and returns the network output.