netket.nn#

This sub-module extends flax.linen with layers and tools that are useful for applications in quantum physics. Read more about the design goals of this module in its README.

Linear Modules#

DenseSymm

Implements a projection onto a symmetry group.

DenseEquivariant

A group convolution operation that is equivariant over a symmetry group.
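A minimal sketch of combining DenseSymm with the translation group of a lattice; the input layout (batch, features, sites) and the output shape noted in the comments are assumptions about recent NetKet versions:

```python
import jax
import jax.numpy as jnp
import netket as nk

# A small periodic chain whose translation group provides the symmetries.
graph = nk.graph.Chain(length=8, pbc=True)

# DenseSymm projects its input onto the symmetry group: one output channel
# per group element, for each of the requested features.
layer = nk.nn.DenseSymm(symmetries=graph.translation_group(), features=4)

# Assumed input layout: (batch, input_features, n_sites).
x = jnp.ones((16, 1, graph.n_nodes))
params = layer.init(jax.random.PRNGKey(0), x)
y = layer.apply(params, x)  # assumed output shape: (16, 4, n_group_elements)
```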

The following modules can be used in autoregressive neural networks; see AbstractARNN. A usage sketch follows this list.

MaskedDense1D

1D linear transformation module with mask for autoregressive NN.

MaskedConv1D

1D convolution module with mask for autoregressive NN.

MaskedConv2D

2D convolution module with mask for autoregressive NN.

FastMaskedDense1D

1D linear transformation module with mask for fast autoregressive NN.

FastMaskedConv1D

1D convolution module with mask for fast autoregressive NN.

FastMaskedConv2D

2D convolution module with mask for fast autoregressive NN.
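A hedged sketch of a single masked layer; the (batch, sites, features) input layout used below is an assumption about the current API:

```python
import jax
import jax.numpy as jnp
import netket as nk

# With exclusive=True the output at site i is masked so that it depends only
# on inputs at sites j < i, as required for the first autoregressive layer.
layer = nk.nn.MaskedDense1D(features=8, exclusive=True)

# Assumed input layout: (batch, n_sites, input_features).
x = jnp.ones((4, 10, 1))
params = layer.init(jax.random.PRNGKey(0), x)
y = layer.apply(params, x)  # assumed output shape: (4, 10, 8)
```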

Activation functions#

activation.reim(f)

Modifies a non-linearity to act separately on the real and imaginary parts.

activation.reim_relu()

relu applied separately to the real and imaginary parts of its input.

activation.reim_selu()

selu applied separately to the real and imaginary parts of its input.

activation.log_cosh(x)

Logarithm of the hyperbolic cosine, implemented in a more stable way.

activation.log_sinh(x)

Logarithm of the hyperbolic sine.

activation.log_tanh(x)

Logarithm of the hyperbolic tangent.
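For illustration, these activations can be applied directly to complex arrays; the sketch below only assumes they accept complex inputs as described above:

```python
import jax
import jax.numpy as jnp
import netket as nk

z = jnp.array([1.0 + 2.0j, -0.5 - 0.3j])

# Numerically stable log(cosh(z)), a common choice for RBM-like wavefunctions.
nk.nn.activation.log_cosh(z)

# relu applied independently to the real and imaginary parts of z.
nk.nn.activation.reim_relu(z)

# reim builds such a "split" non-linearity from any real activation, e.g. jax.nn.gelu.
gelu_reim = nk.nn.activation.reim(jax.nn.gelu)
gelu_reim(z)
```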

Miscellaneous Functions#

binary_encoding(hilbert, x, *[, max_bits])

Encodes the array x into a set of binary-encoded variables described by the shape of a Hilbert space.

states_to_numbers(hilbert, σ)

Converts the configuration σ to a 32-bit integer denoting its index in the full Hilbert space.
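A small sketch of both helpers on a spin Hilbert space (the exact shapes and dtypes of the outputs are not spelled out here and may differ):

```python
import jax
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
sigma = hi.random_state(jax.random.PRNGKey(0), 3)  # 3 random configurations

# Index of each configuration within the full Hilbert-space basis.
idx = nk.nn.states_to_numbers(hi, sigma)

# Binary encoding of the same configurations, e.g. as input to dense layers.
bits = nk.nn.binary_encoding(hi, sigma)
```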

Utility functions#

to_array(hilbert, apply_fun, variables, *[, ...])

Computes apply_fun(variables, states) on all states of hilbert and returns the result as a dense array.

to_matrix(hilbert, machine, params, *[, ...])

Return type: Union[ndarray, Array]
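A minimal sketch of to_array with an RBM ansatz; it assumes the standard flax init/apply workflow and a Hilbert space small enough to enumerate:

```python
import jax
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
model = nk.models.RBM(alpha=1)

# Standard flax workflow: initialise on a dummy batch of configurations.
params = model.init(jax.random.PRNGKey(0), hi.random_state(jax.random.PRNGKey(1), 1))

# Dense vector of wavefunction amplitudes over the whole (small!) Hilbert space.
psi = nk.nn.to_array(hi, model.apply, params)  # shape (hi.n_states,)
```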

Blocks#

blocks.MLP

A Multi-Layer Perceptron with hidden layers.

blocks.DeepSetMLP

Implements the DeepSets architecture, which is permutation invariant and is suitable for the encoding of bosonic systems.

blocks.SymmExpSum

A flax module symmetrizing the log-wavefunction \(\log\psi_\theta(\sigma)\) encoded into another flax module (flax.linen.Module) by summing over all possible symmetries \(g\) in a certain discrete permutation group \(G\).
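A hedged sketch of wrapping an RBM in SymmExpSum; the keyword names module and symm_group are assumptions about the current constructor:

```python
import netket as nk

graph = nk.graph.Chain(length=8, pbc=True)

# Non-symmetric ansatz encoding log psi_theta(sigma).
rbm = nk.models.RBM(alpha=2)

# Symmetrised ansatz summing psi_theta(g sigma) over all translations g.
# The keyword names `module` and `symm_group` are assumptions.
symm_rbm = nk.nn.blocks.SymmExpSum(module=rbm, symm_group=graph.translation_group())
```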

Experimental#

Recurrent Neural Network cells#

The following are RNN layers (in Flax these would be called an RNN), which can be stacked within a netket.experimental.models.RNN.

rnn.RNNLayer

Recurrent neural network layer that maps inputs at N sites to outputs at N sites.

rnn.FastRNNLayer

Recurrent neural network layer with fast sampling.

The following are recurrent cells that can be used with netket.experimental.nn.rnn.RNNLayer.

rnn.RNNCell

Recurrent neural network cell that updates the hidden memory at each site.

rnn.LSTMCell

Long short-term memory cell.

rnn.GRU1DCell

Gated recurrent unit cell.
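As a hedged illustration of how these pieces are meant to be composed, the high-level LSTMNet model (which presumably stacks RNNLayers built from LSTMCells) can be instantiated directly; the field names layers and features are assumptions about the current API:

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=10)

# High-level model that (presumably) stacks RNNLayers built from LSTMCells.
# The field names `layers` and `features` are assumptions about the current API.
model = nk.experimental.models.LSTMNet(hilbert=hi, layers=2, features=8)
```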

The following are utility functions to build up custom autoregressive orderings.

rnn.check_reorder_idx

Check that the reordering indices determining the autoregressive order of an RNN are correctly declared.

rnn.ensure_prev_neighbors

Deduce the missing arguments among reorder_idx, inv_reorder_idx, and prev_neighbors from the specified arguments.

rnn.get_snake_inv_reorder_idx

A helper function to generate the inverse reorder indices in the snake order for a 2D graph.
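A hedged sketch of generating a snake ordering for a square lattice; the assumption that the helper takes the 2D graph as its only argument may not match the exact signature:

```python
import netket as nk
from netket.experimental.nn import rnn

# A 4x4 square lattice; the snake order traverses rows alternately
# left-to-right and right-to-left.
g = nk.graph.Square(4)

# Assumed call signature: the helper takes the 2D lattice and returns the
# inverse reordering indices defining the autoregressive order.
inv_idx = rnn.get_snake_inv_reorder_idx(g)
```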