# netket.models.GCNN#

class netket.models.GCNN[source]#

Implements a Group Convolutional Neural Network (G-CNN) that outputs a wavefunction that is invariant over a specified symmetry group.

The G-CNN architecture is described in Cohen et al. and was applied to quantum many-body problems in Roth et al.

The G-CNN alternates convolution operations with pointwise non-linearities. The first layer is a symmetrized linear transform given by DenseSymm, while the other layers are G-convolutions given by DenseEquivariant. The hidden layers of the G-CNN are related by the following equation:

${\bf f}^{i+1}_h = \Gamma\left( \sum_g W_{g^{-1} h} {\bf f}^i_g \right).$
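The G-convolution above can be illustrated with a toy numpy sketch (this is not NetKet's implementation, which vectorizes over features and batches; the product table convention shown here, product_table[g, h] = index of $g^{-1}h$, is an assumption of this sketch). For the cyclic group $\mathbb{Z}_4$, composition is addition mod 4, so $g^{-1}h = h - g \bmod 4$:

```python
import numpy as np

# Toy group: Z_4 (e.g. translations on a ring of 4 sites), elements 0..3.
n_symm = 4
# product_table[g, h] = index of g^{-1} h  (here: (h - g) mod 4)
product_table = (np.arange(n_symm)[None, :] - np.arange(n_symm)[:, None]) % n_symm

rng = np.random.default_rng(0)
W = rng.normal(size=n_symm)   # one kernel entry W_g per group element
f = rng.normal(size=n_symm)   # input features f^i_g, one per group element

def g_conv(f, W, product_table, activation=np.tanh):
    """f_out[h] = Gamma( sum_g W[g^{-1} h] * f[g] )."""
    return activation(np.array([
        sum(W[product_table[g, h]] * f[g] for g in range(n_symm))
        for h in range(n_symm)
    ]))

out = g_conv(f, W, product_table)

# Equivariance check: translating the input by a group element k permutes
# the output by the same element, because the kernel depends only on g^{-1} h.
k = 1
shifted = g_conv(f[(np.arange(n_symm) - k) % n_symm], W, product_table)
assert np.allclose(shifted, out[(np.arange(n_symm) - k) % n_symm])
```

Because the kernel is indexed by $g^{-1}h$ rather than by $g$ and $h$ separately, the layer commutes with the group action, which is what makes stacking such layers preserve the symmetry.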
Parameters
• symmetries – A specification of the symmetry group. Can be given by a netket.graph.Graph, a netket.utils.group.PermutationGroup, or an array [n_symm, n_sites] specifying the permutations corresponding to symmetry transformations of the lattice.

• product_table – Product table describing the algebra of the symmetry group. Only needs to be specified if mode='fft' and symmetries is specified as an array.

• irreps – List of 3D tensors that project onto irreducible representations of the symmetry group. Only needs to be specified if mode='irreps' and symmetries is specified as an array.

• point_group – The point group from which the space group is built. If symmetries is a graph, this overrides the graph's default point group.

• mode – One of "fft", "irreps", "matrix", or "auto", specifying whether to use a fast Fourier transform over the translation group, a Fourier transform using the irreducible representations, or the full kernel matrix.

• shape – A tuple specifying the dimensions of the translation group.

• layers – Number of layers (not including sum layer over output).

• features – Number of features in each layer starting from the input. If a single number is given, all layers will have the same number of features.

• characters – Array specifying the characters of the desired symmetry representation.

• parity – Optional argument with value ±1 that specifies the eigenvalue with respect to parity (use only on two-level systems).

• param_dtype – The dtype of the weights.

• activation – The nonlinear activation function between hidden layers. Defaults to netket.nn.activation.reim_selu.

• output_activation – The nonlinear activation before the output.

• equal_amplitudes – If True, forces all basis states to have equal amplitude by setting $\Re(\psi) = 0$.

• use_bias – If True, uses a bias in all layers.

• precision – Numerical precision of the computation; see jax.lax.Precision for details.

• kernel_init – Initializer for the kernels of all layers. Defaults to lecun_normal(in_axis=1, out_axis=0) which guarantees the correct variance of the output. See the documentation of flax.linen.initializers.lecun_normal() for more information.

• bias_init – Initializer for the biases of all layers.

• complex_output – If True, ensures that the network output is always complex. Necessary when network parameters are real but some characters are negative.
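The role of the characters and the final sum layer can be sketched as follows. This is a hypothetical numpy toy, not NetKet's code: after the last G-convolution produces one feature $f_g$ per group element, the output is the character-weighted sum $\psi = \sum_g \chi_g f_g$. With trivial characters ($\chi_g = 1$), the result is invariant under the group:

```python
import numpy as np

n_sites = 4
# Z_4 translations on a ring, as permutations of site indices
perms = np.array([np.roll(np.arange(n_sites), k) for k in range(n_sites)])
chi = np.ones(len(perms))      # trivial characters -> fully symmetric state

rng = np.random.default_rng(1)
W = rng.normal(size=n_sites)   # toy first-layer kernel (DenseSymm analogue)

def log_psi(sigma):
    # one feature per group element: f_g = tanh(W . (g sigma))
    f = np.array([np.tanh(W @ sigma[p]) for p in perms])
    return chi @ f             # character-weighted sum over the group

sigma = rng.choice([-1.0, 1.0], size=n_sites)
# Invariance: translating the configuration leaves the output unchanged,
# because translating sigma merely permutes the features f_g.
assert np.allclose(log_psi(sigma), log_psi(sigma[perms[1]]))
```

Choosing non-trivial characters instead projects onto a different irreducible representation of the group, which is how the same architecture targets non-symmetric sectors.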
