netket.models.ARNNConv1D#

class netket.models.ARNNConv1D[source]#

Bases: ARNNSequential

Autoregressive neural network with 1D convolution layers.
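
A minimal usage sketch, assuming the standard NetKet/Flax init/apply workflow (the hyperparameter values below are illustrative, not defaults):

import jax
import netket as nk

# Homogeneous, unconstrained Hilbert space: 10 spin-1/2 sites
hi = nk.hilbert.Spin(s=1 / 2, N=10)

# Autoregressive CNN with 3 layers, 16 features per hidden layer,
# and a convolutional kernel of length 4
model = nk.models.ARNNConv1D(hilbert=hi, layers=3, features=16, kernel_size=4)

# Draw a batch of configurations and evaluate log psi
sigma = hi.random_state(jax.random.PRNGKey(0), 8)   # shape (8, 10)
params = model.init(jax.random.PRNGKey(1), sigma)
log_psi = model.apply(params, sigma)                 # shape (8,)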

Attributes
kernel_dilation: int = 1#

dilation factor of the convolution kernel (default: 1).

machine_pow: int = 2#

exponent to normalize the outputs of __call__.
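
With the default machine_pow = 2, the autoregressive construction yields a state normalized under the Born rule:

\[\sum_s |\psi(s)|^{2} = 1\]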

precision: Any = None#

numerical precision of the computation, see jax.lax.Precision for details.

use_bias: bool = True#

whether to add a bias to the output (default: True).

layers: int#

number of layers.

features: Union[tuple[int, ...], int]#

output feature density in each layer. If a single number is given, all layers except the last one will have the same number of features.

kernel_size: int#

length of the convolutional kernel.

kernel_init: Callable[[Any, Sequence[int], Any], Union[ndarray, Array]]#

initializer for the weights.

bias_init: Callable[[Any, Sequence[int], Any], Union[ndarray, Array]]#

initializer for the biases.
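
Both initializers follow the standard JAX signature (key, shape, dtype) -> Array, so any function from jax.nn.initializers can be passed in. For instance (an illustrative sketch reusing hi from the example above):

from jax.nn.initializers import normal

model = nk.models.ARNNConv1D(
    hilbert=hi, layers=3, features=16, kernel_size=4,
    kernel_init=normal(stddev=0.01),  # small Gaussian weights
)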

hilbert: HomogeneousHilbert#

the Hilbert space. Only homogeneous unconstrained Hilbert spaces are supported.

Methods
__call__(inputs)#

Computes the log wave-functions for input configurations.

Parameters:

inputs (Union[ndarray, Array]) – configurations with dimensions (batch, Hilbert.size).

Return type:

Union[ndarray, Array]

Returns:

The log psi with dimension (batch,).

activation()#

selu applied separately to the real and imaginary parts of its input.

The docstring of the original function follows.

Scaled exponential linear unit activation.

Computes the element-wise function:

\[\begin{split}\mathrm{selu}(x) = \lambda \begin{cases} x, & x > 0\\ \alpha e^x - \alpha, & x \le 0 \end{cases}\end{split}\]

where \(\lambda = 1.0507009873554804934193349852946\) and \(\alpha = 1.6732632423543772848170429916717\).

For more information, see Self-Normalizing Neural Networks.

Args:

x : input array

Returns:

An array.

See also:

elu()

Return type:

Array
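
For reference, an equivalent activation can be sketched in a few lines of JAX (NetKet ships its own implementation; this is only illustrative):

import jax.numpy as jnp
from jax.nn import selu

def reim_selu(x):
    # Apply selu separately to the real and imaginary parts
    # and recombine them into a complex output.
    return selu(x.real) + 1j * selu(x.imag)

x = jnp.array([1.0 - 2.0j, -0.5 + 0.3j])
y = reim_selu(x)  # complex array with the same shape as x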