netket.experimental.models.FastRNN#

class netket.experimental.models.FastRNN[source]#

Bases: FastARNNSequential

Base class for recurrent neural networks with fast sampling.

The fast autoregressive sampling is described in Ramachandran et al. To generate one sample with an autoregressive network, the network must be evaluated N times, where N is the number of input sites. However, only one input site changes at each step, and because of the autoregressive property not all intermediate results depend on the changed input, so we can cache the unchanged intermediate results and avoid repeated computation.

This optimization is particularly useful for RNNs, where each output site of a layer depends on only a small number of input sites. A slow RNN must run N RNN steps in each layer during every AR sampling step, while a fast RNN caches the relevant hidden memories in each layer from the previous AR sampling step and runs only one RNN step to incorporate the changed input.
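The saving can be illustrated with a toy one-layer RNN in plain NumPy (the names `rnn_step`, `slow_steps`, and `fast_steps` are illustrative only, not part of the NetKet API):

```python
import numpy as np

def rnn_step(h, x, W, U):
    """One RNN step: update the hidden memory h from one input site x."""
    return np.tanh(W @ h + U * x)

N, H = 6, 4  # number of sites, hidden size
rng = np.random.default_rng(0)
W, U = rng.normal(size=(H, H)), rng.normal(size=H)
x = rng.normal(size=N)  # stands in for the sampled configuration

# Slow sampling: at AR step i the recurrence is rerun from scratch,
# so generating all N sites costs 1 + 2 + ... + N = N(N+1)/2 RNN steps.
slow_steps = 0
for i in range(1, N + 1):
    h = np.zeros(H)
    for j in range(i):
        h = rnn_step(h, x[j], W, U)
        slow_steps += 1
h_slow = h

# Fast sampling: cache the hidden memory from the previous AR step
# and run a single RNN step per site, for N steps in total.
fast_steps = 0
h_fast = np.zeros(H)
for j in range(N):
    h_fast = rnn_step(h_fast, x[j], W, U)
    fast_steps += 1

assert np.allclose(h_slow, h_fast)  # same final memory, far fewer steps
print(slow_steps, fast_steps)  # 21 vs 6 for N = 6
```

Both variants reach the same hidden state; only the amount of recomputation differs, which is exactly what the caching in FastRNN avoids.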

See netket.experimental.models.RNN for an explanation of the arguments related to the autoregressive order.

Attributes
graph: Optional[AbstractGraph] = None#

graph of the physical system.

inv_reorder_idx: Optional[HashableArray] = None#

indices to transform the inputs from ordered to unordered. See netket.models.AbstractARNN.reorder() for details.

machine_pow: int = 2#

exponent to normalize the outputs of __call__.

prev_neighbors: Optional[HashableArray] = None#

previous neighbors of each site.

reorder_idx: Optional[HashableArray] = None#

indices to transform the inputs from unordered to ordered. See netket.models.AbstractARNN.reorder() for details.

layers: int#

number of layers.

features: Union[Iterable[int], int]#

output feature density in each layer. If a single number is given, all layers except the last one will have the same number of features.

kernel_init: Callable[[Any, Sequence[int], Any], Union[ndarray, Array]]#

initializer for the weights.

bias_init: Callable[[Any, Sequence[int], Any], Union[ndarray, Array]]#

initializer for the biases.

hilbert: HomogeneousHilbert#

the Hilbert space. Only homogeneous unconstrained Hilbert spaces are supported.

Methods
__call__(inputs)#

Computes the log wave-functions for input configurations.

Parameters:

inputs (Union[ndarray, Array]) – configurations with dimensions (batch, Hilbert.size).

Return type:

Union[ndarray, Array]

Returns:

The log psi with dimension (batch,).
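The role of machine_pow in the normalization of these outputs can be sketched with toy log-amplitudes (illustrative values, not produced by NetKet):

```python
import numpy as np

machine_pow = 2  # |psi|^machine_pow gives the Born probabilities

# A toy normalized distribution over 4 configurations (illustrative only).
p = np.array([0.1, 0.2, 0.3, 0.4])
log_psi = np.log(p) / machine_pow  # log-amplitudes of a normalized ansatz

# The outputs are normalized so that sum_x |psi(x)|^machine_pow = 1.
norm = np.sum(np.exp(machine_pow * log_psi))
print(norm)  # approximately 1.0
```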

inverse_reorder(inputs, axis=0)[source]#

Transforms an array from ordered to unordered. See reorder.

Parameters:
  • inputs (Union[ndarray, Array]) – an array with ordered layout along a dimension.

  • axis (int) – the dimension to reorder on.

Return type:

Union[ndarray, Array]

Returns:

The array with unordered layout.

reorder(inputs, axis=0)[source]#

Transforms an array from unordered to ordered.

We call a 1D array ‘unordered’ if we need non-trivial indexing to access its elements in the autoregressive order, e.g., a[0], a[1], a[3], a[2] for the snake order. Otherwise, we call it ‘ordered’.

The inputs of conditionals_log_psi, conditionals, conditional, and __call__ are assumed to have unordered layout, and those inputs are always transformed through reorder before evaluating the network.

Subclasses may override reorder and inverse_reorder together to define this transformation.

Parameters:
  • inputs (Union[ndarray, Array]) – an array with unordered layout along a dimension.

  • axis (int) – the dimension to reorder on.

Return type:

Union[ndarray, Array]

Returns:

The array with ordered layout.
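The reorder/inverse_reorder pair can be sketched in plain NumPy for the snake order mentioned above (the index array here is constructed by hand for illustration; NetKet derives it from the graph):

```python
import numpy as np

# Snake order on a 2 x 3 lattice: row 0 left-to-right, row 1 right-to-left.
# Site indices:   0 1 2        visiting order: 0 1 2
#                 3 4 5                        5 4 3
reorder_idx = np.array([0, 1, 2, 5, 4, 3])
inv_reorder_idx = np.argsort(reorder_idx)  # undoes the reordering

a = np.array([10, 11, 12, 13, 14, 15])  # unordered (site-index) layout

ordered = a[reorder_idx]              # reorder: unordered -> ordered
recovered = ordered[inv_reorder_idx]  # inverse_reorder: ordered -> unordered

print(ordered)    # [10 11 12 15 14 13]
print(recovered)  # [10 11 12 13 14 15]
```

The 'ordered' array lists the elements in the autoregressive visiting order, and applying `inv_reorder_idx` recovers the original site-index layout.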