netket.sampler.ARDirectSampler#

class netket.sampler.ARDirectSampler#

Bases: netket.sampler.Sampler

Direct sampler for autoregressive neural networks.

This sampler only works with Flax models. The model must expose a specific method, model._conditional, which, given a batch of (partial) samples and an index i ∈ [0, self.hilbert.size), must return the vector of conditional probabilities at site i for the various (partial) samples provided.

In short, if your model can be sampled according to a probability $p(x) = p_1(x_1)\,p_2(x_2|x_1)\dots p_N(x_N|x_{N-1}\dots x_1)$, then model._conditional(x, i) should return $p_i(x)$.

NetKet implements some autoregressive networks that can be used together with this sampler.
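To make the chain-rule factorization concrete, here is a toy ancestral-sampling sketch in plain NumPy. The logistic conditional is an invented stand-in for a neural conditional, not a NetKet model; it only illustrates how a direct sampler draws each site from $p_i(x_i|x_{<i})$ in turn.

```python
import numpy as np

# Toy autoregressive distribution over N binary spins: each conditional
# p_i(x_i = 1 | x_{<i}) is a logistic function of the running sum of the
# previously sampled sites (a stand-in for a neural conditional).
rng = np.random.default_rng(0)
N = 4

def conditional(x_partial, i):
    """Return p_i(x_i = 1 | x_{<i}) for a batch of partial samples."""
    s = x_partial[:, :i].sum(axis=1)
    return 1.0 / (1.0 + np.exp(-(0.5 * s - 0.1 * i)))

def sample_direct(n_samples):
    """Ancestral (direct) sampling: draw x_i from its conditional, site by site."""
    x = np.zeros((n_samples, N))
    for i in range(N):
        p1 = conditional(x, i)
        x[:, i] = (rng.random(n_samples) < p1).astype(float)
    return x

samples = sample_direct(8)
```

Because every sample is built site-by-site from exact conditionals, the draws are independent and exactly distributed, which is what makes this sampler exact (no Markov chain, no autocorrelation).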

Inheritance
Inheritance diagram of netket.sampler.ARDirectSampler
__init__(*args, __precompute_cached_properties=False, __skip_preprocess=False, **kwargs)#

Construct an autoregressive direct sampler.

Parameters
  • hilbert – The Hilbert space to sample.

  • dtype – The dtype of the states sampled (default = np.float64).

Note

ARDirectSampler.machine_pow has no effect. Please set the model's machine_pow instead.

Attributes
is_exact#

Returns True because the sampler is exact.

The sampler is exact if all the samples are exactly distributed according to the chosen power of the variational state, and there is no correlation among them.

machine_pow: int = 2#
n_batches#

The batch size of the configuration $\sigma$ used by this sampler.

In general, it is equivalent to n_chains_per_rank.

Return type

int

n_chains#

The total number of independent chains across all MPI ranks.

If you are not using MPI, this is equal to n_chains_per_rank.

Return type

int

n_chains_per_rank: int = None#
hilbert: netket.hilbert.AbstractHilbert#
Methods
init_state(machine, parameters, seed=None)#

Creates the structure holding the state of the sampler.

If you want reproducible samples, you should specify seed, otherwise the state will be initialised randomly.

If running across several MPI processes, all `sampler_state`s are guaranteed to be in a different (but deterministic) state. This is achieved by first reducing (summing) the seed provided to every MPI rank, then generating `n_rank` seeds starting from the reduced one; every rank is initialized with one of those seeds.

The resulting state is guaranteed to be a frozen Python dataclass (in particular, a Flax dataclass), and it can be serialized using Flax serialization methods.
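The per-rank seeding scheme can be sketched as follows. This is a plain NumPy illustration: the MPI reduction is mocked by assuming every rank provided the same seed, and `SeedSequence` stands in for whatever key-derivation NetKet actually uses internally.

```python
import numpy as np

def per_rank_seeds(seed, n_ranks):
    """Sketch of the seeding scheme: sum the seeds provided by all ranks,
    then deterministically derive one distinct child seed per rank."""
    # In the real sampler the reduction is an MPI all-reduce; here every
    # rank is assumed to have provided the same `seed`.
    reduced = seed * n_ranks
    ss = np.random.SeedSequence(reduced)
    # Spawn n_ranks independent child sequences, one per rank.
    return [int(child.generate_state(1)[0]) for child in ss.spawn(n_ranks)]

seeds = per_rank_seeds(seed=1234, n_ranks=4)
```

Every rank runs the same derivation, so the seeds are deterministic given the input seed, yet each rank ends up with a different one.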

Parameters
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • seed (Union[int, Any, None]) – An optional seed or jax PRNGKey. If not specified, a random seed will be used.

Return type

SamplerState

Returns

The structure holding the state of the sampler. In general you should not expect it to be in a valid state, and should reset it before use.

log_pdf(model)#

Returns a closure with the log-pdf function encoded by this sampler.

Parameters

model (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

Return type

Callable

Returns

The log-probability density function.

Note

The result is returned as a HashablePartial so that the closure does not trigger recompilation.
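The idea behind a hashable partial can be sketched in plain Python. This is an illustrative reimplementation, not NetKet's actual HashablePartial: two wrappers built from the same function and arguments compare and hash equal, so a compilation cache keyed on the callable gets a hit instead of recompiling.

```python
import functools

class HashablePartial(functools.partial):
    """Partial whose hash/equality depend on the wrapped function and
    arguments, not on object identity."""
    def __hash__(self):
        return hash((self.func, self.args, tuple(sorted(self.keywords.items()))))
    def __eq__(self, other):
        return (isinstance(other, HashablePartial)
                and self.func is other.func
                and self.args == other.args
                and self.keywords == other.keywords)

def log_pdf(params, x, power=2):
    # Hypothetical stand-in for the closure returned by log_pdf(model).
    return power * x

f1 = HashablePartial(log_pdf, power=2)
f2 = HashablePartial(log_pdf, power=2)
# Plain functools.partial objects hash by identity, so rebuilding the
# closure would look like a new callable to a jit cache and trigger
# recompilation; these two instead compare equal.
```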

replace(**updates)#

Returns a new object replacing the specified fields with new values.

reset(machine, parameters, state=None)#

Resets the state of the sampler. To be used every time the parameters are changed.

Parameters
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, it will be constructed by calling sampler.init_state(machine, parameters) with a random seed.

Return type

SamplerState

Returns

A valid sampler state.

sample(machine, parameters, *, state=None, chain_length=1)#

Samples chain_length batches of samples along the chains.

Parameters
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, then initialize and reset it.

  • chain_length (int) – The length of the chains (default = 1).

Returns

σ: The generated batches of samples.

state: The new state of the sampler.
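The returned pair can be illustrated with a toy stand-in that mirrors the contract. The shape convention (chain_length, n_batches, hilbert.size) and all names here are illustrative assumptions for the sketch, not NetKet's implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass(frozen=True)
class ToySamplerState:
    # Stand-in for the frozen (Flax) dataclass state: carries only an RNG key.
    key: int

def sample(machine, parameters, *, state, chain_length=1, n_batches=2, size=3):
    """Toy mirror of the sample() contract: returns (sigma, new_state)."""
    rng = np.random.default_rng(state.key)
    sigma = rng.integers(0, 2, size=(chain_length, n_batches, size)).astype(float)
    # A fresh state is returned rather than mutating the old (frozen) one.
    return sigma, ToySamplerState(key=state.key + 1)

sigma, new_state = sample(None, None, state=ToySamplerState(key=0), chain_length=5)
```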

samples(machine, parameters, *, state=None, chain_length=1)#

Returns a generator sampling chain_length batches of samples along the chains.

Parameters
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, then initialize and reset it.

  • chain_length (int) – The length of the chains (default = 1).

Return type

Iterator[ndarray]