netket.experimental.sampler.MetropolisPtSampler#

class netket.experimental.sampler.MetropolisPtSampler[source]#

Bases: MetropolisSampler

Metropolis-Hastings with Parallel Tempering sampler.

This sampler samples a Hilbert space, producing samples of a specific dtype. The samples are generated according to a transition rule that must be specified.

Inheritance
Inheritance diagram of netket.experimental.sampler.MetropolisPtSampler
__init__(*args, n_replicas=32, **kwargs)[source]#

MetropolisSampler is a generic Metropolis-Hastings sampler using a transition rule to perform moves in the Markov Chain. The transition kernel is used to generate a proposed state \(s^\prime\), starting from the current state \(s\). The move is accepted with probability

\[A(s\rightarrow s^\prime) = \mathrm{min}\left (1,\frac{P(s^\prime)}{P(s)} e^{L(s,s^\prime)} \right),\]

where the probability being sampled from is \(P(s)=|M(s)|^p\). Here \(M(s)\) is a user-provided function (the machine), \(p\) is also user-provided with default value \(p=2\), and \(L(s,s^\prime)\) is a suitable correcting factor computed by the transition kernel.
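The acceptance rule above can be sketched in pure Python (the function name is hypothetical; the actual implementation is vectorized over chains in JAX). Working with log-probabilities and clamping the log-ratio at zero implements \(\mathrm{min}(1, \cdot)\) without overflow:

```python
import math
import random

def metropolis_accept(log_p_new, log_p_old, log_l=0.0, rng=random):
    """Accept a proposed move with probability
    min(1, P(s')/P(s) * exp(L(s, s'))), computed in log space."""
    log_ratio = log_p_new - log_p_old + log_l
    # min(0, log_ratio) corresponds to min(1, ratio) after exponentiation
    return rng.random() < math.exp(min(0.0, log_ratio))
```

Moves that increase the probability (log-ratio ≥ 0) are always accepted; downhill moves are accepted with probability equal to the ratio.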

Parameters:
  • hilbert – The Hilbert space to sample.

  • rule – A MetropolisRule to generate random transitions from a given state as well as uniform random states.

  • n_chains – The number of Markov chains to be run in parallel on a single process (default = 8).

  • sweep_size – The number of exchanges that compose a single sweep. If None, sweep_size is equal to the number of degrees of freedom being sampled (the size of the input vector s to the machine).

  • machine_pow – The power to which the machine should be exponentiated to generate the pdf (default = 2).

  • dtype – The dtype of the states sampled (default = np.float32).

  • n_replicas (int) – The number of replicas evolving with different temperatures for every physical Markov chain (default = 32).

Attributes
is_exact#

Returns True if the sampler is exact.

The sampler is exact if all the samples are exactly distributed according to the chosen power of the variational state, and there is no correlation among them.

n_batches#

The batch size of the configuration \(\sigma\) used by this sampler on this jax process.

If you are not using MPI, this is equal to n_chains * n_replicas, but if you are using MPI this is equal to n_chains_per_rank * n_replicas.
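As an illustration of the relation above (the numbers are hypothetical; `n_chains_per_rank` defaults to `n_chains` when no MPI ranks or devices are involved):

```python
n_chains_per_rank = 8   # independent chains owned by this MPI rank / jax process
n_replicas = 32         # temperatures evolved for every physical chain
# n_batches: configurations sigma evaluated per step on this process
n_batches = n_chains_per_rank * n_replicas
```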

n_chains: int#

Total number of independent chains across all MPI ranks and/or devices.

n_chains_per_rank#

The total number of independent chains per MPI rank (or jax device if you set NETKET_EXPERIMENTAL_SHARDING=1).

If you are not distributing the calculation among different MPI ranks or jax devices, this is equal to n_chains.

In general this is equal to

from netket.jax import sharding
sampler.n_chains // sharding.device_count()

n_sweeps#
n_replicas: int#

The number of replicas evolving with different temperatures for every physical Markov chain.

The total number of chains evolved is n_chains * n_replicas.
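The point of the replicas is that configurations can be exchanged between temperatures. A minimal sketch of the standard parallel-tempering swap test (function name hypothetical; here each replica targets the tempered distribution \(P(s)^\beta\)):

```python
import math
import random

def swap_accept(beta_i, beta_j, log_p_i, log_p_j, rng=random):
    """Accept a configuration swap between two tempered replicas with
    probability min(1, exp((beta_i - beta_j) * (log P(s_j) - log P(s_i))))."""
    log_ratio = (beta_i - beta_j) * (log_p_j - log_p_i)
    return rng.random() < math.exp(min(0.0, log_ratio))
```

Swaps that move high-probability configurations toward lower temperatures (larger \(\beta\)) are always accepted, which lets the cold chain escape local modes via the hot replicas.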

rule: MetropolisRule#

The Metropolis transition rule.

sweep_size: int#

Number of sweeps for each step along the chain. Defaults to the number of sites in the Hilbert space.

reset_chains: bool#

If True, the chain state is reset every time reset is called (i.e., at the start of every new sampling).

hilbert: AbstractHilbert#

The Hilbert space to sample.

machine_pow: int#

The power to which the machine should be exponentiated to generate the pdf.

dtype: DType#

The dtype of the states sampled.

Methods
init_state(machine, parameters, seed=None)#

Creates the structure holding the state of the sampler.

If you want reproducible samples, you should specify seed, otherwise the state will be initialised randomly.

If running across several MPI processes, all `sampler_state`s are guaranteed to be in a different (but deterministic) state. This is achieved by first reducing (summing) the seed provided to every MPI rank, then generating `n_rank` seeds starting from the reduced one; every rank is initialized with one of those seeds.

The resulting state is guaranteed to be a frozen Python dataclass (in particular, a Flax dataclass), and it can be serialized using Flax serialization methods.
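The per-rank seeding scheme described above can be sketched as follows (a pure-Python stand-in with a hypothetical function name; the actual implementation uses jax PRNG keys and an MPI reduction):

```python
import random

def per_rank_seeds(local_seeds, n_ranks):
    """Sum the seeds provided on every rank (the 'reduction'), then
    derive one deterministic seed per rank from the reduced value."""
    reduced = sum(local_seeds)
    rng = random.Random(reduced)
    return [rng.randrange(2**63) for _ in range(n_ranks)]
```

Because the reduction is a sum, every rank derives the same seed list regardless of the order of contributions, and each rank then picks the entry at its own index.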

Parameters:
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • seed (Union[int, Any, None]) – An optional seed or jax PRNGKey. If not specified, a random seed will be used.

Return type:

SamplerState

Returns:

The structure holding the state of the sampler. In general you should not expect it to be in a valid state, and should reset it before use.

log_pdf(model)#

Returns a closure with the log-pdf function encoded by this sampler.

Parameters:

model (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

Return type:

Callable

Returns:

The log-probability density function.

Note

The result is returned as a HashablePartial so that the closure does not trigger recompilation.

replace(**kwargs)#

Replace the values of the fields of the object with the values of the keyword arguments. If the object is a dataclass, dataclasses.replace will be used. Otherwise, a new object will be created with the same type as the original object.

Return type:

TypeVar(P, bound= Pytree)

Parameters:
  • self (P)

  • kwargs (Any)

reset(machine, parameters, state=None)#

Resets the state of the sampler. To be used every time the parameters are changed.

Parameters:
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, it will be constructed by calling sampler.init_state(machine, parameters) with a random seed.

Return type:

SamplerState

Returns:

A valid sampler state.

sample(machine, parameters, *, state=None, chain_length=1)#

Samples chain_length batches of samples along the chains.

Parameters:
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, then initialize and reset it.

  • chain_length (int) – The length of the chains (default = 1).

Returns:

σ – The generated batches of samples.

state – The new state of the sampler.

sample_next(machine, parameters, state=None)#

Samples the next state in the Markov chain.

Parameters:
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, then initialize and reset it.

Returns:

state – The new state of the sampler.

σ – The next batch of samples.

Note

The return order is inverted with respect to sample because, when called inside a scan function, the first returned argument must be the state.
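The carry-first convention can be illustrated with a pure-Python stand-in for a scan (both functions are hypothetical stand-ins, not the NetKet or JAX implementations):

```python
def scan(f, state, xs):
    """Minimal stand-in for jax.lax.scan: f maps (carry, x) -> (carry, y).
    The carry must be the first return value of f, which is why
    sample_next returns (state, sigma) rather than (sigma, state)."""
    ys = []
    for x in xs:
        state, y = f(state, x)
        ys.append(y)
    return state, ys

def sample_step(state, _):
    # hypothetical stand-in for sampler.sample_next: advance the chain
    state = state + 1
    return state, state * 10  # (new_state, sample)

final_state, samples = scan(sample_step, 0, range(3))
```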

samples(machine, parameters, *, state=None, chain_length=1)#

Returns a generator sampling chain_length batches of samples along the chains.

Parameters:
  • machine (Union[Callable, Module]) – A Flax module or callable with the forward pass of the log-pdf. If it is a callable, it should have the signature f(parameters, σ) -> jnp.ndarray.

  • parameters (Any) – The PyTree of parameters of the model.

  • state (Optional[SamplerState]) – The current state of the sampler. If not specified, then initialize and reset it.

  • chain_length (int) – The length of the chains (default = 1).

Return type:

Iterator[Array]