netket.experimental.driver.VMC_SRt#

class netket.experimental.driver.VMC_SRt[source]#

Bases: AbstractVariationalDriver

Energy minimization using Variational Monte Carlo (VMC) and the kernel formulation of Stochastic Reconfiguration (SR). This approach leads to exactly the same parameter updates as standard SR with a diagonal-shift regularization. For this reason, it is equivalent to the standard nk.driver.VMC with the preconditioner nk.optimizer.SR(solver=netket.optimizer.solver.solvers.solve). In the kernel SR framework, the parameter updates can be written as:

\[\delta \theta = \tau X(X^TX + \lambda \mathbb{I}_{2M})^{-1} f,\]

where \(X \in R^{P \times 2M}\) is the concatenation of the real and imaginary part of the centered Jacobian, with P the number of parameters and M the number of samples. The vector f is the concatenation of the real and imaginary part of the centered local energy. Note that, to compute the updates, it is sufficient to invert an \(M\times M\) matrix instead of a \(P\times P\) one. As a consequence, this formulation is useful in the typical deep learning regime where \(P \gg M\).
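The equivalence with standard SR follows from the push-through identity \(X(X^TX + \lambda \mathbb{I})^{-1} = (XX^T + \lambda \mathbb{I})^{-1}X\). The following NumPy sketch, using random stand-in data (the dimensions and values below are illustrative, not NetKet defaults), checks that the small \(2M \times 2M\) solve gives the same update as the large \(P \times P\) one:

```python
import numpy as np

rng = np.random.default_rng(0)
P, M2 = 1000, 20       # number of parameters P >> 2M concatenated samples
lam, tau = 1e-3, 0.01  # diagonal shift and step size (illustrative values)

X = rng.normal(size=(P, M2))  # stand-in for the centered, real/imag-stacked Jacobian
f = rng.normal(size=M2)       # stand-in for the centered local-energy vector

# Kernel-SR update: only a small (2M x 2M) matrix is inverted.
dtheta_kernel = tau * X @ np.linalg.solve(X.T @ X + lam * np.eye(M2), f)

# Standard SR update with diagonal shift: a large (P x P) matrix is inverted.
dtheta_sr = tau * np.linalg.solve(X @ X.T + lam * np.eye(P), X @ f)

# The two formulations produce the same parameter update.
print(np.allclose(dtheta_kernel, dtheta_sr))
```

The cost of the linear solve thus scales with the number of samples rather than the number of parameters, which is the advantage in the \(P \gg M\) regime.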

See R. Rende, L. L. Viteritti, L. Bardone, F. Becca and S. Goldt for a detailed description of the derivation. A similar result can be obtained by minimizing the Fubini-Study distance with a specific constraint; see A. Chen and M. Heyl for details.

Inheritance
Inheritance diagram of netket.experimental.driver.VMC_SRt
__init__(hamiltonian, optimizer, *, diag_shift, linear_solver_fn=<function <lambda>>, jacobian_mode=None, variational_state=None)[source]#

Initializes the driver class.

Parameters:
  • hamiltonian (AbstractOperator) – The Hamiltonian of the system.

  • optimizer (Any) – Determines how optimization steps are performed given the bare energy gradient.

  • diag_shift (Union[Any, Callable[[Union[Array, ndarray, bool_, number, float, int]], Union[Array, ndarray, bool_, number, float, int]]]) – The diagonal shift of the stochastic reconfiguration matrix. Typical values are in the range 1e-4 to 1e-3. Can also be an optax schedule.

  • linear_solver_fn (Callable[[Array, Array], Array]) – Callable to solve the linear system associated with the parameter updates.

  • jacobian_mode (Optional[str]) – The mode used to compute the jacobian of the variational state. Can be 'real' or 'complex' (defaults to the dtype of the output of the model).

  • variational_state (MCState) – The netket.vqs.MCState to be optimised. Other variational states are not supported.

Attributes
energy#

Return MCMC statistics for the expectation value of observables in the current state of the driver.

jacobian_mode#

The mode used to compute the jacobian of the variational state. Can be 'real' or 'complex'.

Real mode truncates the imaginary part of the wavefunction, while complex mode does not. This internally uses netket.jax.jacobian(); see that function for more complete documentation.

optimizer#

The optimizer used to update the parameters at every iteration.

state#

Returns the machine that is optimized by this driver.

step_count#

Returns a monotonic integer labelling all the steps performed by this driver. This can be used, for example, to identify the line in a log file.

Methods
advance(steps=1)#

Performs the given number of optimization steps.

Parameters:

steps (int) – (Default=1) number of steps.

estimate(observables)#

Return MCMC statistics for the expectation value of observables in the current state of the driver.

Parameters:

observables – A pytree of operators for which statistics should be computed.

Returns:

A pytree of the same structure as the input, containing MCMC statistics for the corresponding operators as leaves.

iter(n_steps, step=1)#

Returns a generator which advances the VMC optimization, yielding after every step optimization steps.

Parameters:
  • n_steps (int) – The total number of steps to perform (this is equivalent to the length of the iterator)

  • step (int) – The number of internal steps the simulation is advanced between yielding from the iterator

Yields:

int – The current step.

reset()#

Resets the driver.

Subclasses should make sure to call super().reset() to ensure that the step count is set to 0.

run(n_iter, out=(), obs=None, step_size=1, show_progress=True, save_params_every=50, write_every=50, callback=<function AbstractVariationalDriver.<lambda>>, timeit=False)#

Runs this variational driver, updating the weights of the network stored in this driver for n_iter steps and dumping values of the observables obs in the output logger.

It is possible to control more specifically what quantities are logged, when to stop the optimisation, or to execute arbitrary code at every step by specifying one or more callbacks, which are passed as a list of functions to the keyword argument callback.

Callbacks are functions that follow this signature:

def callback(step, log_data, driver) -> bool:
    ...
    return True/False

If a callback returns True, the optimisation continues; otherwise it is stopped. The log_data is a dictionary that can be modified in place to change what is logged at every step. For example, this can be used to log additional quantities such as the acceptance rate of a sampler.

Loggers are specified as an iterable passed to the keyword argument out. If only a string is specified, a nk.logging.JsonLog is created by default; see its documentation for the output format. The logger object is also returned at the end of this function so that you can inspect the results without reading the JSON output.

Parameters:
  • n_iter (int) – the total number of iterations to be performed during this run.

  • out (Optional[Iterable[AbstractLog]]) – A logger object, or an iterable of loggers, to be used to store simulation log and data. If this argument is a string, it will be used as output prefix for the standard JSON logger.

  • obs (Optional[dict[str, AbstractObservable]]) – A dictionary of observables that should be computed during the run.

  • step_size (int) – Every how many steps should observables be logged to disk (default=1)

  • callback (Callable[[int, dict, AbstractVariationalDriver], bool]) – A callable, or list of callables, invoked at every step; a callback returning False stops the optimisation.

  • show_progress (bool) – If true displays a progress bar (default=True)

  • save_params_every (int) – Every how many steps the parameters of the network should be serialized to disk (ignored if logger is provided)

  • write_every (int) – Every how many steps the json data should be flushed to disk (ignored if logger is provided)

  • timeit (bool) – If True, provide timing information.

update_parameters(dp)#

Updates the parameters of the machine using the optimizer in this driver.

Parameters:

dp – the pytree containing the updates to the parameters