netket.driver.SteadyState#

class netket.driver.SteadyState[source]#

Bases: AbstractVariationalDriver

Steady-state driver for open quantum systems, minimizing the expectation value of L^†L (which vanishes exactly on the steady state of the Lindbladian L).

__init__(lindbladian, optimizer, *, variational_state, preconditioner=<function identity_preconditioner>)[source]#

Initializes the driver class.

Parameters:
  • lindbladian (AbstractSuperOperator) – The Lindbladian of the system.

  • optimizer (Any) – Determines how optimization steps are performed given the bare energy gradient.

  • preconditioner (Callable[[VariationalState, Any, Optional[Any]], Any]) – Determines which preconditioner to use for the loss gradient. This can be a tuple of (object, solver), as described in the preconditioners section of the documentation. The standard preconditioner included with NetKet is Stochastic Reconfiguration. By default no preconditioner is used, and the bare gradient is passed to the optimizer.

  • variational_state (MCMixedState) – The mixed variational state to be optimized.

Attributes
ldagl#

The estimated expectation value of L^†L in the current state of the driver.

optimizer#

The optimizer used to update the parameters at every iteration.

preconditioner#

The preconditioner used to modify the gradient.

This is a function with the following signature

preconditioner(vstate: VariationalState,
               grad: PyTree,
               step: Optional[Scalar] = None)

Here the first argument is the variational state, the second is the PyTree of the gradient to precondition, and the last, optional argument is the step number, which can be used to vary preconditioner parameters along the optimisation.

Often, this is taken to be SR(). If it is set to None, then the identity is used.
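
As a concrete illustration, any callable with the signature above can serve as a preconditioner. The two functions below are plain-Python sketches, not NetKet's internal implementation; the step-dependent rescaling is a hypothetical example, and the gradient is assumed to be a flat dict of floats for simplicity.

```python
def identity_preconditioner(vstate, grad, step=None):
    """Return the gradient unmodified (the documented default behaviour)."""
    return grad


def scaled_preconditioner(vstate, grad, step=None):
    """Hypothetical example: damp the gradient as the optimisation proceeds."""
    scale = 1.0 if step is None else 1.0 / (1.0 + 0.01 * step)
    # grad is a PyTree; for this sketch we assume a flat dict of floats
    return {k: scale * v for k, v in grad.items()}
```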

state#

Returns the variational state that is optimized by this driver.

step_count#

Returns a monotonic integer labelling all the steps performed by this driver. This can be used, for example, to identify the line in a log file.

Methods
advance(steps=1)#

Performs the given number of optimization steps.

Parameters:

steps (int) – (Default=1) number of steps.

estimate(observables)#

Return MCMC statistics for the expectation value of observables in the current state of the driver.

Parameters:

observables – A pytree of operators for which statistics should be computed.

Returns:

A pytree of the same structure as the input, containing MCMC statistics for the corresponding operators as leaves.

iter(n_steps, step=1)#

Returns a generator which advances the optimization, yielding after every step steps.

Parameters:
  • n_steps (int) – The total number of steps to perform (this is equivalent to the length of the iterator)

  • step (int) – The number of internal steps the simulation is advanced between yielding from the iterator

Yields:

int – The current step.
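
The relationship between advance, iter, and step_count can be illustrated with a toy stand-in that follows the documented contract (this is not NetKet code, and the exact point at which NetKet's iterator yields within a step may differ):

```python
class ToyDriver:
    """Minimal stand-in mimicking the documented advance/iter contract."""

    def __init__(self):
        self.step_count = 0

    def advance(self, steps=1):
        # perform `steps` optimization steps (here we only count them)
        self.step_count += steps

    def iter(self, n_steps, step=1):
        # advance `step` steps at a time, yielding the current step count
        for _ in range(0, n_steps, step):
            self.advance(step)
            yield self.step_count


driver = ToyDriver()
print(list(driver.iter(10, step=2)))  # [2, 4, 6, 8, 10]
```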

reset()#

Resets the driver.

Subclasses should make sure to call super().reset() to ensure that the step count is set to 0.

run(n_iter, out=(), obs=None, step_size=1, show_progress=True, save_params_every=50, write_every=50, callback=<function AbstractVariationalDriver.<lambda>>)#

Runs this variational driver, updating the weights of the network stored in this driver for n_iter steps and dumping values of the observables obs in the output logger.

It is possible to control more specifically what quantities are logged, when to stop the optimisation, or to execute arbitrary code at every step by specifying one or more callbacks, which are passed as a list of functions to the keyword argument callback.

Callbacks are functions that follow this signature:

def callback(step, log_data, driver) -> bool:
    ...
    return True/False

If a callback returns True, the optimisation continues, otherwise it is stopped. The log_data is a dictionary that can be modified in-place to change what is logged at every step. For example, this can be used to log additional quantities such as the acceptance rate of a sampler.
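
For example, a callback that logs an extra quantity and stops the run once the loss drops below a threshold might look like the sketch below. The log key "LdagL" is an assumption about what this driver logs; check your own log output for the actual key name.

```python
def my_callback(step, log_data, driver):
    # add an extra quantity to the log (any loggable value works)
    log_data["step_squared"] = step**2
    # stop once the logged loss (key name assumed here) is small enough;
    # logged statistics may expose a .mean attribute, plain floats do not
    loss = log_data.get("LdagL")
    if loss is not None and abs(float(getattr(loss, "mean", loss))) < 1e-6:
        return False  # returning False stops the optimisation
    return True  # returning True lets it continue
```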

Loggers are specified as an iterable passed to the keyword argument out. If only a string is specified, this will create by default a nk.logging.JsonLog. For details on the output format, check its documentation. The logger object is also returned at the end of this function, so that you can inspect the results without reading the JSON output.

Parameters:
  • n_iter (int) – the total number of iterations to be performed during this run.

  • out (Optional[Iterable[AbstractLog]]) – A logger object, or an iterable of loggers, to be used to store simulation log and data. If this argument is a string, it will be used as output prefix for the standard JSON logger.

  • obs (Optional[dict[str, AbstractObservable]]) – A dictionary of observables (keyed by name) that should be computed at every logged step

  • step_size (int) – Every how many steps should observables be logged to disk (default=1)

  • callback (Callable[[int, dict, AbstractVariationalDriver], bool]) – A callable, or list of callables, invoked at every step; returning False from any of them stops the run

  • show_progress (bool) – If true displays a progress bar (default=True)

  • save_params_every (int) – Every how many steps the parameters of the network should be serialized to disk (ignored if logger is provided)

  • write_every (int) – Every how many steps the json data should be flushed to disk (ignored if logger is provided)

update_parameters(dp)#

Updates the parameters of the variational state using the optimizer in this driver.

Parameters:

dp – the pytree containing the updates to the parameters
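
Conceptually, the driver hands dp to its optimizer, which turns it into a parameter update. The sketch below shows that flow with plain SGD on a flat dict; it is a toy stand-in, not NetKet's actual (optax-based) implementation, and the learning rate is an illustrative choice.

```python
def sgd_update_parameters(params, dp, learning_rate=0.1):
    """Toy stand-in: apply the update pytree dp with plain SGD."""
    # a real driver delegates this to its optimizer object
    return {k: params[k] - learning_rate * dp[k] for k in params}


params = {"w": 1.0, "b": 0.5}
params = sgd_update_parameters(params, {"w": 2.0, "b": -1.0})
print(params)  # w ≈ 0.8, b ≈ 0.6
```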