netket.experimental.QSR#

class netket.experimental.QSR#

Bases: AbstractVariationalDriver

Quantum state reconstruction driver minimizing KL divergence.

This driver variationally reconstructs a target state from measurement data by minimizing the average negative log-likelihood or, equivalently, the KL divergence between the distributions given by the data and by the variational state:

\[\begin{split}&\min_\theta \frac{1}{N_b} \sum_{b=1}^{N_b} \sum_{\sigma_b} q_b(\sigma_b) \log \left[ \frac{q_b(\sigma_b)}{p_{b\theta}(\sigma_b)} \right] \\ &\approx \min_\theta \frac{1}{N_b} \sum_{b=1}^{N_b} \frac{1}{|D_b|} \sum_{\sigma_b \in D_b} [-\log p_{b\theta}(\sigma_b)],\end{split}\]

where \(\theta\) denotes the variational parameters, \(N_b\) is the number of measurement bases, \(q_b(\sigma_b)\) is the probability of obtaining the outcome state \(\sigma_b\) in the measurement basis \(b\) given the target state, \(p_{b\theta}(\sigma_b)\) is the probability of obtaining the outcome state \(\sigma_b\) in the measurement basis \(b\) given the variational state, and \(D_b\) is the dataset collected in the measurement basis \(b\) (so \(|D_b|\) is its size).
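
For concreteness, the empirical estimator on the second line can be written in a few lines of Python. This is an illustrative sketch only, not part of the driver's API; log_p is a hypothetical callable returning \(\log p_{b\theta}(\sigma_b)\):

    import numpy as np

    def empirical_nll(datasets, log_p):
        # datasets: mapping from basis label b to the outcomes D_b in that basis
        # log_p:    hypothetical callable (b, sigma) -> log p_{b, theta}(sigma)
        per_basis = [
            -np.mean([log_p(b, sigma) for sigma in outcomes])  # (1/|D_b|) * sum(-log p)
            for b, outcomes in datasets.items()
        ]
        return np.mean(per_basis)  # average over the N_b bases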

In practice, the noise introduced by mini-batch training hinders the convergence to an accurate reconstruction. To alleviate this problem, we use a control variate method called stochastic variance reduced gradient (SVRG) to reduce the variance of the gradient estimator. Specifically, we update the parameters \(\theta\) according to

\[\theta_{i+1} = \theta_{i} -\eta \left\{ \underbrace{\nabla_\theta \left[\frac{1}{|B_i|}\sum_{\sigma_b\in B_i} \log p_{b\theta_i}(\sigma_b)\right]}_{\text{I: batch gradient}} - \underbrace{\nabla_\theta \left[\frac{1}{|B_i|}\sum_{\sigma_b\in B_i} \log p_{b\tilde{\theta}_i}(\sigma_b)\right]}_{\text{II: control variate}} + \underbrace{\nabla_\theta \left[\frac{1}{N_b} \sum_{b=1}^{N_b} \frac{1}{|D_b|} \sum_{\sigma_b \in D_b} \log p_{b\tilde{\theta}_i}(\sigma_b)\right]}_{\text{III: expectation of control variate}} \right\},\]

where term I is the ordinary batch gradient and term II is the control variate, i.e. the batch gradient evaluated at a set of previous parameters

\[\begin{split}\tilde{\theta}_i = \begin{cases} \theta_i, &i=0 \mod m, \\ \tilde{\theta}_{i-1}, &\text{otherwise}, \end{cases}\end{split}\]

updated every \(m\) iterations, and term III is the expectation value of the control variate, since the mini-batch is sampled uniformly from the whole dataset.
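
Schematically, one SVRG iteration looks as follows. This is an illustrative sketch (the driver performs the update internally); batch_grad and full_grad_snap are hypothetical names for the mini-batch gradient function and the pre-computed full-dataset gradient at the snapshot parameters:

    def svrg_step(theta, theta_snap, full_grad_snap, batch, eta, batch_grad):
        g_batch = batch_grad(theta, batch)    # term I: mini-batch gradient at theta_i
        g_cv = batch_grad(theta_snap, batch)  # term II: same batch, snapshot parameters
        # term III (full_grad_snap) is recomputed only when the snapshot is
        # refreshed, i.e. every m iterations.
        return theta - eta * (g_batch - g_cv + full_grad_snap)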

__init__(training_data, training_batch_size, optimizer, *, variational_state, preconditioner=<function identity_preconditioner>, seed=None, batch_sample_replace=True, control_variate_update_freq=None, chunk_size=None)[source]#

Initializes the QSR driver class.

Parameters:
  • training_data (Union[RawQuantumDataset, Tuple[List, List]]) – A tuple of two arrays (sigma_s, Us), where sigma_s contains the sampled states and Us the corresponding basis rotations.

  • training_batch_size (int) – The training batch size.

  • optimizer – The optimizer to use. You can use optax optimizers or choose from the predefined optimizers that NetKet offers.

  • variational_state (VariationalState) – The variational state to optimize.

  • preconditioner (Optional[Callable[[VariationalState, Any, Optional[Any]], Any]]) – The preconditioner to use. Defaults to identity_preconditioner.

  • seed (Optional[int]) – The RNG seed. Defaults to None.

  • batch_sample_replace (Optional[bool]) – Whether to sample with replacement. Defaults to True.

  • control_variate_update_freq (Union[int, str, None]) – The frequency with which the control variates are updated. Defaults to None. Pass "Adaptive" for an adaptive update frequency, i.e. n_samples // batch_size.

  • chunk_size (Optional[int]) – The chunk size for the control variates. Defaults to None.

Raises:
  • Warning – If the chunk size is not a divisor of the training data size.

  • TypeError – If the training data is not a 2-element tuple.
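
A minimal construction sketch, under stated assumptions: the placeholder sigma_s and Us below exist only to make the snippet self-contained (random configurations paired with identity rotations); real data would come from experimental or simulated measurements, and the exact rotation format is an assumption to be checked against the RawQuantumDataset documentation:

    import jax
    import netket as nk
    import netket.experimental as nkx
    import optax

    hi = nk.hilbert.Spin(s=1 / 2, N=4)
    sampler = nk.sampler.MetropolisLocal(hi)
    vstate = nk.vqs.MCState(sampler, nk.models.RBM(alpha=1), n_samples=512)

    # Placeholder dataset: random configurations with identity "rotations".
    sigma_s = hi.random_state(jax.random.PRNGKey(0), 1000)
    Us = [nk.operator.LocalOperator(hi, constant=1.0)] * 1000

    driver = nkx.QSR(
        training_data=(sigma_s, Us),
        training_batch_size=32,
        optimizer=optax.adam(1e-2),
        variational_state=vstate,
    )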

Attributes
dataset#
optimizer#

The optimizer used to update the parameters at every iteration.

state#

Returns the machine that is optimized by this driver.

step_count#

Returns a monotonic integer labelling all the steps performed by this driver. This can be used, for example, to identify the line in a log file.

Methods
KL(target_state=None, n_shots=None)[source]#

Compute average KL divergence loss over a batch of data.

Parameters:
  • target_state (Union[ndarray, Array]) – the target state.

  • n_shots (Optional[int]) – number of shots per measurement basis.

Warning

Exponentially expensive in the Hilbert space size!

Requires knowledge of the target state!
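
A usage sketch for small systems, assuming the driver and hi from the construction sketch above and a dense target state vector (psi_target here is a hypothetical uniform state):

    import numpy as np

    psi_target = np.ones(hi.n_states) / np.sqrt(hi.n_states)  # hypothetical target
    kl = driver.KL(target_state=psi_target)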

KL_exact(target_state=None, n_shots=1)[source]#

Compute the average KL divergence loss between the variational state and the target state.

Parameters:
  • target_state (Union[ndarray, Array]) – the target state.

  • n_shots (Optional[int]) – number of shots per measurement basis. Defaults to 1.
Return type:

float

Warning

Exponentially expensive in the Hilbert space size!

Requires knowledge of the target state!

KL_whole_training_set(target_state=None, n_shots=None)[source]#

Compute average KL divergence loss over the whole training set.

Parameters:
  • target_state (Union[ndarray, Array]) – the target state.

  • n_shots (Optional[int]) – number of shots per measurement basis.

Warning

Exponentially expensive in the Hilbert space size!

Requires knowledge of the target state!

advance(steps=1)#

Performs the given number of optimization steps.

Parameters:

steps (int) – number of optimization steps to perform. Defaults to 1.

entropy(target_state, n_shots=1, no_cache=False)[source]#

Compute the average entropy of the probability distributions given by the target state in the different measurement bases.

Parameters:
  • target_state (Union[ndarray, Array]) – the target state.

  • n_shots (Optional[int]) – number of shots per measurement basis.

  • no_cache (Optional[bool]) – if True, do not use the cached value.

Return type:

float

Warning

Exponentially expensive in the Hilbert space size!

Requires knowledge of the target state!
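
In the notation used above, the quantity computed is presumably the basis-averaged Shannon entropy

\[\frac{1}{N_b} \sum_{b=1}^{N_b} \left[ -\sum_{\sigma_b} q_b(\sigma_b) \log q_b(\sigma_b) \right],\]

which is useful because subtracting it from the average negative log-likelihood yields the KL divergence in the objective above.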

estimate(observables)#

Return MCMC statistics for the expectation value of observables in the current state of the driver.

Parameters:

observables – A pytree of operators for which statistics should be computed.

Returns:

A pytree of the same structure as the input, containing MCMC statistics for the corresponding operators as leaves.
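
For example, continuing the construction sketch above (sz_tot is an illustrative total-magnetization operator):

    sz_tot = sum(nk.operator.spin.sigmaz(hi, i) for i in range(hi.size))
    stats = driver.estimate(sz_tot)  # statistics with mean, error of mean, ...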

info(depth=0)[source]#

Returns an info string used to print information to screen about this driver.

iter(n_steps, step=1)#

Returns a generator which advances the optimization, yielding after every step steps.

Parameters:
  • n_steps (int) – The total number of steps to perform.

  • step (int) – The number of internal steps the simulation is advanced between yields.

Yields:

int – The current step.
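
A sketch of monitoring the training loss through the generator interface, assuming the driver from the construction sketch above:

    for step in driver.iter(n_steps=100, step=10):
        print(step, driver.nll())  # NLL statistics every 10 optimization steps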

nll(return_stats=True)[source]#

Compute the Negative-Log-Likelihood over a batch of data.

Parameters:

return_stats (Optional[bool]) – if True, return the statistics.

Warning

Exponentially expensive in the Hilbert space size!

nll_whole_training_set(return_stats=True)[source]#

Compute the Negative-Log-Likelihood over the whole training set.

Parameters:

return_stats (Optional[bool]) – if True, return the statistics.

Warning

Exponentially expensive in the Hilbert space size!

reset()#

Resets the driver. Concrete drivers should also call super().reset() to ensure that the step count is set to 0.

run(n_iter, out=None, obs=None, show_progress=True, save_params_every=50, write_every=50, step_size=1, callback=<function AbstractVariationalDriver.<lambda>>)#

Executes the Monte Carlo Variational optimization, updating the weights of the network stored in this driver for n_iter steps and dumping values of the observables obs in the output logger. If no logger is specified, creates a json file at out, overwriting files with the same prefix.

By default this uses nk.logging.JsonLog. To learn about the output format, check its documentation. The logger object is also returned at the end of this function, so that you can inspect the results without reading the json output.

Parameters:
  • n_iter – the total number of iterations

  • out – A logger object, or an iterable of loggers, to be used to store simulation log and data. If this argument is a string, it will be used as output prefix for the standard JSON logger.

  • obs – An iterable containing all observables that should be computed

  • save_params_every – Every how many steps the parameters of the network should be serialized to disk (ignored if logger is provided)

  • write_every – Every how many steps the json data should be flushed to disk (ignored if logger is provided)

  • step_size – Every how many steps should observables be logged to disk (default=1)

  • show_progress – If true displays a progress bar (default=True)

  • callback – Callable or list of callable callback functions to stop training given a condition
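
For instance, continuing the construction sketch above:

    log = nk.logging.JsonLog("qsr_out")  # writes qsr_out.log (and parameter snapshots)
    driver.run(n_iter=200, out=log, show_progress=True)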

update_parameters(dp)#

Updates the parameters of the machine using the optimizer in this driver.

Parameters:

dp – the pytree containing the updates to the parameters