netket.optimizer.Momentum

netket.optimizer.Momentum(learning_rate, beta=0.9, nesterov=False)

Momentum-based optimizer. The momentum update incorporates an exponentially weighted moving average over previous gradients to speed up descent (Qian, 1999). The momentum vector \(\mathbf{m}\) is initialized to zero. Given a stochastic estimate of the gradient of the cost function \(G(\mathbf{p})\), the updates for the parameter \(p_k\) and the corresponding component of the momentum \(m_k\) are

\[\begin{split}m^\prime_k &= \beta m_k + (1-\beta)G_k(\mathbf{p})\\ p^\prime_k &= p_k - \eta m^\prime_k\end{split}\]
Parameters:
  • learning_rate (float) – The learning rate \(\eta\).

  • beta (float) – Momentum exponential decay rate, should be in [0,1].

  • nesterov (bool) – Whether to apply the Nesterov momentum correction.
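
To make the update rule concrete, the following is a minimal NumPy sketch of a single step implied by the equations above. It is an illustration only, not NetKet's internal implementation; in particular, the Nesterov branch shows one common form of the look-ahead correction and is an assumption, since the exact variant is not spelled out on this page.

import numpy as np

def momentum_step(p, m, grad, eta=0.01, beta=0.9, nesterov=False):
    # m'_k = beta * m_k + (1 - beta) * G_k(p): exponentially weighted
    # moving average over previous gradients
    m_new = beta * m + (1 - beta) * grad
    if nesterov:
        # One common Nesterov look-ahead correction (assumption:
        # NetKet's exact formulation may differ)
        step = beta * m_new + (1 - beta) * grad
    else:
        step = m_new
    # p'_k = p_k - eta * m'_k
    return p - eta * step, m_new

# A few steps on f(p) = p**2, whose gradient is 2 * p
p, m = np.array([1.0]), np.zeros(1)
for _ in range(10):
    p, m = momentum_step(p, m, grad=2 * p)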

Examples

Momentum optimizer.

>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01)
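
The Nesterov correction is enabled via the flag documented above (this simply exercises the signature shown at the top of the page):

>>> op = Momentum(learning_rate=0.01, beta=0.9, nesterov=True)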