Change Log#
NetKet 3.16 (January 2025)#
Breaking Changes#
- The ordering of the `netket.hilbert.Spin` Hilbert space has been changed to reflect the more rational ordering of spin up == 1 and spin down == -1.
- The default dtype of the samples returned from `random_state()` has been changed to be consistent with the default dtype of the local values, and will generally switch from `jnp.float32` to `jnp.int8` (see the snippet below).
- The default dtype of all `netket.sampler.Sampler`s and their subclasses is now inferred from the Hilbert space, and will generally change from `jnp.float32` to `jnp.int8`.
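As a quick illustration of the new defaults, the following sketch samples a random state (the printed dtype and values are what we would expect under these changes; check against your installed version):

```python
import jax
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
samples = hi.random_state(jax.random.PRNGKey(0), 2)
print(samples.dtype)    # generally jnp.int8 now, instead of jnp.float32
print(hi.local_states)  # local values, with spin up == 1 and spin down == -1
```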
Improvements#
- The default dtype of samples generated by discrete Hilbert spaces is now the smallest dtype that can represent all local degrees of freedom, and is generally much smaller than before #1963.
Deprecations#
- The flag `NETKET_DISABLE_ODE_JIT`, which has long defaulted to `True`, has been removed. ODE integrator drivers now only run outside of jit, because jax has not supported re-entrant jitting for several versions and officially removed support for it in jax 0.5.
Bug Fixes#
- A bug in the deserialization of variational states, which was not properly restoring the correct sharding, has been fixed #1983.
NetKet 3.15 (24 November 2024)#
Improvements#
- The `draw()` method of `Lattice` has been overhauled, and now supports 3D lattices and additional keyword arguments. The defaults are now tuned to draw the whole lattice, including cells repeated due to periodicity, as well as the basis vectors.
- Drivers now call the loggers from all ranks, allowing more advanced logging logic (and checkpointers) to be implemented #1920.
- The `netket.experimental.dynamics` module has been greatly refactored, changing all internal logic but exposing a well-designed, easier-to-extend interface. While the interface is not yet documented, it is now reasonably possible to implement new ODE integrators on top of our interface to be used with `TDVP` or other drivers #1933.
- Timing of the run function with `timeit=True` is now more accurate, even on GPUs, but it will decrease performance #1958.
- The model `netket.models.Jastrow` now constructs its kernel matrix differently, resulting in faster calculations, especially on GPUs. The usage of the class is unchanged and the internal structure of the parameters does not break from previous versions #1964.
- Several under-the-hood changes to better serialize objects containing sharded arrays.
- The `History` objects logged into a logger are now stored in a `HistoryDict` dictionary instead of a standard dictionary. This should be a transparent change, as `HistoryDict` behaves as a standard dictionary, but it will allow for improved serialization and deserialization.
- It is now possible to load a json-serialized `.log` file from standard loggers with `netket.utils.history.HistoryDict.from_file()` (see the example below).
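For example, to reload a serialized log file ("out.log" is a hypothetical file name, as produced by e.g. `nk.logging.JsonLog("out")`):

```python
import netket as nk

data = nk.utils.history.HistoryDict.from_file("out.log")
print(data["Energy"])  # behaves like a standard dict of History objects
```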
Breaking Changes#
- Removed support for using Numba operators under sharding. This never worked reliably, led to incomprehensible crashes, and was very hard to maintain, so it's leaving #1919.
- Loggers will now be called from all MPI ranks / Jax processes, and are themselves responsible for only performing expensive I/O operations on a single rank (such as rank 0). The attribute `netket.logging.AbstractLog._is_master_process` can be used to determine whether the logger is being executed on the master process or not. For examples of how to update loggers, refer to `netket.logging.RuntimeLog` or `netket.logging.TensorboardLog` #1920.
- The `integrator` argument of the `TDVP` and `TDVPSchmitt` constructors has been renamed to `ode_solver`, and a deprecation warning will be raised if `integrator` is specified. The `integrator` attribute of the driver is maintained, albeit with sensibly different internals, and a new `ode_solver` attribute has been added as well #1933.
Deprecations#
- Constructing the `netket.optimizer.SR` object with `SR(qgt=QGTType(...))` is now deprecated. This construction can lead to unexpected results because the keyword arguments specified in the `QGTType` are overwritten by those specified by the SR class and its defaults. To fix this, construct SR as `SR(qgt=QGTType, ...)`. A warning will be raised when using the deprecated syntax, and this will become an error in a future release. See the example below.
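A minimal sketch of the deprecated and recommended constructions (the `holomorphic` keyword is just an illustrative QGT option):

```python
import netket as nk

# Deprecated: keyword arguments given to the QGT constructor may be
# silently overwritten by SR's own defaults.
# sr = nk.optimizer.SR(qgt=nk.optimizer.qgt.QGTJacobianDense(holomorphic=True))

# Recommended: pass the QGT type, with its keyword arguments going to SR.
sr = nk.optimizer.SR(qgt=nk.optimizer.qgt.QGTJacobianDense, holomorphic=True)
```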
NetKet 3.14.4 (7 November 2024)#
- Fix a bug introduced in 3.14.3 when using chunking #1943.
- Remove upper version constraints for flax and numba.
- Support jax 0.4.35.
- Support mpi4py>4.
NetKet 3.14.3 (2 October 2024)#
- Fix an issue in Jax operators, which would not chunk correctly if they had more connected entries than the chunk size #1940.
NetKet 3.14.2 (18 September 2024)#
- Fix an issue in `SpinOrbitalFermions` where the extra constraint would not work without a fermion number constraint #1924.
NetKet 3.14.1 (9 September 2024)#
- Fix a dtype-stability issue in adaptive TDVP integrators #1918.
NetKet 3.14 (⛓️ 4 September 2024)#
New features#
- Hilbert spaces such as `netket.hilbert.Spin` and `netket.hilbert.Fock`, as well as their base class `netket.hilbert.HomogeneousHilbert`, now support arbitrary custom constraints #1908.
- The constraint interface has been stabilised, documented, and made compatible with several utilities. It is now possible to generate random states from arbitrary constrained Hilbert spaces automatically, and it is possible to index into those spaces efficiently. Look at the Hilbert space documentation for more information #1908 (see the sketch below).
- Fermionic Hilbert spaces (`SpinOrbitalFermions`) now support an extra arbitrary constraint that can be specified by passing the keyword argument `constraint=...` #1832.
- Support equinox modules as models in variational states. Note that equinox models by default only work with scalar inputs, while NetKet requires modules that work with batched inputs, so you will have to modify them slightly.
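A sketch of what a custom constraint might look like, based on the constraint interface described above (the base-class name and required methods are taken from the Hilbert space documentation; verify them against your NetKet version):

```python
import jax.numpy as jnp
import netket as nk

class ZeroMagnetization(nk.hilbert.constraint.DiscreteHilbertConstraint):
    # Constraints must be jax-jittable, hashable and comparable.
    def __call__(self, x):
        return jnp.sum(x, axis=-1) == 0

    def __hash__(self):
        return hash("ZeroMagnetization")

    def __eq__(self, other):
        return isinstance(other, ZeroMagnetization)

hi = nk.hilbert.Spin(s=1 / 2, N=4, constraint=ZeroMagnetization())
```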
Breaking Changes#
- Jax operators now use the same `chunk_size` as specified by the user when computing the forward pass. Prior to this change, Jax operators would chunk the sample axis, but if an operator had many connected elements this would end up increasing the effective sample size #1875.
- The Metropolis Hamiltonian sampler for numba operators has been greatly simplified in order to remove the dependency on numba4jax. The new implementation will generally be slower than before, so we encourage you to use Jax operators if possible. In the future, if people ask for it, we may reintroduce this implementation as a separate package #1747.
- The previously-internal Hilbert space constraint sub-module located at `netket.hilbert.index.constraints` has been moved to `netket.hilbert.constraint` #1908.
- Due to improvements to the saving logic, it might no longer be possible to load, under MPI, sampler states saved by previous versions with MPI, as those only contained the sampler state of rank 0, which silently led to the same sampler state across all ranks #1914.
Improvements#
- Specialised lattice constructors like `netket.graph.Grid()` now accept a `point_group` argument, overriding the default (usually maximal) point groups #1879.
- Methods to generate random states are automatically implemented for all `netket.hilbert.HomogeneousHilbert`, constrained or not #1911.
- Serialization of Metropolis sampler states when using MPI will now serialise the parameters across all MPI ranks, not only rank 0 #1914.
- Our implementation of `netket.sampler.MetropolisSampler` had a sub-optimal complexity of `O((sweep_size+1) * n_samples)` instead of `O(sweep_size * n_samples)` because it recomputed the variational function at the beginning of every sweep. This has now been fixed #1915.
Bug fixes#
- Fix the function `netket.graph.SpaceGroupBuilder.space_group_irreps()` throwing away the imaginary part of point-group characters, which led to incorrect space-group characters in some rare cases #1876.
- Fixed bug #1811: it is now possible to serialise sampler states that have new-style jax random number generators #1914.
Finalized deprecations#
Some features that have been deprecated for the last ~24 months have been finally removed from NetKet and will now raise errors. If this is a problem for you, you should install an older version of NetKet.
- Finalized deprecation of the `netket.nn.update_dense_symm` utility, used to change the format of stored parameters for `DenseSymm` layers. The method was used to update from a format used in NetKet v3.2, released in 2021.
- Finalized deprecation of `netket.nn.initializers`, which was deprecated in favor of `jax.nn.initializers` in 2021.
- Finalized deprecation of `netket.nn.Module`, `netket.nn.compact`, `netket.nn.Dense` and similar methods that have been aliases of `flax.linen` since NetKet 3.5 (released in August 2022).
- Finalized deprecation of the `rescale_shift` argument of the `QGTJacobian***` implementations, which was superseded by `diag_scale`. This had been deprecated since NetKet v3.6, released in November 2022.
- Finalized deprecation of preconditioner signatures with only 2 arguments in favour of the new format using 3 arguments, which had been deprecated since NetKet v3.6, released in November 2022.
NetKet 3.13 (11 July 2024)#
New Features#
- Added the observable `netket.experimental.observable.VarianceObservable()` to compute the value and the gradient of the variance of an arbitrary quantum operator #1687.
- Added the function `netket.jax.tree_norm()` to compute the L-p norm of a PyTree, interpreted as a vector of values, without concatenating or ravelling the leaves #1819.
- The default value of `n_discard_per_chain` has been changed to 5, which is a more reasonable number in most cases. It might be low for some applications.
- The sampler `netket.sampler.MetropolisSampler` and all its derivatives now support chunking for the evaluation of the wavefunction at every Metropolis step #1828 (see the example below).
- Add a new function `netket.hilbert.DiscreteHilbert.local_indices_to_states()` to convert integer indices to local configurations #1833.
- Support NetKet's own linear solvers in `netket.experimental.driver.VMC_SRt` #1830.
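For example, chunking is enabled by setting `chunk_size` on the variational state; the sampler now honours it at every Metropolis step (model and sizes are illustrative):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=16)
sampler = nk.sampler.MetropolisLocal(hi)
vstate = nk.vqs.MCState(
    sampler, nk.models.RBM(alpha=1), n_samples=1024, chunk_size=128
)
```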
Deprecations#
- Following the discovery and fix of the Parallel Tempering bugs, `netket.experimental.sampler.MetropolisPt` and related samplers have been stabilised; they should now be constructed as `netket.sampler.ParallelTemperingSampler` #1803.
Improvements#
- Drivers now always log Monte Carlo acceptance if you are using a Monte Carlo sampler #1816.
- `netket.sampler.rules.ExchangeRule` now only proposes exchanges where the local degrees of freedom change #1815.
- All solvers within `netket.optimizer.solver` now automatically return a partial capturing keyword arguments such as `rtol` and `rcond` if called with only the keyword arguments. This can be used to more easily set those solver options when constructing the solver to be passed to SR or other algorithms #1817.
- Unify the initialisation logic of `netket.optimizer.qgt.QGTJacobianDense` and `netket.optimizer.qgt.QGTJacobianPyTree`, providing a single entry point for defining the QGT constructors for custom variational states #1320.
- Fix serialisation of the `netket.sampler.SamplerState` RNG seed, which will now be correct under MPI and sharding #1823.
- Ensure that `netket.hilbert.DoubledSpace` is indexable in more situations when wrapping constrained Hilbert spaces #1846.
- Make the identity preconditioner an (empty) PyTree instead of a function #1836.
- Improve several aspects of the fermions API when working with systems that have spin-1 or greater fermions #1844.
- Greatly improve the documentation of `netket.jax.expect()` and provide examples of how to use it when running with multiple MPI nodes #1356.
- Support Numpy 2.0 #1852.
- Support any positive real power `machine_pow` of the wave-function amplitude as the probability distribution for Monte Carlo sampling, not just integers #1854 (see the example below).
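For instance (a sketch; the value 1.5 is arbitrary):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=8)
# Sample from |psi|^1.5 instead of the usual Born distribution |psi|^2.
sampler = nk.sampler.MetropolisLocal(hi, machine_pow=1.5)
```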
Bug Fixes#
- When deserializing a variational state with flax, convert all arrays to `jax.Array` instead of returning numpy arrays #1842.
- Fix an internal issue with `netket.utils.struct.Pytree` not initializing default fields correctly #1837.
- Fix an issue with `netket.logging.JsonLog` raising an error at the end of a program because of a wrongly defined `__del__` method #2dd40cf.
NetKet 3.12.4#
- Fix bug #1850, where adding two `netket.operator.Ising` operators acting on different graphs would yield a wrong operator #1851.
NetKet 3.12.3 (25 June 2024)#
NetKet 3.12.2 (15 June 2024)#
- Support jax 0.4.29.
- Fixed a bug in the `NETKET_MPI_AUTODETECT_LOCAL_GPU=1` environment variable, used to autoselect local GPUs when running under MPI, that prevented it from working correctly.
- Fixed a bug where, when running with `NETKET_EXPERIMENTAL_SHARDING=1`, seeds were not correctly synchronised across different processes #1829.
NetKet 3.12.1 (30 May 2024)#
This release fixes a bug in `netket.sampler.MetropolisSamplerNumpy` that prevented it from working when using MPI #1818.
NetKet 3.12 (💫 13 May 2024)#
New Features#
- Discrete Hilbert spaces now use a special `netket.utils.StaticRange` object to store the local values that label the local degrees of freedom. This special object is jax-friendly, can be converted to arrays, and allows for easy conversion from the local degrees of freedom to integers that can be used to index into arrays, and back. While those objects are not really used internally yet, in the future they will be used to simplify the implementations of operators and other objects #1732.
- Some utilities to time the execution of the training loop are now provided, which can be used to coarsely see what part of the algorithm dominates the training cost. To use them, pass `timeit=True` to `driver.run(...)` for any driver (see the example below).
- Added several new tensor network ansatze to the `netket.models.tensor_networks` namespace. Those also replace previous tensor network implementations, which were de facto broken #1745.
- Added a jax implementation of the Bose-Hubbard operator, named `netket.operator.BoseHubbardJax`, and split the numba implementation into a separate class #1773.
- NetKet now automatically sets the visible GPUs when running under MPI with GPUs, by enumerating local GPUs and setting `jax_default_device` according to the local rank. This behaviour should allow users not to have to specify `CUDA_VISIBLE_DEVICES` and local MPI ranks in their scripts. It is only activated when running using MPI, and not used when using the experimental sharding mode. To disable this functionality, set `NETKET_MPI_AUTODETECT_LOCAL_GPU=0` #1757.
- `netket.experimental.models.Slater2nd` now also implements the generalized Hartree-Fock, as well as the restricted and unrestricted HF of before #1765.
- A new variational state computing the sum of multiple Slater determinants has been added, named `netket.experimental.models.MultiSlater2nd`. This state has the same options as `Slater2nd` #1765.
- Support for `jax>=0.4.27` #1801.
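A minimal sketch of the timing utility mentioned above (model and hyperparameters are illustrative):

```python
import netket as nk
import optax

hi = nk.hilbert.Spin(s=1 / 2, N=8)
ham = nk.operator.Ising(hi, nk.graph.Chain(8), h=1.0)
vstate = nk.vqs.MCState(nk.sampler.MetropolisLocal(hi), nk.models.RBM(alpha=1))
driver = nk.driver.VMC(ham, optax.sgd(0.01), variational_state=vstate)

# Prints a coarse breakdown of where time is spent during the run.
driver.run(n_iter=50, timeit=True)
```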
Breaking Changes#
- The `out` keyword of discrete Hilbert indexing methods (`all_states`, `numbers_to_states` and `states_to_numbers`), deprecated in the last release, has been removed completely #1722.
- Homogeneous Hilbert spaces must now store the list of valid local values for the states with a `netket.utils.StaticRange` object instead of a list of floats. The constructors have been updated accordingly. `StaticRange` is a range-like object that is jax-compatible and should from now on be used to index into local Hilbert spaces #1732.
- The `numbers_to_states` and `states_to_numbers` methods of `netket.hilbert.DiscreteHilbert` must now be jax-jittable. Custom Hilbert spaces using non-jittable functions have to be adapted by including a `jax.pure_callback()` in the `numbers_to_states`/`states_to_numbers` member functions #1748 (see the sketch below).
- `chunk_size` must be set to an integer and will error immediately otherwise. This might break some code, but in general should give more informative error messages overall #1798.
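A sketch of how a custom Hilbert space might wrap a non-jittable implementation with `jax.pure_callback()` (`_numbers_to_states_numpy` is a hypothetical pure-NumPy helper on the class):

```python
import jax
import jax.numpy as jnp
import numpy as np

def numbers_to_states(self, numbers):
    # Shape/dtype of the result must be declared ahead of time.
    out_shape = jax.ShapeDtypeStruct((*numbers.shape, self.size), jnp.float64)
    return jax.pure_callback(
        lambda n: self._numbers_to_states_numpy(np.asarray(n)),
        out_shape,
        numbers,
    )
```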
Deprecations#
- The method `netket.nn.states_to_numbers()` is now deprecated. Please use `numbers_to_states()` directly.
Improvements#
- Rewrite the code for generating random states of `netket.hilbert.Fock` and `netket.hilbert.Spin` in Jax, and jit the `init` and `reset` functions of `netket.sampler.MetropolisSampler`, for better performance and improved compatibility with sharding #1721.
- Rewrite `netket.hilbert.index`, used by `HomogeneousHilbert` (including `Spin` and `Fock`), so that larger spaces with a sum constraint can be indexed. This can be useful for `netket.sampler.ExactSampler`, `netket.vqs.FullSumState`, as well as for ED calculations #1720.
- Duplicating a `netket.vqs.MCState` now leads to perfectly deterministic, identical samples between two different copies of the same `MCState`, even if the sampler is changed. Previously, duplicating an `MCState` and changing the sampler on two copies of the same state would lead to a completely random seed being used, and therefore different samples being generated. This change is needed to eventually achieve proper checkpointing of our calculations #1778.
- The methods converting Jax operators to another kind (such as `LocalOperator` to `PauliStrings`) will return the Jax version of those operators if available #1781.
- Parallel Tempering samplers `netket.experimental.sampler.MetropolisPt` now accept a distribution (`lin` or `log`) for the temperatures, or a custom array #1786.
Finalized Deprecations#
- Removed the module function `netket.sampler.sample_next`, which was deprecated in NetKet 3.3 (December 2021) #17XX.
Internal changes#
- Initialize the `MetropolisSamplerState` in a way that avoids recompilation when using sharding #1776.
- Wrap several functions in the samplers and operators with a `shard_map` to avoid unnecessary collective communication when doing batched indexing of sharded arrays #1777.
- Callbacks are now PyTrees and can be flattened/unflattened and serialized with flax #1666.
Bug Fixes#
- Fixed the gradient of variational states w.r.t. complex parameters, which was missing a factor of 2. The learning rate needs to be halved to reproduce simulations made with previous versions of NetKet #1785.
- Fixed bug #1791, where `MetropolisHamiltonian` with jax operators was leaking tracers and crashing #1792.
- The bug in Parallel Tempering samplers has been found and fixed. In short, usages until now were most likely returning garbage samples, but not anymore! #1769.
NetKet 3.11.3 (🐟 2 April 2024)#
Bugfix release addressing the following issues:
- Fixes a bug where the conjugate of a fermionic operator was the conjugate-transpose, and the hermitian transpose `.H` was the identity. This could break code relying on complex-valued fermionic operators #1743.
- Fixed a bug when converting jax operators to qutip format #1749.
- Fixed an internal bug of `netket.utils.struct.Pytree`, where the cache of cached properties was not cleared when `replace` was used to copy and modify the Pytree #1750.
- Updated the upper bound on optax to `optax<0.3`, following the release of `optax` 0.2 #1751.
- Support QuTiP 5, released in March 2024 #1762.
NetKet 3.11.2 (27 February 2024)#
Bugfix release to solve the following issues:
- Fix the error thrown in the `repr` method of TDVP integrators.
- Fix a `repr` error of `netket.sampler.rules.MultipleRules` #1729.
- Solve an issue with RK integrators that could not be initialised with an integer `t0` initial time if `dt` was a float, as well as a wrong `repr` method leading to incomprehensible stacktraces #1736.
NetKet 3.11.1 (19 February 2024)#
Bugfix release to solve two issues:
NetKet 3.11 (💘 16 February 2024)#
This release supports Python 3.12 through the latest release of Numba, introduces several new jax-compatible operators, and adds a new experimental way to distribute calculations among multiple GPUs without using MPI.
We have a few breaking changes as well: deprecations that were issued more than 18 months ago have now been finalized, most notably the `dtype` argument to several models and layers, some keywords to GCNN, and setting the number of chains of exact samplers.
New Features#
- Recurrent neural networks and layers have been added to `nkx.models` and `nkx.nn` #1305.
- Added experimental support for running NetKet on multiple jax devices (as an alternative to MPI). It is enabled by setting the environment variable/configuration flag `NETKET_EXPERIMENTAL_SHARDING=1`. Parallelization is achieved by distributing the Markov chains / samples equally across all available devices utilizing `jax.Array` sharding. On GPU, multi-node setups are supported via `jax.distributed`, whereas on CPU it is limited to a single process, but several threads can be used by setting `XLA_FLAGS='--xla_force_host_platform_device_count=XX'` #1511 (see the snippet below).
- `netket.experimental.operator.FermionOperator2nd` is a new Jax-compatible implementation of fermionic operators. It can also be constructed starting from a standard fermionic operator by calling `operator.to_jax_operator()`, or used in combination with `pyscf` converters #1675, #1684.
- `netket.operator.LocalOperatorJax` is a new Jax-compatible implementation of local operators. It can also be constructed starting from a standard operator by calling `operator.to_jax_operator()` #1654.
- The logger interface has been formalised and documented in the abstract base class `netket.logging.AbstractLog` #1665.
- The `ParticleExchange` sampler and corresponding rule `ParticleExchangeRule` have been added, which special-case `ExchangeSampler` for fermionic spaces in order to avoid proposing moves where the two exchanged sites have the same population #1683.
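For example, sharding is enabled before importing netket (the device count below is illustrative):

```python
import os

os.environ["NETKET_EXPERIMENTAL_SHARDING"] = "1"
# On CPU, emulate several devices within a single process:
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=4"

import netket as nk  # noqa: E402
```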
Breaking Changes#
- The `netket.models.Jastrow` wave-function now only has \(N (N-1)\) variational parameters, instead of the \(N^2\) redundant ones it had before. The saving and loading format has changed and won't be compatible with previous versions #1664.
- Finalize deprecations of some old methods in the `netket.sampler` namespace (see original commit 1f77ad8267e16fe8b2b2641d1d48a0e7ae94832e).
- Finalize deprecations of 2D input to DenseSymm layers, which now turns into an error, and of the `extra_bias` option of Equivariant Networks/GCNNs (see original commit c61ea542e9d0f3e899d87a7471dea96d4f6b152d).
- Finalize deprecations of very old input/properties of Lattices (see original commit 0f6f520da9cb6afcd2361dd6fd029e7ad6a2693e).
- Finalize the deprecation of the `dtype=` attribute of several modules in `netket.nn` and `netket.models`, which has been printing an error since April 2022. You should update usages of `dtype=` to `param_dtype=` #1724.
Deprecations#
- `MetropolisSampler.n_sweeps` has been renamed to `sweep_size` for clarity. Using `n_sweeps` when constructing the sampler now throws a deprecation warning; `sweep_size` should be used instead going forward #1657.
- Samplers and Metropolis rules defined as `netket.utils.struct.dataclass()` are deprecated because the base class is now a `netket.utils.struct.Pytree`. The only change needed is to remove the dataclass decorator and define a standard init method #1653.
- The `out` keyword of discrete Hilbert indexing methods (`all_states`, `numbers_to_states` and `states_to_numbers`) is deprecated and will be removed in the next release. Plan ahead and remove usages to avoid breaking your code 3 months from now #1725!
Internal changes#
- A new class, `netket.utils.struct.Pytree`, can be used to create PyTrees for which inheritance automatically works and for which it is possible to define `__init__`. Several structures such as samplers and rules have been transitioned to this new interface instead of the old-style `@struct.dataclass` #1653.
- The `FermionOperator2nd` and related classes now store the constant diagonal shift as another term instead of a completely special-cased scalar value. The same operators now also respect the `cutoff` keyword argument more strictly #1686.
- Dtypes of the matrix elements of operators are now handled more correctly, and fewer warnings are raised when running NetKet in X32 mode. Moreover, operators like Ising now default to a floating-point dtype even if the coefficients are integers #1697.
Bug Fixes#
- Support multiplication of discrete operators by sparse arrays #1661.
NetKet 3.10.2 (14 November 2023)#
Bug Fixes#
- Fixed a bug where it was not possible to recompile functions using two identical but different instances of `PauliStringsJax` #1647.
- Fixed a minor bug where chunking was never actually used inside of `local_estimators()`. This will turn on chunking for some other drivers such as `netket.experimental.driver.VMC_SRt` and `netket.experimental.driver.TDVPSchmitt` #1650.
- `netket.operator.Ising` now throws an error when it is constructed using a non-`netket.hilbert.Spin` Hilbert space #1648.
NetKet 3.10.1 (8 November 2023)#
Bug Fixes#
- Added support for neural networks with complex parameters to `netket.experimental.driver.VMC_SRt`, which previously crashed with unreadable errors #1644.
NetKet 3.10 (🥶 7 November 2023)#
The highlights of this version are a new experimental driver to optimise networks with millions of parameters using SR, and new utility functions to convert a pyscf molecule to a netket Hamiltonian.
Read below for a more detailed changelog.
New Features#
- Added the new `netket.experimental.driver.VMC_SRt` driver, which leads to identical parameter updates as the standard Stochastic Reconfiguration with diagonal-shift regularization. It is therefore essentially equivalent to using the standard `netket.driver.VMC` with the `netket.optimizer.SR` preconditioner. The advantage of this method is that it requires the inversion of a matrix with side equal to the number of samples instead of the number of parameters, making this formulation particularly useful in typical deep learning scenarios #1623.
- Added a new function, `netket.experimental.operator.from_pyscf_molecule()`, to construct the electronic Hamiltonian of a given molecule specified through pyscf. This is accompanied by `netket.experimental.operator.pyscf.TV_from_pyscf_molecule()` to compute the T and V tensors of a pyscf molecule #1602 (see the sketch below).
- Added the operator computing the Rényi-2 entanglement entropy on Hilbert spaces with discrete dofs #1591.
- It is now possible to disable NetKet's double-precision default and force all calculations to be performed using single precision by setting the environment variable/configuration flag `NETKET_ENABLE_X64=0`, which also sets `JAX_ENABLE_X64=0`. When running with this flag, the number of warnings printed by jax is considerably reduced as well #1544.
- Added new shortcuts to build the identity operator as `netket.operator.spin.identity()` and `netket.operator.boson.identity()` #1601.
- Added a new `netket.hilbert.Particle` constructor that only takes as input the number of dimensions of the system #1577.
- Added the new `netket.experimental.models.Slater2nd` model implementing a Slater ansatz #1622.
- Added the new `netket.jax.logdet_cmplx()` function to compute the complex log-determinant of a batch of matrices #1622.
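A minimal sketch of the pyscf conversion (geometry and basis are illustrative):

```python
import netket as nk
from pyscf import gto

mol = gto.M(atom="H 0 0 0; H 0 0 0.735", basis="sto-3g")
ha = nk.experimental.operator.from_pyscf_molecule(mol)
```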
Breaking changes#
- `netket.experimental.hilbert.SpinOrbitalFermions` attributes have been changed: `n_fermions` now always returns an integer with the total number of fermions in the system (if specified). A new attribute, `n_fermions_per_spin`, has been introduced that returns the same tuple of fermion number per spin subsector as before. A few fields are now marked as read-only, as modifications were silently ignored #1622.
- The `netket.nn.blocks.SymmExpSum` layer is now normalised by the number of elements in the symmetry group in order to maintain a reasonable normalisation #1624.
- The labelling of spin sectors in `netket.experimental.operator.fermion.create()` and similar operators has changed from the eigenvalue of the spin operator (\(\pm 1/2\) and so on) to the eigenvalue of the Pauli matrices (\(\pm 1\) and so on) #1637.
- The connected elements and expectation values of all non-symmetric fermionic operators have been changed in order to be correct #1640.
Improvements#
- Considerably reduced the memory consumption of `LocalOperator`, especially in the case of large local Hilbert spaces. Also leveraged sparsity in the terms to speed up compilation (`_setup`) in the same cases #1558.
- `netket.nn.blocks.SymmExpSum` now works with inputs of arbitrary dimensions, while previously it errored for all inputs that were not 2D #1616.
- Stop using `FrozenDict` from `flax` and instead return standard dictionaries for the variational parameters from the variational state. This makes it much easier to edit parameters #1547.
- Vastly improved, finally readable documentation of all Flax modules and neural network architectures #1641.
Bug Fixes#
- Fixed a minor bug where `netket.operator.LocalOperator` could not be built with an `np.matrix` object obtained by converting scipy sparse matrices to dense #1597.
- Raise a correct error instead of an unintelligible one when multiplying `netket.experimental.operator.FermionOperator2nd` with other operators #1599.
- Do not rescale the output of `netket.jax.jacobian()` by the square root of the number of samples. Previously, when specifying `center=True`, we were incorrectly rescaling the output #1614.
- Fix a bug in `netket.operator.PauliStrings` that caused the dtype to get out of sync with the dtype of the internal arrays, causing errors when manipulating them symbolically #1619.
- Fix a bug that prevented the use of `netket.operator.DiscreteJaxOperator` as observables with all drivers #1625.
- The fermionic operator `get_conn` method was returning values as if the operator were transposed; this has now been fixed. This will break the expectation values of non-symmetric fermionic operators, but hopefully nobody was looking at them #1640.
NetKet 3.9.2#
This release requires at least Python 3.9 and Jax 0.4.
Bug Fixes#
- Fix a bug introduced in version 3.9 for `netket.experimental.driver.TDVPSchmitt` which resulted in wrong dynamics #1551.
NetKet 3.9.1#
Bug Fixes#
- Fix a bug in the construction of `netket.operator.PauliStringsJax` in some cases #1539.
NetKet 3.9 (🔥 24 July 2023)#
This release requires Python 3.8 and Jax 0.4.
New Features#
- `netket.callbacks.EarlyStopping` now supports relative tolerances for determining when to stop #1481.
- `netket.callbacks.ConvergenceStopping` has been added, which can stop a driver when the loss function reaches a certain threshold #1481.
- A new base class, `netket.operator.DiscreteJaxOperator`, has been added, which will be used as a base class for a set of operators that are jax-compatible #1506.
- `netket.sampler.rules.HamiltonianRule()` has been split into two implementations, `netket.sampler.rules.HamiltonianRuleJax` and `netket.sampler.rules.HamiltonianRuleNumba`, which are to be used for `DiscreteJaxOperator`s and standard numba-based `DiscreteOperator`s, respectively. The user-facing API is unchanged, but the returned type might now depend on the input operator #1514.
- `netket.operator.PauliStringsJax` is a new operator that behaves like `netket.operator.PauliStrings` but is Jax-compatible, meaning that it can be used inside jax-jitted contexts and works better with chunking. It can also be constructed starting from a standard Pauli-strings operator by calling `operator.to_jax_operator()` #1506.
- `netket.operator.IsingJax` is a new operator that behaves like `netket.operator.Ising` but is Jax-compatible, meaning that it can be used inside jax-jitted contexts and works better with chunking. It can also be constructed starting from a standard Ising operator by calling `operator.to_jax_operator()` #1506 (see the example below).
- Added a new method, `netket.operator.LocalOperator.to_pauli_strings()`, to convert `netket.operator.LocalOperator` to `netket.operator.PauliStrings`. As PauliStrings can be converted to Jax operators, this now allows converting arbitrary operators to Jax-compatible ones #1515.
- The constructor of `QGTOnTheFly()` now takes an optional boolean argument `holomorphic : Optional[bool]`, in line with the other geometric tensor implementations. This flag does not affect the computation algorithm, but will be used to raise an error if the user attempts to call `to_dense()` with a non-holomorphic ansatz. While this might break past code, the numerical results were incorrect.
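For example, converting a standard operator to its Jax-compatible counterpart:

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=8)
ha = nk.operator.Ising(hi, nk.graph.Chain(8), h=1.0)
ha_jax = ha.to_jax_operator()  # an IsingJax, usable inside jitted code
```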
Breaking Changes#
- The first two axes in the output of the samplers have been swapped; samples are now of shape `(n_chains, n_samples_per_chain, ...)`, consistent with `netket.stats.statistics`. Custom samplers need to be updated to return arrays of shape `(n_chains, n_samples_per_chain, ...)` instead of `(n_samples_per_chain, n_chains, ...)` #1502.
- The tolerance arguments of `TDVPSchmitt` have all been renamed to quantities that can be understood without inspecting the source code. In particular, `num_tol` has been renamed to `rcond`, `svd_tol` to `rcond_smooth`, and `noise_tol` to `noise_atol`.
Deprecations#
- `netket.vqs.ExactState` has been renamed to `netket.vqs.FullSumState` to better reflect what it does. Using the old name will now raise a warning #1477.
Known Issues#
- The new `Jax`-friendly operators do not work with `netket.vqs.FullSumState` because they are not hashable. This will be fixed in a minor patch (coming soon).
NetKet 3.8 (8 May 2023)#
This is the last NetKet release to support Python 3.7 and Jax 0.3. Starting with NetKet 3.9 we will require Jax 0.4, which in turn requires Python 3.8 (and soon 3.9).
New features#
- `netket.hilbert.TensorHilbert` has been generalised and now works with discrete, continuous, or a combination of discrete and continuous Hilbert spaces #1437.
- NetKet is now compatible with Numba 0.57 and therefore with Python 3.11 #1462.
- The new Metropolis sampling transition proposal rule `netket.sampler.rules.MultipleRules()` has been added, which can be used to pick from different transition proposals according to a certain probability distribution (see the sketch below).
- The new Metropolis sampling transition proposal rule `netket.sampler.rules.TensorRule()` has been added, which can be used to combine different transition proposals acting on different subspaces of the Hilbert space.
- The new Metropolis sampling transition proposal rule `netket.sampler.rules.FixedRule()` has been added, which does not change the configuration.
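A sketch of combining rules with `MultipleRules()` (keyword names follow the rule's documentation; the probabilities are illustrative):

```python
import netket as nk

graph = nk.graph.Chain(8)
hi = nk.hilbert.Spin(s=1 / 2, N=graph.n_nodes)

# Flip single spins 90% of the time, exchange neighbours 10% of the time.
rule = nk.sampler.rules.MultipleRules(
    rules=[
        nk.sampler.rules.LocalRule(),
        nk.sampler.rules.ExchangeRule(graph=graph),
    ],
    probabilities=[0.9, 0.1],
)
sampler = nk.sampler.MetropolisSampler(hi, rule)
```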
Deprecations#
- The non-public API function to select the default QGT mode for `QGTJacobian`, located at `netket.optimizer.qgt.qgt_jacobian_common.choose_jacobian_mode`, has been renamed and made part of the public API as `netket.jax.jacobian_default_mode`. If you were using this function, please update your code #1473.
Bug Fixes#
- Fix issue #1435, where a 0-tangent originating from integer samples was not correctly handled by `netket.jax.vjp()` #1436.
- Fixed a bug in `netket.sampler.rules.LangevinRule` when setting `chunk_size` #1465.
Improvements#
- `netket.operator.ContinuousOperator` has been improved: such operators now correctly test for equality and generate a consistent hash. Moreover, the internal logic of `netket.operator.SumOperator` and `netket.operator.Potential` has been improved, and they lead to fewer recompilations when reconstructed identically. A few new attributes for those operators have also been exposed #1440.
- `netket.nn.to_array()` accepts an optional keyword argument `chunk_size`, and related methods on variational states now use the chunking specified in the variational state when generating the dense array #1470.
Breaking Changes#
- Jax version `0.4` is now required, meaning that NetKet no longer works on Python 3.7.
NetKet 3.7 (💘 13 February 2023)#
New features#
- Input and hidden layer masks can now be specified for `netket.models.GCNN` #1387.
- Support for Jax 0.4 added #1416.
- Added a continuous-space Langevin-dynamics transition rule, `netket.sampler.rules.LangevinRule`, and its corresponding shorthand for constructing the MCMC sampler, `netket.sampler.MetropolisAdjustedLangevin()` #1413.
- Added an experimental Quantum State Reconstruction driver at `netket.experimental.QSR` to reconstruct states from data coming from quantum computers or simulators #1427.
- Added the `netket.nn.blocks.SymmExpSum` flax module, which symmetrizes a bare neural-network module by summing the wave-function over all possible symmetry permutations given by a certain symmetry group #1433 (see the sketch below).
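A sketch of symmetrizing a bare module with `SymmExpSum` (the keyword names are taken from the module's documentation; verify them against your NetKet version):

```python
import netket as nk

graph = nk.graph.Chain(8)
hi = nk.hilbert.Spin(s=1 / 2, N=graph.n_nodes)

# Symmetrize a plain RBM over the translation group of the chain.
ma = nk.nn.blocks.SymmExpSum(
    module=nk.models.RBM(alpha=2),
    symm_group=graph.translation_group(),
)
```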
Breaking Changes#
- Parameters of the model `netket.models.GCNN` and of the layers `netket.nn.DenseSymm` and `netket.nn.DenseEquivariant` are stored as an array of shape `[features, in_features, mask_size]`. Masked parameters are now excluded from the model instead of being multiplied by zero #1387.
Improvements#
- The underlying extension API for autoregressive models that can be used with ancestral/autoregressive samplers has been simplified and stabilized, and will be documented as part of the public API. For most models, you should now inherit from `netket.models.AbstractARNN` and define the method `conditionals_log_psi()`. For additional performance, implementers can also redefine `__call__()` and `conditional()`, but this should not be needed in general. This will cause some breaking changes if you were relying on the old undocumented interface #1361.
- `netket.operator.PauliStrings` now works with non-homogeneous Hilbert spaces, such as those obtained by taking the tensor product of multiple Hilbert spaces #1411.
- `netket.operator.LocalOperator` now keeps sparse matrices sparse, leading to faster algebraic manipulations of those objects. The overall computational and memory cost is, however, equivalent when running VMC calculations. All pre-constructed operators such as `netket.operator.spin.sigmax()` and `netket.operator.boson.create()` now build sparse operators #1422.
- When multiplying an operator by its conjugate transpose, NetKet no longer returns a lazy `Squared` object if the operator is hermitian. This avoids checking whether the object is hermitian, which greatly speeds up algebraic manipulations of operators, and returns more unbiased expectation values #1423.
Bug Fixes#
- Fixed a bug where `netket.hilbert.Particle.random_state()` could not be jit-compiled, and therefore could not be used in sampling #1401.
- Fixed bug #1405, where `netket.nn.DenseSymm()` and `netket.models.GCNN()` did not work with, or did not correctly consider, masks #1428.
Deprecations#
- `netket.models.AbstractARNN._conditional()` has been removed from the API; its use will throw a deprecation warning. Update your ARNN models accordingly! #1361.
- Several undocumented internal methods of `netket.models.AbstractARNN` have been removed #1361.
NetKet 3.6 (🏔️ 6 November 2022)#
New features#
- Added a new 'full statevector' model, `netket.models.LogStateVector`, that stores the exponentially large state and can be used as an exact ansatz #1324.
- Added a new experimental `TDVPSchmitt` driver, implementing the signal-to-noise-ratio TDVP regularisation by Schmitt and Heyl #1306.
- QGT classes accept a `chunk_size` parameter that overrides the `chunk_size` set by the variational state object #1347.
- `QGTJacobianPyTree()` and `QGTJacobianDense()` support diagonal entry regularisation with constant and scale-invariant contributions. They accept a new `diag_scale` argument to pass the scale-invariant component #1352.
- The `SR()` preconditioner now supports scheduling of the diagonal shift and scale regularisations #1364 (see the example below).
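For example, a schedule can be passed as the diagonal shift (a sketch using an optax schedule; the values are illustrative):

```python
import netket as nk
import optax

# Ramp the diagonal shift from 1e-2 down to 1e-4 over 300 steps.
schedule = optax.linear_schedule(
    init_value=1e-2, end_value=1e-4, transition_steps=300
)
sr = nk.optimizer.SR(diag_shift=schedule)
```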
Improvements#
- `expect_and_grad()` now returns a `netket.stats.Stats` object that also contains the variance, as `MCState` does #1325.
- Experimental RK solvers now store the error of the last timestep in the integrator state #1328.
- `PauliStrings` can now be constructed by passing a single string, instead of the previous requirement of a list of strings #1331.
- `FrozenDict` can now be logged to netket's loggers, meaning that one no longer needs to unfreeze the parameters before logging them #1338.
- Fermion operators are much more efficient and generate fewer connected elements #1279.
- NetKet is now completely PEP 621 compliant and no longer has a `setup.py`, in favour of a `pyproject.toml` based on hatchling. To install NetKet you should use a recent version of `pip` or a compatible tool such as poetry/hatch/flint #1365.
- `QGTJacobianDense()` can now be used with `ExactState` #1358.
Bug Fixes#
- `netket.vqs.ExactState.expect_and_grad()` returned a scalar while `expect()` returned a `netket.stats.Stats` object with 0 error. The inconsistency has been addressed; they now both return a `Stats` object. This changes the format of the files logged when running `VMC`, which will now store the average under `Mean` instead of `value` #1325.
- `netket.optimizer.qgt.QGTJacobianDense()` now returns the correct output for models with mixed real and complex parameters #1397.
Deprecations#
- The `rescale_shift` argument of `QGTJacobianPyTree()` and `QGTJacobianDense()` is deprecated in favour of the more flexible syntax with `diag_scale`. `rescale_shift=False` should be removed. `rescale_shift=True` should be replaced with `diag_scale=old_diag_shift` #1352.
- The call signature of preconditioners passed to `netket.driver.VMC` and other drivers has changed as a consequence of scheduling; preconditioners should now accept an extra optional argument, `step`. The old signature is still supported but is deprecated and will eventually be removed #1364.
NetKet 3.5.2 (Bug Fixes) - 30 October 2022#
Bug Fixes#
- `PauliStrings` now support the subtraction operator #1336.
- Autoregressive networks had a default activation function (`selu`) that did not act on the imaginary part of the inputs. We have now changed that: the activation function is `reim_selu`, which acts independently on the real and imaginary parts. This changes nothing for real parameters, but improves the defaults for complex ones #1371.
- A major performance degradation that arose when using `LocalOperator` has been addressed. The bug caused our operators to be recompiled every time they were queried, imposing a large overhead #1377.
NetKet 3.5.1 (Bug Fixes)#
New features#
- Added a new configuration option, `netket.config.netket_experimental_disable_ode_jit`, to disable jitting of the ODE solvers. This can be useful to avoid hangs that might happen when working on GPUs with some particular systems #1304.
Bug Fixes#
NetKet 3.5 (☀️ 18 August 2022)#
This release adds support and needed functions to run TDVP for neural networks with real/non-holomorphic parameters, an experimental HDF5 logger, and an `MCState` method to compute the local estimators of an observable for a set of samples.
This release also drops support for older versions of flax, while adopting the new interface, which completely supports complex-valued neural networks. Deprecation warnings might be raised if you were using some layers from `netket.nn` that are now available in flax.
A new, more accurate estimation of the autocorrelation time has been introduced, but it is disabled by default. We welcome feedback.
New features#
- The method `local_estimators()` has been added, which returns the local estimators `O_loc(s) = 〈s|O|ψ〉 / 〈s|ψ〉` (which are known as local energies if `O` is the Hamiltonian) #1179.
- The permutation-equivariant `netket.models.DeepSetRelDistance`, for use with particles in periodic potentials, has been added together with an example #1199.
- The class `HDF5Log` has been added to the experimental submodule. This logger writes log data and variational state variables into a single HDF5 file #1200.
- Added a new method, `serialize()`, to store the content of the logger to disk #1255.
- New `netket.callbacks.InvalidLossStopping`, which stops optimisation if the loss function reaches a `NaN` value. An optional `patience` argument can be set #1259.
- Added a new method, `netket.graph.SpaceGroupBuilder.one_arm_irreps()`, to construct GCNN projection coefficients to project on single-wave-vector components of irreducible representations #1260.
- A new method, `expect_and_forces()`, has been added, which can be used to compute the variational forces generated by an operator, instead of only the (real-valued) gradient of an expectation value. This is in general needed to write the TDVP equation or other similar equations #1261.
- TDVP now works for real-parametrized wavefunctions as well as non-holomorphic ones, because it makes use of `expect_and_forces()` #1261.
- A new method, `apply_to_id()`, can be used to apply a permutation (or a permutation group) to one or more lattice indices #1293.
- It is now possible to disable MPI by setting the environment variable `NETKET_MPI`. This is useful in cases where mpi4py crashes upon load #1254.
- The new function `netket.nn.binary_encoding()` can be used to encode a set of samples according to the binary shape defined by a Hilbert space. It should be used similarly to `flax.linen.one_hot()` and works with non-homogeneous Hilbert spaces #1209.
- A new method to estimate the correlation time in Markov chain Monte Carlo (MCMC) sampling has been added to the `netket.stats.statistics()` function, which uses the full FFT transform of the input data. The new method is not enabled by default, but can be turned on by setting the `NETKET_EXPERIMENTAL_FFT_AUTOCORRELATION` environment variable to `1`. In the future we might turn this on by default #1150.
Dependencies#
NetKet now requires at least Flax v0.5
Deprecations#
- `netket.nn.Module` and `netket.nn.compact` have been deprecated. Please use `flax.linen.Module` and `flax.linen.compact()` instead.
- `netket.nn.Dense(dtype=mydtype)` and related modules (`Conv`, `DenseGeneral` and `ConvGeneral`) are deprecated. Please use `flax.linen.***(param_dtype=mydtype)` instead (see the example below). Before flax v0.5 those modules did not support complex numbers properly, but starting with flax 0.5 they do, so we have removed our linear module wrappers and encourage you to use flax directly. Please notice that the `dtype` argument previously used by netket should be changed to `param_dtype` to maintain the same effect. #…
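For example:

```python
import flax.linen as nn
import jax.numpy as jnp

# Before (deprecated): netket.nn.Dense(features=16, dtype=jnp.complex128)
# After, using flax directly:
layer = nn.Dense(features=16, param_dtype=jnp.complex128)
```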
Bug Fixes#
- Fixed a bug where a `netket.operator.LocalOperator` representing the identity would lead to a crash #1197.
- Fixed a bug where fermionic operators `nkx.operator.FermionOperator2nd` would not be hermitian even when they should have been #1233.
- Fixed serialization of some arrays with complex dtype in `RuntimeLog` and `JsonLog` #1258.
- Fixed a bug where the `netket.callbacks.EarlyStopping` callback would not work as intended when hitting a local minimum #1238.
- `chunk_size` and the random seed of Monte Carlo variational states are now serialised. States serialised prior to this change can no longer be unserialised #1247.
- Continuous-space Hamiltonians now work correctly with neural networks with complex parameters #1273.
- NetKet now works under MPI with recent versions of jax (>=0.3.15) #1291.
NetKet 3.4.2 (BugFixes & DepWarns again)#
Internal Changes#
- Several deprecation warnings related to `jax.experimental.loops` being deprecated have been resolved by changing those calls to `jax.lax.fori_loop()`. Jax should feel more tranquillo now. #1172
Bug Fixes#
- Several type promotion bugs that would end up promoting single-precision models to double precision have been squashed. Those involved `nk.operator.Ising` and `nk.operator.BoseHubbard` #1180, `nkx.TDVP` #1186, and continuous-space samplers and operators #1187.
- `nk.operator.Ising`, `nk.operator.BoseHubbard` and `nk.operator.LocalLiouvillian` now return connected samples with the same precision (`dtype`) as the input samples. This allows preserving low precision along the computation when using those operators #1180.
- `nkx.TDVP` now updates the expectation value displayed in the progress bar at every time step #1182.
- Fixed bug #1192, which affected most operators (`nk.operator.LocalOperator`) constructed on non-homogeneous Hilbert spaces. This bug was first introduced in version 3.3.4 and affects all subsequent versions until 3.4.2 #1193.
- It is now possible to add an operator and its lazy transpose/hermitian conjugate #1194.
NetKet 3.4.1 (BugFixes & DepWarns)#
Internal Changes#
- Several deprecation warnings related to `jax.tree_util.tree_multimap` being deprecated have been resolved by changing those calls to `jax.tree_util.tree_map`. Jax should feel more tranquillo now. #1156
Bug Fixes#
- ~~`TDVP` now supports models with real parameters such as `RBMModPhase` #1139.~~ (not yet fixed)
- An error is now raised when the user attempts to construct a `LocalOperator` with a matrix of the wrong size (bug #1157) #1158.
- A bug where `QGTJacobian` could not be used with models in single precision has been addressed (bug #1153) #1155.
NetKet 3.4 (Special 🧱 edition)#
New features#
- `Lattice` supports specifying arbitrary edge content for each unit cell via the kwarg `custom_edges`. A generator for hexagonal lattices with coloured edges is implemented as `nk.graph.KitaevHoneycomb`. `nk.graph.Grid` again supports colouring edges by direction #1074.
- Fermionic Hilbert spaces (`nkx.hilbert.SpinOrbitalFermions`) and fermionic operators (`nkx.operator.fermion`), to treat systems with a finite number of orbitals, have been added to the experimental submodule. The operators are also integrated with OpenFermion. Those functionalities are still in development and we would welcome feedback #1090.
- It is now possible to change the integrator of a `TDVP` object without reconstructing it #1123.
- An `nk.nn.blocks` module has been added, containing an `MLP` (Multi-Layer Perceptron) #1295.
Breaking Changes#
- The gradient for models with real parameters is now multiplied by 2. If your model had real parameters you might need to change the learning rate and halve it. Conceptually this is a bug-fix, as the value returned before was wrong (see the Bug Fixes section below for additional details) #1069.
- In the statistics returned by `netket.stats.statistics`, the `.R_hat` diagnostic has been updated to be able to detect non-stationary chains via the split-Rhat diagnostic (see, e.g., Gelman et al., Bayesian Data Analysis, 3rd edition). This changes (generally increases) the numerical values of `R_hat` for existing simulations, but should strictly improve its capability to detect MCMC convergence failure #1138.
Internal Changes#
Bug Fixes#
- The gradient obtained with `VarState.expect_and_grad` for models with real parameters was off by a factor of \( 1/2 \) from the correct value. This has now been corrected. As a consequence, the correct gradient for real-parameter models is equal to the old one times 2. If your model had real parameters you might need to change the learning rate and halve it #1069.
- Support for coloured edges in `nk.graph.Grid`, removed in #724, is now restored #1074.
- Fixed a bug that prevented calling `.quantum_geometric_tensor` on `netket.vqs.ExactState` #1108.
- Fixed a bug where the gradient of `C->C` models (complex parameters, complex output) was computed incorrectly with `nk.vqs.ExactState` #1110.
- Fixed a bug where `QGTJacobianDense.state` and `QGTJacobianPyTree.state` would not correctly transform the starting point `x0` if `holomorphic=False` #1115.
- The gradient of the expectation value obtained with `VarState.expect_and_grad` for `SquaredOperator`s was off by a factor of 2 in some cases, and wrong in others. This has now been fixed #1065.
NetKet 3.3.2 (🐛 Bug Fixes)#
Internal Changes#
- Support for Python 3.10 #952.
- The minimum optax version is now `0.1.1`, which finally correctly supports complex numbers. The internal implementation of Adam, which was introduced in 3.3 (#1069), has been removed. If an older version of `optax` is detected, an import error is thrown to avoid providing wrong numerical results. Please update your optax version! #1097
Bug Fixes#
- Allow `LazyOperator@densevector` for operators such as lazy `Adjoint`, `Transpose` and `Squared` #1068.
- The logic to update the progress bar in `nk.experimental.TDVP` has been improved, and it should now display updates even if there are very sparse `save_steps` #1084.
- The `nk.logging.TensorBoardLog` is now lazily initialized, to better work in an MPI environment #1086.
- Converting a `nk.operator.BoseHubbard` to a `nk.operator.LocalOperator` multiplied the nonlinearity `U` by 2. This has now been fixed #1102.
NetKet 3.3.1 (🐛 Bug Fixes)#
- Initialisation of all implementations of `DenseSymm`, `DenseEquivariant` and `GCNN` now defaults to truncated normals with Lecun variance scaling. For layers without masking, there should be no noticeable change in behaviour. For masked layers, the same variance scaling now works correctly #1045.
- Fixed a bug that prevented gradients of non-hermitian operators from being computed. The feature is still marked as experimental, but will now run (we do not guarantee that results are correct) #1053.
- Common lattice constructors such as `Honeycomb` now accept the same keyword arguments as `Lattice` #1046.
- Multiplying a `QGTOnTheFly` representing the real part of the QGT (showing up when the ansatz has real parameters) with a complex vector now throws an error. Previously the result would be wrong, as the imaginary part was cast away #885.
NetKet 3.3 (🎁 20 December 2021)#
New features#
- The interface to define expectation and gradient functions of arbitrary custom operators is now stable. If you want to define it for a standard operator that can be written as an average of local expectation terms, you can now define a dispatch rule for `netket.vqs.get_local_kernel_arguments()` and `netket.vqs.get_local_kernel()`. The old mechanism is still supported, but we encourage the new mechanism as it is more terse #954.
- `nk.optimizer.Adam()` now supports complex parameters, and you can use `nk.optimizer.split_complex()` to make optimizers process complex parameters as if they were pairs of real parameters #1009 (see the example below).
- Chunking of `MCState.expect` and `MCState.expect_and_grad` computations is now supported, which allows bounding the memory cost in exchange for a minor increase in computation time #1006 (and discussions in #918 and #830).
- A new variational state that performs exact summation over the whole Hilbert space has been added. It can be constructed with `nk.vqs.ExactState` and supports the same Jax neural networks as `nk.vqs.MCState` #953.
- `nk.nn.DenseSymm()` allows multiple input features #1030.
- [Experimental] A new time-evolution driver, `nk.experimental.TDVP`, using the time-dependent variational principle (TDVP) has been added. It works with time-independent and time-dependent Hamiltonians and Liouvillians #1012.
- [Experimental] A set of JAX-compatible Runge-Kutta ODE integrators has been added for use together with the new TDVP driver #1012.
Breaking Changes#
- The method `sample_next` in `Sampler` and exact samplers (`ExactSampler` and `ARDirectSampler`) is removed, and it is only defined in `MetropolisSampler`. The module function `nk.sampler.sample_next` also only works with `MetropolisSampler`. For exact samplers, please use the method `sample` instead #1016.
- The default value of `n_chains_per_rank` in `Sampler` and exact samplers is changed to 1, and specifying `n_chains` or `n_chains_per_rank` when constructing them is deprecated. Please change `chain_length` when calling `sample`. For `MetropolisSampler`, the default value is changed from `n_chains = 16` (across all ranks) to `n_chains_per_rank = 16` #1017.
- `GCNN_Parity` allowed biasing both the parity-preserving and the parity-flip equivariant layers. These enter the network output the same way, so having both is redundant and makes QGTs unstable. The biases of the parity-flip layers are now removed. The previous behaviour can be restored using the deprecated `extra_bias` switch; we only recommend this for loading previously saved parameters. Such parameters can be transformed to work with the new default using `nk.models.update_GCNN_parity` #1030.
- Kernels of `DenseSymm` are now three-dimensional, not two-dimensional. Parameters saved from earlier implementations can be transformed to the new convention using `nk.nn.update_dense_symm` #1030.
Deprecations#
- The method `Sampler.samples` is added to return a generator of samples. The module functions `nk.sampler.sampler_state`, `reset`, `sample`, `samples` and `sample_next` are deprecated in favor of the corresponding class methods #1025.
- The kwarg `in_features` of `DenseEquivariant` is deprecated; the number of input features is inferred from the input #1030.
- The kwarg `out_features` of `DenseEquivariant` is deprecated in favour of `features` #1030.
Internal Changes#
Bug Fixes#
- The constructor of `TensorHilbert` (which is used by the product operator `*` for inhomogeneous spaces) no longer fails when one of the component spaces is non-indexable #1004.
- The `flip_state()` method used by `MetropolisLocal` now throws an error when called on a `nk.hilbert.ContinuousHilbert` Hilbert space, instead of entering an endless loop #1014.
- Fixed a bug in the conversion to qutip for `MCMixedState`, where the resulting shape (Hilbert space size) was wrong #1020.
- Setting `MCState.sampler` now recomputes `MCState.chain_length` according to `MCState.n_samples` and the new `sampler.n_chains` #1028.
- `GCNN_Parity` allowed biasing both the parity-preserving and the parity-flip equivariant layers. These enter the network output the same way, so having both is redundant and makes QGTs unstable. The biases of the parity-flip layers are now removed #1030.
NetKet 3.2 (26 November 2021)#
New features#
`GraphOperator` (and `Heisenberg`) now support passing a custom mapping of graph nodes to Hilbert space sites via the new `acting_on_subspace` argument. This makes it possible to create `GraphOperator`s that act on a subset of sites, which is useful in composite Hilbert spaces. #924
`PauliString` now supports any Hilbert space with local size 2. The Hilbert space is now the optional first argument of the constructor. #960
`PauliString`s can now be multiplied and summed together, performing some simple algebraic simplifications on the strings they contain. They also lazily initialize their internal data structures, making them faster to construct but slightly slower the first time their matrix elements are accessed. #955
`PauliString`s can now be constructed starting from an `OpenFermion` operator. #956
In addition to nearest-neighbor edges, `Lattice` can now generate edges between next-nearest and, more generally, k-nearest neighbors via the constructor argument `max_neighbor_order` (see the sketch below). The edges can be distinguished by their `color` property (which is used, e.g., by `GraphOperator` to apply different bond operators). #970
Two continuous-space operators (`KineticEnergy` and `PotentialEnergy`) have been implemented. #971
`Heisenberg` Hamiltonians support different coupling strengths on `Graph` edges with different colors. #972
The `little_group` and `space_group_irreps` methods of `SpaceGroupBuilder` take the wave vector as either varargs or iterables. #975
A new `netket.experimental` submodule has been created and all experimental features have been moved there. Note that, in contrast to the other `netket` submodules, `netket.experimental` is not imported by default. #976
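A minimal sketch of `max_neighbor_order` (assuming otherwise-default `Lattice` arguments):

```python
import numpy as np
import netket as nk

# A 4x4 square lattice: nearest neighbours only vs. up to next-nearest neighbours.
g1 = nk.graph.Lattice(basis_vectors=np.eye(2), extent=(4, 4))
g2 = nk.graph.Lattice(basis_vectors=np.eye(2), extent=(4, 4), max_neighbor_order=2)

# g2 contains extra edges, distinguishable from nearest-neighbour ones by color.
print(g1.n_edges, g2.n_edges)
```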
Breaking Changes#
Moved `nk.vqs.variables_from_***` to the `nk.experimental.vqs` module. Also moved the experimental samplers `nk.sampler.MetropolisPt` and `nk.sampler.MetropolisPmap` to `nk.experimental.sampler`. #976
`operator.size` has been deprecated. If you were using this function, please transition to `operator.hilbert.size`. #985
Bug Fixes#
A bug where `LocalOperator.get_conn_flattened` would read out-of-bounds memory has been fixed. It is unlikely that the bug was causing problems, but it triggered warnings when running Numba with boundscheck activated. #966
The dependency `python-igraph` has been updated to `igraph`, following the rename of the upstream project, in order to work on conda. #986
`n_samples_per_rank` was returning wrong values and has now been fixed. #987
The `DenseSymm` layer now also accepts objects of type `HashableArray` as its `symmetries` argument. #989
A bug where `VMC.info()` was erroring has been fixed. #984
NetKet 3.1 (20 October 2021)#
New features#
Added conversion methods `to_qobj()` to operators and variational states, which produce QuTiP `Qobj`s (see the sketch below).
A function `nk.nn.activation.reim` has been added that transforms a nonlinearity to act separately on the real and imaginary parts.
Nonlinearities `reim_selu` and `reim_relu` have been added.
Autoregressive Neural Networks (ARNN) now have a `machine_pow` field (defaulting to 2) used to change the exponent used for the normalization of the wavefunction. #940
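For example (requires QuTiP to be installed; the operator and system size are illustrative):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=2)
op = nk.operator.spin.sigmax(hi, 0)

qobj = op.to_qobj()  # a qutip.Qobj with the same matrix elements
print(qobj)
```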
Breaking Changes#
The default activation for `netket.models.GCNN` has been changed from `jax.nn.selu` to `netket.nn.reim_selu`. #892
`netket.nn.initializers` has been deprecated in favor of `jax.nn.initializers`. #935
Subclasses of `netket.models.AbstractARNN` must define the field `machine_pow`. #940
`nk.hilbert.HilbertIndex` and `nk.operator.spin.DType` are now unexported (they were never intended to be visible). #904
`AbstractOperator`s have been renamed `DiscreteOperator`s. `AbstractOperator`s still exist, but have almost no functionality and are intended as the base class for more arbitrary (e.g. continuous-space) operators. If you have defined a custom operator inheriting from `AbstractOperator`, you should change it to derive from `DiscreteOperator` (see the sketch below). #929
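A hedged sketch of the required change for custom operators (`MyOperator` is hypothetical and its body is elided):

```python
import netket as nk

# Before: class MyOperator(nk.operator.AbstractOperator): ...
class MyOperator(nk.operator.DiscreteOperator):
    """A custom discrete-space operator; implement get_conn and friends as before."""
    ...
```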
Internal Changes#
`PermutationGroup.product_table` now consumes less memory and is more performant. This is helpful when working with large symmetry groups. #884 #891
Added a size check to `DiscreteOperator.get_conn`, which now throws helpful error messages if the sizes do not match. #927
The internal `numba4jax` module has been factored out into a standalone library, named (how original) `numba4jax`. This library was never intended to be used by external users, but if for any reason you were using it, you should switch to the external library. #934
`netket.jax` now includes several batching utilities like `batched_vmap` and `batched_vjp`. Those can be used to build memory-efficient batched code, but are considered internal and experimental and might change without warning (a conceptual sketch follows below). #925
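To illustrate the idea only, here is a conceptual sketch in plain JAX, not the NetKet utilities themselves (whose exact signatures are internal): a batched vmap evaluates a function over a large batch in fixed-size chunks to bound peak memory.

```python
import jax
import jax.numpy as jnp

def chunked_vmap(f, xs, chunk_size):
    # Illustrative helper: split the batch into chunks, vmap inside each chunk,
    # and loop over chunks with lax.map to keep memory bounded.
    # Assumes the batch size is divisible by chunk_size.
    xs = xs.reshape(-1, chunk_size, *xs.shape[1:])
    ys = jax.lax.map(jax.vmap(f), xs)
    return ys.reshape(-1, *ys.shape[2:])

ys = chunked_vmap(jnp.sin, jnp.arange(1024.0), chunk_size=128)
```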
Bug Fixes#
Autoregressive networks now work with `Qubit` Hilbert spaces. #937
NetKet 3.0 (23 August 2021)#
New features#
Breaking Changes#
The default initializer for `netket.nn.Dense` layers now matches the `flax.linen` default: `lecun_normal` instead of `normal(0.01)`. #869
The default initializer for `netket.nn.DenseSymm` layers is now chosen in order to give variance 1 to every output channel, therefore defaulting to `lecun_normal`. #870
Internal Changes#
Bug Fixes#
NetKet 3.0b4 (17 August 2021)#
New features#
`DenseSymm` now accepts a `mode` argument to specify whether the symmetries should be computed with a full dense matrix or an FFT. The latter method is much faster for sufficiently large systems. Other kwargs have been added to satisfy the interface. The API changes are also reflected in `RBMSymm` and `GCNN`. #792
Breaking Changes#
The so-called legacy netket in `netket.legacy` has been removed. #773
Internal Changes#
The methods `expect` and `expect_and_grad` of `MCState` now use dispatch to select the relevant implementation of the algorithm. They can therefore be expanded and overridden without editing NetKet's source code. #804
`netket.utils.mpi_available` has been moved to `netket.utils.mpi.available` to have a more consistent API (all MPI-related properties in the same submodule). #827
`netket.logging.TBLog` has been renamed to `netket.logging.TensorBoardLog` for better readability. A deprecation warning is now issued if the older name is used. #827
When `MCState` initializes a model by calling `model.init`, the call is now jitted. This should speed it up for non-trivial models, but might break non-jit-invariant models. #832
`operator.get_conn_padded` now supports arbitrarily-dimensioned bitstrings as input and reshapes the output accordingly (see the sketch below). #834
NetKet's implementation of dataclasses now supports `pytree_node=True/False` on cached properties. #835
The Plum version has been bumped to 1.5.1 to avoid broken versions (1.4, 1.5). #856
Numba version 0.54 is now allowed. #857
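A hedged sketch of the batched behaviour (operator and shapes illustrative):

```python
import jax
import numpy as np
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
op = nk.operator.Ising(hi, nk.graph.Chain(4), h=1.0)

# A batch of bitstrings with an arbitrary leading shape (2, 3).
x = np.asarray(hi.random_state(jax.random.PRNGKey(0), (2, 3)))  # shape (2, 3, 4)
xp, mels = op.get_conn_padded(x)
print(xp.shape, mels.shape)  # (2, 3, n_conn, 4) and (2, 3, n_conn)
```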
Bug Fixes#
NetKet 3.0b3 (published on 9 July 2021)#
New features#
The `netket.utils.group` submodule provides utilities for geometrical and permutation groups. `Lattice` (and its specialisations like `Grid`) use these to automatically construct the space groups of lattices, as well as their character tables for generating wave functions with broken symmetry. #724
Autoregressive neural networks, a corresponding sampler, and masked linear layers have been added to `models`, `sampler` and `nn`. #705
Breaking Changes#
The `netket.graph.Grid` class has been removed. `netket.graph.Grid` will now return an instance of `graph.Lattice` supporting the same API but with new functionalities related to spatial symmetries. The `color_edges` optional keyword argument has been removed without deprecation. #724
`MCState.n_discard` has been renamed to `MCState.n_discard_per_chain` and the old binding has been deprecated. #739
The `nk.optimizer.qgt.QGTOnTheFly` option `centered=True` has been removed because we are now convinced the two options yielded equivalent results. `QGTOnTheFly` now always behaves as if `centered=False`. #706
Internal Changes#
`networkX` has been replaced by `igraph`, yielding a considerable speedup for some graph-related operations. #729
The `netket.hilbert.random` module now uses `plum-dispatch` (through `netket.utils.dispatch`) to select the correct implementation of `random_state` and `flip_state`. This makes it easy to define new Hilbert states and extend their functionality. #734
The `AbstractHilbert` interface is now much smaller in order to also support continuous Hilbert spaces. Any functionality specific to discrete Hilbert spaces (what was previously supported) has been moved to a new abstract type, `netket.hilbert.DiscreteHilbert`. Any Hilbert space previously subclassing `netket.hilbert.AbstractHilbert` should be modified to subclass `netket.hilbert.DiscreteHilbert`. #800
Bug Fixes#
`nn.to_array` and `MCState.to_array`, if `normalize=False`, no longer subtract the logarithm of the maximum value from the state. #705
Autoregressive networks now work with Fock space and give correct errors if the Hilbert space is not supported. #806
Autoregressive networks are now much (10x-100x) faster. #705
Errors are no longer thrown when calling `operator.get_conn_flattened(states)` with a jax array. #764
Fixed a bug with the driver progress bar when `step_size != 1`. #747
NetKet 3.0b2 (published on 31 May 2021)#
New features#
Group Equivariant Neural Networks have been added to `models`. #620
A permutation-invariant RBM and a permutation-invariant dense layer have been added to `models` and `nn.linear`. #573
Added the property `acceptance` to `MetropolisSampler`'s `SamplerState`, computing the MPI-enabled acceptance ratio. #592
Added `StateLog`, a new logger that stores the parameters of the model during the optimization in a folder or in a tar file. #645
A warning is now issued if NetKet detects that it is running under `mpirun` but MPI dependencies are not installed. #631
`operator.LocalOperator`s now do not return a zero matrix element on the diagonal if the whole diagonal is zero. #623
`logger.JSONLog` now automatically flushes at every iteration if it does not consume significant CPU cycles. #599
The interface of Stochastic Reconfiguration has been overhauled and made more modular. You can now specify the solver you wish to use, NetKet provides some dense solvers out of the box, and there are 3 different ways to compute the Quantum Geometric Tensor. Read the documentation to learn more about it (a sketch follows below). #674
Unless you specify the QGT implementation you wish to use with SR, an automatic heuristic based on your model and the solver picks one. This might affect SR performance. #674
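A hedged sketch of the modular construction (the QGT implementation, solver and shift shown are illustrative choices, not defaults):

```python
import jax
import netket as nk

sr = nk.optimizer.SR(
    qgt=nk.optimizer.qgt.QGTJacobianPyTree,  # one of the QGT implementations
    solver=jax.scipy.sparse.linalg.cg,       # any jax-compatible linear solver
    diag_shift=0.01,
)
```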
Breaking Changes#
For all samplers, `n_chains` now sets the total number of chains across all MPI ranks. This is a breaking change compared to the old API, where `n_chains` would set the number of chains on a single MPI rank. It is still possible to set the number of chains per MPI rank by specifying `n_chains_per_rank` instead of `n_chains`. This change, while breaking, allows us to be consistent with the interface of `variational.MCState`, where `n_samples` is the total number of samples across MPI nodes.
`MetropolisSampler.reset_chain` has been renamed to `MetropolisSampler.reset_chains`. Likewise in the constructor of all samplers.
Briefly during development releases, `MetropolisSamplerState.acceptance_ratio` returned the percentage (not the ratio) of acceptance. `acceptance_ratio` is now deprecated in favour of the correct `acceptance`.
`models.Jastrow` now internally symmetrizes the matrix before computing its value. #644
`MCState.evaluate` has been renamed to `MCState.log_value`. #632
`nk.optimizer.SR` no longer accepts keyword arguments relative to the sparse solver. Those should be passed inside a closure or via `functools.partial` passed as the `solver` argument (see the sketch below). `nk.optimizer.sr.SRLazyCG` and `nk.optimizer.sr.SRLazyGMRES` have been deprecated and will soon be removed.
Parts of the `Lattice` API have been overhauled, with deprecations of several methods in favor of a consistent usage of `Lattice.position` for the real-space location of sites and `Lattice.basis_coords` for the location of sites in terms of basis vectors. `Lattice.sites` has been added, which provides a sequence of `LatticeSite` objects combining all site properties. Furthermore, `Lattice` now provides lookup of sites from their position via `id_from_position`, using a hashing scheme that works across periodic boundaries. #703 #715
`nk.variational` has been renamed to `nk.vqs` and will be removed in a future release.
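A hedged sketch of passing solver options through `functools.partial` (tolerances illustrative):

```python
from functools import partial

import jax
import netket as nk

# Solver options now live in the solver callable itself, not in SR's kwargs.
solver = partial(jax.scipy.sparse.linalg.cg, tol=1e-6, maxiter=200)
sr = nk.optimizer.SR(solver=solver)
```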
Bug Fixes#
Fixed `operator.BoseHubbard` usage under jax Hamiltonian sampling. #662
Fixed `SROnTheFly` for `R->C` models with non-homogeneous parameters. #661
Fixed an MPI compilation deadlock when computing expectation values. #655
Fixed a bug preventing the creation of a `hilbert.Spin` Hilbert space with odd sites and even `S`. #641
Fixed a bug preventing the usage of `NumpyMetropolisSampler` with `MCState.expect`. #635
Fixed a bug where `graph.Lattice` was not correctly computing neighbours because of floating-point issues. #633
Fixed the Y Pauli matrix, which was stored as its conjugate. #618 #617 #615
NetKet 3.0b1 (published beta release)#
API Changes#
Hilbert space constructors no longer store the lattice graph. As a consequence, the constructors no longer accept the graph.
Special Hamiltonians defined on a lattice, such as `operator.BoseHubbard`, `operator.Ising` and `operator.Heisenberg`, now require the graph to be passed explicitly through a `graph` keyword argument.
`operator.LocalOperator` now defaults to real-valued matrix elements, except if you construct it with a complex-valued matrix. This is also valid for operators such as `operator.spin.sigmax` and similar.
When performing algebraic operations `*, -, +` on pairs of `operator.LocalOperator`, the dtype of the result is computed using standard numpy promotion logic.
Doing an in-place operation `+=, -=, *=` on a real-valued operator will now fail if the other operand is complex. While this might seem annoying, it is useful to ensure that smaller types such as `float32` or `complex64` are preserved if the user desires to do so.
`AbstractMachine` has been removed. Its functionality is now split among the model itself, which is defined by the user, and `variational.MCState` for pure states or `variational.MCMixedState` for mixed states.
The model, in general, is composed of two functions, or an object with two functions: an `init(rng, sample_val)` function, accepting a `jax.random.PRNGKey()` object and an input, returning the parameters and the state of the model for that particular sample shape; and an `apply(params, samples, **kwargs)` function, evaluating the model for the given parameters and inputs (see the sketch after this list).
Some models (previously machines) such as the RBM (Restricted Boltzmann Machine), NDM (Neural Density Matrix) or MPS (Matrix Product State ansatz) are available in Pre-built models.
Machines, now called models, should be written using Flax or another jax framework.
Serialization and deserialization functionality has been moved to `netket.variational.MCState`, which supports the standard Flax interface through MsgPack. See the Flax docs for more information.
The `AbstractMachine.init_random_parameters` functionality has been absorbed into `netket.vqs.VariationalState.init_parameters()`, which however has a different syntax.
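A minimal sketch of such a model written in Flax (the architecture and names are illustrative, not a NetKet built-in):

```python
import flax.linen as nn
import jax
import jax.numpy as jnp

class LogCoshModel(nn.Module):  # hypothetical example model
    alpha: int = 1

    @nn.compact
    def __call__(self, x):
        y = nn.Dense(features=self.alpha * x.shape[-1])(x)
        return jnp.sum(jnp.log(jnp.cosh(y)), axis=-1)

model = LogCoshModel()
params = model.init(jax.random.PRNGKey(0), jnp.ones((3, 8)))  # init(rng, sample)
log_psi = model.apply(params, jnp.ones((3, 8)))               # apply(params, samples)
```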
Samplers now require the Hilbert space upon which they sample to be passed to the constructor. Also note that several keyword arguments of the samplers have changed, and new ones are available.
It is now possible to change the samplers' dtype, which controls the type of the output. By default they use double-precision samples (`np.float64`). Be wary of type promotion issues with your models.
Samplers no longer take a machine as an argument.
Samplers are now immutable (frozen) dataclasses (defined through `flax.struct.dataclass`) that only hold the sampling parameters. As a consequence, it is no longer possible to change their settings such as `n_chains` or `n_sweeps` without creating a new sampler. If you wish to update only one parameter, it is possible to construct the new sampler with the updated value by using the `sampler.replace(parameter=new_value)` function (see the sketch after this list).
Samplers are no longer stateful objects. Instead, they can construct an immutable state object via `netket.sampler.init_state`, which can be passed to sampling functions such as `netket.sampler.sample`, which now also return the updated state. However, unless you have particular use-cases, we advise you to use the variational state `MCState` instead.
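A hedged sketch of updating a frozen sampler (parameter names as of this release):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
sa = nk.sampler.MetropolisLocal(hi, n_chains=16)

# Samplers are frozen: build an updated copy instead of mutating in place.
sa2 = sa.replace(n_sweeps=2 * hi.size)
```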
The `netket.optimizer` module has been overhauled, and now only re-exports the flax `optim` module. We advise against using NetKet's optimizers; use optax instead.
The `netket.optimizer.SR` object is now only a set of options used to compute the SR matrix. The SR matrix, now called the `quantum_geometric_tensor`, can be obtained by calling `variational.MCState.quantum_geometric_tensor()` (see the sketch below). Depending on the settings, this can be a lazy object.
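A hedged sketch of obtaining the QGT from a variational state (model and sizes illustrative; `nk.vqs` is the modern alias of `nk.variational`):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
vstate = nk.vqs.MCState(
    nk.sampler.MetropolisLocal(hi), nk.models.RBM(alpha=1), n_samples=512
)

qgt = vstate.quantum_geometric_tensor()  # may be a lazy linear operator
```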
`netket.Vmc` has been renamed to `netket.VMC`.
`netket.models.RBM` replaces the old `RBM` machine, but has real parameters by default.
As we rely on Jax, using `dtype=float` or `dtype=complex`, which are weak types, will sometimes lead to loss of precision because they might be converted to `float32`. Use `np.float64` or `np.complex128` instead if you want double precision when defining your models.