netket.models.ARNNConv1D#
- class netket.models.ARNNConv1D[source]#
Bases: ARNNSequential
Autoregressive neural network with 1D convolution layers.
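A minimal construction sketch (not part of the original reference): the layer/feature sizes and the pairing with ARDirectSampler below are illustrative choices, not defaults.
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=8)            # homogeneous, unconstrained Hilbert space
model = nk.models.ARNNConv1D(hilbert=hi,
                             layers=3,        # number of 1D convolution layers
                             features=16,     # features per layer
                             kernel_size=3)
sampler = nk.sampler.ARDirectSampler(hi)      # exact autoregressive sampling
vstate = nk.vqs.MCState(sampler, model, n_samples=512)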
- Attributes
- precision: Any = None# numerical precision of the computation, see jax.lax.Precision for details.
- variables#
Returns the variables in this module.
- features: Union[Tuple[int, ...], int]# output feature density in each layer. If a single number is given, all layers except the last one will have the same number of features.
- hilbert: HomogeneousHilbert#
the Hilbert space. Only homogeneous unconstrained Hilbert spaces are supported.
- Methods
- activation()#
selu applied separately to the real and imaginary parts of its input (an illustrative sketch follows this entry).
The docstring to the original function follows.
Scaled exponential linear unit activation.
Computes the element-wise function:
\[\mathrm{selu}(x) = \lambda \begin{cases} x, & x > 0 \\ \alpha e^x - \alpha, & x \le 0 \end{cases}\]
where \(\lambda = 1.0507009873554804934193349852946\) and \(\alpha = 1.6732632423543772848170429916717\).
For more information, see Self-Normalizing Neural Networks.
- Args:
x : input array
- Return type:
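As a rough illustration of "selu applied separately to the real and imaginary parts": the helper below is an illustrative stand-in, not the library's implementation, and the name reim_selu is assumed for this sketch.
import jax.numpy as jnp
from jax.nn import selu

def reim_selu(z):
    # selu on the real part and on the imaginary part, recombined into a complex array
    return selu(z.real) + 1j * selu(z.imag)

print(reim_selu(jnp.array([1.0 + 2.0j, -1.0 - 0.5j])))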
- bias_init(shape, dtype=<class 'jax.numpy.float64'>)#
An initializer that returns a constant array full of zeros.
The key argument is ignored.
>>> import jax, jax.numpy as jnp
>>> jax.nn.initializers.zeros(jax.random.PRNGKey(42), (2, 3), jnp.float32)
Array([[0., 0., 0.],
       [0., 0., 0.]], dtype=float32)
- Return type:
Any
- conditional(inputs, index)#
Computes the conditional probabilities for one site to take each value.
It should only be called successively with indices 0, 1, 2, …, as in the autoregressive sampling procedure (a sketch of this loop follows below).
- Parameters:
- Return type:
- Returns:
The probabilities with dimensions (batch, Hilbert.local_size).
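A simplified sketch of how conditional fits into autoregressive sampling. In practice this loop is handled by NetKet's ARDirectSampler; hi and model are assumed from the construction sketch above, and details may vary between NetKet versions.
import jax
import jax.numpy as jnp

n_batch = 4
σ = jnp.zeros((n_batch, hi.size))                  # sites not yet sampled are left at 0
variables = model.init(jax.random.PRNGKey(0), σ)
local_states = jnp.asarray(hi.local_states)        # e.g. [-1., 1.] for spin-1/2
key = jax.random.PRNGKey(1)

for i in range(hi.size):
    key, subkey = jax.random.split(key)
    # conditional probabilities for site i, shape (batch, hi.local_size)
    p = model.apply(variables, σ, i, method=model.conditional)
    idx = jax.random.categorical(subkey, jnp.log(p), axis=-1)   # sample a local state index
    σ = σ.at[:, i].set(local_states[idx])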
- conditionals(inputs)#
Computes the conditional probabilities for each site to take each value.
- Parameters:
inputs (Union[ndarray, Array]) – configurations with dimensions (batch, Hilbert.size).
- Return type:
- Returns:
The probabilities with dimensions (batch, Hilbert.size, Hilbert.local_size).
Examples
>>> import pytest; pytest.skip("skip automated test of this docstring")
>>>
>>> p = model.apply(variables, σ, method=model.conditionals)
>>> print(p[2, 3, :])
[0.3 0.7]
# For the 3rd spin of the 2nd sample in the batch,
# it takes probability 0.3 to be spin down (local state index 0),
# and probability 0.7 to be spin up (local state index 1).
- conditionals_log_psi(inputs)#
Computes the log of the conditional wave-functions for each site to take each value.
- has_rng(name)#
Returns true if a PRNGSequence with the given name exists.
- is_initializing()#
Returns True if running under self.init(…) or nn.init(…)().
This is a helper method to handle the common case of simple initialization where we wish to have setup logic occur only when called under module.init or nn.init. For more complicated multi-phase initialization scenarios it is better to test for the mutability of particular variable collections or for the presence of particular variables that potentially need to be initialized.
- Return type:
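A small illustrative example of the simple-initialization pattern described above (the module and its setup logic are hypothetical):
import jax
import jax.numpy as jnp
import flax.linen as nn

class Foo(nn.Module):
    @nn.compact
    def __call__(self, x):
        if self.is_initializing():
            # runs only under Foo().init(...), e.g. for one-off checks
            print("initializing with input shape", x.shape)
        return nn.Dense(4)(x)

variables = Foo().init(jax.random.PRNGKey(0), jnp.ones((2, 3)))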
- kernel_init(shape, dtype=<class 'jax.numpy.float64'>)#
- lazy_init(rngs, *args, method=None, mutable=DenyList(deny='intermediates'), **kwargs)#
Initializes a module without computing on an actual input.
lazy_init will initialize the variables without doing unnecessary compute. The input data should be passed as a
jax.ShapeDtypeStruct
which specifies the shape and dtype of the input but no concrete data.
Example:
model = nn.Dense(features=256)
variables = model.lazy_init(rng, jax.ShapeDtypeStruct((1, 128), jnp.float32))
The args and kwargs passed to lazy_init can be a mix of concrete (jax arrays, scalars, bools) and abstract (ShapeDtypeStruct) values. Concrete values are only necessary for arguments that affect the initialization of variables. For example, the model might expect a keyword arg that enables/disables a subpart of the model. In this case, an explicit value (True/False) should be passed, otherwise lazy_init cannot infer which variables should be initialized (a sketch illustrating this follows the parameter list below).
- Parameters:
rngs (Union[Any, Dict[str, Any]]) – The rngs for the variable collections.
*args – arguments passed to the init function.
method (Optional[Callable[..., Any]]) – An optional method. If provided, applies this method. If not provided, applies the __call__ method.
mutable (Union[bool, str, Collection[str], DenyList]) – Can be bool, str, or list. Specifies which collections should be treated as mutable: bool: all/no collections are mutable. str: The name of a single mutable collection. list: A list of names of mutable collections. By default all collections except 'intermediates' are mutable.
**kwargs – Keyword arguments passed to the init function.
- Return type:
FrozenDict[str, Mapping[str, Any]]
- Returns:
The initialized variable dict.
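To illustrate the point about concrete versus abstract arguments, here is a hedged sketch; the Net module and its use_extra_head flag are made up for this example.
import jax
import jax.numpy as jnp
import flax.linen as nn

class Net(nn.Module):
    @nn.compact
    def __call__(self, x, use_extra_head=False):
        x = nn.Dense(8)(x)
        if use_extra_head:
            x = nn.Dense(2)(x)    # these parameters exist only when the flag is True
        return x

variables = Net().lazy_init(
    jax.random.PRNGKey(0),
    jax.ShapeDtypeStruct((1, 16), jnp.float32),  # abstract: only shape and dtype matter
    use_extra_head=True,                         # concrete: decides which variables are created
)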
- perturb(name, value, collection='perturbations')#
Add a zero-value variable ("perturbation") to the intermediate value.
The gradient of value would be the same as the gradient of this perturbation variable. Therefore, if you define your loss function with both params and perturbations as standalone arguments, you can get the intermediate gradients of value by running jax.grad on the perturbation argument.
Note: this is an experimental API and may be tweaked later for better performance and usability. At its current stage, it creates extra dummy variables that occupy extra memory space. Use it only to debug gradients in training.
Example:
import jax
import jax.numpy as jnp
import flax.linen as nn

class Foo(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.Dense(3)(x)
        x = self.perturb('dense3', x)
        return nn.Dense(2)(x)

def loss(params, perturbations, inputs, targets):
    variables = {'params': params, 'perturbations': perturbations}
    preds = model.apply(variables, inputs)
    return jnp.square(preds - targets).mean()

x = jnp.ones((2, 9))
y = jnp.ones((2, 2))
model = Foo()
variables = model.init(jax.random.PRNGKey(0), x)
intm_grads = jax.grad(loss, argnums=1)(variables['params'], variables['perturbations'], x, y)
print(intm_grads['dense3'])  # ==> [[-1.456924   -0.44332537  0.02422847]
                             #      [-1.456924   -0.44332537  0.02422847]]
If perturbations are not passed to apply, perturb behaves like a no-op so you can easily disable the behavior when not needed:
model.apply({'params': params, 'perturbations': perturbations}, x)  # works as expected
model.apply({'params': params}, x)  # behaves like a no-op
- put_variable(col, name, value)#
Updates the value of the given variable if it is mutable, or raises an error otherwise.
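A brief sketch of the mutability requirement (the Counter module is hypothetical): put_variable only succeeds when its collection is mutable in the current init/apply call.
import jax
import jax.numpy as jnp
import flax.linen as nn

class Counter(nn.Module):
    @nn.compact
    def __call__(self):
        # declare the variable, then overwrite it in place
        self.variable('counter', 'count', lambda: jnp.zeros((), jnp.int32))
        new = self.get_variable('counter', 'count') + 1
        self.put_variable('counter', 'count', new)
        return new

counter = Counter()
variables = counter.init(jax.random.PRNGKey(0))
value, state = counter.apply(variables, mutable=['counter'])  # 'counter' must be declared mutable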
- reshape_inputs(inputs)#
Reshapes the inputs from (batch_size, hilbert_size) to (batch_size, spatial_dims…) before sending them to the ARNN layers.
- tabulate(rngs, *args, depth=None, show_repeated=False, mutable=True, console_kwargs=None, **kwargs)#
Creates a summary of the Module represented as a table.
This method has the same signature and internally calls Module.init, but instead of returning the variables, it returns the string summarizing the Module in a table. tabulate uses jax.eval_shape to run the forward computation without consuming any FLOPs or allocating memory.
Additional arguments can be passed into the console_kwargs argument, for example, {'width': 120}. For a full list of console_kwargs arguments, see: https://rich.readthedocs.io/en/stable/reference/console.html#rich.console.Console
Example:
import jax
import jax.numpy as jnp
import flax.linen as nn

class Foo(nn.Module):
    @nn.compact
    def __call__(self, x):
        h = nn.Dense(4)(x)
        return nn.Dense(2)(h)

x = jnp.ones((16, 9))
print(Foo().tabulate(jax.random.PRNGKey(0), x))
This gives the following output:
                                 Foo Summary
┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
┃ path    ┃ module ┃ inputs        ┃ outputs       ┃ params               ┃
┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
│         │ Foo    │ float32[16,9] │ float32[16,2] │                      │
├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
│ Dense_0 │ Dense  │ float32[16,9] │ float32[16,4] │ bias: float32[4]     │
│         │        │               │               │ kernel: float32[9,4] │
│         │        │               │               │                      │
│         │        │               │               │ 40 (160 B)           │
├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
│ Dense_1 │ Dense  │ float32[16,4] │ float32[16,2] │ bias: float32[2]     │
│         │        │               │               │ kernel: float32[4,2] │
│         │        │               │               │                      │
│         │        │               │               │ 10 (40 B)            │
├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
│         │        │               │         Total │ 50 (200 B)           │
└─────────┴────────┴───────────────┴───────────────┴──────────────────────┘

                       Total Parameters: 50 (200 B)
Note: rows order in the table does not represent execution order, instead it aligns with the order of keys in variables which are sorted alphabetically.
- Parameters:
rngs (Union[Any, Dict[str, Any]]) – The rngs for the variable collections as passed to Module.init.
*args – The arguments to the forward computation.
depth (Optional[int]) – controls how many submodules deep the summary can go. By default it is None, which means no limit. If a submodule is not shown because of the depth limit, its parameter count and bytes will be added to the row of its first shown ancestor such that the sum of all rows always adds up to the total number of parameters of the Module.
show_repeated (bool) – If True, repeated calls to the same module will be shown in the table, otherwise only the first call will be shown. Default is False.
mutable (Union[bool, str, Collection[str], DenyList]) – Can be bool, str, or list. Specifies which collections should be treated as mutable: bool: all/no collections are mutable. str: The name of a single mutable collection. list: A list of names of mutable collections. By default all collections except 'intermediates' are mutable.
console_kwargs (Optional[Mapping[str, Any]]) – An optional dictionary with additional keyword arguments that are passed to rich.console.Console when rendering the table. Default arguments are {'force_terminal': True, 'force_jupyter': False}.
**kwargs – keyword arguments to pass to the forward computation.
- Return type:
- Returns:
A string summarizing the Module.
- unbind()#
Returns an unbound copy of a Module and its variables.
unbind helps create a stateless version of a bound Module.
An example of a common use case: to extract a sub-Module defined inside setup() and its corresponding variables: 1) temporarily bind the parent Module; and then 2) unbind the desired sub-Module. (Recall that setup() is only called when the Module is bound.):
class AutoEncoder(nn.Module):
  def setup(self):
    self.encoder = Encoder()
    self.decoder = Decoder()

  def __call__(self, x):
    return self.decoder(self.encoder(x))

module = AutoEncoder()
variables = module.init(jax.random.PRNGKey(0), jnp.ones((1, 784)))
...
# Extract the Encoder sub-Module and its variables
encoder, encoder_vars = module.bind(variables).encoder.unbind()