netket.callbacks.EarlyStopping#
- class netket.callbacks.EarlyStopping[source]#
Bases: Pytree
A simple callback to stop NetKet training when there are no further improvements in the monitored loss, based on driver._loss_name.
- __init__(min_delta=0.0, min_reldelta=0.0, patience=0, baseline=None, start_from_step=0, monitor='mean')[source]#
Construct an early stopping callback.
- Parameters:
min_delta (float) – Minimum change in the monitored quantity to qualify as an improvement.
min_reldelta (float) – Minimum relative change in the monitored quantity to qualify as an improvement.
patience (int | float) – Number of epochs with no improvement after which training will be stopped.
baseline (float | None) – Baseline value for the monitored quantity. Training will stop if the driver does not drop below the baseline.
monitor (str) – Loss statistic to monitor. Should be one of ‘mean’, ‘variance’, ‘error_of_mean’.
start_from_step (int) – Number of steps to wait before the callback has any effect. Defaults to 0.
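For orientation, a minimal usage sketch follows, showing how such a callback can be passed to a driver's run loop. The Hilbert space, Hamiltonian, model and hyperparameters are illustrative placeholders, not part of this class's documentation.

```python
import netket as nk

# Illustrative problem setup (placeholders; adapt to your own model).
g = nk.graph.Chain(length=10)
hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)
ha = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)

vs = nk.vqs.MCState(nk.sampler.MetropolisLocal(hi), nk.models.RBM(alpha=1), n_samples=1008)
gs = nk.driver.VMC(ha, nk.optimizer.Sgd(learning_rate=0.05), variational_state=vs)

# Stop if the mean energy has not improved by at least 1e-3
# for 20 consecutive steps.
early_stop = nk.callbacks.EarlyStopping(min_delta=1e-3, patience=20, monitor="mean")

gs.run(n_iter=300, callback=early_stop)
```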
- Attributes
- min_reldelta: float – Minimum relative change in the monitored quantity to qualify as an improvement. This behaves similarly to min_delta, but is more useful for intensive quantities that converge to 0, where absolute tolerances might not be effective (see the sketch after this list).
- baseline: float | None – Baseline value for the monitored quantity. Training will stop if the driver is above the baseline.
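To make the difference between the two tolerances concrete, here is a small sketch of how an absolute and a relative improvement test behave for a loss near zero. It illustrates the criterion only; the helper function improved() is hypothetical and this is not the library's exact implementation.

```python
# Illustrative sketch of absolute vs. relative improvement tests
# (not the library's exact implementation).
def improved(best: float, current: float, min_delta: float, min_reldelta: float) -> bool:
    # Absolute criterion: the monitored value must drop by at least min_delta.
    abs_ok = (best - current) > min_delta
    # Relative criterion: the drop must exceed a fraction of the best value,
    # which stays meaningful when the loss converges towards 0.
    rel_ok = (best - current) > min_reldelta * abs(best)
    return abs_ok and rel_ok

# With a loss near zero, an absolute tolerance of 1e-3 flags "no improvement"
# even for a 50% relative decrease, while a relative tolerance accepts it:
print(improved(best=2e-4, current=1e-4, min_delta=1e-3, min_reldelta=0.0))  # False
print(improved(best=2e-4, current=1e-4, min_delta=0.0, min_reldelta=0.1))   # True
```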
- Methods
- __call__(step, log_data, driver)[source]#
A boolean function that determines whether or not to stop training.
- Parameters:
step – An integer corresponding to the step (iteration or epoch) in training.
log_data – A dictionary containing log data for training.
driver – A NetKet variational driver.
- Returns:
A boolean. If True, training continues; otherwise, it stops.
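As a rough sketch of that convention, the snippet below shows how a hand-written loop might consult the callback. The driver gs and the assumed shape of log_data (a dictionary keyed by driver._loss_name, here 'Energy' for a VMC driver) are illustrative assumptions rather than a documented contract.

```python
# Hedged sketch of the return-value convention: the callback returns True to
# keep training and False to stop. `gs` is assumed to be an existing VMC driver.
early_stop = nk.callbacks.EarlyStopping(patience=10)

for step in range(1000):
    gs.advance(1)                     # one optimisation step
    log_data = {"Energy": gs.energy}  # assumed log layout: loss stats keyed by driver._loss_name
    if not early_stop(step, log_data, gs):
        print(f"Early stopping triggered at step {step}")
        break
```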