EarlyStopper
- class EarlyStopper(model: pykeen.models.base.Model, evaluator: pykeen.evaluation.evaluator.Evaluator, training_triples_factory: pykeen.triples.triples_factory.CoreTriplesFactory, evaluation_triples_factory: pykeen.triples.triples_factory.CoreTriplesFactory, evaluation_batch_size: int | None = None, evaluation_slice_size: int | None = None, frequency: int = 10, patience: int = 2, metric: str = 'hits_at_k', relative_delta: float = 0.01, results: list[float] = <factory>, larger_is_better: bool = True, result_tracker: pykeen.trackers.base.ResultTracker | None = None, result_callbacks: list[collections.abc.Callable[[pykeen.stoppers.stopper.Stopper, int | float, int], None]] = <factory>, continue_callbacks: list[collections.abc.Callable[[pykeen.stoppers.stopper.Stopper, int | float, int], None]] = <factory>, stopped_callbacks: list[collections.abc.Callable[[pykeen.stoppers.stopper.Stopper, int | float, int], None]] = <factory>, stopped: bool = False, best_model_path: pathlib.Path | None = None, clean_up_checkpoint: bool = True, use_tqdm: bool = False, tqdm_kwargs: dict[str, typing.Any] = <factory>)[source]
Bases: Stopper
A harness for early stopping.
Initialize the stopper.
- Parameters:
args – ignored positional parameters
kwargs – ignored keyword-based parameters
model (Model)
evaluator (Evaluator)
training_triples_factory (CoreTriplesFactory)
evaluation_triples_factory (CoreTriplesFactory)
evaluation_batch_size (int | None)
evaluation_slice_size (int | None)
frequency (int)
patience (int)
metric (str)
relative_delta (float)
larger_is_better (bool)
result_tracker (ResultTracker | None)
result_callbacks (list[Callable[[Stopper, int | float, int], None]])
continue_callbacks (list[Callable[[Stopper, int | float, int], None]])
stopped_callbacks (list[Callable[[Stopper, int | float, int], None]])
stopped (bool)
best_model_path (Path | None)
clean_up_checkpoint (bool)
use_tqdm (bool)
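Taken together, frequency, patience, relative_delta, and larger_is_better define the stopping rule. The following is a minimal, self-contained sketch of that rule's semantics (an illustration only, not PyKEEN's actual implementation; the function names here are hypothetical):

```python
def is_improvement(best: float, current: float, *,
                   larger_is_better: bool = True,
                   relative_delta: float = 0.01) -> bool:
    """Check whether ``current`` beats ``best`` by at least ``relative_delta`` (relative)."""
    if larger_is_better:
        return current > (1.0 + relative_delta) * best
    return current < (1.0 - relative_delta) * best


def run_early_stopping(results: list[float], *, patience: int = 2,
                       relative_delta: float = 0.01,
                       larger_is_better: bool = True) -> int:
    """Return the number of evaluations performed before stopping.

    Returns ``len(results)`` if the patience is never exhausted.
    """
    best: float | None = None
    misses = 0  # consecutive evaluations without sufficient improvement
    for i, value in enumerate(results):
        if best is None or is_improvement(
            best, value,
            larger_is_better=larger_is_better,
            relative_delta=relative_delta,
        ):
            best = value
            misses = 0
        else:
            misses += 1
            if misses >= patience:
                return i + 1  # stop after this evaluation
    return len(results)
```

With the defaults (patience=2, relative_delta=0.01), two consecutive evaluations that fail to beat the best result so far by at least 1% (relative) trigger a stop.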
Attributes Summary

best_epoch – Return the epoch at which the best result occurred.
best_metric – Return the best result so far.
best_model_path – The path to the weights of the best model.
clean_up_checkpoint – Whether to delete the file with the best model weights after termination (note: the weights will be re-loaded into the model before deletion).
evaluation_batch_size – Size of the evaluation batches.
evaluation_slice_size – Slice size of the evaluation batches.
frequency – The number of epochs after which the model is evaluated on the validation set.
larger_is_better – Whether a larger value is better, or a smaller one.
metric – The name of the metric to use.
number_results – Count the number of results stored in the early stopper.
patience – The number of iterations (one iteration can correspond to several epochs) with no improvement after which training will be stopped.
relative_delta – The minimum relative improvement necessary to consider it an improved result.
remaining_patience – Return the remaining patience.
result_tracker – The result tracker.
stopped – Did the stopper ever decide to stop?
use_tqdm – Whether to use a tqdm progress bar for evaluation.
Methods Summary

get_summary_dict() – Get a summary dict.
should_evaluate(epoch) – Decide if evaluation should be done based on the current epoch and the internal frequency.
should_stop(epoch) – Evaluate on a metric and compare to past evaluations to decide if training should stop.
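Given the frequency parameter, should_evaluate can be read as a simple modulus check. The sketch below illustrates that reading (it is not PyKEEN's own code, which may handle edge cases such as epoch 0 differently):

```python
def should_evaluate(epoch: int, frequency: int = 10) -> bool:
    """Trigger evaluation on every ``frequency``-th epoch (illustrative sketch)."""
    return epoch % frequency == 0
```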
Attributes Documentation
- best_epoch
Return the epoch at which the best result occurred.
- best_metric
Return the best result so far.
- clean_up_checkpoint: bool = True
Whether to delete the file with the best model weights after termination (note: the weights will be re-loaded into the model before deletion).
- number_results
Count the number of results stored in the early stopper.
- patience: int = 2
The number of iterations (one iteration can correspond to several epochs) with no improvement after which training will be stopped.
- relative_delta: float = 0.01
The minimum relative improvement necessary to consider it an improved result
- remaining_patience
Return the remaining patience.
- result_tracker: ResultTracker | None = None
The result tracker
Methods Documentation