EarlyStopper

class EarlyStopper(model: pykeen.models.base.Model, evaluator: pykeen.evaluation.evaluator.Evaluator, training_triples_factory: pykeen.triples.triples_factory.CoreTriplesFactory, evaluation_triples_factory: pykeen.triples.triples_factory.CoreTriplesFactory, evaluation_batch_size: int | None = None, evaluation_slice_size: int | None = None, frequency: int = 10, patience: int = 2, metric: str = 'hits_at_k', relative_delta: float = 0.01, results: list[float] = <factory>, larger_is_better: bool = True, result_tracker: pykeen.trackers.base.ResultTracker | None = None, result_callbacks: list[collections.abc.Callable[[pykeen.stoppers.stopper.Stopper, int | float, int], None]] = <factory>, continue_callbacks: list[collections.abc.Callable[[pykeen.stoppers.stopper.Stopper, int | float, int], None]] = <factory>, stopped_callbacks: list[collections.abc.Callable[[pykeen.stoppers.stopper.Stopper, int | float, int], None]] = <factory>, stopped: bool = False, best_model_path: pathlib.Path | None = None, clean_up_checkpoint: bool = True, use_tqdm: bool = False, tqdm_kwargs: dict[str, typing.Any] = <factory>)[source]

Bases: Stopper

A harness for early stopping.

Initialize the stopper.

Attributes Summary

best_epoch

Return the epoch at which the best result occurred.

best_metric

Return the best result so far.

best_model_path

The path to the weights of the best model

clean_up_checkpoint

Whether to delete the file with the best model weights after termination. Note: the weights are re-loaded into the model before the checkpoint is deleted.

evaluation_batch_size

Size of the evaluation batches

evaluation_slice_size

Slice size of the evaluation batches

frequency

The number of epochs after which the model is evaluated on the validation set

larger_is_better

Whether a larger metric value is better (as opposed to a smaller one)

metric

The name of the metric to use

number_results

Count the number of results stored in the early stopper.

patience

The number of iterations (one iteration may span several epochs) with no improvement after which training will be stopped.

relative_delta

The minimum relative improvement required for a result to count as an improvement

remaining_patience

Return the remaining patience.

result_tracker

The result tracker

stopped

Did the stopper ever decide to stop?

use_tqdm

Whether to use a tqdm progress bar for evaluation

Methods Summary

get_summary_dict()

Get a summary dict.

should_evaluate(epoch)

Decide if evaluation should be done based on the current epoch and the internal frequency.

should_stop(epoch)

Evaluate on a metric and compare to past evaluations to decide if training should stop.

Attributes Documentation

best_epoch

Return the epoch at which the best result occurred.

best_metric

Return the best result so far.

best_model_path: Path | None = None

The path to the weights of the best model

clean_up_checkpoint: bool = True

Whether to delete the file with the best model weights after termination. Note: the weights are re-loaded into the model before the checkpoint is deleted.

evaluation_batch_size: int | None = None

Size of the evaluation batches

evaluation_slice_size: int | None = None

Slice size of the evaluation batches

frequency: int = 10

The number of epochs after which the model is evaluated on the validation set

larger_is_better: bool = True

Whether a larger metric value is better (as opposed to a smaller one)

metric: str = 'hits_at_k'

The name of the metric to use

number_results

Count the number of results stored in the early stopper.

patience: int = 2

The number of iterations (one iteration may span several epochs) with no improvement after which training will be stopped.

relative_delta: float = 0.01

The minimum relative improvement required for a result to count as an improvement

remaining_patience

Return the remaining patience.

result_tracker: ResultTracker | None = None

The result tracker

stopped: bool = False

Did the stopper ever decide to stop?

use_tqdm: bool = False

Whether to use a tqdm progress bar for evaluation

Methods Documentation

get_summary_dict() → Mapping[str, Any][source]

Get a summary dict.

Return type:

Mapping[str, Any]

should_evaluate(epoch: int) → bool[source]

Decide if evaluation should be done based on the current epoch and the internal frequency.

Parameters:

epoch (int)

Return type:

bool
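The frequency-based decision can be illustrated by a standalone sketch (an assumption about the rule's shape based on the description above, not necessarily PyKEEN's exact code):

```python
def should_evaluate(epoch: int, frequency: int = 10) -> bool:
    # Evaluate on the validation set on every ``frequency``-th epoch,
    # skipping epoch 0 (no training has happened yet).
    return epoch > 0 and epoch % frequency == 0
```

With the default frequency=10, evaluation is triggered at epochs 10, 20, 30, and so on.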

should_stop(epoch: int) → bool[source]

Evaluate on a metric and compare to past evaluations to decide if training should stop.

Parameters:

epoch (int)

Return type:

bool