Stoppers

Early stoppers.

The following code will create a scenario in which training will stop (quite) early when training pykeen.models.TransE on the pykeen.datasets.Nations dataset.

>>> from pykeen.pipeline import pipeline
>>> pipeline_result = pipeline(
...     dataset='nations',
...     model='transe',
...     model_kwargs=dict(embedding_dim=20, scoring_fct_norm=1),
...     optimizer='SGD',
...     optimizer_kwargs=dict(lr=0.01),
...     loss='marginranking',
...     loss_kwargs=dict(margin=1),
...     training_loop='slcwa',
...     training_kwargs=dict(num_epochs=100, batch_size=128),
...     negative_sampler='basic',
...     negative_sampler_kwargs=dict(num_negs_per_pos=1),
...     evaluator_kwargs=dict(filtered=True),
...     evaluation_kwargs=dict(batch_size=128),
...     stopper='early',
...     stopper_kwargs=dict(frequency=5, patience=2, relative_delta=0.002),
... )
class NopStopper(*args, **kwargs)[source]

A stopper that does nothing.

Initialize the stopper.

Parameters:
  • args – ignored positional parameters

  • kwargs – ignored keyword-based parameters

get_summary_dict()[source]

Return an empty mapping; this stopper has no attributes to summarize.

Return type:

Mapping[str, Any]

should_evaluate(epoch)[source]

Return False; this stopper never triggers evaluation.

Return type:

bool

Parameters:

epoch (int) –

should_stop(epoch)[source]

Return False; this stopper never stops training.

Return type:

bool

Parameters:

epoch (int) –
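
The NopStopper's contract is simple enough to sketch in plain Python. This is an illustrative re-implementation of the documented interface, not PyKEEN's source:

```python
from typing import Any, Mapping


class NopStopper:
    """A stopper that does nothing (illustrative sketch)."""

    def __init__(self, *args, **kwargs) -> None:
        # All positional and keyword parameters are accepted and ignored.
        pass

    def get_summary_dict(self) -> Mapping[str, Any]:
        # No state to summarize.
        return {}

    def should_evaluate(self, epoch: int) -> bool:
        # Never request evaluation.
        return False

    def should_stop(self, epoch: int) -> bool:
        # Never stop training.
        return False
```

Because every method is a no-op, this stopper is useful as a default when a training loop requires a stopper but no early stopping is desired.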

class EarlyStopper(model, evaluator, training_triples_factory, evaluation_triples_factory, evaluation_batch_size=None, evaluation_slice_size=None, frequency=10, patience=2, metric='hits_at_k', relative_delta=0.01, results=<factory>, larger_is_better=True, result_tracker=None, result_callbacks=<factory>, continue_callbacks=<factory>, stopped_callbacks=<factory>, stopped=False, best_model_path=None, clean_up_checkpoint=True)[source]

A harness for early stopping.

Initialize the stopper.

property best_epoch: int | None

Return the epoch at which the best result occurred.

Return type:

Optional[int]

property best_metric: float

Return the best result so far.

Return type:

float

best_model_path: Path | None = None

the path to the weights of the best model

clean_up_checkpoint: bool = True

Whether to delete the file with the best model weights after termination. Note: the weights are re-loaded into the model before the file is deleted.

continue_callbacks: List[Callable[[Stopper, int | float, int], None]]

Callbacks called when training is continued

evaluation_batch_size: int | None = None

Size of the evaluation batches

evaluation_slice_size: int | None = None

Slice size of the evaluation batches

evaluation_triples_factory: CoreTriplesFactory

The triples to use for evaluation

evaluator: Evaluator

The evaluator

frequency: int = 10

The number of epochs after which the model is evaluated on the validation set

get_summary_dict()[source]

Get a summary dict.

Return type:

Mapping[str, Any]

larger_is_better: bool = True

Whether a larger metric value is better, or a smaller one

metric: str = 'hits_at_k'

The name of the metric to use

model: Model

The model

property number_results: int

Count the number of results stored in the early stopper.

Return type:

int

patience: int = 2

The number of iterations (one iteration can correspond to various epochs) with no improvement after which training will be stopped.

relative_delta: float = 0.01

The minimum relative improvement necessary to consider it an improved result
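
The interplay of relative_delta and larger_is_better can be sketched as a simple improvement test. This is an illustrative approximation of the semantics described above, not PyKEEN's exact implementation:

```python
def is_improvement(
    best: float,
    current: float,
    relative_delta: float = 0.01,
    larger_is_better: bool = True,
) -> bool:
    """Check whether `current` beats `best` by at least `relative_delta` (relative)."""
    if larger_is_better:
        # e.g. hits@k: the new value must exceed the best by the relative margin
        return current > best * (1.0 + relative_delta)
    # e.g. mean rank: the new value must undercut the best by the relative margin
    return current < best * (1.0 - relative_delta)
```

For example, with the default relative_delta of 0.01, improving hits@k from 0.50 to 0.501 does not count as an improvement, since it falls short of the required 1% relative margin (0.505).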

property remaining_patience: int

Return the remaining patience.

Return type:

int

result_callbacks: List[Callable[[Stopper, int | float, int], None]]

Callbacks called after results are calculated

result_tracker: ResultTracker | None = None

The result tracker

results: List[float]

The metric results from all evaluations

should_evaluate(epoch)[source]

Decide if evaluation should be done based on the current epoch and the internal frequency.

Return type:

bool

Parameters:

epoch (int) –
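
The frequency-based schedule can be sketched as a standalone function (an assumption about the decision rule, not PyKEEN's exact code):

```python
def should_evaluate(epoch: int, frequency: int = 10) -> bool:
    # Evaluate whenever the epoch count reaches a multiple of the frequency.
    return epoch > 0 and epoch % frequency == 0
```

With the default frequency of 10, evaluation is requested at epochs 10, 20, 30, and so on.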

should_stop(epoch)[source]

Evaluate on a metric and compare to past evaluations to decide if training should stop.

Return type:

bool

Parameters:

epoch (int) –

stopped: bool = False

Did the stopper ever decide to stop?

stopped_callbacks: List[Callable[[Stopper, int | float, int], None]]

Callbacks called when training is stopped early

training_triples_factory: CoreTriplesFactory

The triples to use for training (to be used during filtered evaluation)
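
Putting patience and relative_delta together, the core stopping decision can be sketched as a loop over evaluation results. This is a minimal illustration of the documented semantics, assuming that patience counts consecutive evaluations without sufficient relative improvement; it is not PyKEEN's implementation:

```python
from typing import List, Optional


def stops_at(
    metrics: List[float],
    patience: int = 2,
    relative_delta: float = 0.01,
    larger_is_better: bool = True,
) -> Optional[int]:
    """Return the evaluation index at which early stopping would trigger, or None."""
    best: Optional[float] = None
    bad_evaluations = 0
    for index, value in enumerate(metrics):
        improved = best is None or (
            value > best * (1.0 + relative_delta)
            if larger_is_better
            else value < best * (1.0 - relative_delta)
        )
        if improved:
            # A sufficient improvement resets the patience counter.
            best = value
            bad_evaluations = 0
        else:
            bad_evaluations += 1
            if bad_evaluations >= patience:
                # Patience exhausted: stop here.
                return index
    return None
```

For a hits@k trajectory of [0.10, 0.20, 0.21, 0.21, 0.21] with the default patience of 2, the third and fourth plateau evaluations exhaust the patience and stopping triggers at the fifth evaluation.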

Base Classes

class Stopper(*args, **kwargs)[source]

A harness for stopping training.

Initialize the stopper.

Parameters:
  • args – ignored positional parameters

  • kwargs – ignored keyword-based parameters

abstract get_summary_dict()[source]

Get a summary dict.

Return type:

Mapping[str, Any]

static load_summary_dict_from_training_loop_checkpoint(path)[source]

Load the summary dict from a training loop checkpoint.

Parameters:

path (Union[str, Path]) – Path of the checkpoint file from which to load the state.

Return type:

Mapping[str, Any]

Returns:

The summary dict of the stopper at the time of saving the checkpoint.

should_evaluate(epoch)[source]

Check if the stopper should be evaluated on the given epoch.

Return type:

bool

Parameters:

epoch (int) –

abstract should_stop(epoch)[source]

Evaluate on the validation set and check the termination condition.

Return type:

bool

Parameters:

epoch (int) –