Training

Stochastic Local Closed World Assumption

class SLCWATrainingLoop(model, optimizer=None, negative_sampler_cls=None, negative_sampler_kwargs=None)[source]

A training loop that uses the stochastic local closed world assumption training approach.

Initialize the training loop.

Parameters
  • model (Model) – The model to train

  • optimizer (Optional[Optimizer]) – The optimizer to use while training the model

  • negative_sampler_cls (Optional[Type[NegativeSampler]]) – The class of the negative sampler

  • negative_sampler_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments to pass to the negative sampler class on instantiation for every positive one
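Under the sLCWA, each positive triple is paired with a number of negative triples generated by corrupting either its head or its tail. The following is a minimal plain-Python sketch of that idea for illustration only; it is not the pykeen implementation, and the function name `corrupt_triple` is hypothetical:

```python
import random

def corrupt_triple(triple, num_entities, num_negs_per_pos, rng=random):
    """Generate negatives for one positive (h, r, t) by replacing
    either the head or the tail with a uniformly sampled entity."""
    h, r, t = triple
    negatives = []
    for _ in range(num_negs_per_pos):
        candidate = rng.randrange(num_entities)
        if rng.random() < 0.5:
            negatives.append((candidate, r, t))  # corrupt the head
        else:
            negatives.append((h, r, candidate))  # corrupt the tail
    return negatives

negs = corrupt_triple((0, 1, 2), num_entities=10, num_negs_per_pos=4)
```

Note that uniform sampling may occasionally reproduce the original positive triple; handling such false negatives (filtering) is a separate concern of the negative sampler.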

Find the maximum batch size for training with the current setting.

This method determines how large the batch size can be for the current model, given the training data and the hardware at hand. On success, it returns the determined batch size together with a boolean value of True, indicating that a batch of this size was successfully evaluated. Otherwise, it returns a batch size of 1 together with False.

Parameters

batch_size (Optional[int]) – The batch size to start the search with. If None, set batch_size=num_triples (i.e. full batch training).

Return type

Tuple[int, bool]

Returns

Tuple containing the maximum possible batch size as well as an indicator if the evaluation with that size was successful.
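The documented contract (a maximum batch size plus a success flag, falling back to `(1, False)`) can be pictured as repeatedly shrinking a starting batch size until a probe batch fits in memory. This is an illustrative sketch of that idea under stated assumptions, not the pykeen implementation:

```python
def find_max_batch_size(try_batch, start_batch_size):
    """Halve the batch size until `try_batch` succeeds.

    `try_batch` is assumed to raise MemoryError when a batch of the
    given size does not fit. Returns (batch_size, evaluated) per the
    documented contract; on total failure, (1, False).
    """
    batch_size = start_batch_size
    while batch_size > 1:
        try:
            try_batch(batch_size)
            return batch_size, True
        except MemoryError:
            batch_size //= 2
    return 1, False

# Toy probe: pretend anything above 256 triples exhausts memory.
def probe(batch_size):
    if batch_size > 256:
        raise MemoryError

print(find_max_batch_size(probe, 2048))  # (256, True)
```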

property device

The device used by the model.

classmethod get_normalized_name()

Get the normalized name of the training loop.

Return type

str

property num_negs_per_pos: int

Return the number of negative samples generated per positive triple by the sampler.

This is exposed as a property for API compatibility.

Return type

int

sub_batch_and_slice(batch_size)

Check if sub-batching and/or slicing is necessary to train the model on the hardware at hand.

Return type

Tuple[int, int]
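Sub-batching splits each batch into smaller chunks so that gradients accumulate across sub-batches while the optimizer still steps once per full batch. A minimal sketch of the splitting step, for illustration only (the helper name is hypothetical):

```python
def split_into_sub_batches(batch, sub_batch_size):
    """Yield consecutive sub-batches of at most `sub_batch_size` items.
    The optimizer steps once per full batch, so gradients computed on
    the sub-batches accumulate before the update."""
    for start in range(0, len(batch), sub_batch_size):
        yield batch[start:start + sub_batch_size]

batch = list(range(10))
sub_batches = list(split_into_sub_batches(batch, 4))
# sub_batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```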

to_embeddingdb(session=None, use_tqdm=False)

Upload to the embedding database.

Parameters
  • session – Optional SQLAlchemy session

  • use_tqdm (bool) – Use tqdm progress bar?

Return type

embeddingdb.sql.models.Collection

train(num_epochs=1, batch_size=None, slice_size=None, label_smoothing=0.0, sampler=None, continue_training=False, only_size_probing=False, use_tqdm=True, use_tqdm_batch=True, tqdm_kwargs=None, stopper=None, result_tracker=None, sub_batch_size=None, num_workers=None, clear_optimizer=False)

Train the KGE model.

Parameters
  • num_epochs (int) – The number of epochs to train the model.

  • batch_size (Optional[int]) – If set, the batch size to use for mini-batch training. Otherwise, the largest possible batch size is determined automatically.

  • slice_size (Optional[int]) – The divisor for the scoring function when using slicing (must be > 0). Slicing is generally only possible for LCWA training loops, and only for models that implement the slicing capability.

  • label_smoothing (float) – The label smoothing factor (0 <= label_smoothing < 1). If larger than zero, label smoothing is applied.

  • sampler (Optional[str]) – The type of sampler to use (None or ‘schlichtkrull’). At the moment, R-GCN under sLCWA is the only user of Schlichtkrull sampling.

  • continue_training (bool) – If set to False, (re-)initialize the model’s weights. Otherwise, continue training.

  • only_size_probing (bool) – If True, train on only two batches to test the memory footprint, especially on GPUs.

  • tqdm_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments passed to tqdm when managing the progress bar.

  • stopper (Optional[Stopper]) – An instance of pykeen.stopper.EarlyStopper with settings for checking whether training should stop early.

  • result_tracker (Optional[ResultTracker]) – The result tracker.

  • sub_batch_size (Optional[int]) – If provided split each batch into sub-batches to avoid memory issues for large models / small GPUs.

  • num_workers (Optional[int]) – The number of child CPU workers used for loading data. If None, data are loaded in the main process.

  • clear_optimizer (bool) – Whether to delete the optimizer instance after training (as the optimizer might have additional memory consumption due to e.g. moments in Adam).

Return type

List[float]

Returns

The losses per epoch.

property triples_factory: pykeen.triples.triples_factory.TriplesFactory

The triples factory in the model.

Return type

TriplesFactory

Local Closed World Assumption

class LCWATrainingLoop(model, optimizer=None)[source]

A training loop that uses the local closed world assumption training approach.

Initialize the training loop.

Parameters
  • model (Model) – The model to train

  • optimizer (Optional[Optimizer]) – The optimizer to use while training the model
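Under the LCWA, each (head, relation) pair is scored against all entities at once, so the training target is a multi-label 0/1 vector marking every tail known to be true. A plain-Python sketch of how such targets can be built, for illustration only (the function name is hypothetical):

```python
from collections import defaultdict

def build_lcwa_targets(triples, num_entities):
    """Group triples by (head, relation) and build a dense 0/1
    target vector over all entities for each group."""
    tails = defaultdict(set)
    for h, r, t in triples:
        tails[(h, r)].add(t)
    return {
        key: [1.0 if e in ts else 0.0 for e in range(num_entities)]
        for key, ts in tails.items()
    }

targets = build_lcwa_targets([(0, 0, 1), (0, 0, 3), (2, 1, 0)], num_entities=4)
# targets[(0, 0)] == [0.0, 1.0, 0.0, 1.0]
```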

Find the maximum batch size for training with the current setting.

This method determines how large the batch size can be for the current model, given the training data and the hardware at hand. On success, it returns the determined batch size together with a boolean value of True, indicating that a batch of this size was successfully evaluated. Otherwise, it returns a batch size of 1 together with False.

Parameters

batch_size (Optional[int]) – The batch size to start the search with. If None, set batch_size=num_triples (i.e. full batch training).

Return type

Tuple[int, bool]

Returns

Tuple containing the maximum possible batch size as well as an indicator if the evaluation with that size was successful.

property device

The device used by the model.

classmethod get_normalized_name()

Get the normalized name of the training loop.

Return type

str

sub_batch_and_slice(batch_size)

Check if sub-batching and/or slicing is necessary to train the model on the hardware at hand.

Return type

Tuple[int, int]

to_embeddingdb(session=None, use_tqdm=False)

Upload to the embedding database.

Parameters
  • session – Optional SQLAlchemy session

  • use_tqdm (bool) – Use tqdm progress bar?

Return type

embeddingdb.sql.models.Collection

train(num_epochs=1, batch_size=None, slice_size=None, label_smoothing=0.0, sampler=None, continue_training=False, only_size_probing=False, use_tqdm=True, use_tqdm_batch=True, tqdm_kwargs=None, stopper=None, result_tracker=None, sub_batch_size=None, num_workers=None, clear_optimizer=False)

Train the KGE model.

Parameters
  • num_epochs (int) – The number of epochs to train the model.

  • batch_size (Optional[int]) – If set, the batch size to use for mini-batch training. Otherwise, the largest possible batch size is determined automatically.

  • slice_size (Optional[int]) – The divisor for the scoring function when using slicing (must be > 0). Slicing is generally only possible for LCWA training loops, and only for models that implement the slicing capability.

  • label_smoothing (float) – The label smoothing factor (0 <= label_smoothing < 1). If larger than zero, label smoothing is applied.

  • sampler (Optional[str]) – The type of sampler to use (None or ‘schlichtkrull’). At the moment, R-GCN under sLCWA is the only user of Schlichtkrull sampling.

  • continue_training (bool) – If set to False, (re-)initialize the model’s weights. Otherwise, continue training.

  • only_size_probing (bool) – If True, train on only two batches to test the memory footprint, especially on GPUs.

  • tqdm_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments passed to tqdm when managing the progress bar.

  • stopper (Optional[Stopper]) – An instance of pykeen.stopper.EarlyStopper with settings for checking whether training should stop early.

  • result_tracker (Optional[ResultTracker]) – The result tracker.

  • sub_batch_size (Optional[int]) – If provided split each batch into sub-batches to avoid memory issues for large models / small GPUs.

  • num_workers (Optional[int]) – The number of child CPU workers used for loading data. If None, data are loaded in the main process.

  • clear_optimizer (bool) – Whether to delete the optimizer instance after training (as the optimizer might have additional memory consumption due to e.g. moments in Adam).

Return type

List[float]

Returns

The losses per epoch.

property triples_factory: pykeen.triples.triples_factory.TriplesFactory

The triples factory in the model.

Return type

TriplesFactory

Base Classes

class TrainingLoop(model, optimizer=None)[source]

A training loop.

Initialize the training loop.

Parameters
  • model (Model) – The model to train

  • optimizer (Optional[Optimizer]) – The optimizer to use while training the model

Find the maximum batch size for training with the current setting.

This method determines how large the batch size can be for the current model, given the training data and the hardware at hand. On success, it returns the determined batch size together with a boolean value of True, indicating that a batch of this size was successfully evaluated. Otherwise, it returns a batch size of 1 together with False.

Parameters

batch_size (Optional[int]) – The batch size to start the search with. If None, set batch_size=num_triples (i.e. full batch training).

Return type

Tuple[int, bool]

Returns

Tuple containing the maximum possible batch size as well as an indicator if the evaluation with that size was successful.

property device

The device used by the model.

classmethod get_normalized_name()[source]

Get the normalized name of the training loop.

Return type

str

sub_batch_and_slice(batch_size)[source]

Check if sub-batching and/or slicing is necessary to train the model on the hardware at hand.

Return type

Tuple[int, int]

to_embeddingdb(session=None, use_tqdm=False)[source]

Upload to the embedding database.

Parameters
  • session – Optional SQLAlchemy session

  • use_tqdm (bool) – Use tqdm progress bar?

Return type

embeddingdb.sql.models.Collection

train(num_epochs=1, batch_size=None, slice_size=None, label_smoothing=0.0, sampler=None, continue_training=False, only_size_probing=False, use_tqdm=True, use_tqdm_batch=True, tqdm_kwargs=None, stopper=None, result_tracker=None, sub_batch_size=None, num_workers=None, clear_optimizer=False)[source]

Train the KGE model.

Parameters
  • num_epochs (int) – The number of epochs to train the model.

  • batch_size (Optional[int]) – If set, the batch size to use for mini-batch training. Otherwise, the largest possible batch size is determined automatically.

  • slice_size (Optional[int]) – The divisor for the scoring function when using slicing (must be > 0). Slicing is generally only possible for LCWA training loops, and only for models that implement the slicing capability.

  • label_smoothing (float) – The label smoothing factor (0 <= label_smoothing < 1). If larger than zero, label smoothing is applied.

  • sampler (Optional[str]) – The type of sampler to use (None or ‘schlichtkrull’). At the moment, R-GCN under sLCWA is the only user of Schlichtkrull sampling.

  • continue_training (bool) – If set to False, (re-)initialize the model’s weights. Otherwise, continue training.

  • only_size_probing (bool) – If True, train on only two batches to test the memory footprint, especially on GPUs.

  • tqdm_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments passed to tqdm when managing the progress bar.

  • stopper (Optional[Stopper]) – An instance of pykeen.stopper.EarlyStopper with settings for checking whether training should stop early.

  • result_tracker (Optional[ResultTracker]) – The result tracker.

  • sub_batch_size (Optional[int]) – If provided split each batch into sub-batches to avoid memory issues for large models / small GPUs.

  • num_workers (Optional[int]) – The number of child CPU workers used for loading data. If None, data are loaded in the main process.

  • clear_optimizer (bool) – Whether to delete the optimizer instance after training (as the optimizer might have additional memory consumption due to e.g. moments in Adam).

Return type

List[float]

Returns

The losses per epoch.

property triples_factory: pykeen.triples.triples_factory.TriplesFactory

The triples factory in the model.

Return type

TriplesFactory

Lookup

get_training_loop_cls(query)[source]

Look up a training loop class by name (case/punctuation insensitive) in pykeen.training.training_loops.

Parameters

query (Union[None, str, Type[TrainingLoop]]) – The name of the training loop (case insensitive, punctuation insensitive).

Return type

Type[TrainingLoop]

Returns

The training loop class
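Case- and punctuation-insensitive lookup can be pictured as normalizing both the query and the registered names before comparing. The following is a hypothetical sketch of that idea; the registry contents and the handling of the omitted suffix are assumptions for illustration, not the pykeen implementation (which registers actual classes, not strings):

```python
def normalize(name):
    """Lower-case and strip non-alphanumeric characters, so that
    'sLCWA', 'slcwa', and 's-lcwa' all map to the same key."""
    return "".join(c for c in name.lower() if c.isalnum())

# Stand-in registry; real lookups would map keys to TrainingLoop classes.
registry = {
    "slcwatrainingloop": "SLCWATrainingLoop",
    "lcwatrainingloop": "LCWATrainingLoop",
}

def lookup(query):
    key = normalize(query)
    if key in registry:
        return registry[key]
    # also accept the short name with the 'TrainingLoop' suffix omitted
    return registry[key + "trainingloop"]

print(lookup("sLCWA"))  # SLCWATrainingLoop
```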