pipeline
- pipeline(*, dataset: None | str | Dataset | type[Dataset] = None, dataset_kwargs: Mapping[str, Any] | None = None, training: str | CoreTriplesFactory | None = None, testing: str | CoreTriplesFactory | None = None, validation: str | CoreTriplesFactory | None = None, evaluation_entity_whitelist: Collection[str] | None = None, evaluation_relation_whitelist: Collection[str] | None = None, model: None | str | Model | type[Model] = None, model_kwargs: Mapping[str, Any] | None = None, interaction: None | str | Interaction | type[Interaction] = None, interaction_kwargs: Mapping[str, Any] | None = None, dimensions: None | int | Mapping[str, int] = None, loss: str | type[Loss] | None = None, loss_kwargs: Mapping[str, Any] | None = None, regularizer: str | type[Regularizer] | None = None, regularizer_kwargs: Mapping[str, Any] | None = None, optimizer: str | type[Optimizer] | None = None, optimizer_kwargs: Mapping[str, Any] | None = None, clear_optimizer: bool = True, lr_scheduler: str | type[LRScheduler] | None = None, lr_scheduler_kwargs: Mapping[str, Any] | None = None, training_loop: str | type[TrainingLoop] | None = None, training_loop_kwargs: Mapping[str, Any] | None = None, negative_sampler: str | type[NegativeSampler] | None = None, negative_sampler_kwargs: Mapping[str, Any] | None = None, epochs: int | None = None, training_kwargs: Mapping[str, Any] | None = None, stopper: str | type[Stopper] | None = None, stopper_kwargs: Mapping[str, Any] | None = None, evaluator: str | type[Evaluator] | None = None, evaluator_kwargs: Mapping[str, Any] | None = None, evaluation_kwargs: Mapping[str, Any] | None = None, result_tracker: str | ResultTracker | type[ResultTracker] | None | Sequence[str | ResultTracker | type[ResultTracker] | None] = None, result_tracker_kwargs: Mapping[str, Any] | None | Sequence[Mapping[str, Any] | None] = None, metadata: dict[str, Any] | None = None, device: str | device | None = None, random_seed: int | None = None, use_testing_data: bool = True, evaluation_fallback: bool = False, filter_validation_when_testing: bool = True, use_tqdm: bool | None = None) → PipelineResult [source]
Train and evaluate a model.
- Parameters:
dataset (None | str | Dataset | type[Dataset]) – The name of the dataset (a key for the pykeen.datasets.dataset_resolver) or a pykeen.datasets.Dataset instance. Alternatively, the training triples factory (training), testing triples factory (testing), and validation triples factory (validation; optional) can be specified.
dataset_kwargs (Mapping[str, Any] | None) – The keyword arguments passed to the dataset upon instantiation
training (str | CoreTriplesFactory | None) – A triples factory with training instances or a path to the training file if a dataset was not specified
testing (str | CoreTriplesFactory | None) – A triples factory with test instances or a path to the test file if a dataset was not specified
validation (str | CoreTriplesFactory | None) – A triples factory with validation instances or a path to the validation file if a dataset was not specified
evaluation_entity_whitelist (Collection[str] | None) – Optional restriction of evaluation to triples containing only these entities. Useful if the downstream task is only interested in certain entities, but the relational patterns with other entities improve the entity embedding quality.
evaluation_relation_whitelist (Collection[str] | None) – Optional restriction of evaluation to triples containing only these relations. Useful if the downstream task is only interested in certain relations, but the relational patterns with other relations improve the entity embedding quality.
model (None | str | Model | type[Model]) – The name of the model, a subclass of pykeen.models.Model, or an instance of pykeen.models.Model. Can be given as None if the interaction keyword is used.
model_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the model class on instantiation
interaction (None | str | Interaction | type[Interaction]) – The name of the interaction class, a subclass of pykeen.nn.modules.Interaction, or an instance of pykeen.nn.modules.Interaction. Cannot be given when there is also a model.
interaction_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass during instantiation of the interaction class. Only use with interaction.
dimensions (None | int | Mapping[str, int]) – Dimensions to assign to the embeddings of the interaction. Only use with interaction.
loss (str | type[Loss] | None) – The name of the loss or the loss class.
loss_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the loss on instantiation
regularizer (str | type[Regularizer] | None) – The name of the regularizer or the regularizer class.
regularizer_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the regularizer on instantiation
optimizer (str | type[Optimizer] | None) – The name of the optimizer or the optimizer class. Defaults to torch.optim.Adagrad.
optimizer_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the optimizer on instantiation
clear_optimizer (bool) – Whether to delete the optimizer instance after training. Since the optimizer may consume additional memory (e.g., for the moments in Adam), deleting it is the default. If you want to continue training afterwards, set this to False, since the optimizer's internal state would otherwise be lost.
lr_scheduler (str | type[LRScheduler] | None) – The name of the lr_scheduler or the lr_scheduler class. Defaults to torch.optim.lr_scheduler.ExponentialLR.
lr_scheduler_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the lr_scheduler on instantiation
training_loop (str | type[TrainingLoop] | None) – The name of the training loop's training approach ('slcwa' or 'lcwa') or the training loop class. Defaults to pykeen.training.SLCWATrainingLoop.
training_loop_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the training loop on instantiation
negative_sampler (str | type[NegativeSampler] | None) – The name of the negative sampler ('basic' or 'bernoulli') or the negative sampler class. Only allowed when training with the sLCWA. Defaults to pykeen.sampling.BasicNegativeSampler.
negative_sampler_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the negative sampler class on instantiation
epochs (int | None) – A shortcut for setting the num_epochs key in the training_kwargs dict.
training_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the training loop's train function on call
stopper (str | type[Stopper] | None) – What kind of stopping to use. Defaults to no stopping; can be set to 'early'.
stopper_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the stopper upon instantiation.
evaluator (str | type[Evaluator] | None) – The name of the evaluator or an evaluator class. Defaults to pykeen.evaluation.RankBasedEvaluator.
evaluator_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the evaluator on instantiation
evaluation_kwargs (Mapping[str, Any] | None) – Keyword arguments to pass to the evaluator’s evaluate function on call
result_tracker (str | ResultTracker | type[ResultTracker] | None | Sequence[str | ResultTracker | type[ResultTracker] | None]) – Either none (will result in a Python result tracker), a single tracker (as either a class, instance, or string for class name), or a list of trackers (as either a class, instance, or string for class name).
result_tracker_kwargs (Mapping[str, Any] | None | Sequence[Mapping[str, Any] | None]) – Either none (will use all defaults), a single dictionary (will be used for all trackers), or a list of dictionaries with the same length as the result trackers
metadata (dict[str, Any] | None) – A JSON dictionary to store with the experiment
use_testing_data (bool) – If true, use the testing triples for evaluation; otherwise, use the validation triples. Defaults to true.
device (str | device | None) – The device or device name to run on. If none is given, the device will be looked up with pykeen.utils.resolve_device().
random_seed (int | None) – The random seed to use. If none is specified, one will be assigned before any code is run, for reproducibility purposes. In the returned PipelineResult instance, it can be accessed through PipelineResult.random_seed.
evaluation_fallback (bool) – If true, and evaluation fails on the GPU, fall back to a smaller batch size, and, as a last resort, evaluate on the CPU if even the smallest possible batch size is too big for the GPU.
filter_validation_when_testing (bool) – If true, during evaluation on the test dataset, validation triples are added to the set of known positive triples, which are filtered out when performing filtered evaluation following the approach described by [bordes2013]. This should be explicitly set to false only in the scenario where you train a single model with the pipeline and evaluate on the testing set, but never use the validation set for optimization at all. Since this is a very atypical scenario, it defaults to true to promote comparability with previous publications.
use_tqdm (bool | None) – Globally set the usage of tqdm progress bars. Typically more useful to set to false, since the training loop and evaluation have it turned on by default.
- Returns:
A pipeline result package.
- Return type:
PipelineResult