_NewAbstractModel

class _NewAbstractModel(triples_factory, loss=None, loss_kwargs=None, predict_with_sigmoid=False, preferred_device=None, random_seed=None)[source]

Bases: pykeen.models.base.Model, abc.ABC

An abstract class for knowledge graph embedding models (KGEMs).

The only method that needs to be implemented by a given subclass is Model.forward(). In contrast to the fully general torch.nn.Module.forward(), the job of Model.forward() is to take indices for the head, relation, and tail representation(s) and determine a score.

Subclasses of Model are free to decide how entities’ and relations’ representations are stored, how they are looked up, and how they are scored. ERModel provides a commonly useful implementation which allows the specification of one or more entity representations and one or more relation representations in the form of pykeen.nn.Embedding, together with a matching instance of a pykeen.nn.Interaction.
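The subclassing pattern described above can be sketched without any PyKEEN or PyTorch dependency. The classes below (ToyAbstractModel, ToyDistMult) and their list-based "embeddings" are hypothetical stand-ins chosen purely for illustration; the DistMult-style tri-linear interaction plays the role of a pykeen.nn.Interaction:

```python
# Minimal, dependency-free sketch of the pattern: the abstract base declares
# forward(), and a concrete subclass decides how representations are stored,
# looked up, and scored. All names and values here are hypothetical.
from abc import ABC, abstractmethod


class ToyAbstractModel(ABC):
    """Analogue of the abstract model: subclasses only implement forward()."""

    @abstractmethod
    def forward(self, h_index: int, r_index: int, t_index: int) -> float:
        """Score a single (head, relation, tail) triple."""


class ToyDistMult(ToyAbstractModel):
    """Stores entity/relation vectors as plain lists and scores lookups."""

    def __init__(self, entity_vecs, relation_vecs):
        self.entity_vecs = entity_vecs
        self.relation_vecs = relation_vecs

    def forward(self, h_index, r_index, t_index):
        h = self.entity_vecs[h_index]
        r = self.relation_vecs[r_index]
        t = self.entity_vecs[t_index]
        # DistMult-style tri-linear interaction: sum_i h_i * r_i * t_i
        return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))


model = ToyDistMult(
    entity_vecs=[[1.0, 0.0], [0.0, 2.0]],
    relation_vecs=[[1.0, 1.0]],
)
score = model.forward(0, 0, 1)  # 1*1*0 + 0*1*2 = 0.0
```

In the real library, the storage and lookup concerns handled ad hoc here are what pykeen.nn.Embedding and pykeen.nn.Interaction factor out.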

Initialize the module.

Parameters
  • triples_factory (CoreTriplesFactory) – The triples factory facilitates access to the dataset.

  • loss (Union[str, Loss, Type[Loss], None]) – The loss to use. If None is given, use the default loss specific to the model subclass.

  • loss_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments to pass to the loss upon instantiation.

  • predict_with_sigmoid (bool) – Whether to apply sigmoid to the scores when predicting. Applying sigmoid at prediction time may lead to exactly equal scores for triples with very high or very low raw scores. When the model was not trained with sigmoid applied (or using BCEWithLogitsLoss), the scores are not calibrated to perform well with sigmoid.

  • preferred_device (Union[str, device, None]) – The preferred device for model training and inference.

  • random_seed (Optional[int]) – A random seed to use for initialising the model’s weights. Should be set when aiming at reproducibility.

Attributes Summary

regularizer_default

The default regularizer class

regularizer_default_kwargs

The default parameters for the default regularizer class

Methods Summary

collect_regularization_term()

Get the regularization term for the loss function.

forward(h_indices, r_indices, t_indices[, ...])

Forward pass.

post_parameter_update()

Has to be called after each parameter update.

score_h(rt_batch[, slice_size])

Forward pass using left side (head) prediction.

score_hrt(hrt_batch)

Forward pass.

score_r(ht_batch[, slice_size])

Forward pass using middle (relation) prediction.

score_t(hr_batch[, slice_size])

Forward pass using right side (tail) prediction.

Attributes Documentation

regularizer_default: ClassVar[Optional[Type[Regularizer]]] = None

The default regularizer class

regularizer_default_kwargs: ClassVar[Optional[Mapping[str, Any]]] = None

The default parameters for the default regularizer class

Methods Documentation

collect_regularization_term()[source]

Get the regularization term for the loss function.

abstract forward(h_indices, r_indices, t_indices, slice_size=None, slice_dim=None)[source]

Forward pass.

This method takes head, relation and tail indices and calculates the corresponding score.

Note

All indices that are not None have to be either 1-element, of shape (batch_size,), or of shape (batch_size, n), where batch_size has to be the same for all tensors, but n may differ.

Note

If slicing is requested, the corresponding indices have to be None.

Parameters
  • h_indices (Optional[LongTensor]) – The head indices. None indicates to use all.

  • r_indices (Optional[LongTensor]) – The relation indices. None indicates to use all.

  • t_indices (Optional[LongTensor]) – The tail indices. None indicates to use all.

  • slice_size (Optional[int]) – The slice size.

  • slice_dim (Optional[str]) – The dimension along which to slice. From {“h”, “r”, “t”}.

Return type

FloatTensor

Returns

shape: (batch_size, num_heads, num_relations, num_tails) The score for each triple.
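The shape contract above can be illustrated with a dependency-free toy (no PyTorch): indices that are given fix that axis, while None means "score all", expanding the corresponding axis to the full vocabulary. The function toy_forward and the embedding values below are hypothetical, chosen only to show the (batch_size, num_heads, num_relations, num_tails) layout:

```python
# Toy illustration of the 4-D score shape returned by forward().
# Hypothetical DistMult-style embeddings over 3 entities and 2 relations.
num_entities, num_relations, dim = 3, 2, 2
entity_vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
relation_vecs = [[1.0, 1.0], [2.0, 0.0]]


def toy_forward(h_indices, r_indices, t_indices):
    """Return nested lists of shape (batch_size, num_heads, num_relations, num_tails)."""
    batch_size = len(h_indices or r_indices or t_indices)

    def axis(indices, n, b):
        # For batch entry b: either the single given index, or all n indices.
        return [indices[b]] if indices is not None else list(range(n))

    scores = []
    for b in range(batch_size):
        hs = axis(h_indices, num_entities, b)
        rs = axis(r_indices, num_relations, b)
        ts = axis(t_indices, num_entities, b)
        scores.append([[[
            sum(entity_vecs[h][i] * relation_vecs[r][i] * entity_vecs[t][i]
                for i in range(dim))
            for t in ts] for r in rs] for h in hs])
    return scores


# Tail prediction for a batch of two (h, r) pairs:
# resulting shape is (2, 1, 1, num_entities).
out = toy_forward([0, 1], [0, 1], None)
```

Here the head and relation axes have size 1 because those indices were given, while the tail axis spans all entities because t_indices was None.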

post_parameter_update()[source]

Has to be called after each parameter update.

Return type

None

score_h(rt_batch, slice_size=None)[source]

Forward pass using left side (head) prediction.

This method calculates the score for all possible heads for each (relation, tail) pair.

Parameters
  • rt_batch (LongTensor) – shape: (batch_size, 2), dtype: long The indices of (relation, tail) pairs.

  • slice_size (Optional[int]) – The slice size.

Return type

FloatTensor

Returns

shape: (batch_size, num_entities), dtype: float For each r-t pair, the scores for all possible heads.

score_hrt(hrt_batch)[source]

Forward pass.

This method takes head, relation and tail of each triple and calculates the corresponding score.

Parameters

hrt_batch (LongTensor) – shape: (batch_size, 3), dtype: long The indices of (head, relation, tail) triples.

Return type

FloatTensor

Returns

shape: (batch_size, 1), dtype: float The score for each triple.
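The semantics of score_hrt can be sketched with plain Python lists: one scalar score per (head, relation, tail) triple, returned with a trailing dimension of size 1. The function toy_score_hrt and the embedding values are hypothetical, using the same DistMult-style interaction for illustration:

```python
# Hypothetical sketch of score_hrt semantics: score each triple in the batch,
# returning shape (batch_size, 1). Embeddings are made-up toy values.
entity_vecs = [[1.0, 2.0], [3.0, 0.0]]
relation_vecs = [[1.0, 1.0]]


def toy_score_hrt(hrt_batch):
    scores = []
    for h, r, t in hrt_batch:
        s = sum(a * b * c for a, b, c in
                zip(entity_vecs[h], relation_vecs[r], entity_vecs[t]))
        scores.append([s])  # keep the trailing dimension of size 1
    return scores


scores = toy_score_hrt([(0, 0, 1), (1, 0, 0)])  # shape (2, 1)
```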

score_r(ht_batch, slice_size=None)[source]

Forward pass using middle (relation) prediction.

This method calculates the score for all possible relations for each (head, tail) pair.

Parameters
  • ht_batch (LongTensor) – shape: (batch_size, 2), dtype: long The indices of (head, tail) pairs.

  • slice_size (Optional[int]) – The slice size.

Return type

FloatTensor

Returns

shape: (batch_size, num_relations), dtype: float For each h-t pair, the scores for all possible relations.

score_t(hr_batch, slice_size=None)[source]

Forward pass using right side (tail) prediction.

This method calculates the score for all possible tails for each (head, relation) pair.

Parameters
  • hr_batch (LongTensor) – shape: (batch_size, 2), dtype: long The indices of (head, relation) pairs.

  • slice_size (Optional[int]) – The slice size.

Return type

FloatTensor

Returns

shape: (batch_size, num_entities), dtype: float For each h-r pair, the scores for all possible tails.
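The tail-prediction semantics can likewise be sketched without dependencies: for every (head, relation) pair in the batch, all entities are scored as candidate tails, yielding shape (batch_size, num_entities). The function toy_score_t and its embedding values are hypothetical illustrations; score_h is the mirror image with candidate heads:

```python
# Hypothetical sketch of score_t semantics: for each (h, r) pair, score all
# entities as candidate tails. Embeddings are made-up toy values.
entity_vecs = [[1.0, 2.0], [3.0, 0.0]]
relation_vecs = [[1.0, 1.0]]


def toy_score_t(hr_batch):
    scores = []
    for h, r in hr_batch:
        row = []
        for t in range(len(entity_vecs)):  # every entity as candidate tail
            row.append(sum(a * b * c for a, b, c in
                           zip(entity_vecs[h], relation_vecs[r], entity_vecs[t])))
        scores.append(row)
    return scores


tail_scores = toy_score_t([(0, 0), (1, 0)])  # shape (2, num_entities)
```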