ERModel

class ERModel(*, triples_factory, interaction, interaction_kwargs=None, entity_representations=None, entity_representations_kwargs=None, relation_representations=None, relation_representations_kwargs=None, skip_checks=False, **kwargs)[source]

Bases: Generic[HeadRepresentation, RelationRepresentation, TailRepresentation], _NewAbstractModel

A commonly useful base for KGEMs using embeddings and interaction modules.

This model does not use post-init hooks to automatically initialize all of its parameters. Rather, the call to Model.reset_parameters_() happens at the end of ERModel.__init__. This is possible because all trainable parameters must be passed through super().__init__() in subclasses of ERModel.

Other code can still be put after the call to super().__init__() in subclasses, such as registering regularizers (as done in pykeen.models.ConvKB and pykeen.models.TransH).
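For example, a new interaction-based model can be defined by subclassing ERModel and routing the interaction module and representation settings through super().__init__(). The following is a minimal sketch; the class name MyDistMult, the use of DistMultInteraction, and the default embedding_dim are illustrative choices rather than part of the API:

    from pykeen.models import ERModel
    from pykeen.nn.modules import DistMultInteraction

    class MyDistMult(ERModel):
        """A minimal DistMult-style model built on ERModel (illustrative sketch)."""
        def __init__(self, *, embedding_dim: int = 50, **kwargs):
            # All trainable parameters are created inside super().__init__(),
            # which calls reset_parameters_() at the end; no post-init hook is needed.
            super().__init__(
                interaction=DistMultInteraction(),
                # assumption: the default Embedding representation is configured
                # via these keyword arguments
                entity_representations_kwargs=dict(embedding_dim=embedding_dim),
                relation_representations_kwargs=dict(embedding_dim=embedding_dim),
                **kwargs,
            )

A model defined this way can then be used like any built-in model, e.g., passed as the model argument to pykeen.pipeline.pipeline().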

Initialize the module.

Parameters:
  • triples_factory – The triples factory, which provides the numbers of entities and relations.
  • interaction – The interaction module, or a hint (name or class) to resolve it.
  • interaction_kwargs – Additional keyword-based arguments passed to the interaction module if it is resolved from a hint.
  • entity_representations – The entity representation(s), or hints thereof.
  • entity_representations_kwargs – Additional keyword-based arguments for the entity representation(s).
  • relation_representations – The relation representation(s), or hints thereof.
  • relation_representations_kwargs – Additional keyword-based arguments for the relation representation(s).
  • skip_checks – Whether to skip the consistency checks between the representations and the interaction module.
  • kwargs – Additional keyword-based arguments passed to the base Model, e.g., the loss.

Methods Summary

append_weight_regularizer(parameter, regularizer)

Add a model weight to a regularizer's weight list, and register the regularizer with the model.

forward(h_indices, r_indices, t_indices[, ...])

Forward pass.

score_h(rt_batch, *[, slice_size, mode, heads])

Forward pass using left side (head) prediction.

score_hrt(hrt_batch, *[, mode])

Forward pass.

score_r(ht_batch, *[, slice_size, mode, ...])

Forward pass using middle (relation) prediction.

score_t(hr_batch, *[, slice_size, mode, tails])

Forward pass using right side (tail) prediction.

Methods Documentation

append_weight_regularizer(parameter, regularizer, regularizer_kwargs=None, default_regularizer=None, default_regularizer_kwargs=None)[source]

Add a model weight to a regularizer’s weight list, and register the regularizer with the model.

Parameters:
  • parameter – The parameter(s), or parameter name(s), whose weights should be added to the regularizer.
  • regularizer – The regularizer, or a hint (name or class) to resolve it.
  • regularizer_kwargs – Additional keyword-based arguments used when the regularizer is resolved from a hint.
  • default_regularizer – A default regularizer (or hint) used as a fallback when regularizer is None.
  • default_regularizer_kwargs – Additional keyword-based arguments for the default regularizer.

Raises:

KeyError – If an invalid parameter name was given.

Return type:

None
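A hedged sketch of the pattern mentioned above: registering a regularizer on the relation embeddings after the call to super().__init__(). The class name, the choice of LpRegularizer, and the indexing into relation_representations are assumptions for illustration only:

    from pykeen.models import ERModel
    from pykeen.nn.modules import DistMultInteraction
    from pykeen.regularizers import LpRegularizer

    class RegularizedDistMult(ERModel):
        """Illustrative sketch: a DistMult-style model with regularized relation embeddings."""
        def __init__(self, *, embedding_dim: int = 50, **kwargs):
            super().__init__(
                interaction=DistMultInteraction(),
                entity_representations_kwargs=dict(embedding_dim=embedding_dim),
                relation_representations_kwargs=dict(embedding_dim=embedding_dim),
                **kwargs,
            )
            # After super().__init__() the representations exist, so their weights
            # can be added to a regularizer, which is also registered with the model.
            self.append_weight_regularizer(
                parameter=self.relation_representations[0].parameters(),
                regularizer=LpRegularizer(p=2.0),
            )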

forward(h_indices, r_indices, t_indices, slice_size=None, slice_dim=0, *, mode)[source]

Forward pass.

This method takes head, relation and tail indices and calculates the corresponding scores. It supports broadcasting.

Parameters:
  • h_indices (LongTensor) – The head indices.

  • r_indices (LongTensor) – The relation indices.

  • t_indices (LongTensor) – The tail indices.

  • slice_size (Optional[int]) – The slice size.

  • slice_dim (int) – The dimension along which to slice.

  • mode (Optional[Literal[‘training’, ‘validation’, ‘testing’]]) – The pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting.

Return type:

FloatTensor

Returns:

The scores.

Raises:

NotImplementedError – If score repetition becomes necessary.
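A minimal sketch of calling the model (and thereby forward()) with broadcastable index tensors. DistMult on the bundled Nations dataset is used only as a convenient stand-in for any ERModel, and embedding_dim=16 is arbitrary:

    import torch
    from pykeen.datasets import Nations
    from pykeen.models import DistMult

    training = Nations().training
    model = DistMult(triples_factory=training, embedding_dim=16)

    # Index tensors of shapes (2, 1), (2, 1), and (1, 3) broadcast to a (2, 3) score grid.
    h = torch.as_tensor([[0], [1]])
    r = torch.as_tensor([[0], [1]])
    t = torch.as_tensor([[0, 1, 2]])
    scores = model(h, r, t, mode=None)  # mode=None selects the transductive setting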

score_h(rt_batch, *, slice_size=None, mode=None, heads=None)[source]

Forward pass using left side (head) prediction.

This method calculates the score for all possible heads for each (relation, tail) pair.

Parameters:
  • rt_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (relation, tail) pairs.

  • slice_size (Optional[int]) – The divisor for the scoring function when using slicing (must be > 0).

  • mode (Optional[Literal[‘training’, ‘validation’, ‘testing’]]) – The pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting.

  • heads (Optional[LongTensor]) – shape: (num_heads,) | (batch_size, num_heads). Head entity indices to score against. If None, scores against all entities (from the given mode).

Return type:

FloatTensor

Returns:

shape: (batch_size, num_heads), dtype: float. For each r-t pair, the scores for all possible heads.
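For instance, head prediction over a batch of (relation, tail) pairs, optionally restricted to a candidate set via heads. DistMult on Nations again serves only as an example ERModel:

    import torch
    from pykeen.datasets import Nations
    from pykeen.models import DistMult

    training = Nations().training
    model = DistMult(triples_factory=training, embedding_dim=16)

    rt_batch = training.mapped_triples[:4, 1:]   # shape: (4, 2), (relation, tail) pairs
    all_heads = model.score_h(rt_batch)          # shape: (4, num_entities)
    some_heads = model.score_h(rt_batch, heads=torch.as_tensor([0, 1, 2]))  # shape: (4, 3)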

score_hrt(hrt_batch, *, mode=None)[source]

Forward pass.

This method takes head, relation and tail of each triple and calculates the corresponding score.

Parameters:
  • hrt_batch (LongTensor) – shape: (batch_size, 3), dtype: long. The indices of (head, relation, tail) triples.

  • mode (Optional[Literal[‘training’, ‘validation’, ‘testing’]]) – The pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting.

Return type:

FloatTensor

Returns:

shape: (batch_size, 1), dtype: float. The score for each triple.
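A short usage sketch, scoring a batch of known triples; the model and dataset are illustrative:

    from pykeen.datasets import Nations
    from pykeen.models import DistMult

    training = Nations().training
    model = DistMult(triples_factory=training, embedding_dim=16)

    hrt_batch = training.mapped_triples[:8]  # shape: (8, 3), (head, relation, tail) indices
    scores = model.score_hrt(hrt_batch)      # shape: (8, 1), one score per triple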

score_r(ht_batch, *, slice_size=None, mode=None, relations=None)[source]

Forward pass using middle (relation) prediction.

This method calculates the score for all possible relations for each (head, tail) pair.

Parameters:
  • ht_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (head, tail) pairs.

  • slice_size (Optional[int]) – The divisor for the scoring function when using slicing (must be > 0).

  • mode (Optional[Literal[‘training’, ‘validation’, ‘testing’]]) – The pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting.

  • relations (Optional[LongTensor]) – shape: (num_relations,) | (batch_size, num_relations). Relation indices to score against. If None, scores against all relations (from the given mode).

Return type:

FloatTensor

Returns:

shape: (batch_size, num_real_relations), dtype: float. For each h-t pair, the scores for all possible relations.
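For example, relation prediction for a batch of (head, tail) pairs, with the same illustrative model and dataset as above:

    from pykeen.datasets import Nations
    from pykeen.models import DistMult

    training = Nations().training
    model = DistMult(triples_factory=training, embedding_dim=16)

    ht_batch = training.mapped_triples[:4][:, [0, 2]]  # shape: (4, 2), (head, tail) pairs
    scores = model.score_r(ht_batch)                   # shape: (4, num_relations)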

score_t(hr_batch, *, slice_size=None, mode=None, tails=None)[source]

Forward pass using right side (tail) prediction.

This method calculates the score for all possible tails for each (head, relation) pair.

Parameters:
  • hr_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (head, relation) pairs.

  • slice_size (Optional[int]) – The divisor for the scoring function when using slicing (must be > 0).

  • mode (Optional[Literal[‘training’, ‘validation’, ‘testing’]]) – The pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting.

  • tails (Optional[LongTensor]) – shape: (num_tails,) | (batch_size, num_tails). Tail entity indices to score against. If None, scores against all entities (from the given mode).

Return type:

FloatTensor

Returns:

shape: (batch_size, num_tails), dtype: float. For each h-r pair, the scores for all possible tails.
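Finally, tail prediction for a batch of (head, relation) pairs. slice_size is shown as a way to trade speed for memory when scoring against all entities; the model, dataset, and the particular slice size are illustrative assumptions:

    from pykeen.datasets import Nations
    from pykeen.models import DistMult

    training = Nations().training
    model = DistMult(triples_factory=training, embedding_dim=16)

    hr_batch = training.mapped_triples[:4, :2]      # shape: (4, 2), (head, relation) pairs
    scores = model.score_t(hr_batch)                # shape: (4, num_entities)
    sliced = model.score_t(hr_batch, slice_size=7)  # same scores, computed in chunks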