CrossEInteraction

class CrossEInteraction(embedding_dim=50, combination_activation=<class 'torch.nn.modules.activation.Tanh'>, combination_activation_kwargs=None, combination_dropout=0.5)[source]

Bases: FunctionalInteraction[FloatTensor, Tuple[FloatTensor, FloatTensor], FloatTensor]

A module wrapper for the CrossE interaction function.

Instantiate the interaction module.

Parameters:
  • embedding_dim (int) – The embedding dimension.

  • combination_activation (Union[str, Module, Type[Module], None]) – The combination activation function.

  • combination_activation_kwargs (Optional[Mapping[str, Any]]) – Additional keyword-based arguments passed to the constructor of the combination activation function (if not already instantiated).

  • combination_dropout (Optional[float]) – An optional dropout applied to the combination.
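
A minimal instantiation sketch using only the parameters documented above; the import path pykeen.nn.modules is an assumption based on typical PyKEEN usage:

    from torch import nn
    from pykeen.nn.modules import CrossEInteraction  # assumed import path

    interaction = CrossEInteraction(
        embedding_dim=50,
        combination_activation=nn.Tanh,      # passed as a class; instantiated internally
        combination_activation_kwargs=None,  # extra kwargs for the activation constructor
        combination_dropout=0.5,             # dropout applied to the combination
    )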

Attributes Summary

relation_shape

The symbolic shapes for relation representations

Methods Summary

func(h, r, c_r, t, bias, activation[, dropout])

Evaluate the interaction function of CrossE for the given representations from [zhang2019b].

Attributes Documentation

relation_shape: Sequence[str] = ('d', 'd')

The symbolic shapes for relation representations. CrossE uses two relation vectors of dimension d: the relation embedding r and the relation-specific interaction vector c_r.

Methods Documentation

func(h, r, c_r, t, bias, activation, dropout=None)

Evaluate the interaction function of CrossE for the given representations from [zhang2019b].

\[Dropout(Activation(c_r \odot h + c_r \odot h \odot r + b))^T t\]

Note

The representations have to be in a broadcastable shape.

Note

The CrossE paper describes an additional sigmoid activation as part of the interaction function. Since using a log-likelihood loss can cause numerical problems (due to explicitly calling sigmoid before log), we do not apply it in our implementation but rather opt for the numerically stable variant. However, the model itself has an option predict_with_sigmoid, which can be used to enforce the application of sigmoid during inference. This can also have an impact on rank-based evaluation, since limited numerical precision can lead to exactly equal scores for multiple choices. The definition of a rank is ambiguous in such cases, and there are multiple competing variants of how to break ties. More information on this can be found in the documentation of rank-based evaluation.
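
A hedged sketch of enabling the predict_with_sigmoid option mentioned above; it assumes the pykeen.models.CrossE model forwards this keyword argument to the base model, and training_factory is a placeholder for a triples factory of your dataset:

    from pykeen.models import CrossE  # assumed import path

    model = CrossE(
        triples_factory=training_factory,  # hypothetical: your training triples factory
        embedding_dim=50,
        predict_with_sigmoid=True,         # apply sigmoid only during inference
    )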

Parameters:
  • h (FloatTensor) – shape: (*batch_dims, dim) The head representations.

  • r (FloatTensor) – shape: (*batch_dims, dim) The relation representations.

  • c_r (FloatTensor) – shape: (*batch_dims, dim) The relation-specific interaction vector.

  • t (FloatTensor) – shape: (*batch_dims, dim) The tail representations.

  • bias (FloatTensor) – shape: (dim,) The combination bias.

  • activation (Module) – The combination activation. Should be torch.nn.Tanh for consistency with the CrossE paper.

  • dropout (Optional[Dropout]) – Dropout applied after the combination.

Return type:

FloatTensor

Returns:

shape: batch_dims The scores.
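
The following is a minimal sketch of the documented formula using plain torch operations; it is not the library implementation, and it assumes broadcastable inputs as described above:

    import torch
    from torch import nn

    def cross_e_score(h, r, c_r, t, bias, activation, dropout=None):
        # combination: Activation(c_r * h + c_r * h * r + b)
        x = activation(c_r * h + c_r * h * r + bias)
        if dropout is not None:
            x = dropout(x)
        # the transposed dot product with t is a sum over the embedding dimension
        return (x * t).sum(dim=-1)

    # usage with random representations of dimension 50
    dim = 50
    h, r, c_r, t = (torch.rand(2, dim) for _ in range(4))
    bias = torch.zeros(dim)
    scores = cross_e_score(h, r, c_r, t, bias, nn.Tanh(), nn.Dropout(0.5))
    print(scores.shape)  # torch.Size([2])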