SEInteraction

class SEInteraction(p: int, power_norm: bool = False)[source]

Bases: NormBasedInteraction[Tensor, tuple[Tensor, Tensor], Tensor]

The Structured Embedding (SE) interaction function.

SE applies role- and relation-specific projection matrices \(\textbf{M}_{r}^{h}, \textbf{M}_{r}^{t} \in \mathbb{R}^{d \times d}\) to the head and tail entities’ representations \(\mathbf{h}, \mathbf{t} \in \mathbb{R}^d\) before computing their distance.

\[f(\textbf{h}, (\textbf{M}_{r}^{h}, \textbf{M}_{r}^{t}), \textbf{t}) = -\|\textbf{M}_{r}^{h} \textbf{h} - \textbf{M}_{r}^{t} \textbf{t}\|_p\]
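
For illustration, a minimal PyTorch sketch of this score for a single triple; the names h, t, m_h, m_t and the dimension d = 3 are illustrative only and not part of the PyKEEN API:

import torch

d, p = 3, 2
h = torch.randn(d)        # head representation h
t = torch.randn(d)        # tail representation t
m_h = torch.randn(d, d)   # relation-specific head projection M_r^h
m_t = torch.randn(d, d)   # relation-specific tail projection M_r^t

# f(h, (M_r^h, M_r^t), t) = -|| M_r^h h - M_r^t t ||_p
score = -torch.linalg.vector_norm(m_h @ h - m_t @ t, ord=p)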

Initialize the norm-based interaction function.

Parameters:
  • p (int) – The norm used with torch.linalg.vector_norm(). Typically 1 or 2.

  • power_norm (bool) – Whether to use the p-th power of the \(L_p\) norm instead of the norm itself, which has the advantage of being differentiable around 0 and numerically more stable.
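
A hedged usage sketch, assuming pykeen is installed and exposes this class as pykeen.nn.modules.SEInteraction:

from pykeen.nn.modules import SEInteraction

interaction = SEInteraction(p=2)               # score: -||M_r^h h - M_r^t t||_2
squared = SEInteraction(p=2, power_norm=True)  # score: -||M_r^h h - M_r^t t||_2^2
print(interaction.relation_shape)              # ('dd', 'dd')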

Attributes Summary

relation_shape

The symbolic shapes for relation representations

Methods Summary

forward(h, r, t)

Evaluate the interaction function.

Attributes Documentation

relation_shape: Sequence[str] = ('dd', 'dd')

The symbolic shapes for relation representations

Methods Documentation

forward(h: Tensor, r: tuple[Tensor, Tensor], t: Tensor) → Tensor[source]

Evaluate the interaction function.

See also

Interaction.forward for a detailed description about the generic batched form of the interaction function.

Parameters:
  • h (Tensor) – shape: (*batch_dims, d) The head representations.

  • r (tuple[Tensor, Tensor]) – shapes: (*batch_dims, d, d) and (*batch_dims, d, d). The relation representations, i.e. the head and tail projection matrices.

  • t (Tensor) – shape: (*batch_dims, d) The tail representations.

Returns:

shape: batch_dims The scores.

Return type:

Tensor
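
A hedged example of a batched call, assuming pykeen is installed; the batch dimensions (2, 3) and d = 4 are illustrative:

import torch
from pykeen.nn.modules import SEInteraction

interaction = SEInteraction(p=2)
h = torch.randn(2, 3, 4)                  # (*batch_dims, d)
r = (
    torch.randn(2, 3, 4, 4),              # M_r^h: (*batch_dims, d, d)
    torch.randn(2, 3, 4, 4),              # M_r^t: (*batch_dims, d, d)
)
t = torch.randn(2, 3, 4)                  # (*batch_dims, d)
scores = interaction(h=h, r=r, t=t)       # shape: (2, 3)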