SE

class SE(*, embedding_dim: int = 50, scoring_fct_norm: int = 1, power_norm: bool = False, entity_initializer: str | Callable[[Tensor], Tensor] | None = xavier_uniform_, entity_constrainer: str | Callable[[Tensor], Tensor] | None = normalize, entity_constrainer_kwargs: Mapping[str, Any] | None = None, relation_initializer: str | Callable[[Tensor], Tensor] | None = xavier_uniform_norm_, **kwargs)

Bases: ERModel[Tensor, tuple[Tensor, Tensor], Tensor]

An implementation of the Structured Embedding (SE) model published by [bordes2011].

This model represents entities as \(d\)-dimensional vectors, and relations by two projection matrices \(\textbf{M}_{r}^{h}, \textbf{M}_{r}^{t} \in \mathbb{R}^{d \times d}\) for the head and tail roles, respectively. All representations are stored in Embedding modules. The representations are then passed to the SEInteraction function to obtain scores.
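
Concretely, following [bordes2011], the plausibility of a triple \((h, r, t)\) is scored as \(f(h, r, t) = -\|\textbf{M}_{r}^{h} \textbf{h} - \textbf{M}_{r}^{t} \textbf{t}\|_{p}\), where the order \(p\) is given by scoring_fct_norm (and the \(p\)-th power of the norm is used instead when power_norm is set).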

Initialize SE.

Parameters:
  • embedding_dim (int) – The entity embedding dimension \(d\). Usually \(d \in [50, 300]\).

  • scoring_fct_norm (int) – The norm used with torch.linalg.vector_norm(). Typically 1 or 2.

  • power_norm (bool) – Whether to use the \(p\)-th power of the \(L_p\) norm. Doing so is differentiable around 0 and numerically more stable.

  • entity_initializer (str | Callable[[Tensor], Tensor] | None) – Entity initializer function. Defaults to pykeen.nn.init.xavier_uniform_().

  • entity_constrainer (str | Callable[[Tensor], Tensor] | None) – Entity constrainer function. Defaults to torch.nn.functional.normalize().

  • entity_constrainer_kwargs (Mapping[str, Any] | None) – Keyword arguments to be used when calling the entity constrainer.

  • relation_initializer (str | Callable[[Tensor], Tensor] | None) – Relation initializer function. Defaults to pykeen.nn.init.xavier_uniform_norm_().

  • kwargs – Remaining keyword arguments forwarded to ERModel.
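
A minimal usage sketch (assuming the standard pykeen.pipeline.pipeline entry point; the dataset name and epoch count are illustrative only):

    from pykeen.pipeline import pipeline

    # Train SE end-to-end; "Nations" is a small built-in benchmark, chosen here only as an example.
    result = pipeline(
        model="SE",
        dataset="Nations",
        model_kwargs=dict(
            embedding_dim=50,    # entity embedding dimension d
            scoring_fct_norm=1,  # norm order p used by the interaction
        ),
        training_kwargs=dict(num_epochs=5),
    )
    result.save_to_directory("se_nations")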

Attributes Summary

hpo_default

The default strategy for optimizing the model's hyper-parameters.

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': int}, 'scoring_fct_norm': {'high': 2, 'low': 1, 'type': int}}

The default strategy for optimizing the model's hyper-parameters.
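
These defaults define the search space used when hyper-parameter optimization is run without explicit ranges. A minimal sketch using pykeen.hpo.hpo_pipeline (dataset and trial count are illustrative assumptions):

    from pykeen.hpo import hpo_pipeline

    # embedding_dim and scoring_fct_norm are sampled from hpo_default
    # unless overridden (e.g. via model_kwargs or model_kwargs_ranges).
    hpo_result = hpo_pipeline(
        model="SE",
        dataset="Nations",
        n_trials=10,
    )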