SE

class SE(*, embedding_dim: int = 50, scoring_fct_norm: int = 1, power_norm: bool = False, entity_initializer: str | Callable[[torch.Tensor], torch.Tensor] | None = xavier_uniform_, entity_constrainer: str | Callable[[torch.Tensor], torch.Tensor] | None = normalize, entity_constrainer_kwargs: Mapping[str, Any] | None = None, relation_initializer: str | Callable[[torch.Tensor], torch.Tensor] | None = <pykeen.utils.compose object>, **kwargs)[source]

Bases: ERModel[Tensor, tuple[Tensor, Tensor], Tensor]

An implementation of the Structured Embedding (SE) model published in [bordes2011].

This model represents entities as \(d\)-dimensional vectors, and relations by two projection matrices \(\textbf{M}_{r}^{h}, \textbf{M}_{r}^{t} \in \mathbb{R}^{d \times d}\) for the head and tail roles, respectively. They are stored in an Embedding matrix. The representations are then passed to the SEInteraction function to obtain scores.
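The score of a triple \((h, r, t)\) is the negative distance between the two projections, \(f(h, r, t) = -\|\textbf{M}_{r}^{h} \textbf{h} - \textbf{M}_{r}^{t} \textbf{t}\|_{p}\), where \(p\) corresponds to scoring_fct_norm. The following stand-alone sketch illustrates this computation with plain torch; the function se_score and its signature are illustrative only and are not part of the library API.

import torch

def se_score(h: torch.Tensor, m_r_h: torch.Tensor, m_r_t: torch.Tensor,
             t: torch.Tensor, p: int = 1, power_norm: bool = False) -> torch.Tensor:
    # Project head and tail with their relation-specific matrices and
    # return the negative l_p distance between the projections.
    diff = m_r_h @ h - m_r_t @ t
    if power_norm:
        # Optionally use the p-th power of the norm (cf. power_norm=True).
        return -diff.abs().pow(p).sum()
    return -torch.linalg.vector_norm(diff, ord=p)

d = 50
h, t = torch.randn(d), torch.randn(d)
m_r_h, m_r_t = torch.randn(d, d), torch.randn(d, d)
score = se_score(h, m_r_h, m_r_t, t, p=1)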

Initialize SE.

Parameters:

embedding_dim (int) – The entity embedding dimension \(d\). Defaults to 50.
scoring_fct_norm (int) – The \(p\) of the \(l_p\) norm applied by the interaction function. Defaults to 1.
power_norm (bool) – Whether to use the \(p\)-th power of the norm instead of the norm itself. Defaults to False.
entity_initializer (str | Callable[[Tensor], Tensor] | None) – The initializer for the entity embeddings. Defaults to xavier_uniform_.
entity_constrainer (str | Callable[[Tensor], Tensor] | None) – The constrainer applied to the entity embeddings. Defaults to normalize.
entity_constrainer_kwargs (Mapping[str, Any] | None) – Additional keyword-based arguments passed to the entity constrainer.
relation_initializer (str | Callable[[Tensor], Tensor] | None) – The initializer for the relation projection matrices.
kwargs – Additional keyword-based arguments passed to ERModel.__init__.
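As a usage sketch, the model can be trained through the standard pipeline; the dataset, epoch count, and output directory below are placeholders, not recommendations.

from pykeen.pipeline import pipeline

# Train SE on a built-in benchmark dataset (illustrative settings only).
result = pipeline(
    model="SE",
    dataset="Nations",
    model_kwargs=dict(embedding_dim=50, scoring_fct_norm=1),
    training_kwargs=dict(num_epochs=5),
)
result.save_to_directory("se_nations")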

Attributes Summary

hpo_default

The default strategy for optimizing the model's hyper-parameters

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': <class 'int'>}, 'scoring_fct_norm': {'high': 2, 'low': 1, 'type': <class 'int'>}}

The default strategy for optimizing the model’s hyper-parameters
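These ranges are picked up automatically by PyKEEN's HPO pipeline when no explicit search space is given. In the sketch below, the dataset, trial count, and output directory are placeholders.

from pykeen.hpo import hpo_pipeline

# Search over SE's default HPO ranges for embedding_dim and scoring_fct_norm.
hpo_result = hpo_pipeline(
    model="SE",
    dataset="Nations",
    n_trials=10,
)
hpo_result.save_to_directory("se_hpo")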