TransR
- class TransR(*, embedding_dim: int = 50, relation_dim: int = 30, max_projection_norm: float = 1.0, scoring_fct_norm: int = 1, power_norm: bool = False, entity_initializer: str | Callable[[Tensor], Tensor] | None = xavier_uniform_, entity_initializer_kwargs: Mapping[str, Any] | None = None, entity_constrainer: str | Callable[[Tensor], Tensor] | None = clamp_norm, relation_initializer: str | Callable[[Tensor], Tensor] | None = <pykeen.utils.compose object>, relation_initializer_kwargs: Mapping[str, Any] | None = None, relation_constrainer: str | Callable[[Tensor], Tensor] | None = clamp_norm, relation_projection_initializer: str | Callable[[Tensor], Tensor] | None = xavier_uniform_, relation_projection_initializer_kwargs: Mapping[str, Any] | None = None, **kwargs)[source]
Bases: ERModel[Tensor, tuple[Tensor, Tensor], Tensor]

An implementation of TransR from [lin2015].
This model represents entities as \(d\)-dimensional vectors and relations as \(k\)-dimensional vectors. To bring them into the same vector space, a relation-specific projection matrix is also learned. All representations are stored in Embedding matrices.

The representations are then passed to the TransRInteraction function to obtain scores.

The following constraints are applied:
\(\|\textbf{e}_h\|_2 \leq 1\)
\(\|\textbf{r}_r\|_2 \leq 1\)
\(\|\textbf{e}_t\|_2 \leq 1\)
as well as inside the TransRInteraction:
\(\|\textbf{M}_{r}\textbf{e}_h\|_2 \leq 1\)
\(\|\textbf{M}_{r}\textbf{e}_t\|_2 \leq 1\)
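The projected-norm constraints above can be made concrete with a rough sketch in plain PyTorch. This is an illustrative re-implementation, not PyKEEN's actual TransRInteraction; the helper names and shapes are assumptions:

```python
import torch


def clamp_norm(x: torch.Tensor, maxnorm: float = 1.0, p: int = 2) -> torch.Tensor:
    """Rescale x so its L_p norm is at most maxnorm (cf. pykeen.utils.clamp_norm)."""
    norm = x.norm(p=p)
    return x * torch.clamp(maxnorm / norm, max=1.0)


def transr_score(h, r, t, m_r, p: int = 1, power_norm: bool = False):
    """Sketch of the TransR interaction: -||M_r h + r - M_r t||_p.

    h, t: entity vectors of shape (d,); r: relation vector of shape (k,);
    m_r: relation-specific projection matrix of shape (d, k).
    """
    # Project d-dimensional entities into the k-dimensional relation space,
    # then clamp the projected vectors to the unit ball, as required above.
    h_p = clamp_norm(h @ m_r)
    t_p = clamp_norm(t @ m_r)
    d = torch.linalg.vector_norm(h_p + r - t_p, ord=p)
    # power_norm replaces the norm with its p-th power, which is
    # differentiable around 0 and numerically more stable.
    return -(d ** p) if power_norm else -d
```

Higher (less negative) scores indicate more plausible triples; `scoring_fct_norm` corresponds to `p` here.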
Initialize the model.
- Parameters:
embedding_dim (int) – The entity embedding dimension \(d\).
relation_dim (int) – The relation embedding dimension \(k\).
max_projection_norm (float) – The maximum norm to be clamped after projection.
scoring_fct_norm (int) – The norm used with torch.linalg.vector_norm(). Typically 1 or 2.
power_norm (bool) – Whether to use the p-th power of the \(L_p\) norm. It has the advantage of being differentiable around 0 and numerically more stable.
entity_initializer (str | Callable[[Tensor], Tensor] | None) – Entity initializer function. Defaults to pykeen.nn.init.xavier_uniform_().
entity_initializer_kwargs (Mapping[str, Any] | None) – Keyword arguments to be used when calling the entity initializer.
entity_constrainer (str | Callable[[Tensor], Tensor] | None) – The entity constrainer. Defaults to pykeen.utils.clamp_norm().
relation_initializer (str | Callable[[Tensor], Tensor] | None) – Relation initializer function. Defaults to pykeen.nn.init.xavier_uniform_norm_().
relation_initializer_kwargs (Mapping[str, Any] | None) – Keyword arguments to be used when calling the relation initializer.
relation_constrainer (str | Callable[[Tensor], Tensor] | None) – The relation constrainer. Defaults to pykeen.utils.clamp_norm().
relation_projection_initializer (str | Callable[[Tensor], Tensor] | None) – Relation projection initializer function. Defaults to torch.nn.init.xavier_uniform_().
relation_projection_initializer_kwargs (Mapping[str, Any] | None) – Keyword arguments to be used when calling the relation projection initializer.
kwargs – Remaining keyword arguments passed through to ERModel.
Attributes Summary
The default strategy for optimizing the model's hyper-parameters
Attributes Documentation
- hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': int}, 'relation_dim': {'high': 256, 'low': 16, 'q': 16, 'type': int}, 'scoring_fct_norm': {'high': 2, 'low': 1, 'type': int}}
The default strategy for optimizing the model’s hyper-parameters
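For reference, the default grid above can be expanded into concrete choices. This sketch assumes `q` is the step size of an integer range with an inclusive upper bound (as in Optuna-style integer suggestions); the helper name is hypothetical:

```python
# The hpo_default search space shown above, written out as plain dicts.
hpo_default = {
    "embedding_dim": {"type": int, "low": 16, "high": 256, "q": 16},
    "relation_dim": {"type": int, "low": 16, "high": 256, "q": 16},
    "scoring_fct_norm": {"type": int, "low": 1, "high": 2},
}


def expand(spec: dict) -> list[int]:
    """List every value of an integer hyper-parameter range (step defaults to 1)."""
    return list(range(spec["low"], spec["high"] + 1, spec.get("q", 1)))


grid = {name: expand(spec) for name, spec in hpo_default.items()}
```

So `embedding_dim` and `relation_dim` each range over 16 values (16, 32, ..., 256), while `scoring_fct_norm` is chosen from {1, 2}.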