Embedding
class Embedding(max_id: int | None = None, num_embeddings: int | None = None, embedding_dim: int | None = None, shape: None | int | Sequence[int] = None, initializer: str | Callable[[Tensor], Tensor] | None = None, initializer_kwargs: Mapping[str, Any] | None = None, constrainer: str | Callable[[Tensor], Tensor] | None = None, constrainer_kwargs: Mapping[str, Any] | None = None, trainable: bool = True, dtype: dtype | None = None, **kwargs)
Bases: Representation
Trainable embeddings.
This class provides the same interface as torch.nn.Embedding and can be used throughout PyKEEN as a more fully featured drop-in replacement. It additionally offers options for normalizing, constraining, and applying dropout to the embeddings.
When a normalizer is selected, it is applied in every forward pass. It can be used, e.g., to ensure that the embedding vectors are of unit length. A constrainer can be used similarly, but it is applied after each parameter update (using the post_parameter_update hook), i.e., outside of the automatic gradient computation.
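For example, here is a minimal sketch of how the constrainer hook is used (assuming the top-level re-export pykeen.nn.Embedding and the "normalize" constrainer key listed below):
>>> import torch
>>> from pykeen.nn import Embedding
>>> embedding = Embedding(max_id=14, shape=(3,), constrainer="normalize")
>>> x = embedding(indices=torch.arange(14))  # forward pass, part of the graph
>>> # after each optimizer step, the training loop calls the hook, which applies
>>> # the constrainer to the weight outside of gradient tracking:
>>> embedding.post_parameter_update()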
The optional dropout can also be used as a regularization technique. Moreover, it makes it possible to obtain uncertainty estimates via techniques such as Monte Carlo dropout. The following simple example shows how to obtain different scores for a single triple from an (untrained) model. These scores can be considered samples from a distribution over the scores.
>>> from pykeen.datasets import Nations
>>> dataset = Nations()
>>> from pykeen.models import ERModel
>>> model = ERModel(
...     triples_factory=dataset.training,
...     interaction='distmult',
...     entity_representations_kwargs=dict(embedding_dim=3, dropout=0.1),
...     relation_representations_kwargs=dict(embedding_dim=3, dropout=0.1),
... )
>>> import torch
>>> batch = torch.as_tensor(data=[[0, 1, 0]]).repeat(10, 1)
>>> scores = model.score_hrt(batch)
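Because the untrained model is still in training mode, dropout remains active and the ten scores differ. Their spread can serve as a crude uncertainty estimate (a sketch continuing the example above; exact values depend on the random initialization):
>>> # summarize the ten dropout samples for the repeated triple
>>> mean, std = scores.mean(dim=0), scores.std(dim=0)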
Instantiate an embedding with extended functionality.
Note
The difference between a normalizer (cf. Representation) and a constrainer is that the normalizer is applied to the retrieved representations as part of the forward call. Thus, it is part of the computational graph and may contribute to the gradients received by the weight. A constrainer, on the other hand, is applied after a parameter update (using the post_parameter_update() hook), and hence is not part of the computational graph.
Parameters:
max_id (int) – The number of embeddings (> 0).
num_embeddings (int | None) – The number of embeddings (> 0).
embedding_dim (int | None) – The embedding dimensionality (> 0).
shape (tuple[int, ...]) – The shape of an individual representation.
initializer (Hint[Initializer]) – An optional initializer, which takes an uninitialized (num_embeddings, embedding_dim) tensor as input and returns an initialized tensor of the same shape and dtype (which may be the same tensor, i.e., the initialization may be in-place). Can be passed as a function, or as a string corresponding to a key in pykeen.nn.representation.initializers (a usage sketch follows this parameter list), such as:
"xavier_uniform"
"xavier_uniform_norm"
"xavier_normal"
"xavier_normal_norm"
"normal"
"normal_norm"
"uniform"
"uniform_norm"
"init_phases"
initializer_kwargs (Mapping[str, Any] | None) – Additional keyword arguments passed to the initializer.
constrainer (Callable[[Tensor], Tensor] | None) – A function which is applied to the weights after each parameter update, without tracking gradients. It may be used to enforce model constraints outside of gradient-based training. The function does not need to operate in-place, but the weight tensor is modified in-place. Can be passed as a function, or as a string corresponding to a key in pykeen.nn.representation.constrainer_resolver (a usage sketch follows this parameter list), such as:
'normalize'
'complex_normalize'
'clamp'
'clamp_norm'
constrainer_kwargs (Mapping[str, Any] | None) – Additional keyword arguments passed to the constrainer.
trainable (bool) – Whether the wrapped embeddings should be marked as requiring gradients. Defaults to True.
dtype (torch.dtype | None) – The datatype (otherwise looked up via torch.get_default_dtype()).
kwargs – Additional keyword-based parameters passed to Representation.__init__().
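As referenced above, here is a minimal sketch of selecting the initializer and constrainer by string key (assuming the top-level re-export pykeen.nn.Embedding; the clamp_norm keyword arguments maxnorm, p, and dim are illustrative values mirroring pykeen.utils.clamp_norm):
>>> from pykeen.nn import Embedding
>>> embedding = Embedding(
...     max_id=14,
...     embedding_dim=3,
...     initializer="xavier_uniform",  # key resolved via initializers
...     constrainer="clamp_norm",  # key resolved via constrainer_resolver
...     constrainer_kwargs=dict(maxnorm=1.0, p=2, dim=-1),  # illustrative values
... )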
Methods Summary
post_parameter_update() – Apply constraints which should not be included in gradients.
reset_parameters() – Reset the module's parameters.
Methods Documentation