Representation

Embedding modules.

class CombinedCompGCNRepresentations(*, triples_factory, embedding_specification, num_layers=1, dims=None, layer_kwargs=None)[source]

A sequence of CompGCN layers.

Initialize the combined entity and relation representation module.

Parameters
  • triples_factory (CoreTriplesFactory) – The triples factory containing the training triples.

  • embedding_specification (EmbeddingSpecification) – An embedding specification for the base entity and relation representations.

  • num_layers (Optional[int]) – The number of message passing layers to use. If None, it is inferred from len(dims), which then requires dims to be a sequence.

  • dims (Union[None, int, Sequence[int]]) – The hidden dimensions to use. If None, defaults to the embedding dimension of the base representations. If an integer, the same dimension is used for all layers. The last dimension equals the output dimension.

  • layer_kwargs (Optional[Mapping[str, Any]]) – Additional key-word based parameters passed to the individual layers; cf. CompGCNLayer.

forward()[source]

Compute enriched representations.

Return type

Tuple[FloatTensor, FloatTensor]

split()[source]

Return the separated representations.

Return type

Tuple[SingleCompGCNRepresentation, SingleCompGCNRepresentation]
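
For illustration, a minimal usage sketch (not part of the original docstring; it assumes the Nations dataset and that these classes are importable from pykeen.nn.emb, as listed on this page):

from pykeen.datasets import Nations
from pykeen.nn.emb import CombinedCompGCNRepresentations, EmbeddingSpecification

dataset = Nations()
combined = CombinedCompGCNRepresentations(
    triples_factory=dataset.training,
    embedding_specification=EmbeddingSpecification(embedding_dim=32),
    num_layers=2,
)
# split() yields one single-view wrapper per position: entities first, relations second
entity_representations, relation_representations = combined.split()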

train(mode=True)[source]

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of the particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g., Dropout, BatchNorm, etc.

Args:
mode (bool): whether to set training mode (True) or evaluation mode (False). Default: True.

Returns:

Module: self

class CompGCNLayer(input_dim, output_dim=None, dropout=0.0, use_bias=True, use_relation_bias=False, composition=None, activation=<class 'torch.nn.modules.linear.Identity'>, activation_kwargs=None, edge_weighting=<class 'pykeen.nn.weighting.SymmetricEdgeWeighting'>)[source]

A single layer of the CompGCN model.

Initialize the module.

Parameters
  • input_dim (int) – The input dimension.

  • output_dim (Optional[int]) – The output dimension. If None, equals the input dimension.

  • dropout (float) – The dropout to use for forward and backward edges.

  • use_bias (bool) – Whether to use a bias; note that the bias is applied before the subsequent batch normalization layer.

  • use_relation_bias (bool) – Whether to use a bias for the relation transformation.

  • composition (Union[str, CompositionModule, None]) – The composition function.

  • activation (Union[str, Module, None]) – The activation to use.

  • activation_kwargs (Optional[Mapping[str, Any]]) – Additional key-word based arguments passed to the activation.

forward(x_e, x_r, edge_index, edge_type)[source]

Update entity and relation representations.

\[X_E'[e] = \frac{1}{3} \left( X_E[e] W_s + \sum_{(h,r,e) \in T} \alpha(h, e) \phi(X_E[h], X_R[r]) W_f + \sum_{(e,r,t) \in T} \alpha(e, t) \phi(X_E[t], X_R[r^{-1}]) W_b \right)\]
Parameters
  • x_e (FloatTensor) – shape: (num_entities, input_dim) The entity representations.

  • x_r (FloatTensor) – shape: (2 * num_relations, input_dim) The relation representations (including inverse relations).

  • edge_index (LongTensor) – shape: (2, num_edges) The edge index, pairs of source and target entity for each triple.

  • edge_type (LongTensor) – shape (num_edges,) The edge type, i.e., relation ID, for each triple.

Return type

Tuple[FloatTensor, FloatTensor]

Returns

shape: (num_entities, output_dim) / (2 * num_relations, output_dim) The updated entity and relation representations.
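
As a hedged sketch of the tensor shapes involved (random data; it assumes the default composition and activation, and that CompGCNLayer is importable from pykeen.nn.emb as documented here):

import torch
from pykeen.nn.emb import CompGCNLayer

num_entities, num_relations, num_edges, dim = 5, 2, 7, 16
layer = CompGCNLayer(input_dim=dim)
x_e = torch.rand(num_entities, dim)
x_r = torch.rand(2 * num_relations, dim)  # one extra representation per inverse relation
edge_index = torch.randint(num_entities, size=(2, num_edges))
edge_type = torch.randint(num_relations, size=(num_edges,))
# returns updated representations of shape (num_entities, output_dim) and (2 * num_relations, output_dim)
x_e_new, x_r_new = layer(x_e=x_e, x_r=x_r, edge_index=edge_index, edge_type=edge_type)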

message(x_e, x_r, edge_index, edge_type, weight)[source]

Perform message passing.

Parameters
  • x_e (FloatTensor) – shape: (num_entities, input_dim) The entity representations.

  • x_r (FloatTensor) – shape: (2 * num_relations, input_dim) The relation representations (including inverse relations).

  • edge_index (LongTensor) – shape: (2, num_edges) The edge index, pairs of source and target entity for each triple.

  • edge_type (LongTensor) – shape (num_edges,) The edge type, i.e., relation ID, for each triple.

  • weight (Parameter) – The transformation weight.

Return type

FloatTensor

Returns

The updated entity representations.

reset_parameters()[source]

Reset the model’s parameters.

class Embedding(num_embeddings, embedding_dim=None, shape=None, initializer=None, initializer_kwargs=None, normalizer=None, normalizer_kwargs=None, constrainer=None, constrainer_kwargs=None, regularizer=None, regularizer_kwargs=None, trainable=True, dtype=None, dropout=None)[source]

Trainable embeddings.

This class provides the same interface as torch.nn.Embedding and can be used throughout PyKEEN as a more fully featured drop-in replacement.

It extends torch.nn.Embedding with additional options for normalizing, constraining, or applying dropout.

When a normalizer is selected, it is applied in every forward pass. It can be used, e.g., to ensure that the embedding vectors are of unit length. A constrainer can be used similarly, but it is applied after each parameter update (using the post_parameter_update hook), i.e., outside of the automatic gradient computation.

The optional dropout can also be used as a regularization technique. Moreover, it makes it possible to obtain uncertainty estimates via techniques such as Monte Carlo dropout. The following simple example shows how to obtain different scores for a single triple from an (untrained) model. These scores can be considered as samples from a distribution over the scores.

>>> from pykeen.datasets import Nations
>>> dataset = Nations()
>>> from pykeen.nn.emb import EmbeddingSpecification
>>> spec = EmbeddingSpecification(embedding_dim=3, dropout=0.1)
>>> from pykeen.models import ERModel
>>> model = ERModel(
...     triples_factory=dataset.training,
...     interaction='distmult',
...     entity_representations=spec,
...     relation_representations=spec,
... )
>>> import torch
>>> batch = torch.as_tensor(data=[[0, 1, 0]]).repeat(10, 1)
>>> scores = model.score_hrt(batch)
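
Since a freshly constructed torch module is in training mode (mode=True by default), the dropout remains active here, so the ten identical triples generally receive different scores. A quick check (a hedged sketch continuing the objects from above):

>>> model.training
True
>>> scores.shape
torch.Size([10, 1])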

Instantiate an embedding with extended functionality.

Parameters
  • num_embeddings (int) – >0 The number of embeddings.

  • embedding_dim (Optional[int]) – >0 The embedding dimensionality.

  • initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) –

    An optional initializer, which takes an uninitialized (num_embeddings, embedding_dim) tensor as input and returns an initialized tensor of the same shape and dtype (which may be the same object, i.e., the initialization may be in-place). It can be passed as a function, or as a string corresponding to a key in pykeen.nn.emb.initializers, such as:

    • "xavier_uniform"

    • "xavier_uniform_norm"

    • "xavier_normal"

    • "xavier_normal_norm"

    • "normal"

    • "normal_norm"

    • "uniform"

    • "uniform_norm"

    • "init_phases"

  • initializer_kwargs (Optional[Mapping[str, Any]]) – Additional keyword arguments passed to the initializer

  • normalizer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – A normalization function, which is applied in every forward pass.

  • normalizer_kwargs (Optional[Mapping[str, Any]]) – Additional keyword arguments passed to the normalizer

  • constrainer (Union[str, Callable[[FloatTensor], FloatTensor], None]) –

    A function which is applied to the weights after each parameter update, without tracking gradients. It may be used to enforce model constraints outside of gradient-based training. The function does not need to be in-place, but the weight tensor is modified in-place. Can be passed as a function, or as a string corresponding to a key in pykeen.nn.emb.constrainers such as:

    • 'normalize'

    • 'complex_normalize'

    • 'clamp'

    • 'clamp_norm'

  • constrainer_kwargs (Optional[Mapping[str, Any]]) – Additional keyword arguments passed to the constrainer

  • regularizer (Union[str, Regularizer, None]) – A regularizer, which is applied to the selected embeddings in forward pass

  • regularizer_kwargs (Optional[Mapping[str, Any]]) – Additional keyword arguments passed to the regularizer

  • dropout (Optional[float]) – A dropout value for the embeddings.
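
For illustration, a hedged construction sketch combining several of these options (the argument values are arbitrary; the string keys are taken from the lists above):

from pykeen.nn.emb import Embedding

emb = Embedding(
    num_embeddings=10,
    embedding_dim=4,
    initializer="xavier_uniform",
    constrainer="normalize",
    dropout=0.1,
)
# the constrainer is only applied via the post-update hook, outside of autograd
emb.post_parameter_update()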

property embedding_dim: int

The representation dimension.

Return type

int

forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

classmethod init_with_device(num_embeddings, embedding_dim, device, initializer=None, initializer_kwargs=None, normalizer=None, normalizer_kwargs=None, constrainer=None, constrainer_kwargs=None)[source]

Create an embedding object on the given device by wrapping __init__().

This method is a hotfix for not being able to pass a device during the initialization of torch.nn.Embedding. Instead, the weight is always initialized on the CPU and has to be moved to the GPU afterwards.

Return type

Embedding

Returns

The embedding.

property num_embeddings: int

The total number of representations (i.e. the maximum ID).

Return type

int

post_parameter_update()[source]

Apply constraints which should not be included in gradients.

reset_parameters()[source]

Reset the module’s parameters.

Return type

None

class EmbeddingSpecification(embedding_dim=None, shape=None, initializer=None, initializer_kwargs=None, normalizer=None, normalizer_kwargs=None, constrainer=None, constrainer_kwargs=None, regularizer=None, regularizer_kwargs=None, dtype=None, dropout=None)[source]

An embedding specification.

make(*, num_embeddings, device=None)[source]

Create an embedding with this specification.

Return type

Embedding
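
A short usage sketch (hedged; the field values are arbitrary and follow the signature shown above):

from pykeen.nn.emb import EmbeddingSpecification

spec = EmbeddingSpecification(embedding_dim=8, initializer="normal", dropout=0.1)
# the number of embeddings is deliberately not part of the specification;
# it is only supplied here, e.g., taken from a triples factory
emb = spec.make(num_embeddings=100)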

class LabelBasedTransformerRepresentation(labels, pretrained_model_name_or_path='bert-base-cased', max_length=512)[source]

Label-based representations using a transformer encoder.

Example Usage:

Entity representations are obtained by encoding the labels with a Transformer model. The transformer model becomes part of the KGE model, and its parameters are trained jointly.

from pykeen.datasets import get_dataset
from pykeen.nn.emb import EmbeddingSpecification, LabelBasedTransformerRepresentation
from pykeen.models import ERModel

dataset = get_dataset(dataset="nations")
entity_representations = LabelBasedTransformerRepresentation.from_triples_factory(
    triples_factory=dataset.training,
)
model = ERModel(
    triples_factory=dataset.training,
    interaction="ermlp",
    entity_representations=entity_representations,
    relation_representations=EmbeddingSpecification(shape=entity_representations.shape),
)

Initialize the representation.

Parameters
  • labels (Sequence[str]) – the labels

  • pretrained_model_name_or_path (str) – the name of the pretrained model, or a path, cf. AutoModel.from_pretrained

  • max_length (int) – >0 the maximum number of tokens to pad/trim the labels to

forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

classmethod from_triples_factory(triples_factory, for_entities=True, **kwargs)[source]

Prepare a label-based transformer representation with labels from a triples factory.

Parameters
  • triples_factory (TriplesFactory) – the triples factory

  • for_entities (bool) – whether to create the representation for entities (or relations)

  • kwargs – additional keyword-based arguments passed to LabelBasedTransformerRepresentation.__init__()

Raises

ImportError – if the transformers library could not be imported

Return type

LabelBasedTransformerRepresentation

class LowRankEmbeddingRepresentation(*, max_id, shape, num_bases=3, weight_initializer=<pykeen.utils.compose object>, **kwargs)[source]

Low-rank embedding factorization.

This representation reduces the number of trainable parameters by not learning independent weights for each index, but rather sharing bases among all indices and learning only the weights of their linear combination.

\[E[i] = \sum_k B[i, k] * W[k]\]

Initialize the representations.

Parameters
  • max_id (int) – the maximum ID (exclusive). Valid IDs range from 0 to max_id - 1.

  • shape (Sequence[int]) – the shape of an individual base representation.

  • num_bases (int) – the number of bases. More bases increase expressivity, but also increase the number of trainable parameters.

  • weight_initializer (Callable[[FloatTensor], FloatTensor]) – the initializer for basis weights

  • kwargs – additional keyword based arguments passed to pykeen.nn.emb.Embedding, which is used for the base representations.
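
A minimal construction sketch (hedged; the sizes are arbitrary) illustrating the parameter saving:

from pykeen.nn.emb import LowRankEmbeddingRepresentation

# instead of 100 * 8 = 800 independent weights, this trains the default
# num_bases=3 bases of shape (8,) plus a (100, 3) weight matrix B
low_rank = LowRankEmbeddingRepresentation(max_id=100, shape=(8,))
x = low_rank(indices=None)  # shape: (100, 8)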

forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

reset_parameters()[source]

Reset the module’s parameters.

Return type

None

class NodePieceRepresentation(*, triples_factory, token_representation, aggregation=None, num_tokens=2, shape=None)[source]

Basic implementation of node piece decomposition [galkin2021].

\[x_e = agg(\{T[t] \mid t \in tokens(e) \})\]

where \(T\) are token representations, \(tokens\) selects a fixed number \(k\) of tokens for each entity, and \(agg\) is an aggregation function, which aggregates the individual token representations into a single entity representation.

Note

This implementation currently only supports representation of entities by bag-of-relations.

Initialize the representation.

Parameters
  • triples_factory (CoreTriplesFactory) – the triples factory

  • token_representation (Union[EmbeddingSpecification, RepresentationModule]) – the token representation specification, or pre-instantiated representation module. For the latter, the number of representations must be \(2 * num_relations + 1\).

  • aggregation (Union[None, str, Callable[[FloatTensor, int], FloatTensor]]) –

    aggregation of multiple token representations to a single entity representation. By default, this uses torch.mean(). If a string is provided, the module assumes that it refers to a top-level torch function, e.g., "mean" for torch.mean() or "sum" for torch.sum(). An aggregation can also have trainable parameters, e.g., MLP(mean(MLP(tokens))) (cf. DeepSets from [zaheer2017]). In this case, the module has to be created outside of this component.

    There may also be aggregations which result in differently shaped output, e.g., a concatenation of all token embeddings resulting in shape (num_tokens * d,). In this case, shape must be provided.

    The aggregation takes two arguments: the (batched) tensor of token representations, in shape (*, num_tokens, *dt), and the index along which to aggregate.

  • num_tokens (int) – the number of tokens for each entity.

  • shape (Optional[Sequence[int]]) – the shape of an individual representation. Only necessary, if aggregation results in a change of dimensions.
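
For illustration, a hedged construction sketch (it assumes the Nations dataset; since token_representation is given as a specification, the module creates the token embeddings itself):

from pykeen.datasets import Nations
from pykeen.nn.emb import EmbeddingSpecification, NodePieceRepresentation

dataset = Nations()
entity_representations = NodePieceRepresentation(
    triples_factory=dataset.training,
    token_representation=EmbeddingSpecification(embedding_dim=16),
    num_tokens=2,  # each entity is represented by two of its incident relations
)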

assignment: torch.LongTensor

the entity-to-token mapping

forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

tokens: RepresentationModule

the token representations

class RepresentationModule(max_id, shape)[source]

A base class for obtaining representations for entities/relations.

A representation module maps integer IDs to representations, which are tensors of floats.

max_id defines the upper bound of indices we are allowed to request (exclusively). For simple embeddings this is equivalent to num_embeddings, but it is a more appropriate term for general, non-embedding representations, where the representations could come from somewhere else, e.g., a GNN encoder.

shape describes the shape of a single representation. In case of a vector embedding, this is just a single dimension. For others, e.g. pykeen.models.RESCAL, we have 2-d representations, and in general it can be any fixed shape.

We can look at all representations as a tensor of shape (max_id, *shape), and this is exactly the result of passing indices=None to the forward method.

We can also pass multi-dimensional indices to the forward method, in which case the indices’ shape becomes the prefix of the result shape: (*indices.shape, *self.shape).
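
For illustration, a hypothetical minimal subclass (not part of PyKEEN) that returns parameter-free one-hot representations; a subclass only has to set max_id and shape and implement forward():

import torch
from pykeen.nn.emb import RepresentationModule

class OneHotRepresentation(RepresentationModule):
    """A hypothetical parameter-free representation returning one-hot vectors."""

    def __init__(self, max_id: int):
        # each representation is a vector of dimension max_id
        super().__init__(max_id=max_id, shape=(max_id,))

    def forward(self, indices: torch.LongTensor = None) -> torch.FloatTensor:
        if indices is None:
            indices = torch.arange(self.max_id)
        # result shape: (*indices.shape, *self.shape), as described above
        return torch.nn.functional.one_hot(indices, num_classes=self.max_id).float()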

Initialize the representation module.

Parameters
  • max_id (int) – The maximum ID (exclusive). Valid IDs range from 0 to max_id - 1.

  • shape (Sequence[int]) – The shape of an individual representation.

property embedding_dim: int

Return the “embedding dimension”. Kept for backward compatibility.

Return type

int

abstract forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

get_in_canonical_shape(indices=None)[source]

Get representations in canonical shape.

Parameters

indices (Optional[LongTensor]) – shape: (b,) or (b, n) The indices, or None. If None, return all representations.

Return type

FloatTensor

Returns

shape: (b?, n?, d) If indices is None, b=1, n=max_id. If indices is 1-dimensional, b=indices.shape[0] and n=1. If indices is 2-dimensional, b, n = indices.shape
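
Hedged doctest-style examples of the three cases (following the style of get_in_more_canonical_shape() below):

>>> import torch
>>> from pykeen.nn.emb import EmbeddingSpecification
>>> emb = EmbeddingSpecification(shape=(20,)).make(num_embeddings=10)
>>> emb.get_in_canonical_shape(indices=None).shape
(1, 10, 20)
>>> emb.get_in_canonical_shape(indices=torch.arange(5)).shape
(5, 1, 20)
>>> emb.get_in_canonical_shape(indices=torch.arange(6).view(2, 3)).shape
(2, 3, 20)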

get_in_more_canonical_shape(dim, indices=None)[source]

Get representations in canonical shape.

The canonical shape is given as

(batch_size, d_1, d_2, d_3, *self.shape)

fulfilling the following properties: Let i = dim. If indices is None, the return shape is (1, d_1, d_2, d_3, *self.shape) with d_i = num_representations and d_j = 1 for all j ≠ i. If indices is not None, then batch_size = indices.shape[0]; d_i = 1 if indices is 1-dimensional, d_i = indices.shape[1] if it is 2-dimensional, and the remaining d_j = 1. In other words, the representations are expanded along the dimension selected by dim, while a 1-dimensional index tensor only fills the batch dimension.

Examples:

>>> import torch
>>> from pykeen.nn.emb import EmbeddingSpecification
>>> emb = EmbeddingSpecification(shape=(20,)).make(num_embeddings=10)
>>> # Get head representations for given batch indices
>>> emb.get_in_more_canonical_shape(dim="h", indices=torch.arange(5)).shape
(5, 1, 1, 1, 20)
>>> # Get head representations for given 2D batch indices, as e.g. used by fast sLCWA scoring
>>> emb.get_in_more_canonical_shape(dim="h", indices=torch.arange(6).view(2, 3)).shape
(2, 3, 1, 1, 20)
>>> # Get head representations for 1:n scoring
>>> emb.get_in_more_canonical_shape(dim="h", indices=None).shape
(1, 10, 1, 1, 20)

Parameters
  • dim (Union[int, str]) – The dimension along which to expand for indices=None, or indices.ndimension() == 2.

  • indices (Optional[LongTensor]) – The indices. Either None, in which case all embeddings are returned, or a 1- or 2-dimensional index tensor.

Return type

FloatTensor

Returns

shape: (batch_size, d1, d2, d3, *self.shape)

max_id: int

the maximum ID (exclusively)

post_parameter_update()[source]

Apply constraints which should not be included in gradients.

reset_parameters()[source]

Reset the module’s parameters.

Return type

None

shape: Tuple[int, ...]

the shape of an individual representation

class SingleCompGCNRepresentation(combined, position=0)[source]

A wrapper around the combined representation module.

Initialize the module.

Parameters
  • combined (CombinedCompGCNRepresentations) – The combined representation module to wrap.

  • position (int) – Which output of the combined module to expose, i.e., 0 for entity representations and 1 for relation representations.

forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

class SubsetRepresentationModule(base, max_id)[source]

A representation module, which only exposes a subset of representations of its base.

Initialize the representations.

Parameters
  • base (RepresentationModule) – The base representations, which must provide a sufficient number of representations, i.e., at least max_id.

  • max_id (int) – The maximum ID (exclusive), i.e., the number of representations to expose.
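
A short hedged sketch: wrapping a base Embedding so that only the first five IDs are exposed:

from pykeen.nn.emb import Embedding, SubsetRepresentationModule

base = Embedding(num_embeddings=10, embedding_dim=4)
subset = SubsetRepresentationModule(base, max_id=5)
x = subset(indices=None)  # shape: (5, 4); only IDs 0..4 are visible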

forward(indices=None)[source]

Get representations for indices.

Parameters

indices (Optional[LongTensor]) – shape: s The indices, or None. If None, this is interpreted as torch.arange(self.max_id) (although implemented more efficiently).

Return type

FloatTensor

Returns

shape: (*s, *self.shape) The representations.

constrainers = {'clamp': torch.clamp, 'clamp_norm': clamp_norm, 'complex_normalize': complex_normalize, 'normalize': normalize}

Constrainers

initializers = {'init_phases': init_phases, 'normal': normal_, 'normal_norm': compose(normal_, normalize), 'phases': init_phases, 'uniform': uniform_, 'uniform_norm': compose(uniform_, normalize), 'xavier_normal': xavier_normal_, 'xavier_normal_norm': compose(xavier_normal_, normalize), 'xavier_uniform': xavier_uniform_, 'xavier_uniform_norm': compose(xavier_uniform_, normalize)}

Initializers