RGCN

class RGCN(triples_factory, embedding_dim=500, automatic_memory_optimization=None, loss=None, predict_with_sigmoid=False, preferred_device=None, random_seed=None, num_bases_or_blocks=5, num_layers=2, use_bias=True, use_batch_norm=False, activation_cls=None, activation_kwargs=None, base_model=None, sparse_messages_slcwa=True, edge_dropout=0.4, self_loop_dropout=0.2, edge_weighting=inverse_indegree_edge_weights, decomposition='basis', buffer_messages=True)[source]

Bases: pykeen.models.base.Model

An implementation of R-GCN from [schlichtkrull2018].

This model enriches entity representations by applying graph convolutions with relation-specific weight matrices, and scores triples with an interaction model (the base model) on top of the enriched representations.

Initialize the module.

Parameters
  • triples_factory (TriplesFactory) – The triples factory facilitates access to the dataset.

  • loss (Optional[Loss]) – The loss to use. If None is given, use the loss default specific to the model subclass.

  • predict_with_sigmoid (bool) – Whether to apply a sigmoid to the scores when predicting. Applying a sigmoid at prediction time may collapse triples with very high or very low scores to exactly equal values. If the model was not trained with a sigmoid applied (or with BCEWithLogitsLoss), the scores are not calibrated for use with a sigmoid.

  • automatic_memory_optimization (Optional[bool]) – If set to True, the model derives the maximum possible batch sizes for scoring triples during evaluation, and also during training if no batch size was given. This fully utilizes the available hardware and achieves the fastest possible computation.

  • preferred_device (Optional[str]) – The preferred device for model training and inference.

  • random_seed (Optional[int]) – A random seed to use for initialising the model’s weights. Should be set when aiming at reproducibility.

  • regularizer (Optional[Regularizer]) – A regularizer to use for training.
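The decomposition parameter controls how the relation-specific weight matrices are parameterized. With decomposition='basis', each relation's weight matrix is a learned linear combination of num_bases_or_blocks shared basis matrices, which keeps the parameter count independent of the number of relations. The following is a minimal pure-Python sketch of that idea, purely illustrative; the actual PyKEEN implementation operates on torch tensors with learned parameters:

```python
def basis_weight(coefficients, bases):
    """Combine shared basis matrices into one relation-specific weight matrix.

    Illustrative sketch of R-GCN basis decomposition: W_r = sum_b a_rb * B_b.

    coefficients: list of num_bases floats (the a_rb for one fixed relation r)
    bases: list of num_bases matrices, each given as a list of row lists
    """
    rows, cols = len(bases[0]), len(bases[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for a_rb, basis in zip(coefficients, bases):
        for i in range(rows):
            for j in range(cols):
                # accumulate the weighted contribution of basis matrix B_b
                out[i][j] += a_rb * basis[i][j]
    return out
```

With two 2x2 bases (identity and anti-diagonal) and coefficients [2.0, 3.0], this yields the matrix [[2.0, 3.0], [3.0, 2.0]].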

Attributes Summary

hpo_default

The default strategy for optimizing the model’s hyper-parameters

Methods Summary

post_parameter_update()

Has to be called after each parameter update.

score_hrt(hrt_batch)

Forward pass.

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'activation_cls': {'choices': [None, torch.nn.ReLU, torch.nn.LeakyReLU], 'type': 'categorical'}, 'base_model_cls': {'choices': [pykeen.models.DistMult, pykeen.models.ComplEx, pykeen.models.ERMLP], 'type': 'categorical'}, 'decomposition': {'choices': ['basis', 'block'], 'type': 'categorical'}, 'edge_dropout': {'high': 0.9, 'low': 0.0, 'type': float}, 'edge_weighting': {'choices': [None, inverse_indegree_edge_weights, inverse_outdegree_edge_weights, symmetric_edge_weights], 'type': 'categorical'}, 'embedding_dim': {'high': 1000, 'low': 50, 'q': 50, 'type': int}, 'num_bases_or_blocks': {'high': 20, 'low': 2, 'q': 1, 'type': int}, 'num_layers': {'high': 5, 'low': 1, 'q': 1, 'type': int}, 'self_loop_dropout': {'high': 0.9, 'low': 0.0, 'type': float}, 'use_batch_norm': {'type': 'bool'}, 'use_bias': {'type': 'bool'}}

The default strategy for optimizing the model’s hyper-parameters
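Among the hyper-parameter choices above, the edge_weighting functions normalize messages by node degree so that high-degree nodes do not dominate aggregation. As a hedged pure-Python sketch of what inverse in-degree weighting computes (the actual PyKEEN functions operate on torch index tensors):

```python
from collections import Counter

def inverse_indegree_edge_weights(targets):
    """Weight each edge by the reciprocal of its target node's in-degree.

    Illustrative sketch: a node receiving k incoming edges contributes
    weight 1/k to each of them, so its incoming messages average rather
    than sum.

    targets: list of target-node indices, one per edge
    """
    indegree = Counter(targets)  # in-degree of each target node
    return [1.0 / indegree[t] for t in targets]
```

For example, two edges into node 0 and one edge into node 1 yield weights [0.5, 0.5, 1.0].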

Methods Documentation

post_parameter_update()[source]

Has to be called after each parameter update.

Return type

None

score_hrt(hrt_batch)[source]

Forward pass.

This method takes the head, relation, and tail index of each triple and computes the corresponding score.

Parameters

hrt_batch (LongTensor) – shape: (batch_size, 3), dtype: long. The indices of (head, relation, tail) triples.

Raises

NotImplementedError – If the method is not implemented for this class.

Return type

FloatTensor

Returns

shape: (batch_size, 1), dtype: float. The score for each triple.
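To make the score_hrt shape contract concrete, here is a minimal pure-Python sketch using a DistMult-style interaction (one of the base models listed in hpo_default); the real method operates on torch tensors and on representations enriched by the R-GCN layers, not raw embeddings:

```python
def score_hrt(hrt_batch, entity_emb, relation_emb):
    """Score each (head, relation, tail) triple in a batch.

    Illustrative sketch of the shape contract: input is a batch of index
    triples, output has shape (batch_size, 1). The interaction used here
    is DistMult-style: score = sum_i e_h[i] * w_r[i] * e_t[i].

    hrt_batch: list of (h, r, t) index triples
    entity_emb / relation_emb: lists of embedding vectors (lists of floats)
    """
    scores = []
    for h, r, t in hrt_batch:
        e_h, w_r, e_t = entity_emb[h], relation_emb[r], entity_emb[t]
        # one-element inner list mirrors the (batch_size, 1) output shape
        scores.append([sum(a * b * c for a, b, c in zip(e_h, w_r, e_t))])
    return scores
```

With entity embeddings [[1, 2], [3, 4]] and relation embedding [[1, 1]], the batch [(0, 0, 1)] scores to [[11.0]] (1·1·3 + 2·1·4).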