RGCN
- class RGCN(*, triples_factory, embedding_dim=500, num_layers=2, base_entity_initializer=<function xavier_uniform_>, base_entity_initializer_kwargs=None, relation_initializer=<function xavier_uniform_>, relation_initializer_kwargs=None, relation_representations=None, interaction='DistMult', interaction_kwargs=None, use_bias=True, activation=None, activation_kwargs=None, edge_dropout=0.4, self_loop_dropout=0.2, edge_weighting=None, decomposition=None, decomposition_kwargs=None, regularizer=None, regularizer_kwargs=None, **kwargs)[source]
Bases: pykeen.models.nbase.ERModel[torch.FloatTensor, pykeen.typing.RelationRepresentation, torch.FloatTensor]

An implementation of R-GCN from [schlichtkrull2018].
The Relational Graph Convolutional Network (R-GCN) comprises three parts:

- A GCN-based entity encoder that computes enriched representations for entities, cf. pykeen.nn.message_passing.RGCNRepresentations. The representation for entity \(i\) at level \(l \in (1,\dots,L)\) is denoted as \(\textbf{e}_i^l\). The GCN is modified to use different weights depending on the type of the relation.
- Relation representations \(\textbf{R}_{r} \in \mathbb{R}^{d \times d}\), diagonal matrices that are learned independently from the GCN-based encoder.
- An arbitrary interaction model which computes the plausibility of facts given the enriched representations, cf. pykeen.nn.modules.Interaction.
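The encoder's relation-specific message passing can be illustrated with a minimal sketch. This is a hypothetical, simplified layer (pure Python, no bases/blocks decomposition, no dropout, fixed ReLU activation) that follows the update rule from [schlichtkrull2018]: each node aggregates neighbour messages transformed by a per-relation weight matrix, normalized by the neighbourhood size, plus a self-loop term.

```python
def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

def rgcn_layer(embeddings, edges, W_rel, W_self):
    """One simplified R-GCN layer update (illustrative sketch).

    embeddings: {node: vector} at level l
    edges: list of (head, relation, tail) triples
    W_rel: {relation: matrix} -- relation-specific weights
    W_self: matrix for the self-loop term
    """
    # Group incoming neighbours by (target node, relation).
    neigh = {}
    for h, r, t in edges:
        neigh.setdefault((t, r), []).append(h)
    out = {}
    for i, e_i in embeddings.items():
        msg = matvec(W_self, e_i)  # self-loop term W_0 e_i^l
        for (t, r), heads in neigh.items():
            if t != i:
                continue
            c = len(heads)  # normalisation constant c_{i,r} = |N_r(i)|
            for h in heads:
                msg = vadd(msg, [m / c for m in matvec(W_rel[r], embeddings[h])])
        out[i] = [max(0.0, m) for m in msg]  # ReLU activation
    return out

identity = [[1.0, 0.0], [0.0, 1.0]]
updated = rgcn_layer(
    embeddings={0: [1.0, 0.0], 1: [0.0, 1.0]},
    edges=[(0, "r", 1)],
    W_rel={"r": identity},
    W_self=identity,
)
print(updated)  # node 1 receives a message from node 0: {0: [1.0, 0.0], 1: [1.0, 1.0]}
```

The actual pykeen.nn.message_passing.RGCNRepresentations implementation additionally supports edge and self-loop dropout, edge weighting schemes, and bases/blocks weight decomposition.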
Scores for each triple \((h,r,t) \in \mathcal{K}\) are calculated using the representations from the final level of the GCN-based encoder, \(\textbf{e}_h^L\) and \(\textbf{e}_t^L\), along with the relation representation \(\textbf{R}_{r}\). While the original implementation of R-GCN used the DistMult model, which is also the default here, this implementation allows the specification of an arbitrary interaction model:

\[f(h,r,t) = \textbf{e}_h^L \textbf{R}_{r} \textbf{e}_t^L\]

Initialize the module.
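Because \(\textbf{R}_{r}\) is diagonal, the default DistMult scoring function reduces to an element-wise product summed over dimensions. A minimal sketch in plain Python (storing only the diagonal of \(\textbf{R}_{r}\)):

```python
def distmult_score(e_h, r_diag, e_t):
    """DistMult score f(h, r, t) = e_h^T diag(r) e_t = sum_i e_h[i] * r[i] * e_t[i]."""
    return sum(h * r * t for h, r, t in zip(e_h, r_diag, e_t))

# 1.0*0.5*2.0 + 2.0*0.5*1.0 = 2.0
print(distmult_score([1.0, 2.0], [0.5, 0.5], [2.0, 1.0]))  # 2.0
```

In PyKEEN the encoder output feeds whichever interaction module is configured via the `interaction` parameter; DistMult is only the default choice.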
- Parameters
  - triples_factory (CoreTriplesFactory) – The triples factory facilitates access to the dataset.
  - interaction (Union[str, Interaction[FloatTensor, RelationRepresentation, FloatTensor], Type[Interaction[FloatTensor, RelationRepresentation, FloatTensor]], None]) – The interaction module (e.g., TransE).
  - interaction_kwargs (Optional[Mapping[str, Any]]) – Additional keyword-based parameters given to the interaction module's constructor, if not already instantiated.
  - entity_representations – The entity representation or sequence of representations.
  - entity_representations_kwargs – Additional keyword-based parameters for instantiation of entity representations.
  - relation_representations (Union[str, Representation, Type[Representation], None]) – The relation representation or sequence of representations.
  - relation_representations_kwargs – Additional keyword-based parameters for instantiation of relation representations.
  - skip_checks – Whether to skip entity representation checks.
  - kwargs – Keyword arguments to pass to the base model.
Attributes Summary
The default strategy for optimizing the model's hyper-parameters
Attributes Documentation
- hpo_default: ClassVar[Mapping[str, Any]] = {'activation_cls': {'choices': [<class 'torch.nn.modules.activation.ReLU'>, <class 'torch.nn.modules.activation.LeakyReLU'>], 'type': 'categorical'}, 'decomposition': {'choices': ['bases', 'blocks'], 'type': 'categorical'}, 'edge_dropout': {'high': 0.9, 'low': 0.0, 'type': <class 'float'>}, 'edge_weighting': {'choices': ['inverse_in_degree', 'inverse_out_degree', 'symmetric'], 'type': 'categorical'}, 'embedding_dim': {'high': 512, 'low': 32, 'q': 32, 'type': <class 'int'>}, 'interaction': {'choices': ['distmult', 'complex', 'ermlp'], 'type': 'categorical'}, 'num_layers': {'high': 5, 'low': 1, 'q': 1, 'type': <class 'int'>}, 'self_loop_dropout': {'high': 0.9, 'low': 0.0, 'type': <class 'float'>}, 'use_batch_norm': {'type': 'bool'}, 'use_bias': {'type': 'bool'}}
The default strategy for optimizing the model's hyper-parameters
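To illustrate how an `hpo_default`-style mapping is consumed, here is a hypothetical sampler (not PyKEEN's actual HPO machinery, which delegates to Optuna) that draws one configuration from such a dictionary. The function name `sample_hpo` is an illustrative assumption.

```python
import random

def sample_hpo(hpo_default, rng):
    """Illustrative sampler: draw one configuration from an hpo_default-style dict.

    Categorical entries pick from 'choices'; float entries sample uniformly in
    [low, high]; int entries sample from low..high with step q; bool entries flip a coin.
    """
    config = {}
    for name, spec in hpo_default.items():
        if spec["type"] == "categorical":
            config[name] = rng.choice(spec["choices"])
        elif spec["type"] is float:
            config[name] = rng.uniform(spec["low"], spec["high"])
        elif spec["type"] is int:
            q = spec.get("q", 1)
            config[name] = rng.randrange(spec["low"], spec["high"] + 1, q)
        elif spec["type"] == "bool":
            config[name] = rng.choice([True, False])
    return config

space = {
    "embedding_dim": {"high": 512, "low": 32, "q": 32, "type": int},
    "edge_dropout": {"high": 0.9, "low": 0.0, "type": float},
    "interaction": {"choices": ["distmult", "complex", "ermlp"], "type": "categorical"},
    "use_bias": {"type": "bool"},
}
print(sample_hpo(space, random.Random(0)))
```

Each sampled value respects the bounds in the spec, e.g. `embedding_dim` is always a multiple of 32 between 32 and 512.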