ERMLPE
- class ERMLPE(*, embedding_dim=200, hidden_dim=None, input_dropout=0.2, hidden_dropout=0.3, entity_initializer=<function uniform_>, relation_initializer=<function uniform_>, **kwargs)[source]
Bases:
pykeen.models.base.EntityRelationEmbeddingModel
An extension of
pykeen.models.ERMLP
proposed by [sharifzadeh2019]. This model uses a neural network-based approach similar to ER-MLP, with slight modifications. In ER-MLP, the model is:
\[f(h, r, t) = \textbf{w}^{T} g(\textbf{W} [\textbf{h}; \textbf{r}; \textbf{t}])\]
whereas in ER-MLP (E) the model is:
\[f(h, r, t) = \textbf{t}^{T} f(\textbf{W} g(\textbf{W} [\textbf{h}; \textbf{r}]))\]
including dropout and batch normalization between every two hidden layers. ConvE can be seen as a special case of ER-MLP (E) that contains the unnecessary inductive bias of convolutional filters. The aim of this model is to show that lifting this bias from
pykeen.models.ConvE
(which simply leaves us with a modified ER-MLP model) not only reduces the number of parameters but also improves performance.
Initialize the model.
- Parameters
  - embedding_dim (int) – the embedding dimension
  - hidden_dim (Optional[int]) – the hidden dimension of the MLP. Defaults to embedding_dim.
  - input_dropout (float) – the input dropout
  - hidden_dropout (float) – the hidden layer’s dropout
  - entity_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – the entity representation initializer
  - relation_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – the relation representation initializer
  - kwargs – additional keyword-based parameters passed to EntityRelationEmbeddingModel.__init__()
Attributes Summary
- hpo_default: The default strategy for optimizing the model's hyper-parameters
- loss_default_kwargs: The default parameters for the default loss function class
Methods Summary
- score_h(rt_batch, **kwargs): Forward pass using left side (head) prediction.
- score_hrt(hrt_batch, **kwargs): Forward pass.
- score_t(hr_batch, **kwargs): Forward pass using right side (tail) prediction.
Attributes Documentation
- hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': <class 'int'>}, 'hidden_dim': {'high': 9, 'low': 5, 'scale': 'power_two', 'type': <class 'int'>}, 'hidden_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'input_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}}
The default strategy for optimizing the model’s hyper-parameters
- loss_default_kwargs: ClassVar[Mapping[str, Any]] = {}
The default parameters for the default loss function class
Methods Documentation
- score_h(rt_batch, **kwargs)[source]
Forward pass using left side (head) prediction.
This method calculates the score for all possible heads for each (relation, tail) pair.
- Parameters
  - rt_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (relation, tail) pairs.
  - slice_size (>0) – the divisor for the scoring function when using slicing
  - mode – the pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting
- Return type
FloatTensor
- Returns
shape: (batch_size, num_entities), dtype: float For each r-t pair, the scores for all possible heads.
- score_hrt(hrt_batch, **kwargs)[source]
Forward pass.
This method takes head, relation and tail of each triple and calculates the corresponding score.
- Parameters
  - hrt_batch (LongTensor) – shape: (batch_size, 3), dtype: long. The indices of (head, relation, tail) triples.
  - mode – the pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting
- Return type
FloatTensor
- Returns
shape: (batch_size, 1), dtype: float The score for each triple.
- score_t(hr_batch, **kwargs)[source]
Forward pass using right side (tail) prediction.
This method calculates the score for all possible tails for each (head, relation) pair.
- Parameters
  - hr_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (head, relation) pairs.
  - slice_size (>0) – the divisor for the scoring function when using slicing
  - mode – the pass mode, which is None in the transductive setting and one of “training”, “validation”, or “testing” in the inductive setting
- Return type
FloatTensor
- Returns
shape: (batch_size, num_entities), dtype: float For each h-r pair, the scores for all possible tails.