ERMLPE
- class ERMLPE(triples_factory, hidden_dim=300, input_dropout=0.2, hidden_dropout=0.3, embedding_dim=200, automatic_memory_optimization=None, loss=None, preferred_device=None, random_seed=None, regularizer=None)[source]
Bases: pykeen.models.base.EntityRelationEmbeddingModel
An extension of ERMLP proposed by [sharifzadeh2019].
This model uses a neural network-based approach similar to that of ERMLP, with slight modifications. In ERMLP, the model is:
\[f(h, r, t) = \textbf{w}^{T} g(\textbf{W} [\textbf{h}; \textbf{r}; \textbf{t}])\]
whereas in ERMLPE the model is:
\[f(h, r, t) = \textbf{t}^{T} g(\textbf{W}_{2} \, g(\textbf{W}_{1} [\textbf{h}; \textbf{r}]))\]
with dropout and batch normalization applied between every two hidden layers. ConvE can be seen as a special case of ERMLPE that contains the unnecessary inductive bias of convolutional filters. The aim of this model is to show that lifting this bias from ConvE (which simply leaves us with a modified ERMLP model) not only reduces the number of parameters but also improves performance.
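Omitting dropout and batch normalization, the ERMLPE interaction, a two-layer MLP over the concatenated head and relation embeddings followed by a dot product with the tail embedding, can be sketched in NumPy (a minimal illustration with assumed ReLU activations and random weights; this is not PyKEEN's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 200, 300  # embedding_dim and hidden_dim, matching the defaults above

# Hypothetical weight matrices of the two-layer MLP
W1 = rng.normal(size=(hidden, 2 * d)) * 0.01  # maps [h; r] to the hidden layer
W2 = rng.normal(size=(d, hidden)) * 0.01      # maps back to embedding space

def relu(x):
    return np.maximum(x, 0.0)

def score(h, r, t):
    """Sketch of f(h, r, t) = t^T g(W2 g(W1 [h; r])) with g = ReLU."""
    x = relu(W1 @ np.concatenate([h, r]))  # first hidden layer
    x = relu(W2 @ x)                       # second hidden layer, back to d dims
    return float(t @ x)                    # dot product with the tail embedding

h, r, t = (rng.normal(size=d) for _ in range(3))
s = score(h, r, t)  # a single real-valued plausibility score
```

Because the tail embedding only enters through the final dot product, scoring a triple against all candidate tails reduces to one matrix product, which is what makes 1-N scoring cheap in this architecture.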
Initialize the entity embedding model.
- Parameters
relation_dim – The relation embedding dimensionality. If not given, defaults to the same size as the entity embedding dimension.
See also: Constructor of the base class pykeen.models.Model
See also: Constructor of the base class pykeen.models.EntityEmbeddingModel
Attributes Summary
The default strategy for optimizing the model’s hyper-parameters
The default parameters for the default loss function class
Methods Summary
score_h(rt_batch): Forward pass using left side (head) prediction.
score_hrt(hrt_batch): Forward pass.
score_t(hr_batch): Forward pass using right side (tail) prediction.
Attributes Documentation
- hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 350, 'low': 50, 'q': 25, 'type': <class 'int'>}, 'hidden_dim': {'high': 450, 'low': 50, 'q': 25, 'type': <class 'int'>}, 'hidden_dropout': {'high': 0.8, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'input_dropout': {'high': 0.8, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}}
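Each entry in this grid specifies a search range [low, high] sampled in increments of q. A stdlib-only sketch of such quantized sampling (illustrative only; this is not PyKEEN's HPO machinery):

```python
import random

random.seed(0)

def sample_quantized(low, high, q, cast=int):
    """Draw a value from [low, high], quantized to steps of q."""
    steps = round((high - low) / q)
    return cast(low + q * random.randint(0, steps))

# Sampling two of the hyper-parameters from the grid above
embedding_dim = sample_quantized(50, 350, 25)          # one of 50, 75, ..., 350
hidden_dropout = sample_quantized(0.0, 0.8, 0.1, float)  # one of 0.0, 0.1, ..., 0.8
```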
The default strategy for optimizing the model’s hyper-parameters
- loss_default_kwargs: ClassVar[Optional[Mapping[str, Any]]] = {}
The default parameters for the default loss function class
Methods Documentation
- score_h(rt_batch)[source]
Forward pass using left side (head) prediction.
This method calculates the score for all possible heads for each (relation, tail) pair.
- Parameters
rt_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (relation, tail) pairs.
- Return type
FloatTensor
- Returns
shape: (batch_size, num_entities), dtype: float. For each r-t pair, the scores for all possible heads.
- score_hrt(hrt_batch)[source]
Forward pass.
This method takes head, relation and tail of each triple and calculates the corresponding score.
- Parameters
hrt_batch (LongTensor) – shape: (batch_size, 3), dtype: long. The indices of (head, relation, tail) triples.
- Raises
NotImplementedError – If the method was not implemented for this class.
- Return type
FloatTensor
- Returns
shape: (batch_size, 1), dtype: float. The score for each triple.
- score_t(hr_batch)[source]
Forward pass using right side (tail) prediction.
This method calculates the score for all possible tails for each (head, relation) pair.
- Parameters
hr_batch (LongTensor) – shape: (batch_size, 2), dtype: long. The indices of (head, relation) pairs.
- Return type
FloatTensor
- Returns
shape: (batch_size, num_entities), dtype: float. For each h-r pair, the scores for all possible tails.
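At a shape level, scoring every candidate tail for a batch of (head, relation) pairs amounts to a matrix product between the MLP outputs and the transposed entity embedding matrix. A toy NumPy sketch (all sizes and values here are hypothetical, standing in for the trained embeddings):

```python
import numpy as np

rng = np.random.default_rng(42)
num_entities, d = 7, 4  # toy sizes, assumptions for illustration
batch_size = 3

E = rng.normal(size=(num_entities, d))  # stand-in entity embedding matrix

# Pretend the two-layer MLP output for each (h, r) pair is already computed:
mlp_out = rng.normal(size=(batch_size, d))

# One matrix product yields the score of every entity as tail for every pair
scores = mlp_out @ E.T  # shape: (batch_size, num_entities)
```

The same pattern applies to score_h, with the MLP evaluated per (relation, tail) pair and the product taken against all candidate head embeddings.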