ERMLPE
- class ERMLPE(*, embedding_dim=256, hidden_dim=None, input_dropout=0.2, hidden_dropout=None, entity_initializer=<function uniform_>, relation_initializer=None, **kwargs)[source]
Bases: ERModel
An extension of pykeen.models.ERMLP proposed by [sharifzadeh2019].

This model uses a neural network-based approach similar to ER-MLP, with slight modifications. In ER-MLP, the model is:

\[f(h, r, t) = \textbf{w}^{T} g(\textbf{W} [\textbf{h}; \textbf{r}; \textbf{t}])\]

whereas in ER-MLP (E) the model is:

\[f(h, r, t) = \textbf{t}^{T} f(\textbf{W}_2 g(\textbf{W}_1 [\textbf{h}; \textbf{r}]))\]

including dropouts and batch normalization between every two hidden layers. ConvE can be seen as a special case of ER-MLP (E) that contains the unnecessary inductive bias of convolutional filters. The aim of this model is to show that lifting this bias from pykeen.models.ConvE (which simply leaves us with a modified ER-MLP model) not only reduces the number of parameters but also improves performance.

Initialize the model.
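The two scoring functions can be sketched in plain Python, using nested lists as vectors and matrices. The choice of `relu` for the non-linearities and the concrete weight shapes are illustrative assumptions, not pykeen's actual implementation (which also adds dropout and batch normalization between layers):

```python
def relu(x):
    # element-wise non-linearity (stand-in for the g / f activations)
    return [max(0.0, v) for v in x]

def matvec(W, x):
    # matrix-vector product: one dot product per row of W
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def er_mlp_score(w, W, h, r, t):
    # ER-MLP: f(h, r, t) = w^T g(W [h; r; t])
    return dot(w, relu(matvec(W, h + r + t)))

def er_mlpe_score(W1, W2, h, r, t):
    # ER-MLP (E): f(h, r, t) = t^T f(W2 g(W1 [h; r]))
    hidden = relu(matvec(W1, h + r))   # hidden layer over [h; r] only
    out = relu(matvec(W2, hidden))     # projected back to the entity dimension
    return dot(t, out)                 # score against the tail embedding
```

Note that only ER-MLP (E) scores the tail entity by a dot product with the network's output, which is what allows scoring one (h, r) pair against all candidate tails at once.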
- Parameters:
  - embedding_dim (int) – the embedding dimension (for both entities and relations)
  - hidden_dim (Optional[int]) – the hidden dimension of the MLP; defaults to embedding_dim
  - input_dropout (float) – the input dropout of the MLP
  - hidden_dropout (Optional[float]) – the hidden dropout of the MLP; defaults to input_dropout
  - entity_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – the entity embedding initializer
  - relation_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – the relation embedding initializer; defaults to entity_initializer
  - kwargs – additional keyword-based parameters passed to ERModel.__init__()
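The defaulting behaviour documented above (hidden_dim falls back to embedding_dim, hidden_dropout falls back to input_dropout) can be sketched as a small standalone function; `resolve_mlp_config` is a hypothetical helper for illustration, not part of pykeen:

```python
def resolve_mlp_config(embedding_dim=256, hidden_dim=None,
                       input_dropout=0.2, hidden_dropout=None):
    """Resolve the optional MLP hyper-parameters to concrete values."""
    if hidden_dim is None:
        hidden_dim = embedding_dim      # hidden_dim defaults to embedding_dim
    if hidden_dropout is None:
        hidden_dropout = input_dropout  # hidden_dropout defaults to input_dropout
    return {
        "embedding_dim": embedding_dim,
        "hidden_dim": hidden_dim,
        "input_dropout": input_dropout,
        "hidden_dropout": hidden_dropout,
    }
```

For example, passing only `embedding_dim=128` yields a hidden dimension of 128 and a hidden dropout equal to the input dropout of 0.2.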
Attributes Summary
- hpo_default – The default strategy for optimizing the model's hyper-parameters
- loss_default_kwargs – The default parameters for the default loss function class
Attributes Documentation
- hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': <class 'int'>}, 'hidden_dim': {'high': 9, 'low': 5, 'scale': 'power_two', 'type': <class 'int'>}, 'hidden_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'input_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}}
The default strategy for optimizing the model’s hyper-parameters
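As a sketch of what these ranges mean in practice, the helper below expands an hpo_default entry into its candidate values. The interpretation of 'power_two' as "the sampled integer is used as an exponent of two" and of q as a step size are assumptions about the HPO search-space format, and `expand_hpo_range` is a hypothetical helper, not pykeen code:

```python
def expand_hpo_range(spec):
    # 'power_two': the low/high bounds are exponents, so hidden_dim's
    # {'low': 5, 'high': 9} would yield 32, 64, 128, 256, 512 (assumption)
    if spec.get("scale") == "power_two":
        return [2 ** e for e in range(spec["low"], spec["high"] + 1)]
    # otherwise step from low to high in increments of q (assumption)
    q = spec.get("q", 1)
    steps = int(round((spec["high"] - spec["low"]) / q))
    return [round(spec["low"] + i * q, 10) for i in range(steps + 1)]
```

Under these assumptions, the dropout range `{'low': 0.0, 'high': 0.5, 'q': 0.1}` expands to the six candidates 0.0 through 0.5, and the embedding_dim range to the multiples of 16 from 16 to 256.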