ConvE

class ConvE(triples_factory, input_channels=None, output_channels=32, embedding_height=None, embedding_width=None, kernel_height=3, kernel_width=3, input_dropout=0.2, output_dropout=0.3, feature_map_dropout=0.2, embedding_dim=200, apply_batch_normalization=True, entity_initializer=<function xavier_normal_>, relation_initializer=<function xavier_normal_>, **kwargs)[source]

Bases: ERModel

An implementation of ConvE from [dettmers2018].

ConvE is a CNN-based approach. For each triple \((h,r,t)\), the input to ConvE is a matrix \(\mathbf{A} \in \mathbb{R}^{2 \times d}\) whose first row represents \(\mathbf{h} \in \mathbb{R}^d\) and whose second row represents \(\mathbf{r} \in \mathbb{R}^d\). \(\mathbf{A}\) is reshaped to a matrix \(\mathbf{B} \in \mathbb{R}^{m \times n}\) in which the first \(m/2\) rows represent \(\mathbf{h}\) and the remaining \(m/2\) rows represent \(\mathbf{r}\). In the convolution layer, a set of 2-dimensional convolutional filters \(\Omega = \{\omega_i \mid \omega_i \in \mathbb{R}^{r \times c}\}\) is applied to \(\mathbf{B}\) to capture interactions between \(\mathbf{h}\) and \(\mathbf{r}\). The resulting feature maps are reshaped and concatenated to create a feature vector \(\mathbf{v} \in \mathbb{R}^{|\Omega|rc}\). In the next step, \(\mathbf{v}\) is mapped into the entity space using a linear transformation \(\mathbf{W} \in \mathbb{R}^{|\Omega|rc \times d}\), i.e., \(\mathbf{e}_{h,r} = \mathbf{v}^{T} \mathbf{W}\). The score for the triple \((h,r,t) \in \mathbb{K}\) is then given by:

\[f(h,r,t) = \mathbf{e}_{h,r} \mathbf{t}\]

Since the interaction model can be decomposed into \(f(h,r,t) = \left\langle f'(\mathbf{h}, \mathbf{r}), \mathbf{t} \right\rangle\), the model is particularly well suited for 1-N scoring, i.e., the efficient computation of scores for \((h,r,t)\) for fixed \(h,r\) and many different \(t\).
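To make the data flow concrete, the following is a minimal sketch of this interaction in plain PyTorch. It is illustrative only, not PyKEEN's internal implementation; the concrete dimensions, the ReLU nonlinearity, and all variable names are assumptions.

import torch
import torch.nn as nn

# Illustrative dimensions: d = 200, each embedding reshaped to 10 x 20,
# so the stacked input B has shape (2 * 10) x 20.
d, height, width = 200, 10, 20
num_filters, kh, kw = 32, 3, 3  # |Omega| filters of size r x c

conv = nn.Conv2d(in_channels=1, out_channels=num_filters, kernel_size=(kh, kw))
# Feature map size after a valid 2D convolution over the (2 * height) x width input.
fm_h, fm_w = 2 * height - kh + 1, width - kw + 1
project = nn.Linear(num_filters * fm_h * fm_w, d)  # plays the role of W

def score(h, r, t):
    """Score a batch of triples; h, r, and t each have shape (batch, d)."""
    # Reshape h and r to height x width "images" and stack them into B.
    b = torch.cat([h.view(-1, 1, height, width), r.view(-1, 1, height, width)], dim=2)
    v = torch.relu(conv(b)).flatten(start_dim=1)  # feature vector v
    e_hr = project(v)                             # e_{h,r} = v^T W
    return (e_hr * t).sum(dim=-1)                 # inner product with t

h, r, t = (torch.randn(4, d) for _ in range(3))
print(score(h, r, t).shape)  # torch.Size([4])

Because \(\mathbf{e}_{h,r}\) is independent of \(t\), scoring one \((h,r)\) pair against all entities reduces to a single matrix product of \(\mathbf{e}_{h,r}\) with the entity embedding matrix, which is exactly what makes 1-N scoring cheap.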

The default setting uses batch normalization, which normalizes the outputs of the activation functions in order to keep the weights of the neural network from becoming imbalanced and to speed up training. However, batch normalization is not the only way to achieve more robust and effective training [santurkar2018]. Therefore, the apply_batch_normalization flag can be used to turn batch normalization on or off (it is turned on by default).
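For instance, to configure the model without batch normalization (a minimal sketch; any triples factory can stand in for dataset.training):

>>> from pykeen.datasets import Nations
>>> dataset = Nations()
>>> from pykeen.models import ConvE
>>> model = ConvE(triples_factory=dataset.training, apply_batch_normalization=False)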

Example usage:

>>> # Step 1: Get triples
>>> from pykeen.datasets import Nations
>>> dataset = Nations(create_inverse_triples=True)
>>> # Step 2: Configure the model
>>> from pykeen.models import ConvE
>>> model = ConvE(
...     triples_factory=dataset.training,
...     embedding_dim=200,
...     input_channels=1,
...     output_channels=32,
...     embedding_height=10,
...     embedding_width=20,
...     kernel_height=3,
...     kernel_width=3,
...     input_dropout=0.2,
...     feature_map_dropout=0.2,
...     output_dropout=0.3,
... )
>>> # Step 3: Configure the loop
>>> from torch.optim import Adam
>>> optimizer = Adam(params=model.get_grad_params())
>>> from pykeen.training import LCWATrainingLoop
>>> training_loop = LCWATrainingLoop(
...     model=model,
...     triples_factory=dataset.training,
...     optimizer=optimizer,
... )
>>> # Step 4: Train
>>> losses = training_loop.train(
...     triples_factory=dataset.training,
...     num_epochs=5,
...     batch_size=256,
... )
>>> # Step 5: Evaluate the model
>>> from pykeen.evaluation import RankBasedEvaluator
>>> evaluator = RankBasedEvaluator()
>>> metric_result = evaluator.evaluate(
...     model=model,
...     mapped_triples=dataset.testing.mapped_triples,
...     additional_filter_triples=dataset.training.mapped_triples,
...     batch_size=8192,
... )
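
Individual metrics can afterwards be read from the result object, e.g. (a brief sketch; the metric name string is an assumption following PyKEEN's rank-based metric naming):

>>> # Step 6: Inspect a metric
>>> hits_at_10 = metric_result.get_metric('hits@10')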

Initialize the model.

Attributes Summary

hpo_default

The default strategy for optimizing the model's hyper-parameters

loss_default_kwargs

The default parameters for the default loss function class

Attributes Documentation

Parameters:
  • triples_factory (CoreTriplesFactory) – the factory of training triples, used to determine the numbers of entities and relations

  • input_channels (int | None) – the number of input channels of the convolution; if None, it is inferred from the other parameters

  • output_channels (int) – the number of convolutional filters

  • embedding_height (int | None) – the height of the matrix into which each embedding is reshaped; if None, it is inferred from the other parameters

  • embedding_width (int | None) – the width of the matrix into which each embedding is reshaped; if None, it is inferred from the other parameters

  • kernel_height (int) – the height of the convolution kernels

  • kernel_width (int) – the width of the convolution kernels

  • input_dropout (float) – the dropout rate applied to the input before the convolution

  • output_dropout (float) – the dropout rate applied after the linear transformation

  • feature_map_dropout (float) – the dropout rate applied to the feature maps after the convolution

  • embedding_dim (int) – the dimension of the entity and relation embeddings; it must equal embedding_height * embedding_width

  • apply_batch_normalization (bool) – whether to apply batch normalization (see above)

  • entity_initializer (str | Callable[[FloatTensor], FloatTensor] | None) – the initializer for the entity embeddings; defaults to xavier_normal_

  • relation_initializer (str | Callable[[FloatTensor], FloatTensor] | None) – the initializer for the relation embeddings; defaults to xavier_normal_

hpo_default: ClassVar[Mapping[str, Any]] = {'feature_map_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'input_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'output_channels': {'high': 6, 'low': 4, 'scale': 'power_two', 'type': <class 'int'>}, 'output_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}}

The default strategy for optimizing the model’s hyper-parameters
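
These ranges are picked up automatically when the model is used with PyKEEN's HPO pipeline; a minimal sketch (the trial count is arbitrary):

>>> from pykeen.hpo import hpo_pipeline
>>> hpo_result = hpo_pipeline(dataset='Nations', model='ConvE', n_trials=2)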

loss_default_kwargs: ClassVar[Mapping[str, Any]] = {}

The default parameters for the default loss function class