class ConvKB(*, embedding_dim=200, hidden_dropout_rate=0.0, num_filters=400, regularizer=None, entity_initializer=<function uniform_>, relation_initializer=<function uniform_>, **kwargs)[source]

Bases: ERModel

An implementation of ConvKB from [nguyen2018].

ConvKB uses a convolutional neural network (CNN) whose feature maps capture global interactions of the input. Each triple \((h,r,t) \in \mathbb{K}\) is represented as an input matrix \(\mathbf{A} = [\mathbf{h}; \mathbf{r}; \mathbf{t}] \in \mathbb{R}^{d \times 3}\) in which the columns are the embeddings of \(h\), \(r\), and \(t\). In the convolution layer, a set of convolutional filters \(\omega_i \in \mathbb{R}^{1 \times 3}, i = 1, \dots, \tau,\) is applied to the input in order to compute, for each dimension, the global interactions of the embedded triple. Each \(\omega_i\) is applied to every row of \(\mathbf{A}\), creating a feature map \(\mathbf{v}_i = [v_{i,1}, \dots, v_{i,d}] \in \mathbb{R}^d\):

\[\mathbf{v}_i = g(\omega_i \mathbf{A} + \mathbf{b})\]

where \(\mathbf{b} \in \mathbb{R}\) denotes a bias term and \(g\) an activation function which is employed element-wise. Based on the resulting feature maps \(\mathbf{v}_1, \dots, \mathbf{v}_{\tau}\), the plausibility score of a triple is given by:

\[f(h,r,t) = [\mathbf{v}_1; \ldots; \mathbf{v}_\tau] \cdot \mathbf{w}\]

where \([\mathbf{v}_1; \ldots; \mathbf{v}_\tau] \in \mathbb{R}^{\tau d \times 1}\) and \(\mathbf{w} \in \mathbb{R}^{\tau d \times 1}\) is a shared weight vector. ConvKB may be seen as a restriction of pykeen.models.ERMLP with a certain weight-sharing pattern in the first layer.
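The interaction above can be sketched in plain PyTorch; this is a minimal illustration of the scoring function, not PyKEEN's actual implementation, and the dimensions, seed, and ReLU activation are chosen only for the example:

```python
import torch

torch.manual_seed(0)

d, tau, batch = 8, 4, 5  # embedding dimension, number of filters, batch size

# Each of the tau filters is a (1, 3) kernel slid over the d rows of A,
# so a Conv2d over input shape (batch, 1, d, 3) yields (batch, tau, d, 1).
conv = torch.nn.Conv2d(in_channels=1, out_channels=tau, kernel_size=(1, 3))
w = torch.nn.Parameter(torch.randn(tau * d))  # shared weight vector

def convkb_score(h, r, t):
    # A = [h; r; t] has shape (batch, d, 3); add a channel dim for Conv2d.
    a = torch.stack([h, r, t], dim=-1).unsqueeze(1)  # (batch, 1, d, 3)
    v = torch.relu(conv(a))                          # (batch, tau, d, 1)
    # Concatenate the feature maps and take the dot product with w.
    return v.flatten(start_dim=1) @ w                # (batch,)

h, r, t = (torch.randn(batch, d) for _ in range(3))
scores = convkb_score(h, r, t)
```

Each triple in the batch receives a single real-valued plausibility score.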


Initialize the model.

  • embedding_dim (int) – The entity embedding dimension \(d\).

  • hidden_dropout_rate (float) – The hidden dropout rate.

  • num_filters (int) – The number of convolutional filters to use.

  • regularizer (Optional[Regularizer]) – The regularizer to use. Defaults to an \(L_p\) regularizer.

  • entity_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – Entity initializer function. Defaults to torch.nn.init.uniform_().

  • relation_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – Relation initializer function. Defaults to torch.nn.init.uniform_().

  • kwargs – Remaining keyword arguments passed through to pykeen.models.ERModel.

To be consistent with the paper, pass entity and relation embeddings pre-trained from TransE.
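PyKEEN accepts a callable of the form tensor → tensor as an initializer, so pre-trained TransE embeddings can be supplied by an initializer that fills the freshly created embedding tensor in place. The sketch below illustrates that shape only; `pretrained_entities` and `init_from_transe` are hypothetical names, and the random tensor stands in for real TransE output:

```python
import torch

# Hypothetical pre-trained TransE embeddings, one row per entity.
pretrained_entities = torch.randn(14, 200)

def init_from_transe(tensor: torch.Tensor) -> torch.Tensor:
    """Initializer in the tensor -> tensor style: fill the given
    embedding tensor in place with the pre-trained values."""
    with torch.no_grad():
        tensor.copy_(pretrained_entities)
    return tensor

# Demonstrate the in-place fill on a plain embedding table.
emb = torch.nn.Embedding(14, 200)
init_from_transe(emb.weight)
```

Such a callable could then be passed as `entity_initializer` (and analogously for relations), in place of the default torch.nn.init.uniform_().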

Attributes Summary

hpo_default – The default strategy for optimizing the model's hyper-parameters.

regularizer_default_kwargs – The \(L_p\) regularization settings used by [nguyen2018] for ConvKB.

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': <class 'int'>}, 'hidden_dropout_rate': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'num_filters': {'high': 9, 'low': 7, 'scale': 'power_two', 'type': <class 'int'>}}

The default strategy for optimizing the model’s hyper-parameters

regularizer_default_kwargs: ClassVar[Mapping[str, Any]] = {'apply_only_once': True, 'normalize': True, 'p': 2.0, 'weight': 0.0005}

The \(L_p\) regularization settings used by [nguyen2018] for ConvKB.