ConvE
- class ConvE(triples_factory: ~pykeen.triples.triples_factory.CoreTriplesFactory, input_channels: int | None = None, output_channels: int = 32, embedding_height: int | None = None, embedding_width: int | None = None, kernel_height: int = 3, kernel_width: int = 3, input_dropout: float = 0.2, output_dropout: float = 0.3, feature_map_dropout: float = 0.2, embedding_dim: int = 200, apply_batch_normalization: bool = True, entity_initializer: str | ~typing.Callable[[~torch.Tensor], ~torch.Tensor] | None = <function xavier_normal_>, relation_initializer: str | ~typing.Callable[[~torch.Tensor], ~torch.Tensor] | None = <function xavier_normal_>, **kwargs)[source]
Bases: ERModel[Tensor, Tensor, tuple[Tensor, Tensor]]

An implementation of ConvE from [dettmers2018].
ConvE represents each entity by a \(d\)-dimensional embedding and a scalar tail bias, and each relation by a \(d\)-dimensional vector. All three components can be stored as Embedding.

On top of these representations, this model uses the ConvEInteraction to calculate scores.

Example::
    """Example of using ConvE outside of the pipeline."""

    # Step 1: Get triples
    from pykeen.datasets import get_dataset

    dataset = get_dataset(dataset="nations", dataset_kwargs=dict(create_inverse_triples=True))

    # Step 2: Configure the model
    from pykeen.models import ConvE

    model = ConvE(
        triples_factory=dataset.training,
        embedding_dim=200,
        input_channels=1,
        output_channels=32,
        embedding_height=10,
        embedding_width=20,
        kernel_height=3,
        kernel_width=3,
        input_dropout=0.2,
        feature_map_dropout=0.2,
        output_dropout=0.3,
    )

    # Step 3: Configure the loop
    from torch.optim import Adam

    optimizer = Adam(params=model.get_grad_params())

    from pykeen.training import LCWATrainingLoop

    training_loop = LCWATrainingLoop(model=model, optimizer=optimizer)

    # Step 4: Train
    losses = training_loop.train(triples_factory=dataset.training, num_epochs=5, batch_size=256)

    # Step 5: Evaluate the model
    from pykeen.evaluation import RankBasedEvaluator

    evaluator = RankBasedEvaluator()
    metric_result = evaluator.evaluate(
        model=model,
        mapped_triples=dataset.testing.mapped_triples,
        additional_filter_triples=dataset.training.mapped_triples,
        batch_size=8192,
    )
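The pipeline above hides the interaction function itself. As a rough illustration of what ConvEInteraction computes, the following is a minimal numpy sketch of the ConvE scoring idea for a single triple: the head and relation embeddings are reshaped into 2D grids, stacked, convolved, and the flattened feature map is projected back to the embedding dimension and matched against the tail. This is a simplified sketch, not PyKEEN's implementation: dropout and batch normalization are omitted, and the convolution kernels and projection matrix are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 200                # embedding_dim
h, w = 10, 20          # embedding_height, embedding_width (h * w == d)
kh, kw = 3, 3          # kernel_height, kernel_width
out_channels = 2       # small stand-in for output_channels=32

# Hypothetical embeddings for one (head, relation, tail) triple.
head = rng.normal(size=d)
rel = rng.normal(size=d)
tail = rng.normal(size=d)
tail_bias = 0.0

# 1) Reshape head and relation into 2D "images" and stack them vertically.
stacked = np.concatenate([head.reshape(h, w), rel.reshape(h, w)], axis=0)  # (2h, w)

# 2) Valid 2D convolution with random kernels (stand-in for learned filters).
kernels = rng.normal(size=(out_channels, kh, kw))
oh, ow = stacked.shape[0] - kh + 1, stacked.shape[1] - kw + 1
fmap = np.empty((out_channels, oh, ow))
for c in range(out_channels):
    for i in range(oh):
        for j in range(ow):
            fmap[c, i, j] = np.sum(stacked[i:i + kh, j:j + kw] * kernels[c])
fmap = np.maximum(fmap, 0.0)  # ReLU

# 3) Flatten, project back to embedding_dim, and score against the tail.
proj = rng.normal(size=(fmap.size, d))  # stand-in for the learned linear layer
hidden = np.maximum(fmap.reshape(-1) @ proj, 0.0)
score = hidden @ tail + tail_bias  # scalar plausibility score for the triple
```

In the real model the same feature map is typically scored against all candidate tails at once, which is what makes the 1-N (LCWA) training loop used above efficient.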
Initialize the model.
Attributes Summary
hpo_default: The default strategy for optimizing the model's hyper-parameters
The default parameters for the default loss function class
Attributes Documentation
- Parameters:
triples_factory (CoreTriplesFactory)
input_channels (int | None)
output_channels (int)
embedding_height (int | None)
embedding_width (int | None)
kernel_height (int)
kernel_width (int)
input_dropout (float)
output_dropout (float)
feature_map_dropout (float)
embedding_dim (int)
apply_batch_normalization (bool)
entity_initializer (str | Callable[[Tensor], Tensor] | None)
relation_initializer (str | Callable[[Tensor], Tensor] | None)
- hpo_default: ClassVar[Mapping[str, Any]] = {'feature_map_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'input_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}, 'output_channels': {'high': 6, 'low': 4, 'scale': 'power_two', 'type': <class 'int'>}, 'output_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}}
The default strategy for optimizing the model’s hyper-parameters
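To make the grid above concrete, here is a small pure-Python sketch of how such a specification could be sampled: `q` entries draw quantized uniform values, and the `power_two` scale draws an exponent between `low` and `high` and returns its power of two. This is a simplified stand-in for PyKEEN's actual HPO machinery (which delegates to Optuna), and the `sample` helper is hypothetical, not part of the library.

```python
import random

# The hpo_default grid from above, restated as plain data.
hpo_default = {
    "feature_map_dropout": {"type": float, "low": 0.0, "high": 0.5, "q": 0.1},
    "input_dropout": {"type": float, "low": 0.0, "high": 0.5, "q": 0.1},
    "output_channels": {"type": int, "low": 4, "high": 6, "scale": "power_two"},
    "output_dropout": {"type": float, "low": 0.0, "high": 0.5, "q": 0.1},
}

def sample(spec, rng):
    """Draw one value according to a single hyper-parameter spec (hypothetical helper)."""
    if spec.get("scale") == "power_two":
        # low/high are exponents: output_channels is drawn from {16, 32, 64}
        return 2 ** rng.randint(spec["low"], spec["high"])
    if "q" in spec:
        # Quantized uniform: multiples of q between low and high.
        steps = round((spec["high"] - spec["low"]) / spec["q"])
        return spec["low"] + rng.randint(0, steps) * spec["q"]
    return rng.uniform(spec["low"], spec["high"])

rng = random.Random(42)
config = {name: sample(spec, rng) for name, spec in hpo_default.items()}
```

So an HPO run over this grid searches dropout rates in tenths between 0.0 and 0.5 and output channel counts in {16, 32, 64}.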