# DistMultLiteralGated

`class DistMultLiteralGated(triples_factory, embedding_dim=50, input_dropout=0.0, **kwargs)`

An implementation of the LiteralE model with the Gated DistMult interaction from [kristiadi2018].

This model differs from `pykeen.models.DistMultLiteral` in that it uses a gate (like the gates found in LSTMs) instead of a `LinearDropout` module to combine entity embeddings with their numeric literals.

This gate implements the full $g$ function described in the LiteralE paper (see Equation 4).
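As a rough illustration of that gate, the sketch below implements $g(e, l) = z \odot h + (1 - z) \odot e$ with $z = \sigma(W_{ze} e + W_{zl} l + b_z)$ and $h = \tanh(W_h [e; l] + b_h)$, following Equation 4 of the LiteralE paper. This is a minimal pure-Python sketch with hypothetical toy weight matrices, not PyKEEN's actual (vectorized, learned) implementation.

```python
import math

def gate(e, l, W_ze, W_zl, b_z, W_h, b_h):
    """Gated combination g(e, l) from LiteralE (Eq. 4), elementwise sketch.

    e: entity embedding (list of floats, dim d)
    l: numeric-literal vector (list of floats, dim m)
    W_ze (d x d), W_zl (d x m), W_h (d x (d + m)): toy weight matrices
    b_z, b_h: bias vectors of dim d
    All parameters here are hypothetical placeholders for learned weights.
    """
    d = len(e)
    el = e + l  # concatenation [e; l]
    # candidate representation h = tanh(W_h [e; l] + b_h)
    h = [math.tanh(sum(W_h[i][j] * el[j] for j in range(len(el))) + b_h[i])
         for i in range(d)]
    # gate z = sigmoid(W_ze e + W_zl l + b_z)
    z = [1.0 / (1.0 + math.exp(-(sum(W_ze[i][j] * e[j] for j in range(d))
                                 + sum(W_zl[i][k] * l[k] for k in range(len(l)))
                                 + b_z[i])))
         for i in range(d)]
    # g(e, l) = z * h + (1 - z) * e  (elementwise interpolation)
    return [z[i] * h[i] + (1.0 - z[i]) * e[i] for i in range(d)]
```

With all weights zero, the gate opens halfway ($z = 0.5$) and $h = 0$, so the output is simply half the entity embedding; with learned weights, the model interpolates per dimension between the plain embedding and its literal-enriched transformation.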

Initialize the module.

Parameters
• triples_factory (TriplesNumericLiteralsFactory) – The triples factory facilitates access to the dataset.

• interaction – The interaction module (e.g., TransE)

• interaction_kwargs – Additional keyword-based parameters passed to the interaction module’s constructor, if the module is not already instantiated.

• entity_representations – The entity representation or sequence of representations

• relation_representations – The relation representation or sequence of representations

• loss – The loss to use. If None is given, use the loss default specific to the model subclass.

• predict_with_sigmoid – Whether to apply sigmoid to the scores when predicting. Applying sigmoid at prediction time may lead to exactly equal scores for certain triples with very high or very low raw scores. When the model was not trained with sigmoid applied (or with BCEWithLogitsLoss), the scores are not calibrated to perform well under sigmoid.

• preferred_device – The preferred device for model training and inference.

• random_seed – A random seed for initialising the model’s weights. Should be set when reproducibility is desired.

• skip_checks – Whether to skip entity representation checks.
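Once entities have been passed through the gate, the model scores triples with the standard DistMult interaction, a trilinear dot product $\langle h, r, t \rangle = \sum_i h_i\, r_i\, t_i$. The sketch below shows that scoring step on plain vectors for illustration; in `DistMultLiteralGated`, the head and tail vectors would first be transformed by the literal gate.

```python
def distmult_score(h, r, t):
    """DistMult trilinear score <h, r, t> = sum_i h_i * r_i * t_i.

    Illustrative sketch: in DistMultLiteralGated, h and t are the
    gated, literal-enriched entity representations.
    """
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))
```

For example, `distmult_score([1.0, 2.0], [3.0, 4.0], [5.0, 6.0])` computes `1*3*5 + 2*4*6 = 63.0`.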

Attributes Summary

• `hpo_default` – The default strategy for optimizing the model’s hyper-parameters

• `loss_default_kwargs` – The default parameters for the default loss function class

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'embedding_dim': {'high': 256, 'low': 16, 'q': 16, 'type': <class 'int'>}, 'input_dropout': {'high': 0.5, 'low': 0.0, 'q': 0.1, 'type': <class 'float'>}}

The default strategy for optimizing the model’s hyper-parameters

loss_default_kwargs: ClassVar[Mapping[str, Any]] = {'margin': 0.0}

The default parameters for the default loss function class