MarginRankingLoss

class MarginRankingLoss(margin=1.0, margin_activation='relu', reduction='mean')[source]

Bases: pykeen.losses.PairwiseLoss

A module for the margin ranking loss.

\[L(\text{score}^+, \text{score}^-) = \text{activation}(\text{score}^- - \text{score}^+ + \text{margin})\]

Initialize the margin ranking loss instance.

Parameters
  • margin (float) – The margin by which the positive score should exceed the negative score.

  • margin_activation (Union[str, Module, None]) – The margin activation \(h\), applied to \(\Delta = \text{score}^- - \text{score}^+ + \text{margin}\). Defaults to 'relu', i.e. \(h(\Delta) = \max(0, \Delta)\), which gives the default “margin loss”. Using 'softplus', i.e. \(h(\Delta) = \log(1 + \exp(\Delta))\), leads to a “soft-margin” formulation as discussed in https://arxiv.org/abs/1703.07737.

  • reduction (str) – The name of the reduction operation used to aggregate the individual loss values of a batch into a scalar loss value. One of {‘mean’, ‘sum’}.
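
The following is a minimal sketch of instantiating the loss directly and checking the formula above by hand; the concrete scores are placeholders chosen for illustration only.

    import torch
    from pykeen.losses import MarginRankingLoss

    # Default hinge ("relu") activation with a margin of 1.
    loss_fn = MarginRankingLoss(margin=1.0, margin_activation='relu', reduction='mean')

    # With 'relu', the loss is max(0, score^- - score^+ + margin).
    pos_score = torch.as_tensor([2.0])
    neg_score = torch.as_tensor([1.5])
    print(loss_fn(pos_score, neg_score))  # max(0, 1.5 - 2.0 + 1.0) = 0.5

The loss can also be selected by name (e.g. loss='MarginRankingLoss', optionally with loss_kwargs=dict(margin=1.0)) when configuring a training run via PyKEEN's pipeline function.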

Attributes Summary

hpo_default

The default strategy for optimizing the loss’s hyper-parameters.

synonyms

Methods Summary

forward(pos_scores, neg_scores)

Compute the margin loss.

process_lcwa_scores(predictions, labels[, …])

Process scores from LCWA training loop.

process_slcwa_scores(positive_scores, …[, …])

Process scores from sLCWA training loop.

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'margin': {'high': 3, 'low': 0, 'q': 1, 'type': <class 'int'>}, 'margin_activation': {'choices': {'relu', 'softplus'}, 'type': 'categorical'}}

The default strategy for optimizing the loss’s hyper-parameters.
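
This default search space is what PyKEEN's hyper-parameter optimization falls back to when no explicit ranges are supplied. Below is a hedged sketch of using (and partially overriding) it via the hpo_pipeline; the dataset, model, trial budget, and custom range are placeholders, not part of this class.

    from pykeen.hpo import hpo_pipeline

    # By default, `margin` is sampled from the integer range [0, 3] (step 1) and
    # `margin_activation` from {'relu', 'softplus'}, as declared in hpo_default.
    result = hpo_pipeline(
        n_trials=5,               # placeholder trial budget
        dataset='Nations',        # placeholder dataset
        model='TransE',           # placeholder model
        loss='MarginRankingLoss',
        # Optional override of the default search space for the margin:
        loss_kwargs_ranges=dict(
            margin=dict(type=float, low=0.5, high=3.0),
        ),
    )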

synonyms: ClassVar[Optional[Set[str]]] = {'Pairwise Hinge Loss'}

Methods Documentation

forward(pos_scores, neg_scores)[source]

Compute the margin loss.

The positive and negative scores have to be of broadcastable shapes.

Parameters
  • pos_scores (FloatTensor) – The positive scores.

  • neg_scores (FloatTensor) – The negative scores.

Return type

FloatTensor

Returns

A scalar loss term.
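
For example, here is a minimal sketch of calling the module with broadcastable score tensors; the shapes are illustrative only.

    import torch
    from pykeen.losses import MarginRankingLoss

    loss_fn = MarginRankingLoss(margin=1.0)

    batch_size, num_negatives = 4, 8
    pos_scores = torch.randn(batch_size, 1)              # one positive score per row
    neg_scores = torch.randn(batch_size, num_negatives)  # several negative scores per row

    # pos_scores broadcasts against neg_scores to shape (4, 8); the element-wise
    # margin terms are then reduced ('mean' by default) to a scalar.
    loss = loss_fn(pos_scores, neg_scores)
    print(loss.shape)  # torch.Size([])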

process_lcwa_scores(predictions, labels, label_smoothing=None, num_entities=None)[source]

Process scores from LCWA training loop.

Parameters
  • predictions (FloatTensor) – shape: (batch_size, num_entities) The scores.

  • labels (FloatTensor) – shape: (batch_size, num_entities) The labels.

  • label_smoothing (Optional[float]) – An optional label smoothing parameter.

  • num_entities (Optional[int]) – The number of entities.

Return type

FloatTensor

Returns

A scalar loss value.
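
A minimal sketch of the expected shapes under LCWA training, assuming a binary label matrix in which a 1 marks an entity that completes the corresponding training triple; the sizes are placeholders.

    import torch
    from pykeen.losses import MarginRankingLoss

    loss_fn = MarginRankingLoss()

    batch_size, num_entities = 4, 10
    predictions = torch.randn(batch_size, num_entities)

    # Binary (multi-label) target matrix over all entities.
    labels = torch.zeros(batch_size, num_entities)
    labels[torch.arange(batch_size), torch.randint(num_entities, (batch_size,))] = 1.0

    loss = loss_fn.process_lcwa_scores(predictions=predictions, labels=labels)
    print(loss)  # a scalar loss value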

process_slcwa_scores(positive_scores, negative_scores, label_smoothing=None, batch_filter=None, num_entities=None)[source]

Process scores from sLCWA training loop.

Parameters
  • positive_scores (FloatTensor) – shape: (batch_size, 1) The scores for positive triples.

  • negative_scores (FloatTensor) – shape: (batch_size, num_neg_per_pos) or (num_unfiltered_negatives,) The scores for the negative triples, either in dense 2D shape or, in case they have already been filtered, in sparse 1D shape. If they are given in sparse shape, batch_filter needs to be provided, too.

  • label_smoothing (Optional[float]) – An optional label smoothing parameter.

  • batch_filter (Optional[BoolTensor]) – shape: (batch_size, num_neg_per_pos) An optional boolean filter indicating which negative scores were kept. Must be given if and only if negative_scores have been pre-filtered.

  • num_entities (Optional[int]) – The number of entities. Only required if label smoothing is enabled.

Return type

FloatTensor

Returns

A scalar loss term.
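
A minimal sketch of the two documented calling conventions, assuming that the sparse form of negative_scores contains exactly the entries at the True positions of batch_filter; all sizes are placeholders.

    import torch
    from pykeen.losses import MarginRankingLoss

    loss_fn = MarginRankingLoss(margin=1.0)

    batch_size, num_neg_per_pos = 8, 16
    positive_scores = torch.randn(batch_size, 1)
    negative_scores = torch.randn(batch_size, num_neg_per_pos)

    # Dense case: one full row of negative scores per positive triple.
    loss_dense = loss_fn.process_slcwa_scores(
        positive_scores=positive_scores,
        negative_scores=negative_scores,
    )

    # Pre-filtered case: only the kept negative scores are passed as a 1D tensor,
    # together with the boolean mask marking their positions in the dense grid.
    batch_filter = torch.rand(batch_size, num_neg_per_pos) < 0.5
    loss_sparse = loss_fn.process_slcwa_scores(
        positive_scores=positive_scores,
        negative_scores=negative_scores[batch_filter],
        batch_filter=batch_filter,
    )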