MarginPairwiseLoss

class MarginPairwiseLoss(margin: float = 1.0, margin_activation: str | Module | None = None, reduction: Literal['mean', 'sum'] = 'mean')[source]

Bases: PairwiseLoss

The generalized margin ranking loss.

\[L(k, \bar{k}) = g(f(\bar{k}) - f(k) + \lambda)\]

Where \(k\) are the positive triples, \(\bar{k}\) are the negative triples, \(f\) is the interaction function (e.g., pykeen.models.TransE has \(f(h,r,t)=-||\mathbf{e}_h+\mathbf{e}_r-\mathbf{e}_t||_p\)), \(g(x)\) is an activation function such as the ReLU or softplus, and \(\lambda\) is the margin.
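To make the pieces concrete, here is a minimal plain-Python sketch (not the PyKEEN implementation) of a TransE-style interaction function together with the loss:

```python
import math

def transe_score(e_h, e_r, e_t):
    # f(h, r, t) = -||e_h + e_r - e_t||_2: higher scores mean more plausible triples.
    return -math.sqrt(sum((h + r - t) ** 2 for h, r, t in zip(e_h, e_r, e_t)))

def margin_pairwise_loss(pos_scores, neg_scores, margin=1.0):
    # L(k, k_bar) = g(f(k_bar) - f(k) + lambda) with g = ReLU, averaged over pairs.
    terms = [max(0.0, n - p + margin) for p, n in zip(pos_scores, neg_scores)]
    return sum(terms) / len(terms)
```

A perfectly embedded positive triple scores 0, and the loss vanishes once every positive outscores its negative by at least the margin.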

Initialize the margin loss instance.

Parameters:
  • margin (float) – The margin by which positive and negative scores should be apart.

  • margin_activation (Hint[nn.Module]) – The margin activation. Defaults to 'relu', i.e. \(g(x) = \max(0, x)\), which yields the default “margin loss”. Using 'softplus' leads to a “soft-margin” formulation, as discussed in https://arxiv.org/abs/1703.07737.

  • reduction (Literal['mean', 'sum']) – The name of the reduction operation used to aggregate the individual loss values of a batch into a scalar loss value; one of 'mean' or 'sum'.
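The effect of the margin_activation choice can be illustrated in plain Python (the function names here are illustrative, not PyKEEN internals):

```python
import math

def relu(x):
    # hard margin: contributes exactly zero once the margin is satisfied
    return max(0.0, x)

def softplus(x):
    # soft margin: smooth and strictly positive everywhere
    return math.log1p(math.exp(x))

def margin_loss(pos_scores, neg_scores, margin=1.0, activation=relu):
    terms = [activation(n - p + margin) for p, n in zip(pos_scores, neg_scores)]
    return sum(terms) / len(terms)
```

With ReLU, pairs already separated by the margin contribute nothing (and hence no gradient); softplus keeps a small, decaying contribution for such pairs, which is the “soft-margin” behaviour.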

Attributes Summary

hpo_default

The default strategy for optimizing the loss's hyper-parameters

Methods Summary

forward(pos_scores, neg_scores[, ...])

Calculate the pairwise loss.

process_lcwa_scores(predictions, labels[, ...])

Process scores from LCWA training loop.

process_slcwa_scores(positive_scores, ...[, ...])

Process scores from sLCWA training loop.

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'margin': {'high': 3, 'low': 0, 'type': <class 'float'>}, 'margin_activation': {'choices': {'hard', 'relu', 'soft', 'softplus'}, 'type': 'categorical'}}

The default strategy for optimizing the loss’s hyper-parameters

Methods Documentation

forward(pos_scores: Tensor, neg_scores: Tensor, pos_weights: Tensor | None = None, neg_weights: Tensor | None = None) Tensor[source]

Calculate the pairwise loss.

Note

The positive and negative scores need to be broadcastable.

Note

If given, the positive/negative weights need to be broadcastable to the respective scores.

Parameters:
  • pos_scores (Tensor) – The positive scores.

  • neg_scores (Tensor) – The negative scores.

  • pos_weights (Tensor | None) – The sample weights for positives.

  • neg_weights (Tensor | None) – The sample weights for negatives.

Returns:

The scalar loss value.

Return type:

Tensor
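The broadcasting note above typically means pos_scores of shape (batch_size, 1) are paired against neg_scores of shape (batch_size, num_neg_per_pos). A plain-Python sketch of that pairing (illustrative; the real implementation is vectorized):

```python
def forward_sketch(pos_scores, neg_scores, margin=1.0, reduction="mean"):
    # pos_scores: length-batch_size list.
    # neg_scores: nested lists of shape (batch_size, num_neg_per_pos).
    # Each positive is compared against every one of its negatives.
    terms = [
        max(0.0, n - p + margin)
        for p, negs in zip(pos_scores, neg_scores)
        for n in negs
    ]
    return sum(terms) / len(terms) if reduction == "mean" else sum(terms)
```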

process_lcwa_scores(predictions: Tensor, labels: Tensor, label_smoothing: float | None = None, num_entities: int | None = None, weights: Tensor | None = None) Tensor[source]

Process scores from LCWA training loop.

Parameters:
  • predictions (Tensor) – shape: (*shape) The scores.

  • labels (Tensor) – shape: (*shape) The labels.

  • label_smoothing (float | None) – An optional label smoothing parameter.

  • num_entities (int | None) – The number of entities (required for label-smoothing).

  • weights (Tensor | None) – shape: (*shape) Sample weights.

Returns:

A scalar loss value.

Return type:

Tensor
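For intuition, label smoothing in the LCWA setting moves a fraction \(\epsilon\) of the probability mass toward the uniform distribution over entities, which is why num_entities is required. One common formulation (a sketch, not necessarily the exact variant PyKEEN uses) is:

```python
def smooth_labels(labels, epsilon, num_entities):
    # y' = y * (1 - epsilon) + epsilon / num_entities:
    # the true label is damped, and every entity receives a small floor probability.
    return [y * (1.0 - epsilon) + epsilon / num_entities for y in labels]
```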

process_slcwa_scores(positive_scores: Tensor, negative_scores: Tensor, label_smoothing: float | None = None, batch_filter: Tensor | None = None, num_entities: int | None = None, pos_weights: Tensor | None = None, neg_weights: Tensor | None = None) Tensor[source]

Process scores from sLCWA training loop.

Parameters:
  • positive_scores (Tensor) – shape: (batch_size, 1) The scores for positive triples.

  • negative_scores (Tensor) – shape: (batch_size, num_neg_per_pos) or (num_unfiltered_negatives,) The scores for the negative triples, either in dense 2D shape, or in case they are already filtered, in sparse shape. If they are given in sparse shape, batch_filter needs to be provided, too.

  • label_smoothing (float | None) – An optional label smoothing parameter.

  • batch_filter (Tensor | None) – shape: (batch_size, num_neg_per_pos) An optional filter of negative scores which were kept. Given if and only if negative_scores have been pre-filtered.

  • num_entities (int | None) – The number of entities. Only required if label smoothing is enabled.

  • pos_weights (Tensor | None) – shape: (batch_size, 1) Positive sample weights.

  • neg_weights (Tensor | None) – shape: (batch_size, num_neg_per_pos) Negative sample weights.

Returns:

A scalar loss term.

Return type:

Tensor
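The relationship between the dense and sparse negative-score shapes can be sketched in plain Python (illustrative only): applying batch_filter to a dense (batch_size, num_neg_per_pos) score matrix yields the flat vector of kept negatives.

```python
def filter_negative_scores(dense_scores, batch_filter):
    # dense_scores and batch_filter share shape (batch_size, num_neg_per_pos);
    # the result has the sparse shape (num_unfiltered_negatives,).
    return [
        score
        for row, keep_row in zip(dense_scores, batch_filter)
        for score, keep in zip(row, keep_row)
        if keep
    ]
```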