SoftMarginRankingLoss

class SoftMarginRankingLoss(margin=1.0, reduction='mean')[source]

Bases: pykeen.losses.MarginPairwiseLoss

A module for the soft pairwise hinge loss (i.e., soft margin ranking loss).

\[L(k, \bar{k}) = \log(1 + \exp(f(\bar{k}) - f(k) + \lambda))\]

Where \(k\) are the positive triples, \(\bar{k}\) are the negative triples, \(f\) is the interaction function (e.g., pykeen.models.TransE has \(f(h,r,t)=-\|\mathbf{e}_h+\mathbf{r}_r-\mathbf{e}_t\|_p\)), \(g(x)=\log(1 + \exp(x))\) is the softplus activation function, and \(\lambda\) is the margin.
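
As a quick numeric check, the formula can be reproduced directly in PyTorch. This is a minimal sketch with made-up scores, not PyKEEN's internal implementation; it assumes only torch:

    import torch
    import torch.nn.functional as F

    # Illustrative plausibility scores: f(k) for positive triples,
    # f(k_bar) for negative triples.
    pos_scores = torch.tensor([3.0, 1.5, -0.2])
    neg_scores = torch.tensor([2.5, 2.0, -1.0])
    margin = 1.0

    # L(k, k_bar) = log(1 + exp(f(k_bar) - f(k) + lambda)), written out ...
    manual = torch.log1p(torch.exp(neg_scores - pos_scores + margin))

    # ... which is exactly the softplus activation applied to the margin term.
    via_softplus = F.softplus(neg_scores - pos_scores + margin)

    assert torch.allclose(manual, via_softplus)
    print(via_softplus.mean())  # the 'mean' reduction over the batch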

See also

When choosing margin=0, this loss becomes equivalent to pykeen.losses.SoftplusLoss. It is also closely related to pykeen.losses.MarginRankingLoss, differing only in that this loss uses the softplus activation while pykeen.losses.MarginRankingLoss uses the ReLU activation.
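
The relationship to pykeen.losses.MarginRankingLoss can be made concrete with a short sketch (again assuming only torch, with illustrative scores): the two losses apply different activations to the same margin term.

    import torch
    import torch.nn.functional as F

    pos_scores = torch.tensor([3.0, 1.5, -0.2])
    neg_scores = torch.tensor([2.5, 2.0, -1.0])
    delta = neg_scores - pos_scores + 1.0  # margin term with lambda = 1

    # MarginRankingLoss uses ReLU: a hard hinge that is exactly zero once
    # the positive outscores the negative by at least the margin.
    hard = F.relu(delta)

    # SoftMarginRankingLoss uses softplus: a smooth, everywhere-differentiable
    # upper bound on the same hinge.
    soft = F.softplus(delta)

    # With margin=0, the loss reduces to softplus of the raw score difference.
    zero_margin = F.softplus(neg_scores - pos_scores)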

Initialize the margin loss instance.

Parameters
  • margin (float) – The margin by which positive and negative scores should be apart.

  • margin_activation – The margin activation. This class fixes it to 'softplus', i.e. \(g(x) = \log(1 + \exp(x))\), which yields the “soft-margin” formulation discussed in https://arxiv.org/abs/1703.07737; it is therefore not exposed in the signature above. The parent class defaults to 'relu', i.e. \(g(x) = \max(0, x)\), which gives the default “margin loss”.

  • reduction (str) – The name of the reduction operation to aggregate the individual loss values from a batch to a scalar loss value. From {‘mean’, ‘sum’}.
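
For context, a typical way to select this loss is through the training pipeline. A minimal sketch, assuming the standard pykeen.pipeline API; the dataset and model choices are purely illustrative:

    from pykeen.losses import SoftMarginRankingLoss
    from pykeen.pipeline import pipeline

    # Train TransE on the small built-in Nations dataset with this loss;
    # loss_kwargs are forwarded to SoftMarginRankingLoss.__init__.
    result = pipeline(
        dataset="Nations",
        model="TransE",
        loss=SoftMarginRankingLoss,
        loss_kwargs=dict(margin=1.0, reduction="mean"),
    )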

Attributes Summary

hpo_default

The default strategy for optimizing the loss's hyper-parameters

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'margin': {'high': 3, 'low': 0, 'type': <class 'float'>}}

The default strategy for optimizing the loss’s hyper-parameters
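
When this loss is used with PyKEEN's HPO pipeline, the margin is sampled from the default range above unless overridden. A minimal sketch, assuming the standard pykeen.hpo API; the trial count, dataset, and model are illustrative:

    from pykeen.hpo import hpo_pipeline

    # Each trial samples 'margin' as a float in [0, 3], per hpo_default;
    # n_trials is kept tiny purely for illustration.
    hpo_result = hpo_pipeline(
        n_trials=5,
        dataset="Nations",
        model="TransE",
        loss="SoftMarginRankingLoss",
    )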