class MarginRankingLoss(margin=1.0, reduction='mean')

Bases: MarginPairwiseLoss

The pairwise hinge loss (i.e., margin ranking loss).

\[L(k, \bar{k}) = \max(0, f(\bar{k}) - f(k) + \lambda)\]

where \(k\) denotes positive triples, \(\bar{k}\) negative triples, \(f\) the interaction function (e.g., TransE has \(f(h,r,t)=-\|\mathbf{e}_h+\mathbf{e}_r-\mathbf{e}_t\|_p\)), and \(\lambda\) the margin. The \(\max(0, x)\) term is the ReLU activation function \(g\).
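As a minimal illustration of the formula (a plain-Python sketch with a hypothetical helper name, not PyKEEN's tensor-based implementation):

```python
def margin_ranking_loss(pos_score: float, neg_score: float, margin: float = 1.0) -> float:
    """Pairwise hinge loss: max(0, f(neg) - f(pos) + margin).

    Scores follow the "higher is better" convention, so the loss is zero
    once the positive triple's score exceeds the negative's by the margin.
    """
    return max(0.0, neg_score - pos_score + margin)

# A well-separated pair incurs no loss; a violating pair is penalized.
print(margin_ranking_loss(pos_score=2.5, neg_score=0.5))  # 0.0
print(margin_ranking_loss(pos_score=0.5, neg_score=2.5))  # 3.0
```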

See also

MRL is closely related to pykeen.losses.SoftMarginRankingLoss, differing only in that this loss uses the ReLU activation while pykeen.losses.SoftMarginRankingLoss uses the softplus activation. MRL is also related to pykeen.losses.PairwiseLogisticLoss, which is the special case of pykeen.losses.SoftMarginRankingLoss with zero margin.
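The difference between the two activations can be seen directly (an illustrative sketch with hypothetical helper names, not PyKEEN API):

```python
import math

def relu(x: float) -> float:
    """Hard hinge activation, as used by MarginRankingLoss."""
    return max(0.0, x)

def softplus(x: float) -> float:
    """Smooth activation, as used by SoftMarginRankingLoss."""
    return math.log1p(math.exp(x))

# The activations agree for large margin violations, but softplus stays
# positive (and differentiable) even where the hinge is inactive.
for x in (-5.0, 0.0, 5.0):
    print(f"x={x:+.1f}  relu={relu(x):.4f}  softplus={softplus(x):.4f}")
```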


The related torch module is torch.nn.MarginRankingLoss, but it cannot be used interchangeably in PyKEEN because of the extended functionality implemented in PyKEEN's loss functions.

Initialize the margin loss instance.

Parameters

  • margin (float) – The margin by which positive and negative scores should be apart.

  • reduction (str) – The name of the reduction operation to aggregate the individual loss values from a batch to a scalar loss value. From {‘mean’, ‘sum’}.
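A minimal sketch of how the two reduction modes aggregate a batch of per-pair hinge losses (plain Python with a hypothetical helper name; PyKEEN itself operates on torch score tensors):

```python
def reduce_pairwise_hinge(
    pos_scores: list[float],
    neg_scores: list[float],
    margin: float = 1.0,
    reduction: str = "mean",
) -> float:
    """Aggregate per-pair losses max(0, neg - pos + margin) to a scalar."""
    losses = [max(0.0, n - p + margin) for p, n in zip(pos_scores, neg_scores)]
    if reduction == "mean":
        return sum(losses) / len(losses)
    if reduction == "sum":
        return sum(losses)
    raise ValueError(f"Unknown reduction: {reduction!r}")

# Per-pair losses here are [0.0, 3.0]:
print(reduce_pairwise_hinge([2.0, 0.0], [0.0, 2.0]))                   # 1.5
print(reduce_pairwise_hinge([2.0, 0.0], [0.0, 2.0], reduction="sum"))  # 3.0
```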

Attributes Summary

hpo_default
    The default strategy for optimizing the loss's hyper-parameters

synonyms
    Synonyms of this loss
Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'margin': {'high': 3, 'low': 0, 'type': <class 'float'>}}

The default strategy for optimizing the loss’s hyper-parameters

synonyms: ClassVar[Set[str] | None] = {'Pairwise Hinge Loss'}

Synonyms of this loss