DeltaPointwiseLoss

class DeltaPointwiseLoss(margin=0.0, margin_activation='softplus', reduction='mean')[source]

Bases: PointwiseLoss

A generic class for delta-pointwise losses, i.e., pointwise losses that apply an activation to the margin-shifted, label-signed score \(\lambda - \hat{l} \cdot s\).

Pointwise Loss                | Activation | Margin              | Formulation                                              | Implementation
Pointwise Hinge               | ReLU       | \(\lambda \neq 0\)  | \(g(s, l) = \max(0, \lambda - \hat{l} \cdot s)\)         | pykeen.losses.PointwiseHingeLoss
Soft Pointwise Hinge          | softplus   | \(\lambda \neq 0\)  | \(g(s, l) = \log(1 + \exp(\lambda - \hat{l} \cdot s))\)  | pykeen.losses.SoftPointwiseHingeLoss
Pointwise Logistic (softplus) | softplus   | \(\lambda = 0\)     | \(g(s, l) = \log(1 + \exp(-\hat{l} \cdot s))\)           | pykeen.losses.SoftplusLoss
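
All rows above instantiate the same template \(g(s, l) = \mathrm{act}(\lambda - \hat{l} \cdot s)\), differing only in the activation and in whether the margin \(\lambda\) is zero. The following is a minimal sketch of that template, assuming the usual mapping of binary labels \(l \in \{0, 1\}\) to signs \(\hat{l} = 2l - 1\); it illustrates the formula only and is not the library's exact implementation:

    import torch

    def delta_pointwise_loss_sketch(
        logits: torch.Tensor,
        labels: torch.Tensor,
        margin: float = 0.0,
        activation=torch.nn.functional.softplus,
    ) -> torch.Tensor:
        """Sketch of g(s, l) = activation(margin - l_hat * s) with mean reduction."""
        # assumed transform of binary {0, 1} labels to signs {-1, +1}
        l_hat = 2.0 * labels - 1.0
        return activation(margin - l_hat * logits).mean()

With margin=0.0 and softplus this reproduces the pointwise logistic row; with a non-zero margin and torch.nn.functional.relu it reproduces the pointwise hinge row.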

Initialize the loss.

Parameters:
  • margin (Optional[float]) – the margin, cf. PointwiseLoss.__init__()

  • margin_activation (Union[str, Module, None]) – the margin activation, or a hint thereof, cf. margin_activation_resolver.

  • reduction (str) – the reduction, cf. PointwiseLoss.__init__()
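
The three table rows can also be expressed directly through this constructor. A sketch, assuming DeltaPointwiseLoss is importable from pykeen.losses as the implementation column suggests (the dedicated subclasses listed in the table are the usual entry points):

    from pykeen.losses import DeltaPointwiseLoss

    # pointwise hinge: non-zero margin with a hard (ReLU) margin activation
    pointwise_hinge = DeltaPointwiseLoss(margin=1.0, margin_activation="relu")

    # soft pointwise hinge: non-zero margin with a softplus margin activation
    soft_pointwise_hinge = DeltaPointwiseLoss(margin=1.0, margin_activation="softplus")

    # pointwise logistic (softplus): zero margin with a softplus margin activation
    pointwise_logistic = DeltaPointwiseLoss(margin=0.0, margin_activation="softplus")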

Attributes Summary

hpo_default

The default strategy for optimizing the loss's hyper-parameters.

Methods Summary

forward(logits, labels)

Calculate the loss for the given scores and labels.

Attributes Documentation

hpo_default: ClassVar[Mapping[str, Any]] = {'margin': {'high': 3, 'low': 0, 'type': <class 'float'>}, 'margin_activation': {'choices': {'hard', 'relu', 'soft', 'softplus'}, 'type': 'categorical'}}

The default strategy for optimizing the loss's hyper-parameters.

Methods Documentation

forward(logits, labels)[source]

Calculate the loss for the given scores and labels.

Parameters:
  • logits (FloatTensor) – the predicted scores (logits)

  • labels (FloatTensor) – the ground-truth labels

Return type:

FloatTensor
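
A hypothetical usage sketch: the 1-D shapes and the binary {0, 1} label encoding below are illustrative assumptions, and the loss is invoked through the module call, which dispatches to forward():

    import torch
    from pykeen.losses import DeltaPointwiseLoss

    loss_fn = DeltaPointwiseLoss(margin=1.0, margin_activation="relu")

    logits = torch.tensor([2.3, -0.7, 0.1])   # predicted scores
    labels = torch.tensor([1.0, 0.0, 1.0])    # assumed binary labels

    value = loss_fn(logits, labels)           # scalar tensor under the default 'mean' reduction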