BCEWithLogitsLoss

class BCEWithLogitsLoss(reduction='mean')

Bases: PointwiseLoss

The binary cross entropy loss.

For a label function \(l:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \{0,1\}\) and an interaction function \(f:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}\), the binary cross entropy loss is defined as:

\[L(h, r, t) = -(l(h,r,t) \cdot \log(\sigma(f(h,r,t))) + (1 - l(h,r,t)) \cdot \log(1 - \sigma(f(h,r,t))))\]

where \(\sigma\) denotes the logistic sigmoid function:

\[\sigma(x) = \frac{1}{1 + \exp(-x)}\]

Note

The loss can equivalently be written using the softplus function \(h_{\text{softplus}}(x) = \log(1 + \exp(x)) = -\log(\sigma(-x))\): since \(-\log(\sigma(x)) = h_{\text{softplus}}(-x)\), we have \(L(h,r,t) = l(h,r,t) \cdot h_{\text{softplus}}(-f(h,r,t)) + (1 - l(h,r,t)) \cdot h_{\text{softplus}}(f(h,r,t))\).

Thus, the problem is framed as binary classification of triples, where the interaction function’s outputs are regarded as logits. A minimal numerical sketch of this equivalence follows below.
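As a minimal sketch (not part of the PyKEEN API; the tensor values are illustrative), the definition above can be checked numerically against torch.nn.functional.binary_cross_entropy_with_logits and against the softplus formulation from the note:

import torch
import torch.nn.functional as F

scores = torch.tensor([2.5, -1.0, 0.3])  # interaction function outputs f(h, r, t)
labels = torch.tensor([1.0, 0.0, 1.0])   # label function outputs l(h, r, t)

# evaluate the definition directly
sigma = torch.sigmoid(scores)
manual = -(labels * torch.log(sigma) + (1 - labels) * torch.log(1 - sigma))

# both the logit-based built-in and the softplus formulation agree
assert torch.allclose(manual, F.binary_cross_entropy_with_logits(scores, labels, reduction='none'))
assert torch.allclose(manual, labels * F.softplus(-scores) + (1 - labels) * F.softplus(scores))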

Warning

This loss is ill-suited for translational distance models: they score triples by negated distances, so their outputs are never positive and \(\sigma(f(h,r,t))\) cannot exceed \(0.5\) for any triple, i.e., no positive triple can ever be assigned a probability above \(0.5\).
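To illustrate the warning with a short sketch (values are illustrative), negated distances yield implied probabilities capped at 0.5:

import torch

distances = torch.tensor([0.0, 1.0, 5.0])
scores = -distances           # translational distance models score by negated distance
print(torch.sigmoid(scores))  # tensor([0.5000, 0.2689, 0.0067]) – never above 0.5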

Note

The related torch module is torch.nn.BCEWithLogitsLoss, but it cannot be used interchangeably in PyKEEN because of the extended functionality implemented in PyKEEN’s loss functions.

Initialize the loss.

Parameters:

reduction (str) – the name of the reduction operation used to aggregate the individual loss values of a batch into a scalar, e.g. 'mean' (default) or 'sum'; cf. _Loss.__init__
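A hedged usage sketch: PyKEEN’s pipeline accepts a loss by name via its loss resolver; the resolver string 'bcewithlogits', and the dataset and model chosen below, are illustrative assumptions:

from pykeen.pipeline import pipeline

result = pipeline(
    dataset='Nations',
    model='DistMult',                    # a model with unbounded scores (cf. the warning above)
    loss='bcewithlogits',                # assumed resolver string for BCEWithLogitsLoss
    loss_kwargs=dict(reduction='mean'),  # forwarded to BCEWithLogitsLoss.__init__
)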

Attributes Summary

synonyms

synonyms of this loss

Methods Summary

forward(scores, labels)

Define the computation performed at every call.

Attributes Documentation

synonyms: ClassVar[Set[str] | None] = {'Negative Log Likelihood Loss'}

synonyms of this loss

Methods Documentation

forward(scores, labels)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

  • scores (FloatTensor) – the triple scores (logits) produced by the interaction function

  • labels (FloatTensor) – the binary triple labels

Return type:

FloatTensor
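For illustration, a minimal sketch of using the loss directly (tensor shapes and values are assumptions); per the note above, the module instance is called rather than forward():

import torch
from pykeen.losses import BCEWithLogitsLoss

loss_fn = BCEWithLogitsLoss(reduction='mean')
scores = torch.randn(8)                     # raw triple scores (logits)
labels = torch.randint(0, 2, (8,)).float()  # binary triple labels
value = loss_fn(scores, labels)             # runs hooks; do not call loss_fn.forward() directly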