BCEWithLogitsLoss
- class BCEWithLogitsLoss(reduction: str = 'mean', pos_weight: None | float = None)[source]
Bases: PointwiseLoss
The binary cross entropy loss.
For label function \(l:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \{0,1\}\) and interaction function \(f:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}\), the binary cross entropy loss is defined as:
\[L(h, r, t) = -(l(h,r,t) \cdot \log(\sigma(f(h,r,t))) + (1 - l(h,r,t)) \cdot \log(1 - \sigma(f(h,r,t))))\]
where \(\sigma\) denotes the logistic sigmoid function
\[\sigma(x) = \frac{1}{1 + \exp(-x)}\]
Note
With \(h_{\text{softplus}}(x) = -\log(\sigma(x)) = \log(1 + \exp(-x))\), the loss can be evaluated without explicitly computing the sigmoid, which is numerically more stable.
Thus, the problem is framed as a binary classification problem over triples, where the interaction function's outputs are regarded as logits.
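To make the logit framing concrete, here is a minimal sketch in plain PyTorch (not PyKEEN's internal implementation) that checks a direct transcription of the formula above against the numerically stable built-in; the batch values are placeholders:

```python
import torch

# Placeholder scores f(h, r, t) for four triples, treated as logits,
# and the corresponding binary labels l(h, r, t).
logits = torch.randn(4)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])

# Direct transcription of the formula above; unstable for large |logits|
# because the sigmoid saturates before the logarithm is applied.
sigma = torch.sigmoid(logits)
naive = -(labels * torch.log(sigma) + (1 - labels) * torch.log(1 - sigma))

# Stable built-in that fuses sigmoid and log; reduction='none' keeps the
# point-wise values for comparison.
stable = torch.nn.functional.binary_cross_entropy_with_logits(
    logits, labels, reduction="none"
)

assert torch.allclose(naive, stable, atol=1e-6)
```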
Warning
This loss is not well-suited for translational distance models, which produce negative distances as scores and therefore cannot yield positive outputs.
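A quick numerical illustration of this point, with placeholder scores standing in for \(-\|h + r - t\|\):

```python
import torch

# A translational distance model scores triples as a negative distance,
# e.g. f(h, r, t) = -||h + r - t||, so scores never exceed zero.
scores = -torch.rand(5)

# The sigmoid of a non-positive score is at most 0.5, so a positive
# triple can never be assigned a probability above 0.5 and its loss
# is bounded below by log(2).
assert (torch.sigmoid(scores) <= 0.5).all()
```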
Note
The related torch module is torch.nn.BCEWithLogitsLoss, but it cannot be used interchangeably in PyKEEN because of the extended functionality implemented in PyKEEN's loss functions.
Initialize the loss criterion.
- Parameters:
  - reduction (str) – the name of the reduction operation used to aggregate the point-wise loss values of a batch into a scalar, one of 'mean' or 'sum'
  - pos_weight (None | float) – an optional weight for positive examples, cf. torch.nn.BCEWithLogitsLoss
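A usage sketch, assuming the standard PyKEEN pipeline interface; the dataset and model below are placeholders:

```python
from pykeen.losses import BCEWithLogitsLoss
from pykeen.pipeline import pipeline

# Direct instantiation; pos_weight up-weights positive triples, which can
# help when negatives vastly outnumber positives.
loss = BCEWithLogitsLoss(reduction="mean", pos_weight=4.0)

# The loss can also be passed by name. DistMult yields unbounded
# real-valued scores, which fits the logit interpretation (cf. the
# warning above about translational distance models).
result = pipeline(
    dataset="Nations",
    model="DistMult",
    loss="BCEWithLogitsLoss",
    loss_kwargs=dict(pos_weight=4.0),
)
```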
Attributes Summary
- hpo_default: The default strategy for optimizing the loss's hyper-parameters
- synonyms: synonyms of this loss
Methods Summary
- forward(x, target[, weight]): Calculate the point-wise loss.
Attributes Documentation
- hpo_default: ClassVar[Mapping[str, Any]] = {'pos_weight': {'high': 1024, 'log': True, 'low': 0.25, 'type': <class 'float'>}, 'reduction': {'choices': ['mean', 'sum'], 'type': 'categorical'}}
The default strategy for optimizing the loss’s hyper-parameters
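For illustration, a sketch of consuming this search space via PyKEEN's HPO pipeline (assuming pykeen.hpo.hpo_pipeline; dataset and model are placeholders):

```python
from pykeen.hpo import hpo_pipeline

# Per hpo_default above, each trial samples pos_weight log-uniformly
# from [0.25, 1024] and reduction categorically from {'mean', 'sum'}.
result = hpo_pipeline(
    n_trials=10,
    dataset="Nations",
    model="DistMult",
    loss="BCEWithLogitsLoss",
)
```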
Methods Documentation
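The point-wise forward computation can be exercised directly, since the loss is a torch.nn.Module; a minimal sketch with placeholder tensors:

```python
import torch

from pykeen.losses import BCEWithLogitsLoss

loss = BCEWithLogitsLoss(reduction="mean")

# x: placeholder logits (model scores); target: binary labels.
x = torch.randn(8)
target = torch.randint(2, (8,)).float()

value = loss(x, target)  # scalar, since reduction='mean'
```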