Loss Functions¶
Loss functions integrated in PyKEEN.
Rather than reusing the built-in loss functions in PyTorch, we have elected to re-implement
some of the code from torch.nn.modules.loss
in order to encode the three different
kinds of loss functions accepted by PyKEEN in a class hierarchy. This allows PyKEEN to
handle different kinds of loss functions more dynamically as well as to share code. Further, it gives
more insight to potential users.
Throughout the following explanations of pointwise loss functions, pairwise loss functions, and setwise loss functions, we will assume the set of entities \(\mathcal{E}\), set of relations \(\mathcal{R}\), set of possible triples \(\mathcal{T} = \mathcal{E} \times \mathcal{R} \times \mathcal{E}\), set of possible subsets of possible triples \(2^{\mathcal{T}}\) (i.e., the power set of \(\mathcal{T}\)), set of positive triples \(\mathcal{K}\), set of negative triples \(\mathcal{\bar{K}}\), scoring function (e.g., TransE) \(f: \mathcal{T} \rightarrow \mathbb{R}\) and labeling function \(l:\mathcal{T} \rightarrow \{0,1\}\) where a value of 1 denotes the triple is positive (i.e., \((h,r,t) \in \mathcal{K}\)) and a value of 0 denotes the triple is negative (i.e., \((h,r,t) \notin \mathcal{K}\)).
Note
In most realistic use cases of knowledge graph embedding models, you will have observed a subset of positive triples \(\mathcal{T_{obs}} \subset \mathcal{K}\) and no observations over negative triples. Depending on the training assumption (sLCWA or LCWA), this will mean negative triples are generated in a variety of patterns.
Note
Following the open world assumption (OWA), triples \(\mathcal{\bar{K}}\) are better named “not positive” rather than negative. This is most relevant for pointwise loss functions. For pairwise and setwise loss functions, triples are compared as being more/less positive and the binary classification is not relevant.
Pointwise Loss Functions¶
A pointwise loss is applied to a single triple. It takes the form of \(L: \mathcal{T} \rightarrow \mathbb{R}\) and computes a real value for the triple given its labeling. Typically, a pointwise loss function takes the form of \(g: \mathbb{R} \times \{0,1\} \rightarrow \mathbb{R}\) based on the scoring function and labeling function.
Examples¶
| Pointwise Loss | Formulation |
| --- | --- |
| Square Error | \(g(s, l) = \frac{1}{2}(s - l)^2\) |
| Binary Cross Entropy | \(g(s, l) = -(l \cdot \log(\sigma(s)) + (1 - l) \cdot \log(1 - \sigma(s)))\) |
| Pointwise Hinge | \(g(s, l) = \max(0, \lambda - \hat{l} \cdot s)\) |
| Soft Pointwise Hinge | \(g(s, l) = \log(1 + \exp(\lambda - \hat{l} \cdot s))\) |
| Pointwise Logistic (softplus) | \(g(s, l) = \log(1 + \exp(-\hat{l} \cdot s))\) |
For the pointwise hinge, soft pointwise hinge, and pointwise logistic losses, the label \(\hat{l}\) has been rescaled from \(\{0,1\}\) to \(\{-1,1\}\). The logistic sigmoid function is defined as \(\sigma(z) = \frac{1}{1 + e^{-z}}\).
Note
The pointwise logistic loss can be considered as a special case of the pointwise soft hinge loss where \(\lambda = 0\).
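As an illustrative sketch (not PyKEEN's actual tensor-based implementation, and with hypothetical function names), the pointwise formulations above can be written in plain Python:

```python
import math


def sigmoid(z: float) -> float:
    """Logistic sigmoid, defined as 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))


def square_error(s: float, l: float) -> float:
    return 0.5 * (s - l) ** 2


def binary_cross_entropy(s: float, l: float) -> float:
    return -(l * math.log(sigmoid(s)) + (1 - l) * math.log(1 - sigmoid(s)))


def pointwise_hinge(s: float, l: float, margin: float = 1.0) -> float:
    l_hat = 2 * l - 1  # rescale the label from {0, 1} to {-1, 1}
    return max(0.0, margin - l_hat * s)


def soft_pointwise_hinge(s: float, l: float, margin: float = 1.0) -> float:
    l_hat = 2 * l - 1
    return math.log1p(math.exp(margin - l_hat * s))


def pointwise_logistic(s: float, l: float) -> float:
    # the special case of the soft hinge with margin λ = 0
    return soft_pointwise_hinge(s, l, margin=0.0)
```

Note how the logistic loss falls out of the soft hinge by setting the margin to zero, mirroring the note above.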
Batching¶
The pointwise loss of a set of triples (i.e., a batch) \(\mathcal{L}_L: 2^{\mathcal{T}} \rightarrow \mathbb{R}\) is defined as the arithmetic mean of the pointwise losses over each triple in the subset \(\mathcal{B} \in 2^{\mathcal{T}}\):
\[\mathcal{L}_L(\mathcal{B}) = \frac{1}{|\mathcal{B}|} \sum_{(h,r,t) \in \mathcal{B}} L(h, r, t)\]
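The batch reduction is a plain arithmetic mean. A minimal sketch (hypothetical helper name; PyKEEN reduces over tensors instead of Python lists):

```python
def pointwise_batch_loss(scores, labels, loss_fn):
    """Arithmetic mean of a pointwise loss over a batch of (score, label) pairs."""
    assert scores and len(scores) == len(labels), "batch must be non-empty and aligned"
    return sum(loss_fn(s, l) for s, l in zip(scores, labels)) / len(scores)
```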
Pairwise Loss Functions¶
A pairwise loss is applied to a pair of triples: a positive and a negative one. It is defined as \(L: \mathcal{K} \times \mathcal{\bar{K}} \rightarrow \mathbb{R}\) and computes a real value for the pair.
All loss functions implemented in PyKEEN induce an auxiliary loss function based on the chosen interaction function \(L^{*}: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}\) that simply passes the scores through. Note that \(L\) is often used interchangeably with \(L^{*}\).
Delta Pairwise Loss Functions¶
Delta pairwise losses are computed on the difference between the scores of the negative and positive triples (i.e., \(\Delta := f(\bar{k}) - f(k)\)) with a transfer function \(g: \mathbb{R} \rightarrow \mathbb{R}\). They take the form
\[L(k, \bar{k}) := g(f(\bar{k}) - f(k)) = g(\Delta)\]
The following table shows delta pairwise loss functions:
| Pairwise Loss | Activation | Margin | Formulation |
| --- | --- | --- | --- |
| Pairwise Hinge (margin ranking) | ReLU | \(\lambda \neq 0\) | \(g(\Delta) = \max(0, \Delta + \lambda)\) |
| Soft Pairwise Hinge (soft margin ranking) | softplus | \(\lambda \neq 0\) | \(g(\Delta) = \log(1 + \exp(\Delta + \lambda))\) |
| Pairwise Logistic | softplus | \(\lambda = 0\) | \(g(\Delta) = \log(1 + \exp(\Delta))\) |
Note
The pairwise logistic loss can be considered as a special case of the pairwise soft hinge loss where \(\lambda = 0\).
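The delta pairwise formulations above can be sketched in plain Python (hypothetical function names; PyKEEN's implementations operate on score tensors):

```python
import math


def pairwise_hinge(pos_score: float, neg_score: float, margin: float = 1.0) -> float:
    # g(Δ) = max(0, Δ + λ), where Δ = f(negative) - f(positive)
    return max(0.0, (neg_score - pos_score) + margin)


def soft_pairwise_hinge(pos_score: float, neg_score: float, margin: float = 1.0) -> float:
    # g(Δ) = log(1 + exp(Δ + λ)), a smooth surrogate for the hinge
    return math.log1p(math.exp((neg_score - pos_score) + margin))


def pairwise_logistic(pos_score: float, neg_score: float) -> float:
    # the special case of the soft hinge with λ = 0
    return soft_pairwise_hinge(pos_score, neg_score, margin=0.0)
```

A positive triple that outscores the negative by more than the margin contributes zero hinge loss; the soft variants always contribute a small, smoothly decaying amount.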
Inseparable Pairwise Loss Functions¶
The following pairwise loss functions use the full generalized form \(L(k, \bar{k}) = \dots\) for their definitions:
| Pairwise Loss | Formulation |
| --- | --- |
| Double Loss | \(h(\bar{\lambda} + f(\bar{k})) + h(\lambda - f(k))\) |
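A minimal sketch of this double-margin form, assuming the activation \(h\) is a ReLU and with hypothetical parameter names (`pos_margin` for \(\lambda\), `neg_margin` for \(\bar{\lambda}\)):

```python
def double_loss(pos_score: float, neg_score: float,
                pos_margin: float = 1.0, neg_margin: float = 0.0) -> float:
    """h(neg_margin + f(negative)) + h(pos_margin - f(positive)), with h = ReLU here."""
    def relu(x: float) -> float:
        return max(0.0, x)

    # positives are pushed above pos_margin, negatives below -neg_margin, independently
    return relu(neg_margin + neg_score) + relu(pos_margin - pos_score)
```

Unlike the delta losses above, the two margins act on the positive and negative scores separately rather than on their difference.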
Batching¶
The pairwise loss for a set of pairs of positive/negative triples \(\mathcal{L}_L: 2^{\mathcal{K} \times \mathcal{\bar{K}}} \rightarrow \mathbb{R}\) is defined as the arithmetic mean of the pairwise losses for each pair of positive and negative triples in the subset \(\mathcal{B} \in 2^{\mathcal{K} \times \mathcal{\bar{K}}}\).
Setwise Loss Functions¶
A setwise loss is applied to a set of triples, which can be either positive or negative. It is defined as
\(L: 2^{\mathcal{T}} \rightarrow \mathbb{R}\). The two setwise loss functions implemented in PyKEEN,
pykeen.losses.NSSALoss
and pykeen.losses.CrossEntropyLoss,
differ widely
in their paradigms, but both share the notion that triples are not strictly positive or negative.
Batching¶
The setwise loss for a set of sets of triples \(\mathcal{L}_L: 2^{2^{\mathcal{T}}} \rightarrow \mathbb{R}\) is defined as the arithmetic mean of the setwise losses over each set of triples \(\mathcal{b}\) in the subset \(\mathcal{B} \in 2^{2^{\mathcal{T}}}\).
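As an illustrative sketch of the setwise idea, the self-adversarial negative sampling loss of [sun2019] weights each negative triple by a softmax over the (temperature-scaled) negative scores, so that harder negatives contribute more. The plain-Python approximation below uses hypothetical parameter names; PyKEEN's NSSALoss operates on tensors and may differ in details:

```python
import math


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def nssa_loss(pos_score, neg_scores, margin=9.0, temperature=1.0):
    """Self-adversarial weighting: higher-scoring negatives receive more weight."""
    weights = softmax([temperature * s for s in neg_scores])
    positive_term = -math.log(sigmoid(margin + pos_score))
    negative_term = -sum(w * math.log(sigmoid(-s - margin))
                         for w, s in zip(weights, neg_scores))
    return positive_term + negative_term
```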
Classes¶
- Pointwise loss functions compute an independent loss term for each triple-label pair.
- A generic class for delta-pointwise losses.
- The generalized margin ranking loss.
- Pairwise loss functions compare the scores of a positive triple and a negative triple.
- Setwise loss functions compare the scores of several triples.
- A loss with adversarial weighting of negative samples.
- An adversarially weighted BCE loss.
- The numerically unstable version of explicit Sigmoid + BCE loss.
- The binary cross entropy loss.
- The cross entropy loss that evaluates the cross entropy after softmax output.
- The focal loss proposed by [lin2018].
- The InfoNCE loss with additive margin proposed by [wang2022].
- The pairwise hinge loss (i.e., margin ranking loss).
- The mean squared error loss.
- The self-adversarial negative sampling loss function proposed by [sun2019].
- The pointwise logistic loss (i.e., softplus loss).
- The soft pointwise hinge loss.
- The pointwise hinge loss.
- A limit-based scoring loss, with separate margins for positive and negative elements from [sun2018].
- The soft pairwise hinge loss (i.e., soft margin ranking loss).
- The pairwise logistic loss.