# MonotonicAffineTransformationInteraction

class MonotonicAffineTransformationInteraction(base, initial_bias=0.0, trainable_bias=True, initial_scale=1.0, trainable_scale=True)[source]

An adapter of interaction functions which adds a scalar (trainable) monotonic affine transformation of the score.

$score(h, r, t) = \alpha \cdot score'(h, r, t) + \beta$

This adapter is useful for losses such as BCE, where there is a fixed decision threshold, or for margin-based losses, where the margin is not treated as a hyper-parameter but rather as a trainable parameter. This is particularly useful if the value range of the score function is not known in advance, which makes choosing an appropriate margin difficult.

Monotonicity is required to preserve the ordering of the original scoring function, and thus ensures that more plausible triples are still more plausible after the transformation.
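A common way to guarantee a positive scale $\alpha$ (and hence monotonicity) is to store an unconstrained parameter and map it through a positive function such as softplus. The exact parametrization is an implementation detail; the plain-Python sketch below (with a hypothetical `monotonic_affine` helper) only illustrates that a positive-scale affine map preserves the ordering of scores:

```python
import math

def monotonic_affine(score, log_scale=0.0, bias=0.0):
    """Apply alpha * score + beta with alpha = softplus(log_scale) > 0."""
    scale = math.log1p(math.exp(log_scale))  # softplus keeps the scale positive
    return scale * score + bias

# scores of three triples, from least to most plausible
scores = [-3.0, -1.5, -0.2]
transformed = [monotonic_affine(s, log_scale=0.5, bias=2.0) for s in scores]

# the transformation shifts and rescales the values,
# but the relative ordering is unchanged
assert transformed == sorted(transformed)
```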

For example, we can add a bias to a distance-based interaction function to enable positive values:

>>> from pykeen.nn.modules import TransEInteraction, MonotonicAffineTransformationInteraction
>>> base = TransEInteraction(p=2)
>>> interaction = MonotonicAffineTransformationInteraction(base=base, trainable_bias=True, trainable_scale=False)


When combined with the BCE loss, we can think of the model geometrically as predicting a (soft) sphere centred at $h + r$ whose radius equals the bias of the transformation. When we additionally add a trainable scale, the model can control the “softness” of the decision boundary itself.
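To make the sphere picture concrete, the toy sketch below uses a plain-Python stand-in for the TransE score (the vectors and the `biased_score` helper are hypothetical, not part of the library API). A positive transformed score then corresponds exactly to the tail lying inside the sphere of radius `bias` centred at $h + r$:

```python
import math

def transe_score(h, r, t, p=2):
    """Negative p-norm distance: higher means more plausible."""
    return -sum(abs(hi + ri - ti) ** p for hi, ri, ti in zip(h, r, t)) ** (1.0 / p)

def biased_score(h, r, t, bias):
    """Affine-transformed score with scale fixed to 1."""
    return transe_score(h, r, t) + bias

h, r = [0.0, 0.0], [1.0, 0.0]   # sphere centre is h + r = [1, 0]
inside = [1.2, 0.1]             # distance from h + r is about 0.22 < 0.5
outside = [2.0, 1.0]            # distance from h + r is about 1.41 > 0.5

# positive score <=> tail lies inside the sphere of radius `bias`
assert biased_score(h, r, inside, bias=0.5) > 0
assert biased_score(h, r, outside, bias=0.5) < 0
```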

Initialize the interaction.

Parameters

• base (Interaction) – The base interaction function whose scores are transformed.

• initial_bias (float) – The initial value of the bias $\beta$.

• trainable_bias (bool) – Whether the bias should be trainable.

• initial_scale (float) – The initial value of the scale $\alpha$.

• trainable_scale (bool) – Whether the scale should be trainable.

Methods Summary

 forward(h, r, t) Compute broadcasted triple scores given broadcasted representations for head, relation and tails.

 reset_parameters() Reset parameters the interaction function may have.

Methods Documentation

forward(h, r, t)[source]

Parameters

• h (~HeadRepresentation) – shape: (*batch_dims, *dims) The head representations.

• r (~RelationRepresentation) – shape: (*batch_dims, *dims) The relation representations.

• t (~TailRepresentation) – shape: (*batch_dims, *dims) The tail representations.

Return type

FloatTensor

Returns

shape: batch_dims The scores.

reset_parameters()[source]

Reset parameters the interaction function may have.