ClampedInteraction

class ClampedInteraction(base: str | Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation] | type[Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation]], base_kwargs: Mapping[str, Any] | None = None, clamp_score: tuple[float | None, float] | tuple[float, float | None] | float | None = None)[source]

Bases: Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation]

An adapter to clamp scores to a minimum or maximum value.

Warning

The used torch.clamp() function has zero gradient for scores below the minimum or above the maximum value. Thus, it can hinder gradient-based optimization.
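This is easy to verify with torch alone; in this minimal sketch, the clamp bounds of ±1 and the input values are arbitrary choices for illustration:

   import torch

   x = torch.tensor([-2.0, 0.0, 2.0], requires_grad=True)
   y = torch.clamp(x, min=-1.0, max=1.0)
   y.sum().backward()

   # The gradient is 1 inside the interval and 0 outside of it, so
   # clamped scores receive no training signal.
   print(x.grad)  # tensor([0., 1., 0.])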

Initialize the interaction module.

Parameters:

  • base (str | Interaction | type[Interaction]) – The base interaction, or a hint thereof.

  • base_kwargs (Mapping[str, Any] | None) – Keyword-based parameters used to instantiate the base interaction.

  • clamp_score (tuple[float | None, float] | tuple[float, float | None] | float | None) – The interval to clamp scores to. Either bound may be None for one-sided clamping; None disables clamping entirely.

Note

The parameter pair (base, base_kwargs) is used for interaction_resolver.

An explanation of resolvers and how to use them is given in https://class-resolver.readthedocs.io/en/latest/.
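A minimal usage sketch, assuming pykeen is installed; the string hint "transe", the norm order p=2, and the clamp interval (-5.0, 5.0) are illustrative choices:

   import torch
   from pykeen.nn.modules import ClampedInteraction

   # Resolve the base interaction from a string hint; base_kwargs are
   # forwarded to the resolved TransEInteraction (p is the norm order).
   interaction = ClampedInteraction(
       base="transe",
       base_kwargs=dict(p=2),
       clamp_score=(-5.0, 5.0),
   )

   h, r, t = (torch.rand(8, 3) for _ in range(3))
   scores = interaction(h, r, t)  # shape: (8,); all values lie in [-5, 5]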

Attributes Summary

entity_shape

Expose the base interaction's entity shape.

relation_shape

Expose the base interaction's relation shape.

Methods Summary

forward(h, r, t)

Compute broadcasted triple scores given broadcasted representations for heads, relations, and tails.

Attributes Documentation

entity_shape: Sequence[str]

The symbolic shapes for entity representations.

relation_shape: Sequence[str]

The symbolic shapes for relation representations.

Methods Documentation

forward(h: HeadRepresentation, r: RelationRepresentation, t: TailRepresentation) → Tensor[source]

Compute broadcasted triple scores given broadcasted representations for heads, relations, and tails.

In general, each interaction function (class) expects a certain format for each of head, relation and tail representations. This format is composed of the number and the shape of the representations.

Many simple interaction functions, such as TransEInteraction, operate on a single representation per slot. However, there are also interactions such as TransDInteraction, which requires two representations for each slot, or PairREInteraction, which requires two relation representations but only a single representation each for the head and tail entity, as the sketch below illustrates.
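A short sketch of how these requirements surface in the symbolic shape attributes; the printed tuples are the expected values and may differ across versions:

   from pykeen.nn.modules import PairREInteraction, TransEInteraction

   # TransE uses a single d-dimensional vector per slot.
   print(TransEInteraction.entity_shape)    # expected: ('d',)
   print(TransEInteraction.relation_shape)  # expected: ('d',)

   # PairRE needs two relation representations, but one per entity slot.
   print(PairREInteraction.relation_shape)  # expected: ('d', 'd')
   print(PairREInteraction.entity_shape)    # expected: ('d',)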

Each individual representation has a shape. This can be a simple \(d\)-dimensional vector, but it can also comprise matrices, or even higher-order tensors.

This method supports general batched calculation, i.e., each of the representations can have preceding batch dimensions. These batch dimensions do not need to be identical, but they must be broadcastable. A good explanation of broadcasting rules can be found in NumPy’s documentation.
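As a hedged sketch of the broadcasting behavior, reusing the clamped TransE instance from above with illustrative batch dimensions:

   import torch
   from pykeen.nn.modules import ClampedInteraction

   interaction = ClampedInteraction(
       base="transe", base_kwargs=dict(p=2), clamp_score=(-5.0, 5.0)
   )

   # Batch dimensions (2, 1), (1, 4), and (1, 1) broadcast to (2, 4).
   h = torch.rand(2, 1, 3)
   r = torch.rand(1, 4, 3)
   t = torch.rand(1, 1, 3)
   print(interaction(h, r, t).shape)  # torch.Size([2, 4])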

See also

  • Representations for an overview about different ways how to obtain individual representations.

Parameters:

  • h (HeadRepresentation) – shape: (*batch_dims, *dims) The head representations.

  • r (RelationRepresentation) – shape: (*batch_dims, *dims) The relation representations.

  • t (TailRepresentation) – shape: (*batch_dims, *dims) The tail representations.

Returns:

shape: batch_dims The scores.

Return type:

Tensor