ClampedInteraction
- class ClampedInteraction(base: str | Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation] | type[Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation]], base_kwargs: Mapping[str, Any] | None = None, clamp_score: tuple[float | None, float] | tuple[float, float | None] | float | None = None)[source]
Bases: Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation]

An adapter to clamp scores to a minimum or maximum value.
Warning

The underlying torch.clamp() function has zero gradient for scores below the minimum or above the maximum value. Thus, it can hinder gradient-based optimization.

Initialize the interaction module.
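The zero-gradient behavior described in the warning can be checked directly; a minimal sketch using torch.clamp() on a toy score tensor (the tensor values and interval here are illustrative, not part of this API):

```python
import torch

# Toy scores: one below, one inside, one above the interval [-1, 1].
scores = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)
clamped = torch.clamp(scores, min=-1.0, max=1.0)

# Backpropagate a sum so each score receives a gradient of 0 or 1.
clamped.sum().backward()
print(scores.grad)  # -> tensor([0., 1., 0.]); zero wherever the score was clamped
```

Scores that fall outside the interval therefore receive no gradient signal at all, which is why clamping can stall optimization for those triples.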
- Parameters:
base (Interaction[HeadRepresentation, RelationRepresentation, TailRepresentation]) – the base interaction.
base_kwargs (OptionalKwargs) – keyword-based parameters used to instantiate the base interaction
clamp_score (tuple[float | None, float] | tuple[float, float | None] | None) – whether to clamp scores into a fixed interval
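The adapter pattern behind these parameters is simple: score with the base interaction, then clamp the result into the configured interval, where None leaves that side unbounded. A dependency-free sketch in plain Python (make_clamped and the toy base function are hypothetical illustrations, not this API):

```python
def make_clamped(base_fn, clamp_score):
    """Wrap a scoring function so its output is clamped into [lo, hi]."""
    lo, hi = clamp_score
    def clamped(h, r, t):
        s = base_fn(h, r, t)
        if lo is not None:
            s = max(s, lo)  # enforce the minimum, if given
        if hi is not None:
            s = min(s, hi)  # enforce the maximum, if given
        return s
    return clamped

# Toy TransE-style base interaction on scalars: negative distance.
base = lambda h, r, t: -abs(h + r - t)

scorer = make_clamped(base, (-1.0, None))  # only a lower bound
print(scorer(0.0, 0.5, 5.0))  # -> -1.0 (raw score -4.5, clamped up)
print(scorer(1.0, 1.0, 2.0))  # -> 0.0 (raw score 0.0, unchanged)
```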
Note

The parameter pair (base, base_kwargs) is used for interaction_resolver.

An explanation of resolvers and how to use them is given in https://class-resolver.readthedocs.io/en/latest/.
Attributes Summary

entity_shape — Expose the base interaction's entity shape.

relation_shape — Expose the base interaction's relation shape.
Methods Summary
forward(h, r, t) — Compute broadcasted triple scores given broadcasted representations for head, relation and tails.
Attributes Documentation
Methods Documentation
- forward(h: HeadRepresentation, r: RelationRepresentation, t: TailRepresentation) → Tensor [source]
Compute broadcasted triple scores given broadcasted representations for head, relation and tails.
In general, each interaction function (class) expects a certain format for each of head, relation and tail representations. This format is composed of the number and the shape of the representations.
Many simple interaction functions such as TransEInteraction operate on a single representation. However, there are also interactions such as TransDInteraction, which requires two representations for each slot, or PairREInteraction, which requires two relation representations, but only a single representation for head and tail entity, respectively.

Each individual representation has a shape. This can be a simple \(d\)-dimensional vector, but it can also comprise matrices, or even higher-order tensors.
This method supports the general batched calculation, i.e., each of the representations can have preceding batch dimensions. Those batch dimensions do not necessarily need to be exactly the same, but they need to be broadcastable. A good explanation of broadcasting rules can be found in NumPy's documentation.
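For instance, broadcastable batch dimensions allow scoring all head–tail combinations in a single call; a sketch with NumPy and a toy TransE-style distance (the shapes and scoring function are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.random((4, 1, 3))  # 4 heads: batch dims (4, 1), vector dim 3
r = rng.random((1, 1, 3))  # one relation, broadcast over both batch dims
t = rng.random((1, 5, 3))  # 5 tails: batch dims (1, 5)

# TransE-style score: negative L2 norm of h + r - t.
# The batch dims (4, 1) and (1, 5) broadcast to (4, 5).
scores = -np.linalg.norm(h + r - t, axis=-1)
print(scores.shape)  # -> (4, 5)
```

The result holds one score per (head, tail) pair without materializing all pairs of input representations explicitly.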
See also
Representations for an overview about different ways how to obtain individual representations.
- Parameters:
h (HeadRepresentation) – shape: (*batch_dims, *dims) The head representations.
r (RelationRepresentation) – shape: (*batch_dims, *dims) The relation representations.
t (TailRepresentation) – shape: (*batch_dims, *dims) The tail representations.
- Returns:
shape: batch_dims The scores.
- Return type:
Tensor