TuckerInteraction

class TuckerInteraction(embedding_dim=200, relation_dim=None, head_dropout=0.3, relation_dropout=0.4, head_relation_dropout=0.5, apply_batch_normalization=True, core_initializer=None, core_initializer_kwargs=None)[source]

Bases: FunctionalInteraction[FloatTensor, FloatTensor, FloatTensor]

A stateful module for the stateless Tucker interaction function.

Initialize the Tucker interaction function.

Parameters:
  • embedding_dim (int) – The entity embedding dimension.

  • relation_dim (Optional[int]) – The relation embedding dimension.

  • head_dropout (float) – The dropout rate applied to the head representations.

  • relation_dropout (float) – The dropout rate applied to the relation representations.

  • head_relation_dropout (float) – The dropout rate applied to the combined head and relation representations.

  • apply_batch_normalization (bool) – Whether to use batch normalization on head representations and the combination of head and relation.

  • core_initializer (Union[str, Callable[[FloatTensor], FloatTensor], None]) – The core tensor’s initializer, or a hint thereof.

  • core_initializer_kwargs (Optional[Mapping[str, Any]]) – Additional keyword-based parameters for the initializer.
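
Example (a minimal usage sketch; it assumes a recent PyKEEN version in which TuckerInteraction is importable from pykeen.nn.modules and the module can be called directly with head, relation, and tail tensors):

>>> import torch
>>> from pykeen.nn.modules import TuckerInteraction
>>> interaction = TuckerInteraction(embedding_dim=64, relation_dim=32)
>>> h = torch.randn(8, 64)  # head representations for a batch of 8 triples
>>> r = torch.randn(8, 32)  # relation representations
>>> t = torch.randn(8, 64)  # tail representations
>>> interaction(h, r, t).shape  # one score per triple
torch.Size([8])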

Attributes Summary

default_core_initializer_kwargs

Methods Summary

default_core_initializer([a, b, generator])

Fill the input Tensor with values drawn from the uniform distribution.

func(h, r, t, core_tensor, do_h, do_r, ...)

Evaluate the TuckER interaction function.

reset_parameters()

Reset any parameters the interaction function may have.

Attributes Documentation

default_core_initializer_kwargs: Mapping[str, Any] = {'a': -1.0, 'b': 1.0}
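
Together with default_core_initializer() below, these defaults mean the core tensor is filled uniformly from \(\mathcal{U}(-1, 1)\). A hypothetical override of that range via the constructor (assuming, as the parameter documentation above suggests, that core_initializer_kwargs is forwarded to the initializer in place of these defaults; the values are purely illustrative):

>>> from pykeen.nn.modules import TuckerInteraction
>>> interaction = TuckerInteraction(embedding_dim=64, core_initializer_kwargs={"a": -0.1, "b": 0.1})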

Methods Documentation

static default_core_initializer(a=0.0, b=1.0, generator=None)

Fill the input Tensor with values drawn from the uniform distribution \(\mathcal{U}(a, b)\).

Args:

  • tensor – an n-dimensional torch.Tensor

  • a – the lower bound of the uniform distribution

  • b – the upper bound of the uniform distribution

  • generator – the torch Generator to sample from (default: None)

Examples:
>>> import torch
>>> from torch import nn
>>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w)
Return type:

Tensor

func(h, r, t, core_tensor, do_h, do_r, do_hr, bn_h, bn_hr)

Evaluate the TuckER interaction function.

Compute the scoring function \(W \times_1 h \times_2 r \times_3 t\) as in the official implementation, i.e. as

\[DO_{hr}(BN_{hr}(DO_h(BN_h(h)) \times_1 DO_r(W \times_2 r))) \times_3 t\]

where \(BN\) denotes batch normalization and \(DO\) denotes dropout.
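
For illustration, the bare contraction \(W \times_1 h \times_2 r \times_3 t\), i.e. without the dropout and batch normalization wrappers above, can be sketched with torch.einsum (tensor sizes here are made up for the example):

>>> import torch
>>> d_e, d_r = 64, 32
>>> h, t = torch.randn(8, d_e), torch.randn(8, d_e)
>>> r = torch.randn(8, d_r)
>>> W = torch.randn(d_e, d_r, d_e)  # core tensor
>>> torch.einsum("bi,bj,ijk,bk->b", h, r, W, t).shape  # one score per triple
torch.Size([8])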

Parameters:
  • h (FloatTensor) – shape: (*batch_dims, d_e) The head representations.

  • r (FloatTensor) – shape: (*batch_dims, d_r) The relation representations.

  • t (FloatTensor) – shape: (*batch_dims, d_e) The tail representations.

  • core_tensor (FloatTensor) – shape: (d_e, d_r, d_e) The core tensor.

  • do_h (Dropout) – The dropout layer for the head representations.

  • do_r (Dropout) – The first hidden dropout.

  • do_hr (Dropout) – The second hidden dropout.

  • bn_h (Optional[BatchNorm1d]) – The first batch normalization layer.

  • bn_hr (Optional[BatchNorm1d]) – The second batch normalization layer.

Return type:

FloatTensor

Returns:

shape: batch_dims The scores.

reset_parameters()[source]

Reset any parameters the interaction function may have.