Utilities
Utilities for PyKEEN.
- class Bias(dim)[source]
A module wrapper for adding a bias.
Initialize the module.
- Parameters:
dim (int) – >0. The dimension of the input.
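A minimal usage sketch, assuming the wrapped bias is added elementwise so that the input shape is preserved:

>>> import torch
>>> from pykeen.utils import Bias
>>> layer = Bias(dim=3)
>>> layer(torch.zeros(2, 3)).shape
torch.Size([2, 3])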
- class ExtraReprMixin[source]
A mixin for modules with hierarchical extra_repr.
It takes up the torch.nn.Module.extra_repr() idea, and additionally provides a simple composable way to generate the components of extra_repr() via iter_extra_repr().
If combined with torch.nn.Module, make sure to put ExtraReprMixin behind torch.nn.Module to prefer the latter’s __repr__() implementation.
- iter_extra_repr()[source]
Iterate over the components of the extra_repr().
This method is typically overridden. A common pattern would be:

def iter_extra_repr(self) -> Iterable[str]:
    yield from super().iter_extra_repr()
    yield "<key1>=<value1>"
    yield "<key2>=<value2>"
- Return type:
Iterable[str]
- Returns:
an iterable over individual components of the extra_repr()
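A minimal sketch of combining the mixin with torch.nn.Module, using the base class order described above:

>>> import torch
>>> from pykeen.utils import ExtraReprMixin
>>> class MyModule(torch.nn.Module, ExtraReprMixin):
...     def iter_extra_repr(self):
...         yield from super().iter_extra_repr()
...         yield "key=value"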
- class Result[source]
A superclass of results that can be saved to a directory.
- all_in_bounds(x, low=None, high=None, a_tol=0.0)[source]
Check if tensor values respect lower and upper bound.
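A minimal usage sketch, assuming the function returns a boolean:

>>> import torch
>>> from pykeen.utils import all_in_bounds
>>> all_in_bounds(torch.as_tensor([0.1, 0.9]), low=0.0, high=1.0)
True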
- at_least_eps(x)[source]
Make sure a tensor is greater than zero.
- Return type:
FloatTensor
- Parameters:
x (FloatTensor) –
- broadcast_upgrade_to_sequences(*xs)[source]
Apply upgrade_to_sequence to each input, and afterwards repeat singletons to match the maximum length.
- Parameters:
xs (Union[~X, Sequence[~X]]) – the inputs.
- Return type:
Sequence[Sequence[~X]]
- Returns:
a sequence of length m, where each element is a sequence and all elements have the same length.
- Raises:
ValueError – if there is a non-singleton sequence input with length different from the maximum sequence length.
>>> broadcast_upgrade_to_sequences(1)
((1,),)
>>> broadcast_upgrade_to_sequences(1, 2)
((1,), (2,))
>>> broadcast_upgrade_to_sequences(1, (2, 3))
((1, 1), (2, 3))
- calculate_broadcasted_elementwise_result_shape(first, second)[source]
Determine the return shape of a broadcasted elementwise operation.
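A minimal usage sketch, assuming both shapes have the same number of dimensions:

>>> from pykeen.utils import calculate_broadcasted_elementwise_result_shape
>>> calculate_broadcasted_elementwise_result_shape((2, 1, 3), (1, 4, 3))
(2, 4, 3)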
- check_shapes(*x, raise_on_errors=True)[source]
Verify that a sequence of tensors are of matching shapes.
- Parameters:
x (Tuple[Union[Tensor, Tuple[int, ...]], str]) – A tuple (t, s), where t is a tensor, or an actual shape of a tensor (a tuple of integers), and s is a string, where each character corresponds to a (named) dimension. If the shapes of different tensors share a character, the corresponding dimensions are expected to be of equal size.
raise_on_errors (bool) – Whether to raise an exception in case of a mismatch.
- Return type:
bool
- Returns:
Whether the shapes matched.
- Raises:
ValueError – If the shapes mismatch and raise_on_errors is True.
Examples:
>>> check_shapes(((10, 20), "bd"), ((10, 20, 20), "bdd"))
True
>>> check_shapes(((10, 20), "bd"), ((10, 30, 20), "bdd"), raise_on_errors=False)
False
- clamp_norm(x, maxnorm, p='fro', dim=None)[source]
Ensure that a tensor’s norm does not exceed some threshold.
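A minimal usage sketch: a vector whose norm exceeds maxnorm is rescaled to maxnorm, while all others are kept unchanged.

>>> import torch
>>> from pykeen.utils import clamp_norm
>>> clamp_norm(torch.as_tensor([3.0, 4.0]), maxnorm=1.0, p=2, dim=-1)
tensor([0.6000, 0.8000])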
- combine_complex(x_re, x_im)[source]
Combine real and imaginary parts into a complex tensor.
- Return type:
FloatTensor
- Parameters:
x_re (FloatTensor) –
x_im (FloatTensor) –
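A minimal usage sketch, assuming PyKEEN's float-based representation concatenates real and imaginary parts along the last dimension:

>>> import torch
>>> from pykeen.utils import combine_complex
>>> combine_complex(torch.ones(2, 3), torch.zeros(2, 3)).shape
torch.Size([2, 6])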
- compact_mapping(mapping)[source]
Update a mapping (key -> id) such that the IDs range from 0 to len(mapping) - 1.
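A minimal usage sketch, assuming the function returns the compacted mapping together with the old-ID-to-new-ID translation:

>>> from pykeen.utils import compact_mapping
>>> mapping, id_translation = compact_mapping({"a": 3, "b": 7})
>>> mapping
{'a': 0, 'b': 1}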
- complex_normalize(x)[source]
Normalize a vector of complex numbers such that each element is of unit-length.
Let \(x \in \mathbb{C}^d\) denote a complex vector. Then, the operation computes
\[x_i' = \frac{x_i}{|x_i|}\]
where \(|x_i| = \sqrt{\operatorname{Re}(x_i)^2 + \operatorname{Im}(x_i)^2}\) is the modulus of the complex number \(x_i\).
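For example, the element \(x_i = 3 + 4i\) has modulus \(|x_i| = \sqrt{3^2 + 4^2} = 5\), so it is normalized to \(x_i' = 0.6 + 0.8i\).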
- class compose(*operations, name)[source]
A class representing the composition of several functions.
Initialize the composition with a sequence of operations.
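A minimal usage sketch, assuming the operations are applied from left to right:

>>> from pykeen.utils import compose
>>> f = compose(lambda x: x + 1, lambda x: x * 2, name="double-after-increment")
>>> f(3)
8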
- convert_to_canonical_shape(x, dim, num=None, batch_size=1, suffix_shape=-1)[source]
Convert a tensor to canonical shape.
- Parameters:
- Return type:
FloatTensor
- Returns:
shape: (batch_size, num_heads, num_relations, num_tails, *) A tensor in canonical shape.
- create_relation_to_entity_set_mapping(triples)[source]
Create mappings from relation IDs to the set of their head / tail entities.
- einsum(*args)[source]
Sums the product of the elements of the input operands along dimensions specified using a notation based on the Einstein summation convention.
Einsum allows computing many common multi-dimensional linear algebraic array operations by representing them in a short-hand format based on the Einstein summation convention, given by equation. The details of this format are described below, but the general idea is to label every dimension of the input operands with some subscript and define which subscripts are part of the output. The output is then computed by summing the product of the elements of the operands along the dimensions whose subscripts are not part of the output. For example, matrix multiplication can be computed using einsum as torch.einsum("ij,jk->ik", A, B). Here, j is the summation subscript and i and k the output subscripts (see section below for more details on why).
Equation:
The equation string specifies the subscripts (letters in [a-zA-Z]) for each dimension of the input operands in the same order as the dimensions, separating subscripts for each operand by a comma (','), e.g. 'ij,jk' specifies subscripts for two 2D operands. The dimensions labeled with the same subscript must be broadcastable, that is, their size must either match or be 1. The exception is if a subscript is repeated for the same input operand, in which case the dimensions labeled with this subscript for this operand must match in size and the operand will be replaced by its diagonal along these dimensions. The subscripts that appear exactly once in the equation will be part of the output, sorted in increasing alphabetical order. The output is computed by multiplying the input operands element-wise, with their dimensions aligned based on the subscripts, and then summing out the dimensions whose subscripts are not part of the output.
Optionally, the output subscripts can be explicitly defined by adding an arrow ('->') at the end of the equation followed by the subscripts for the output. For instance, the following equation computes the transpose of a matrix multiplication: 'ij,jk->ki'. The output subscripts must appear at least once for some input operand and at most once for the output.
Ellipsis ('...') can be used in place of subscripts to broadcast the dimensions covered by the ellipsis. Each input operand may contain at most one ellipsis which will cover the dimensions not covered by subscripts, e.g. for an input operand with 5 dimensions, the ellipsis in the equation 'ab...c' covers the third and fourth dimensions. The ellipsis does not need to cover the same number of dimensions across the operands but the 'shape' of the ellipsis (the size of the dimensions covered by them) must broadcast together. If the output is not explicitly defined with the arrow ('->') notation, the ellipsis will come first in the output (left-most dimensions), before the subscript labels that appear exactly once for the input operands, e.g. the following equation implements batch matrix multiplication: '...ij,...jk'.
A few final notes: the equation may contain whitespaces between the different elements (subscripts, ellipsis, arrow and comma) but something like '. . .' is not valid. An empty string '' is valid for scalar operands.
Note
torch.einsum handles ellipsis ('...') differently from NumPy in that it allows dimensions covered by the ellipsis to be summed over, that is, ellipsis are not required to be part of the output.
Note
This function uses opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/) to speed up computation or to consume less memory by optimizing contraction order. This optimization occurs when there are at least three inputs, since the order does not matter otherwise. Note that finding the optimal path is an NP-hard problem, thus opt_einsum relies on different heuristics to achieve near-optimal results. If opt_einsum is not available, the default order is to contract from left to right.
To bypass this default behavior, add the following line to disable the usage of opt_einsum and skip path calculation: torch.backends.opt_einsum.enabled = False
To specify which strategy you'd like for opt_einsum to compute the contraction path, add the following line: torch.backends.opt_einsum.strategy = 'auto'. The default strategy is 'auto', and we also support 'greedy' and 'optimal'. Disclaimer that the runtime of 'optimal' is factorial in the number of inputs! See more details in the opt_einsum documentation (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html).
Note
As of PyTorch 1.10, torch.einsum() also supports the sublist format (see examples below). In this format, subscripts for each operand are specified by sublists, lists of integers in the range [0, 52). These sublists follow their operands, and an extra sublist can appear at the end of the input to specify the output's subscripts, e.g. torch.einsum(op1, sublist1, op2, sublist2, …, [sublist_out]). Python's Ellipsis object may be provided in a sublist to enable broadcasting as described in the Equation section above.
- Args:
equation (str): The subscripts for the Einstein summation.
operands (List[Tensor]): The tensors to compute the Einstein summation of.
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> # trace
>>> torch.einsum('ii', torch.randn(4, 4))
tensor(-1.2104)

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> # diagonal
>>> torch.einsum('ii->i', torch.randn(4, 4))
tensor([-0.1034,  0.7952, -0.2433,  0.4545])

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> # outer product
>>> x = torch.randn(5)
>>> y = torch.randn(4)
>>> torch.einsum('i,j->ij', x, y)
tensor([[ 0.1156, -0.2897, -0.3918,  0.4963],
        [-0.3744,  0.9381,  1.2685, -1.6070],
        [ 0.7208, -1.8058, -2.4419,  3.0936],
        [ 0.1713, -0.4291, -0.5802,  0.7350],
        [ 0.5704, -1.4290, -1.9323,  2.4480]])

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> # batch matrix multiplication
>>> As = torch.randn(3, 2, 5)
>>> Bs = torch.randn(3, 5, 4)
>>> torch.einsum('bij,bjk->bik', As, Bs)
tensor([[[-1.0564, -1.5904,  3.2023,  3.1271],
         [-1.6706, -0.8097, -0.8025, -2.1183]],
        [[ 4.2239,  0.3107, -0.5756, -0.2354],
         [-1.4558, -0.3460,  1.5087, -0.8530]],
        [[ 2.8153,  1.8787, -4.3839, -1.2112],
         [ 0.3728, -2.1131,  0.0921,  0.8305]]])

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> # with sublist format and ellipsis
>>> torch.einsum(As, [..., 0, 1], Bs, [..., 1, 2], [..., 0, 2])
tensor([[[-1.0564, -1.5904,  3.2023,  3.1271],
         [-1.6706, -0.8097, -0.8025, -2.1183]],
        [[ 4.2239,  0.3107, -0.5756, -0.2354],
         [-1.4558, -0.3460,  1.5087, -0.8530]],
        [[ 2.8153,  1.8787, -4.3839, -1.2112],
         [ 0.3728, -2.1131,  0.0921,  0.8305]]])

>>> # batch permute
>>> A = torch.randn(2, 3, 4, 5)
>>> torch.einsum('...ij->...ji', A).shape
torch.Size([2, 3, 5, 4])

>>> # equivalent to torch.nn.functional.bilinear
>>> A = torch.randn(3, 5, 4)
>>> l = torch.randn(2, 5)
>>> r = torch.randn(2, 4)
>>> torch.einsum('bn,anm,bm->ba', l, A, r)
tensor([[-0.3430, -5.2405,  0.4494],
        [ 0.3311,  5.5201, -3.0356]])
- ensure_complex(*xs)[source]
Ensure that all tensors are of complex dtype.
Reshape and convert if necessary.
- ensure_tuple(*x)[source]
Ensure that all elements in the sequence are upgraded to sequences.
- Parameters:
x (Union[~X, Sequence[~X]]) – A sequence of sequences or literals
- Return type:
- Returns:
An upgraded sequence of sequences
>>> ensure_tuple(1, (1,), (1, 2))
((1,), (1,), (1, 2))
- estimate_cost_of_sequence(shape, *other_shapes)[source]
Cost of a sequence of broadcasted element-wise operations of tensors, given their shapes.
- extend_batch(batch, max_id, dim, ids=None)[source]
Extend batch for 1-to-all scoring by explicit enumeration.
- Parameters:
- Return type:
LongTensor
- Returns:
shape: (batch_size * num_choices, 3) A large batch, where every pair from the original batch is combined with every ID.
- get_connected_components(pairs)[source]
Calculate the connected components for a graph given as edge list.
The implementation uses a union-find data structure with path compression.
- Parameters:
pairs (Iterable[Tuple[~X, ~X]]) – the edge list, i.e., pairs of node ids.
- Return type:
Collection[Collection[~X]]
- Returns:
a collection of connected components, i.e., a collection of disjoint collections of node ids.
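A minimal usage sketch:

>>> from pykeen.utils import get_connected_components
>>> sorted(sorted(component) for component in get_connected_components([(1, 2), (2, 3), (4, 5)]))
[[1, 2, 3], [4, 5]]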
- get_devices(module)[source]
Return the device(s) from each component of the model.
- Return type:
- Parameters:
module (Module) –
- get_df_io(df)[source]
Get the dataframe as bytes.
- Return type:
BytesIO
- Parameters:
df (DataFrame) –
- get_edge_index(*, triples_factory=None, mapped_triples=None, edge_index=None)[source]
Get the edge index from a number of different sources.
- Parameters:
- Raises:
ValueError – if all of the sources are None
- Return type:
LongTensor
- Returns:
shape: (2, m) the edge index
- get_expected_norm(p, d)[source]
Compute the expected value of the L_p norm.
\[E[\|x\|_p] = d^{1/p} E[|x_1|^p]^{1/p}\]
under the assumption that \(x_i \sim N(0, 1)\), i.e.
\[E[|x_1|^p] = 2^{p/2} \cdot \Gamma\left(\frac{p+1}{2}\right) \cdot \pi^{-1/2}\]
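For example, for \(p = 2\) we get \(E[|x_1|^2] = 2 \cdot \Gamma(3/2) \cdot \pi^{-1/2} = 1\), since \(\Gamma(3/2) = \sqrt{\pi}/2\), so the estimate becomes \(E[\|x\|_2] \approx \sqrt{d}\).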
- get_optimal_sequence(*shapes)[source]
Find the optimal sequence in which to combine tensors elementwise based on the shapes.
- invert_mapping(mapping)[source]
Invert a mapping.
- Parameters:
mapping (Mapping[~K, ~V]) – The mapping, key -> value.
- Return type:
Mapping[~V, ~K]
- Returns:
The inverse mapping, value -> key.
- Raises:
ValueError – if the mapping is not bijective
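A minimal usage sketch:

>>> from pykeen.utils import invert_mapping
>>> invert_mapping({"a": 0, "b": 1})
{0: 'a', 1: 'b'}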
- is_cuda_oom_error(runtime_error)[source]
Check whether the caught RuntimeError was due to CUDA being out of memory.
- Return type:
bool
- Parameters:
runtime_error (RuntimeError) –
- is_cudnn_error(runtime_error)[source]
Check whether the caught RuntimeError was due to a CUDNN error.
- Return type:
bool
- Parameters:
runtime_error (RuntimeError) –
- is_triple_tensor_subset(a, b)[source]
Check whether one tensor of triples is a subset of another one.
- Return type:
bool
- Parameters:
a (LongTensor) –
b (LongTensor) –
- isin_many_dim(elements, test_elements, dim=0)[source]
Return whether elements are contained in test elements.
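A minimal usage sketch, checking row-wise (dim=0) containment:

>>> import torch
>>> from pykeen.utils import isin_many_dim
>>> isin_many_dim(
...     elements=torch.as_tensor([[1, 2], [3, 4]]),
...     test_elements=torch.as_tensor([[3, 4], [5, 6]]),
...     dim=0,
... )
tensor([False,  True])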
- logcumsumexp(a)[source]
Compute log(cumsum(exp(a))).
- Parameters:
a (ndarray) – shape: s the array
- Return type:
ndarray
- Returns:
shape: s the log-cumsum-exp of the array
See also
scipy.special.logsumexp() and torch.logcumsumexp()
- negative_norm(x, p=2, power_norm=False)[source]
Evaluate negative norm of a vector.
- Parameters:
x (FloatTensor) – shape: (batch_size, num_heads, num_relations, num_tails, dim) The vectors.
p (Union[str, int, float]) – The p for the norm. cf. torch.linalg.vector_norm().
power_norm (bool) – Whether to return \(|x-y|_p^p\), cf. https://github.com/pytorch/pytorch/issues/28119
- Return type:
FloatTensor
- Returns:
shape: (batch_size, num_heads, num_relations, num_tails) The scores.
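A minimal usage sketch using the documented five-dimensional input shape:

>>> import torch
>>> from pykeen.utils import negative_norm
>>> negative_norm(torch.ones(1, 1, 1, 1, 4), p=2)
tensor([[[[-2.]]]])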
- negative_norm_of_sum(*x, p=2, power_norm=False)[source]
Evaluate negative norm of a sum of vectors on already broadcasted representations.
- Parameters:
x (FloatTensor) – shape: (batch_size, num_heads, num_relations, num_tails, dim) The representations.
p (Union[str, int, float]) – The p for the norm. cf. torch.linalg.vector_norm().
power_norm (bool) – Whether to return \(|x-y|_p^p\), cf. https://github.com/pytorch/pytorch/issues/28119
- Return type:
FloatTensor
- Returns:
shape: (batch_size, num_heads, num_relations, num_tails) The scores.
- normalize_path(path, *other, mkdir=False, is_file=False, default=None)[source]
Normalize a path.
- Parameters:
path (Union[str, Path, TextIO, None]) – the path in either of the valid forms.
other (Union[str, Path]) – additional parts to join to the path
mkdir (bool) – whether to ensure that the path refers to an existing directory by creating it if necessary
is_file (bool) – whether the path is intended to be a file - only relevant for creating directories
default (Union[None, str, Path, TextIO]) – the default to use if path is None
- Raises:
TypeError – if path is of unsuitable type
ValueError – if path and default are both None
- Return type:
Path
- Returns:
the absolute and resolved path
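A minimal usage sketch:

>>> from pykeen.utils import normalize_path
>>> normalize_path("data", "subdir", "file.txt").is_absolute()
True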
- prepare_filter_triples(mapped_triples, additional_filter_triples=None, warn=True)[source]
Prepare the filter triples from the evaluation triples, and additional filter triples.
- project_entity(e, e_p, r_p)[source]
Project an entity into the relation-specific subspace.
\[e_{\bot} = M_{re} e = (r_p e_p^T + I^{d_r \times d_e}) e = r_p e_p^T e + I^{d_r \times d_e} e = r_p (e_p^T e) + e'\]
and additionally enforces
\[\|e_{\bot}\|_2 \leq 1\]
- Parameters:
e (FloatTensor) – shape: (…, d_e) The entity embedding.
e_p (FloatTensor) – shape: (…, d_e) The entity projection.
r_p (FloatTensor) – shape: (…, d_r) The relation projection.
- Return type:
FloatTensor
- Returns:
shape: (…, d_r)
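A minimal sketch of the computation under the stated formula, assuming the identity mapping \(I^{d_r \times d_e}\) truncates (or zero-pads) \(e\) to \(d_r\) dimensions; the helper name is hypothetical:

import torch

def project_entity_sketch(e, e_p, r_p):
    # hypothetical re-implementation for illustration only
    d_e, d_r = e.shape[-1], r_p.shape[-1]
    # e' = I^{d_r x d_e} e: truncate or zero-pad e to d_r dimensions
    if d_r <= d_e:
        e_prime = e[..., :d_r]
    else:
        e_prime = torch.nn.functional.pad(e, (0, d_r - d_e))
    # e_bot = r_p (e_p^T e) + e'
    e_bot = r_p * (e_p * e).sum(dim=-1, keepdim=True) + e_prime
    # enforce ||e_bot||_2 <= 1 by scaling down any vector with a larger norm
    norm = e_bot.norm(p=2, dim=-1, keepdim=True)
    return e_bot / norm.clamp_min(1.0)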
- set_random_seed(seed)[source]
Set the random seed on numpy, torch, and python.
- Parameters:
seed (int) – The seed that will be used in np.random.seed(), torch.manual_seed(), and random.seed().
- Return type:
Tuple[None, torch.Generator, None]
- Returns:
A three-tuple with None, the torch generator, and None.
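A minimal usage sketch:

>>> from pykeen.utils import set_random_seed
>>> _, generator, _ = set_random_seed(42)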
- split_complex(x)[source]
Split a complex tensor into real and imaginary part.
- Return type:
Tuple[FloatTensor, FloatTensor]
- Parameters:
x (FloatTensor) –
- tensor_product(*tensors)[source]
Compute element-wise product of tensors in broadcastable shape.
- Return type:
FloatTensor
- Parameters:
tensors (FloatTensor) –
- tensor_sum(*tensors)[source]
Compute element-wise sum of tensors in broadcastable shape.
- Return type:
FloatTensor
- Parameters:
tensors (FloatTensor) –
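A minimal usage sketch (the same broadcasting semantics apply to tensor_product()):

>>> import torch
>>> from pykeen.utils import tensor_sum
>>> tensor_sum(torch.ones(2, 1), torch.ones(1, 3)).shape
torch.Size([2, 3])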
- unpack_singletons(*xs)[source]
Unpack sequences of length one.
- Parameters:
xs (Tuple[~X]) – A sequence of tuples of length 1 or more
- Return type:
- Returns:
An unpacked sequence of sequences
>>> unpack_singletons((1,), (1, 2), (1, 2, 3))
(1, (1, 2), (1, 2, 3))
- upgrade_to_sequence(x)[source]
Ensure that the input is a sequence.
Note
While strings are technically also a sequence, i.e., isinstance("test", typing.Sequence) is True, this may lead to unexpected behaviour when calling upgrade_to_sequence("test"). We thus handle strings as non-sequences. To recover the other behavior, the following may be used:
upgrade_to_sequence(tuple("test"))
- Parameters:
x (Union[~X, Sequence[~X]]) – A literal or sequence of literals
- Return type:
Sequence[~X]
- Returns:
If a literal was given, a one element tuple with it in it. Otherwise, return the given value.
>>> upgrade_to_sequence(1)
(1,)
>>> upgrade_to_sequence((1, 2, 3))
(1, 2, 3)
>>> upgrade_to_sequence("test")
('test',)
>>> upgrade_to_sequence(tuple("test"))
('t', 'e', 's', 't')
- view_complex(x)[source]
Convert a PyKEEN complex tensor representation into a torch one.
- Return type:
- Parameters:
x (FloatTensor) –
- env(file=None)[source]
Print the env or output as HTML if in Jupyter.
- Parameters:
file – The file to print to if not in a Jupyter setting. Defaults to sys.stdout
- Returns:
An IPython.display.HTML if in a Jupyter notebook setting, otherwise None.