Regularizers

Regularization in PyKEEN.

Classes

LpRegularizer(*[, weight, apply_only_once, ...])

A simple L_p norm-based regularizer (see the usage sketch after this list).

NoRegularizer([weight, apply_only_once, ...])

A regularizer which does not perform any regularization.

CombinedRegularizer(regularizers[, total_weight])

A convex combination of regularizers.

PowerSumRegularizer(*[, weight, ...])

A simple x^p-based regularizer.

OrthogonalityRegularizer(*[, weight, ...])

A regularizer for the soft orthogonality constraints from [wang2014].

NormLimitRegularizer(*[, weight, ...])

A regularizer which formulates a soft constraint on a maximum norm.
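
These classes are usually selected by name rather than instantiated directly. Below is a minimal usage sketch; it assumes that pykeen.pipeline.pipeline forwards the regularizer and regularizer_kwargs arguments to a model that supports them, and that the string 'lp' resolves to LpRegularizer via its normalized name:

    from pykeen.pipeline import pipeline

    # Train a TransE model on the Nations toy dataset with an L_p regularizer.
    # The string 'lp' is resolved to LpRegularizer; `weight` is the relative
    # weight of the regularization term (see the base class below).
    result = pipeline(
        dataset="Nations",
        model="TransE",
        regularizer="lp",
        regularizer_kwargs=dict(weight=0.1),
    )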

Class Inheritance Diagram

Inheritance diagram of pykeen.regularizers.LpRegularizer, pykeen.regularizers.NoRegularizer, pykeen.regularizers.CombinedRegularizer, pykeen.regularizers.PowerSumRegularizer, pykeen.regularizers.OrthogonalityRegularizer, pykeen.regularizers.NormLimitRegularizer

Base Classes

class Regularizer(weight: float = 1.0, apply_only_once: bool = False, parameters: Iterable[Parameter] | None = None)[source]

A base class for all regularizers.

Instantiate the regularizer.

Parameters:
  • weight (float) – The relative weight of the regularization term.

  • apply_only_once (bool) – Whether the regularization should be applied only once after each reset.

  • parameters (Iterable[nn.Parameter] | None) – Specific parameters to track. If none are given, the model is expected to delegate to the update() function itself.
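
As a concrete illustration of these constructor arguments, the sketch below instantiates the LpRegularizer subclass directly and tracks an embedding parameter explicitly. It assumes that LpRegularizer accepts the base class's keyword-only arguments shown above:

    import torch
    from torch import nn

    from pykeen.regularizers import LpRegularizer

    # A toy embedding matrix whose weights should be regularized.
    embedding = nn.Parameter(torch.randn(5, 3))

    # Track the parameter explicitly, so its regularization term is collected
    # without the model having to call update() itself (per the `parameters`
    # documentation above).
    regularizer = LpRegularizer(weight=0.01, parameters=[embedding])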

add_parameter(parameter: Parameter) None[source]

Add a parameter for regularization.

Parameters:

parameter (Parameter) – The parameter to add for regularization.

Return type:

None

apply_only_once: bool

Whether the regularization is applied only once until the next reset. This behavior was introduced for ConvKB and defaults to False.

abstract forward(x: Tensor) Tensor[source]

Compute the regularization term for one tensor.

Parameters:

x (Tensor) – The tensor for which to compute the regularization term.

Return type:

Tensor
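
To define a custom regularizer, it suffices to implement forward(); weighting, accumulation, and resetting are inherited from the base class. A minimal sketch of a squared-Frobenius-norm regularizer (the class name and penalty are illustrative, not part of PyKEEN):

    from torch import Tensor

    from pykeen.regularizers import Regularizer

    class SquaredFrobeniusRegularizer(Regularizer):
        """Penalize the squared Frobenius norm of each tensor (a sketch)."""

        def forward(self, x: Tensor) -> Tensor:
            # Return a scalar; the base class weights and accumulates it.
            return x.pow(2).sum()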

classmethod get_normalized_name() str[source]

Get the normalized name of the regularizer class.

Return type:

str
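
The normalized name is what resolver-based configuration (e.g., regularizer='lp' in the pipeline sketch above) is matched against. A small example, assuming the usual scheme of stripping the Regularizer suffix and lower-casing:

    from pykeen.regularizers import LpRegularizer

    # Expected to print 'lp' under the assumed normalization scheme.
    print(LpRegularizer.get_normalized_name())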

hpo_default: ClassVar[Mapping[str, Any]] = {'weight': {'high': 1.0, 'low': 0.01, 'scale': 'log', 'type': <class 'float'>}}

The default strategy for optimizing the regularizer’s hyper-parameters.

pop_regularization_term() Tensor[source]

Return the weighted regularization term, and reset the regularizer afterwards.

Return type:

Tensor
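
The intended lifecycle is that tensors are fed to update() during the forward pass and the weighted term is consumed exactly once when computing the loss. A hedged sketch (the tensors and the zero task loss are placeholders):

    import torch

    from pykeen.regularizers import LpRegularizer

    regularizer = LpRegularizer(weight=0.01)

    # Forward pass: accumulate terms for the tensors involved in scoring.
    h = torch.randn(8, 50, requires_grad=True)
    t = torch.randn(8, 50, requires_grad=True)
    regularizer.update(h, t)

    # Loss computation: pop the weighted term exactly once; this also resets
    # the regularizer for the next batch.
    task_loss = torch.tensor(0.0)  # placeholder for the actual model loss
    loss = task_loss + regularizer.pop_regularization_term()
    loss.backward()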

post_parameter_update()[source]

Reset the regularizer’s term.

Warning

Typically, you want to use the regularization term exactly once to calculate gradients, via pop_regularization_term(). In that case, there is no need to call this method manually.

regularization_term: FloatTensor

The current regularization term (a scalar).

reset() None[source]

Reset the regularization term to zero.

Return type:

None

property term: Tensor

Return the weighted regularization term.

update(*tensors: Tensor) None[source]

Update the regularization term based on passed tensors.

Parameters:

tensors (Tensor) – The tensors from which to compute the regularization term.

Return type:

None
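
The interaction with apply_only_once can be seen in the following sketch: when it is set, only the first update() after a reset contributes, and later calls are ignored until the term is popped or reset() is called:

    import torch

    from pykeen.regularizers import LpRegularizer

    regularizer = LpRegularizer(weight=1.0, apply_only_once=True)
    x = torch.ones(2, 2)

    regularizer.update(x)  # contributes to the term
    regularizer.update(x)  # ignored: already updated since the last reset

    term = regularizer.pop_regularization_term()  # also resets the regularizer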

updated: bool

Whether this regularizer has been updated since the last reset.

weight: FloatTensor

The overall regularization weight.