Regularizers
Regularization in PyKEEN.
Classes

- LpRegularizer: A simple L_p norm based regularizer.
- NoRegularizer: A regularizer which does not perform any regularization.
- CombinedRegularizer: A convex combination of regularizers.
- PowerSumRegularizer: A simple x^p based regularizer.
- OrthogonalityRegularizer: A regularizer for the soft orthogonality constraints from [wang2014].
- NormLimitRegularizer: A regularizer which formulates a soft constraint on a maximum norm.
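The classes above share a common pattern: accumulate an unweighted penalty over one or more tensors, then hand the weighted total to the training loss. A minimal sketch of that pattern, using a plain-Python L_p penalty (the class name `SimpleLpRegularizer` and the list-based tensors are illustrative stand-ins, not the PyKEEN API, which operates on `torch.FloatTensor`):

```python
class SimpleLpRegularizer:
    """Illustrative sketch of an L_p-style regularizer (not the PyKEEN implementation)."""

    def __init__(self, weight=1.0, p=2.0):
        self.weight = weight
        self.p = p
        self.regularization_term = 0.0

    def forward(self, x):
        # L_p norm of a flat vector: (sum_i |x_i|^p)^(1/p)
        return sum(abs(v) ** self.p for v in x) ** (1.0 / self.p)

    def update(self, *tensors):
        # Accumulate the unweighted penalty for each tensor.
        for t in tensors:
            self.regularization_term += self.forward(t)

    def pop_regularization_term(self):
        # Return the weighted term and reset the accumulator.
        term = self.weight * self.regularization_term
        self.regularization_term = 0.0
        return term


reg = SimpleLpRegularizer(weight=0.5, p=2.0)
reg.update([3.0, 4.0])                # ||(3, 4)||_2 = 5.0
term = reg.pop_regularization_term()  # 0.5 * 5.0 = 2.5
```

The other regularizers differ only in `forward`: `PowerSumRegularizer` sums |x_i|^p without the root, and `NoRegularizer` always contributes zero.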
Class Inheritance Diagram
All concrete regularizers derive from the base class Regularizer, which itself inherits from both torch.nn.Module and abc.ABC:

- Module, ABC -> Regularizer
- Regularizer -> CombinedRegularizer, LpRegularizer, NoRegularizer, NormLimitRegularizer, OrthogonalityRegularizer, PowerSumRegularizer

Base Classes
- class Regularizer(weight=1.0, apply_only_once=False, parameters=None)[source]
A base class for all regularizers.
Instantiate the regularizer.
- Parameters:
weight (float) – The relative weight of the regularization.
apply_only_once (bool) – Should the regularization be applied more than once after reset?
parameters (Optional[Iterable[Parameter]]) – Specific parameters to track. If none are given, it is expected that your model automatically delegates to the update() function.
- apply_only_once: bool
Should the regularization only be applied once? This was used for ConvKB and defaults to False.
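The apply_only_once flag means that, after a reset, only the first update() contributes; later calls are ignored until the next reset. A hedged sketch of that behaviour (class and method names here are illustrative, not the PyKEEN implementation):

```python
class OnceRegularizer:
    """Sketch of the apply_only_once semantics (illustrative only)."""

    def __init__(self, apply_only_once=True):
        self.apply_only_once = apply_only_once
        self.updated = False
        self.regularization_term = 0.0

    def update(self, *values):
        if self.apply_only_once and self.updated:
            return  # already applied once since the last reset
        self.regularization_term += sum(values)
        self.updated = True

    def reset(self):
        self.regularization_term = 0.0
        self.updated = False


reg = OnceRegularizer(apply_only_once=True)
reg.update(1.0)
reg.update(2.0)                     # ignored: already applied once
first = reg.regularization_term     # 1.0
reg.reset()
reg.update(2.0)                     # applied again after the reset
```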
- abstract forward(x)[source]
Compute the regularization term for one tensor.
- Return type:
FloatTensor
- Parameters:
x (FloatTensor) – The tensor for which to compute the regularization term.
- classmethod get_normalized_name()[source]
Get the normalized name of the regularizer class.
- Return type:
str
- hpo_default: ClassVar[Mapping[str, Any]] = {'weight': {'high': 1.0, 'low': 0.01, 'scale': 'log', 'type': <class 'float'>}}
The default strategy for optimizing the regularizer’s hyper-parameters
- pop_regularization_term()[source]
Return the weighted regularization term, and reset the regularizer afterwards.
- Return type:
FloatTensor
- post_parameter_update()[source]
Reset the regularizer’s term.
Warning
Typically, you want to use the regularization term exactly once to calculate gradients via
pop_regularization_term()
. In this case, there should be no need to manually call this method.
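The "use it exactly once" advice follows from the reset-on-pop behaviour: the first pop returns the weighted term, and a second pop returns zero because the accumulator was already cleared. A minimal sketch under assumed names (not the PyKEEN implementation, which works on torch tensors):

```python
class SketchRegularizer:
    """Sketch of the pop-then-reset lifecycle (illustrative only)."""

    def __init__(self, weight=1.0):
        self.weight = weight
        self.regularization_term = 0.0

    def update(self, *norms):
        # Accumulate unweighted penalties, e.g. one per scored batch.
        self.regularization_term += sum(norms)

    def pop_regularization_term(self):
        # Return the weighted term, then reset (cf. post_parameter_update()).
        term = self.weight * self.regularization_term
        self.regularization_term = 0.0
        return term


reg = SketchRegularizer(weight=0.5)
reg.update(2.0, 4.0)
first = reg.pop_regularization_term()   # 0.5 * 6.0 = 3.0
second = reg.pop_regularization_term()  # 0.0: the term was already reset
```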
- regularization_term: torch.FloatTensor
The current regularization term (a scalar)
- property term: FloatTensor
Return the weighted regularization term.
- Return type:
FloatTensor
- update(*tensors)[source]
Update the regularization term based on passed tensors.
- Return type:
- Parameters:
tensors (FloatTensor) – The tensors based on which to update the regularization term.
- weight: torch.FloatTensor
The overall regularization weight