Perceptron
Perceptron-like modules.
- class ConcatMLP(input_dim, output_dim=None, dropout=0.1, ratio=2, flatten_dims=2)[source]
A 2-layer MLP with ReLU activation and dropout applied to the flattened token representations.
This provides a convenient way to choose a configuration similar to the one in the paper. For more complex aggregation mechanisms, pass an arbitrary callable instead.
See also
https://github.com/migalkin/NodePiece/blob/d731c9990/lp_rp/pykeen105/nodepiece_rotate.py#L57-L65
Initialize the module.
- Parameters
  - input_dim (int) – the input dimension
  - output_dim (Optional[int]) – the output dimension; defaults to the input dimension
  - dropout (float) – the dropout value on the hidden layer
  - ratio (Union[int, float]) – the ratio of the output dimension to the hidden layer size
  - flatten_dims (int) – the number of trailing dimensions to flatten
- forward(xs, dim)[source]
Forward the MLP on the given dimension.
- Parameters
  - xs (FloatTensor) – the tensor to forward
  - dim (int) – only present to match the signature of torch.mean / torch.sum; it is otherwise unused, and this method is not intended to be called from outside
- Return type
FloatTensor
- Returns
The tensor after applying this MLP
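To make the described behavior concrete, the following is a minimal, illustrative re-implementation of such a module, not the library class itself: it flattens the trailing flatten_dims dimensions, then applies a 2-layer MLP with dropout and ReLU, ignoring the dim argument exactly as described above. The class name TwoLayerConcatMLP and the hidden-size formula (output dimension divided by ratio) are assumptions based on the parameter descriptions.

```python
import torch
from torch import nn


class TwoLayerConcatMLP(nn.Module):
    """Illustrative sketch of the described module (not the library class).

    A 2-layer MLP with ReLU activation and dropout, applied to the
    flattened trailing ``flatten_dims`` dimensions of the input.
    """

    def __init__(self, input_dim, output_dim=None, dropout=0.1, ratio=2, flatten_dims=2):
        super().__init__()
        output_dim = output_dim or input_dim
        # Assumption: "ratio of the output dimension to the hidden layer size"
        # means hidden_dim = output_dim / ratio.
        hidden_dim = output_dim // ratio
        self.flatten_dims = flatten_dims
        self.mlp = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.Dropout(dropout),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, xs, dim):
        # `dim` only mirrors the torch.mean / torch.sum signature; it is ignored.
        # Flatten the trailing `flatten_dims` dimensions, then apply the MLP.
        return self.mlp(xs.view(*xs.shape[: -self.flatten_dims], -1))


# Example: aggregate 4 tokens of dimension 16 into a single 64-dim vector.
mlp = TwoLayerConcatMLP(input_dim=4 * 16, output_dim=64)
xs = torch.rand(3, 4, 16)  # (batch, num_tokens, token_dim)
out = mlp(xs, dim=-2)
print(out.shape)  # torch.Size([3, 64])
```

Note that the token and feature dimensions are flattened together before the first linear layer, which is why input_dim must equal the product of the trailing flatten_dims sizes (here 4 * 16).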