Perceptron
Perceptron-like modules.
- class ConcatMLP(num_tokens, embedding_dim, dropout=0.1, ratio=2)[source]
A 2-layer MLP with ReLU activation and dropout applied to the concatenation of token representations.
This provides a convenient way to choose a configuration similar to the one used in the NodePiece paper. For more complex aggregation mechanisms, pass an arbitrary callable instead.
See also
https://github.com/migalkin/NodePiece/blob/d731c9990/lp_rp/pykeen105/nodepiece_rotate.py#L57-L65
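Roughly, the module corresponds to the following pure-PyTorch sketch. The hidden dimension (`ratio * embedding_dim`) and the flattening of the token dimension are assumptions for illustration, not the library's exact implementation:

```python
import torch
from torch import nn


class ConcatMLPSketch(nn.Module):
    """Illustrative sketch of a ConcatMLP-like module (not the library's exact code)."""

    def __init__(self, num_tokens: int, embedding_dim: int, dropout: float = 0.1, ratio: int = 2):
        super().__init__()
        # Assumption: the hidden layer is `ratio` times the embedding dimension.
        self.mlp = nn.Sequential(
            nn.Linear(num_tokens * embedding_dim, ratio * embedding_dim),
            nn.Dropout(dropout),
            nn.ReLU(),
            nn.Linear(ratio * embedding_dim, embedding_dim),
        )

    def forward(self, xs: torch.Tensor, dim: int) -> torch.Tensor:
        # `dim` only mirrors the torch.mean / torch.sum signature; the sketch
        # assumes the token dimension directly precedes the embedding dimension.
        return self.mlp(xs.flatten(start_dim=-2))


mlp = ConcatMLPSketch(num_tokens=4, embedding_dim=8)
xs = torch.rand(5, 4, 8)  # (batch, num_tokens, embedding_dim)
out = mlp(xs, dim=-2)     # concatenates the 4 tokens, then applies the MLP
```

Concatenation followed by a shared MLP keeps the aggregation order-sensitive, unlike `torch.mean` or `torch.sum`.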
Initialize the module.
- Parameters
  - num_tokens (int) – The number of token representations to concatenate.
  - embedding_dim (int) – The dimension of each token representation.
  - dropout (float) – The dropout rate. Defaults to 0.1.
  - ratio (int) – The ratio of the hidden layer dimension to the embedding dimension. Defaults to 2.
- forward(xs, dim)[source]
Forward the MLP on the given dimension.
- Parameters
  - xs (FloatTensor) – The tensor to forward.
  - dim (int) – Only present to match the signature of torch.mean / torch.sum; this module is not intended to be used directly.
- Return type
FloatTensor
- Returns
The tensor after applying this MLP
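Because forward shares the (xs, dim) signature of torch.mean / torch.sum, simpler aggregators can be swapped into the same slot. A small illustration (the `aggregate` helper is hypothetical):

```python
import torch


def aggregate(xs: torch.Tensor, agg, dim: int = -2) -> torch.Tensor:
    # Hypothetical slot: accepts any callable with a torch.sum-like
    # (tensor, dim) signature, e.g. torch.mean, torch.sum, or a ConcatMLP.
    return agg(xs, dim)


xs = torch.rand(5, 4, 8)            # (batch, num_tokens, embedding_dim)
pooled = aggregate(xs, torch.mean)  # mean over the token dimension -> (5, 8)
summed = aggregate(xs, torch.sum)   # sum over the token dimension  -> (5, 8)
```

This shared signature is exactly why the otherwise-unused `dim` parameter exists on `forward`.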