Perceptron

Perceptron-like modules.

class ConcatMLP(input_dim, output_dim=None, dropout=0.1, ratio=2, flatten_dims=2)[source]

A 2-layer MLP with ReLU activation and dropout applied to the flattened token representations.

This class provides a convenient way to choose a configuration similar to the one in the paper. For more complex aggregation mechanisms, pass an arbitrary callable instead.

Initialize the module.

Parameters
  • input_dim (int) – the input dimension

  • output_dim (Optional[int]) – the output dimension; defaults to input_dim

  • dropout (float) – the dropout value on the hidden layer

  • ratio (Union[int, float]) – the ratio of the output dimension to the hidden layer size

  • flatten_dims (int) – the number of trailing dimensions to flatten
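A minimal sketch of how such a module could look. This is a hypothetical re-implementation, not the library's actual code: it assumes the hidden size is derived from the documented ratio as output_dim / ratio (the exact formula is not stated here) and that the last flatten_dims dimensions are flattened before the first linear layer.

```python
import torch
import torch.nn as nn


class ConcatMLP(nn.Module):
    """Sketch of the documented 2-layer MLP (hypothetical re-implementation).

    Assumption: hidden_dim = output_dim / ratio, following the stated
    "ratio of the output dimension to the hidden layer size".
    """

    def __init__(self, input_dim, output_dim=None, dropout=0.1, ratio=2, flatten_dims=2):
        super().__init__()
        output_dim = input_dim if output_dim is None else output_dim
        hidden_dim = max(1, int(output_dim / ratio))  # assumed formula
        self.flatten_dims = flatten_dims
        self.mlp = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),  # dropout on the hidden layer
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, xs, dim):
        # `dim` is ignored; it only mirrors the torch.mean / torch.sum signature.
        return self.mlp(xs.flatten(start_dim=-self.flatten_dims))
```

With flatten_dims=2, an input of shape (batch, tokens, features) is flattened to (batch, tokens * features), so input_dim must equal the product of the two trailing dimensions.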

forward(xs, dim)[source]

Apply the MLP to the input tensor.

Parameters
  • xs (FloatTensor) – the input tensor

  • dim (int) – unused; present only to match the signature of torch.mean / torch.sum. This class is not intended to be used directly.

Return type

FloatTensor

Returns

The tensor after applying this MLP
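Because forward takes the same (tensor, dim) arguments as torch.mean and torch.sum, any of them can be plugged into a pooling slot that expects that calling convention. A small self-contained illustration (aggregate is a hypothetical helper, not part of the library):

```python
import torch

def aggregate(xs, pool=torch.mean, dim=1):
    # Hypothetical hook: any callable with the torch.mean-style
    # signature (tensor, dim) can serve as the pooling function.
    return pool(xs, dim)

xs = torch.randn(8, 16, 32)  # (batch, tokens, features)
assert aggregate(xs, torch.mean).shape == (8, 32)
assert aggregate(xs, torch.sum).shape == (8, 32)
```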