Pay Attention to MLPs (gMLP)

This is a PyTorch implementation of the paper Pay Attention to MLPs.

This paper introduces a Multilayer Perceptron (MLP) based architecture with gating, which the authors name gMLP. The model consists of a stack of L identical gMLP blocks, each containing a spatial gating unit in place of self-attention.
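As a rough illustration of one such block, here is a minimal PyTorch sketch of a gMLP block with its spatial gating unit, following the structure described in the paper (normalize, project up, apply the gating unit, project back, add a residual connection). Class and parameter names here are my own choices, not the ones used in the annotated implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialGatingUnit(nn.Module):
    """Splits channels in half and gates one half with a learned
    linear projection of the other half across the sequence dimension."""

    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Projection along the sequence (token) dimension
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # Initialize near identity gating, as the paper suggests,
        # so training starts close to a plain MLP
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u, v = x.chunk(2, dim=-1)          # split channels
        v = self.norm(v)
        # Mix information across positions: (batch, seq, ch) -> (batch, ch, seq)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v                        # element-wise gating


class GMLPBlock(nn.Module):
    """One gMLP block: LayerNorm -> channel expansion -> GELU ->
    spatial gating unit -> channel projection -> residual."""

    def __init__(self, d_model: int, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.norm(x)
        x = F.gelu(self.proj_in(x))
        x = self.sgu(x)
        x = self.proj_out(x)
        return x + shortcut
```

A full gMLP model stacks L of these blocks; for an autoregressive language model the spatial projection additionally needs to be masked so that a position cannot attend to future tokens.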

Here is the training code for an autoregressive language model based on gMLP.
