<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2020-10-24 and added to Hugging Face Transformers on 2021-07-24.

# RemBERT

## Overview

The RemBERT model was proposed in [Rethinking Embedding Coupling in Pre-trained Language Models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.

The abstract from the paper is the following:

We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
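As a rough illustration of this decoupling, the sketch below reads the separate input and output embedding sizes off a default [`RemBertConfig`]. The printed dimensions are assumptions based on the library defaults and may differ for a given checkpoint.

```py
from transformers import RemBertConfig

# Minimal sketch: RemBERT keeps separate input and output embedding sizes,
# decoupled from the Transformer hidden size. The values shown reflect the
# assumed library defaults, not necessarily a specific checkpoint.
config = RemBertConfig()
print(config.input_embedding_size)   # small input embedding (default 256)
print(config.output_embedding_size)  # large output embedding (default 1664)
print(config.hidden_size)            # Transformer hidden size (default 1152)
```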

## Usage tips

For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the embedding layer. Unlike BERT, the input and output embeddings are not tied during pre-training, which allows a smaller input embedding (kept for fine-tuning) and a larger output embedding (discarded after pre-training). The tokenizer is also similar to ALBERT's rather than BERT's. A minimal fill-mask example follows below.
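The sketch below runs masked language modeling with RemBERT, assuming the `google/rembert` checkpoint on the Hub and the standard `[MASK]` token.

```py
import torch
from transformers import AutoTokenizer, RemBertForMaskedLM

# Assumed checkpoint name; swap in another RemBERT checkpoint if needed.
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForMaskedLM.from_pretrained("google/rembert")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token for the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```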

## Resources

## RemBertConfig

[[autodoc]] RemBertConfig

## RemBertTokenizer

[[autodoc]] RemBertTokenizer
    - get_special_tokens_mask
    - save_vocabulary

## RemBertTokenizerFast

[[autodoc]] RemBertTokenizerFast
    - get_special_tokens_mask
    - save_vocabulary

## RemBertModel

[[autodoc]] RemBertModel
    - forward

## RemBertForCausalLM

[[autodoc]] RemBertForCausalLM
    - forward

## RemBertForMaskedLM

[[autodoc]] RemBertForMaskedLM
    - forward

## RemBertForSequenceClassification

[[autodoc]] RemBertForSequenceClassification
    - forward

## RemBertForMultipleChoice

[[autodoc]] RemBertForMultipleChoice
    - forward

## RemBertForTokenClassification

[[autodoc]] RemBertForTokenClassification
    - forward

## RemBertForQuestionAnswering

[[autodoc]] RemBertForQuestionAnswering
    - forward