<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2019-11-05 and added to Hugging Face Transformers on 2020-11-16.


XLM-RoBERTa

XLM-RoBERTa is a large multilingual masked language model trained on 2.5TB of filtered CommonCrawl data across 100 languages. It shows that scaling the model provides strong performance gains for both high-resource and low-resource languages. The model applies the RoBERTa pretraining objectives to the XLM architecture.

You can find all the original XLM-RoBERTa checkpoints under the Facebook AI community organization.

> [!TIP]
> Click on the XLM-RoBERTa models in the right sidebar for more examples of how to apply XLM-RoBERTa to different cross-lingual tasks like classification, translation, and question answering.
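
As one illustration of a cross-lingual task, the sketch below runs zero-shot classification with an XLM-RoBERTa model fine-tuned on XNLI. It assumes the community checkpoint `joeddav/xlm-roberta-large-xnli` (not an official FacebookAI release) is available on the Hub; any XNLI-finetuned XLM-R checkpoint works the same way.

```python
from transformers import pipeline

# Zero-shot cross-lingual classification with an XNLI-finetuned XLM-RoBERTa model.
# The checkpoint below is a community model and assumed to be available.
classifier = pipeline(
    task="zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# Spanish input, English candidate labels - the model matches them cross-lingually.
result = classifier(
    "¿A qué hora sale el próximo tren a Madrid?",
    candidate_labels=["travel", "cooking", "sports"],
)
print(result["labels"][0], result["scores"][0])
```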

The example below demonstrates how to predict the `<mask>` token with [Pipeline] and [AutoModel].

<hfoptions id="usage"> <hfoption id="Pipeline">
```python
from transformers import pipeline


pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/xlm-roberta-base",
    device=0
)
# Example in French
pipeline("Bonjour, je suis un modèle <mask>.")
```

</hfoption>
<hfoption id="AutoModel">
```python
import torch

from transformers import AutoModelForMaskedLM, AutoTokenizer


tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/xlm-roberta-base"
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    device_map="auto",
    attn_implementation="sdpa"
)

# Prepare input
inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize the weights to 4 bits.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",  # or "fp4" for float 4-bit quantization
    bnb_4bit_use_double_quant=True,  # nested quantization saves additional memory
)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    device_map="auto",
    attn_implementation="flash_attention_2",
    quantization_config=quantization_config
)

inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)

# A masked language model predicts the <mask> token with a forward pass rather than generate().
with torch.no_grad():
    logits = model(**inputs).logits

masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
print(tokenizer.decode(logits[0, masked_index].argmax(dim=-1)))
```
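
As a quick sanity check on the quantization, you can print the loaded model's memory footprint, continuing from the `model` defined above; `get_memory_footprint` is available on Transformers models.

```python
# Continuing from the quantized `model` above: report how much memory the 4-bit weights occupy.
print(f"4-bit footprint: {model.get_memory_footprint() / 1e6:.0f} MB")
```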

Notes

- Unlike some XLM models, XLM-RoBERTa doesn't require `lang` tensors to identify the input language. It determines the language automatically from the input IDs, as shown in the sketch below.
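
A minimal sketch of this behavior: the same fill-mask pipeline handles inputs in different languages with no language identifier passed in.

```python
from transformers import pipeline

# No `lang` argument anywhere - the model infers the language from the tokens themselves.
fill_mask = pipeline(task="fill-mask", model="FacebookAI/xlm-roberta-base")

print(fill_mask("Hello, I am a <mask> model.")[0]["token_str"])        # English
print(fill_mask("Hallo, ich bin ein <mask> Modell.")[0]["token_str"])  # German
```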

Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.


<Tip>

This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples and information on the inputs and outputs.

</Tip>

XLMRobertaConfig

[[autodoc]] XLMRobertaConfig

XLMRobertaTokenizer

[[autodoc]] XLMRobertaTokenizer
    - get_special_tokens_mask
    - save_vocabulary

XLMRobertaTokenizerFast

[[autodoc]] XLMRobertaTokenizerFast

XLMRobertaModel

[[autodoc]] XLMRobertaModel
    - forward

XLMRobertaForCausalLM

[[autodoc]] XLMRobertaForCausalLM
    - forward

XLMRobertaForMaskedLM

[[autodoc]] XLMRobertaForMaskedLM
    - forward

XLMRobertaForSequenceClassification

[[autodoc]] XLMRobertaForSequenceClassification
    - forward

XLMRobertaForMultipleChoice

[[autodoc]] XLMRobertaForMultipleChoice
    - forward

XLMRobertaForTokenClassification

[[autodoc]] XLMRobertaForTokenClassification
    - forward

XLMRobertaForQuestionAnswering

[[autodoc]] XLMRobertaForQuestionAnswering
    - forward