# BertGeneration
This model was released on 2019-07-29 and added to Hugging Face Transformers on 2020-11-16.
BertGeneration adapts [BERT] for generative tasks by leveraging pretrained BERT checkpoints for sequence-to-sequence modeling with the [EncoderDecoderModel] architecture.
You can find all the original BERT checkpoints under the BERT collection.
> [!TIP]
> This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
>
> Click on the BertGeneration models in the right sidebar for more examples of how to apply BertGeneration to different sequence generation tasks.
The example below demonstrates how to use BertGeneration with [EncoderDecoderModel] for sequence-to-sequence tasks.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "Plants create energy through ", add_special_tokens=False, return_tensors="pt"
).input_ids.to(model.device)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
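The `google/roberta2roberta_L-24_discofuse` checkpoint was trained for sentence fusion on the DiscoFuse dataset, so a more representative prompt is a pair of sentences to merge. A minimal sketch reusing the `model` and `tokenizer` loaded above (the example sentences are illustrative):

```python
# DiscoFuse-style input: two sentences for the model to fuse into one
input_ids = tokenizer(
    "This is the first sentence. This is the second sentence.",
    add_special_tokens=False,
    return_tensors="pt",
).input_ids.to(model.device)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```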
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for the available quantization backends.

The example below uses [BitsAndBytesConfig] to quantize the weights to 4-bit.
```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig, EncoderDecoderModel

# configure 4-bit quantization
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = EncoderDecoderModel.from_pretrained(
    "google/roberta2roberta_L-24_discofuse",
    quantization_config=quantization_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "Plants create energy through ", add_special_tokens=False, return_tensors="pt"
).input_ids.to(model.device)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
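As a quick sanity check of the savings, you can inspect the loaded model's size with `get_memory_footprint`, which returns the size in bytes (exact numbers vary by setup):

```python
# rough memory check for the quantized model
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```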
[BertGenerationEncoder] and [BertGenerationDecoder] should be used in combination with [EncoderDecoderModel] for sequence-to-sequence tasks.
```python
from transformers import BertGenerationDecoder, BertGenerationEncoder, BertTokenizer, EncoderDecoderModel

# leverage checkpoints for a Bert2Bert model
# use BERT's [CLS] token as the BOS token and its [SEP] token as the EOS token
encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102)

# add cross-attention layers and use the same BOS and EOS tokens for the decoder
decoder = BertGenerationDecoder.from_pretrained(
    "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
)
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# create tokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")

input_ids = tokenizer(
    "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
).input_ids
labels = tokenizer("This is a short summary", return_tensors="pt").input_ids

# train
loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```
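The hardcoded IDs 101 and 102 are BERT's [CLS] and [SEP] token IDs. Instead of hardcoding them, you can read them from the tokenizer:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")
print(tokenizer.cls_token_id)  # 101, passed as bos_token_id above
print(tokenizer.sep_token_id)  # 102, passed as eos_token_id above
```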
For summarization, sentence splitting, sentence fusion, and translation, no special tokens are required for the input. Therefore, no EOS token should be added to the end of the input for these tasks, as shown in the sketch below.
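To see what this means in practice, compare the default tokenization with `add_special_tokens=False`, which the examples above use to keep `[CLS]` and `[SEP]` out of the encoder input:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")

# default tokenization wraps the input in [CLS] ... [SEP]
ids = tokenizer("This is a short summary.").input_ids
print(tokenizer.convert_ids_to_tokens(ids))
# ['[CLS]', 'this', 'is', 'a', 'short', 'summary', '.', '[SEP]']

# add_special_tokens=False leaves the raw input untouched
ids = tokenizer("This is a short summary.", add_special_tokens=False).input_ids
print(tokenizer.convert_ids_to_tokens(ids))
# ['this', 'is', 'a', 'short', 'summary', '.']
```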
## BertGenerationConfig

[[autodoc]] BertGenerationConfig

## BertGenerationTokenizer

[[autodoc]] BertGenerationTokenizer
    - save_vocabulary

## BertGenerationEncoder

[[autodoc]] BertGenerationEncoder
    - forward

## BertGenerationDecoder

[[autodoc]] BertGenerationDecoder
    - forward