<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

This model was released on 2022-05-02 and added to Hugging Face Transformers on 2022-05-12.


# OPT

OPT is a suite of open-source decoder-only pre-trained transformers whose parameters range from 125M to 175B. OPT models are designed for causal language modeling and aim to enable responsible and reproducible research at scale. OPT-175B is comparable in performance to GPT-3 with only 1/7th the carbon footprint.

You can find all the original OPT checkpoints under the OPT collection.

> [!TIP]
> This model was contributed by ArthurZ, ybelkada, and patrickvonplaten.
>
> Click on the OPT models in the right sidebar for more examples of how to apply OPT to different language tasks.

The examples below demonstrate how to generate text with [`Pipeline`] and [`AutoModel`].

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline

# facebook/opt-125m is the smallest OPT checkpoint; device=0 runs on the first GPU
pipeline = pipeline(task="text-generation", model="facebook/opt-125m", device=0)
pipeline("Once upon a time, in a land far, far away,", max_length=50, num_return_sequences=1)
```

</hfoption>
<hfoption id="AutoModel">

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" places the weights on the available devices;
# "sdpa" uses PyTorch's scaled dot-product attention
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto", attn_implementation="sdpa")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

prompt = "Once upon a time, in a land far, far away, "
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
```

</hfoption>
</hfoptions>
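To print tokens as they are generated instead of waiting for the full output, here is a minimal sketch using [`TextStreamer`], reusing the `facebook/opt-350m` checkpoint from the example above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# skip_prompt=True streams only the newly generated tokens, not the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)

model_inputs = tokenizer(["Once upon a time, in a land far, far away, "], return_tensors="pt").to(model.device)
_ = model.generate(**model_inputs, max_new_tokens=30, streamer=streamer)
```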

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize the weights to 8-bit precision.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# quantize the weights to 8-bit with bitsandbytes at load time
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-13b",
    attn_implementation="sdpa",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b")

prompt = "Once upon a time, in a land far, far away, "
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
```
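For a further memory reduction, here is a minimal sketch of a 4-bit variant of the same setup. The NF4 quantization type and fp16 compute dtype are illustrative choices, not requirements; pick what your hardware supports.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# assumption: NF4 4-bit quantization with fp16 compute; adjust for your hardware
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-13b",
    quantization_config=bnb_config,
    device_map="auto",
)
```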

## Notes

- OPT adds an EOS token `</s>` to the beginning of every prompt, as the quick check below shows.
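A quick way to verify this, using the `facebook/opt-125m` tokenizer (any OPT checkpoint behaves the same way):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
ids = tokenizer("Hello world").input_ids

# the first token is `</s>`, prepended automatically by the tokenizer
print(tokenizer.convert_ids_to_tokens(ids)[0])  # '</s>'
```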

## Resources

## OPTConfig

[[autodoc]] OPTConfig

## OPTModel

[[autodoc]] OPTModel
    - forward

## OPTForCausalLM

[[autodoc]] OPTForCausalLM
    - forward

## OPTForSequenceClassification

[[autodoc]] OPTForSequenceClassification
    - forward

## OPTForQuestionAnswering

[[autodoc]] OPTForQuestionAnswering
    - forward