# Llama
This model was released on 2023-02-27 and added to Hugging Face Transformers on 2023-03-16.
Llama is a family of large language models ranging from 7B to 65B parameters. These models are focused on efficient inference (important for serving language models) by training a smaller model on more tokens rather than training a larger model on fewer tokens. The Llama model is based on the GPT architecture, but it uses pre-normalization to improve training stability, replaces ReLU with SwiGLU to improve performance, and replaces absolute positional embeddings with rotary positional embeddings (RoPE) to better handle longer sequence lengths.
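These architectural choices surface as fields on [`LlamaConfig`]. A quick way to inspect them (the comments describe defaults in recent Transformers releases and may differ by version):

```py
from transformers import LlamaConfig

config = LlamaConfig()
print(config.hidden_act)    # "silu", the gated SwiGLU-style MLP activation
print(config.rms_norm_eps)  # epsilon for the RMSNorm pre-normalization layers
print(config.rope_theta)    # base frequency used by the rotary embeddings (RoPE)
```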
You can find all the original Llama checkpoints under the [Huggy Llama](https://huggingface.co/huggyllama) organization.
> [!TIP]
> Click on the Llama models in the right sidebar for more examples of how to apply Llama to different language tasks.
The examples below demonstrate how to generate text with [`Pipeline`], with [`AutoModel`], and from the command line.
```py
from transformers import pipeline

pipeline = pipeline(
    task="text-generation",
    model="huggyllama/llama-7b",
    device=0
)
pipeline("Plants create energy through a process known as")
```
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "huggyllama/llama-7b",
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    device_map="auto",
    attn_implementation="sdpa"
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
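Passing `cache_implementation="static"` preallocates the key/value cache to a fixed size, which makes the decode step compatible with `torch.compile`. A minimal sketch (compile support varies by Transformers and PyTorch version):

```py
import torch

# Compile the forward pass once; subsequent decode steps reuse the compiled graph
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
output = model.generate(**input_ids, cache_implementation="static", max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```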
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to int4.
```py
# pip install torchao
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    device_map="auto",
    quantization_config=quantization_config
)
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-30b")
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
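To sanity-check the savings, you can report the quantized model's footprint with the `get_memory_footprint` method available on Transformers models:

```py
# Reports the model's memory use in bytes; int4 weights should come in well
# under what the 30B model needs in half precision (~60 GB)
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")
```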
Use the AttentionMaskVisualizer to better understand what tokens the model can and cannot attend to.
```py
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("huggyllama/llama-7b")
visualizer("Plants create energy through a process known as")
```
[[autodoc]] LlamaConfig
[[autodoc]] LlamaTokenizer
    - get_special_tokens_mask
    - save_vocabulary

[[autodoc]] LlamaTokenizerFast
    - get_special_tokens_mask
    - update_post_processor
    - save_vocabulary

[[autodoc]] LlamaModel
    - forward

[[autodoc]] LlamaForCausalLM
    - forward

[[autodoc]] LlamaForSequenceClassification
    - forward

[[autodoc]] LlamaForQuestionAnswering
    - forward

[[autodoc]] LlamaForTokenClassification
    - forward