docs/source/en/model_doc/siglip2.md
This model was released on 2025-02-20 and added to Hugging Face Transformers on 2025-02-21.
SigLIP2 is a family of multilingual vision-language encoders that builds on the SigLIP training recipe. It adds decoder-based pretraining, self-distillation, and masked prediction to improve dense prediction tasks (segmentation, depth estimation, etc.). The model is available in two variants:

- FixRes - processes images at a fixed resolution, like the original SigLIP
- NaFlex - supports native aspect ratios and variable resolutions (checkpoints with a `-naflex` suffix)
You can find all the original SigLIP2 checkpoints under the SigLIP2 collection.
> [!TIP]
> Click on the SigLIP2 models in the right sidebar for more examples of how to apply SigLIP2 to different image and text tasks.
The examples below demonstrate zero-shot image classification with [Pipeline] or the [AutoModel] class.
```py
from transformers import pipeline

image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]

pipeline = pipeline(task="zero-shot-image-classification", model="google/siglip2-base-patch16-224", device=0)
pipeline(image, candidate_labels=candidate_labels)
```
```py
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("google/siglip2-base-patch16-224", device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# follows the pipeline prompt template to get the same results
texts = [f"This is a photo of {label}." for label in candidate_labels]

# IMPORTANT: pass `padding="max_length"` and `max_length=64` because the model was trained with these settings
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```
```py
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("google/siglip2-base-patch16-naflex", device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
texts = [f"This is a photo of {label}." for label in candidate_labels]

# The default `max_num_patches` is 256; pass a higher value such as `max_num_patches=512` to process the image at a higher resolution
inputs = processor(text=texts, images=image, padding="max_length", max_num_patches=256, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses bitsandbytes to quantize only the weights to int4.
```py
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModel.from_pretrained("google/siglip2-base-patch16-224", quantization_config=bnb_config, device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# follows the pipeline prompt template to get the same results
texts = [f"This is a photo of {label}." for label in candidate_labels]

# IMPORTANT: pass `padding="max_length"` and `max_length=64` because the model was trained with these settings
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```
SigLIP2 can be used to generate text embeddings for retrieval or similarity-based tasks (for example, product or caption retrieval).
For best results, the same text preprocessing used during training must be applied. When loading SigLIP2 checkpoints via [AutoProcessor], this preprocessing is handled automatically by the processor.
For SigLIP2 models, the processor applies the following defaults for text inputs: `padding="max_length"`, `max_length=64`, and `truncation=True`.

These defaults ensure consistent and correct text embeddings. Overriding them may lead to degraded retrieval quality.
```py
import torch
from transformers import AutoModel, AutoProcessor

model_id = "google/siglip2-so400m-patch14-384"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, device_map="auto").eval()

texts = [
    "HOME084 Timbangan Badan Digital Kaca Transparan 28CM Body Scale Personal Scale",
    "26cm Timbangan Badan digital personal scale weight",
    "33cm Timbangan Badan digital personal scale weight",
]

# NOTE: lowercasing and padding/truncation to length 64 are applied automatically by the processor pipeline
inputs = processor(text=texts, return_tensors="pt").to(model.device)

with torch.no_grad():
    text_features = model.get_text_features(**inputs)

# Normalize embeddings for cosine similarity
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)
```
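The normalized embeddings can be compared directly. Continuing from the snippet above, this short follow-up ranks the candidate texts by pairwise cosine similarity:

```py
# Pairwise cosine similarity between the normalized text embeddings computed above
similarity = text_features @ text_features.T
print(similarity)
```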
If you are encoding text without a processor (for example, via [AutoTokenizer]), use [Siglip2Tokenizer].
Siglip2Tokenizer applies lowercasing at the tokenizer backend level (matching SigLIP2 training-time normalization), while keeping the same tokenization as the original tokenizer.
When using the tokenizer directly, you should explicitly apply the same padding/truncation settings as used during training (e.g. max_length=64):
```py
from transformers import Siglip2Tokenizer

model_id = "google/siglip2-so400m-patch14-384"
tokenizer = Siglip2Tokenizer.from_pretrained(model_id)

inputs = tokenizer(
    ["HELLO WORLD"],
    padding="max_length",
    truncation=True,
    max_length=64,
    return_tensors="pt",
)
```
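The tokenized batch can then be passed to the text tower in the same way as the processor output. A minimal sketch, continuing from the snippet above:

```py
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(model_id, device_map="auto").eval()

with torch.no_grad():
    text_features = model.get_text_features(**inputs.to(model.device))
```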
Training is supported for DDP and FSDP on single-node multi-accelerator setups. However, it does not use torch.distributed utilities, which may limit the scalability of the batch size.
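As an illustration, a DDP or FSDP run can be driven through [Trainer] and launched with torchrun. The sketch below is a hypothetical fine-tuning setup, not part of the library: the `train_dataset` yielding `pixel_values` and `labels` and the `num_labels=10` head are assumptions for illustration only.

```py
# Hypothetical sketch: `train_dataset` is assumed to yield dicts with "pixel_values" and "labels"
from transformers import Siglip2ForImageClassification, Trainer, TrainingArguments

model = Siglip2ForImageClassification.from_pretrained("google/siglip2-base-patch16-224", num_labels=10)

args = TrainingArguments(
    output_dir="siglip2-finetune",
    per_device_train_batch_size=32,
    num_train_epochs=1,
    # fsdp="full_shard",  # uncomment to shard parameters with FSDP instead of plain DDP
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```

Launching the script with `torchrun --nproc_per_node=<num_accelerators> train.py` runs DDP across the accelerators of a single node.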
When using the standalone [GemmaTokenizerFast], make sure to pass `padding="max_length"` and `max_length=64` as that's how the model was trained.
The model was trained with lowercased text, so make sure your text labels are preprocessed the same way.
To get the same results as the [Pipeline], a prompt template of "This is a photo of {label}." should be passed to the processor.
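Taken together, these notes amount to the following text preprocessing. A minimal sketch with the standalone tokenizer (the candidate labels are only illustrative):

```py
from transformers import GemmaTokenizerFast

tokenizer = GemmaTokenizerFast.from_pretrained("google/siglip2-base-patch16-224")

candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# lowercase the labels and apply the pipeline prompt template
texts = [f"this is a photo of {label.lower()}." for label in candidate_labels]

# pad/truncate to length 64, matching how the model was trained
inputs = tokenizer(texts, padding="max_length", max_length=64, truncation=True, return_tensors="pt")
```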
The NaFlex variant processes different types of images at the appropriate resolution (using a larger resolution to process document images for example), while also minimizing the impact of aspect ratio distortion for certain inference tasks like OCR.
NaFlex resizes the input image so the height and width are multiples of the patch size after resizing. It keeps the aspect ratio distortion as low as possible and produces a sequence length of at most the desired target sequence length (max_num_patches). After resizing, the image is split into a sequence of patches and a mask with padding information is added.
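A quick way to see this in practice is to run the NaFlex processor on an image and inspect the returned patch sequence, padding mask, and patch-grid shape. The key names below reflect the NaFlex processor output and are worth verifying against your installed version:

```py
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, max_num_patches=256, return_tensors="pt")
print(inputs["pixel_values"].shape)          # (1, 256, patch_size * patch_size * 3), padded to max_num_patches
print(inputs["pixel_attention_mask"].sum())  # number of real (non-padding) patches
print(inputs["spatial_shapes"])              # (height, width) of the patch grid after resizing
```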
Toggle the `attn_implementation` parameter to either `"sdpa"` or `"flash_attention_2"` to use a more memory-efficient attention implementation.
```py
# pip install -U flash-attn --no-build-isolation
import torch
from transformers import Siglip2Model

# FlashAttention requires the model to be loaded in half precision
model = Siglip2Model.from_pretrained(
    "google/siglip2-so400m-patch14-384",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```
[[autodoc]] Siglip2Config

[[autodoc]] Siglip2TextConfig

[[autodoc]] Siglip2VisionConfig

[[autodoc]] Siglip2ImageProcessor
    - preprocess

[[autodoc]] Siglip2ImageProcessorFast
    - preprocess

[[autodoc]] Siglip2Processor
    - __call__

[[autodoc]] Siglip2Model
    - forward
    - get_text_features
    - get_image_features

[[autodoc]] Siglip2TextModel
    - forward

[[autodoc]] Siglip2VisionModel
    - forward

[[autodoc]] Siglip2ForImageClassification
    - forward

[[autodoc]] Siglip2Tokenizer
    - __call__