<!--Copyright 2025 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2025-02-20 and added to Hugging Face Transformers on 2025-02-21.

<div style="float: right;"> <div class="flex flex-wrap space-x-1">
</div>
</div>

SigLIP2

Overview

SigLIP2 is a family of multilingual vision-language encoders that builds on the SigLIP training recipe. It includes decoder-based pretraining, self-distillation, and masked prediction to improve dense prediction tasks (segmentation, depth estimation, etc.). This model is available in two variants:

  • NaFlex supports different resolutions and maintains the native image aspect ratio
  • FixRes supports fixed resolutions and is backwards compatible with SigLIP

You can find all the original SigLIP2 checkpoints under the SigLIP2 collection.

> [!TIP]
> Click on the SigLIP2 models in the right sidebar for more examples of how to apply SigLIP2 to different image and text tasks.

The example below demonstrates zero-shot classification with [Pipeline] or the [AutoModel] class.

<hfoptions id="usage"> <hfoption id="Pipeline">
```python
from transformers import pipeline


image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]

pipeline = pipeline(task="zero-shot-image-classification", model="google/siglip2-base-patch16-224", device=0)
pipeline(image, candidate_labels=candidate_labels)
```
</hfoption> <hfoption id="AutoModel (FixRes)">
```python
import requests
import torch
from PIL import Image

from transformers import AutoModel, AutoProcessor


model = AutoModel.from_pretrained("google/siglip2-base-patch16-224", device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]

# follows the pipeline prompt template to get the same results
texts = [f'This is a photo of {label}.' for label in candidate_labels]

# IMPORTANT: we pass `padding="max_length"` and `max_length=64` since the model was trained with this
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```
</hfoption> <hfoption id="AutoModel (NaFlex)">
```python
import requests
import torch
from PIL import Image

from transformers import AutoModel, AutoProcessor


model = AutoModel.from_pretrained("google/siglip2-base-patch16-naflex", device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
texts = [f'This is a photo of {label}.' for label in candidate_labels]

# the default `max_num_patches` is 256; higher values (e.g. `max_num_patches=512`) increase the resulting image resolution
inputs = processor(text=texts, images=image, padding="max_length", max_num_patches=256, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```
</hfoption> </hfoptions>
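Note that the examples above apply `torch.sigmoid` rather than a softmax: SigLIP-family models score each image-text pair independently, so the per-label probabilities do not need to sum to 1. A minimal, model-free sketch of this step with hypothetical logits:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# hypothetical image-text logits for three candidate labels
logits = [2.0, -1.0, 0.5]
probs = [sigmoid(l) for l in logits]
print(probs)  # independent per-pair probabilities; they need not sum to 1
```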

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize the weights to int4.

```python
import requests
import torch
from PIL import Image

from transformers import AutoModel, AutoProcessor, BitsAndBytesConfig


bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModel.from_pretrained("google/siglip2-base-patch16-224", quantization_config=bnb_config, device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]

# follows the pipeline prompt template to get the same results
texts = [f'This is a photo of {label}.' for label in candidate_labels]

# IMPORTANT: we pass `padding="max_length"` and `max_length=64` since the model was trained with this
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```

Text embeddings and retrieval

SigLIP2 can be used to generate text embeddings for retrieval or similarity-based tasks (for example, product or caption retrieval).

For best results, the same text preprocessing used during training must be applied. When loading SigLIP2 checkpoints via [AutoProcessor], this preprocessing is handled automatically by the processor.

Default text preprocessing (handled automatically)

For SigLIP2 models, the processor applies the following defaults for text inputs:

  • Lowercasing all input text
  • Fixed padding and truncation: padding="max_length", max_length=64, truncation=True

These defaults ensure consistent and correct text embeddings. Overriding them may lead to degraded retrieval quality.
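To illustrate what `padding="max_length"` with `truncation=True` means at the token level, here is a minimal sketch with hypothetical token ids (real tokenization is handled by the processor):

```python
def pad_and_truncate(ids, max_length=64, pad_id=0):
    """Fixed-length behavior on a list of token ids (illustration only)."""
    ids = ids[:max_length]                           # truncation=True
    return ids + [pad_id] * (max_length - len(ids))  # padding="max_length"

print(pad_and_truncate([5, 17, 42])[:5])        # short input is right-padded with pad_id
print(len(pad_and_truncate(list(range(100)))))  # long input is cut to 64
```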

Example: computing text embeddings

```python
import torch

from transformers import AutoModel, AutoProcessor


model_id = "google/siglip2-so400m-patch14-384"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, device_map="auto").eval()

texts = [
    "HOME084 Timbangan Badan Digital Kaca Transparan 28CM Body Scale Personal Scale",
    "26cm Timbangan Badan digital personal scale weight",
    "33cm Timbangan Badan digital personal scale weight",
]

# NOTE: lowercasing and padding/truncation to length 64 are applied automatically by the processor pipeline.
inputs = processor(text=texts, return_tensors="pt").to(model.device)

with torch.no_grad():
    text_features = model.get_text_features(**inputs)

# Normalize embeddings for cosine similarity
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)
```
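With L2-normalized embeddings, retrieval reduces to picking the highest dot product. A model-free sketch of that final step, using small hypothetical vectors in place of the real `text_features`:

```python
def cosine_sim(a, b):
    # with pre-normalized vectors this is just the dot product; shown in full for clarity
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

query = [1.0, 0.0]
corpus = [[0.0, 1.0], [0.6, 0.8], [1.0, 0.1]]
scores = [cosine_sim(query, doc) for doc in corpus]
best = max(range(len(corpus)), key=scores.__getitem__)
print(best, scores[best])  # index and score of the most similar "text"
```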

Text-only usage: Siglip2Tokenizer

If you are encoding text without a processor (for example, via [AutoTokenizer]), use [Siglip2Tokenizer].

  • Siglip2Tokenizer applies lowercasing at the tokenizer backend level (matching SigLIP2's training-time normalization), while keeping the same tokenization as the original tokenizer.

  • When using the tokenizer directly, you should explicitly apply the same padding/truncation settings as used during training (e.g. max_length=64):

```python
from transformers import Siglip2Tokenizer


model_id = "google/siglip2-so400m-patch14-384"
tokenizer = Siglip2Tokenizer.from_pretrained(model_id)

inputs = tokenizer(
    ["HELLO WORLD"],
    padding="max_length",
    truncation=True,
    max_length=64,
    return_tensors="pt",
)
```

Notes

  • Training is supported for DDP and FSDP on single-node multi-accelerator setups. However, it does not use torch.distributed utilities, which may limit the scalability of the batch size.

  • When using the standalone [GemmaTokenizerFast] make sure to pass padding="max_length" and max_length=64 as that's how the model was trained.

  • Model was trained with lowercased text, so make sure your text labels are preprocessed the same way.

  • To get the same results as the [Pipeline], a prompt template of "This is a photo of {label}." should be passed to the processor.

  • The NaFlex variant processes different types of images at the appropriate resolution (using a larger resolution to process document images for example), while also minimizing the impact of aspect ratio distortion for certain inference tasks like OCR.

    NaFlex resizes the input image so the height and width are multiples of the patch size after resizing. It keeps the aspect ratio distortion as low as possible and produces a sequence length of at most the desired target sequence length (max_num_patches). After resizing, the image is split into a sequence of patches and a mask with padding information is added.

  • Toggle the attn_implementation parameter to either "sdpa" or "flash_attention_2" to use a more memory-efficient attention implementation.

    ```py
    # pip install -U flash-attn --no-build-isolation

    from transformers import Siglip2Model

    model = Siglip2Model.from_pretrained(
        "google/siglip2-so400m-patch14-384",
        attn_implementation="flash_attention_2",
        device_map="auto",
    )
    ```
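The NaFlex resizing described in the notes above can be sketched as follows. This is an illustrative approximation, not the library's exact algorithm: it picks a scale so that the resized dimensions, rounded to multiples of the patch size, stay within the `max_num_patches` budget while roughly preserving the aspect ratio.

```python
import math

def naflex_target_size(height, width, patch_size=16, max_num_patches=256):
    # start from the scale that would exactly fill the patch budget
    scale = math.sqrt(max_num_patches * patch_size**2 / (height * width))
    while True:
        new_h = max(patch_size, round(height * scale / patch_size) * patch_size)
        new_w = max(patch_size, round(width * scale / patch_size) * patch_size)
        if (new_h // patch_size) * (new_w // patch_size) <= max_num_patches:
            return new_h, new_w
        scale *= 0.99  # back off until the patch budget is met

# a wide, document-like image keeps its 4:1 aspect ratio
print(naflex_target_size(512, 2048))  # -> (128, 512), i.e. 8 x 32 = 256 patches
```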

Siglip2Config

[[autodoc]] Siglip2Config

Siglip2TextConfig

[[autodoc]] Siglip2TextConfig

Siglip2VisionConfig

[[autodoc]] Siglip2VisionConfig

Siglip2ImageProcessor

[[autodoc]] Siglip2ImageProcessor - preprocess

Siglip2ImageProcessorFast

[[autodoc]] Siglip2ImageProcessorFast - preprocess

Siglip2Processor

[[autodoc]] Siglip2Processor - __call__

Siglip2Model

[[autodoc]] Siglip2Model - forward - get_text_features - get_image_features

Siglip2TextModel

[[autodoc]] Siglip2TextModel - forward

Siglip2VisionModel

[[autodoc]] Siglip2VisionModel - forward

Siglip2ForImageClassification

[[autodoc]] Siglip2ForImageClassification - forward

Siglip2Tokenizer

[[autodoc]] Siglip2Tokenizer - __call__