<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2021-02-26 and added to Hugging Face Transformers on 2021-05-12.


CLIP

CLIP is a multimodal vision and language model motivated by overcoming the fixed set of object categories a conventionally trained computer vision model can predict. CLIP learns about images directly from raw text by jointly training on 400M (image, text) pairs. Pretraining on this scale enables zero-shot transfer to downstream tasks. CLIP uses an image encoder and a text encoder to extract visual and text features. Both features are projected into a latent space with the same number of dimensions, and their dot product gives a similarity score.
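
The dual-encoder design can be made explicit by calling the two encoders yourself. The sketch below is a minimal illustration (reusing the openai/clip-vit-base-patch32 checkpoint and the COCO image from the examples that follow): it projects text and image features into the shared latent space, normalizes them, and takes their dot product. [CLIPModel]'s own forward pass additionally multiplies this similarity by the exponentiated learned temperature (`logit_scale`) to produce `logits_per_image`.

```python
import requests
import torch
from PIL import Image

from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

with torch.no_grad():
    # each encoder produces features that are projected into the same latent space
    text_embeds = model.get_text_features(**processor(text=texts, padding=True, return_tensors="pt"))
    image_embeds = model.get_image_features(**processor(images=image, return_tensors="pt"))

# L2-normalize so the dot product is a cosine similarity
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
similarity = image_embeds @ text_embeds.T  # shape (1, 3), one score per text
print(similarity)
```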

You can find all the original CLIP checkpoints under the OpenAI organization.

> [!TIP]
> Click on the CLIP models in the right sidebar for more examples of how to apply CLIP to different image and language tasks.

The example below demonstrates how to calculate similarity scores between multiple text descriptions and an image with [Pipeline] or the [AutoModel] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline


clip = pipeline(
   task="zero-shot-image-classification",
   model="openai/clip-vit-base-patch32",
   device=0
)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
clip("http://images.cocodataset.org/val2017/000000039769.jpg", candidate_labels=labels)
```

</hfoption>
<hfoption id="AutoModel">

```python
import requests
from PIL import Image

from transformers import AutoModel, AutoProcessor


model = AutoModel.from_pretrained("openai/clip-vit-base-patch32", attn_implementation="sdpa", device_map="auto")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True).to(model.device)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
most_likely_idx = probs.argmax(dim=1).item()
most_likely_label = labels[most_likely_idx]
print(f"Most likely label: {most_likely_label} with probability: {probs[0][most_likely_idx].item():.3f}")
```

</hfoption>
</hfoptions>

Notes

- Use [CLIPImageProcessor] to resize (or rescale) and normalize images for the model, as shown in the sketch below.
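
A minimal sketch of that preprocessing on its own, assuming the openai/clip-vit-base-patch32 checkpoint and the same COCO image as in the examples above:

```python
import requests
from PIL import Image

from transformers import CLIPImageProcessor

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resizes, rescales pixel values to [0, 1], and normalizes with the CLIP image mean/std
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224]) for this checkpoint
```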

CLIPConfig

[[autodoc]] CLIPConfig

CLIPTextConfig

[[autodoc]] CLIPTextConfig

CLIPVisionConfig

[[autodoc]] CLIPVisionConfig
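
The three configuration classes compose: a [CLIPConfig] holds one text and one vision sub-config. As an illustrative sketch (the sizes below are arbitrary, not the pretrained values), a randomly initialized [CLIPModel] can be built from them:

```python
from transformers import CLIPConfig, CLIPModel, CLIPTextConfig, CLIPVisionConfig

# illustrative sub-configurations; sizes chosen for demonstration only
text_config = CLIPTextConfig(hidden_size=512, num_hidden_layers=6)
vision_config = CLIPVisionConfig(hidden_size=768, num_hidden_layers=6, patch_size=32)

# compose them into a full CLIP configuration and initialize a model with random weights
config = CLIPConfig(
    text_config=text_config.to_dict(),
    vision_config=vision_config.to_dict(),
    projection_dim=512,
)
model = CLIPModel(config)
```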

CLIPTokenizer

[[autodoc]] CLIPTokenizer
    - get_special_tokens_mask
    - save_vocabulary

CLIPTokenizerFast

[[autodoc]] CLIPTokenizerFast

CLIPImageProcessor

[[autodoc]] CLIPImageProcessor
    - preprocess

CLIPImageProcessorFast

[[autodoc]] CLIPImageProcessorFast
    - preprocess

CLIPProcessor

[[autodoc]] CLIPProcessor
    - __call__

CLIPModel

[[autodoc]] CLIPModel
    - forward
    - get_text_features
    - get_image_features

CLIPTextModel

[[autodoc]] CLIPTextModel
    - forward

CLIPTextModelWithProjection

[[autodoc]] CLIPTextModelWithProjection
    - forward

CLIPVisionModelWithProjection

[[autodoc]] CLIPVisionModelWithProjection
    - forward
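
The *WithProjection variants expose a single tower plus its projection head, which is convenient when text and image embeddings are computed separately (for example, embedding a text query against a precomputed image index). A minimal sketch assuming the openai/clip-vit-base-patch32 checkpoint:

```python
import torch

from transformers import AutoTokenizer, CLIPTextModelWithProjection

tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")

inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
with torch.no_grad():
    # text_embeds is already projected into the shared latent space
    text_embeds = text_model(**inputs).text_embeds
print(text_embeds.shape)  # torch.Size([1, 512]) for this checkpoint
```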

CLIPVisionModel

[[autodoc]] CLIPVisionModel
    - forward

CLIPForImageClassification

[[autodoc]] CLIPForImageClassification
    - forward
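
[CLIPForImageClassification] puts a linear classification head on top of the vision encoder. A hypothetical sketch: when loaded from a plain CLIP checkpoint such as openai/clip-vit-base-patch32, the head is randomly initialized, so the logits are meaningless until the model is fine-tuned on labeled data.

```python
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, CLIPForImageClassification

# the classification head is newly initialized here and needs fine-tuning
image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPForImageClassification.from_pretrained("openai/clip-vit-base-patch32", num_labels=3)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3]), one logit per label
```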