<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2022-01-28 and added to Hugging Face Transformers on 2022-12-21.


# BLIP

BLIP (Bootstrapped Language-Image Pretraining) is a vision-language pretraining (VLP) framework designed for both understanding and generation tasks, whereas most existing pretrained models excel at only one or the other. It uses a captioner to generate synthetic captions and a filter to remove the noisy ones, which improves training data quality and makes more effective use of noisy web data.

You can find all the original BLIP checkpoints under the BLIP collection.

> [!TIP]
> This model was contributed by [ybelkada](https://huggingface.co/ybelkada).
>
> Click on the BLIP models in the right sidebar for more examples of how to apply BLIP to different vision language tasks.

The example below demonstrates how to perform visual question answering with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline


pipeline = pipeline(
    task="visual-question-answering",
    model="Salesforce/blip-vqa-base",
    device=0
)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
pipeline(question="What is the weather in this image?", image=url)
```

</hfoption>
<hfoption id="AutoModel">

```python
import requests
import torch
from PIL import Image

from transformers import AutoModelForVisualQuestionAnswering, AutoProcessor


processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = AutoModelForVisualQuestionAnswering.from_pretrained(
    "Salesforce/blip-vqa-base",
    torch_dtype=torch.float16,
    device_map="auto"
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

question = "What is the weather in this image?"
inputs = processor(images=image, text=question, return_tensors="pt").to(model.device, torch.float16)

output = model.generate(**inputs)
processor.batch_decode(output, skip_special_tokens=True)[0]
```

</hfoption>
</hfoptions>

## Resources

Refer to this notebook to learn how to fine-tune BLIP for image captioning on a custom dataset.
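
If you just want to see the shape of the training loop, the snippet below is a minimal sketch of a single fine-tuning step for captioning with [`BlipForConditionalGeneration`]. The checkpoint, image, and caption are placeholder assumptions for illustration; in practice you would iterate over batches from your own image-caption dataset as the notebook does.

```python
import requests
import torch
from PIL import Image

from transformers import AutoProcessor, BlipForConditionalGeneration


processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Placeholder image-caption pair; replace with batches from your own dataset.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
caption = "a cat standing in the snow"

inputs = processor(images=image, text=caption, return_tensors="pt")

# The caption tokens serve as labels; the model returns a language modeling loss.
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```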

## BlipConfig

[[autodoc]] BlipConfig

## BlipTextConfig

[[autodoc]] BlipTextConfig

## BlipVisionConfig

[[autodoc]] BlipVisionConfig

## BlipProcessor

[[autodoc]] BlipProcessor
    - __call__

## BlipImageProcessor

[[autodoc]] BlipImageProcessor
    - preprocess

## BlipImageProcessorFast

[[autodoc]] BlipImageProcessorFast
    - preprocess

## BlipModel

`BlipModel` is going to be deprecated in future versions. Use [`BlipForConditionalGeneration`], [`BlipForImageTextRetrieval`] or [`BlipForQuestionAnswering`] instead, depending on your use case.
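
For the retrieval use case, the snippet below is a minimal sketch of scoring how well a caption matches an image with [`BlipForImageTextRetrieval`]; the `Salesforce/blip-itm-base-coco` checkpoint and the example caption are assumptions for illustration.

```python
import requests
import torch
from PIL import Image

from transformers import AutoProcessor, BlipForImageTextRetrieval


processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="a cat in the snow", return_tensors="pt")

with torch.no_grad():
    # The image-text matching head returns logits over (no match, match).
    itm_logits = model(**inputs)[0]
match_probability = torch.softmax(itm_logits, dim=1)[:, 1]
print(match_probability)
```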

[[autodoc]] BlipModel
    - forward
    - get_text_features
    - get_image_features

## BlipTextModel

[[autodoc]] BlipTextModel
    - forward

## BlipTextLMHeadModel

[[autodoc]] BlipTextLMHeadModel
    - forward

## BlipVisionModel

[[autodoc]] BlipVisionModel
    - forward

## BlipForConditionalGeneration

[[autodoc]] BlipForConditionalGeneration
    - forward

## BlipForImageTextRetrieval

[[autodoc]] BlipForImageTextRetrieval
    - forward

## BlipForQuestionAnswering

[[autodoc]] BlipForQuestionAnswering
    - forward