# Image Search with OpenAI CLIP
In this example we implement image search using the OpenAI CLIP model, which was trained on a wide variety of (image, text) pairs.

We implement two methods in the `image_search/main.py` file:

- `seed`: generates embeddings for the images in the `images` folder and upserts them into a collection in Supabase Vector.
- `search`: generates an embedding from the search query and performs a vector similarity search.

## Prerequisites

Before running this example, ensure you have:
Poetry installed:

```bash
pip install poetry
```

Activate the virtual environment:

```bash
poetry shell
```
(To leave the virtual environment later, run `exit`.)

Install the dependencies:

```bash
poetry install
```

Start the local Supabase stack:

```bash
supabase start
```

Seed the database:

```bash
poetry run seed
```

What to expect: The `seed` command will process all images in the `images` folder and generate vector embeddings for each one.
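The seeding flow can be sketched without any external services. In the sketch below, a plain dict stands in for the Supabase Vector collection and a stub encoder stands in for the CLIP model — both are illustrative stand-ins, not the real `vecs` or `sentence-transformers` APIs used by the example:

```python
# Dependency-free sketch of the seeding flow: encode each image and
# upsert (id -> vector) records into a collection. A plain dict stands
# in for the Supabase Vector collection; `fake_encode` stands in for
# the CLIP model -- both are illustrative, not the real APIs.

def fake_encode(name: str) -> list[float]:
    # Stand-in for encoding an image with CLIP; returns a tiny
    # deterministic vector instead of a 512-dimensional embedding.
    return [float(ord(c) % 7) for c in name[:4].ljust(4, "x")]

def seed(image_names: list[str], collection: dict) -> None:
    # Upsert one record per image, keyed by filename; re-seeding
    # the same filename simply overwrites its vector.
    for name in image_names:
        collection[name] = fake_encode(name)

collection: dict[str, list[float]] = {}
seed(["bike.jpg", "park.jpg", "sky.jpg"], collection)
print(sorted(collection))  # -> ['bike.jpg', 'park.jpg', 'sky.jpg']
```

The upsert semantics mirror the real flow: running `seed` twice does not duplicate records, it refreshes them.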
Perform a search:

```bash
poetry run search "bike in front of red brick wall"
```

What to expect: The search will return a list of images ranked by similarity to your search query, along with similarity scores.
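The similarity scores come from comparing the query embedding against each stored image embedding. A minimal sketch of cosine-similarity ranking, using made-up 4-dimensional vectors in place of CLIP's real embeddings (filenames and values are illustrative, not from the example's dataset):

```python
# Rank images by cosine similarity between a query vector and each
# stored image vector, highest score first. The 4-dim vectors are
# made-up stand-ins for CLIP's real 512-dim embeddings.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

image_embeddings = {
    "bike.jpg": [0.9, 0.1, 0.0, 0.1],
    "park.jpg": [0.1, 0.8, 0.2, 0.0],
    "sky.jpg":  [0.0, 0.2, 0.9, 0.1],
}
query = [0.8, 0.2, 0.1, 0.0]  # stand-in for the encoded text query

ranked = sorted(
    image_embeddings.items(),
    key=lambda kv: cosine(query, kv[1]),
    reverse=True,
)
for name, vec in ranked:
    print(f"{name}: {cosine(query, vec):.3f}")  # bike.jpg ranks first
```

In the actual example the embeddings live in Supabase Vector, and the similarity computation happens inside the database rather than in Python.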
To run against a hosted Supabase project, update `DB_CONNECTION` with the connection string from your Supabase Dashboard: https://supabase.com/dashboard/project/_/database/settings > Connection string > URI.

Try these search queries to test the image search functionality:
- "bike in front of red brick wall"
- "person walking in park"
- "blue sky with clouds"
- "city street at night"

## Common Issues
**`poetry` command not found**: install Poetry with `pip install poetry`.

## About CLIP

This example uses the CLIP (Contrastive Language-Image Pre-training) model to embed images and text queries into the same vector space, so images can be retrieved by text similarity.

Model: `clip-ViT-B-32` via Hugging Face.
## Credits

Images from https://unsplash.com/license via https://picsum.photos/