# AWS Bedrock Image Search
In this example we're implementing image search using Amazon Titan Multimodal Embeddings G1, part of a set of pre-trained, high-performing image, multimodal, and text models accessible via a fully managed API.
We're implementing two methods in the `/image_search/main.py` file:
- The `seed` method generates embeddings for the images in the `images` folder and upserts them into a collection in Supabase Vector.
- The `search` method generates an embedding from the search query and performs a vector similarity search query.

## Run locally

Install Poetry and the project dependencies, then start a local Supabase stack:

```bash
pip install poetry
poetry shell   # activate the virtual environment (leave it later with `exit`)
poetry install
supabase start
```

Seed the collection and run a search:

```bash
poetry run seed
poetry run search "bike in front of red brick wall"
```

## Run on a hosted Supabase project

Set `DB_CONNECTION` to the connection string from your hosted Supabase Dashboard: https://supabase.com/dashboard/project/_/database/settings > Connection string > URI. The embeddings are generated with the Amazon Titan Multimodal Embeddings G1 model.
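The seed/search flow described above could be sketched as follows. The collection name, record shape, and helper names are assumptions for illustration; the actual implementation lives in `/image_search/main.py`:

```python
import os

# Titan Multimodal Embeddings G1 returns 1024-dimensional vectors by default
# (assumed here for the collection dimension).
EMBEDDING_DIM = 1024


def image_record(path: str, embedding: list[float]) -> tuple[str, list[float], dict]:
    """Shape one image as an (id, vector, metadata) record for upserting."""
    return (os.path.basename(path), embedding, {"path": path})


def seed(collection, images: dict[str, list[float]]) -> None:
    """Upsert precomputed image embeddings into a Supabase Vector collection."""
    collection.upsert(records=[image_record(p, e) for p, e in images.items()])
    collection.create_index()  # build an index for fast similarity search


def search(collection, query_embedding: list[float], limit: int = 1):
    """Return the nearest stored images for a query embedding."""
    return collection.query(data=query_embedding, limit=limit)
```

With a running database, `collection` would come from the `vecs` Python client, e.g. `vecs.create_client(DB_CONNECTION).get_or_create_collection(name="image_vectors", dimension=EMBEDDING_DIM)`, and the embeddings themselves from the Titan model.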
Images from https://unsplash.com/license via https://picsum.photos/