Redis stores and indexes vector embeddings that semantically represent unstructured data including text passages, images, videos, or audio. Store vectors and the associated metadata within [hashes]({{< relref "/develop/data-types/hashes" >}}) or [JSON]({{< relref "/develop/data-types/json" >}}) documents for [indexing]({{< relref "/develop/ai/search-and-query/indexing" >}}) and [querying]({{< relref "/develop/ai/search-and-query/query" >}}).
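As an illustration of how vectors land in a hash, vector fields are typically written as raw little-endian float32 byte strings alongside plain metadata fields. The sketch below uses only the standard library; the field names, key, and embedding values are hypothetical, and the actual write would be a `HSET` call through a Redis client:

```python
import struct

# Hypothetical 4-dimensional embedding for a text passage.
embedding = [0.12, -0.53, 0.91, 0.07]

# Vector fields in a Redis hash are stored as raw little-endian
# float32 bytes, so pack the floats before writing the field.
vector_bytes = struct.pack(f"<{len(embedding)}f", *embedding)

# Fields you might write with HSET doc:1 ... (names are illustrative).
doc = {
    "content": "Redis is an in-memory data store.",
    "genre": "databases",
    "embedding": vector_bytes,
}
```

A JSON document would instead store the embedding as a plain array of numbers under a path such as `$.embedding`.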
<div class="grid grid-cols-1 md:grid-cols-3 gap-6 my-8"> {{< image-card image="images/ai-lib.svg" alt="AI Redis icon" title="Redis vector Python client library documentation" url="/develop/ai/redisvl/" >}} {{< image-card image="images/ai-cube.svg" alt="AI Redis icon" title="Use Redis Search to search data" url="/develop/ai/search-and-query/" >}} {{< image-card image="images/ai-brain.svg" alt="AI Redis icon" title="Use LangCache to store LLM responses" url="/develop/ai/langcache/" >}} </div>

This page is organized into a few sections depending on what you're trying to do:
Redis supports the [FLAT]({{< relref "develop/ai/search-and-query/vectors#flat-index" >}}) and [HNSW]({{< relref "develop/ai/search-and-query/vectors#hnsw-index" >}}) vector index types.

Learn to perform vector search and use gateways and semantic caching in your AI/ML projects.
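To make the distinction concrete: a FLAT index scores every stored vector against the query (exact but linear in the number of vectors), while HNSW uses a layered graph to find approximate neighbors much faster. The toy sketch below, in plain Python with made-up vectors, mimics what a FLAT (brute-force) k-nearest-neighbor search does; it is not the Redis implementation:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "index" of document vectors (illustrative values only).
index = {
    "doc:1": [0.9, 0.1, 0.0],
    "doc:2": [0.0, 1.0, 0.1],
    "doc:3": [0.8, 0.2, 0.1],
}

def flat_knn(query, k=2):
    # FLAT behavior: score every vector, then keep the k best.
    scored = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [key for key, _ in scored[:k]]
```

HNSW avoids this full scan, which is why it is the usual choice once an index holds more than a few tens of thousands of vectors.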
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-5 gap-4 my-8"> {{< image-card image="images/ai-search.svg" alt="AI Redis icon" title="Vector search guide" url="/develop/ai/search-and-query/query/vector-search" >}} {{< image-card image="images/ai-LLM-memory.svg" alt="LLM memory icon" title="Store memory for LLMs" url="https://redis.io/blog/level-up-rag-apps-with-redis-vector-library/" >}} {{< image-card image="images/ai-brain-2.svg" alt="AI Redis icon" title="Semantic caching for faster, smarter LLM apps" url="https://redis.io/blog/what-is-semantic-caching" >}} {{< image-card image="images/ai-semantic-routing.svg" alt="Semantic routing icon" title="Semantic routing chooses the best tool" url="https://redis.io/blog/level-up-rag-apps-with-redis-vector-library/" >}} {{< image-card image="images/ai-model.svg" alt="AI Redis icon" title="Deploy an enhanced gateway with Redis" url="https://redis.io/blog/ai-gateways-what-are-they-how-can-you-deploy-an-enhanced-gateway-with-redis/" >}} </div>

Quickstarts and recipes are useful when you're building specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent.
Get started with these foundational guides:
Retrieval-augmented generation (RAG) is a technique that enhances an LLM's ability to respond to user queries. The retrieval part of RAG is backed by a vector database, which returns results semantically relevant to a user's query; these results serve as contextual information that augments the LLM's generative capabilities.
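The retrieve-then-augment flow can be sketched in a few lines of plain Python. The passages, embeddings, and prompt template below are all hypothetical; a real application would embed the question with a model, run a KNN query against a Redis vector index, and send the augmented prompt to an LLM:

```python
# Toy corpus: passage text paired with a made-up 2-d embedding.
passages = [
    ("Redis supports vector search over hashes and JSON.", [0.9, 0.1]),
    ("Paris is the capital of France.", [0.1, 0.9]),
]

def retrieve(query_vec, k=1):
    # Rank passages by dot product -- a stand-in for a vector index query.
    ranked = sorted(passages,
                    key=lambda p: sum(q * v for q, v in zip(query_vec, p[1])),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec):
    # Augment the question with retrieved context before calling the LLM.
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Does Redis support vector search?", [0.9, 0.2])
```

The key point is that the LLM never sees the whole corpus: only the top-ranked passages are injected into the prompt, keeping the context window small while grounding the answer.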
Explore our [AI notebooks collection]({{< relref "/develop/ai/notebook-collection" >}}) for comprehensive RAG examples including:
Additional resources:
AI agents can act autonomously to plan and execute tasks for the user.
Need a deeper dive into different use cases and topics?
Explore our comprehensive [ecosystem integrations page]({{< relref "/develop/ai/ecosystem-integrations" >}}) to discover how Redis works with popular AI frameworks, platforms, and tools including:
Watch our [AI video collection]({{< relref "/develop/ai/ai-videos" >}}) featuring practical tutorials and demonstrations on:
See how we stack up against the competition.
See how leaders in the industry are building their RAG apps.