
# Semantic caching with LangCache on Redis Cloud


LangCache is a semantic caching service, available as a REST API, that stores LLM responses for faster, cheaper retrieval. It is built on the Redis vector database. By using semantic caching, you can significantly reduce API costs and lower the average latency of your generative AI applications.

For more information about how LangCache works, see the [LangCache overview]({{< relref "/develop/ai/langcache" >}}).
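The savings come from the cache-aside pattern: look up the prompt in the cache first, and only call the LLM on a miss. The sketch below shows that flow under stated assumptions; the helper names (`search_cache`, `call_llm`, `store_cache`) are illustrative stand-ins, not the LangCache REST API itself. In practice, `search_cache` would query LangCache, which matches prompts by vector similarity rather than exact string equality.

```python
# Cache-aside flow for LLM responses. The in-memory dict below stands in
# for LangCache and uses exact-match lookup for brevity; LangCache itself
# performs semantic (vector-similarity) matching.

def cached_completion(prompt, search_cache, call_llm, store_cache):
    """Return (response, was_cache_hit), consulting the cache first."""
    hit = search_cache(prompt)
    if hit is not None:
        return hit, True              # served from cache: no LLM call, no API cost
    response = call_llm(prompt)
    store_cache(prompt, response)     # warm the cache for future prompts
    return response, False

# In-memory stand-in for the cache service.
_cache = {}
llm_calls = []

def search_cache(prompt):
    return _cache.get(prompt)

def store_cache(prompt, response):
    _cache[prompt] = response

def call_llm(prompt):
    llm_calls.append(prompt)          # track how often the model is actually hit
    return f"model answer for: {prompt}"

first, hit1 = cached_completion("What is Redis?", search_cache, call_llm, store_cache)
second, hit2 = cached_completion("What is Redis?", search_cache, call_llm, store_cache)
# hit1 is False (miss, LLM called); hit2 is True (hit, LLM skipped)
```

Every repeated (or, with LangCache, semantically similar) prompt after the first is served from the cache, so the model is only invoked once here.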

## LLM cost reduction with LangCache

{{< embed-md "langcache-cost-reduction.md" >}}

## Get started with LangCache on Redis Cloud

{{< embed-md "rc-langcache-get-started.md" >}}