TensorZero is an open-source LLMOps platform that unifies:

- an LLM gateway
- observability
- optimization
- evaluations
- experimentation
You can take what you need, adopt incrementally, and complement with other tools. It plays nicely with the OpenAI SDK, OpenTelemetry, and every major LLM provider.
TensorZero is used by companies ranging from frontier AI startups to the Fortune 10 and fuels ~1% of global LLM API spend today.
<p align="center"> <b><a href="https://www.tensorzero.com/" target="_blank">Website</a></b> · <b><a href="https://www.tensorzero.com/docs" target="_blank">Docs</a></b> · <b><a href="https://www.x.com/tensorzero" target="_blank">Twitter</a></b> · <b><a href="https://www.tensorzero.com/slack" target="_blank">Slack</a></b> · <b><a href="https://www.tensorzero.com/discord" target="_blank">Discord</a></b>
<br>
<b><a href="https://www.tensorzero.com/docs/quickstart" target="_blank">Quick Start (5min)</a></b> · <b><a href="https://www.tensorzero.com/docs/deployment/tensorzero-gateway" target="_blank">Deployment Guide</a></b> · <b><a href="https://www.tensorzero.com/docs/gateway/api-reference" target="_blank">API Reference</a></b> · <b><a href="https://www.tensorzero.com/docs/gateway/configuration-reference" target="_blank">Configuration Reference</a></b>
</p><video src="https://github.com/user-attachments/assets/04a8466e-27d8-4189-b305-e7cecb6881ee"></video>
> [!NOTE]
>
> **TensorZero Autopilot**
>
> TensorZero Autopilot is an automated AI engineer powered by TensorZero that analyzes LLM observability data, sets up evals, optimizes prompts and models, and runs A/B tests. It dramatically improves the performance of LLM agents across diverse tasks.
Integrate with TensorZero once and access every major LLM provider.
Anthropic, AWS Bedrock, AWS SageMaker, Azure, DeepSeek, Fireworks, GCP Vertex AI Anthropic, GCP Vertex AI Gemini, Google AI Studio (Gemini API), Groq, Hyperbolic, Mistral, OpenAI, OpenRouter, SGLang, TGI, Together AI, vLLM, and xAI (Grok).
Need something else? TensorZero also supports any OpenAI-compatible API (e.g. Ollama).
You can use TensorZero with any OpenAI SDK (Python, Node, Go, etc.) or OpenAI-compatible client.
Just update the `base_url` and the `model` in your OpenAI-compatible client:

```python
from openai import OpenAI

# Point the client to the TensorZero Gateway
client = OpenAI(base_url="http://localhost:3000/openai/v1", api_key="not-used")

response = client.chat.completions.create(
    # Call any model provider (or TensorZero function)
    model="tensorzero::model_name::anthropic::claude-sonnet-4-6",
    messages=[
        {
            "role": "user",
            "content": "Share a fun fact about TensorZero.",
        }
    ],
)
```
See Quick Start for more information.
Zoom in to debug individual API calls, or zoom out to monitor metrics across models and prompts over time, all using the open-source TensorZero UI.
Send production metrics and human feedback to easily optimize your prompts, models, and inference strategies, using the UI or programmatically.
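To build intuition for the idea (this is a toy sketch in plain Python, not TensorZero's API, and the variant names are made up): attribute each piece of production feedback to the prompt/model variant that produced the inference, then compare variants on their aggregate reward.

```python
from collections import defaultdict
from statistics import mean

# Toy feedback log: each inference records which variant produced it and the
# reward assigned later (e.g. a thumbs-up/down or a downstream task metric).
feedback_log = [
    {"variant": "prompt_v1", "reward": 0.0},
    {"variant": "prompt_v1", "reward": 1.0},
    {"variant": "prompt_v2", "reward": 1.0},
    {"variant": "prompt_v2", "reward": 1.0},
]

def best_variant(log):
    """Group rewards by variant and return the variant with the highest mean."""
    rewards = defaultdict(list)
    for entry in log:
        rewards[entry["variant"]].append(entry["reward"])
    return max(rewards, key=lambda v: mean(rewards[v]))

print(best_variant(feedback_log))  # prompt_v2
```

In practice this feedback also becomes training data for fine-tuning and other optimization recipes, not just variant selection.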
Compare prompts, models, and inference strategies using evaluations powered by heuristics and LLM judges.
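As a rough illustration of the two evaluator styles (a self-contained sketch, not TensorZero's evaluation API; the dataset and judge here are stand-ins): heuristics score outputs with deterministic rules, while an LLM judge applies fuzzier criteria.

```python
# Toy labeled dataset for evaluating a variant's outputs.
dataset = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def exact_match(output, expected):
    """Heuristic evaluator: 1.0 on an exact (case-insensitive) match."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def llm_judge(output, expected):
    """Stand-in for an LLM judge; a real judge would call a model."""
    return 1.0 if expected.lower() in output.lower() else 0.0

def evaluate(variant_outputs, dataset, scorer):
    """Average the scorer over the dataset to get the variant's score."""
    scores = [scorer(o, row["expected"]) for o, row in zip(variant_outputs, dataset)]
    return sum(scores) / len(scores)

outputs = ["4", "The capital is Paris."]
print(evaluate(outputs, dataset, exact_match))  # 0.5
print(evaluate(outputs, dataset, llm_judge))    # 1.0
```

The gap between the two scores shows why both kinds of evaluators are useful: the heuristic penalizes a correct but verbose answer that the judge accepts.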
Ship with confidence with built-in A/B testing, routing, fallbacks, retries, and more.
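The fallback/retry pattern the gateway handles for you looks roughly like this if sketched by hand (a simplified stand-in with fake providers, not the gateway's implementation):

```python
import time

def call_with_fallbacks(providers, prompt, retries=2, backoff=0.0):
    """Try each provider in order; retry transient failures before
    falling back to the next provider. Raises if every provider fails."""
    last_error = None
    for provider in providers:
        for attempt in range(retries):
            try:
                return provider(prompt)
            except Exception as e:
                last_error = e
                time.sleep(backoff * (attempt + 1))  # linear backoff between retries
    raise RuntimeError("all providers failed") from last_error

# Fake providers standing in for real model APIs.
def flaky_primary(prompt):
    raise TimeoutError("primary is down")

def stable_fallback(prompt):
    return f"answer to: {prompt}"

print(call_with_fallbacks([flaky_primary, stable_fallback], "hello"))
# answer to: hello
```

Centralizing this logic in the gateway means every application gets the same resilience without reimplementing it per client.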
Build with an open-source stack well-suited for prototypes but designed from the ground up to support the most complex LLM applications and deployments.
How is TensorZero different from other LLM frameworks?
Can I use TensorZero with ___?
Yes. Every major programming language is supported. It plays nicely with the OpenAI SDK, OpenTelemetry, and every major LLM provider.
Is TensorZero production-ready?
Yes. TensorZero is used by companies ranging from frontier AI startups to the Fortune 10 and powers ~1% of the global LLM API spend today.
Here's a case study: Automating Code Changelogs at a Large Bank with LLMs
How much does TensorZero cost?
TensorZero (LLMOps platform) is 100% self-hosted and open-source.
TensorZero Autopilot (automated AI engineer) is a complementary paid product powered by TensorZero.
Who is building TensorZero?
Our technical team includes a former Rust compiler maintainer, machine learning researchers (Stanford, CMU, Oxford, Columbia) with thousands of citations, and the chief product officer of a decacorn startup. We're backed by the same investors as leading open-source projects (e.g. ClickHouse, CockroachDB) and AI labs (e.g. OpenAI, Anthropic). See our $7.3M seed round announcement and coverage from VentureBeat. We're hiring in NYC.
How do I get started?
You can adopt TensorZero incrementally. Our Quick Start goes from a vanilla OpenAI wrapper to a production-ready LLM application with observability and fine-tuning in just 5 minutes.
Start building today. The Quick Start shows how easy it is to set up an LLM application with TensorZero.
Questions? Ask us on Slack or Discord.
Using TensorZero at work? Email us at [email protected] to set up a Slack or Teams channel with your team (free).
We are working on a series of complete runnable examples illustrating TensorZero's data & learning flywheel.
Optimizing Data Extraction (NER) with TensorZero
This example shows how to use TensorZero to optimize a data extraction pipeline. We demonstrate techniques like fine-tuning and dynamic in-context learning (DICL). In the end, an optimized GPT-4o Mini model outperforms GPT-4o on this task, at a fraction of the cost and latency, using a small amount of training data.
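The core of DICL is selecting which past examples to show the model for each new input. A toy sketch of the retrieval step (using bag-of-words vectors in place of real embeddings; the vocabulary and examples are made up):

```python
import math

def embed(text, vocab):
    """Toy "embedding": bag-of-words counts over a fixed vocabulary.
    A real DICL setup would use a learned embedding model."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def dicl_prompt(query, examples, vocab, k=2):
    """Pick the k past examples most similar to the query and prepend
    them to the prompt as in-context demonstrations."""
    q = embed(query, vocab)
    ranked = sorted(examples, key=lambda ex: cosine(q, embed(ex["input"], vocab)), reverse=True)
    demos = "\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in ranked[:k])
    return f"{demos}\nInput: {query}\nOutput:"

vocab = ["extract", "date", "name", "price", "invoice", "email"]
examples = [
    {"input": "extract the invoice date", "output": "2024-01-01"},
    {"input": "extract the price", "output": "$10"},
    {"input": "extract the sender name from the email", "output": "Alice"},
]
prompt = dicl_prompt("extract the invoice price", examples, vocab, k=2)
print(prompt)
```

Because the demonstrations are chosen at inference time, the prompt adapts to each query as new labeled examples accumulate, with no retraining step.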
Agentic RAG: Multi-Hop Question Answering with LLMs
This example shows how to build a multi-hop retrieval agent using TensorZero. The agent iteratively searches Wikipedia to gather information, and decides when it has enough context to answer a complex question.
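The control flow of such an agent can be sketched in a few lines (a toy version with a stubbed search tool and a stubbed "do I know enough yet?" check; in the real example the LLM makes both decisions):

```python
# Tiny stand-in knowledge base for the Wikipedia search tool.
KB = {
    "Marie Curie": "Marie Curie was born in Warsaw.",
    "Warsaw": "Warsaw is the capital of Poland.",
}

def search(query):
    """Stand-in for a Wikipedia search tool."""
    return KB.get(query, "")

def needs_more_context(context, required_terms):
    """Stand-in for the LLM deciding whether it has enough information."""
    return not all(term in " ".join(context) for term in required_terms)

def multi_hop(question, hops, required_terms, max_hops=5):
    """Iteratively search, accumulating context until the agent decides
    it can answer (or hits the hop limit). A real agent would generate
    each next query with the LLM instead of taking a fixed list."""
    context = []
    for query in hops[:max_hops]:
        context.append(search(query))
        if not needs_more_context(context, required_terms):
            break
    return context

context = multi_hop(
    "In which country was Marie Curie born?",
    hops=["Marie Curie", "Warsaw"],
    required_terms=["Warsaw", "Poland"],
)
print(len(context))  # 2 -- the agent needed a second hop to reach "Poland"
```

The first hop only establishes the birth city, so the agent keeps searching; the second hop links the city to the country, and the loop stops.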
Writing Haikus to Satisfy a Judge with Hidden Preferences
This example fine-tunes GPT-4o Mini to generate haikus tailored to a specific taste. You'll see TensorZero's "data flywheel in a box" in action: better variants lead to better data, and better data leads to better variants. You'll watch the LLM improve as you fine-tune it multiple times.
Image Data Extraction: Multimodal (Vision) Fine-tuning
This example shows how to fine-tune multimodal models (VLMs) like GPT-4o to improve their performance on vision-language tasks. Specifically, we'll build a system that categorizes document images (screenshots of computer science research papers).
Improving LLM Chess Ability with Best-of-N Sampling
This example showcases how best-of-N sampling can significantly enhance an LLM's chess-playing abilities by selecting the most promising moves from multiple generated options.
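The technique itself is simple to state (here as a self-contained sketch with stubbed generator and judge; the move scores are made up, and a deterministic generator stands in for LLM sampling):

```python
import itertools

def best_of_n(generate, judge, n):
    """Best-of-N sampling: draw n candidate responses and keep the one
    the judge scores highest."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=judge)

# Stubs standing in for a chess-playing LLM and a move evaluator.
scores = {"a2a3": 0.2, "e2e4": 0.9, "d2d4": 0.8}
pool = itertools.cycle(["a2a3", "e2e4", "d2d4"])

generate = lambda: next(pool)      # "samples" a candidate move
judge = lambda move: scores[move]  # scores how promising a move is

print(best_of_n(generate, judge, n=3))  # e2e4
```

The quality gain comes entirely from the judge: sampling more candidates only helps if the judge can reliably tell a strong move from a weak one.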
We write about LLM engineering on the TensorZero Blog. Here are some of our favorite posts: