# LlamaIndex LLMs Integration: Baseten
This integration allows you to use Baseten's hosted models with LlamaIndex.
Install the required packages:

```bash
pip install llama-index-llms-baseten
pip install llama-index
```
Baseten offers two main ways to run inference:

- **Model APIs** are public endpoints for popular open-source models (GPT-OSS, Kimi K2, DeepSeek, etc.) where you call a frontier model directly via its slug, e.g. `deepseek-ai/DeepSeek-V3-0324`, and are charged on a per-token basis. You can find the list of supported models here: https://docs.baseten.co/development/model-apis/overview#supported-models.
- **Dedicated deployments** are useful for serving custom models where you want to autoscale production workloads and have fine-grained configuration. You need to deploy a model in your Baseten dashboard and provide the 8-character model ID, e.g. `abcd1234`.

By default, the `model_apis` parameter is set to `True`. To use a dedicated deployment, you must set `model_apis=False` when instantiating the `Baseten` object.
To use Baseten models with LlamaIndex, first initialize the LLM:

```python
from llama_index.llms.baseten import Baseten

# Model APIs: find the model slug here:
# https://docs.baseten.co/development/model-apis/overview#supported-models
llm = Baseten(
    model_id="MODEL_SLUG",
    api_key="YOUR_API_KEY",
    model_apis=True,  # Default, so not strictly necessary
)

# Dedicated deployments: find the model_id in the Baseten dashboard:
# https://app.baseten.co/overview
llm = Baseten(
    model_id="MODEL_ID",
    api_key="YOUR_API_KEY",
    model_apis=False,
)
```
Generate a simple completion:

```python
response = llm.complete("Paul Graham is")
print(response.text)
```
Use chat-style interactions:

```python
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="What is your name"),
]
response = llm.chat(messages)
print(response)
```
Stream completions and chat responses in real time:

```python
# Streaming completion
response = llm.stream_complete("Paul Graham is")
for r in response:
    print(r.delta, end="")

# Streaming chat
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="What is your name"),
]
response = llm.stream_chat(messages)
for r in response:
    print(r.delta, end="")
```
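Beyond direct calls, the LLM can also be set as the global default so that higher-level LlamaIndex components (query engines, chat engines, agents) pick it up automatically. A minimal sketch using LlamaIndex's standard `Settings` object; the model slug and API key are placeholders:

```python
from llama_index.core import Settings
from llama_index.llms.baseten import Baseten

# Use Baseten as the default LLM for any LlamaIndex component
# that does not receive an explicit llm argument.
Settings.llm = Baseten(
    model_id="MODEL_SLUG",
    api_key="YOUR_API_KEY",
)
```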
Baseten supports async operations for long-running inference tasks. The async implementation uses webhooks to deliver results.

Note: Async is only available for dedicated deployments, not for Model APIs. `achat` is not supported because chat-style interaction does not make sense for async operations.
```python
async_llm = Baseten(
    model_id="your_model_id",
    api_key="your_api_key",
    webhook_endpoint="your_webhook_endpoint",
)
response = await async_llm.acomplete("Paul Graham is")
print(response)
```
To check the status of an async request:

```python
import requests

model_id = "your_model_id"
request_id = "your_request_id"
api_key = "your_api_key"

resp = requests.get(
    f"https://model-{model_id}.api.baseten.co/async_request/{request_id}",
    headers={"Authorization": f"Api-Key {api_key}"},
)
print(resp.json())
```
For async operations, results are posted to your provided webhook endpoint. Your endpoint should validate the webhook signature and handle the results appropriately. The results are NOT stored by Baseten.
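For illustration, here is a minimal sketch of such an endpoint using FastAPI. The route path, payload field names, and signature check are assumptions for demonstration only, not Baseten's documented schema; adapt them to the actual webhook payload and signature scheme you receive:

```python
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()


def signature_is_valid(request: Request, body: bytes) -> bool:
    # Placeholder: verify the webhook signature here before trusting
    # the payload (assumption: the exact header name and verification
    # scheme come from Baseten's webhook docs and your configuration).
    return True


@app.post("/baseten-webhook")
async def baseten_webhook(request: Request):
    body = await request.body()
    if not signature_is_valid(request, body):
        raise HTTPException(status_code=401, detail="Invalid signature")

    payload = await request.json()
    # Hypothetical field names for illustration; inspect the real payload
    # to see how the request id and model output are delivered.
    request_id = payload.get("request_id")
    output = payload.get("data")
    print(f"Async result for {request_id}: {output}")

    # Persist the result yourself: Baseten does not store it.
    return {"status": "received"}
```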
For more examples and detailed usage, check out the Baseten Cookbook.
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/llm/baseten.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>