# AI Badgr (Budget/Utility, OpenAI-compatible)

## Installation

To install the required package, run:

```bash
pip install llama-index-llms-aibadgr
```
## Usage

You need to set either the environment variable `AIBADGR_API_KEY` or pass your API key directly in the class constructor. Replace `<your-api-key>` with your actual API key:

```python
from llama_index.llms.aibadgr import AIBadgr
from llama_index.core.llms import ChatMessage

llm = AIBadgr(
    api_key="<your-api-key>",
    model="premium",
)
```
You can generate a chat response by sending a list of `ChatMessage` instances:

```python
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
```
To stream responses, use the `stream_chat` method:

```python
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```
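Each streamed chunk carries a `delta`, the newly generated text since the previous chunk, so concatenating the deltas reconstructs the full response. A minimal self-contained sketch of that pattern (the `Chunk` class here is a stand-in for the integration's real response objects):

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Stand-in for a streamed response chunk; `delta` holds the new text."""

    delta: str


def collect(stream) -> str:
    # Concatenating the deltas reproduces the full response text,
    # which is exactly what the print loop above writes out.
    return "".join(chunk.delta for chunk in stream)


fake_stream = [Chunk("Once "), Chunk("upon "), Chunk("a time.")]
print(collect(fake_stream))  # Once upon a time.
```

The same pattern applies to `stream_complete` below.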
You can also generate completions from a prompt using the `complete` method:

```python
resp = llm.complete("Tell me a joke")
print(resp)
```
To stream completions, use the `stream_complete` method:

```python
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")
```
AI Badgr supports tier-based model names for easy selection:

```python
# Using tier names (recommended)
llm = AIBadgr(model="premium", api_key="your_api_key")
resp = llm.complete("Write a story about a dragon who can code in Rust")
print(resp)
```
You can also use specific model names directly:

```python
llm = AIBadgr(model="llama3-8b-instruct", api_key="your_api_key")
resp = llm.complete("Explain quantum computing")
print(resp)
```
OpenAI model names are accepted and mapped automatically.
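You can picture this mapping as a simple alias table. The sketch below illustrates the idea only; the entries and the `resolve_model` helper are assumptions, not the integration's actual table or API:

```python
# Hypothetical alias table mapping OpenAI-style model names onto
# AI Badgr tiers. The real mapping lives inside the integration;
# these entries are illustrative assumptions.
OPENAI_ALIASES = {
    "gpt-4o": "premium",
    "gpt-3.5-turbo": "basic",
}


def resolve_model(name: str) -> str:
    # Tier names and concrete model names pass through unchanged;
    # recognized OpenAI names are rewritten to a tier.
    return OPENAI_ALIASES.get(name, name)


print(resolve_model("gpt-4o"))   # premium
print(resolve_model("premium"))  # premium
```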
You can configure AI Badgr using environment variables:

- `AIBADGR_API_KEY` - Your API key
- `AIBADGR_BASE_URL` - Custom base URL (default: `https://aibadgr.com/api/v1`)

```bash
export AIBADGR_API_KEY="your_api_key"
export AIBADGR_BASE_URL="https://aibadgr.com/api/v1"
```
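As a sketch of the precedence described above (a value passed to the constructor wins, then the environment variable, then the documented default base URL), using a hypothetical `resolve_config` helper that is not part of the integration's API:

```python
import os

# Documented default base URL for AI Badgr.
DEFAULT_BASE_URL = "https://aibadgr.com/api/v1"


def resolve_config(api_key=None, base_url=None):
    # Hypothetical helper: explicit arguments take precedence over
    # environment variables, which take precedence over the default.
    api_key = api_key or os.environ.get("AIBADGR_API_KEY")
    base_url = base_url or os.environ.get("AIBADGR_BASE_URL", DEFAULT_BASE_URL)
    return api_key, base_url


os.environ["AIBADGR_API_KEY"] = "demo-key"
os.environ.pop("AIBADGR_BASE_URL", None)
print(resolve_config())  # ('demo-key', 'https://aibadgr.com/api/v1')
```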