# LlamaIndex LLMs Integration: CometAPI
## Installation

To install the required package, run:

```shell
pip install llama-index-llms-cometapi
```
## Setup

You can set the API key either as the environment variable `COMETAPI_API_KEY` or pass it directly:

```python
from llama_index.llms.cometapi import CometAPI

# Method 1: Using environment variable
# export COMETAPI_API_KEY="your-api-key"
llm = CometAPI(model="gpt-4o-mini")

# Method 2: Direct API key
llm = CometAPI(
    api_key="your-api-key",
    model="gpt-4o-mini",
    max_tokens=256,
    context_window=4096,
)
```
## Basic Usage

### Chat

```python
from llama_index.core.llms import ChatMessage

message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
```
### Streaming Chat

```python
message = ChatMessage(role="user", content="Tell me a story")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```
### Completion

```python
resp = llm.complete("Tell me a joke")
print(resp)
```
### Streaming Completion

```python
resp = llm.stream_complete("Tell me a story")
for r in resp:
    print(r.delta, end="")
```
## Available Models

CometAPI supports various state-of-the-art models, including:

- `gpt-5-chat-latest`
- `chatgpt-4o-latest`
- `gpt-5-mini`
- `gpt-4o-mini`
- `gpt-4.1-mini`
- `claude-opus-4-1-20250805`
- `claude-sonnet-4-20250514`
- `claude-3-5-haiku-latest`
- `gemini-2.5-pro`
- `gemini-2.5-flash`
- `gemini-2.0-flash`
- `deepseek-v3.1`
- `grok-4-0709`
- `qwen3-30b-a3b`

For the complete list, visit: https://api.cometapi.com/pricing
```python
# Use different models by passing the model name
llm_claude = CometAPI(model="claude-3-5-haiku-latest")
llm_gemini = CometAPI(model="gemini-2.5-flash")
llm_deepseek = CometAPI(model="deepseek-v3.1")

response = llm_claude.complete("Explain quantum computing")
print(response)