docs/en/learn/litellm-removal-guide.mdx
CrewAI supports two paths for connecting to LLM providers: native provider integrations and LiteLLM-based routing. This guide explains how to use CrewAI exclusively with native provider integrations, removing any dependency on LiteLLM.
<Warning>
The `litellm` package was quarantined on PyPI due to a security/reliability incident. If you rely on LiteLLM-dependent providers, you should migrate to native integrations. CrewAI's native integrations give you full functionality without LiteLLM.
</Warning>

The following providers use their own SDKs and work without LiteLLM installed:
<CardGroup cols={2}>
  <Card title="OpenAI" icon="bolt">
    GPT-4o, GPT-4o-mini, o1, o3-mini, and more.
    ```bash
    uv add "crewai[openai]"
    ```
  </Card>
  <Card title="Anthropic" icon="a">
    Claude Sonnet, Claude Haiku, and more.
    ```bash
    uv add "crewai[anthropic]"
    ```
  </Card>
  <Card title="Google Gemini" icon="google">
    Gemini 2.0 Flash, Gemini 2.0 Pro, and more.
    ```bash
    uv add "crewai[gemini]"
    ```
  </Card>
  <Card title="Azure OpenAI" icon="microsoft">
    Azure-hosted OpenAI models.
    ```bash
    uv add "crewai[azure]"
    ```
  </Card>
  <Card title="AWS Bedrock" icon="aws">
    Claude, Llama, Titan, and more via AWS.
    ```bash
    uv add "crewai[bedrock]"
    ```
  </Card>
</CardGroup>

<Info>
If you only use native providers, you **never** need to install `crewai[litellm]`. The base `crewai` package plus your chosen provider extra is all you need.
</Info>

If your code uses model prefixes like these, you're routing through LiteLLM:
| Prefix | Provider | Uses LiteLLM? |
|---|---|---|
| `ollama/` | Ollama | ✅ Yes |
| `groq/` | Groq | ✅ Yes |
| `together_ai/` | Together AI | ✅ Yes |
| `mistral/` | Mistral | ✅ Yes |
| `cohere/` | Cohere | ✅ Yes |
| `huggingface/` | Hugging Face | ✅ Yes |
| `openai/` | OpenAI | ❌ Native |
| `anthropic/` | Anthropic | ❌ Native |
| `gemini/` | Google Gemini | ❌ Native |
| `azure/` | Azure OpenAI | ❌ Native |
| `bedrock/` | AWS Bedrock | ❌ Native |
You can also check whether the `litellm` package is installed in your environment:

```bash
# Using pip
pip show litellm

# Using uv
uv pip show litellm
```
If the command returns package information, LiteLLM is installed in your environment.
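The same check can be done from Python with `importlib`, which is a minimal sketch that works in any environment:

```python
import importlib.util

# True if the litellm package is importable in the current environment
print("litellm installed:", importlib.util.find_spec("litellm") is not None)
```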
Look at your `pyproject.toml` for `crewai[litellm]`:
```toml
# If you see this, you have LiteLLM as a dependency
dependencies = [
    "crewai[litellm]>=0.100.0",  # ← Uses LiteLLM
]

# Change to a native provider extra instead
dependencies = [
    "crewai[openai]>=0.100.0",  # ← Native, no LiteLLM
]
```
Find all `LLM()` calls and model strings in your code:
```bash
# Search your codebase for LLM model strings
grep -r "LLM(" --include="*.py" .
grep -r "llm=" --include="*.yaml" .
grep -r "llm:" --include="*.yaml" .
```
Once you've found them, swap each LiteLLM-routed model string for a native one. To migrate to OpenAI:

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="groq/llama-3.1-70b")

# After (Native):
llm = LLM(model="openai/gpt-4o")
```
```bash
# Install
uv add "crewai[openai]"
# Set your API key
export OPENAI_API_KEY="sk-..."
```
To migrate to Anthropic:

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="together_ai/meta-llama/Meta-Llama-3.1-70B")

# After (Native):
llm = LLM(model="anthropic/claude-sonnet-4-20250514")
```
```bash
# Install
uv add "crewai[anthropic]"
# Set your API key
export ANTHROPIC_API_KEY="sk-ant-..."
```
To migrate to Google Gemini:

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="mistral/mistral-large-latest")

# After (Native):
llm = LLM(model="gemini/gemini-2.0-flash")
```
```bash
# Install
uv add "crewai[gemini]"
# Set your API key
export GEMINI_API_KEY="..."
```
To use Azure OpenAI natively:

```python
from crewai import LLM

# After (Native):
llm = LLM(
    model="azure/your-deployment-name",
    api_key="your-azure-api-key",
    base_url="https://your-resource.openai.azure.com",
    api_version="2024-06-01"
)
```
```bash
# Install
uv add "crewai[azure]"
```
To use AWS Bedrock natively:

```python
from crewai import LLM

# After (Native):
llm = LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    aws_region_name="us-east-1"
)
```
```bash
# Install
uv add "crewai[bedrock]"
# Configure AWS credentials
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-east-1"
```
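Before running the crew, you can optionally confirm that those AWS credentials resolve. A minimal sketch using boto3's STS identity check; it assumes `boto3` is importable in your environment (it is typically pulled in by the Bedrock integration):

```python
# Optional sanity check: confirm AWS credentials resolve before running the crew
import boto3

identity = boto3.client("sts").get_caller_identity()
print(f"Authenticated as: {identity['Arn']}")
```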
If you're using Ollama and want to keep using it, you can connect via Ollama's OpenAI-compatible API:
```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="ollama/llama3")

# After (OpenAI-compatible mode, no LiteLLM needed):
llm = LLM(
    model="openai/llama3",
    base_url="http://localhost:11434/v1",
    api_key="ollama"  # Ollama doesn't require a real API key
)
```
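The locally served model is then attached to an agent like any other LLM. A minimal sketch, with illustrative role/goal text:

```python
from crewai import Agent

# Any agent can use the locally served model via the OpenAI-compatible LLM above
local_agent = Agent(
    role="Summarizer",                   # illustrative role
    goal="Summarize documents locally",  # illustrative goal
    backstory="Runs entirely on a local Ollama model.",
    llm=llm,
)
```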
If you define agents in YAML, update the `llm:` field the same way:

```yaml
# Before (LiteLLM provider):
researcher:
  role: Research Specialist
  goal: Conduct research
  backstory: A dedicated researcher
  llm: groq/llama-3.1-70b  # ← LiteLLM

# After (Native provider):
researcher:
  role: Research Specialist
  goal: Conduct research
  backstory: A dedicated researcher
  llm: openai/gpt-4o  # ← Native
```
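If your project uses the standard `crewai create` scaffolding, that `llm:` value is applied when the agent is built from its YAML config entry. A minimal sketch of that wiring; the class and method names are illustrative:

```python
from crewai import Agent
from crewai.project import CrewBase, agent

@CrewBase
class ResearchCrew:
    # Points at the YAML file shown above
    agents_config = "config/agents.yaml"

    @agent
    def researcher(self) -> Agent:
        # The llm: field (e.g. openai/gpt-4o) comes from the YAML entry
        return Agent(config=self.agents_config["researcher"])
```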
Once you've migrated all your model references:
```bash
# Remove litellm from your project
uv remove litellm

# Or if using pip
pip uninstall litellm
```

Then update your `pyproject.toml`: change `crewai[litellm]` to your provider extra (e.g., `crewai[openai]`, `crewai[anthropic]`, `crewai[gemini]`).
Run your project and confirm everything works:
```bash
# Run your crew
crewai run

# Or run your tests
uv run pytest
```
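For a quicker check before a full run, you can exercise the migrated model directly. A minimal smoke test using the LLM's `call` method; the model string is whichever native provider you chose:

```python
from crewai import LLM

# One round-trip through the native provider, no LiteLLM involved
llm = LLM(model="openai/gpt-4o-mini")
print(llm.call("Reply with the single word: ok"))
```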
Here are common migration paths from LiteLLM-dependent providers to native ones:
```python
from crewai import LLM

# ─── LiteLLM providers → Native alternatives ────────────────────

# Groq → OpenAI or Anthropic
# llm = LLM(model="groq/llama-3.1-70b")
llm = LLM(model="openai/gpt-4o-mini")          # Fast & affordable
llm = LLM(model="anthropic/claude-haiku-3-5")  # Fast & affordable

# Together AI → OpenAI or Gemini
# llm = LLM(model="together_ai/meta-llama/Meta-Llama-3.1-70B")
llm = LLM(model="openai/gpt-4o")               # High quality
llm = LLM(model="gemini/gemini-2.0-flash")     # Fast & capable

# Mistral → Anthropic or OpenAI
# llm = LLM(model="mistral/mistral-large-latest")
llm = LLM(model="anthropic/claude-sonnet-4-20250514")  # High quality

# Ollama → OpenAI-compatible (keep using local models)
# llm = LLM(model="ollama/llama3")
llm = LLM(
    model="openai/llama3",
    base_url="http://localhost:11434/v1",
    api_key="ollama"
)
```
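Whichever mapping you pick, the migrated LLM plugs into a crew the same way as before. A minimal end-to-end sketch; the agent and task text are illustrative:

```python
from crewai import Agent, Task, Crew, LLM

llm = LLM(model="openai/gpt-4o-mini")  # any of the native options above

writer = Agent(
    role="Writer",
    goal="Write a short summary",
    backstory="A concise technical writer.",
    llm=llm,
)

task = Task(
    description="Summarize why native provider integrations remove the LiteLLM dependency.",
    expected_output="A two-sentence summary.",
    agent=writer,
)

crew = Crew(agents=[writer], tasks=[task])
print(crew.kickoff())
```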