docs/en/guides/migration/upgrading-crewai.mdx
CrewAI releases ship new capabilities regularly. This guide walks you through the practical steps to keep your installation up to date — both the CLI and your project's virtual environment.
If you're starting fresh, see Installation. If you're coming from another framework, see Migrating from LangGraph.
CrewAI lives in two places on your machine, and they upgrade independently:
| What | How it's installed | How to upgrade |
|---|---|---|
| The global `crewai` CLI | `uv tool install crewai` | `uv tool install crewai --upgrade` |
| The project venv (what your code runs) | `crewai install` / `uv sync` | `uv add "crewai[...]>=X.Y.Z"`, then `crewai install` |
These can — and often do — get out of sync. Running `crewai --version` tells you the CLI version. Running `uv pip show crewai` inside your project tells you the venv version. If they differ, that's normal; what matters for your running code is the venv version.
## `crewai install` Alone Doesn't Upgrade

`crewai install` is a thin wrapper around `uv sync`. It installs exactly what the current `uv.lock` file says — it does not bump any version constraints.
If your `pyproject.toml` says `crewai>=1.11.1` and the lock file resolved to 1.11.1, running `crewai install` will keep you on 1.11.1 forever, even if 1.14.4 is available.
To actually upgrade, you need to:

1. Bump the version constraint in `pyproject.toml`
2. Re-resolve `uv.lock`
3. Sync the venv

`uv add` does all three in one shot.
```bash
# Bump the constraint and re-lock in one command
uv add "crewai[tools]>=1.14.4"

# Sync the venv (crewai install calls uv sync under the hood)
crewai install

# Verify
uv pip show crewai
# → Version: 1.14.4
```
Replace `[tools]` with whatever extras your project uses (e.g. `[tools,anthropic]`). Check your `pyproject.toml` dependencies list if you're unsure.
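If you'd rather not use `uv add`, a more granular flow is to edit the constraint by hand and re-lock. A sketch, assuming a `uv` version that supports `--upgrade-package`:

```bash
# Alternative: bump the constraint in pyproject.toml yourself, then
uv lock --upgrade-package crewai   # re-resolve just crewai in uv.lock
crewai install                     # sync the venv to the new lock
```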
## Upgrading the Global CLI

The global CLI is separate from your project. Upgrade it with:
```bash
uv tool install crewai --upgrade
```
If your shell warns about PATH after the upgrade, refresh it:
```bash
uv tool update-shell
```
This does not touch your project's venv — you still need `uv add` + `crewai install` inside the project.
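If you need the CLI at a specific version rather than the latest, `uv tool install` accepts a standard version specifier. The version below is illustrative:

```bash
# Pin the global CLI to an exact release
uv tool install "crewai==1.14.4"
```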
## Checking Both Versions

```bash
# Global CLI version
crewai --version

# Project venv version
uv pip show crewai | grep Version
```
They don't need to match — but your project venv version is what matters for runtime behavior.
<Note>
CrewAI requires `Python >=3.10, <3.14`. If `uv` was installed against an older interpreter, recreate the project venv with a supported Python before running `crewai install`.
</Note>
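If you do need to rebuild the venv on a newer interpreter, a minimal sequence with `uv`-managed Python might look like this (the version number is illustrative):

```bash
uv python install 3.12   # fetch a supported interpreter
uv venv --python 3.12    # recreate the project venv with it
crewai install           # re-sync dependencies into the new venv
```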
## Breaking Changes to Watch For

Most upgrades only require small adjustments. The areas below are the ones that break silently or with confusing tracebacks.

### `BaseTool`

The canonical import location for tools is `crewai.tools`. Older paths still surface in tutorials but should be updated.
```python
# Before
from crewai_tools import BaseTool
from crewai.agents.tools import tool

# After
from crewai.tools import BaseTool, tool
```
The `@tool` decorator and `BaseTool` subclass both live in `crewai.tools`. `AgentFinish` and other internal-agent symbols are no longer part of the public surface — if you were importing them, switch to event listeners or `Task` callbacks instead.
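For example, a minimal sketch of a `Task` callback. The `callback` parameter and the output object's `.raw` attribute reflect recent CrewAI releases; verify against your installed version:

```python
from crewai import Agent, Task

def on_task_done(output):
    # Called with the task's output object once the task completes;
    # .raw holds the final text.
    print(f"Task finished: {output.raw[:80]}")

reviewer = Agent(
    role="Reviewer",
    goal="Check drafts for factual errors",
    backstory="A meticulous, detail-oriented editor.",
)

review = Task(
    description="Review the draft for factual errors",
    expected_output="A bullet list of corrections",
    agent=reviewer,
    callback=on_task_done,
)
```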
### Agent parameter changes

```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find authoritative sources on {topic}",
    backstory="You are a careful, source-driven researcher.",
    llm="gpt-4o-mini",       # string model name OR an LLM object
    verbose=True,            # bool, not an int level
    max_iter=15,             # default has changed across versions — set explicitly
    allow_delegation=False,
)
```
- `llm` accepts either a string model name (resolved via the configured provider) or an `LLM` object for fine-grained control (see the sketch below).
- `verbose` is a plain bool. Passing an integer no longer toggles log levels.
- `max_iter` defaults have shifted between releases. If your agent silently stops looping after the first tool call, set `max_iter` explicitly.
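When a bare string isn't enough, pass an `LLM` object instead. A minimal sketch — the `LLM` class is exported from the top-level `crewai` package in recent releases, and the parameters shown are illustrative:

```python
from crewai import Agent, LLM

llm = LLM(
    model="gpt-4o-mini",
    temperature=0.2,   # fine-grained control a plain string can't express
)

agent = Agent(
    role="Researcher",
    goal="Find authoritative sources on {topic}",
    backstory="You are a careful, source-driven researcher.",
    llm=llm,           # LLM object instead of a string model name
)
```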
### Crew parameters

```python
from crewai import Crew, Process

crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,  # or Process.hierarchical
    memory=True,
    cache=True,
    embedder={"provider": "openai", "config": {"model": "text-embedding-3-small"}},
)
```
- `process=Process.hierarchical` requires either `manager_llm=` or `manager_agent=`. Without one, kickoff raises at validation time (see the sketch below).
- `memory=True` with a non-default embedding provider needs an `embedder` dict — see Memory & embedder config below.
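For the hierarchical case, a minimal sketch (the agent and task variables are assumed to be defined elsewhere):

```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],       # defined elsewhere
    tasks=[research_task, write_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",              # or manager_agent=some_manager_agent
)
```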
### Task structured output

Use `output_pydantic`, `output_json`, or `output_file` to coerce a task's result into a typed shape:

```python
from pydantic import BaseModel

from crewai import Task

class Article(BaseModel):
    title: str
    body: str

write = Task(
    description="Write an article about {topic}",
    expected_output="A short article with a title and body",
    agent=writer,
    output_pydantic=Article,  # the class, NOT an instance
    output_file="output/article.md",
)
```
`output_pydantic` takes the class itself. Passing `Article(title="", body="")` is a common mistake and fails with a confusing validation error.
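After kickoff, the typed result is available on the output object. A minimal sketch, assuming the crew's final task is the one with `output_pydantic` set:

```python
result = crew.kickoff(inputs={"topic": "open-source LLMs"})

article = result.pydantic   # an Article instance
print(article.title)
```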
### Memory & embedder config

If `memory=True` and you're not using the default OpenAI embeddings, you must pass an `embedder`:
```python
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
)
```
Set the relevant provider credentials (`OPENAI_API_KEY`, `OLLAMA_HOST`, etc.) in your `.env` file. Memory storage paths are project-local by default — delete the project's memory directory if you change embedders, since dimensions don't mix.
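Recent CLI versions also expose a reset command for this; the supported flags vary by version, so check the help output first. A sketch:

```bash
crewai reset-memories --help   # list the flags your version supports
crewai reset-memories --all    # clear all stored memories
```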