Read in Chinese | Community | Installation | Examples | Paper | Citation | Contributing | CAMEL-AI |
</h4> <div align="center" style="background-color: #f0f7ff; padding: 10px; border-radius: 5px; margin: 15px 0;"> <h3 style="color: #1e88e5; margin: 0;"> 🏆 OWL achieves <span style="color: #d81b60; font-weight: bold; font-size: 1.2em;">69.09</span> average score on the GAIA benchmark and ranks <span style="color: #d81b60; font-weight: bold; font-size: 1.2em;">#1</span> among open-source frameworks! 🏆 </h3> </div> <div align="center">🦉 OWL is a cutting-edge framework for multi-agent collaboration that pushes the boundaries of task automation, built on top of the CAMEL-AI Framework.
Our vision is to revolutionize how AI agents collaborate to solve real-world tasks. By leveraging dynamic agent interactions, OWL enables more natural, efficient, and robust task automation across diverse domains.
If you find this repo useful, please consider citing our work (citation).
</div> </div> <!-- # Key Features -->Join our community and see your innovative ideas tackled by cutting-edge AI.
https://github.com/user-attachments/assets/2a2a825d-39ea-45c5-9ba1-f9d58efbc372
This video demonstrates how to install OWL locally and showcases its capabilities as a cutting-edge framework for multi-agent collaboration: https://www.youtube.com/watch?v=8XlqVyAZOr8
Before installing OWL, ensure you have Python installed (version 3.10, 3.11, or 3.12 is supported):
Note for GAIA Benchmark Users: When running the GAIA benchmark evaluation, please use the `gaia58.18` branch, which includes a customized version of the CAMEL framework in the `owl/camel` directory. Compared to the standard CAMEL installation, this version contains enhanced toolkits with improved stability, optimized specifically for the GAIA benchmark.
# Check if Python is installed
python --version
# If not installed, download and install from https://www.python.org/downloads/
# For macOS users with Homebrew:
brew install [email protected]
# For Ubuntu/Debian:
sudo apt update
sudo apt install python3.10 python3.10-venv python3-pip
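The supported-version check can also be scripted. A minimal sketch (not part of the OWL codebase, just a convenience):

```python
import sys

def owl_python_supported(version_info=None) -> bool:
    """Return True if the interpreter is in OWL's supported range (3.10-3.12)."""
    major, minor = (version_info or sys.version_info)[:2]
    return (major, minor) in {(3, 10), (3, 11), (3, 12)}

print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"{'supported' if owl_python_supported() else 'not supported'} by OWL")
```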
OWL supports multiple installation methods to fit your workflow preferences.
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Install uv if you don't have it already
pip install uv
# Create a virtual environment and install dependencies
uv venv .venv --python=3.10
# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate
# Install CAMEL with all dependencies
uv pip install -e .
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Create a virtual environment
# For Python 3.10 (also works with 3.11, 3.12)
python3.10 -m venv .venv
# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate
# Install from requirements.txt
pip install -r requirements.txt --use-pep517
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Create a conda environment
conda create -n owl python=3.10
# Activate the conda environment
conda activate owl
# Option 1: Install as a package (recommended)
pip install -e .
# Option 2: Install from requirements.txt
pip install -r requirements.txt --use-pep517
# This option downloads a ready-to-use image from Docker Hub
# Fastest and recommended for most users
docker compose up -d
# Run OWL inside the container
docker compose exec owl bash
cd .. && source .venv/bin/activate
playwright install-deps
xvfb-python examples/run.py
# For users who need to customize the Docker image or cannot access Docker Hub:
# 1. Open docker-compose.yml
# 2. Comment out the "image: mugglejinx/owl:latest" line
# 3. Uncomment the "build:" section and its nested properties
# 4. Then run:
docker compose up -d --build
# Run OWL inside the container
docker compose exec owl bash
cd .. && source .venv/bin/activate
playwright install-deps
xvfb-python examples/run.py
# Navigate to container directory
cd .container
# Make the script executable and build the Docker image
chmod +x build_docker.sh
./build_docker.sh
# Run OWL with your question
./run_in_docker.sh "your question"
OWL requires various API keys to interact with different services.
You can set environment variables directly in your terminal:
macOS/Linux (Bash/Zsh):
export OPENAI_API_KEY="your-openai-api-key-here"
# Add other required API keys as needed
Windows (Command Prompt):
set OPENAI_API_KEY=your-openai-api-key-here
Windows (PowerShell):
$env:OPENAI_API_KEY = "your-openai-api-key-here"
Note: Environment variables set directly in the terminal will only persist for the current session.
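To make keys persist across sessions, you can add the export lines to your shell profile instead. The exact file depends on your shell; `~/.bashrc` and `~/.zshrc` are common choices:

```shell
# ~/.bashrc or ~/.zshrc — loaded by every new shell session
export OPENAI_API_KEY="your-openai-api-key-here"
# Add other required API keys as needed
```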
.env File
If you prefer using a .env file instead, you can:
Copy and Rename the Template:
# For macOS/Linux
cd owl
cp .env_template .env
# For Windows
cd owl
copy .env_template .env
Alternatively, you can manually create a new file named .env in the owl directory and copy the contents from .env_template.
Configure Your API Keys:
Open the .env file in your preferred text editor and insert your API keys in the corresponding fields.
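OWL's example scripts typically load this file for you via a dotenv-style loader. For illustration, a minimal hand-rolled parser for simple `KEY=VALUE` lines might look like this — a sketch, not the loader OWL actually uses:

```python
import os

def load_env_file(path: str) -> dict:
    """Parse simple KEY=VALUE lines from a .env-style file into os.environ."""
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            key = key.strip()
            value = value.strip().strip('"').strip("'")
            os.environ[key] = value
            loaded[key] = value
    return loaded
```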
Note: For the minimal example (`examples/run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
If using MCP Desktop Commander within Docker, run:
npx -y @wonderwhy-er/desktop-commander setup --force-file-protocol
For more detailed Docker usage instructions, including cross-platform support, optimized configurations, and troubleshooting, please refer to DOCKER_README.md.
After installation and setting up your environment variables, you can start using OWL right away:
python examples/run.py
Tool Calling: OWL requires models with robust tool calling capabilities to interact with various toolkits. Models must be able to understand tool descriptions, generate appropriate tool calls, and process tool outputs.
Multimodal Understanding: For tasks involving web interaction, image analysis, or video processing, models with multimodal capabilities are required to interpret visual content and context.
For information on configuring AI models, please refer to our CAMEL models documentation.
Note: For optimal performance, we strongly recommend using OpenAI models (GPT-4 or later versions). Our experiments show that other models may result in significantly lower performance on complex tasks and benchmarks, especially those requiring advanced multi-modal understanding and tool use.
OWL supports various LLM backends, though capabilities may vary depending on the model's tool calling and multimodal abilities. You can use the following scripts to run with different models:
# Run with Claude model
python examples/run_claude.py
# Run with Qwen model
python examples/run_qwen_zh.py
# Run with Deepseek model
python examples/run_deepseek_zh.py
# Run with other OpenAI-compatible models
python examples/run_openai_compatible_model.py
# Run with Gemini model
python examples/run_gemini.py
# Run with Azure OpenAI
python examples/run_azure_openai.py
# Run with Ollama
python examples/run_ollama.py
For a simpler version that only requires an LLM API key, you can try our minimal example:
python examples/run_mini.py
You can run OWL agent with your own task by modifying the examples/run.py script:
# Define your own task
task = "Task description here."
society = construct_society(task)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")
For uploading files, simply provide the file path along with your question:
# Task with a local file (e.g., file path: `tmp/example.docx`)
task = "What is in the given DOCX file? Here is the file path: tmp/example.docx"
society = construct_society(task)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")
OWL will then automatically invoke document-related tools to process the file and extract the answer.
Here are some example tasks you can try with OWL (see the examples directory for runnable scripts):
OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources:
Before using MCP, you need to install Node.js first.
For Windows: download the official Node.js installer and check the "Add to PATH" option during installation.
# For Ubuntu/Debian:
sudo apt update
sudo apt install nodejs npm -y
# For macOS with Homebrew:
brew install node
npm install -g @executeautomation/playwright-mcp-server
npx playwright install-deps
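MCP servers are usually wired up through a JSON configuration. An entry for the Playwright server conventionally looks like the following; the field names follow the common MCP config convention, so check the repo's example configuration for the exact schema OWL expects:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@executeautomation/playwright-mcp-server"]
    }
  }
}
```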
Try our comprehensive MCP examples:
examples/run_mcp.py - Basic MCP functionality demonstration (local call, requires dependencies)
examples/run_mcp_sse.py - Example using the SSE protocol (uses remote services, no dependencies)
Important: Effective use of toolkits requires models with strong tool calling capabilities. For multimodal toolkits (Web, Image, Video), models must also have multimodal understanding abilities.
OWL supports various toolkits that can be customized by modifying the tools list in your script:
# Configure toolkits
tools = [
    *BrowserToolkit(headless=False).get_tools(),  # Browser automation
    *VideoAnalysisToolkit(model=models["video"]).get_tools(),  # Video understanding
    *AudioAnalysisToolkit().get_tools(),  # Requires OpenAI Key
    *CodeExecutionToolkit(sandbox="subprocess").get_tools(),  # Code execution
    *ImageAnalysisToolkit(model=models["image"]).get_tools(),  # Image understanding
    SearchToolkit().search_duckduckgo,
    SearchToolkit().search_google,  # Comment out if unavailable
    SearchToolkit().search_wiki,
    SearchToolkit().search_bocha,
    SearchToolkit().search_baidu,
    *ExcelToolkit().get_tools(),  # Excel file processing
    *DocumentProcessingToolkit(model=models["document"]).get_tools(),  # Document parsing
    *FileWriteToolkit(output_dir="./").get_tools(),  # File writing
]
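The `*` in the list above is ordinary Python unpacking: each `get_tools()` call returns a list of tool functions, and `*` splices them into the single flat `tools` list. A stand-in illustration (`DemoToolkit` is hypothetical, not a CAMEL class):

```python
class DemoToolkit:
    """Hypothetical stand-in: like CAMEL toolkits, get_tools() returns a list of tools."""
    def __init__(self, name: str):
        self.name = name

    def get_tools(self) -> list:
        return [f"{self.name}.tool_a", f"{self.name}.tool_b"]

tools = [
    *DemoToolkit("browser").get_tools(),  # splices in two entries
    "search_wiki",                        # a single tool added directly
    *DemoToolkit("excel").get_tools(),    # two more entries
]
print(tools)  # one flat list of five tool entries
```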
Key toolkits include:
Additional specialized toolkits: ArxivToolkit, GitHubToolkit, GoogleMapsToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, RedditToolkit, WeatherToolkit, and more. For a complete list, see the CAMEL toolkits documentation.
To customize available tools:
# 1. Import toolkits
from camel.toolkits import BrowserToolkit, SearchToolkit, CodeExecutionToolkit
# 2. Configure tools list
tools = [
*BrowserToolkit(headless=True).get_tools(),
SearchToolkit().search_wiki,
*CodeExecutionToolkit(sandbox="subprocess").get_tools(),
]
# 3. Pass to assistant agent
assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}
Selecting only necessary toolkits optimizes performance and reduces resource usage.
# Start the Chinese version
python owl/webapp_zh.py
# Start the English version
python owl/webapp.py
# Start the Japanese version
python owl/webapp_jp.py
The web interface is built using Gradio and runs locally on your machine. No data is sent to external servers beyond what's required for the model API calls you configure.
To reproduce OWL's GAIA benchmark score:
We also provide an enhanced OWL in the main branch, so you can directly benefit from upgraded toolkits and increased stability even without switching branches.
For the original GAIA-specific performance, we recommend our gaia69 branch.
When running the benchmark evaluation:
Switch to the gaia69 branch:
git checkout gaia69
Run the evaluation script:
python run_gaia_workforce_claude.py
This will execute the same configuration that achieved our top-ranking performance on the GAIA benchmark.
The source code is licensed under Apache 2.0.
We welcome contributions from the community! Here's how you can help:
Current Issues Open for Contribution:
To take on an issue, simply leave a comment stating your interest.
Join us (Discord or WeChat) in pushing the boundaries of finding the scaling laws of agents.
Join us for further discussions!
Q: Why don't I see Chrome running locally after starting the example script?
A: If OWL determines that a task can be completed using non-browser tools (such as search or code execution), the browser will not be launched. The browser window will only appear when OWL determines that browser-based interaction is necessary.
Q: Which Python version should I use?
A: OWL supports Python 3.10, 3.11, and 3.12.
Q: How can I contribute to the project?
A: See our Contributing section for details on how to get involved. We welcome contributions of all kinds, from code improvements to documentation updates.
Q: Which CAMEL version should I use to replicate the role-playing results?
A: We provide a modified version of CAMEL (owl/camel) in the gaia58.18 branch. Please make sure you use this CAMEL version for your experiments.
Q: Why are my experiment results lower than the reported numbers?
A: Since the GAIA benchmark evaluates LLM agents in a realistic world, it introduces a significant amount of randomness. Based on user feedback, one of the most common issues for replication is, for example, agents being blocked on certain webpages due to network reasons. We have uploaded a keywords matching script to help quickly filter out these errors here. You can also check this technical report for more details when evaluating LLM agents in realistic open-world environments.
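Such a keyword-matching filter is straightforward. A sketch of the idea (the keywords below are illustrative assumptions, not the actual list used by the repo's script):

```python
# Illustrative keywords for network/blocking failures; the repo's script defines its own list
ERROR_KEYWORDS = ["403 forbidden", "access denied", "captcha", "timed out", "connection refused"]

def is_environment_error(agent_log: str) -> bool:
    """True if the log looks like a network/blocking failure rather than a reasoning error."""
    log_lower = agent_log.lower()
    return any(keyword in log_lower for keyword in ERROR_KEYWORDS)
```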
OWL is built on top of the CAMEL Framework. Here's how you can explore the CAMEL source code and understand how it works with OWL:
# Clone the CAMEL repository
git clone https://github.com/camel-ai/camel.git
cd camel
If you find this repo useful, please cite:
@misc{hu2025owl,
title={OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation},
author={Mengkang Hu and Yuhang Zhou and Wendong Fan and Yuzhou Nie and Bowei Xia and Tao Sun and Ziyu Ye and Zhaoxuan Jin and Yingru Li and Qiguang Chen and Zeyu Zhang and Yifeng Wang and Qianshuo Ye and Bernard Ghanem and Ping Luo and Guohao Li},
year={2025},
eprint={2505.23885},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2505.23885},
}