deploy/docker/README.md
Before we dive in, make sure you have:
- docker compose (usually bundled with Docker Desktop)
- git for cloning the repository

💡 Pro tip: Run docker info to check your Docker installation and available resources.
We offer several ways to get the Crawl4AI server running. The quickest way is to use our pre-built Docker Hub images.
Pull and run images directly from Docker Hub without building locally.
Our latest stable release is 0.8.5. Images are built with multi-arch manifests, so Docker automatically pulls the correct version for your system.
# Pull the latest stable version (0.8.5)
docker pull unclecode/crawl4ai:0.8.5
# Or use the latest tag (currently points to 0.8.5)
docker pull unclecode/crawl4ai:latest
If you plan to use LLMs, create a .llm.env file in your working directory:
# Create a .llm.env file with your API keys
cat > .llm.env << EOL
# OpenAI
OPENAI_API_KEY=sk-your-key
# Anthropic
ANTHROPIC_API_KEY=your-anthropic-key
# Other providers as needed
# DEEPSEEK_API_KEY=your-deepseek-key
# GROQ_API_KEY=your-groq-key
# TOGETHER_API_KEY=your-together-key
# MISTRAL_API_KEY=your-mistral-key
# GEMINI_API_TOKEN=your-gemini-token
EOL
🔑 Note: Keep your API keys secure! Never commit .llm.env to version control.
Basic run:
docker run -d \
-p 11235:11235 \
--name crawl4ai \
--shm-size=1g \
unclecode/crawl4ai:0.8.5
With LLM support:
# Make sure .llm.env is in the current directory
docker run -d \
-p 11235:11235 \
--name crawl4ai \
--env-file .llm.env \
--shm-size=1g \
unclecode/crawl4ai:0.8.5
The server will be available at http://localhost:11235. Visit /playground to access the interactive testing interface.
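To verify the server is up, you can hit the health endpoint:
curl http://localhost:11235/health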
When you're done, stop and remove the container:
docker stop crawl4ai && docker rm crawl4ai
Image tags follow the pattern unclecode/crawl4ai:LIBRARY_VERSION[-SUFFIX] (e.g., 0.7.0-r1):

- LIBRARY_VERSION: The semantic version of the core crawl4ai Python library
- SUFFIX: Optional tag for release candidates and revisions (e.g., r1)
- latest: Points to the most recent stable version
- Each tag is a multi-arch manifest covering linux/amd64 and linux/arm64, so one tag works on both architectures

Docker Compose simplifies building and running the service, especially for local development and testing.
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
If you plan to use LLMs, copy the example environment file and add your API keys. This file should be in the project root directory.
# Make sure you are in the 'crawl4ai' root directory
cp deploy/docker/.llm.env.example .llm.env
# Now edit .llm.env and add your API keys
Flexible LLM Provider Configuration:
The Docker setup now supports flexible LLM provider configuration through three methods:
Environment Variable (Highest Priority): Set LLM_PROVIDER to override the default
export LLM_PROVIDER="anthropic/claude-3-opus"
# Or in your .llm.env file:
# LLM_PROVIDER=anthropic/claude-3-opus
API Request Parameter: Specify provider per request
{
"url": "https://example.com",
"provider": "groq/mixtral-8x7b"
}
Config File Default: Falls back to config.yml (default: openai/gpt-4o-mini)
The system automatically selects the appropriate API key based on the provider.
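For example, to start the Docker Hub image with a different default provider, you can pass the variable directly to the container (a sketch; the model name is illustrative and you can equally put LLM_PROVIDER in .llm.env):
docker run -d \
  -p 11235:11235 \
  --name crawl4ai \
  --env-file .llm.env \
  -e LLM_PROVIDER="anthropic/claude-3-opus" \
  --shm-size=1g \
  unclecode/crawl4ai:0.8.5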
The docker-compose.yml file in the project root provides a simplified approach that automatically handles architecture detection using buildx.
Run Pre-built Image from Docker Hub:
# Pulls and runs the 0.8.5 release from Docker Hub
# Automatically selects the correct architecture
IMAGE=unclecode/crawl4ai:0.8.5 docker compose up -d
Build and Run Locally:
# Builds the image locally using Dockerfile and runs it
# Automatically uses the correct architecture for your machine
docker compose up --build -d
Customize the Build:
# Build with all features (includes torch and transformers)
INSTALL_TYPE=all docker compose up --build -d
# Build with GPU support (for AMD64 platforms)
ENABLE_GPU=true docker compose up --build -d
The server will be available at http://localhost:11235.
# Stop the service
docker compose down
If you prefer not to use Docker Compose, you can build and run the image manually for direct control over the process.
Follow steps 1 and 2 from the Docker Compose section above (clone repo, cd crawl4ai, create .llm.env in the root).
Use docker buildx to build the image. Crawl4AI now uses buildx to handle multi-architecture builds automatically.
# Make sure you are in the 'crawl4ai' root directory
# Build for the current architecture and load it into Docker
docker buildx build -t crawl4ai-local:latest --load .
# Or build for multiple architectures (useful for publishing)
# Note: multi-platform builds generally need --push (to a registry) instead of --load
docker buildx build --platform linux/amd64,linux/arm64 -t yourname/crawl4ai:latest --push .
# Build with additional options
docker buildx build \
--build-arg INSTALL_TYPE=all \
--build-arg ENABLE_GPU=false \
-t crawl4ai-local:latest --load .
Basic run (no LLM support):
docker run -d \
-p 11235:11235 \
--name crawl4ai-standalone \
--shm-size=1g \
crawl4ai-local:latest
With LLM support:
# Make sure .llm.env is in the current directory (project root)
docker run -d \
-p 11235:11235 \
--name crawl4ai-standalone \
--env-file .llm.env \
--shm-size=1g \
crawl4ai-local:latest
The server will be available at http://localhost:11235.
When you're done, stop and remove the container:
docker stop crawl4ai-standalone && docker rm crawl4ai-standalone
Crawl4AI server includes support for the Model Context Protocol (MCP), allowing you to connect the server's capabilities directly to MCP-compatible clients like Claude Code.
MCP is an open protocol that standardizes how applications provide context to LLMs. It allows AI models to access external tools, data sources, and services through a standardized interface.
The Crawl4AI server exposes two MCP endpoints:
- SSE: http://localhost:11235/mcp/sse
- WebSocket: ws://localhost:11235/mcp/ws

You can add Crawl4AI as an MCP tool provider in Claude Code with a simple command:
# Add the Crawl4AI server as an MCP provider
claude mcp add --transport sse c4ai-sse http://localhost:11235/mcp/sse
# List all MCP providers to verify it was added
claude mcp list
Once connected, Claude Code can directly use Crawl4AI's capabilities like screenshot capture, PDF generation, and HTML processing without having to make separate API calls.
When connected via MCP, the following tools are available:
- md - Generate markdown from web content
- html - Extract preprocessed HTML
- screenshot - Capture webpage screenshots
- pdf - Generate PDF documents
- execute_js - Run JavaScript on web pages
- crawl - Perform multi-URL crawling
- ask - Query the Crawl4AI library context

You can test the MCP WebSocket connection using the test file included in the repository:
# From the repository root
python tests/mcp/test_mcp_socket.py
Access the MCP tool schemas at http://localhost:11235/mcp/schema for detailed information on each tool's parameters and capabilities.
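For example, to inspect the schemas from the command line:
curl http://localhost:11235/mcp/schema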
In addition to the core /crawl and /crawl/stream endpoints, the server provides several specialized endpoints:
POST /html
Crawls the URL and returns preprocessed HTML optimized for schema extraction.
{
"url": "https://example.com"
}
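A quick way to try this endpoint from the command line (assuming the server is running locally on the default port):
curl -X POST http://localhost:11235/html \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'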
POST /screenshot
Captures a full-page PNG screenshot of the specified URL.
{
"url": "https://example.com",
"screenshot_wait_for": 2,
"output_path": "/path/to/save/screenshot.png"
}
- screenshot_wait_for: Optional delay in seconds before capture (default: 2)
- output_path: Optional path to save the screenshot (recommended)

POST /pdf
Generates a PDF document of the specified URL.
{
"url": "https://example.com",
"output_path": "/path/to/save/document.pdf"
}
- output_path: Optional path to save the PDF (recommended)

POST /execute_js
Executes JavaScript snippets on the specified URL and returns the full crawl result.
{
"url": "https://example.com",
"scripts": [
"return document.title",
"return Array.from(document.querySelectorAll('a')).map(a => a.href)"
]
}
- scripts: List of JavaScript snippets to execute sequentially

You can customize the image build process using build arguments (--build-arg). These are typically used via docker buildx build or within the docker-compose.yml file.
# Example: Build with 'all' features using buildx
docker buildx build \
--platform linux/amd64,linux/arm64 \
--build-arg INSTALL_TYPE=all \
-t yourname/crawl4ai-all:latest \
--push \
. # Build from root context
| Argument | Description | Default | Options |
|---|---|---|---|
| INSTALL_TYPE | Feature set | default | default, all, torch, transformer |
| ENABLE_GPU | GPU support (CUDA for AMD64) | false | true, false |
| APP_HOME | Install path inside container (advanced) | /app | any valid path |
| USE_LOCAL | Install library from local source | true | true, false |
| GITHUB_REPO | Git repo to clone if USE_LOCAL=false | (see Dockerfile) | any git URL |
| GITHUB_BRANCH | Git branch to clone if USE_LOCAL=false | main | any branch name |
(Note: PYTHON_VERSION is fixed by the FROM instruction in the Dockerfile)
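For example, to build from the GitHub repository instead of your local checkout (a sketch; the tag name is illustrative and the branch defaults to main anyway):
docker buildx build \
  --build-arg USE_LOCAL=false \
  --build-arg GITHUB_BRANCH=main \
  -t crawl4ai-remote:latest --load .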
- default: Basic installation, smallest image size. Suitable for most standard web scraping and markdown generation.
- all: Full features including torch and transformers for advanced extraction strategies (e.g., CosineStrategy, certain LLM filters). Significantly larger image. Ensure you need these extras.

Use buildx for building multi-architecture images, especially when pushing to registries, and the docker compose profiles (local-amd64, local-arm64) for easy platform-specific local builds.

Communicate with the running Docker server via its REST API (defaulting to http://localhost:11235). You can use the Python SDK or make direct HTTP requests.
A built-in web playground is available at http://localhost:11235/playground for testing and generating API requests. The playground allows you to:
- Configure CrawlerRunConfig and BrowserConfig using the main library's Python syntax
- Generate the corresponding JSON payload for API requests

This is the easiest way to translate Python configuration to JSON requests when building integrations.
Install the SDK: pip install crawl4ai
import asyncio
from crawl4ai.docker_client import Crawl4aiDockerClient
from crawl4ai import BrowserConfig, CrawlerRunConfig, CacheMode # Assuming you have crawl4ai installed
async def main():
# Point to the correct server port
async with Crawl4aiDockerClient(base_url="http://localhost:11235", verbose=True) as client:
# If JWT is enabled on the server, authenticate first:
# await client.authenticate("[email protected]") # See Server Configuration section
# Example Non-streaming crawl
print("--- Running Non-Streaming Crawl ---")
results = await client.crawl(
["https://httpbin.org/html"],
browser_config=BrowserConfig(headless=True), # Use library classes for config aid
crawler_config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
if results: # client.crawl returns None on failure
print(f"Non-streaming results success: {results.success}")
if results.success:
for result in results: # Iterate through the CrawlResultContainer
print(f"URL: {result.url}, Success: {result.success}")
else:
print("Non-streaming crawl failed.")
# Example Streaming crawl
print("\n--- Running Streaming Crawl ---")
stream_config = CrawlerRunConfig(stream=True, cache_mode=CacheMode.BYPASS)
try:
async for result in await client.crawl( # client.crawl returns an async generator for streaming
["https://httpbin.org/html", "https://httpbin.org/links/5/0"],
browser_config=BrowserConfig(headless=True),
crawler_config=stream_config
):
print(f"Streamed result: URL: {result.url}, Success: {result.success}")
except Exception as e:
print(f"Streaming crawl failed: {e}")
# Example Get schema
print("\n--- Getting Schema ---")
schema = await client.get_schema()
print(f"Schema received: {bool(schema)}") # Print whether schema was received
if __name__ == "__main__":
asyncio.run(main())
(SDK parameters such as timeout and verify_ssl behave as documented for the Python SDK; only the base URL points at the Docker server.)
Crucially, when sending configurations directly via JSON, they must follow the {"type": "ClassName", "params": {...}} structure for any non-primitive value (like config objects or strategies). Dictionaries must be wrapped as {"type": "dict", "value": {...}}.
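For example, a minimal /crawl payload following this pattern (field values are illustrative) looks like:
{
  "urls": ["https://example.com"],
  "browser_config": {
    "type": "BrowserConfig",
    "params": {
      "headless": true,
      "viewport": {"type": "dict", "value": {"width": 1200, "height": 800}}
    }
  },
  "crawler_config": {
    "type": "CrawlerRunConfig",
    "params": {"stream": false, "cache_mode": "bypass"}
  }
}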
Advanced Crawler Configuration: remember that cache_mode must use a valid enum string value such as "bypass".
Extraction Strategy
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {
"schema": {
"type": "dict",
"value": {
"baseSelector": "article.post",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "content", "selector": ".content", "type": "html"}
]
}
}
}
}
}
}
}
LLM Extraction Strategy and deep crawler configurations follow the same pattern; in particular, make sure any schema is wrapped with the type/value structure.
The REST examples below use the default server port, 11235.
import requests
# Configuration objects converted to the required JSON structure
browser_config_payload = {
"type": "BrowserConfig",
"params": {"headless": True}
}
crawler_config_payload = {
"type": "CrawlerRunConfig",
"params": {"stream": False, "cache_mode": "bypass"} # Use string value of enum
}
crawl_payload = {
"urls": ["https://httpbin.org/html"],
"browser_config": browser_config_payload,
"crawler_config": crawler_config_payload
}
response = requests.post(
"http://localhost:11235/crawl", # Updated port
# headers={"Authorization": f"Bearer {token}"}, # If JWT is enabled
json=crawl_payload
)
print(f"Status Code: {response.status_code}")
if response.ok:
print(response.json())
else:
print(f"Error: {response.text}")
import json
import httpx # Use httpx for async streaming example
async def test_stream_crawl(token: str = None): # token is only needed if JWT is enabled
"""Test the /crawl/stream endpoint with multiple URLs."""
url = "http://localhost:11235/crawl/stream" # Updated port
payload = {
"urls": [
"https://httpbin.org/html",
"https://httpbin.org/links/5/0",
],
"browser_config": {
"type": "BrowserConfig",
"params": {"headless": True, "viewport": {"type": "dict", "value": {"width": 1200, "height": 800}}} # Viewport needs type:dict
},
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {"stream": True, "cache_mode": "bypass"}
}
}
headers = {}
# if token:
# headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled
try:
async with httpx.AsyncClient() as client:
async with client.stream("POST", url, json=payload, headers=headers, timeout=120.0) as response:
print(f"Status: {response.status_code} (Expected: 200)")
response.raise_for_status() # Raise exception for bad status codes
# Read streaming response line-by-line (NDJSON)
async for line in response.aiter_lines():
if line:
try:
data = json.loads(line)
# Check for completion marker
if data.get("status") == "completed":
print("Stream completed.")
break
print(f"Streamed Result: {json.dumps(data, indent=2)}")
except json.JSONDecodeError:
print(f"Warning: Could not decode JSON line: {line}")
except httpx.HTTPStatusError as e:
print(f"HTTP error occurred: {e.response.status_code} - {e.response.text}")
except Exception as e:
print(f"Error in streaming crawl test: {str(e)}")
# To run this example:
# import asyncio
# asyncio.run(test_stream_crawl())
For long-running crawls or when you want to avoid keeping connections open, use the job queue endpoints. Instead of polling for results, configure a webhook to receive notifications when jobs complete.
The job-queue workflow is:

1. Submit the job to /crawl/job with an optional webhook_config
2. The server returns a task_id immediately
3. Fetch the results from /crawl/job/{task_id} (or let the webhook deliver them)

# Submit a crawl job with webhook notification
curl -X POST http://localhost:11235/crawl/job \
-H "Content-Type: application/json" \
-d '{
"urls": ["https://example.com"],
"webhook_config": {
"webhook_url": "https://myapp.com/webhooks/crawl-complete",
"webhook_data_in_payload": false
}
}'
# Response: {"task_id": "crawl_a1b2c3d4"}
Your webhook receives:
{
"task_id": "crawl_a1b2c3d4",
"task_type": "crawl",
"status": "completed",
"timestamp": "2025-10-21T10:30:00.000000+00:00",
"urls": ["https://example.com"]
}
Then fetch the results:
curl http://localhost:11235/crawl/job/crawl_a1b2c3d4
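A minimal webhook receiver might look like this (a sketch assuming FastAPI and httpx are installed; the route path is arbitrary and not part of Crawl4AI):
import httpx
from fastapi import FastAPI, Request

app = FastAPI()
CRAWL4AI_BASE = "http://localhost:11235"

@app.post("/webhooks/crawl-complete")
async def crawl_complete(request: Request):
    event = await request.json()
    task_id = event["task_id"]
    if event.get("status") == "completed":
        # With webhook_data_in_payload=false, fetch the full result separately
        async with httpx.AsyncClient() as client:
            resp = await client.get(f"{CRAWL4AI_BASE}/crawl/job/{task_id}")
            resp.raise_for_status()
            print(f"Job {task_id} finished, {len(resp.content)} bytes of result data")
    else:
        print(f"Job {task_id} did not complete: {event.get('status')}")
    return {"ok": True}
Run it with, for example, uvicorn and point webhook_url at the resulting publicly reachable URL.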
Set webhook_data_in_payload: true to receive the full crawl results directly in the webhook:
curl -X POST http://localhost:11235/crawl/job \
-H "Content-Type: application/json" \
-d '{
"urls": ["https://example.com"],
"webhook_config": {
"webhook_url": "https://myapp.com/webhooks/crawl-complete",
"webhook_data_in_payload": true
}
}'
Your webhook receives the complete data:
{
"task_id": "crawl_a1b2c3d4",
"task_type": "crawl",
"status": "completed",
"timestamp": "2025-10-21T10:30:00.000000+00:00",
"urls": ["https://example.com"],
"data": {
"markdown": "...",
"html": "...",
"links": {...},
"metadata": {...}
}
}
Add custom headers for authentication:
{
"urls": ["https://example.com"],
"webhook_config": {
"webhook_url": "https://myapp.com/webhooks/crawl",
"webhook_data_in_payload": false,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token",
"X-Service-ID": "crawl4ai-prod"
}
}
}
Configure a default webhook URL in config.yml for all jobs:
webhooks:
enabled: true
default_url: "https://myapp.com/webhooks/default"
data_in_payload: false
retry:
max_attempts: 5
initial_delay_ms: 1000
max_delay_ms: 32000
timeout_ms: 30000
Now jobs without webhook_config automatically use the default webhook.
If you prefer polling instead of webhooks, just omit webhook_config:
# Submit job
curl -X POST http://localhost:11235/crawl/job \
-H "Content-Type: application/json" \
-d '{"urls": ["https://example.com"]}'
# Response: {"task_id": "crawl_xyz"}
# Poll for status
curl http://localhost:11235/crawl/job/crawl_xyz
The response includes a status field: "processing", "completed", or "failed".
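A simple polling loop built on these endpoints might look like this (a sketch; the two-second interval is arbitrary):
import time
import requests

BASE = "http://localhost:11235"

# Submit the job, then poll the status endpoint until it finishes
task_id = requests.post(f"{BASE}/crawl/job", json={"urls": ["https://example.com"]}).json()["task_id"]
while True:
    job = requests.get(f"{BASE}/crawl/job/{task_id}").json()
    if job["status"] in ("completed", "failed"):
        break
    time.sleep(2)  # arbitrary polling interval
print(f"Job {task_id} ended with status: {job['status']}")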
The same webhook system works for LLM extraction jobs via /llm/job:
# Submit LLM extraction job with webhook
curl -X POST http://localhost:11235/llm/job \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com/article",
"q": "Extract the article title, author, and main points",
"provider": "openai/gpt-4o-mini",
"webhook_config": {
"webhook_url": "https://myapp.com/webhooks/llm-complete",
"webhook_data_in_payload": true,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token"
}
}
}'
# Response: {"task_id": "llm_1234567890"}
Your webhook receives:
{
"task_id": "llm_1234567890",
"task_type": "llm_extraction",
"status": "completed",
"timestamp": "2025-10-22T12:30:00.000000+00:00",
"urls": ["https://example.com/article"],
"data": {
"extracted_content": {
"title": "Understanding Web Scraping",
"author": "John Doe",
"main_points": ["Point 1", "Point 2", "Point 3"]
}
}
}
Key Differences for LLM Jobs:
"llm_extraction" instead of "crawl"data.extracted_contentschema parameter💡 Pro tip: See WEBHOOK_EXAMPLES.md for detailed examples including TypeScript client code, Flask webhook handlers, and failure handling.
Keep an eye on your crawler with these endpoints:
- /health - Quick health check
- /metrics - Detailed Prometheus metrics
- /schema - Full API schema

Example health check:
curl http://localhost:11235/health
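If you run Prometheus, a minimal scrape config pointing at the metrics endpoint might look like this (the job name is illustrative):
scrape_configs:
  - job_name: "crawl4ai"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:11235"]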
The server's behavior can be customized through the config.yml file.
The configuration file is loaded from /app/config.yml inside the container. By default, the file from deploy/docker/config.yml in the repository is copied there during the build.
Here's a detailed breakdown of the configuration options (using defaults from deploy/docker/config.yml):
# Application Configuration
app:
title: "Crawl4AI API"
version: "1.0.0" # Consider setting this to match library version, e.g., "0.5.1"
host: "0.0.0.0"
port: 8020 # NOTE: This port is used ONLY when running server.py directly. Gunicorn overrides this (see supervisord.conf).
reload: False # Default set to False - suitable for production
timeout_keep_alive: 300
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini" # Can be overridden by LLM_PROVIDER env var
# api_key: sk-... # If you pass the API key directly (not recommended)
# Redis Configuration (Used by internal Redis server managed by supervisord)
redis:
host: "localhost"
port: 6379
db: 0
password: ""
# ... other redis options ...
# Rate Limiting Configuration
rate_limiting:
enabled: True
default_limit: "1000/minute"
trusted_proxies: []
storage_uri: "memory://" # Use "redis://localhost:6379" if you need persistent/shared limits
# Security Configuration
security:
enabled: false # Master toggle for security features
jwt_enabled: false # Enable JWT authentication (requires security.enabled=true)
https_redirect: false # Force HTTPS (requires security.enabled=true)
trusted_hosts: ["*"] # Allowed hosts (use specific domains in production)
headers: # Security headers (applied if security.enabled=true)
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0
rate_limiter:
base_delay: [1.0, 2.0] # Min/max delay between requests in seconds for dispatcher
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for non-streaming /crawl processing
# Logging Configuration
logging:
level: "INFO"
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True
endpoint: "/metrics"
health_check:
endpoint: "/health"
If JWT authentication is enabled (security.jwt_enabled: true), include an Authorization: Bearer <token> header with your API requests; the server still listens on port 11235 by default.
You can override the default config.yml in two ways:

1. Modify before build: Edit the deploy/docker/config.yml file in your local repository clone, then rebuild with docker buildx or docker compose --profile local-... up --build. The modified file will be copied into the image.
2. Mount at runtime: Create your custom configuration file, e.g., my-custom-config.yml, locally. Ensure it contains all necessary sections.
Mount it when running the container:
Using docker run:
# Assumes my-custom-config.yml is in the current directory
docker run -d -p 11235:11235 \
--name crawl4ai-custom-config \
--env-file .llm.env \
--shm-size=1g \
-v $(pwd)/my-custom-config.yml:/app/config.yml \
unclecode/crawl4ai:latest # Or your specific tag
Using docker-compose.yml: Add a volumes section to the service definition:
services:
crawl4ai-hub-amd64: # Or your chosen service
image: unclecode/crawl4ai:latest
profiles: ["hub-amd64"]
<<: *base-config
volumes:
# Mount local custom config over the default one in the container
- ./my-custom-config.yml:/app/config.yml
# Keep the shared memory volume from base-config
- /dev/shm:/dev/shm
(Note: Ensure my-custom-config.yml is in the same directory as docker-compose.yml)
💡 When mounting, your custom file completely replaces the default one. Ensure it's a valid and complete configuration.
Security First 🔒
Resource Management 💻
Monitoring 📊
Performance Tuning ⚡
We're here to help you succeed with Crawl4AI! Here's how to get support:
In this guide, we've covered everything you need to get started with Crawl4AI's Docker deployment:
The new playground interface at http://localhost:11235/playground makes it much easier to test configurations and generate the corresponding JSON for API requests.
For AI application developers, the MCP integration allows tools like Claude Code to directly access Crawl4AI's capabilities without complex API handling.
Remember, the examples in the examples folder are your friends - they show real-world usage patterns that you can adapt for your needs.
Keep exploring, and don't hesitate to reach out if you need help! We're building something amazing together. 🚀
Happy crawling! 🕷️