We're excited to announce the release of Crawl4AI v0.6.0, our biggest and most feature-rich update yet. This version introduces major architectural upgrades, brand-new capabilities for geo-aware crawling, high-efficiency scraping, and real-time streaming support for scalable deployments.
Crawl as if you’re anywhere in the world. With v0.6.0, each crawl can simulate:

- a browser locale (e.g. `en-US`)
- a timezone (e.g. `America/Los_Angeles`)
- precise GPS coordinates via geolocation overrides
Example:

```python
from crawl4ai import CrawlerRunConfig, GeolocationConfig

run_config = CrawlerRunConfig(
    url="https://browserleaks.com/geo",
    locale="en-US",
    timezone_id="America/Los_Angeles",
    geolocation=GeolocationConfig(
        latitude=34.0522,
        longitude=-118.2437,
        accuracy=10.0,
    ),
)
```

Pass this config to `AsyncWebCrawler.arun()` and the page will see the simulated locale, timezone, and location.
Great for accessing region-specific content or testing global behavior.
Extract HTML tables directly into usable formats like Pandas DataFrames or CSV with zero parsing hassle. All table data is available under result.media["tables"].
Example:

```python
import pandas as pd

# Each extracted table exposes its headers and rows separately.
raw_df = pd.DataFrame(
    result.media["tables"][0]["rows"],
    columns=result.media["tables"][0]["headers"],
)
```
This makes it ideal for scraping financial data, pricing pages, or anything tabular.
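To illustrate the shape of the extracted data, here is a self-contained sketch with a mocked `result.media["tables"]` payload; the sample rows are invented for demonstration:

```python
import pandas as pd

# Mocked extraction result: mirrors the {"headers": [...], "rows": [...]}
# shape that Crawl4AI stores under result.media["tables"].
# The ticker symbols and prices below are invented sample data.
tables = [{
    "headers": ["symbol", "price"],
    "rows": [["AAA", "10.5"], ["BBB", "20.1"]],
}]

df = pd.DataFrame(tables[0]["rows"], columns=tables[0]["headers"])
df["price"] = df["price"].astype(float)  # numeric conversion for analysis

csv_text = df.to_csv(index=False)  # export straight to CSV
```

From here it is one line each to a typed DataFrame or a CSV file, with no HTML parsing on your side.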
We've overhauled browser management: multiple browser instances can now be pooled, and pages are pre-warmed for ultra-fast launches.
This powers the new Docker Playground experience and streamlines heavy-load crawling.
Need full visibility? You can now capture:

- full network request and response traffic
- browser console logs
- MHTML snapshots of the fully rendered page
No more guesswork on what happened during your crawl.
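As a sketch, capture is toggled on the run config; the flag names below (`capture_network_requests`, `capture_console_messages`, `capture_mhtml`) are my understanding of the 0.6.0 parameters, so verify against the docs if anything differs:

```python
from crawl4ai import CrawlerRunConfig

# Assumed 0.6.0 capture flags (check the CrawlerRunConfig reference):
config = CrawlerRunConfig(
    capture_network_requests=True,   # record request/response traffic
    capture_console_messages=True,   # record browser console output
    capture_mhtml=True,              # snapshot the fully rendered page
)
```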
We’re exposing MCP socket and SSE endpoints, allowing:

- AI tools and agents to talk to a running Crawl4AI server over the Model Context Protocol (MCP)
- clients to stream crawl results in real time via Server-Sent Events (SSE)
This is a major step towards making Crawl4AI real-time ready.
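Server-Sent Events arrive as plain-text frames: `data:` lines separated by blank lines. Here is a minimal, dependency-free parser sketch for such a stream; the sample payload is invented to suggest what a streaming crawl endpoint might emit:

```python
import json

def parse_sse(stream_text):
    """Yield the JSON payload of each `data:` frame in an SSE stream."""
    for frame in stream_text.split("\n\n"):
        for line in frame.splitlines():
            if line.startswith("data:"):
                yield json.loads(line[len("data:"):].strip())

# Invented sample frames, shaped like per-URL crawl progress events.
sample = (
    'data: {"url": "https://example.com", "status": "completed"}\n\n'
    'data: {"url": "https://example.org", "status": "completed"}\n\n'
)

events = list(parse_sse(sample))
```

Each event becomes available the moment the server flushes it, which is what makes SSE a good fit for long-running crawls.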
Want to test performance under heavy load? v0.6.0 includes a new memory stress-test suite that supports 1,000+ URL workloads. Ideal for:

- capacity planning before production rollouts
- spotting memory leaks over long crawl sessions
- benchmarking concurrency and dispatcher settings
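To give a feel for such a workload, here is a hypothetical sketch that builds 1,000 synthetic URLs, walks them in batches, and tracks peak memory with the standard library; the batch "work" is stubbed out rather than calling the crawler:

```python
import tracemalloc

def make_urls(n=1000):
    # Synthetic workload: n distinct URLs for a stress run.
    return [f"https://example.com/page/{i}" for i in range(n)]

def run_batches(urls, batch_size=100):
    # Stubbed batch loop: a real stress test would hand each batch
    # to the crawler; here we only count what was dispatched.
    processed = 0
    for start in range(0, len(urls), batch_size):
        batch = urls[start:start + batch_size]
        processed += len(batch)
    return processed

tracemalloc.start()
urls = make_urls()
processed = run_batches(urls)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
```

Comparing `peak` across batch sizes is a cheap way to sanity-check memory behavior before pointing the real suite at production-scale workloads.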
Breaking changes to note:

- `crawl4ai/browser/*` modules are removed. Update imports accordingly.
- `AsyncPlaywrightCrawlerStrategy.get_page` now uses a new function signature.
- Deprecated generator names now map to `DefaultMarkdownGenerator` with a warning.

Want a visual walkthrough of all these updates? Watch the video: 🔗 https://youtu.be/9x7nVcjOZks
If you're new to Crawl4AI, start here: 🔗 https://www.youtube.com/watch?v=xo3qK6Hg9AA&t=15s
We’ve just opened our Discord to the public. Join us to:

- ask questions and get help from the team
- share what you’re building with Crawl4AI
- discuss upcoming features
💬 https://discord.gg/wpYFACrHR4
```bash
pip install -U crawl4ai
```
Live long and import crawl4ai. 🖖