docs/blog/release-v0.7.0.md
January 28, 2025 • 10 min read
Today I'm releasing Crawl4AI v0.7.0: the Adaptive Intelligence Update. This release introduces fundamental improvements in how Crawl4AI handles modern web complexity through adaptive learning, intelligent content discovery, and advanced extraction capabilities.
The Problem: Websites change. Class names shift. IDs disappear. Your carefully crafted selectors break at 3 AM, and you wake up to empty datasets and angry stakeholders.
My Solution: I implemented an adaptive learning system that observes patterns, builds confidence scores, and adjusts extraction strategies on the fly. It's like having a junior developer who gets better at their job with every page they scrape.
The Adaptive Crawler maintains a persistent state for each domain, tracking its confidence level, the URLs it has already crawled, and a growing knowledge base of relevant pages:
```python
from crawl4ai import AsyncWebCrawler, AdaptiveCrawler, AdaptiveConfig
import asyncio

async def main():
    # Configure adaptive crawler
    config = AdaptiveConfig(
        strategy="statistical",      # or "embedding" for semantic understanding
        max_pages=10,
        confidence_threshold=0.7,    # Stop at 70% confidence
        top_k_links=3,               # Follow top 3 links per page
        min_gain_threshold=0.05      # Need 5% information gain to continue
    )

    async with AsyncWebCrawler(verbose=False) as crawler:
        adaptive = AdaptiveCrawler(crawler, config)

        print("Starting adaptive crawl about Python decorators...")
        result = await adaptive.digest(
            start_url="https://docs.python.org/3/glossary.html",
            query="python decorators functions wrapping"
        )

        print("\n✅ Crawling Complete!")
        print(f"• Confidence Level: {adaptive.confidence:.0%}")
        print(f"• Pages Crawled: {len(result.crawled_urls)}")
        print(f"• Knowledge Base: {len(adaptive.state.knowledge_base)} documents")

        # Get most relevant content
        relevant = adaptive.get_relevant_content(top_k=3)
        print("\nMost Relevant Pages:")
        for i, page in enumerate(relevant, 1):
            print(f"{i}. {page['url']} (relevance: {page['score']:.2%})")

asyncio.run(main())
```
Expected Real-World Impact:
The Problem: Modern web apps only render what's visible. Scroll down, new content appears, old content vanishes into the void. Traditional crawlers capture that first viewport and miss 90% of the content. It's like reading only the first page of every book.
My Solution: I built Virtual Scroll support that mimics human browsing behavior, capturing content as it loads and preserving it before the browser's garbage collector strikes.
```python
import asyncio
import json
from crawl4ai import (
    AsyncWebCrawler,
    CrawlerRunConfig,
    VirtualScrollConfig,
    JsonCssExtractionStrategy,
)

# For social media feeds (Twitter/X style)
twitter_config = VirtualScrollConfig(
    container_selector="[data-testid='primaryColumn']",
    scroll_count=20,               # Number of scrolls
    scroll_by="container_height",  # Smart scrolling by container size
    wait_after_scroll=1.0          # Let content load
)

# For e-commerce product grids (Instagram style)
grid_config = VirtualScrollConfig(
    container_selector="main .product-grid",
    scroll_count=30,
    scroll_by=800,                 # Fixed pixel scrolling
    wait_after_scroll=1.5          # Images need time
)

# For news feeds with lazy loading
news_config = VirtualScrollConfig(
    container_selector=".article-feed",
    scroll_count=50,
    scroll_by="page_height",       # Viewport-based scrolling
    wait_after_scroll=0.5          # Wait for content to load
)

# Use it in your crawl
async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            "https://twitter.com/trending",
            config=CrawlerRunConfig(
                virtual_scroll_config=twitter_config,
                # Combine with other features
                extraction_strategy=JsonCssExtractionStrategy({
                    "name": "tweets",
                    "baseSelector": "[data-testid='tweet']",
                    "fields": [
                        {"name": "text", "selector": "[data-testid='tweetText']", "type": "text"},
                        {"name": "likes", "selector": "[data-testid='like']", "type": "text"}
                    ]
                })
            )
        )
        # extracted_content is a JSON string; parse it before counting
        tweets = json.loads(result.extracted_content)
        print(f"Captured {len(tweets)} tweets")

asyncio.run(main())
```
Key Capabilities:
Expected Real-World Impact:
The Problem: You crawl a page and get 200 links. Which ones matter? Which lead to the content you actually want? Traditional crawlers force you to follow everything or build complex filters.
My Solution: I implemented a three-layer scoring system that analyzes links like a human would, considering their position, context, and relevance to your goals.
```python
import asyncio
from crawl4ai import CrawlerRunConfig, CacheMode, AsyncWebCrawler
from crawl4ai.adaptive_crawler import LinkPreviewConfig

async def main():
    # Configure intelligent link analysis
    link_config = LinkPreviewConfig(
        include_internal=True,
        include_external=False,
        max_links=10,
        concurrency=5,
        query="python tutorial",   # For contextual scoring
        score_threshold=0.3,
        verbose=True
    )

    # Use in your crawl
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            "https://www.geeksforgeeks.org/",
            config=CrawlerRunConfig(
                link_preview_config=link_config,
                score_links=True,          # Enable intrinsic scoring
                cache_mode=CacheMode.BYPASS
            )
        )

        # Access scored and sorted links
        if result.success and result.links:
            for link in result.links.get("internal", []):
                text = link.get('text', 'No text')[:40]
                intrinsic = link.get('intrinsic_score') or 0
                contextual = link.get('contextual_score') or 0
                total = link.get('total_score') or 0
                print(f"{text} | intrinsic {intrinsic:.1f}/10 | "
                      f"contextual {contextual:.2f}/1 | total {total:.3f}")

asyncio.run(main())
```
Scoring Components:
- Intrinsic Score: based on link quality indicators
- Contextual Score: relevance to your query using the BM25 algorithm (sketched below)
- Total Score: combined score for the final ranking
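If you're curious about the contextual score, here is a rough, self-contained sketch of the BM25 idea behind it, scoring candidate link texts against a query. It is illustrative only; Crawl4AI's internal tokenization, weighting, and the fields it scores will differ, and the query and link texts below are made up.

```python
# Illustrative BM25 scorer; not Crawl4AI's exact implementation.
import math
from collections import Counter

def bm25_score(query: str, doc: str, corpus: list[str], k1: float = 1.5, b: float = 0.75) -> float:
    doc_terms = doc.lower().split()
    query_terms = query.lower().split()
    avg_len = sum(len(d.split()) for d in corpus) / len(corpus)
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        # Document frequency and inverse document frequency
        df = sum(1 for d in corpus if term in d.lower().split())
        idf = math.log((len(corpus) - df + 0.5) / (df + 0.5) + 1)
        # Term-frequency saturation with length normalization
        freq = tf[term]
        denom = freq + k1 * (1 - b + b * len(doc_terms) / avg_len)
        score += idf * freq * (k1 + 1) / denom
    return score

link_texts = [
    "Python Tutorial for Beginners",
    "Java Interview Questions",
    "Advanced Python Decorators Tutorial",
]
for text in link_texts:
    print(f"{text!r}: {bm25_score('python tutorial', text, link_texts):.3f}")
```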
Expected Real-World Impact:
The Problem: You want to crawl an entire domain but only have the homepage. Or worse, you want specific content types across thousands of pages. Manual URL discovery? That's a job for machines, not humans.
My Solution: I built Async URL Seeder, a turbocharged URL discovery engine that combines multiple sources with intelligent filtering and relevance scoring.
```python
import asyncio
from crawl4ai import AsyncUrlSeeder, SeedingConfig

async def main():
    async with AsyncUrlSeeder() as seeder:
        # Discover Python tutorial URLs
        config = SeedingConfig(
            source="sitemap",            # Use sitemap
            pattern="*python*",          # URL pattern filter
            extract_head=True,           # Get metadata
            query="python tutorial",     # For relevance scoring
            scoring_method="bm25",
            score_threshold=0.2,
            max_urls=10
        )

        print("Discovering Python async tutorial URLs...")
        urls = await seeder.urls("https://www.geeksforgeeks.org/", config)

        print(f"\n✅ Found {len(urls)} relevant URLs:")
        for i, url_info in enumerate(urls[:5], 1):
            print(f"\n{i}. {url_info['url']}")
            if url_info.get('relevance_score'):
                print(f"   Relevance: {url_info['relevance_score']:.3f}")
            if url_info.get('head_data', {}).get('title'):
                print(f"   Title: {url_info['head_data']['title'][:60]}...")

asyncio.run(main())
```
Discovery Methods:
Expected Real-World Impact:
This release includes significant performance improvements through optimized resource handling, better concurrency management, and reduced memory footprint.
```python
# Optimized crawling with v0.7.0 improvements
# (assumes a crawler instance and a urls list from the earlier examples)
results = []
for url in urls:
    result = await crawler.arun(
        url,
        config=CrawlerRunConfig(
            # Performance optimizations
            wait_until="domcontentloaded",   # Faster than networkidle
            cache_mode=CacheMode.ENABLED     # Enable caching
        )
    )
    results.append(result)
```
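If you'd rather not loop sequentially, the improved concurrency handling also shows up when you batch URLs through `arun_many`. A minimal sketch, leaving the dispatcher and concurrency limits at their library defaults; the URLs below are placeholders:

```python
# Batch the same kind of crawl concurrently with arun_many.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode

async def main():
    urls = [
        "https://example.com/page-1",
        "https://example.com/page-2",
        "https://example.com/page-3",
    ]
    config = CrawlerRunConfig(
        wait_until="domcontentloaded",
        cache_mode=CacheMode.ENABLED,
    )
    async with AsyncWebCrawler() as crawler:
        results = await crawler.arun_many(urls, config=config)
        for result in results:
            print(result.url, "ok" if result.success else result.error_message)

asyncio.run(main())
```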
Performance Gains:
- `link_extractor` renamed to `link_preview` (better reflects its functionality)
- `CrawlerConfig` split into `CrawlerRunConfig` and `BrowserConfig`

```python
# Old (v0.6.x)
from crawl4ai import CrawlerConfig
config = CrawlerConfig(timeout=30000)

# New (v0.7.0)
from crawl4ai import CrawlerRunConfig, BrowserConfig, CacheMode
browser_config = BrowserConfig(timeout=30000)
run_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
```
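The rule of thumb for migration: browser-level options now live on `BrowserConfig` (passed to `AsyncWebCrawler`), while per-crawl options live on `CrawlerRunConfig` (passed to `arun`). A small sketch of the new wiring, with placeholder option values:

```python
# Sketch of the split: BrowserConfig configures the browser instance,
# CrawlerRunConfig configures each individual crawl.
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    browser_config = BrowserConfig(headless=True)                # browser-level
    run_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)   # per-crawl

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun("https://example.com", config=run_config)
        print(result.success)

asyncio.run(main())
```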
I'm currently working on bringing advanced automation capabilities to Crawl4AI. This includes:
These features are under active development and will revolutionize how we approach web automation. Stay tuned!
```bash
pip install crawl4ai==0.7.0
```
Check out the updated documentation.
Questions? Issues? I'm always listening:
Happy crawling! 🕷️
P.S. If you're using Crawl4AI in production, I'd love to hear about it. Your use cases inspire the next features.