docs/examples/quickstart.ipynb
Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications.
Use the Crawl4AI GPT Assistant as your AI-powered copilot!
Install Crawl4AI and necessary dependencies:
# %%capture
!pip install crawl4ai
!pip install nest_asyncio
!playwright install
import asyncio
import nest_asyncio
nest_asyncio.apply()  # Patch the notebook's already-running event loop so asyncio.run() can be called below
import asyncio
from crawl4ai import AsyncWebCrawler
async def simple_crawl():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
bypass_cache=True # By default this is False, meaning the cache will be used
)
print(result.markdown.raw_markdown[:500]) # Print the first 500 characters
asyncio.run(simple_crawl())
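A crawl can fail (network errors, blocked pages), so it is worth checking the result before using it. A minimal sketch, assuming the success and error_message fields on the returned CrawlResult:
async def checked_crawl():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            bypass_cache=True
        )
        if result.success:
            print(result.markdown.raw_markdown[:500])
        else:
            print(f"Crawl failed: {result.error_message}")  # assumes error_message is populated on failure
asyncio.run(checked_crawl())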
async def crawl_dynamic_content():
# You can use wait_for to wait for a condition to be met before returning the result
# wait_for = """() => {
# return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
# }"""
# wait_for can be also just a css selector
# wait_for = "article.tease-card:nth-child(10)"
async with AsyncWebCrawler(verbose=True) as crawler:
js_code = [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
]
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=js_code,
# wait_for=wait_for,
bypass_cache=True,
)
print(result.markdown.raw_markdown[:500]) # Print first 500 characters
asyncio.run(crawl_dynamic_content())
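The commented-out wait_for option can also be enabled on its own. The sketch below reuses the exact condition from the comments above, so extraction only starts once more than ten article cards have rendered:
async def crawl_with_wait_for():
    # Wait until the page has rendered more than 10 article cards
    wait_for = """() => {
        return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
    }"""
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            wait_for=wait_for,
            bypass_cache=True
        )
        print(result.markdown.raw_markdown[:500])
asyncio.run(crawl_with_wait_for())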
async def clean_content():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://janineintheworld.com/places-to-visit-in-central-mexico",
excluded_tags=['nav', 'footer', 'aside'],
remove_overlay_elements=True,
word_count_threshold=10,
bypass_cache=True
)
full_markdown_length = len(result.markdown.raw_markdown)
fit_markdown_length = len(result.markdown.fit_markdown)
print(f"Full Markdown Length: {full_markdown_length}")
print(f"Fit Markdown Length: {fit_markdown_length}")
print(result.markdown.fit_markdown[:1000])
asyncio.run(clean_content())
async def link_analysis():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
bypass_cache=True,
exclude_external_links=True,
exclude_social_media_links=True,
# exclude_domains=["facebook.com", "twitter.com"]
)
print(f"Found {len(result.links['internal'])} internal links")
print(f"Found {len(result.links['external'])} external links")
for link in result.links['internal'][:5]:
print(f"Href: {link['href']}\nText: {link['text']}\n")
asyncio.run(link_analysis())
async def media_handling():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
bypass_cache=True,
exclude_external_images=False,
screenshot=True
)
for img in result.media['images'][:5]:
print(f"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}")
asyncio.run(media_handling())
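Since screenshot=True was passed, the result should also carry the captured image. A short sketch for saving it, assuming result.screenshot holds the screenshot as a base64-encoded string:
import base64
async def save_screenshot():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            bypass_cache=True,
            screenshot=True
        )
        if result.screenshot:
            # Decode the base64 payload and write it to disk
            with open("screenshot.png", "wb") as f:
                f.write(base64.b64decode(result.screenshot))
            print("Screenshot saved to screenshot.png")
asyncio.run(save_screenshot())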
Hooks in Crawl4AI allow you to run custom logic at specific stages of the crawling process. This can be invaluable for scenarios like setting custom headers, logging activities, or processing content before it is returned. Below is an example of a basic workflow using a hook, followed by a complete list of available hooks and explanations on their usage.
async def custom_hook_workflow():
async with AsyncWebCrawler() as crawler:
# Set a 'before_goto' hook to run custom code just before navigation
crawler.crawler_strategy.set_hook("before_goto", lambda page: print("[Hook] Preparing to navigate..."))
# Perform the crawl operation
result = await crawler.arun(
url="https://crawl4ai.com",
bypass_cache=True
)
print(result.markdown.raw_markdown[:500]) # Display the first 500 characters
asyncio.run(custom_hook_workflow())
List of available hooks and examples for each stage of the crawling process:
on_browser_created
async def on_browser_created_hook(browser):
print("[Hook] Browser created")
before_goto
async def before_goto_hook(page):
await page.set_extra_http_headers({"X-Test-Header": "test"})
after_goto
async def after_goto_hook(page):
print(f"[Hook] Navigated to {page.url}")
on_execution_started
async def on_execution_started_hook(page):
print("[Hook] JavaScript execution started")
before_return_html
async def before_return_html_hook(page, html):
print(f"[Hook] HTML length: {len(html)}")
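These hooks are registered the same way as in the workflow example, via set_hook on the crawler strategy. A minimal sketch wiring up all five, assuming (as in older Crawl4AI examples) that AsyncPlaywrightCrawlerStrategy can be constructed directly and passed in via crawler_strategy, so on_browser_created is attached before the browser starts:
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy
async def register_all_hooks():
    strategy = AsyncPlaywrightCrawlerStrategy(verbose=True)
    strategy.set_hook("on_browser_created", on_browser_created_hook)
    strategy.set_hook("before_goto", before_goto_hook)
    strategy.set_hook("after_goto", after_goto_hook)
    strategy.set_hook("on_execution_started", on_execution_started_hook)  # fires only when js_code runs
    strategy.set_hook("before_return_html", before_return_html_hook)
    async with AsyncWebCrawler(crawler_strategy=strategy) as crawler:
        result = await crawler.arun(
            url="https://crawl4ai.com",
            js_code="console.log('hooks demo');",  # triggers on_execution_started
            bypass_cache=True
        )
        print(result.markdown.raw_markdown[:200])
asyncio.run(register_all_hooks())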
When to Use Session-Based Crawling: Session-based crawling is especially beneficial when navigating through multi-page content where each page load needs to maintain the same session context. For instance, in cases where a "Next Page" button must be clicked to load subsequent data, the new data often replaces the previous content. Here, session-based crawling keeps the browser state intact across each interaction, allowing for sequential actions within the same session.
Example: Multi-Page Navigation Using JavaScript
In this example, we'll navigate through multiple pages by clicking a "Next Page" button. After each page load, we extract the new content and repeat the process.
async def multi_page_session_crawl():
async with AsyncWebCrawler() as crawler:
session_id = "page_navigation_session"
url = "https://example.com/paged-content"
for page_number in range(1, 4):
result = await crawler.arun(
url=url,
session_id=session_id,
js_code="document.querySelector('.next-page-button').click();" if page_number > 1 else None,
css_selector=".content-section",
bypass_cache=True
)
print(f"Page {page_number} Content:")
print(result.markdown.raw_markdown[:500]) # Print first 500 characters
# asyncio.run(multi_page_session_crawl())
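When a session is no longer needed, it can be closed explicitly so the browser tab it holds is released. A hedged sketch, assuming the kill_session method exposed on the crawler strategy in recent Crawl4AI versions:
async def session_with_cleanup():
    async with AsyncWebCrawler() as crawler:
        session_id = "page_navigation_session"
        try:
            result = await crawler.arun(
                url="https://example.com/paged-content",
                session_id=session_id,
                bypass_cache=True
            )
            print(result.markdown.raw_markdown[:200])
        finally:
            # Release the browser tab held by this session
            await crawler.crawler_strategy.kill_session(session_id)
# asyncio.run(session_with_cleanup())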
LLM Extraction
This example demonstrates how to use language model-based extraction to retrieve structured data from a pricing page on OpenAI's site.
from crawl4ai import LLMExtractionStrategy
from pydantic import BaseModel, Field
import os, json
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(provider: str, api_token: str = None, extra_headers: dict = None):
print(f"\n--- Extracting Structured Data with {provider} ---")
# Skip if API token is missing (for providers that require it)
    if api_token is None and not provider.startswith("ollama"):
print(f"API token is required for {provider}. Skipping this example.")
return
extra_args = {"extra_headers": extra_headers} if extra_headers else {}
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/",
word_count_threshold=1,
extraction_strategy=LLMExtractionStrategy(
provider=provider,
api_token=api_token,
schema=OpenAIModelFee.schema(),
extraction_type="schema",
                instruction="""Extract all model names along with fees for input and output tokens.
                Example output format: {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
**extra_args
),
bypass_cache=True,
)
print(json.loads(result.extracted_content)[:5])
# Usage:
asyncio.run(extract_structured_data_using_llm("openai/gpt-4o-mini", os.getenv("OPENAI_API_KEY")))
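Other LiteLLM-style provider strings should work the same way; for instance, a local Ollama model needs no API token. The model name below is illustrative, not from the original, and requires a running Ollama server with that model pulled:
# asyncio.run(extract_structured_data_using_llm("ollama/llama3"))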
Cosine Similarity Strategy
This strategy uses semantic clustering to extract relevant content based on contextual similarity, which is helpful when extracting related sections from a single topic.
from crawl4ai import CosineStrategy
async def cosine_similarity_extraction():
async with AsyncWebCrawler() as crawler:
strategy = CosineStrategy(
word_count_threshold=10,
            max_dist=0.2,  # Maximum distance for merging content chunks into one cluster (hierarchical clustering cutoff)
linkage_method="ward", # Linkage method for hierarchical clustering (ward, complete, average, single)
top_k=3, # Number of top keywords to extract
sim_threshold=0.3, # Similarity threshold for clustering
semantic_filter="McDonald's economic impact, American consumer trends", # Keywords to filter the content semantically using embeddings
verbose=True
)
result = await crawler.arun(
url="https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156",
extraction_strategy=strategy
)
print(json.loads(result.extracted_content)[:5])
asyncio.run(cosine_similarity_extraction())
You've explored core features of Crawl4AI, including dynamic content handling, link analysis, and advanced extraction strategies. Visit our documentation for further details on using Crawl4AI's extensive features.
Happy Crawling with Crawl4AI! 🕷️🤖