1. BrowserConfig – Controlling the Browser

BrowserConfig focuses on how the browser is launched and behaves. This includes headless mode, proxies, user agents, and other environment tweaks.

```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_cfg = BrowserConfig(
    browser_type="chromium",
    headless=True,
    viewport_width=1280,
    viewport_height=720,
    proxy_config={"server": "http://proxy:8080", "username": "user", "password": "pass"},
    user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/116.0.0.0 Safari/537.36",
)
```

1.1 Parameter Highlights

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `browser_type` | `"chromium"`, `"firefox"`, `"webkit"` (default: `"chromium"`) | Which browser engine to use. `"chromium"` is typical for many sites; `"firefox"` or `"webkit"` for specialized tests. |
| `headless` | `bool` (default: `True`) | Headless means no visible UI. `False` is handy for debugging. |
| `browser_mode` | `str` (default: `"dedicated"`) | How the browser is initialized: `"dedicated"` (new instance), `"builtin"` (CDP background), `"custom"` (explicit CDP), `"docker"` (container). |
| `use_managed_browser` | `bool` (default: `False`) | Launch the browser via CDP for advanced control. Set automatically based on `browser_mode`. |
| `cdp_url` | `str` (default: `None`) | Chrome DevTools Protocol endpoint URL (e.g., `"ws://localhost:9222/devtools/browser/"`). Set automatically based on `browser_mode`. |
| `debugging_port` | `int` (default: `9222`) | Port for the browser debugging protocol. |
| `host` | `str` (default: `"localhost"`) | Host for the browser connection. |
| `viewport_width` | `int` (default: `1080`) | Initial page width (in px). Useful for testing responsive layouts. |
| `viewport_height` | `int` (default: `600`) | Initial page height (in px). |
| `viewport` | `dict` (default: `None`) | Viewport dimensions dict. If set, overrides `viewport_width` and `viewport_height`. |
| `device_scale_factor` | `float` (default: `1.0`) | Device pixel ratio for rendering. Use `2.0` for Retina-quality screenshots. Higher values produce larger images and use more memory. |
| `proxy` | `str` (deprecated) | Deprecated. Use `proxy_config` instead. If set, it is auto-converted internally. |
| `proxy_config` | `ProxyConfig` or `dict` (default: `None`) | For advanced or multi-proxy needs, specify a `ProxyConfig` object or a dict like `{"server": "...", "username": "...", "password": "..."}`. |
| `use_persistent_context` | `bool` (default: `False`) | If `True`, uses a persistent browser context (keeps cookies and sessions across runs). Also sets `use_managed_browser=True`. |
| `user_data_dir` | `str` or `None` (default: `None`) | Directory to store user data (profiles, cookies). Must be set if you want permanent sessions. |
| `chrome_channel` | `str` (default: `"chromium"`) | Chrome channel to launch (e.g., `"chrome"`, `"msedge"`). Only for `browser_type="chromium"`. Auto-set to empty for Firefox/WebKit. |
| `channel` | `str` (default: `"chromium"`) | Alias for `chrome_channel`. |
| `accept_downloads` | `bool` (default: `False`) | Whether to allow file downloads. Requires `downloads_path` if `True`. |
| `downloads_path` | `str` or `None` (default: `None`) | Directory to store downloaded files. |
| `storage_state` | `str`, `dict`, or `None` (default: `None`) | In-memory storage state (cookies, localStorage) to restore browser state. |
| `ignore_https_errors` | `bool` (default: `True`) | If `True`, continues despite invalid certificates (common in dev/staging). |
| `java_script_enabled` | `bool` (default: `True`) | Disable if you want no JS overhead, or if only static content is needed. |
| `sleep_on_close` | `bool` (default: `False`) | Adds a small delay when closing the browser (can help with cleanup issues). |
| `cookies` | `list` (default: `[]`) | Pre-set cookies, each a dict like `{"name": "session", "value": "...", "url": "..."}`. |
| `headers` | `dict` (default: `{}`) | Extra HTTP headers for every request, e.g. `{"Accept-Language": "en-US"}`. |
| `user_agent` | `str` (default: Chrome-based UA) | Your custom user-agent string. |
| `user_agent_mode` | `str` (default: `""`) | Set to `"random"` to randomize the user agent from a pool (helps with bot detection). |
| `user_agent_generator_config` | `dict` (default: `{}`) | Configuration dict for user-agent generation when `user_agent_mode="random"`. |
| `text_mode` | `bool` (default: `False`) | If `True`, tries to disable images and other heavy content for speed. |
| `light_mode` | `bool` (default: `False`) | Disables some background features for performance gains. |
| `avoid_ads` | `bool` (default: `False`) | If `True`, blocks requests to common ad/tracker domains (Google Analytics, DoubleClick, Facebook, Hotjar, etc.) at the browser-context level. |
| `avoid_css` | `bool` (default: `False`) | If `True`, blocks loading of CSS files (`.css`, `.less`, `.scss`, `.sass`) for faster, leaner crawls when only text content is needed. |
| `extra_args` | `list` (default: `[]`) | Additional flags for the underlying browser process, e.g. `["--disable-extensions"]`. |
| `enable_stealth` | `bool` (default: `False`) | Enables playwright-stealth mode to bypass bot detection. Cannot be used with `browser_mode="builtin"`. |

Tips:

  • Set headless=False to visually debug how pages load or how interactions proceed.
  • If you need authentication storage or repeated sessions, consider use_persistent_context=True and specify user_data_dir.
  • For large pages, you might need a bigger viewport_width and viewport_height to handle dynamic content.
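
The second tip can be made concrete. A minimal sketch of a persistent-profile setup, assuming `./browser_profile` is just a placeholder path for any writable directory:

```python
from crawl4ai import BrowserConfig

# Persistent profile so cookies and logins survive across runs.
# "./browser_profile" is a placeholder, not a required location.
persistent_cfg = BrowserConfig(
    headless=True,
    use_persistent_context=True,  # also sets use_managed_browser=True
    user_data_dir="./browser_profile",
)
```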

2. CrawlerRunConfig – Controlling Each Crawl

While BrowserConfig sets up the environment, CrawlerRunConfig details how each crawl operation should behave: caching, content filtering, link or domain blocking, timeouts, JavaScript code, etc.

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

run_cfg = CrawlerRunConfig(
    wait_for="css:.main-content",
    word_count_threshold=15,
    excluded_tags=["nav", "footer"],
    exclude_external_links=True,
    stream=True,  # Enable streaming for arun_many()
)
```

2.1 Parameter Highlights

We group them by category.

A) Content Processing

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `word_count_threshold` | `int` (default: ~200) | Skips text blocks below X words. Helps ignore trivial sections. |
| `extraction_strategy` | `ExtractionStrategy` (default: `None`) | If set, extracts structured data (CSS-based, LLM-based, etc.). |
| `chunking_strategy` | `ChunkingStrategy` (default: `RegexChunking()`) | Strategy to chunk content before extraction. Can be customized for different chunking approaches. |
| `markdown_generator` | `MarkdownGenerationStrategy` (default: `None`) | If you want specialized markdown output (citations, filtering, chunking, etc.). Can be customized with options such as the `content_source` parameter to select the HTML input source (`'cleaned_html'`, `'raw_html'`, or `'fit_html'`). |
| `css_selector` | `str` (default: `None`) | Retains only the part of the page matching this selector. Affects the entire extraction process. |
| `target_elements` | `List[str]` (default: `None`) | List of CSS selectors for elements to focus on for markdown generation and data extraction, while still processing the entire page for links, media, etc. Provides more flexibility than `css_selector`. |
| `excluded_tags` | `list` (default: `None`) | Removes entire tags (e.g. `["script", "style"]`). |
| `excluded_selector` | `str` (default: `None`) | Like `css_selector`, but for exclusion. E.g. `"#ads, .tracker"`. |
| `only_text` | `bool` (default: `False`) | If `True`, tries to extract text-only content. |
| `prettiify` | `bool` (default: `False`) | If `True`, beautifies the final HTML (slower, purely cosmetic). |
| `keep_data_attributes` | `bool` (default: `False`) | If `True`, preserves `data-*` attributes in cleaned HTML. |
| `keep_attrs` | `list` (default: `[]`) | List of HTML attributes to keep during processing (e.g., `["id", "class", "data-value"]`). |
| `remove_forms` | `bool` (default: `False`) | If `True`, removes all `<form>` elements. |
| `parser_type` | `str` (default: `"lxml"`) | HTML parser to use (e.g., `"lxml"`, `"html.parser"`). |
| `scraping_strategy` | `ContentScrapingStrategy` (default: `LXMLWebScrapingStrategy()`) | Strategy to use for content scraping. Can be customized for different scraping needs (e.g., PDF extraction). |

B) Browser Location and Identity

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `locale` | `str` or `None` (default: `None`) | Browser's locale (e.g., `"en-US"`, `"fr-FR"`) for language preferences. |
| `timezone_id` | `str` or `None` (default: `None`) | Browser's timezone (e.g., `"America/New_York"`, `"Europe/Paris"`). |
| `geolocation` | `GeolocationConfig` or `None` (default: `None`) | GPS coordinates configuration. Use `GeolocationConfig(latitude=..., longitude=..., accuracy=...)`. |
| `fetch_ssl_certificate` | `bool` (default: `False`) | If `True`, fetches and includes SSL certificate information in the result. |
| `proxy_config` | `ProxyConfig`, `list[ProxyConfig]`, or `None` (default: `None`) | Proxy configuration for this specific crawl. Pass a single proxy or an ordered list of proxies to try. See Anti-Bot & Fallback. |
| `proxy_rotation_strategy` | `ProxyRotationStrategy` (default: `None`) | Strategy for rotating proxies during crawl operations. |
| `max_retries` | `int` (default: `0`) | Number of retry rounds when anti-bot blocking is detected. Each round tries all proxies in `proxy_config`. |
| `fallback_fetch_function` | `async (str) -> str` or `None` (default: `None`) | Async function called as a last resort after all retries are exhausted. Takes a URL, returns raw HTML. See Anti-Bot & Fallback. |

C) Caching & Session

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `cache_mode` | `CacheMode` or `None` | Controls how caching is handled (`ENABLED`, `BYPASS`, `DISABLED`, etc.). If `None`, typically defaults to `ENABLED`. |
| `session_id` | `str` or `None` | Assign a unique ID to reuse a single browser session across multiple `arun()` calls. |
| `bypass_cache` | `bool` (default: `False`) | Deprecated. If `True`, acts like `CacheMode.BYPASS`. Use `cache_mode` instead. |
| `disable_cache` | `bool` (default: `False`) | Deprecated. If `True`, acts like `CacheMode.DISABLED`. Use `cache_mode` instead. |
| `no_cache_read` | `bool` (default: `False`) | Deprecated. If `True`, acts like `CacheMode.WRITE_ONLY` (writes cache but never reads). Use `cache_mode` instead. |
| `no_cache_write` | `bool` (default: `False`) | Deprecated. If `True`, acts like `CacheMode.READ_ONLY` (reads cache but never writes). Use `cache_mode` instead. |
| `shared_data` | `dict` or `None` (default: `None`) | Shared data passed between hooks and accessible across crawl operations. |

Use these for controlling whether you read or write from a local content cache. Handy for large batch crawls or repeated site visits.
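
To make the deprecated-flag guidance above concrete, here is a short sketch of the modern `cache_mode` equivalents:

```python
from crawl4ai import CrawlerRunConfig, CacheMode

# Read and write the local cache (typical for repeated batch crawls).
cached_cfg = CrawlerRunConfig(cache_mode=CacheMode.ENABLED)

# Always fetch fresh but still record results for later runs:
# the modern equivalent of the deprecated no_cache_read=True.
write_only_cfg = CrawlerRunConfig(cache_mode=CacheMode.WRITE_ONLY)
```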


D) Page Navigation & Timing

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `wait_until` | `str` (default: `"domcontentloaded"`) | Condition for navigation to "complete". Often `"networkidle"` or `"domcontentloaded"`. |
| `page_timeout` | `int` (default: `60000` ms) | Timeout for page navigation or JS steps. Increase for slow sites. |
| `wait_for` | `str` or `None` | Wait for a CSS (`"css:selector"`) or JS (`"js:() => bool"`) condition before content extraction. |
| `wait_for_timeout` | `int` or `None` (default: `None`) | Specific timeout in ms for the `wait_for` condition. If `None`, uses `page_timeout`. |
| `wait_for_images` | `bool` (default: `False`) | Waits for images to load before finishing. Slows things down if you only want text. |
| `delay_before_return_html` | `float` (default: `0.1`) | Additional pause (seconds) before the final HTML is captured. Good for last-second updates. |
| `check_robots_txt` | `bool` (default: `False`) | Whether to check and respect robots.txt rules before crawling. If `True`, caches robots.txt for efficiency. |
| `mean_delay`, `max_range` | `float` (defaults: `0.1`, `0.3`) | If you call `arun_many()`, these define random delay intervals between crawls, helping avoid detection or rate limits. |
| `semaphore_count` | `int` (default: `5`) | Max concurrency for `arun_many()`. Increase if you have resources for parallel crawls. |

E) Page Interaction

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `js_code` | `str` or `list[str]` (default: `None`) | JavaScript to run after `wait_for` and `delay_before_return_html`, on the fully-loaded page. E.g. `"document.querySelector('button')?.click();"`. |
| `js_code_before_wait` | `str` or `list[str]` (default: `None`) | JavaScript to run before `wait_for`. Use it to trigger loading that `wait_for` then checks (e.g. clicking a tab, then waiting for its content). |
| `c4a_script` | `str` or `list[str]` (default: `None`) | C4A script that compiles to JavaScript. An alternative to writing raw JS. |
| `js_only` | `bool` (default: `False`) | If `True`, indicates we're reusing an existing session and only applying JS. No full reload. |
| `ignore_body_visibility` | `bool` (default: `True`) | Skips checking whether `<body>` is visible. Usually best to keep `True`. |
| `scan_full_page` | `bool` (default: `False`) | If `True`, auto-scrolls the page to load dynamic content (infinite scroll). |
| `scroll_delay` | `float` (default: `0.2`) | Delay between scroll steps when scanning the full page (`scan_full_page=True`) or capturing full-page screenshots. |
| `max_scroll_steps` | `int` or `None` (default: `None`) | Maximum number of scroll steps during a full page scan. If `None`, scrolls until the entire page is loaded. |
| `process_iframes` | `bool` (default: `False`) | Inlines iframe content for single-page extraction. |
| `flatten_shadow_dom` | `bool` (default: `False`) | Flattens Shadow DOM content into the light DOM before HTML capture. Resolves slots, strips shadow-scoped styles, and force-opens closed shadow roots. Essential for sites built with Web Components (Stencil, Lit, Shoelace, etc.). |
| `remove_overlay_elements` | `bool` (default: `False`) | Removes potential modals/popups blocking the main content. |
| `remove_consent_popups` | `bool` (default: `False`) | Removes GDPR/cookie consent popups from known CMP providers (OneTrust, Cookiebot, TrustArc, Quantcast, Didomi, Sourcepoint, FundingChoices, etc.). Tries clicking "Accept All" first, then falls back to DOM removal. |
| `simulate_user` | `bool` (default: `False`) | Simulates user interactions (mouse movements) to avoid bot detection. |
| `override_navigator` | `bool` (default: `False`) | Overrides navigator properties in JS for stealth. |
| `magic` | `bool` (default: `False`) | Automatic handling of popups/consent banners. Experimental. |
| `adjust_viewport_to_content` | `bool` (default: `False`) | Resizes the viewport to match the page content height. |

If your page is a single-page app with repeated JS updates, set js_only=True in subsequent calls, plus a session_id for reusing the same tab.
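
A sketch of that pattern as a pair of configs. The session ID and the CSS selector are illustrative placeholders, not part of the API:

```python
from crawl4ai import CrawlerRunConfig, CacheMode

SESSION = "spa_session"  # any unique ID, reused across arun() calls

# First call: full page load in a named session.
first_cfg = CrawlerRunConfig(session_id=SESSION, cache_mode=CacheMode.BYPASS)

# Later calls: same tab, no reload. Just run JS and re-capture the HTML.
next_cfg = CrawlerRunConfig(
    session_id=SESSION,
    js_only=True,
    js_code="document.querySelector('.load-more')?.click();",  # hypothetical selector
    cache_mode=CacheMode.BYPASS,
)
```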


F) Media Handling

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `screenshot` | `bool` (default: `False`) | Captures a screenshot (base64) in `result.screenshot`. |
| `screenshot_wait_for` | `float` or `None` | Extra wait time before the screenshot. |
| `screenshot_height_threshold` | `int` (default: ~20000) | If the page is taller than this, alternate screenshot strategies are used. |
| `force_viewport_screenshot` | `bool` (default: `False`) | If `True`, always captures a viewport-only screenshot regardless of page height. Faster and smaller than full-page screenshots. |
| `pdf` | `bool` (default: `False`) | If `True`, returns a PDF in `result.pdf`. |
| `capture_mhtml` | `bool` (default: `False`) | If `True`, captures an MHTML snapshot of the page in `result.mhtml`. MHTML includes all page resources (CSS, images, etc.) in a single file. |
| `image_description_min_word_threshold` | `int` (default: ~50) | Minimum words for an image's alt text or description to be considered valid. |
| `image_score_threshold` | `int` (default: ~3) | Filters out low-scoring images. The crawler scores images by relevance (size, context, etc.). |
| `exclude_external_images` | `bool` (default: `False`) | Excludes images from other domains. |
| `exclude_all_images` | `bool` (default: `False`) | If `True`, excludes all images from processing (both internal and external). |
| `table_score_threshold` | `int` (default: `7`) | Minimum score threshold for processing a table. Lower values include more tables. |
| `table_extraction` | `TableExtractionStrategy` (default: `DefaultTableExtraction`) | Strategy for table extraction. Defaults to `DefaultTableExtraction` with the configured threshold. |

G) Link/Domain Handling

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `exclude_social_media_domains` | `list` (default: common social domains, e.g. Facebook/Twitter) | A default list that can be extended. Any link to these domains is removed from the final output. |
| `exclude_external_links` | `bool` (default: `False`) | Removes all links pointing outside the current domain. |
| `exclude_social_media_links` | `bool` (default: `False`) | Strips links specifically to social sites (like Facebook or Twitter). |
| `exclude_domains` | `list` (default: `[]`) | Provide a custom list of domains to exclude (like `["ads.com", "trackers.io"]`). |
| `exclude_internal_links` | `bool` (default: `False`) | If `True`, excludes internal links from the results. |
| `score_links` | `bool` (default: `False`) | If `True`, calculates intrinsic quality scores for all links using URL structure, text quality, and contextual metrics. |
| `preserve_https_for_internal_links` | `bool` (default: `False`) | If `True`, preserves the HTTPS scheme for internal links even when the server redirects to HTTP. Useful for security-conscious crawling. |

Use these for link-level content filtering (often to keep crawls “internal” or to remove spammy domains).
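
A sketch of an "internal links only" crawl combining these parameters (the blocked domains are the table's own examples):

```python
from crawl4ai import CrawlerRunConfig

internal_only_cfg = CrawlerRunConfig(
    exclude_external_links=True,      # drop links leaving the current domain
    exclude_social_media_links=True,  # also strip Facebook/Twitter/etc.
    exclude_domains=["ads.com", "trackers.io"],  # custom blocklist on top
)
```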


H) Debug, Logging & Network Monitoring

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `verbose` | `bool` (default: `True`) | Prints logs detailing each step of crawling, interactions, or errors. |
| `log_console` | `bool` (default: `False`) | Logs the page's JavaScript console output if you want deeper JS debugging. |
| `capture_network_requests` | `bool` (default: `False`) | If `True`, captures network requests made by the page in `result.captured_requests`. |
| `capture_console_messages` | `bool` (default: `False`) | If `True`, captures console messages from the page in `result.console_messages`. |

I) Connection & HTTP Parameters

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `method` | `str` (default: `"GET"`) | HTTP method to use with `AsyncHTTPCrawlerStrategy` (e.g., `"GET"`, `"POST"`). |
| `stream` | `bool` (default: `False`) | If `True`, enables streaming mode for `arun_many()` to process URLs as they complete rather than waiting for all. |
| `url` | `str` or `None` (default: `None`) | URL for this specific config. Not typically set directly, but used internally for URL-specific configurations. |
| `user_agent` | `str` or `None` (default: `None`) | Custom User-Agent string for this crawl. Can override the browser-level user agent. |
| `user_agent_mode` | `str` or `None` (default: `None`) | Set to `"random"` to randomize the user agent. Can override the browser-level setting. |
| `user_agent_generator_config` | `dict` (default: `{}`) | Configuration for user-agent generation when `user_agent_mode="random"`. |

J) Virtual Scroll Configuration

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `virtual_scroll_config` | `VirtualScrollConfig` or `dict` (default: `None`) | Configuration for handling virtualized scrolling on sites like Twitter/Instagram where content is replaced rather than appended. |

When sites use virtual scrolling (content replaced as you scroll), use VirtualScrollConfig:

```python
from crawl4ai import VirtualScrollConfig

virtual_config = VirtualScrollConfig(
    container_selector="#timeline",    # CSS selector for scrollable container
    scroll_count=30,                   # Number of times to scroll
    scroll_by="container_height",      # How much to scroll: "container_height", "page_height", or pixels (e.g. 500)
    wait_after_scroll=0.5              # Seconds to wait after each scroll for content to load
)

config = CrawlerRunConfig(
    virtual_scroll_config=virtual_config
)
```

VirtualScrollConfig Parameters:

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `container_selector` | `str` (required) | CSS selector for the scrollable container (e.g., `"#feed"`, `".timeline"`). |
| `scroll_count` | `int` (default: `10`) | Maximum number of scrolls to perform. |
| `scroll_by` | `str` or `int` (default: `"container_height"`) | Scroll amount: `"container_height"`, `"page_height"`, or pixels (e.g., `500`). |
| `wait_after_scroll` | `float` (default: `0.5`) | Time in seconds to wait after each scroll for new content to load. |

When to use Virtual Scroll vs scan_full_page:

  • Use virtual_scroll_config when content is replaced during scroll (Twitter, Instagram)
  • Use scan_full_page when content is appended during scroll (traditional infinite scroll)

See Virtual Scroll documentation for detailed examples.


K) URL Matching Configuration

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `url_matcher` | `UrlMatcher` (default: `None`) | Pattern(s) to match URLs against: a glob string, a function, or a list of mixed types. `None` means match ALL URLs. |
| `match_mode` | `MatchMode` (default: `MatchMode.OR`) | How to combine multiple matchers in a list: `MatchMode.OR` (any match) or `MatchMode.AND` (all must match). |

The url_matcher parameter enables URL-specific configurations when used with arun_many():

```python
from crawl4ai import CrawlerRunConfig, MatchMode
from crawl4ai.processors.pdf import PDFContentScrapingStrategy
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

# Simple string pattern (glob-style)
pdf_config = CrawlerRunConfig(
    url_matcher="*.pdf",
    scraping_strategy=PDFContentScrapingStrategy()
)

# Multiple patterns with OR logic (default)
blog_config = CrawlerRunConfig(
    url_matcher=["*/blog/*", "*/article/*", "*/news/*"],
    match_mode=MatchMode.OR  # Any pattern matches
)

# Function matcher
api_config = CrawlerRunConfig(
    url_matcher=lambda url: 'api' in url or url.endswith('.json'),
    # Other settings like extraction_strategy
)

# Mixed: String + Function with AND logic
complex_config = CrawlerRunConfig(
    url_matcher=[
        lambda url: url.startswith('https://'),  # Must be HTTPS
        "*.org/*",                               # Must be .org domain
        lambda url: 'docs' in url                # Must contain 'docs'
    ],
    match_mode=MatchMode.AND  # ALL conditions must match
)

# Combined patterns and functions with AND logic
secure_docs = CrawlerRunConfig(
    url_matcher=["https://*", lambda url: '.doc' in url],
    match_mode=MatchMode.AND  # Must be HTTPS AND contain .doc
)

# Default config - matches ALL URLs
default_config = CrawlerRunConfig()  # No url_matcher = matches everything
```

UrlMatcher Types:

  • None (default): When url_matcher is None or not set, the config matches ALL URLs
  • String patterns: Glob-style patterns like "*.pdf", "*/api/*", "https://*.example.com/*"
  • Functions: lambda url: bool - Custom logic for complex matching
  • Lists: Mix strings and functions, combined with MatchMode.OR or MatchMode.AND

Important Behavior:

  • When passing a list of configs to arun_many(), URLs are matched against each config's url_matcher in order. First match wins!
  • If no config matches a URL and there's no default config (one without url_matcher), the URL will fail with "No matching configuration found"
  • Always include a default config as the last item if you want to handle all URLs
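
The first-match-wins resolution can be illustrated in plain Python. This is not the library's implementation, just a sketch of the documented behavior (glob strings, callables, `None` matching everything, OR over lists), using hypothetical config dicts in place of `CrawlerRunConfig`:

```python
from fnmatch import fnmatch

def matches(matcher, url):
    """One matcher: a glob string, a callable, or None (match everything)."""
    if matcher is None:
        return True
    if callable(matcher):
        return matcher(url)
    return fnmatch(url, matcher)

def pick_config(url, configs):
    """Return the first config whose url_matcher accepts the URL (OR over lists)."""
    for cfg in configs:
        matcher = cfg.get("url_matcher")
        candidates = matcher if isinstance(matcher, list) else [matcher]
        if any(matches(m, url) for m in candidates):
            return cfg["name"]
    raise LookupError("No matching configuration found")

configs = [
    {"name": "pdf", "url_matcher": "*.pdf"},
    {"name": "blog", "url_matcher": ["*/blog/*", "*/news/*"]},
    {"name": "api", "url_matcher": lambda url: "api" in url},
    {"name": "default"},  # no url_matcher -> matches everything
]

print(pick_config("https://example.com/blog/post", configs))  # blog
```

Dropping the trailing default config reproduces the "No matching configuration found" failure described above.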

L) Advanced Crawling Features

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `deep_crawl_strategy` | `DeepCrawlStrategy` or `None` (default: `None`) | Strategy for deep/recursive crawling. Enables automatic link following and multi-level site crawling. |
| `link_preview_config` | `LinkPreviewConfig`, `dict`, or `None` (default: `None`) | Configuration for link head extraction and scoring. Fetches and scores link metadata without full page loads. |
| `experimental` | `dict` or `None` (default: `None`) | Dictionary for experimental/beta features not yet integrated into the main parameters. Use with caution. |

Deep Crawl Strategy enables automatic site exploration by following links according to defined rules. Useful for sitemap generation or comprehensive site archiving.

Link Preview Config allows efficient link discovery and scoring by fetching only the <head> section of linked pages, enabling smart crawl prioritization without the overhead of full page loads.

Experimental parameters are features in beta testing. They may change or be removed in future versions. Check documentation for currently available experimental features.


2.2 Helper Methods

Both BrowserConfig and CrawlerRunConfig provide a clone() method to create modified copies:

```python
from crawl4ai import CrawlerRunConfig, CacheMode

# Create a base configuration
base_config = CrawlerRunConfig(
    cache_mode=CacheMode.ENABLED,
    word_count_threshold=200
)

# Create variations using clone()
stream_config = base_config.clone(stream=True)
no_cache_config = base_config.clone(
    cache_mode=CacheMode.BYPASS,
    stream=True
)
```

The clone() method is particularly useful when you need slightly different configurations for different use cases, without modifying the original config.

Class-Level Defaults (set_defaults / get_defaults / reset_defaults)

Both config classes support class-level default overrides. When deploying in a server or cloud context, this eliminates the need to pass the same parameters at every call site.

Resolution order: explicit arg > class-level default > hardcoded default

```python
from crawl4ai import BrowserConfig, CrawlerRunConfig

# Set once at application startup
BrowserConfig.set_defaults(
    cache_cdp_connection=True,
    cdp_close_delay=0,
    create_isolated_context=True,
)
CrawlerRunConfig.set_defaults(verbose=False)

# All new instances inherit the class defaults
cfg1 = BrowserConfig(cdp_url="ws://localhost:9222")
# → cache_cdp_connection=True, cdp_close_delay=0

cfg2 = BrowserConfig(cdp_url="ws://localhost:9222", cache_cdp_connection=False)
# → cache_cdp_connection=False (explicit value wins)

# Inspect current defaults
BrowserConfig.get_defaults()
# → {"cache_cdp_connection": True, "cdp_close_delay": 0, "create_isolated_context": True}

# Remove a single default
BrowserConfig.reset_defaults("cdp_close_delay")

# Remove all defaults
BrowserConfig.reset_defaults()
```

API Reference:

| Method | Signature | Description |
|--------|-----------|-------------|
| `set_defaults` | `set_defaults(**kwargs)` | Sets class-level defaults for new instances. Raises `ValueError` if any key is not a valid `__init__` parameter. |
| `get_defaults` | `get_defaults() → dict` | Returns a deep copy of the current class-level defaults. |
| `reset_defaults` | `reset_defaults(*names)` | With no args, clears all defaults. With args, removes only the named defaults. |

Notes:

  • Defaults are independent per class — BrowserConfig.set_defaults() has no effect on CrawlerRunConfig.
  • Mutable values (lists, dicts) are deep-copied on storage and on each instance creation, so instances do not share objects.
  • clone(), dump()/load(), and from_kwargs() all work correctly with class defaults — serialized data is self-contained and independent of the current class defaults.
  • Defaults are stored in memory for the lifetime of the process. They are not persisted to disk.

2.3 Example Usage

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    # Configure the browser
    browser_cfg = BrowserConfig(
        headless=False,
        viewport_width=1280,
        viewport_height=720,
        proxy_config={"server": "http://myproxy:8080", "username": "user", "password": "pass"},
        text_mode=True
    )

    # Configure the run
    run_cfg = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        session_id="my_session",
        css_selector="main.article",
        excluded_tags=["script", "style"],
        exclude_external_links=True,
        wait_for="css:.article-loaded",
        screenshot=True,
        stream=True
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun(
            url="https://example.com/news",
            config=run_cfg
        )
        if result.success:
            print("Final cleaned_html length:", len(result.cleaned_html))
            if result.screenshot:
                print("Screenshot captured (base64, length):", len(result.screenshot))
        else:
            print("Crawl failed:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

2.4 Compliance & Ethics

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `check_robots_txt` | `bool` (default: `False`) | When `True`, checks and respects robots.txt rules before crawling. Uses efficient caching with a SQLite backend. |
| `user_agent` | `str` (default: `None`) | User-agent string to identify your crawler. Used for robots.txt checking when enabled. |

```python
run_config = CrawlerRunConfig(
    check_robots_txt=True,  # Enable robots.txt compliance
    user_agent="MyBot/1.0"  # Identify your crawler
)
```

3. LLMConfig – Setting Up LLM Providers

LLMConfig passes LLM provider settings to the strategies and functions that rely on LLMs for extraction, filtering, schema generation, and similar tasks. It is currently used by the following:

  1. LLMExtractionStrategy
  2. LLMContentFilter
  3. JsonCssExtractionStrategy.generate_schema
  4. JsonXPathExtractionStrategy.generate_schema
  5. AdaptiveConfig.embedding_llm_config (embedding model for adaptive crawling)
  6. AdaptiveConfig.query_llm_config (chat completion model for query expansion in adaptive crawling)
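
A usage sketch pairing LLMConfig with the first strategy in the list. The provider, instruction text, and env-prefixed token are illustrative choices, not requirements:

```python
from crawl4ai import LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy

llm_cfg = LLMConfig(
    provider="groq/llama3-70b-8192",
    api_token="env: GROQ_API_KEY",  # resolved from the environment at call time
)

strategy = LLMExtractionStrategy(
    llm_config=llm_cfg,
    instruction="Extract product name and price as JSON.",
)
```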

3.1 Parameters

| Parameter | Type / Default | What It Does |
|-----------|----------------|--------------|
| `provider` | `str` (default: `"openai/gpt-4o-mini"`) | Which LLM provider/model to use, e.g. `"ollama/llama3"`, `"groq/llama3-70b-8192"`, `"groq/llama3-8b-8192"`, `"openai/gpt-4o-mini"`, `"openai/gpt-4o"`, `"openai/o1-mini"`, `"openai/o1-preview"`, `"openai/o3-mini"`, `"openai/o3-mini-high"`, `"anthropic/claude-3-haiku-20240307"`, `"anthropic/claude-3-opus-20240229"`, `"anthropic/claude-3-sonnet-20240229"`, `"anthropic/claude-3-5-sonnet-20240620"`, `"gemini/gemini-pro"`, `"gemini/gemini-1.5-pro"`, `"gemini/gemini-2.0-flash"`, `"gemini/gemini-2.0-flash-exp"`, `"gemini/gemini-2.0-flash-lite-preview-02-05"`, `"deepseek/deepseek-chat"`. |
| `api_token` | `str` (optional) | API token for the given provider. If not provided, it is read from an environment variable based on the provider (e.g., a Gemini model reads `GEMINI_API_KEY`). You can pass a raw token directly (e.g. `api_token="gsk_..."`) or reference an environment variable explicitly with the `env:` prefix (e.g. `api_token="env: GROQ_API_KEY"`). |
| `base_url` | `str` (optional) | Custom API endpoint, if your provider uses one. |
| `backoff_base_delay` | `int` (default: `2`) | Seconds to wait before the first retry when the provider throttles a request. |
| `backoff_max_attempts` | `int` (default: `3`) | Total tries (initial call + retries) before surfacing an error. |
| `backoff_exponential_factor` | `int` (default: `2`) | Multiplier that increases the wait time for each retry (`delay = base_delay * factor**attempt`). |

3.2 Example Usage

```python
import os
from crawl4ai import LLMConfig

llm_config = LLMConfig(
    provider="openai/gpt-4o-mini",
    api_token=os.getenv("OPENAI_API_KEY"),
    backoff_base_delay=1,          # optional
    backoff_max_attempts=5,        # optional
    backoff_exponential_factor=3,  # optional
)
```
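
Given the documented formula `delay = base_delay * factor**attempt`, the settings above imply the following retry waits. This is a quick arithmetic check, not library code:

```python
# backoff_base_delay=1, backoff_exponential_factor=3, backoff_max_attempts=5
# means 1 initial call + 4 retries, i.e. 4 waits between attempts.
base_delay, factor, max_attempts = 1, 3, 5

delays = [base_delay * factor ** attempt for attempt in range(max_attempts - 1)]
print(delays)  # [1, 3, 9, 27]
```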

4. Putting It All Together

  • Use BrowserConfig for global browser settings: engine, headless, proxy, user agent.
  • Use CrawlerRunConfig for each crawl’s context: how to filter content, handle caching, wait for dynamic elements, or run JS.
  • Pass the BrowserConfig to the AsyncWebCrawler constructor and the CrawlerRunConfig to arun().
  • Use LLMConfig for LLM provider configurations that can be used across all extraction, filtering, schema generation, and adaptive crawling tasks. Can be used in - LLMExtractionStrategy, LLMContentFilter, JsonCssExtractionStrategy.generate_schema, JsonXPathExtractionStrategy.generate_schema, and AdaptiveConfig (embedding_llm_config / query_llm_config)

```python
# Create a modified copy with the clone() method
stream_cfg = run_cfg.clone(
    stream=True,
    cache_mode=CacheMode.BYPASS
)

# Or set project-wide defaults once at startup
BrowserConfig.set_defaults(headless=True, text_mode=True)
CrawlerRunConfig.set_defaults(cache_mode=CacheMode.BYPASS)
```