docs/commands/url.md
# f url

Inspect or crawl URLs into compact AI-friendly summaries.
```sh
# Thin single-page summary
f url inspect https://developers.cloudflare.com/changelog/post/2026-03-10-br-crawl-endpoint/

# Force Cloudflare Browser Rendering markdown
f url inspect --provider cloudflare https://developers.cloudflare.com/changelog/post/2026-03-10-br-crawl-endpoint/

# Machine-readable output
f url inspect --json https://linear.app/fl2024008/project/llm-proxy-v1-6cd0a041bd76/overview

# Explicit site crawl (Cloudflare Browser Rendering)
f url crawl https://developers.cloudflare.com/browser-rendering/rest-api/crawl-endpoint/ --limit 3 --records 2
```
## inspect

`f url inspect <url>` uses this provider order:

1. The `[skills.seq]` scraper backend, when one is configured (see below).
2. Cloudflare Browser Rendering (force it with `--provider cloudflare`).

Default output is intentionally compact. Use `--full` to include the full markdown/content body.
## crawl

`f url crawl <url>` is the explicit multi-page path. It currently uses the Cloudflare Browser Rendering crawl endpoint and polls until the job completes or the wait timeout is reached.
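The wait-with-timeout behavior can be sketched as a generic shell loop. This is an illustration of the polling shape, not the actual implementation; the function name and default interval are assumptions:

```sh
# poll_until CMD TIMEOUT [INTERVAL]
# Re-runs CMD every INTERVAL seconds (default 1) until it succeeds,
# or gives up once TIMEOUT seconds have elapsed.
poll_until() {
  cmd=$1
  timeout=$2
  interval=${3:-1}
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if $cmd; then return 0; fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1
}
```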
Useful flags:
```sh
f url crawl <url> --limit 10 --records 5
f url crawl <url> --depth 2 --render
f url crawl <url> --include-pattern "https://developers.cloudflare.com/browser-rendering/*"
f url crawl <url> --exclude-pattern "*/changelog/*"
f url crawl <url> --json
```
Defaults are tuned to stay small:

- `--limit 10`
- `--depth 2`
- `--records 5`
- `--render false`

Cloudflare auth is read from the environment. Required keys:

- `CLOUDFLARE_ACCOUNT_ID`
- `CLOUDFLARE_API_TOKEN`

No daemon is required.
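A minimal preflight check for these two keys can look like the following. The helper function is a sketch for your own scripts, not part of `f` itself:

```sh
# check_cf_env: verify both Cloudflare credentials are present
# before invoking `f url crawl`.
check_cf_env() {
  missing=0
  for v in CLOUDFLARE_ACCOUNT_ID CLOUDFLARE_API_TOKEN; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "cloudflare env ok"
}
```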
If you have a local scraper backend, `f url inspect` reuses `[skills.seq]` settings from the repo `flow.toml` or the global `~/.config/flow/flow.toml`:

```toml
[skills.seq]
scraper_base_url = "http://127.0.0.1:7444"
scraper_api_key = "..."
cache_ttl_hours = 2
allow_direct_fallback = true
```
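The repo-over-global lookup can be sketched as a small shell helper. The repo-local-first precedence is assumed from the wording above; `f` may resolve the config differently:

```sh
# locate_flow_config: prefer the repo-local flow.toml in the current
# directory, fall back to the global ~/.config/flow/flow.toml.
locate_flow_config() {
  if [ -f "./flow.toml" ]; then
    echo "./flow.toml"
  elif [ -f "$HOME/.config/flow/flow.toml" ]; then
    echo "$HOME/.config/flow/flow.toml"
  else
    return 1
  fi
}
```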