docs/guides/http-clients.mdx
import ApiLink from '@site/src/components/ApiLink'; import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; import RunnableCodeBlock from '@site/src/components/RunnableCodeBlock';
import CheerioGotScrapingExample from '!!raw-loader!roa-loader!./http-clients/cheerio-got-scraping-example.ts'; import CheerioImpitExample from '!!raw-loader!roa-loader!./http-clients/cheerio-impit-example.ts';
HTTP clients are used by HTTP-based crawlers (e.g., <ApiLink to="cheerio-crawler/class/CheerioCrawler">CheerioCrawler</ApiLink>) to communicate with web servers. They rely on external HTTP libraries rather than a browser. Examples of such libraries include `impit` or `got-scraping`. After retrieving page content, an HTML parsing library is typically used to extract data. Examples of such libraries include Cheerio, jsdom, or linkedom. These crawlers are faster than browser-based crawlers but generally cannot execute client-side JavaScript.
```mermaid
---
config:
  class:
    hideEmptyMembersBox: true
---
classDiagram

%% ========================
%% Abstract classes
%% ========================

class BaseHttpClient {
    <<abstract>>
}

%% ========================
%% Specific classes
%% ========================

class ImpitHttpClient
class GotScrapingHttpClient

%% ========================
%% Inheritance arrows
%% ========================

ImpitHttpClient --|> BaseHttpClient
GotScrapingHttpClient --|> BaseHttpClient
```
Crawlee currently provides two main HTTP clients: <ApiLink to="core/class/GotScrapingHttpClient">GotScrapingHttpClient</ApiLink>, which uses the `got-scraping` library, and <ApiLink to="impit-client/class/ImpitHttpClient">ImpitHttpClient</ApiLink>, which uses the `impit` library. You can switch between them by setting the `httpClient` parameter when initializing a crawler class. The default HTTP client is <ApiLink to="core/class/GotScrapingHttpClient">GotScrapingHttpClient</ApiLink>. For more details on anti-blocking features, see our avoid getting blocked guide.
Below are examples of how to configure the HTTP client for the <ApiLink to="cheerio-crawler/class/CheerioCrawler">CheerioCrawler</ApiLink>:
Since <ApiLink to="core/class/GotScrapingHttpClient">GotScrapingHttpClient</ApiLink> is the default HTTP client, it's included with the base Crawlee installation and requires no additional packages.
For <ApiLink to="impit-client/class/ImpitHttpClient">ImpitHttpClient</ApiLink>, you need to install a separate @crawlee/impit-client package:
```shell
npm i @crawlee/impit-client
```
Crawlee provides the <ApiLink to="core/interface/BaseHttpClient">BaseHttpClient</ApiLink> interface, which defines the contract that all HTTP clients must implement. This allows you to create custom HTTP clients tailored to your specific requirements.
HTTP clients are responsible for several key operations: sending HTTP requests and returning their responses, streaming response bodies, following redirects, and routing traffic through configured proxies.
To create a custom HTTP client, you need to implement the <ApiLink to="core/interface/BaseHttpClient">BaseHttpClient</ApiLink> interface. Your implementation must be async-compatible and include proper cleanup and resource management to work seamlessly with Crawlee's concurrent processing model.
This guide introduced you to the HTTP clients available in Crawlee and demonstrated how to switch between them, including their installation requirements and usage examples. You also learned about the responsibilities of HTTP clients and how to implement your own custom HTTP client by implementing the <ApiLink to="core/interface/BaseHttpClient">BaseHttpClient</ApiLink> interface.
If you have questions or need assistance, feel free to reach out on our GitHub or join our Discord community. Happy scraping!