docs/developers/sdk.mdx
As part of code generation, Reworkd generates code in its own custom SDK called Harambe. Harambe is a web scraping SDK with a number of useful methods and features. These methods, what they do, how they work, and examples of how to use them are highlighted below.
## save_data

Save scraped data and validate that its type matches the current schema.
**Signature:**

```python
def save_data(self, data: dict[str, Any], source_url: str | None = None) -> None
```
**Params:**

- `data`: Rows of data (as dictionaries) to save
- `source_url`: Optional URL to associate with the data; defaults to the current page URL. Only use this if the source of the data is different from the current page at the time the data is saved.

**Raises:**

- `SchemaValidationError`: If any of the saved data does not match the provided schema

**Example:**
```python
await sdk.save_data({"title": "example", "description": "another_example"})
await sdk.save_data({"title": "example", "description": "another_example"}, source_url="https://www.example.com/product/example_id")
```
## enqueue

Enqueue URL(s) to be scraped later.
**Signature:**

```python
def enqueue(self, urls: str | Awaitable[str], context: dict[str, Any] | None = None, options: dict[str, Any] | None = None) -> None
```
**Params:**

- `urls`: URLs to enqueue
- `context`: Additional context to pass to the next run of the next stage/URL. Typically this is data that is only available on the current page but required in the schema. Only use this when some data is available on this page but not on the page being enqueued. See the sketch after the examples below.
- `options`: Job-level options to pass to the next stage/URL

**Example:**
await sdk.enqueue("https://www.test.com")
await sdk.enqueue("/some-path") # This will automatically be converted into an absolute url
## paginate

SDK method to automatically facilitate paginating a list of elements. Simply define a function that returns either the URL of the next page or the ElementHandle of the next-page element (or `None` when there is no next page), and pass it to `sdk.paginate` at the end of your scrape function. The element will automatically be used to paginate the site, and the scraping code will run against all pages.

Pagination concludes once all pages have been visited and no next-page element is found.

This method should ALWAYS be used for pagination instead of manual for loops and if statements.

**Signature:**
```python
def paginate(self, get_next_page_element: Callable[..., Awaitable[str | ElementHandle | None]], timeout: int = 2000) -> None
```
**Params:**

- `get_next_page_element`: Function returning the URL or ElementHandle of the next page
- `timeout`: Milliseconds to sleep before continuing. Only use this if there is no other wait option

**Example:**
```python
async def pager():
    return await page.query_selector("div.pagination > .pager.next")

await sdk.paginate(pager)
```
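Since `get_next_page_element` may also return a string, a pager can return the next page's URL directly. A minimal sketch, assuming a hypothetical `a.next` link whose `href` points to the next page:

```python
# Hypothetical example: return the next page's URL instead of an element
async def url_pager():
    link = await page.query_selector("a.next")
    # Returning None ends pagination
    return await link.get_attribute("href") if link else None

await sdk.paginate(url_pager)
```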
## capture_url

Capture the URL of a click event. This will click the element and return the URL via network request interception. This is useful for capturing URLs that are generated dynamically (e.g. redirects to document downloads).
**Signature:**

```python
def capture_url(self, clickable: ElementHandle, resource_type: Literal["document", "stylesheet", "image", "media", "font", "script", "texttrack", "xhr", "fetch", "eventsource", "websocket", "manifest", "other", "*"] = "document", timeout: int | None = 10000) -> str | None
```
**Params:**

- `clickable`: The element to click
- `resource_type`: The type of resource to capture
- `timeout`: The time to wait for the new page to open (in ms)

**Return Value:**

- `url`: The URL of the captured resource, or `None` if no match was found
**Raises:**

- `ValueError`: If more than one page is created by the click event
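**Example:**

A minimal sketch, assuming a hypothetical download button on the page; note the result may be `None` if nothing matched:

```python
# Hypothetical example: capture the document URL behind a download button
button = await page.query_selector("a.download-report")
if button:
    url = await sdk.capture_url(button, resource_type="document")
    if url is not None:
        await sdk.save_data({"download_url": url})
```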
## capture_download

Capture a download event that gets triggered by clicking an element. This method will click the element, wait for the resulting download, and return the download's metadata.

**Signature:**
```python
def capture_download(self, clickable: ElementHandle, override_filename: str | None = None, override_url: str | None = None, timeout: float | None = None) -> DownloadMeta
```
**Return Value:**

- `DownloadMeta`: A typed dict containing the download metadata, such as the URL and filename
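**Example:**

A minimal sketch, assuming a hypothetical download link; the saved field names mirror the `capture_pdf` example below:

```python
# Hypothetical example: capture a file download triggered by a click
link = await page.query_selector("a.download-file")
if link:
    meta = await sdk.capture_download(link)
    await sdk.save_data({"file_name": meta["filename"], "download_url": meta["url"]})
```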
## capture_html

Capture and download the HTML content of the document or a specific element. The returned HTML will be cleaned of any excluded elements and will be wrapped in a proper HTML document structure.
**Signature:**

```python
def capture_html(self, selector: str = "html", exclude_selectors: list[str] | None = None, soup_transform: Callable[[BeautifulSoup], None] | None = None, html_converter_type: Literal["markdown", "text"] = "markdown") -> HTMLMetadata
```
**Params:**

- `selector`: CSS selector of the element to capture. Defaults to `"html"` for the document element.
- `exclude_selectors`: List of CSS selectors for elements to exclude from the capture.
- `soup_transform`: A function to transform the BeautifulSoup HTML prior to saving. Use this to remove aspects of the returned content.
- `html_converter_type`: Type of HTML converter to use for the inner text. Defaults to `"markdown"`.

**Return Value:**

- `HTMLMetadata`: Contains the HTML of the element and the formatted text of the element, along with the URL and filename of the document
**Raises:**

- `ValueError`: If the specified selector doesn't match any element

**Example:**
```python
meta = await sdk.capture_html(selector="div.content")
await sdk.save_data({"name": meta["filename"], "text": meta["text"], "download_url": meta["url"]})
```
## capture_pdf

Capture the current page as a PDF, then apply download-handling logic from the observer to transform it into a usable URL.
**Signature:**

```python
def capture_pdf(self) -> DownloadMeta
```
**Return Value:**

- `DownloadMeta`: A typed dict containing the download metadata, such as the URL and filename

**Example:**
```python
meta = await sdk.capture_pdf()
await sdk.save_data({"file_name": meta["filename"], "download_url": meta["url"]})
```
## log

Log a message via both `print` and `console.log` (if a browser is running). All arguments are concatenated with spaces.

**Signature:**

```python
def log(self, *args) -> None
```

**Params:**

- `*args`: Values to log (will be concatenated)
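**Example:**

A minimal sketch, assuming a hypothetical list of product elements gathered earlier in the scrape (add `await` if your SDK version exposes `log` as a coroutine):

```python
# Hypothetical example: log progress during a scrape
items = await page.query_selector_all("div.product")
sdk.log("Found", len(items), "products on", page.url)
```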