agent-skill/Scrapling-Skill/references/parsing/selection.md
Scrapling currently supports parsing HTML pages exclusively (no XML feeds), because the adaptive feature does not work with XML.
In Scrapling, there are five main ways to find elements:
1. CSS selectors.
2. XPath selectors.
3. Searching by text content with the find_by_text method.
4. Searching by regular expressions with the find_by_regex method.
5. BeautifulSoup-style filters with the find and find_all methods.
There are also indirect ways to find elements; for example, Scrapling can find elements similar to a given element (see Finding Similar Elements).
CSS is a language for applying styles to HTML documents. It defines selectors to associate those styles with specific HTML elements.
Scrapling implements CSS3 selectors as described in the W3C specification. CSS selector support comes from cssselect, so refer to the cssselect documentation for the supported selectors, pseudo-functions, and pseudo-elements.
Also, Scrapling implements some non-standard pseudo-elements like:
- ::text to select the element's text content.
- ::attr(name), where name is the name of the attribute that you want the value of.

The selector logic follows the same conventions as Scrapy/Parsel.
To select elements with CSS selectors, use the css method, which returns Selectors. Use [0] to get the first element, or .get() / .getall() to extract text values from text/attribute pseudo-selectors.
XPath is a language for selecting nodes in XML documents, which can also be used with HTML. This cheatsheet is a good resource for learning about XPath. Scrapling adds XPath selectors directly through lxml.
The logic follows the same conventions as Scrapy/Parsel. However, Scrapling does not implement the XPath extension function has-class as Scrapy/Parsel does. Instead, it provides the has_class method on returned elements.
To select elements with XPath selectors, use the xpath method, which follows the same logic as the CSS selectors method above.
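Since the has-class XPath extension function isn't available, the class check can be done in Python on the elements the xpath method returns. A minimal sketch (the intro class name is just an illustration):

# Keep only the <p> elements that carry the "intro" class
intro_paragraphs = [p for p in page.xpath('//p') if p.has_class('intro')]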
Note that the css and xpath methods both take additional arguments that we don't cover here, as they all relate to the adaptive feature. The adaptive feature will be described in detail on its own page later.
Let's see some shared examples of using CSS and XPath Selectors.
Select all elements with the class product.
products = page.css('.product')
products = page.xpath('//*[@class="product"]')
Note: The XPath version won't be accurate if there's another class; it's always better to rely on CSS for selecting by class.
Select the first element with the class product.
product = page.css('.product')[0]
product = page.xpath('//*[@class="product"]')[0]
Get the text of the first element with the h1 tag name
title = page.css('h1::text').get()
title = page.xpath('//h1//text()').get()
Which is the same as doing
title = page.css('h1')[0].text
title = page.xpath('//h1')[0].text
Get the href attribute of the first element with the a tag name
link = page.css('a::attr(href)').get()
link = page.xpath('//a/@href').get()
Select the text of the first h1 element that contains Phone and is under an element with the class product.
title = page.css('.product h1:contains("Phone")::text').get()
title = page.xpath('//*[@class="product"]//h1[contains(text(),"Phone")]/text()').get()
You can nest and chain selectors as you want, provided each step returns results
page.css('.product')[0].css('h1:contains("Phone")::text').get()
page.xpath('//*[@class="product"]')[0].xpath('//h1[contains(text(),"Phone")]/text()').get()
page.xpath('//*[@class="product"]')[0].css('h1:contains("Phone")::text').get()
Another example
All links that have 'image' in their 'href' attribute
links = page.css('a[href*="image"]')
links = page.xpath('//a[contains(@href, "image")]')
for index, link in enumerate(links):
    link_value = link.attrib['href']  # Cleaner than link.css('::attr(href)').get()
    link_text = link.text
    print(f'Link number {index} points to this url {link_value} with text content as "{link_text}"')
Scrapling provides two ways to select elements based on their direct text content:
- The find_by_text method.
- The find_by_regex method.

Anything achievable with find_by_text can also be done with find_by_regex, but both are provided for convenience.
With find_by_text, you pass the text as the first argument; with find_by_regex, the regex pattern is the first argument. Both methods share the following arguments:
- first_match: If True (the default), the method returns the first result it finds.
- case_sensitive: If True, the case of the letters will be considered.
- clean_match: If True, all whitespace and consecutive spaces are replaced with a single space before matching.

By default, Scrapling searches for an exact match of the text/pattern you pass to find_by_text, so the text content of the wanted element has to be ONLY the text you input. That's why it also has one extra argument:

- partial: If True, find_by_text returns elements that contain the input text, so it's no longer an exact match.

Note: The method find_by_regex can accept both regular strings and a compiled regex pattern as its first argument.
Scrapling can find elements similar to a given element, inspired by the AutoScraper library but usable with elements found by any method.
Given an element (e.g., a product found by title), calling .find_similar() on it causes Scrapling to:
1. Find the element's location in the document tree (its parent and depth).
2. Collect the other elements that share that location and tag name.
3. Match those candidates against the current element's attributes (and, optionally, its text content).
Arguments for find_similar():
- ignore_attributes: The attribute names to ignore while matching (Step 3). It defaults to ('href', 'src',) because URLs can change significantly across elements, making them unreliable.
- match_text: If True, the element's text content will be considered when matching (Step 3). Using this argument in typical cases is not recommended, but it depends.

Examples of finding elements with raw text, regex, and find_similar:
from scrapling.fetchers import Fetcher
page = Fetcher.get('https://books.toscrape.com/index.html')
Find the first element whose text fully matches this text
>>> page.find_by_text('Tipping the Velvet')
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>
Combining it with page.urljoin to return the full URL from the relative href.
>>> page.find_by_text('Tipping the Velvet').attrib['href']
'catalogue/tipping-the-velvet_999/index.html'
>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href'])
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'
Get all matches if there are more (notice it returns a list)
>>> page.find_by_text('Tipping the Velvet', first_match=False)
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
Get all elements that contain the word the (Partial matching)
>>> results = page.find_by_text('the', partial=True, first_match=False)
>>> [i.text for i in results]
['A Light in the ...',
'Tipping the Velvet',
'The Requiem Red',
'The Dirty Little Secrets ...',
'The Coming Woman: A ...',
'The Boys in the ...',
'The Black Maria',
'Mesaerion: The Best Science ...',
"It's Only the Himalayas"]
The search is case-insensitive by default, so those results include The, not just the lowercase the. To limit to exact case:
>>> results = page.find_by_text('the', partial=True, first_match=False, case_sensitive=True)
>>> [i.text for i in results]
['A Light in the ...',
'Tipping the Velvet',
'The Boys in the ...',
"It's Only the Himalayas"]
Get the first element whose text content matches my price regex
>>> page.find_by_regex(r'£[\d\.]+')
<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>
>>> page.find_by_regex(r'£[\d\.]+').text
'£51.77'
It's the same if you pass the compiled regex as well; Scrapling will detect the input type and act upon that:
>>> import re
>>> regex = re.compile(r'£[\d\.]+')
>>> page.find_by_regex(regex)
<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>
>>> page.find_by_regex(regex).text
'£51.77'
Get all elements that match the regex
>>> page.find_by_regex(r'£[\d\.]+', first_match=False)
[<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]
And so on...
Find all elements similar to the current element in location and attributes. For our case, ignore the 'title' attribute while matching
>>> element = page.find_by_text('Tipping the Velvet')
>>> element.find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]
The number of elements is 19, not 20, because the current element is not included in the results:
>>> len(element.find_similar(ignore_attributes=['title']))
19
Get the href attribute from all similar elements
>>> [
...     element.attrib['href']
...     for element in element.find_similar(ignore_attributes=['title'])
... ]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]
Getting all books' data using that element as a starting point:
>>> for product in element.parent.parent.find_similar():
...     print({
...         "name": product.css('h3 a::text').get(),
...         "price": product.css('.price_color')[0].re_first(r'[\d\.]+'),
...         "stock": product.css('.availability::text').getall()[-1].clean()
...     })
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...
Advanced examples using the find_similar method:
E-commerce Product Extraction
def extract_product_grid(page):
    # Find the first product card
    first_product = page.find_by_text('Add to Cart').find_ancestor(
        lambda e: e.has_class('product-card')
    )
    # Find similar product cards
    products = first_product.find_similar()
    return [
        {
            'name': p.css('h3::text').get(),
            'price': p.css('.price::text').re_first(r'\d+\.\d{2}'),
            'stock': 'In stock' in p.text,
            'rating': p.css('.rating')[0].attrib.get('data-rating')
        }
        for p in products
    ]
Table Row Extraction
def extract_table_data(page):
    # Find the first data row
    first_row = page.css('table tbody tr')[0]
    # Find similar rows
    rows = first_row.find_similar()
    return [
        {
            'column1': row.css('td:nth-child(1)::text').get(),
            'column2': row.css('td:nth-child(2)::text').get(),
            'column3': row.css('td:nth-child(3)::text').get()
        }
        for row in rows
    ]
Form Field Extraction
def extract_form_fields(page):
    # Find the first form field container
    first_field = page.css('input')[0].find_ancestor(
        lambda e: e.has_class('form-field')
    )
    # Find similar field containers
    fields = first_field.find_similar()
    return [
        {
            'label': f.css('label::text').get(),
            'type': f.css('input')[0].attrib.get('type'),
            'required': 'required' in f.css('input')[0].attrib
        }
        for f in fields
    ]
Extracting reviews from a website
def extract_reviews(page):
    # Find the first review
    first_review = page.find_by_text('Great product!')
    review_container = first_review.find_ancestor(
        lambda e: e.has_class('review')
    )
    # Find similar reviews
    all_reviews = review_container.find_similar()
    return [
        {
            'text': r.css('.review-text::text').get(),
            'rating': r.attrib.get('data-rating'),
            'author': r.css('.reviewer::text').get()
        }
        for r in all_reviews
    ]
Inspired by BeautifulSoup's find_all function, elements can be found using the find_all and find methods. Both accept multiple filters; find_all returns all elements on the page that satisfy every filter, while find returns only the first.
To be more specific:
- Any string passed is considered a tag name.
- Any iterable passed that isn't a string (like a list or tuple) is considered an iterable of tag names.
- Any dictionary passed is considered a mapping of HTML element attribute names to attribute values.
- Any regex pattern passed is used to filter elements by their text content, like the find_by_regex method.
- Any function passed is used as a filter.
- Any keyword argument passed is considered an HTML element attribute with its value.

It collects all passed arguments and keywords, and each filter passes its results to the following filter in a waterfall-like filtering system; a short sketch follows the notes below.
It filters all elements in the current page/element in the following order:
1. All elements with the passed tag name(s).
2. All elements that match all the passed attribute(s).
3. All elements whose text content matches all the passed regex pattern(s).
4. All elements that fulfill all the passed function(s).
Note: The order in which you pass the arguments doesn't matter; filtering always runs in the order above, as the page.find_all({'itemtype': ...}, 'div') example below shows.
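As a quick sketch of that waterfall before the real examples, a call mixing several filter types narrows the results stage by stage (the selector values here are illustrative):

# 1. Keep only <div> elements,
# 2. then only those with class="quote",
# 3. then only those with at least one child element
elements = page.find_all(
    'div',
    {'class': 'quote'},
    lambda e: len(e.children) > 0
)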
>>> from scrapling.fetchers import Fetcher
>>> page = Fetcher.get('https://quotes.toscrape.com/')
Find all elements with the tag name div.
>>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]
Find all div elements with a class that equals quote.
>>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
Same as above.
>>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
Find all elements with a class that equals quote.
>>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
Find all div elements with a class equal to quote that contain an element matching .text whose content includes the word 'world'.
>>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css('.text::text').get())
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]
Find all elements that have children.
>>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]
Find all elements that contain the word 'world' in their content.
>>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">“The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]
Find all span elements that match the given regex
>>> import re
>>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">“The...' parent='<div class="quote" itemscope itemtype="h...'>]
Find all div and span elements with class 'quote' (No span elements like that, so only div returned)
>>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]
Mix things up
>>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text').getall()
['Albert Einstein',
'J.K. Rowling',
...]
A bonus pro tip: Find all elements whose href attribute's value ends with the word 'Einstein'.
>>> page.find_all({'href$': 'Einstein'})
[<data='<a href="/author/Albert-Einstein">(about...' parent='<span>by <small class="author" itemprop=...'>,
<data='<a href="/author/Albert-Einstein">(about...' parent='<span>by <small class="author" itemprop=...'>,
<data='<a href="/author/Albert-Einstein">(about...' parent='<span>by <small class="author" itemprop=...'>]
Another pro tip: Find all elements whose href attribute's value has '/author/' in it
>>> page.find_all({'href*': '/author/'})
[<data='<a href="/author/Albert-Einstein">(about...' parent='<span>by <small class="author" itemprop=...'>,
<data='<a href="/author/J-K-Rowling">(about)</a...' parent='<span>by <small class="author" itemprop=...'>,
<data='<a href="/author/Albert-Einstein">(about...' parent='<span>by <small class="author" itemprop=...'>,
...]
And so on...
CSS/XPath selectors can be generated for any element, regardless of the method used to find it.
Generate a short CSS selector for the url_element element (short if possible; otherwise, it falls back to the full selector)
>>> url_element = page.find({'href*': '/author/'})
>>> url_element.generate_css_selector
'body > div > div:nth-of-type(2) > div > div > span:nth-of-type(2) > a'
Generate a full CSS selector for the url_element element from the start of the page
>>> url_element.generate_full_css_selector
'body > div > div:nth-of-type(2) > div > div > span:nth-of-type(2) > a'
Generate a short XPath selector for the url_element element (short if possible; otherwise, it falls back to the full selector)
>>> url_element.generate_xpath_selector
'//body/div/div[2]/div/div/span[2]/a'
Generate a full XPath selector for the url_element element from the start of the page
>>> url_element.generate_full_xpath_selector
'//body/div/div[2]/div/div/span[2]/a'
Note: When generating a short selector, Scrapling tries to find a unique element (e.g., one with an id attribute) as a stop point. If none exists, the short and full selectors will be identical.
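Since these properties return plain selector strings, their output can be fed straight back into the css or xpath methods. A small sanity check, assuming the page hasn't changed since the element was selected:

# The generated selector should land back on the same element
short_selector = url_element.generate_css_selector
assert page.css(short_selector)[0].attrib == url_element.attrib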
Similar to parsel/scrapy, re and re_first methods are available for extracting data using regular expressions. These methods exist in Selector, Selectors, TextHandler, and TextHandlers, so they can be used directly on elements even without selecting a text node. See the TextHandler class for details.
Examples:
>>> page.css('.price_color')[0].re_first(r'[\d\.]+')
'51.77'
>>> page.css('.price_color').re_first(r'[\d\.]+')
'51.77'
>>> page.css('.price_color').re(r'[\d\.]+')
['51.77',
'53.74',
'50.10',
'47.82',
'54.23',
...]
>>> page.css('.product_pod h3 a::attr(href)').re(r'catalogue/(.*)/index.html')
['a-light-in-the-attic_1000',
'tipping-the-velvet_999',
'soumission_998',
'sharp-objects_997',
...]
>>> filtering_function = lambda e: e.parent.tag == 'h3' and e.parent.parent.has_class('product_pod') # As above selector
>>> page.find('a', filtering_function).attrib['href'].re(r'catalogue/(.*)/index.html')
['a-light-in-the-attic_1000']
>>> page.find_by_text('Tipping the Velvet').attrib['href'].re(r'catalogue/(.*)/index.html')
['tipping-the-velvet_999']
See the TextHandler class for more details on regex methods.