Spiders
Spiders are classes which define how a certain site (or a group of sites) will be scraped, including how to perform the crawl (i.e. follow links) and how to extract structured data from their pages (i.e. scraping items). In other words, Spiders are the place where you define the custom behaviour for crawling and parsing pages for a particular site (or, in some cases, a group of sites).
For spiders, the scraping cycle goes through something like this:
1. You start by generating the initial requests to crawl the first URLs, and specify a callback function to be called with the response downloaded from those requests. The first requests to perform are obtained by iterating the start() method, which by default yields a Request object for each URL in the start_urls spider attribute, with the parse method set as the callback function to handle each Response.
2. In the callback function, you parse the response (web page) and return item objects, Request objects, or an iterable of these objects. Those Requests will also contain a callback (maybe the same) and will then be downloaded by Scrapy and their responses handled by the specified callback.
3. In callback functions, you parse the page contents, typically using Selectors (but you can also use BeautifulSoup, lxml or whatever mechanism you prefer) and generate items with the parsed data.
4. Finally, the items returned from the spider will typically be persisted to a database (in some Item Pipeline) or written to a file using Feed exports.
Even though this cycle applies (more or less) to any kind of spider, there are different kinds of default spiders bundled into Scrapy for different purposes. We will talk about those types here.
scrapy.Spider
- class scrapy.spiders.Spider
Let’s see an example:
import scrapy


class MySpider(scrapy.Spider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/1.html",
        "http://www.example.com/2.html",
        "http://www.example.com/3.html",
    ]

    def parse(self, response):
        self.logger.info("A response from %s just arrived!", response.url)
Return multiple Requests and items from a single callback:
import scrapy


class MySpider(scrapy.Spider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/1.html",
        "http://www.example.com/2.html",
        "http://www.example.com/3.html",
    ]

    def parse(self, response):
        for h3 in response.xpath("//h3").getall():
            yield {"title": h3}

        for href in response.xpath("//a/@href").getall():
            yield scrapy.Request(response.urljoin(href), self.parse)
Instead of start_urls you can use start()
directly; to give data more structure you can use Item
objects:
import scrapy
from myproject.items import MyItem


class MySpider(scrapy.Spider):
    name = "example.com"
    allowed_domains = ["example.com"]

    async def start(self):
        yield scrapy.Request("http://www.example.com/1.html", self.parse)
        yield scrapy.Request("http://www.example.com/2.html", self.parse)
        yield scrapy.Request("http://www.example.com/3.html", self.parse)

    def parse(self, response):
        for h3 in response.xpath("//h3").getall():
            yield MyItem(title=h3)

        for href in response.xpath("//a/@href").getall():
            yield scrapy.Request(response.urljoin(href), self.parse)
Spider arguments
Spiders can receive arguments that modify their behaviour. Some common uses for spider arguments are to define the start URLs or to restrict the crawl to certain sections of the site, but they can be used to configure any functionality of the spider.
Spider arguments are passed through the crawl command using the
-a option. For example:
scrapy crawl myspider -a category=electronics
Spiders can access arguments in their __init__ methods:
import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    def __init__(self, category=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.start_urls = [f"http://www.example.com/categories/{category}"]
        # ...
The default __init__ method will take any spider arguments and copy them to the spider as attributes. The above example can also be written as follows:
import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    async def start(self):
        yield scrapy.Request(f"http://www.example.com/categories/{self.category}")
If you are running Scrapy from a script, you can
specify spider arguments when calling
CrawlerProcess.crawl or
CrawlerRunner.crawl:
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
process.crawl(MySpider, category="electronics")
Keep in mind that spider arguments are only strings.
The spider will not do any parsing on its own.
If you were to set the start_urls attribute from the command line,
you would have to parse it on your own into a list
using something like ast.literal_eval() or json.loads()
and then set it as an attribute.
Otherwise, you would cause iteration over a start_urls string
(a very common Python pitfall),
resulting in each character being seen as a separate URL.
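For instance, a minimal sketch of that approach, assuming the spider accepts a hypothetical urls argument containing a JSON list:

import json

import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"

    def __init__(self, urls=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # e.g. scrapy crawl myspider -a urls='["http://www.example.com/1.html"]'
        self.start_urls = json.loads(urls) if urls else []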
A valid use case is to set the http auth credentials
used by HttpAuthMiddleware
or the user agent
used by UserAgentMiddleware:
scrapy crawl myspider -a http_user=myuser -a http_pass=mypassword -a user_agent=mybot
Spider arguments can also be passed through the Scrapyd schedule.json API.
See Scrapyd documentation.
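For instance, assuming a Scrapyd instance on its default port and a project named myproject, spider arguments go in as extra POST parameters:

curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider -d category=electronics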
Start requests
Start requests are Request objects yielded from the
start() method of a spider or from the
process_start() method of a
spider middleware.
See also: Start request order.
Delaying start request iteration
You can override the start() method as follows to pause
its iteration whenever there are scheduled requests:
async def start(self):
    # "signals" below refers to scrapy.signals (from scrapy import signals)
    async for item_or_request in super().start():
        if self.crawler.engine.needs_backout():
            await self.crawler.signals.wait_for(signals.scheduler_empty)
        yield item_or_request
This can help minimize the number of requests in the scheduler at any given
time, to minimize resource usage (memory or disk, depending on
JOBDIR).
Generic Spiders
Scrapy comes with some useful generic spiders that you can use to subclass your spiders from. Their aim is to provide convenient functionality for a few common scraping cases, like following all links on a site based on certain rules, crawling from Sitemaps, or parsing an XML/CSV feed.
For the examples used in the following spiders, we’ll assume you have a project
with a TestItem declared in a myproject.items module:
import scrapy


class TestItem(scrapy.Item):
    id = scrapy.Field()
    name = scrapy.Field()
    description = scrapy.Field()
CrawlSpider
- class scrapy.spiders.CrawlSpider
This is the most commonly used spider for crawling regular websites, as it provides a convenient mechanism for following links by defining a set of rules. It may not be the best suited for your particular web sites or project, but it’s generic enough for several cases, so you can start from it and override it as needed for more custom functionality, or just implement your own spider.
Apart from the attributes inherited from Spider (that you must specify), this class supports a new attribute:
- rules
Which is a list of one (or more) Rule objects. Each Rule defines a certain behaviour for crawling the site. Rule objects are described below. If multiple rules match the same link, the first one will be used, according to the order they're defined in this attribute.
This spider also exposes an overridable method:
- parse_start_url(response, **kwargs)
This method is called for each response produced for the URLs in the spider's start_urls attribute. It allows you to parse the initial responses and must return either an item object, a Request object, or an iterable containing any of them.
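For instance, a minimal sketch (inside your CrawlSpider subclass; the extracted fields are illustrative):

def parse_start_url(self, response, **kwargs):
    # treat the start page itself as a page worth scraping
    yield {
        "url": response.url,
        "title": response.xpath("//title/text()").get(),
    }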
Crawling rules
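A Rule pairs a LinkExtractor with an optional callback and related options. As a hedged sketch (the URL patterns are illustrative; confirm any further keyword arguments against your Scrapy version):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

rules = (
    # no callback: the matched pages are only used to find more links
    # (follow defaults to True in that case)
    Rule(LinkExtractor(allow=(r"category\.php",))),
    # pages matching item.php are handed to the spider's parse_item method
    # and their links are not followed further
    Rule(LinkExtractor(allow=(r"item\.php",)), callback="parse_item", follow=False),
)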
CrawlSpider example
Let’s now take a look at an example CrawlSpider with rules:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com"]

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(LinkExtractor(allow=(r"category\.php",), deny=(r"subsection\.php",))),
        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(LinkExtractor(allow=(r"item\.php",)), callback="parse_item"),
    )

    def parse_item(self, response):
        self.logger.info("Hi, this is an item page! %s", response.url)
        item = {}  # a plain dict works as an item; a bare scrapy.Item() has no fields
        item["id"] = response.xpath('//td[@id="item_id"]/text()').re(r"ID: (\d+)")
        item["name"] = response.xpath('//td[@id="item_name"]/text()').get()
        item["description"] = response.xpath('//td[@id="item_description"]/text()').get()
        item["link_text"] = response.meta["link_text"]
        url = response.xpath('//td[@id="additional_data"]/@href').get()
        return response.follow(
            url, self.parse_additional_page, cb_kwargs=dict(item=item)
        )

    def parse_additional_page(self, response, item):
        item["additional_data"] = response.xpath(
            '//p[@id="additional_data"]/text()'
        ).get()
        return item
This spider would start crawling example.com’s home page, collecting category
links, and item links, parsing the latter with the parse_item method. For
each item response, some data will be extracted from the HTML using XPath, and
an Item will be filled with it.
XMLFeedSpider
- class scrapy.spiders.XMLFeedSpider
XMLFeedSpider is designed for parsing XML feeds by iterating through them by a certain node name. The iterator can be chosen from:
iternodes, xml, and html. It's recommended to use the iternodes iterator for performance reasons, since the xml and html iterators generate the whole DOM at once in order to parse it. However, using html as the iterator may be useful when parsing XML with bad markup.

To set the iterator and the tag name, you must define the following class attributes:
- iterator
A string which defines the iterator to use. It can be either:
- 'iternodes' - a fast iterator based on regular expressions
- 'html' - an iterator which uses Selector. Keep in mind this uses DOM parsing and must load all DOM in memory which could be a problem for big feeds
- 'xml' - an iterator which uses Selector. Keep in mind this uses DOM parsing and must load all DOM in memory which could be a problem for big feeds

It defaults to 'iternodes'.
- itertag
A string with the name of the node (or element) to iterate in. Example:
itertag = 'product'
- namespaces
A list of (prefix, uri) tuples which define the namespaces available in that document that will be processed with this spider. The prefix and uri will be used to automatically register namespaces using the register_namespace() method.

You can then specify nodes with namespaces in the itertag attribute.

Example:

class YourSpider(XMLFeedSpider):
    namespaces = [('n', 'http://www.sitemaps.org/schemas/sitemap/0.9')]
    itertag = 'n:url'
    # ...
Apart from these new attributes, this spider has the following overridable methods too:
- adapt_response(response)
A method that receives the response as soon as it arrives from the spider middleware, before the spider starts parsing it. It can be used to modify the response body before parsing it. This method receives a response and also returns a response (it could be the same or another one).
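For instance, a minimal sketch (inside your XMLFeedSpider subclass; the BOM stripping is illustrative, not built-in behaviour):

def adapt_response(self, response):
    # drop a leading UTF-8 BOM, if any, so the chosen iterator sees clean XML
    return response.replace(body=response.body.lstrip(b"\xef\xbb\xbf"))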
- parse_node(response, selector)
This method is called for the nodes matching the provided tag name (itertag). Receives the response and a Selector for each node. Overriding this method is mandatory; otherwise, your spider won't work. This method must return an item object, a Request object, or an iterable containing any of them.
- process_results(response, results)
This method is called for each result (item or request) returned by the spider, and it's intended to perform any last-minute processing required before returning the results to the framework core, for example setting the item IDs. It receives a list of results and the response which originated those results. It must return a list of results (items or requests).
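For instance, a minimal sketch (inside your XMLFeedSpider subclass; it assumes parse_node yields dict items, and the feed_url key is illustrative):

def process_results(self, response, results):
    # attach the feed URL to every extracted item before it leaves the spider
    for result in results:
        if isinstance(result, dict):
            result["feed_url"] = response.url
    return results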
Warning
Because of its internal implementation, you must explicitly set callbacks for new requests when writing
XMLFeedSpider-based spiders; unexpected behaviour can occur otherwise.
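For instance, a follow-up request yielded from parse_node should name its callback explicitly (a minimal sketch; the link element and parse_detail method are illustrative):

def parse_node(self, response, node):
    detail_url = node.xpath("link/text()").get()
    if detail_url:
        # pass the callback explicitly instead of relying on a default
        yield response.follow(detail_url, callback=self.parse_detail)

def parse_detail(self, response):
    self.logger.info("Detail page: %s", response.url)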
XMLFeedSpider example
These spiders are pretty easy to use; let's have a look at one example:
from scrapy.spiders import XMLFeedSpider
from myproject.items import TestItem


class MySpider(XMLFeedSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/feed.xml"]
    iterator = "iternodes"  # This is actually unnecessary, since it's the default value
    itertag = "item"

    def parse_node(self, response, node):
        self.logger.info(
            "Hi, this is a <%s> node!: %s", self.itertag, "".join(node.getall())
        )

        item = TestItem()
        item["id"] = node.xpath("@id").get()
        item["name"] = node.xpath("name").get()
        item["description"] = node.xpath("description").get()
        return item
Basically what we did up there was to create a spider that downloads a feed from
the given start_urls, and then iterates through each of its item tags,
prints them out, and stores some random data in an Item.
CSVFeedSpider
- class scrapy.spiders.CSVFeedSpider
This spider is very similar to the XMLFeedSpider, except that it iterates over rows, instead of nodes. The method that gets called in each iteration is parse_row().
- delimiter
A string with the separator character for each field in the CSV file. Defaults to ',' (comma).
- quotechar
A string with the enclosure character for each field in the CSV file. Defaults to '"' (quotation mark).
- headers
A list of the column names in the CSV file.
- parse_row(response, row)
Receives a response and a dict (representing each row) with a key for each provided (or detected) header of the CSV file. This spider also gives the opportunity to override the adapt_response and process_results methods for pre- and post-processing purposes.
CSVFeedSpider example
Let’s see an example similar to the previous one, but using a
CSVFeedSpider:
from scrapy.spiders import CSVFeedSpider
from myproject.items import TestItem


class MySpider(CSVFeedSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/feed.csv"]
    delimiter = ";"
    quotechar = "'"
    headers = ["id", "name", "description"]

    def parse_row(self, response, row):
        self.logger.info("Hi, this is a row!: %r", row)

        item = TestItem()
        item["id"] = row["id"]
        item["name"] = row["name"]
        item["description"] = row["description"]
        return item
SitemapSpider
- class scrapy.spiders.SitemapSpider
SitemapSpider allows you to crawl a site by discovering the URLs using Sitemaps.
It supports nested sitemaps and discovering sitemap urls from robots.txt.
- sitemap_urls
A list of urls pointing to the sitemaps whose urls you want to crawl.
You can also point to a robots.txt and it will be parsed to extract sitemap urls from it.
- sitemap_rules
A list of tuples
(regex, callback)where:regexis a regular expression to match urls extracted from sitemaps.regexcan be either a str or a compiled regex object.callback is the callback to use for processing the urls that match the regular expression.
callbackcan be a string (indicating the name of a spider method) or a callable.
For example:
sitemap_rules = [('/product/', 'parse_product')]
Rules are applied in order, and only the first one that matches will be used.
If you omit this attribute, all urls found in sitemaps will be processed with the parse callback.
- sitemap_follow
A list of regexes of sitemaps that should be followed. This is only for sites that use Sitemap index files that point to other sitemap files.
By default, all sitemaps are followed.
- sitemap_alternate_links
Specifies if alternate links for one url should be followed. These are links for the same website in another language passed within the same url block.

For example:

<url>
    <loc>http://example.com/</loc>
    <xhtml:link rel="alternate" hreflang="de" href="http://example.com/de"/>
</url>

With sitemap_alternate_links set, this would retrieve both URLs. With sitemap_alternate_links disabled, only http://example.com/ would be retrieved.

Default is sitemap_alternate_links disabled.
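For instance, a minimal sketch of a spider that also wants the alternate-language URLs (the spider name is illustrative):

from scrapy.spiders import SitemapSpider


class AlternatesSpider(SitemapSpider):
    name = "alternates"
    sitemap_urls = ["http://example.com/sitemap.xml"]
    sitemap_alternate_links = True

    def parse(self, response):
        pass  # ... scrape both the original URL and its alternates here ...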
- sitemap_filter(entries)
This is a filter function that could be overridden to select sitemap entries based on their attributes.
For example:
<url>
    <loc>http://example.com/</loc>
    <lastmod>2005-01-01</lastmod>
</url>
We can define a sitemap_filter function to filter entries by date:

from datetime import datetime

from scrapy.spiders import SitemapSpider


class FilteredSitemapSpider(SitemapSpider):
    name = "filtered_sitemap_spider"
    allowed_domains = ["example.com"]
    sitemap_urls = ["http://example.com/sitemap.xml"]

    def sitemap_filter(self, entries):
        for entry in entries:
            date_time = datetime.strptime(entry["lastmod"], "%Y-%m-%d")
            if date_time.year >= 2005:
                yield entry
This would retrieve only entries modified in 2005 and the following years.

Entries are dict objects extracted from the sitemap document. Usually, the key is the tag name and the value is the text inside it.
It’s important to notice that:
- as the loc attribute is required, entries without this tag are discarded
- alternate links are stored in a list with the key alternate (see sitemap_alternate_links)
- namespaces are removed, so lxml tags named as {namespace}tagname become only tagname
If you omit this method, all entries found in sitemaps will be processed, observing other attributes and their settings.
SitemapSpider examples
Simplest example: process all urls discovered through sitemaps using the
parse callback:
from scrapy.spiders import SitemapSpider


class MySpider(SitemapSpider):
    sitemap_urls = ["http://www.example.com/sitemap.xml"]

    def parse(self, response):
        pass  # ... scrape item here ...
Process some urls with certain callback and other urls with a different callback:
from scrapy.spiders import SitemapSpider


class MySpider(SitemapSpider):
    sitemap_urls = ["http://www.example.com/sitemap.xml"]
    sitemap_rules = [
        ("/product/", "parse_product"),
        ("/category/", "parse_category"),
    ]

    def parse_product(self, response):
        pass  # ... scrape product ...

    def parse_category(self, response):
        pass  # ... scrape category ...
Follow sitemaps defined in the robots.txt file and only follow sitemaps
whose url contains /sitemap_shop:
from scrapy.spiders import SitemapSpider


class MySpider(SitemapSpider):
    sitemap_urls = ["http://www.example.com/robots.txt"]
    sitemap_rules = [
        ("/shop/", "parse_shop"),
    ]
    sitemap_follow = ["/sitemap_shops"]

    def parse_shop(self, response):
        pass  # ... scrape shop here ...
Combine SitemapSpider with other sources of urls:
from scrapy import Request
from scrapy.spiders import SitemapSpider


class MySpider(SitemapSpider):
    sitemap_urls = ["http://www.example.com/robots.txt"]
    sitemap_rules = [
        ("/shop/", "parse_shop"),
    ]
    other_urls = ["http://www.example.com/about"]

    async def start(self):
        async for item_or_request in super().start():
            yield item_or_request
        for url in self.other_urls:
            yield Request(url, self.parse_other)

    def parse_shop(self, response):
        pass  # ... scrape shop here ...

    def parse_other(self, response):
        pass  # ... scrape other here ...