Tags: python, xpath, scrapy, web-crawler, e-commerce

scrapy CrawlSpider does not follow links with restrict_xpaths


I am trying to use Scrapy's CrawlSpider to crawl products from an e-commerce website. For each link it encounters, the spider must do one of two things:

  1. If the link is a category, sub-category, or next-page link: the spider must simply follow it.
  2. If the link is a product page: the spider must call a special parsing method to extract the product data.

This is my spider's code:

from scrapy.spiders import CrawlSpider, Rule
from ecommerce.items import EcommerceItem
from scrapy.linkextractors import LinkExtractor


class ecommerce(CrawlSpider):
    name = "ecommerce"
    allowed_domains = ['HarveyNorman.com.au']
    start_urls = ['https://www.HarveyNorman.com.au/']

    rules = (
        # Category, sub-category and pagination links: follow only.
        Rule(
            LinkExtractor(restrict_xpaths=[
                "//*[@id='wrapper']/div[2]/div[1]/div/div/ul/li/ul/li/ul/li/ul/li/a",
                "//*[@id='content']/div[2]/div[1]/div/div[2]/div/div/div/div[2]/div/a",
                "//*[@id='toolbar-btm']/div/div[4]/div/ol/li[7]/a",
                "//*[@id='toolbar-btm']/div/div[4]/div/ol/li[6]/a"]),
            follow=True
        ),
        # Product page links: parse with the dedicated callback.
        Rule(
            LinkExtractor(restrict_xpaths="//*[@id='category-grid']/div/div/div[3]/a"),
            callback='parse_main_item'
        ),
    )

    def parse_main_item(self, response):
        # Placeholder for now; product fields will be extracted here later.
        item = EcommerceItem()
        return item
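
EcommerceItem comes from ecommerce/items.py, which is not shown here; a minimal definition might look like this (the field names are hypothetical):

import scrapy

class EcommerceItem(scrapy.Item):
    # Hypothetical fields; the real items.py is not shown in the question.
    name = scrapy.Field()
    price = scrapy.Field()
    url = scrapy.Field()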

To run the spider and save the results in a CSV file, I execute the command:

scrapy crawl ecommerce -t csv -o ec.csv
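
As the deprecation warning in the output below notes, the -t option is deprecated; the output format can instead be inferred from the output file's extension, so the equivalent command is simply:

scrapy crawl ecommerce -o ec.csv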

My spider stops at the start URL and does not follow any links. This is its output:

ScrapyDeprecationWarning: ('The -t command line option is deprecated in favor of specifying the output format within the output URI. See the documentation of the -o and -O options for more information.',)
  feeds = feed_process_params_from_cli(
2021-02-26 21:55:53 [scrapy.utils.log] INFO: Scrapy 2.4.1 started (bot: ecommerce)
2021-02-26 21:55:53 [scrapy.utils.log] INFO: Versions: lxml 4.6.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.5 (default, Jan 27 2021, 15:41:15) - [GCC 9.3.0], pyOpenSSL 20.0.1 (OpenSSL 1.1.1f  31 Mar 2020), cryptography 2.8, Platform Linux-5.8.0-43-generic-x86_64-with-glibc2.29
2021-02-26 21:55:53 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2021-02-26 21:55:53 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'ecommerce',
 'DOWNLOAD_DELAY': 0.25,
 'NEWSPIDER_MODULE': 'ecommerce.spiders',
 'SPIDER_MODULES': ['ecommerce.spiders']}
2021-02-26 21:55:53 [scrapy.extensions.telnet] INFO: Telnet Password: 5dccfc3692d38bc5
2021-02-26 21:55:54 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2021-02-26 21:55:54 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-02-26 21:55:54 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-02-26 21:55:54 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-02-26 21:55:54 [scrapy.core.engine] INFO: Spider opened
2021-02-26 21:55:54 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-02-26 21:55:54 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-02-26 21:55:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.HarveyNorman.com.au/> (referer: None)
2021-02-26 21:55:57 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.harveynorman.com.au': <GET https://www.harveynorman.com.au/computers-tablets/computers/laptops>
2021-02-26 21:55:57 [scrapy.core.engine] INFO: Closing spider (finished)
2021-02-26 21:55:57 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 223,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 72967,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 3.584481,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 2, 26, 20, 55, 57, 930058),
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'memusage/max': 54886400,
 'memusage/startup': 54886400,
 'offsite/domains': 1,
 'offsite/filtered': 577,
 'request_depth_max': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2021, 2, 26, 20, 55, 54, 345577)}
2021-02-26 21:55:57 [scrapy.core.engine] INFO: Spider closed (finished)

Any solution?


Solution

  • I found the problem. The culprit is this output line:

    [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.harveynorman.com.au': <GET https://www.harveynorman.com.au/computers-tablets/computers/laptops>
    

    This line shows that the request is being filtered as offsite even though it points to the same site. The reason is that allowed_domains matching is case-sensitive: request hostnames are normalized to lowercase, so my mixed-case entry never matched. So I just replaced:

    allowed_domains = ['HarveyNorman.com.au']
    

    by:

    allowed_domains = ['harveynorman.com.au']
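
    To see why the mixed-case entry fails, here is a minimal sketch that mirrors (approximately) how Scrapy's OffsiteMiddleware builds its host filter: a regex over the allowed_domains entries, compiled without re.IGNORECASE, matched against the request hostname, which is already lowercase:

    import re

    def host_regex(allowed_domains):
        # Approximation of OffsiteMiddleware's filter: note no re.IGNORECASE.
        return re.compile(r'^(.*\.)?(%s)$' % '|'.join(re.escape(d) for d in allowed_domains))

    host = 'www.harveynorman.com.au'  # request hostnames are lowercase

    print(bool(host_regex(['HarveyNorman.com.au']).search(host)))  # False -> filtered as offsite
    print(bool(host_regex(['harveynorman.com.au']).search(host)))  # True  -> request allowed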