Tags: python-3.x, ubuntu, scrapy, scrapy-pipeline

Scrapy not calling the assigned pipeline when run from a script


I have a piece of code to test Scrapy. My goal is to use Scrapy without having to call the scrapy command from the terminal, so that I can embed this code somewhere else.

The code is the following:

from scrapy import Spider
from scrapy.selector import Selector
from scrapy.item import Item, Field
from scrapy.crawler import CrawlerProcess
import json


class JsonWriterPipeline(object):

    file = None

    def open_spider(self, spider):
        self.file = open('items.json', 'wb')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item


class StackItem(Item):
    title = Field()
    url = Field()


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = ["http://stackoverflow.com/questions?pagesize=50&sort=newest"]

    def parse(self, response):

        questions = Selector(response).xpath('//div[@class="summary"]/h3')

        for question in questions:
            item = StackItem()
            item['title'] = question.xpath('a[@class="question-hyperlink"]/text()').extract()[0]
            item['url'] = question.xpath('a[@class="question-hyperlink"]/@href').extract()[0]

            yield item

if __name__ == '__main__':

    settings = dict()
    settings['USER_AGENT'] = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    settings['ITEM_PIPELINES'] = {'JsonWriterPipeline': 1}

    process = CrawlerProcess(settings=settings)

    spider = StackSpider()
    process.crawl(spider)
    process.start()

As you can see, the code is self-contained and I override two settings: USER_AGENT and ITEM_PIPELINES. However, when I set breakpoints in the JsonWriterPipeline class, I can see that the code runs but the breakpoints are never hit, so the custom pipeline is not being used.

How can this be fixed?


Solution

  • I get two errors when running your script with Scrapy 1.3.2 and Python 3.5.

    First:

    Unhandled error in Deferred:
    2017-02-21 13:47:23 [twisted] CRITICAL: Unhandled error in Deferred:
    
    2017-02-21 13:47:23 [twisted] CRITICAL: 
    Traceback (most recent call last):
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/utils/misc.py", line 39, in load_object
        dot = path.rindex('.')
    ValueError: substring not found
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/twisted/internet/defer.py", line 1301, in _inlineCallbacks
        result = g.send(result)
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/crawler.py", line 72, in crawl
        self.engine = self._create_engine()
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/crawler.py", line 97, in _create_engine
        return ExecutionEngine(self, lambda _: self.stop())
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/core/engine.py", line 70, in __init__
        self.scraper = Scraper(crawler)
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/core/scraper.py", line 71, in __init__
        self.itemproc = itemproc_cls.from_crawler(crawler)
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/middleware.py", line 58, in from_crawler
        return cls.from_settings(crawler.settings, crawler)
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/middleware.py", line 34, in from_settings
        mwcls = load_object(clspath)
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/scrapy/utils/misc.py", line 41, in load_object
        raise ValueError("Error loading object '%s': not a full path" % path)
    ValueError: Error loading object 'JsonWriterPipeline': not a full path
    

    You need to give the full import path for the pipeline class. Since the class is defined in the script itself, referencing it through the __main__ namespace works here:

    settings['ITEM_PIPELINES'] = {'__main__.JsonWriterPipeline': 1}
    
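    For context, here is a minimal sketch of the __main__ block with that fix applied (it also passes the spider class, rather than an instance, to process.crawl, which is the form shown in the Scrapy docs):

    if __name__ == '__main__':
        settings = {
            'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
            # full import path: the pipeline class lives in this script's __main__ module
            'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1},
        }

        process = CrawlerProcess(settings=settings)
        process.crawl(StackSpider)
        process.start()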

    Second (with the pipeline path fixed as above), you get loads of:

    2017-02-21 13:47:52 [scrapy.core.scraper] ERROR: Error processing {'title': 'Apply Remote Commits to a Local Pull Request',
     'url': '/questions/42367647/apply-remote-commits-to-a-local-pull-request'}
    Traceback (most recent call last):
      File "/home/paul/.virtualenvs/scrapy13.py3/lib/python3.5/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "test.py", line 20, in process_item
        self.file.write(line)
    TypeError: a bytes-like object is required, not 'str'
    

    which you can fix by encoding the JSON line to bytes before writing (the file was opened in binary mode, 'wb'):

        def process_item(self, item, spider):
            line = json.dumps(dict(item)) + "\n"
            self.file.write(line.encode('ascii'))
            return item
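    Alternatively (a sketch that is not from the original answer), you could open the file in text mode in open_spider, so that process_item can write the str returned by json.dumps directly:

    class JsonWriterPipeline(object):

        def open_spider(self, spider):
            # text mode: json.dumps() returns str, so no manual encoding is needed
            self.file = open('items.json', 'w', encoding='utf-8')

        def close_spider(self, spider):
            self.file.close()

        def process_item(self, item, spider):
            self.file.write(json.dumps(dict(item)) + "\n")
            return item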