python, web-scraping, scrapy, scrapy-pipeline

Pass file_name argument to pipeline for CSV export in Scrapy


I need Scrapy to take an argument (-a FILE_NAME="stuff") from the command line and apply it to the file created by my CsvWriterPipeline in the pipelines.py file. (The reason I went with pipelines.py was that the built-in exporter was repeating data and repeating the header in the output file. The same code, written as a pipeline, fixed it.)

I tried using from scrapy.utils.project import get_project_settings, as shown in

How to access scrapy settings from item Pipeline

but I couldn't change the file name from the command line.

I've also tried implementing @avaleske's solution from that page, since it specifically addresses this, but I don't know where in my Scrapy project to place the code he describes.

Help?

settings.py:

BOT_NAME = 'internal_links'

SPIDER_MODULES = ['internal_links.spiders']
NEWSPIDER_MODULE = 'internal_links.spiders'
CLOSESPIDER_PAGECOUNT = 100
ITEM_PIPELINES = ['internal_links.pipelines.CsvWriterPipeline']
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'internal_links (+http://www.mycompany.com)'
FILE_NAME = "mytestfilename"

pipelines.py:

import csv

class CsvWriterPipeline(object):

    def __init__(self, file_name):
        header = ["URL"]
        self.file_name = file_name
        self.csvwriter = csv.writer(open(self.file_name, 'wb'))
        self.csvwriter.writerow(header)


    def process_item(self, item, internallinkspider):
        # build your row to export, then export the row
        row = [item['url']]
        self.csvwriter.writerow(row)
        return item

spider.py:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from internal_links.items import MyItem



class MySpider(CrawlSpider):
    name = 'internallinkspider'
    allowed_domains = ['angieslist.com']
    start_urls = ['http://www.angieslist.com']

    rules = (Rule(SgmlLinkExtractor(), callback='parse_url', follow=True), )

    def parse_url(self, response):
        item = MyItem()
        item['url'] = response.url

        return item

Solution

  • You can pass the file name as a Scrapy setting with the -s command-line option:

    scrapy crawl internallinkspider -s FILE_NAME="stuff"
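
    The -s value takes precedence over the FILE_NAME defined in settings.py, so when the flag is omitted the pipeline falls back to "mytestfilename":

    scrapy crawl internallinkspider                          # writes to "mytestfilename"
    scrapy crawl internallinkspider -s FILE_NAME="stuff"     # writes to "stuff"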
    

    Then, in the pipeline:

    import csv
    
    class CsvWriterPipeline(object):
        @classmethod
        def from_crawler(cls, crawler):
            settings = crawler.settings
            file_name = settings.get("FILE_NAME")
            return cls(file_name)
    
        def __init__(self, file_name):
            header = ["URL"]
            self.csvwriter = csv.writer(open(file_name, 'wb'))
            self.csvwriter.writerow(header)
    
        def process_item(self, item, internallinkspider):
            # build your row to export, then export the row
            row = [item['url']]
            self.csvwriter.writerow(row)
            return item
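
    If you would rather keep the -a spider-argument form from your question, the pipeline can instead read the attribute that Scrapy sets on the spider instance when you run scrapy crawl internallinkspider -a FILE_NAME="stuff". Here is a minimal sketch of that variant; the 'output.csv' fallback and the open_spider/close_spider structure are only illustrative choices, not part of the answer above:

    import csv

    class CsvWriterPipeline(object):

        def open_spider(self, spider):
            # -a FILE_NAME="stuff" becomes spider.FILE_NAME; fall back to a
            # default (illustrative) name when the argument is omitted.
            file_name = getattr(spider, 'FILE_NAME', 'output.csv')
            self.file = open(file_name, 'wb')  # on Python 3, use open(file_name, 'w', newline='')
            self.csvwriter = csv.writer(self.file)
            self.csvwriter.writerow(["URL"])

        def process_item(self, item, spider):
            # write one row per scraped item
            self.csvwriter.writerow([item['url']])
            return item

        def close_spider(self, spider):
            # close the file once the crawl finishes
            self.file.close()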