Tags: python, web-scraping, scrapy, scrapy-pipeline

Name 'MyItemName' is not defined - Scrapy Item name


Hello guys,

I'm trying to get data from a website. I've already done a few projects with Scrapy, but I don't know how to fix this NameError...

My spider: crawlingVacature.py

import scrapy
from scrapy.http.request import Request
from scrapy import Spider

from crawlVacature.items import CrawlvacatureItem


class CrawlingvacatureSpider(scrapy.Spider):
    name = 'crawlingVacature'
    allowed_domains = ['vacature.com']
    start_urls = ['https://www.vacature.com/nl-be/jobs/zoeken/BI/1']

    def parse(self,response):
        all_links = response.xpath('//div[@class="search-vacancies__prerendered-results"]/a/@href').extract()
        for link in all_links:
            yield Request('https://www.vacature.com/' + link, callback=self.parseAnnonce)

    def parseAnnonce(self,response):
         item = CrawlvacatureItem()
         item[titre] = response.css('h1::text').extract()
         item[corpus] = response.xpath('//div[@class="wrapper__content"]/section').css("div")[-1].xpath('//dl/dd/a/text()').extract()
         yield item

My item file: items.py

import scrapy


class CrawlvacatureItem(scrapy.Item):
    titre = scrapy.Field()
    corpus = scrapy.Field()

My pipeline file: pipelines.py

import json

class JsonWriterPipeline(object):

    def open_spider(self, spider):
        self.file = open('items.js', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item

And of course, I have the following in my settings.py file:

ITEM_PIPELINES = {
    'crawlVacature.pipelines.JsonWriterPipeline': 800,
}

And I run my project with this command:

scrapy crawl crawlingVacature

And the error I get is:

NameError: name 'titre' is not defined

or

NameError: name 'corpus' is not defined

Thanks in advance for your help :-)


Solution

  • To define a common output data format, Scrapy provides the Item class. Item objects are simple containers used to collect the scraped data. They provide a dictionary-like API with a convenient syntax for declaring their available fields.

    You should use strings as keys, instead of variable names:

    def parseAnnonce(self, response):
        item = CrawlvacatureItem()
        item['titre'] = response.css('h1::text').extract()
        item['corpus'] = response.xpath('//div[@class="wrapper__content"]/section').css("div")[-1].xpath('//dl/dd/a/text()').extract()
        yield item