python, json, scrapy, export, scrapy-pipeline

.json export formatting in Scrapy


Just a quick question about json export formatting in Scrapy. My exported file looks like this.

{"pages": {"title": "x", "text": "x", "tags": "x", "url": "x"}}
{"pages": {"title": "x", "text": "x", "tags": "x", "url": "x"}}
{"pages": {"title": "x", "text": "x", "tags": "x", "url": "x"}}

But I would like it to be in this exact format. Somehow I need to get all the other information under "pages".

{"pages": [
     {"title": "x", "text": "x", "tags": "x", "url": "x"},
     {"title": "x", "text": "x", "tags": "x", "url": "x"},
     {"title": "x", "text": "x", "tags": "x", "url": "x"}
]}

I'm not very experienced with Scrapy or Python, but I have gotten everything else in my spider done except the export format. This is my pipelines.py, which I just got working.

from scrapy.exporters import JsonItemExporter  # imported but currently unused
import json

class RautahakuPipeline(object):

    def open_spider(self, spider):
        self.file = open('items.json', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # Writes each item as its own JSON object on its own line
        # (JSON Lines), which is why every line repeats the "pages" key
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
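
As a side note, the JsonItemExporter imported above is never actually used, yet it is built for exactly this kind of job: it writes all items out as one JSON array. A minimal sketch of a pipeline around it (the class name JsonArrayPipeline is mine, not from the original code):

from scrapy.exporters import JsonItemExporter

class JsonArrayPipeline(object):

    def open_spider(self, spider):
        # Scrapy exporters write bytes, so the file must be opened in binary mode
        self.file = open('items.json', 'wb')
        self.exporter = JsonItemExporter(self.file)
        self.exporter.start_exporting()  # writes the opening "["

    def close_spider(self, spider):
        self.exporter.finish_exporting()  # writes the closing "]"
        self.file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

This would produce [{...}, {...}, ...] rather than {"pages": [...]}, so it solves the one-array problem but not the top-level key.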

This is how I build the items I need to extract in my spider.py:

        items = []
        for title, text, tags, url in zip(product_title, product_text, product_tags, product_url):
            item = TechbbsItem()
            item['pages'] = {}
            item['pages']['title'] = title
            item['pages']['text'] = text
            item['pages']['tags'] = tags
            item['pages']['url'] = url
            items.append(item)
        return items

Any help is greatly appreciated, as this is the last obstacle in my project.

EDIT

items = {'pages': [{'title': title, 'text': text, 'tags': tags, 'url': url}
                   for title, text, tags, url
                   in zip(product_title, product_text, product_tags, product_url)]}

This exports the .json in this format:

{"pages": [{"title": "x", "text": "x", "tags": "x", "url": "x"}]} {"pages": [{"title": "x", "text": "x", "tags": "x", "url": "x"}]} {"pages": [{"title": "x", "text": "x", "tags": "x", "url": "x"}]}

This is getting better, but I still need just one "pages" at the start of the file, with everything else inside a single array under it.
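
One stopgap at this point (a sketch, not from the original thread; items_merged.json is just a name picked for illustration) would be to merge the one-record-per-line file after the crawl finishes:

import json

# Each input line looks like {"pages": [{...}]}; collect all of them into one list
pages = []
with open('items.json') as f:
    for line in f:
        record = json.loads(line)
        pages.extend(record['pages'])

# Write the single {"pages": [...]} document the question asks for
with open('items_merged.json', 'w') as f:
    json.dump({'pages': pages}, f, indent=4)

The cleaner fix, though, is in the pipeline itself, as the solution below shows.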

EDIT 2

I think my spider.py is the reason "pages" gets added to every line in the .json file, and I should have posted its whole code originally. Here it is.

# -*- coding: utf-8 -*-
import scrapy
from urllib.parse import urljoin

class TechbbsItem(scrapy.Item):
    pages = scrapy.Field()
    title = scrapy.Field()
    text = scrapy.Field()
    tags = scrapy.Field()
    url = scrapy.Field()

class TechbbsSpider(scrapy.Spider):
    name = 'techbbs'
    allowed_domains = ['bbs.io-tech.fi']
    start_urls = ['https://bbs.io-tech.fi/forums/prosessorit-emolevyt-ja-muistit.73/?prefix_id=1'] #This is a list page full of used pc-part listings
    def parse(self, response): #This visits product links in the product list page
        links = response.css('a.PreviewTooltip::attr(href)').extract()
        for l in links:
            url = response.urljoin(l)
            yield scrapy.Request(url, callback=self.parse_product)
        next_page_url = response.xpath('//a[contains(.,"Seuraava ")]/@href').extract_first()
        if next_page_url:
            next_page_url = response.urljoin(next_page_url)
            yield scrapy.Request(url=next_page_url, callback=self.parse)

    def parse_product(self, response): #This extracts data from inside the links
        product_title = response.xpath('normalize-space(//h1/span/following-sibling::text())').extract()
        product_text = response.xpath('//b[contains(.,"Hinta:")]/following-sibling::text()[1]').re('([0-9]+)')
        tags = "tags" #This is just a placeholder
        product_tags = tags
        product_url = response.xpath('//html/head/link[7]/@href').extract()

        items = []
        for title, text, tags, url in zip(product_title, product_text, product_tags, product_url):
            item = TechbbsItem()
            item['pages'] = {}
            item['pages']['title'] = title
            item['pages']['text'] = text
            item['pages']['tags'] = tags
            item['pages']['url'] = url
            items.append(item)
        return items

So my spider starts crawling from a page full of product listings. It visits each of the 50 product links on the page and scrapes four fields: title, text, tags and url. After scraping every link on one page it moves to the next, and so on. I suspect the loops in the code prevent your suggestions from working for me.

I would like to get the .json export in the exact form mentioned in the original question. So there would be {"pages": [ at the beginning of the file, then all the indented item lines {"title": "x", "text": "x", "tags": "x", "url": "x"}, and ]} at the end.
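
(Side note, not from the original exchange: Scrapy's built-in feed export, e.g. scrapy crawl techbbs -o items.json, would already write a single JSON array on its own; a custom pipeline is only needed here because of the extra top-level "pages" key.)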


Solution

  • One option is to keep a single object in memory and write it out at the end of the process. This is not good practice in terms of memory usage, but it works:

    import json

    class RautahakuPipeline(object):

        def open_spider(self, spider):
            # Collect everything in memory; the file is only opened at the end
            self.items = {"pages": []}

        def close_spider(self, spider):
            self.file = open('items.json', 'w')
            self.file.write(json.dumps(self.items))
            self.file.close()

        def process_item(self, item, spider):
            self.items["pages"].append(dict(item))
            return item
    
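    If you also want the pretty-printed, indented layout from the question, json.dumps accepts an indent argument: json.dumps(self.items, indent=4).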

    Then, if memory is an issue (it must be treated with care in any case), try streaming the json to the file as follows:

    import json

    class RautahakuPipeline(object):

        def open_spider(self, spider):
            self.file = open('items.json', 'w')
            self.file.write('{"pages": [\n')
            self.first_item = True  # tracks whether a comma is needed yet

        def close_spider(self, spider):
            self.file.write('\n]}')
            self.file.close()

        def process_item(self, item, spider):
            # JSON requires commas between array elements, so write one
            # before every item except the first
            if not self.first_item:
                self.file.write(',\n')
            self.first_item = False
            self.file.write(json.dumps(dict(item)))
            return item
    
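    Either way, a quick sanity check (just a sketch, nothing Scrapy-specific) is to load the finished file back and confirm it parses as a single JSON document:

    import json

    # items.json is the file written by the pipeline above
    with open('items.json') as f:
        data = json.load(f)
    print(len(data['pages']), 'pages exported')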

    I hope it helps.