Tags: python, xpath, web-scraping, scrapy, href

How to follow links in Scrapy if there is no href?


I am trying to follow links in Scrapy after I have already parsed one page and extracted information from it. The problem is that the webpage has no href attribute on the pagination element, so I can't just follow it with ease. I have managed to expand my XPath query with @data-param and finally got something: page=2.
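For reference, a minimal sketch of that extraction as it runs inside a Scrapy callback (the XPath is the same one used in the code below):

def parse(self, response):
    # @data-param holds only the query fragment, e.g. "page=2" -- there is no href
    next_page_param = response.xpath(
        "//li[@class='next']/button[@class='nBtn link xiti']/@data-param"
    ).extract_first()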

The problem is that I am not sure how to follow this link, since I want to pass listName["listLinkMaker"] to my URL generator/composer.

Should I define another method, say def parse_pagination, and use it to follow these links?
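To illustrate what I mean by composing the URL, roughly this (base_link and next_page_param are placeholder names, not real fields):

# Illustrative only: stitch the stored base link to the extracted fragment
base_link = "https://popusti.njuskalo.hr/trgovina/Interspar?"  # from the first parse
next_page_param = "page=2"                                     # from @data-param
absolute_url = base_link + next_page_param
# -> https://popusti.njuskalo.hr/trgovina/Interspar?page=2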

JSON used in code is really simple:

[
{"storeName": "Interspar", "storeLinkMaker": "https://popusti.njuskalo.hr/trgovina/Interspar"}
]

Code below:

# -*- coding: utf-8 -*-
import scrapy
import json


class LoclocatorTestSpider(scrapy.Spider):
    name = "loclocator_test"
    start_urls = []

    # Build start_urls from the JSON file at class-definition time
    with open("test_one_url.json", encoding="utf-8") as json_file:
        data = json.load(json_file)
        for store in data:
            storeName = store["storeName"]
            storeLinkUrl = store["storeLinkMaker"]
            start_urls.append(storeLinkUrl)

    def parse(self, response):
        selector = "//div[@class='mainContentWrapInner cf']"

        store_name_selector = ".//h1[@class='title']/text()"
        store_branches_selector = ".//li/a[@class='xiti']/@href"

        for basic_info in response.xpath(selector):
            store_branches = {}

            store_branches["storeName"] = basic_info.xpath(store_name_selector).extract_first()
            # This specific XPath extracts 1st part of link needed to crawl all of store branches
            store_branches["storeBranchesLink"] = basic_info.xpath(store_branches_selector).extract_first() + "?"

            store_branches_url = basic_info.xpath(store_branches_selector).extract_first()
            yield response.follow(store_branches_url, self.parse_pagination, meta={"store_branches": store_branches})


    def parse_branches(self, response):
        store_branches_name_selector = "//li[@class='xiti']"
        store_branches = response.meta["store_branches"]

        for store_branch in response.xpath(store_branches_name_selector):
            store_branches["storeBranchName"] = store_branch.xpath(".//span[@class='title']/text()").extract_first()

            yield store_branches

        # This specific XPath extracts the 2nd part of the link needed to crawl all store branches
        # URL should look like: https://popusti.njuskalo.hr/trgovina/Interspar?page=n where n>0
        links = response.selector.xpath("//li[@class='next']/button[@class='nBtn link xiti']/@data-param").extract()
        for link in links:
            absolute_url = ...  # TODO: LIST FROM FIRST PARSE (ie. store_branches["storeBranchesLink"]) + link
            yield scrapy.Request(absolute_url, callback=self.parse_branches)
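As an aside, the same JSON could be read in start_requests instead of the class body; a minimal sketch of that variant, assuming the same file and field names as above:

import json
import scrapy

class LoclocatorTestSpider(scrapy.Spider):
    name = "loclocator_test"

    def start_requests(self):
        # Load the store list when the crawl starts rather than at import time
        with open("test_one_url.json", encoding="utf-8") as json_file:
            for store in json.load(json_file):
                yield scrapy.Request(store["storeLinkMaker"], callback=self.parse)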

Thank you.


Solution

  • I managed to find the solution myself, and it turns out I was relatively close.

    The part in question now reads:

        # This specific XPath extracts the 2nd part of the link needed to crawl all store branches
        # URL should look like: https://popusti.njuskalo.hr/trgovina/Interspar?page=n where n>0
        links = response.selector.xpath("//@data-param").extract()
        store_branches = response.meta["store_branches"]
        for link in links:
            absolute_url = store_branches["storeBranchesLink"] + link
            yield scrapy.Request(absolute_url, callback=self.parse_branches)
    

    I believe the fix was to read store_branches back from response.meta; with that, the spider was able to find all possible pages (?page=n where n>0). If anyone can offer a more technical explanation, please do answer, since my understanding of the code is still relatively rudimentary.
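    For completeness, a minimal sketch of the whole corrected parse_branches, assuming the pagination request also has to carry store_branches forward in meta (without it, the callback on the next page would have nothing to read):

        def parse_branches(self, response):
            store_branches_name_selector = "//li[@class='xiti']"
            store_branches = response.meta["store_branches"]

            for store_branch in response.xpath(store_branches_name_selector):
                store_branches["storeBranchName"] = store_branch.xpath(
                    ".//span[@class='title']/text()").extract_first()
                yield store_branches

            # Follow every "page=n" fragment advertised by the pagination widget
            links = response.selector.xpath("//@data-param").extract()
            for link in links:
                absolute_url = store_branches["storeBranchesLink"] + link
                # Pass store_branches along so the next page can read it again
                yield scrapy.Request(absolute_url, callback=self.parse_branches,
                                     meta={"store_branches": store_branches})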