
How to disable cache in scrapy?


I am trying to crawl a webpage on a particular website. The webpage varies slightly depending on the set of cookies I send through scrapy.Request().

If I make the requests to the webpage one by one, I get the correct results, but when I send these cookies in a for loop, I get the same result every time. I think Scrapy is caching the first response and serving the second request from that cache. Here is my code:

def start_requests(self):
    meta = {'REDIRECT_ENABLED': True}
    productUrl = "http://xyz"
    cookies = [{'name': '', 'value': '=='}, {'name': '', 'value': '=='}]
    for cook in cookies:
        header = {"User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36"}
        # one request per cookie set; dont_filter=True skips the duplicate filter
        productResponse = scrapy.Request(productUrl, callback=self.parseResponse,
                                         method='GET', meta=meta, body=str(),
                                         cookies=[cook], encoding='utf-8',
                                         priority=0, dont_filter=True)
        yield productResponse


def parseResponse(self, response):
    selector = Selector(response)
    print selector.xpath("xpaths here").extract()
    yield None

I expect the print statement to give different results for the two requests.

If anything isn't clear, please mention it in the comments.


Solution

  • The cache can be disabled in two ways:

    1. By changing the cache-related settings in your project's settings.py file, i.e. setting HTTPCACHE_ENABLED = False.
    2. Or at runtime: scrapy crawl crawl-name --set HTTPCACHE_ENABLED=False
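
Concretely, the first option is a one-line change in settings.py (HTTPCACHE_ENABLED is a standard Scrapy setting; note that the cache is off by default, so this line only matters if something in your project or an inherited config turned it on):

```python
# settings.py -- disable Scrapy's HttpCacheMiddleware for the whole project
HTTPCACHE_ENABLED = False
```

For the second option, `--set` (short form `-s`) overrides any setting for a single run, so `scrapy crawl crawl-name -s HTTPCACHE_ENABLED=False` works too, where crawl-name is your spider's `name`.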