
Signal.NEWNYM not giving new ip address when used in scrapy middleware


I am using a Scrapy web crawler with Privoxy and Tor. Everything is correctly configured and I can scrape through the Tor network via Privoxy.

I want the IP used for scraping to change with each request (or every x requests). I'm using controller.signal(Signal.NEWNYM) in a proxy middleware to attempt this, following the answer from here: Scrapy with Privoxy and Tor: how to renew IP, but I'm not getting any IP change.

This is the middleware I use to change Tor circuits and the IP:

from stem import Signal
from stem.control import Controller

def _set_new_ip():
    """Request a new Tor circuit via the control port."""
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='password')
        controller.signal(Signal.NEWNYM)

class ProxyMiddleware(object):

    def process_request(self, request, spider):
        _set_new_ip()
        request.meta['proxy'] = 'http://127.0.0.1:8118'
        spider.log('Proxy : %s' % request.meta['proxy'])
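Since the goal is a new IP only every x requests, a counter-based variant of the middleware may be worth sketching. This is not from the original post: the Tor call is injected as a plain callable (in a real middleware it would wrap controller.signal(Signal.NEWNYM)), so the rotation logic can be exercised without a running Tor instance, and the class and parameter names are hypothetical:

```python
class RotatingProxyMiddleware:
    """Request a new Tor circuit only every `rotate_every` requests.

    `renew_circuit` is an injected callable; in production it would
    open a stem Controller and send Signal.NEWNYM.
    """

    def __init__(self, renew_circuit, rotate_every=10):
        self.renew_circuit = renew_circuit
        self.rotate_every = rotate_every
        self.request_count = 0

    def process_request(self, request, spider=None):
        # renew the circuit on the 1st, (N+1)th, (2N+1)th, ... request
        if self.request_count % self.rotate_every == 0:
            self.renew_circuit()
        self.request_count += 1
        # route the request through the local Privoxy instance
        request.meta['proxy'] = 'http://127.0.0.1:8118'
```

Wiring the real NEWNYM call in would just mean passing `_set_new_ip` as `renew_circuit`.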

I am aware that changing Tor circuits doesn't necessarily mean a change of IP; however, I tested controller.signal(Signal.NEWNYM) in a separate script and found that changing Tor circuits does lead to a periodic change of IP. Here is the script I used to test:

import requests

from stem import Signal
from stem.control import Controller

def set_new_ip():
    """Change IP using Tor"""
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='password')
        controller.signal(Signal.NEWNYM)

while True:

    set_new_ip()

    local_proxy = '127.0.0.1:8118'
    http_proxy = {
        'http': local_proxy,
        'https': local_proxy
    }

    current_ip = requests.get(
        url='http://icanhazip.com/',
        proxies=http_proxy,
        verify=False
    )

    print(current_ip.content)

From this script I'd get output like the following, showing a periodic IP change:

b'109.70.100.27\n'
b'109.70.100.27\n'
b'109.70.100.27\n'
b'109.70.100.27\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'198.98.58.135\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
b'185.220.101.2\n'
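One likely reason the address changes only every few iterations rather than on every call (an inference, not stated in the original post): Tor rate-limits NEWNYM, honouring it roughly at most once every ten seconds by default, so back-to-back signals are coalesced. That throttling behaviour can be sketched without a live Tor instance; the class below is purely illustrative, with the clock injected so it is testable:

```python
import time

class NewnymThrottle:
    """Mimic Tor's NEWNYM rate limiting: a signal is honoured only if
    enough time has passed since the last honoured one (~10 s by default)."""

    def __init__(self, min_interval=10.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock          # injectable for testing
        self._last = None

    def signal(self):
        """Return True if the signal would be honoured, False if coalesced."""
        now = self.clock()
        if self._last is None or now - self._last >= self.min_interval:
            self._last = now
            return True
        return False
```

In a loop like the test script above, only the honoured signals can produce a new circuit, which matches the runs of repeated addresses in the output.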

Yet with my spider I don't get this periodic change: ip-log.csv just contains the same IP address repeated over and over. What am I doing wrong?

This is the spider code I'm using:

import csv

import scrapy


class Spider(scrapy.Spider):

    name = 'spider'
    dir = '/a/path'

    # create/clear the .csv files used to record the scraping results
    # and the ip used for each request
    open(dir + '/results.csv', 'w', newline='').close()
    open(dir + '/ip-log.csv', 'w', newline='').close()

    def start_requests(self):
        url = 'https://url.com'
        yield scrapy.Request(url, callback=self.parse)

    #collect listed urls
    def parse(self, response):
        path = '/response/xpath/@href'
        if response.xpath(path):
            for href in response.xpath(path).extract():
                yield scrapy.Request(url=response.urljoin(href), callback=self.save_result)

                #use icanhazip.com to get ip used for request
                yield scrapy.Request('https://icanhazip.com', callback=self.check_ip, dont_filter=True)

            #build the url of the next results page
            url = response.request.url.split('&')
            for item in url:
                if item.startswith('index='):
                    page_index = item.split('=')[-1]

            next_page = ['index=' + str(int(page_index) + 24) if x.startswith('index=') else x for x in url]
            next_page = '&'.join(next_page)
            yield scrapy.Request(url=next_page, callback=self.parse)

    #record ip
    def check_ip(self, response):
        ip = response.xpath('/html/body/p').extract()
        dir = '/a/path'

        with open(dir + '/ip-log.csv', 'a+', newline='') as f:   #write request ip to .csv file
            writer = csv.writer(f)
            writer.writerow([ip])

        yield scrapy.Request('https://icanhazip.com', callback=self.parse, dont_filter=True)

    #visit each url and save results
    def save_result(self, response):
        dir = '/a/path'

        path = '/desired/xpath'
        result = response.xpath(path).extract()

        with open(dir + '/results.csv', 'a+', newline='') as f:
            writer = csv.writer(f)
            writer.writerow([result])   #save results to results.csv

Solution

  • Apparently tor doesn't want to switch ip when visiting icanhazip.com. I tried the same code with a different website ('http://whatsmyuseragent.org/') and the ip is now changing periodically. Tested this with relevant middleware disabled (http://whatsmyuseragent.org/ showed same unhidden ip without periodic change).