python · proxy · screen-scraping

Intelligent screen scraping using different proxies and user-agents at random?


I want to download a few HTML pages from http://abc.com/view_page.aspx?ID= where the ID comes from an array of different numbers.

I would like to visit multiple instances of this URL, saving each page as [ID].html and using a different proxy IP/port for each request.

I also want to use different user-agents, and to randomize the wait time before each download.

What is the best way of doing this? urllib2? pycURL? cURL? What do you prefer for the task at hand?

Please advise. Thanks guys!


Solution

  • Use something like:

    import urllib2
    import time
    import random
    
    MAX_WAIT = 5
    ids = ...      # fill in: list of page IDs
    agents = ...   # fill in: list of User-Agent strings
    proxies = ...  # fill in: list of 'host:port' HTTP proxies
    
    for id in ids:
        url = 'http://abc.com/view_page.aspx?ID=%d' % id
        # send the request through the current proxy with the current user-agent
        opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxies[0]}))
        html = opener.open(urllib2.Request(url, None, {'User-agent': agents[0]})).read()
        with open('%d.html' % id, 'w') as f:
            f.write(html)
        agents.append(agents.pop(0))    # cycle: move the first agent to the back
        proxies.append(proxies.pop(0))  # same for the proxies
        time.sleep(MAX_WAIT * random.random())  # random wait before the next request
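
    If you are on Python 3, urllib2 no longer exists; its functionality lives in urllib.request. Here is a minimal equivalent sketch under that assumption, picking a proxy and user-agent at random with random.choice instead of rotating the lists (the URL and the placeholder pools are the ones from the question):

    import random
    import time
    import urllib.request
    
    MAX_WAIT = 5
    ids = ...      # fill in: list of page IDs
    agents = ...   # fill in: list of User-Agent strings
    proxies = ...  # fill in: list of 'host:port' HTTP proxies
    
    for page_id in ids:
        url = 'http://abc.com/view_page.aspx?ID=%d' % page_id
        proxy = random.choice(proxies)  # random proxy for this request
        agent = random.choice(agents)   # random user-agent for this request
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({'http': proxy}))
        request = urllib.request.Request(url, headers={'User-Agent': agent})
        html = opener.open(request).read()
        with open('%d.html' % page_id, 'wb') as f:  # response body is bytes
            f.write(html)
        time.sleep(MAX_WAIT * random.random())  # random wait before the next request

    Note that random.choice does not guarantee every proxy gets used equally often; if you want an even spread, keep the rotation approach above or draw from itertools.cycle instead.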