Tags: python, multithreading, callback, python-multithreading, urlfetch

A very simple multithreaded parallel URL fetcher (without a queue)


I spent a whole day looking for the simplest possible multithreaded URL fetcher in Python, but most scripts I found use queues, multiprocessing, or complex libraries.

Finally I wrote one myself, which I am posting as an answer. Please feel free to suggest any improvements.

I guess other people might have been looking for something similar.


Solution

  • Simplifying your original version as far as possible:

    import threading
    import time
    import urllib.request
    
    start = time.time()
    urls = [
        "http://www.google.com",
        "http://www.apple.com",
        "http://www.microsoft.com",
        "http://www.amazon.com",
        "http://www.facebook.com",
    ]
    
    def fetch_url(url):
        # Each thread opens its URL, reads the body, and reports its timing.
        with urllib.request.urlopen(url) as handler:
            html = handler.read()
        print("%r fetched in %ss" % (url, time.time() - start))
    
    # One thread per URL; keep the Thread objects so we can join them later.
    threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    
    print("Elapsed Time: %s" % (time.time() - start))
    

    The only new tricks here are:

    • Keep track of the threads you create.
    • Don't bother with a counter of threads if you just want to know when they're all done; join already tells you that.
    • If you don't need any state or external API, you don't need a Thread subclass, just a target function (for a variant with even less bookkeeping, see the pool-based sketch after this list).