Tags: python, multithreading, download, eventlet, greenlets

Problems scraping and saving files with eventlet


I can use eventlet to scrape images from a website, but I can't save them to a local directory. The code is below. Is anyone familiar with file I/O in the tasklet model? Thanks

import pyquery
import eventlet
from eventlet.green import urllib2

# fetch img urls ............ works fine

print "loading page..."
html = urllib2.urlopen("http://www.meinv86.com/meinv/yuanchuangmeinvzipai/").read()
print "Parsing urls..."
d = pyquery.PyQuery(html)
count = 0
urls = []
url = ''
for i in d('img'):
    count = count + 1
    print i.attrib["src"]
    urls.append(i.attrib["src"])


def fetch(url):
    try:
        print "start fetching %s" % (url)
        urlfile = urllib2.urlopen(url)
        size = int(urlfile.headers['content-length'])
        print 'downloading %s, total file size: %d' % (url, size)
        data = urlfile.read()
        print 'download complete - %s' % (url)

##########################################
# file save just won't work

        f = open("/head2/" + url + ".jpg", "wb")
        f.write(body)
        f.close()
        print "file saved"
##########################################

        return data

    except:
        print "fail to download..."




pool = eventlet.GreenPool()

for body in pool.imap(fetch, urls):
    print "done"

Solution

  • Make sure that url is suitable as a filename, e.g.:

    import hashlib
    import os
    
    def url2filename(url, ext=''):
        return hashlib.md5(url).hexdigest() + ext  # anything that strips '/' characters works
    
    # ...
    with open(os.path.join("/head2", url2filename(url, '.jpg')), 'wb') as f:
        f.write(data)  # inside fetch() the downloaded bytes are in `data`; `body` is not defined there
    print "file saved"
    

    Note: you probably don't want to write your files to a top-level directory such as '/head2'.
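
    If you do want an output directory, a safer pattern is a path relative to the working directory, created once before the pool starts downloading. A minimal sketch (keeping the head2 name from the question):

    import errno
    import os

    outdir = os.path.join(os.getcwd(), "head2")  # relative location, not the filesystem root
    try:
        os.makedirs(outdir)  # create the target directory up front
    except OSError as e:
        if e.errno != errno.EEXIST:  # ignore "already exists", re-raise anything else
            raise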

    You could also consider urllib.urlretrieve().
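
    urlretrieve() downloads a URL straight into a file, which would replace the open/write/close block entirely. A minimal sketch of fetch() rewritten that way, assuming your eventlet version ships a green urllib (so the downloads still yield to the pool) and reusing url2filename() from above:

    import os
    from eventlet.green import urllib

    def fetch(url):
        try:
            filename = os.path.join("head2", url2filename(url, '.jpg'))
            urllib.urlretrieve(url, filename)  # fetches url and writes it to filename
            print "saved %s" % filename
            return filename
        except IOError:
            print "fail to download %s" % url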