python, html, web-scraping, mechanize

python inconsistent behaviour reading content from EU web page


I'm trying to extract content from the following EU page:

http://europa.eu/about-eu/countries/member-countries/greece/index_el.htm

I tried opening the page with urllib2 and mechanize, but I get garbled, strangely encoded text.

import mechanize

url = 'http://europa.eu/about-eu/countries/member-countries/greece/index_el.htm'

browser = mechanize.Browser()
browser.set_handle_robots(False)
cookies = mechanize.CookieJar()
# attach the cookie jar so cookies persist across requests
browser.set_cookiejar(cookies)
browser.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.41 Safari/534.7')]

a = browser.open(url, timeout=5)
content = a.read()

gives

>>> content[:100]
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\xbcW\xddo\x1b\xc7\x11\x7fv\xfe\x8a\xf5\x050\x12\xc0\xe4I\xfejkS\x0c\\\x89M\x8c\xfa\xab\xb2\x84\xa2\x08\x0cay\xb7<\xae\xb9\xb7{\xbe\xdb\xa3\xcc\x16\x05$\xcb\xae\xeb\xc0\x0eP\xa4F\x9f\x8a6\xe8\xabcWQTWV\\\xbd\xc4\xafG\xf9?\xea\xcc\xdd\xf1x\xa4HK:\xc9\\A\xe4r?'

but sometimes it works:

>>> content[:100]
'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd'

What can I do to avoid running into this problem?


Solution

  • You are receiving gzip-compressed content (the \x1f\x8b prefix in the garbled output is the gzip magic number); check the Content-Encoding response header and decompress when it is set to gzip:

    import zlib

    # Content-Encoding: gzip means the body is gzip-compressed;
    # 16 + MAX_WBITS tells zlib to expect a gzip header and trailer.
    if a.info().get('Content-Encoding', '').lower() == 'gzip':
        decompressor = zlib.decompressobj(16 + zlib.MAX_WBITS)
        content = decompressor.decompress(content)
    

    Alternatively, use the excellent python-requests library, which not only handles sessions for you but also transparently decompresses gzip and deflate responses; see the sketch below.
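
    A minimal sketch with requests, assuming the package is installed and the page is still served at the same URL (the User-Agent header is optional and simply copied over from the question):

    import requests

    url = 'http://europa.eu/about-eu/countries/member-countries/greece/index_el.htm'
    headers = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.41 Safari/534.7'}

    # requests decompresses gzip/deflate bodies automatically, so the
    # text is plain HTML regardless of the Content-Encoding header.
    r = requests.get(url, headers=headers, timeout=5)
    r.raise_for_status()
    print(r.text[:100])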