Tags: python, encoding, urllib2, utf8-decode

urllib: get utf-8 encoded site source code


I'm trying to fetch a segment of a website. The script works; however, the site contains accented characters such as á, é, í, ó, ú.

When I fetch the site using urllib or urllib2, the site's source code does not come back encoded in utf-8, which I would like it to be, since utf-8 supports these accents.

I believe that the target site is encoded in utf-8 as it contains the following meta tag:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

My Python script:

import urllib2

# url holds the address of the page to fetch (defined elsewhere)
opener = urllib2.build_opener()
opener.addheaders = [('Accept-Charset', 'utf-8')]
url_response = opener.open(url)
deal_html = url_response.read().decode('utf-8')
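
(Side note: rather than hardcoding utf-8, the charset the server declares in the Content-Type response header could be read from the response itself. The following is only a sketch, assuming Python 2 / urllib2 and that url is defined as above.)

import urllib2

opener = urllib2.build_opener()
opener.addheaders = [('Accept-Charset', 'utf-8')]
url_response = opener.open(url)
# info() returns the response headers; getparam('charset') extracts the
# charset parameter from the Content-Type header, if the server sent one
declared_charset = url_response.info().getparam('charset') or 'utf-8'
deal_html = url_response.read().decode(declared_charset)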

However, I keep getting results that look like they are not encoded in utf-8.

E.g. "Milán" on the website comes back as "Mil\xe1n" after urllib2 fetches it.

Any suggestions?


Solution

  • Your script is working correctly. The "\xe1" is simply how Python displays the non-ASCII character in the repr of the unicode object produced by decoding; the decode itself succeeded. For example:

    >>> "Mil\xc3\xa1n".decode('utf-8')
    u'Mil\xe1n'
    

    The "\xc3\xa1" sequence is the UTF-8 sequence for leter a with diacritic mark: á.