
Why doesn't this conversion to utf8 work?


I have a subprocess command that outputs some characters such as '\xf1'. I'm trying to decode it as utf8 but I get an error.

s = '\xf1'
s.decode('utf-8')

The above throws:

UnicodeDecodeError: 'utf8' codec can't decode byte 0xf1 in position 0: unexpected end of data

It works when I use 'latin-1' but shouldn't utf8 work as well? My understanding is that latin1 is a subset of utf8.

Am I missing something here?

EDIT:

print s # ñ
repr(s) # returns "'\\xf1'"
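The failure is easy to reproduce without the subprocess. Here is a minimal sketch (using Python 3 bytes literals; the byte string stands in for the subprocess output):

```python
# 0xF1 is 'ñ' in Latin-1, but in UTF-8 it is the leading byte of a
# multi-byte sequence, so a lone 0xF1 is an incomplete sequence.
raw = b'\xf1'  # stand-in for the subprocess output

try:
    raw.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xf1 ...

print(raw.decode('latin-1'))  # ñ
```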

Solution

  • You have confused Unicode with UTF-8. Latin-1 is a subset of Unicode, but it is not a subset of UTF-8. Avoid like the plague ever thinking about individual code units. Just use code points. Do not think about UTF-8. Think about Unicode instead. This is where you are being confused.
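To see that distinction concretely, here is a small sketch (Python 3) comparing the two serializations of the same code point:

```python
# U+00F1 is one code point. Latin-1 stores it as the single byte 0xF1
# (the byte value happens to equal the code point number), while UTF-8
# serializes it as the two-byte sequence 0xC3 0xB1.
s = '\N{LATIN SMALL LETTER N WITH TILDE}'  # 'ñ', code point U+00F1

print(s.encode('latin-1'))  # b'\xf1'
print(s.encode('utf-8'))    # b'\xc3\xb1'
```

That is why a Latin-1 byte stream that contains 0xF1 is not valid UTF-8: the code point survives, but its serialized bytes differ between the two encodings.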

    Source Code for Demo Program

    Using Unicode in Python is very easy. It’s especially easy with Python 3 and wide builds, the only way I use Python, but you can still use the legacy Python 2 under a narrow build if you are careful about sticking to UTF-8.

    To do this, always set your source code encoding and your output encoding correctly to UTF-8. Now stop thinking of UTF-anything and use only Unicode literals, logical code point numbers, or symbolic character names throughout your Python program.

    Here’s the source code with line numbers:

    % cat -n /tmp/py
         1  #!/usr/bin/env python3.2
         2  # -*- coding: UTF-8 -*-
         3  
         4  from __future__ import unicode_literals
         5  from __future__ import print_function
         6  
         7  import sys
         8  import os
         9  import re
        10  
        11  if not (("PYTHONIOENCODING" in os.environ)
        12              and
        13          re.search("^utf-?8$", os.environ["PYTHONIOENCODING"], re.I)):
        14      sys.stderr.write(sys.argv[0] + ": Please set your PYTHONIOENCODING envariable to utf8\n")
        15      sys.exit(1)
        16  
        17  print('1a: el ni\xF1o')
        18  print('2a: el nin\u0303o')
        19  
        20  print('1b: el niño')
        21  print('2b: el niño')
        22  
        23  print('1c: el ni\N{LATIN SMALL LETTER N WITH TILDE}o')
        24  print('2c: el nin\N{COMBINING TILDE}o')
    

    And here are those print calls with their non-ASCII characters uniquoted using the \x{⋯} notation:

    % grep -n ^print /tmp/py | uniquote -x
    17:print('1a: el ni\xF1o')
    18:print('2a: el nin\u0303o')
    20:print('1b: el ni\x{F1}o')
    21:print('2b: el nin\x{303}o')
    23:print('1c: el ni\N{LATIN SMALL LETTER N WITH TILDE}o')
    24:print('2c: el nin\N{COMBINING TILDE}o')
    

    Sample Runs of Demo Program

    Here’s a sample run of that program that shows the three different ways (a, b, and c) of doing it: the first set as literals in your source code (which will be subject to StackOverflow’s NFC conversions and so cannot be trusted!!!) and the second two sets with numeric Unicode code points and with symbolic Unicode character names respectively, again uniquoted so you can see what things really are:

    % python /tmp/py
    1a: el niño
    2a: el niño
    1b: el niño
    2b: el niño
    1c: el niño
    2c: el niño
    
    % python /tmp/py | uniquote -x
    1a: el ni\x{F1}o
    2a: el nin\x{303}o
    1b: el ni\x{F1}o
    2b: el nin\x{303}o
    1c: el ni\x{F1}o
    2c: el nin\x{303}o
    
    % python /tmp/py | uniquote -v
    1a: el ni\N{LATIN SMALL LETTER N WITH TILDE}o
    2a: el nin\N{COMBINING TILDE}o
    1b: el ni\N{LATIN SMALL LETTER N WITH TILDE}o
    2b: el nin\N{COMBINING TILDE}o
    1c: el ni\N{LATIN SMALL LETTER N WITH TILDE}o
    2c: el nin\N{COMBINING TILDE}o
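    The a/b/c pairs above print identically but are different code point sequences: precomposed U+00F1 versus n plus combining U+0303. If your program needs the two forms to compare equal, Unicode normalization is the usual tool; a small sketch:

```python
import unicodedata

composed  = 'el ni\N{LATIN SMALL LETTER N WITH TILDE}o'  # 7 code points
combining = 'el nin\N{COMBINING TILDE}o'                 # 8 code points

print(composed == combining)                                # False
print(unicodedata.normalize('NFC', combining) == composed)  # True
```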
    

    I really dislike looking at binary, but here is what that looks like as binary bytes:

    % python /tmp/py | uniquote -b
    1a: el ni\xC3\xB1o
    2a: el nin\xCC\x83o
    1b: el ni\xC3\xB1o
    2b: el nin\xCC\x83o
    1c: el ni\xC3\xB1o
    2c: el nin\xCC\x83o
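    Those byte sequences can be reproduced directly by encoding the two string forms (a Python 3 sketch):

```python
# UTF-8 serializes U+00F1 as the bytes 0xC3 0xB1, and the combining
# tilde U+0303 as 0xCC 0x83 -- matching the binary dump above.
print('el ni\xF1o'.encode('utf-8'))     # b'el ni\xc3\xb1o'
print('el nin\u0303o'.encode('utf-8'))  # b'el nin\xcc\x83o'
```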
    

    The Moral of the Story

    Even when you use UTF-8 source, you should think and use only logical Unicode code point numbers (or symbolic named characters), not the individual 8-bit code units that underlie the serial representation of UTF-8 (or for that matter of UTF-16). It is extremely rare to need code units instead of code points, and it just confuses you.
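    In Python 3 that distinction is easy to observe: len() on a str counts code points, while len() on its encoded bytes counts code units. A sketch:

```python
s = 'el ni\xF1o'  # 'el niño'

print(len(s))                           # 7 code points
print(len(s.encode('utf-8')))           # 8 UTF-8 code units: ñ takes two bytes
print(len(s.encode('utf-16-le')) // 2)  # 7 UTF-16 code units here
```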

    You will also get more reliable behavior if you use a wide build of Python 3 than you will with the alternatives to those choices, but that is a UTF-32 matter, not a UTF-8 one. Both UTF-32 and UTF-8 are easy to work with, if you just go with the flow.