I'm still learning Python and I have a question:
In Python 2.6.x I usually declare the encoding in the file header like this (as in PEP 0263):
# -*- coding: utf-8 -*-
After that, my strings are written as usual:
a = "A normal string without declared Unicode"
But every time I look at code from a Python project, the encoding is not declared in the header. Instead, it is declared on every string, like this:
a = u"A string with declared Unicode"
What's the difference? What's the purpose of this? I know Python 2.6.x uses ASCII encoding by default, but it can be overridden by the header declaration, so what's the point of the per-string declaration?
Addendum: Seems that I've mixed up file encoding with string encoding. Thanks for explaining it :)
As others have said, # coding: specifies the encoding the source file is saved in. Here are some examples to illustrate this:
A file saved on disk as cp437 (my console encoding), but no encoding declared
b = 'über'
u = u'über'
print b, repr(b)
print u, repr(u)
Output:
File "C:\ex.py", line 1
SyntaxError: Non-ASCII character '\x81' in file C:\ex.py on line 1, but no
encoding declared; see http://www.python.org/peps/pep-0263.html for details
Output of the file with # coding: cp437 added:
über '\x81ber'
über u'\xfcber'
At first, Python didn't know the encoding and complained about the non-ASCII character. Once it knew the encoding, the byte string got the bytes that were actually on disk. For the Unicode string, Python read \x81, knew that in cp437 that was a ü, and decoded it into the Unicode code point for ü, which is U+00FC. When the byte string was printed, Python sent the hex value 81 to the console directly. When the Unicode string was printed, Python correctly detected my console encoding as cp437 and translated Unicode ü to the cp437 value for ü.
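If it helps to see those two steps in isolation, here is a rough interactive sketch of the same decode/encode round trip (assuming a Python 2 session):
>>> '\x81'.decode('cp437')    # the byte on disk, decoded as the declared encoding
u'\xfc'
>>> u'\xfc'.encode('cp437')   # what gets sent when printing to a cp437 console
'\x81'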
Here's what happens with a file declared and saved in UTF-8:
├╝ber '\xc3\xbcber'
über u'\xfcber'
In UTF-8, ü is encoded as the hex bytes C3 BC, so the byte string contains those bytes, but the Unicode string is identical to the first example. Python read the two bytes and decoded them correctly. Python printed the byte string incorrectly, because it sent the two UTF-8 bytes representing ü directly to my cp437 console.
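As a rough sketch in a Python 2 session, the two bytes round-trip cleanly through UTF-8:
>>> '\xc3\xbc'.decode('utf-8')   # two UTF-8 bytes -> one code point
u'\xfc'
>>> u'\xfc'.encode('utf-8')
'\xc3\xbc'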
Here the file is declared cp437, but saved in UTF-8:
├╝ber '\xc3\xbcber'
├╝ber u'\u251c\u255dber'
The byte string still got the bytes on disk (the UTF-8 hex bytes C3 BC), but the Unicode string interpreted them as two cp437 characters instead of a single UTF-8-encoded character. Those two characters were translated to Unicode code points, and everything prints incorrectly.
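The mis-decoding can be reproduced by hand (again a sketch, assuming a Python 2 session):
>>> '\xc3\xbcber'.decode('cp437')   # UTF-8 bytes wrongly decoded as cp437
u'\u251c\u255dber'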
That header doesn't set the format of the strings; it sets the format of the file. Even with that header, "hello" is a byte string, not a Unicode string. To make it Unicode, you're going to have to use u"hello" everywhere. The header is just a hint of what format to use when reading the .py file.
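A quick way to check this for yourself (a minimal sketch; any Python 2 file with the header will do):
# -*- coding: utf-8 -*-
a = "hello"
b = u"hello"
print type(a)  # <type 'str'> -- still a byte string despite the header
print type(b)  # <type 'unicode'>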
I made the following module, called unicoder, to be able to do the transformation on variables:
import sys
import os

def ustr(string):
    # Wrap the value in a u"..." literal and write it into a helper module
    # declared as UTF-8, so Python parses it as a Unicode string.
    string = 'u"%s"' % string
    with open('_unicoder.py', 'w') as script:
        script.write('# -*- coding: utf-8 -*-\n')
        script.write('_ustr = %s' % string)
    import _unicoder
    value = _unicoder._ustr
    # Clean up the temporary module and its files (Windows 'del' command).
    del _unicoder
    del sys.modules['_unicoder']
    os.system('del _unicoder.py')
    os.system('del _unicoder.pyc')
    return value
Then in your program you could do the following:
# -*- coding: utf-8 -*-
from unicoder import ustr
txt = 'Hello, Unicode World'
txt = ustr(txt)
print type(txt) # <type 'unicode'>
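For comparison, a plain decode gives the same result without the temporary module, assuming the byte string really is UTF-8 (or plain ASCII) encoded:
txt = 'Hello, Unicode World'.decode('utf-8')
print type(txt)  # <type 'unicode'>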
Source: Stackoverflow.com