Unicode HOWTO
Release 2.7.6
Guido van Rossum
Fred L. Drake, Jr., editor
March 17, 2014
Python Software Foundation
Email: docs@python.org
Contents

1 Introduction to Unicode
  1.1 History of Character Codes
  1.2 Definitions
  1.3 Encodings
  1.4 References

2 Python 2.x’s Unicode Support
  2.1 The Unicode Type
  2.2 Unicode Literals in Python Source Code
  2.3 Unicode Properties
  2.4 References

3 Reading and Writing Unicode Data
  3.1 Unicode filenames
  3.2 Tips for Writing Unicode-aware Programs
  3.3 References

4 Revision History and Acknowledgements
Release 1.03

This HOWTO discusses Python 2.x’s support for Unicode, and explains various problems that people commonly encounter when trying to work with Unicode. For the Python 3 version, see the Unicode HOWTO in the Python 3 documentation.
1 Introduction to Unicode
1.1 History of Character Codes
In 1968, the American Standard Code for Information Interchange, better known by its acronym ASCII, was
standardized. ASCII defined numeric codes for various characters, with the numeric values running from 0 to 127.
For example, the lowercase letter ‘a’ is assigned 97 as its code value.
ASCII was an American-developed standard, so it only defined unaccented characters. There was an ‘e’, but no ‘é’
or ‘Í’. This meant that languages which required accented characters couldn’t be faithfully represented in ASCII.
(Actually the missing accents matter for English, too, which contains words such as ‘naïve’ and ‘café’, and some
publications have house styles which require spellings such as ‘coöperate’.)
For a while people just wrote programs that didn’t display accents. I remember looking at Apple ][ BASIC
programs, published in French-language publications in the mid-1980s, that had lines like these:
PRINT"FICHIERESTCOMPLETE."
PRINT"CARACTERENONACCEPTE."
Those messages should contain accents, and they just look wrong to someone who can read French.
In the 1980s, almost all personal computers were 8-bit, meaning that bytes could hold values ranging from 0 to
255. ASCII codes only went up to 127, so some machines assigned values between 128 and 255 to accented
characters. Different machines had different codes, however, which led to problems exchanging files. Eventually
various commonly used sets of values for the 128-255 range emerged. Some were true standards, defined by the
International Organization for Standardization, and some were de facto conventions that were invented by one company or
another and managed to catch on.
255 characters aren’t very many. For example, you can’t fit both the accented characters used in Western Europe
and the Cyrillic alphabet used for Russian into the 128-255 range because there are more than 127 such characters.
You could write files using different codes (all your Russian files in a coding system called KOI8, all your French
files in a different coding system called Latin1), but what if you wanted to write a French document that quotes
some Russian text? In the 1980s people began to want to solve this problem, and the Unicode standardization
effort began.
Unicode started out using 16-bit characters instead of 8-bit characters. 16 bits means you have 2^16 = 65,536
distinct values available, making it possible to represent many different characters from many different alphabets;
an initial goal was to have Unicode contain the alphabets for every single human language. It turns out that even 16
bits isn’t enough to meet that goal, and the modern Unicode specification uses a wider range of codes, 0-1,114,111
(0x10ffff in base-16).
There’s a related ISO standard, ISO 10646. Unicode and ISO 10646 were originally separate efforts, but the
specifications were merged with the 1.1 revision of Unicode.
(This discussion of Unicode’s history is highly simplified. I don’t think the average Python programmer needs to
worry about the historical details; consult the Unicode consortium site listed in the References for more information.)
1.2 Definitions
A character is the smallest possible component of a text. ‘A’, ‘B’, ‘C’, etc., are all different characters. So are
‘È’ and ‘Í’. Characters are abstractions, and vary depending on the language or context you’re talking about. For
example, the symbol for ohms (Ω) is usually drawn much like the capital letter omega (Ω) in the Greek alphabet
(they may even be the same in some fonts), but these are two different characters that have different meanings.
The Unicode standard describes how characters are represented by code points. A code point is an integer value,
usually denoted in base 16. In the standard, a code point is written using the notation U+12ca to mean the
character with value 0x12ca (4810 decimal). The Unicode standard contains a lot of tables listing characters and
their corresponding code points:
0061    'a'; LATIN SMALL LETTER A
0062    'b'; LATIN SMALL LETTER B
0063    'c'; LATIN SMALL LETTER C
...
007B    '{'; LEFT CURLY BRACKET
Strictly, these definitions imply that it’s meaningless to say ‘this is character U+12ca’. U+12ca is a code point,
which represents some particular character; in this case, it represents the character ‘ETHIOPIC SYLLABLE WI’.
In informal contexts, this distinction between code points and characters will sometimes be forgotten.
A character is represented on a screen or on paper by a set of graphical elements that’s called a glyph. The glyph
for an uppercase A, for example, is two diagonal strokes and a horizontal stroke, though the exact details will
depend on the font being used. Most Python code doesn’t need to worry about glyphs; figuring out the correct
glyph to display is generally the job of a GUI toolkit or a terminal’s font renderer.
1.3 Encodings
To summarize the previous section: a Unicode string is a sequence of code points, which are numbers from 0 to
0x10ffff. This sequence needs to be represented as a set of bytes (meaning, values from 0-255) in memory. The
rules for translating a Unicode string into a sequence of bytes are called an encoding.
The first encoding you might think of is an array of 32-bit integers. In this representation, the string “Python”
would look like this:
   P           y           t           h           o           n
0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00
   0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
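In Python 2.6 and later you can reproduce this byte layout yourself with the explicit little-endian 32-bit codec; a quick sketch (not something you would normally do in real code):

>>> u'Python'.encode('utf-32-le')
'P\x00\x00\x00y\x00\x00\x00t\x00\x00\x00h\x00\x00\x00o\x00\x00\x00n\x00\x00\x00'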
This representation is straightforward but using it presents a number of problems.
1. It’s not portable; different processors order the bytes differently.
2. It’s very wasteful of space. In most texts, the majority of the code points are less than 127, or less than
255, so a lot of space is occupied by zero bytes. The above string takes 24 bytes compared to the 6 bytes
needed for an ASCII representation. Increased RAM usage doesn’t matter too much (desktop computers
have megabytes of RAM, and strings aren’t usually that large), but expanding our usage of disk and network
bandwidth by a factor of 4 is intolerable.
3. It’s not compatible with existing C functions such as strlen() , so a new family of wide string functions
would need to be used.
4. Many Internet standards are defined in terms of textual data, and can’t handle content with embedded zero
bytes.
Generally people don’t use this encoding, instead choosing other encodings that are more efficient and convenient.
UTF-8 is probably the most commonly supported encoding; it will be discussed below.
Encodings don’t have to handle every possible Unicode character, and most encodings don’t. For example, Python’s default encoding is the ‘ascii’ encoding. The rules for converting a Unicode string into the ASCII encoding are simple; for each code point:
1. If the code point is < 128, each byte is the same as the value of the code point.
2. If the code point is 128 or greater, the Unicode string can’t be represented in this encoding. (Python raises
a UnicodeEncodeError exception in this case.)
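A quick interactive sketch of both cases (Python 2; the exact error text can vary slightly between versions):

>>> u'abc'.encode('ascii')
'abc'
>>> u'caf\xe9'.encode('ascii')
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 3:
ordinal not in range(128)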
Latin-1, also known as ISO-8859-1, is a similar encoding. Unicode code points 0-255 are identical to the Latin-1
values, so converting to this encoding simply requires converting code points to byte values; if a code point larger
than 255 is encountered, the string can’t be encoded into Latin-1.
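A similar sketch for Latin-1 (again, the error wording may differ slightly between Python versions):

>>> u'caf\xe9'.encode('latin-1')       # code point 0xE9 fits in a single byte
'caf\xe9'
>>> u'\u0394'.encode('latin-1')        # GREEK CAPITAL LETTER DELTA is above 255
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'latin-1' codec can't encode character u'\u0394' in position 0:
ordinal not in range(256)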
Encodings don’t have to be simple one-to-one mappings like Latin-1. Consider IBM’s EBCDIC, which was used
on IBM mainframes. Letter values weren’t in one block: ‘a’ through ‘i’ had values from 129 to 137, but ‘j’ through
‘r’ were 145 through 153. If you wanted to use EBCDIC as an encoding, you’d probably use some sort of lookup
table to perform the conversion, but this is largely an internal detail.
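Python happens to ship several EBCDIC codecs (for example ‘cp500’), so you can see the non-contiguous letter values for yourself; a small sketch:

>>> u'abc'.encode('cp500')    # 'a', 'b', 'c' map to 129, 130, 131
'\x81\x82\x83'
>>> u'jkl'.encode('cp500')    # 'j', 'k', 'l' map to 145, 146, 147
'\x91\x92\x93'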
UTF-8 is one of the most commonly used encodings. UTF stands for “Unicode Transformation Format”, and the
‘8’ means that 8-bit numbers are used in the encoding. (There’s also a UTF-16 encoding, but it’s less frequently
used than UTF-8.) UTF-8 uses the following rules:
1. If the code point is <128, it’s represented by the corresponding byte value.
2. If the code point is between 128 and 0x7ff, it’s turned into two byte values between 128 and 255.
3. Code points >0x7ff are turned into three- or four-byte sequences, where each byte of the sequence is between
128 and 255.
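A short sketch of the three cases (Python 2):

>>> u'a'.encode('utf-8')         # code point < 128: one byte
'a'
>>> u'\xe9'.encode('utf-8')      # between 128 and 0x7ff: two bytes
'\xc3\xa9'
>>> u'\u20ac'.encode('utf-8')    # above 0x7ff (EURO SIGN): three bytes
'\xe2\x82\xac'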
UTF-8 has several convenient properties:
1. It can handle any Unicode code point.
2. A Unicode string is turned into a string of bytes containing no embedded zero bytes. This avoids byte-ordering issues, and means UTF-8 strings can be processed by C functions such as strcpy() and sent through protocols that can’t handle zero bytes.
3. A string of ASCII text is also valid UTF-8 text.
4. UTF-8 is fairly compact; the majority of code points are turned into two bytes, and values less than 128
occupy only a single byte.
5. If bytes are corrupted or lost, it’s possible to determine the start of the next UTF-8-encoded code point and
resynchronize. It’s also unlikely that random 8-bit data will look like valid UTF-8.
1.4 References
The Unicode Consortium site at <http://www.unicode.org> has character charts, a glossary, and PDF versions of the Unicode specification. Be prepared for some difficult reading. The site also hosts a chronology of the origin and development of Unicode.
To help understand the standard, Jukka Korpela has written an introductory guide to reading the Unicode character tables.
Another good introductory article was written by Joel Spolsky: <http://www.joelonsoftware.com/articles/Unicode.html>. If this introduction didn’t make things clear to you, you should try reading this alternate article before continuing.
Wikipedia entries are often helpful; see the entries for “character encoding” and UTF-8, for example.
2 Python 2.x’s Unicode Support
Now that you’ve learned the rudiments of Unicode, we can look at Python’s Unicode features.
2.1 The Unicode Type
Unicode strings are expressed as instances of the unicode type, one of Python’s repertoire of built-in types. It derives from an abstract type called basestring, which is also an ancestor of the str type; you can therefore check if a value is a string type with isinstance(value, basestring). Under the hood, Python represents Unicode strings as either 16- or 32-bit integers, depending on how the Python interpreter was compiled.
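For example, a brief interactive check:

>>> isinstance(u'abc', basestring), isinstance('abc', basestring)
(True, True)
>>> isinstance(u'abc', str)
False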
The unicode() constructor has the signature unicode(string[, encoding, errors]). All of its
arguments should be 8-bit strings. The first argument is converted to Unicode using the specified encoding; if you
leave off the encoding argument, the ASCII encoding is used for the conversion, so characters greater than 127
will be treated as errors:
>>> unicode('abcdef')
u'abcdef'
>>> s = unicode('abcdef')
>>> type(s)
<type 'unicode'>
>>> unicode('abcdef' + chr(255))
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 6:
ordinal not in range(128)
The errors argument specifies the response when the input string can’t be converted according to the encoding’s
rules. Legal values for this argument are ‘strict’ (raise a UnicodeDecodeError exception), ‘replace’ (add
U+FFFD, ‘REPLACEMENT CHARACTER’), or ‘ignore’ (just leave the character out of the Unicode result).
The following examples show the differences:
>>> unicode('\x80abc', errors='strict')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0:
ordinal not in range(128)
>>> unicode('\x80abc', errors='replace')
u'\ufffdabc'
>>> unicode('\x80abc', errors='ignore')
u'abc'
Encodings are specified as strings containing the encoding’s name. Python 2.7 comes with roughly 100 different
encodings; see the list of standard encodings in the Python Library Reference. Some encodings have multiple
names; for example, ‘latin-1’, ‘iso_8859_1’ and ‘8859’ are all synonyms for the same encoding.
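One way to confirm that such aliases resolve to the same codec is with the codecs module (a small sketch; the canonical name reported may vary by version):

>>> import codecs
>>> codecs.lookup('latin-1').name
'iso8859-1'
>>> codecs.lookup('8859').name
'iso8859-1'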
One-character Unicode strings can also be created with the unichr() built-in function, which takes integers
and returns a Unicode string of length 1 that contains the corresponding code point. The reverse operation is the
built-in ord() function that takes a one-character Unicode string and returns the code point value:
>>> unichr(40960)
u'\ua000'
>>> ord(u'\ua000')
40960
Instances of the unicode type have many of the same methods as the 8-bit string type for operations such as
searching and formatting:
>>> s = u'Was ever feather so lightly blown to and fro as this multitude?'
>>> s.count('e')
5
>>> s.find('feather')
9
>>> s.find('bird')
-1
>>> s.replace('feather', 'sand')
u'Was ever sand so lightly blown to and fro as this multitude?'
>>> s.upper()
u'WAS EVER FEATHER SO LIGHTLY BLOWN TO AND FRO AS THIS MULTITUDE?'
Note that the arguments to these methods can be Unicode strings or 8-bit strings. 8-bit strings will be converted
to Unicode before carrying out the operation; Python’s default ASCII encoding will be used, so characters greater
than 127 will cause an exception:
>>> s.find('Was\x9f')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9f in position 3:
ordinal not in range(128)
>>> s.find(u'Was\x9f')
-1
Much Python code that operates on strings will therefore work with Unicode strings without requiring any changes
to the code. (Input and output code needs more updating for Unicode; more on this later.)
Another important method is .encode([encoding], [errors='strict']), which returns an 8-bit
string version of the Unicode string, encoded in the requested encoding. The errors parameter is the same
as the parameter of the unicode() constructor, with one additional possibility; as well as ‘strict’, ‘ignore’, and
‘replace’, you can also pass ‘xmlcharrefreplace’ which uses XML’s character references. The following example
shows the different results:
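(A minimal sketch of such a session in Python 2; the exact bytes depend on the code points chosen, and the error wording may vary slightly.)

>>> u = unichr(40960) + u'abcd' + unichr(1972)
>>> u.encode('utf-8')
'\xea\x80\x80abcd\xde\xb4'
>>> u.encode('ascii')
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character u'\ua000' in position 0:
ordinal not in range(128)
>>> u.encode('ascii', 'ignore')
'abcd'
>>> u.encode('ascii', 'replace')
'?abcd?'
>>> u.encode('ascii', 'xmlcharrefreplace')
'&#40960;abcd&#1972;'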