Monday, 10 July 2017

Computer Science Internationalization - Unicode Encoding & Decoding

Several years ago I devised this visual and fun way to teach and practise encoding and decoding Unicode. I used it in my International Computing class; all it requires is a pencil and an eraser.

The codepoints and the UTF-8 are all written in hexadecimal (hex). The binary bits are an intermediate form used during encoding and decoding.

We start with the following form which is designed for encoding Unicode codepoints to UTF-8 and decoding UTF-8 to Unicode codepoints.
Encoding: We will start with encoding Unicode codepoints to UTF-8.

The first thing we can do is fill in the fixed bits, which are the bits defined by the UTF-8 encoding scheme. I have entered the fixed bits in red to make them distinct from the variable bits.
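For reference, these are the fixed-bit patterns of the UTF-8 byte templates, with x marking a free (variable) bit:

    1 byte:  0xxxxxxx
    2 bytes: 110xxxxx 10xxxxxx
    3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
    4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx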
Now we will write one or more Unicode codepoints on the form. These will be the codepoints we will encode into UTF-8. The codepoints should be written in hexadecimal. I will use the codepoints U+0444 and U+597D.

So, how do we determine where the codepoints go on the form? We need to look at the free bits to determine the range of values each row can accommodate.

  • 1-byte row - 7 free variable bits, giving a range of 0 ➔ 7F
  • 2-byte row - 11 free variable bits, giving a range of 80 ➔ 7FF
  • 3-byte row - 16 free variable bits, giving a range of 800 ➔ FFFF
  • 4-byte row - 21 free variable bits, giving a range of 10000 ➔ 1FFFFF (the actual maximum value of a codepoint is 10FFFF)
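These boundaries can be sanity-checked with Python's built-in encoder; the handful of codepoints below are just my choice of edge cases:

    for cp in (0x7F, 0x80, 0x7FF, 0x800, 0xFFFF, 0x10000, 0x10FFFF):
        print(f"U+{cp:04X} -> {len(chr(cp).encode('utf-8'))} byte(s)")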
Now that we know the ranges, we can put U+0444 and U+597D in the correct places on the form.

We have empty boxes into which we write the binary values of the codepoints.
Finally, we take the complete bytes and write them as hexadecimal values to form the UTF-8 encoded forms. U+0444 encoded is D184, U+597D encoded is E5A5BD.
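If you want to check the pencil-and-paper result by machine, here is a minimal Python sketch of the same bit manipulation (encode_utf8 is just an illustrative name, not a standard function; Python's own str.encode does the same job):

    def encode_utf8(cp):
        # Pick a row by range, then splice the codepoint's bits into the free slots.
        if cp <= 0x7F:
            return bytes([cp])                        # 0xxxxxxx
        if cp <= 0x7FF:
            return bytes([0xC0 | (cp >> 6),           # 110xxxxx
                          0x80 | (cp & 0x3F)])        # 10xxxxxx
        if cp <= 0xFFFF:
            return bytes([0xE0 | (cp >> 12),          # 1110xxxx
                          0x80 | ((cp >> 6) & 0x3F),
                          0x80 | (cp & 0x3F)])
        return bytes([0xF0 | (cp >> 18),              # 11110xxx
                      0x80 | ((cp >> 12) & 0x3F),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])

    print(encode_utf8(0x0444).hex().upper())   # D184
    print(encode_utf8(0x597D).hex().upper())   # E5A5BD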
Decoding: Now onto decoding from UTF-8 to Unicode codepoints. We will decode the UTF-8 sequence F0AA9FB7, which I have entered onto the form. I have used spaces on the form to make the byte boundaries more obvious.
Complete the bytes by writing the binary variable values.
Extract the variable binary values to form the hex Unicode codepoint U+2A7F7.
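The reverse direction can be sketched in Python too, stripping the fixed bits and concatenating the free bits (again, decode_utf8 is just my illustrative name):

    def decode_utf8(b):
        # Decode the first UTF-8 sequence in b back to a codepoint.
        first = b[0]
        if first < 0x80:                       # 0xxxxxxx - a single ASCII byte
            return first
        if first < 0xE0:                       # 110xxxxx
            n, cp = 2, first & 0x1F
        elif first < 0xF0:                     # 1110xxxx
            n, cp = 3, first & 0x0F
        else:                                  # 11110xxx
            n, cp = 4, first & 0x07
        for byte in b[1:n]:                    # 10xxxxxx continuation bytes
            cp = (cp << 6) | (byte & 0x3F)
        return cp

    print(hex(decode_utf8(bytes.fromhex("F0AA9FB7"))))   # 0x2a7f7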
Whilst I was at it, I completed a single-byte entry. The single-byte characters are the ASCII characters; ASCII is a subset of Unicode.

It is a Unicode convention, when writing codepoints, to use a minimum of four hex digits, so codepoints below hex 1000 should be left-padded with zeroes. Hence my entries U+0444 and U+0057 rather than U+444 and U+57.
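In Python, that padding convention can be reproduced with a simple format specification (purely as an illustration):

    for cp in (0x444, 0x57, 0x2A7F7):
        print(f"U+{cp:04X}")   # prints U+0444, U+0057 and U+2A7F7 on separate lines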