These three classes use fixed-width character representations in memory
(1 byte, 2 bytes, and 4 bytes per character, respectively) for efficient
storage and efficient character access (i.e. using #at:). Unicode16
strings are _not_ stored in memory using Utf16 encoding, and so on ...
Unicode7 maps to the 7-bit ASCII character set (code points <= 127),
Unicode16 maps to characters with code points <= 65535, and Unicode32
covers characters with code points > 65535 ...
To put it another way, when you send #asString to a Utf8 instance you
may end up with a Unicode7, Unicode16, or Unicode32 instance, depending
on the largest code point among the characters in the string ...
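The selection rule above can be sketched outside of Smalltalk. The
following Python snippet is purely illustrative (GemStone's actual
#asString is implemented in the Smalltalk image, and the helper names
here are invented for this sketch); it decodes UTF-8 bytes and then
picks the narrowest class name based on the largest code point, using
the thresholds given above:

```python
# Illustrative sketch only -- not GemStone code. Mimics the idea that
# #asString on a Utf8 instance yields the narrowest fixed-width class
# that can hold every character in the string.
def narrowest_unicode_class(utf8_bytes: bytes) -> str:
    s = utf8_bytes.decode("utf-8")
    # Largest code point in the string; an empty string is treated as
    # fitting in Unicode7 (an assumption made for this sketch).
    max_cp = max((ord(ch) for ch in s), default=0)
    if max_cp <= 127:
        return "Unicode7"   # 1 byte per character (7-bit ASCII)
    elif max_cp <= 65535:
        return "Unicode16"  # 2 bytes per character
    else:
        return "Unicode32"  # 4 bytes per character

print(narrowest_unicode_class("hello".encode("utf-8")))         # Unicode7
print(narrowest_unicode_class("héllo".encode("utf-8")))         # Unicode16
print(narrowest_unicode_class("hi \U0001F600".encode("utf-8"))) # Unicode32
```

Note that the classes are cumulative in range but the conversion picks
the narrowest one: an all-ASCII string comes back as Unicode7 even
though Unicode16 and Unicode32 could also represent it.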