UTF-8 vs UTF-16 and UTF-32 conversion confusion - javascript

I'm kind of confused about the conversion of Unicode characters into hexadecimal values.
I'm using this website to get hexadecimal value for characters. (https://www.branah.com/unicode-converter)
If I put "A" and convert then I get something like:
0041 --> UTF-16
00000041 --> UTF-32
41 --> UTF-8
00065 --> Decimal Value
The output above makes sense because all of these hexadecimal values convert to 65.
Now, if I put "ะฏ" (without quotes) and convert it, I get values like:
042f --> UTF-16
0000042f --> UTF-32
d0af --> UTF-8
01071 --> Decimal Value
This output doesn't make sense to me because not all of these hexadecimal values convert back to 1071.
If you take d0af and try to convert it back to a decimal value, you get 53423.
This is really confusing for me, and I've been searching online for answers about this conversion, but so far I haven't found a good one.
So I'm wondering if anyone here can help (that would mean a lot). Thanks in advance.
You can also see the link below for an example of this conversion in binary. (And can you explain why the UTF-8 binary value is different in the last example?)
http://kunststube.net/encoding/

UTF-8 uses a variable-length encoding (it can use 1, 2, 3 or 4 bytes to store a single character).
In this case:
d0af = 11010000 10101111
The leading 110 tells us to expect 2 bytes in total when decoding (see the byte 1 column of the usual UTF-8 schematic). When decoding, we use the binary digits that follow the first 0 in each byte. So in
110xxxxx the x's are the first chunk of bits of our actual Unicode value. Every additional (continuation) byte follows the pattern 10xxxxxx. Taking the payload bits from bytes 1 & 2 we get:
110[10000] 10[101111]
   10000      101111 -> 10000101111 = 0x42f = 1071
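To make this concrete, here is a minimal JavaScript sketch that reproduces the decoding by hand, assuming a runtime with the standard TextEncoder API (modern browsers and Node):
// Encode "ะฏ" to its UTF-8 bytes
const bytes = new TextEncoder().encode('ะฏ');
console.log([...bytes].map(b => b.toString(16))); // ["d0", "af"]
// Combine the payload bits of the 2-byte sequence 110xxxxx 10yyyyyy -> xxxxxyyyyyy
const codePoint = ((bytes[0] & 0b00011111) << 6) | (bytes[1] & 0b00111111);
console.log(codePoint, codePoint.toString(16)); // 1071 "42f"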
The reason this is done is that common characters need fewer bytes for transmission and storage, but on the odd occasion that an uncommon character is needed, it can still be represented as part of UTF-8.
If you have any questions, please comment.

Related

Why is it that JavaScript strings use UTF-16, but one character's actual size can be just one byte?

According to this article:
Internally, JavaScript source code is treated as a sequence of UTF-16 code units.
And this IBM doc says that:
UTF-16 is based on 16-bit code units. Therefore, each character can be 16 bits (2 bytes) or 32 bits (4 bytes).
But I tested in Chrome's console and English letters only take 1 byte, not 2 or 4.
new Blob(['a']).size === 1
I wonder why that is the case? Am I missing something here?
Internally, JavaScript source code is treated as a sequence of UTF-16 code units.
Note that this is referring to source code, not String values. The article later states that String values are also UTF-16:
When a String contains actual textual data, each element is considered to be a single UTF-16 code unit.
The discrepancy here is actually in the Blob constructor. From MDN:
Note that strings here are encoded as UTF-8, unlike the usual JavaScript UTF-16 strings.
UTF-8 has a varying character size.
a has a size of 1 byte, but ฤ…, for example, has 2.
console.log('a', new Blob(['a']).size)
console.log('ฤ…', new Blob(['ฤ…']).size)
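In other words, .length counts UTF-16 code units while Blob (and TextEncoder) measure UTF-8 bytes. A rough sketch of the difference, assuming a runtime with TextEncoder:
for (const s of ['a', 'ฤ…', 'ะฏ', '๐Ÿ’ฉ']) {
  // .length = UTF-16 code units; TextEncoder().encode() = UTF-8 bytes
  console.log(s, 'code units:', s.length, 'UTF-8 bytes:', new TextEncoder().encode(s).length);
}
// a: 1 / 1, ฤ…: 1 / 2, ะฏ: 1 / 2, ๐Ÿ’ฉ: 2 / 4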

Why do code points between U+D800 and U+DBFF generate a one-length string in ECMAScript 6?

I'm getting too confused. Why do code points from U+D800 to U+DBFF encode as a single (2 bytes) String element, when using the ECMAScript 6 native Unicode helpers?
I'm not asking how JavaScript/ECMAScript encodes Strings natively, I'm asking about an extra functionality to encode UTF-16 that makes use of UCS-2.
var str1 = '\u{D800}';
var str2 = String.fromCodePoint(0xD800);
console.log(
str1.length, str1.charCodeAt(0), str1.charCodeAt(1)
);
console.log(
str2.length, str2.charCodeAt(0), str2.charCodeAt(1)
);
Re-TL;DR: I want to know why the above approaches return a string of length 1. Shouldn't U+D800 generate a 2 length string, since my browser's ES6 implementation incorporates UCS-2 encoding in strings, which uses 2 bytes for each character code?
Both of these approaches return a one-element String for the U+D800 code point (char code: 55296, same as 0xD800). But for code points bigger than U+FFFF each one returns a two-element String, the lead and the trail. The lead is a number between U+D800 and U+DBFF; the trail I'm not sure about, I only know it helps determine the resulting code point. For me the return value doesn't make sense: it represents a lead without a trail. Am I understanding something wrong?
I think your confusion is about how Unicode encodings work in general, so let me try to explain.
Unicode itself just specifies a list of characters, called "code points", in a particular order. It doesn't tell you how to convert those to bits, it just gives them all a number between 0 and 1114111 (in hexadecimal, 0x10FFFF). There are several different ways these numbers from U+0 to U+10FFFF can be represented as bits.
In an earlier version, it was expected that a range of 0 to 65535 (0xFFFF) would be enough. This can be naturally represented in 16 bits, using the same convention as an unsigned integer. This was the original way of storing Unicode, and is now known as UCS-2. To store a single code point, you reserve 16 bits of memory.
Later, it was decided that this range was not large enough; this meant that there were code points higher than 65535, which you can't represent in a 16-bit piece of memory. UTF-16 was invented as a clever way of storing these higher code points. It works by saying "if you look at a 16-bit piece of memory, and it's a number between 0xD800 and 0xDBFF (a "high surrogate", also called a lead surrogate), then you need to look at the next 16 bits of memory as well". Any piece of code which is performing this extra check is processing its data as UTF-16, and not UCS-2.
It's important to understand that the memory itself doesn't "know" which encoding it's in, the difference between UCS-2 and UTF-16 is how you interpret that memory. When you write a piece of software, you have to choose which interpretation you're going to use.
Now, onto Javascript...
Javascript handles input and output of strings by interpreting its internal representation as UTF-16. That's great, it means that you can type in and display the famous ๐Ÿ’ฉ character, which can't be stored in one 16-bit piece of memory.
The problem is that most of the built-in string functions actually handle the data as UCS-2 - that is, they look at 16 bits at a time, and don't care if what they see is a special "surrogate". The function you used, charCodeAt(), is an example of this: it reads 16 bits out of memory, and gives them to you as a number between 0 and 65535. If you feed it ๐Ÿ’ฉ, it will just give you back the first 16 bits; ask it for the next "character" after, and it will give you the second 16 bits (which will be a "low surrogate", between 0xDC00 and 0xDFFF).
In ECMAScript 6 (2015), a new function was added: codePointAt(). Instead of just looking at 16 bits and giving them to you, this function checks if they represent one of the UTF-16 surrogate code units, and if so, looks for the "other half" - so it gives you a number between 0 and 1114111. If you feed it ๐Ÿ’ฉ, it will correctly give you 128169.
var poop = '๐Ÿ’ฉ';
console.log('Treat it as UCS-2, two 16-bit numbers: ' + poop.charCodeAt(0) + ' and ' + poop.charCodeAt(1));
console.log('Treat it as UTF-16, one value cleverly encoded in 32 bits: ' + poop.codePointAt(0));
// The surrogates are 55357 and 56489, which encode 128169 as follows:
// 0x010000 + ((55357 - 0xD800) << 10) + (56489 - 0xDC00) = 128169
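The comment above is exactly the UTF-16 surrogate-pair formula. As a small illustrative sketch (the helper name combineSurrogates is made up here):
// Combine a high (lead) and low (trail) surrogate into one code point
function combineSurrogates(high, low) {
  return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00);
}
console.log(combineSurrogates(55357, 56489)); // 128169, i.e. U+1F4A9 ๐Ÿ’ฉ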
Your edited question now asks this:
I want to know why the above approaches return a string of length 1. Shouldn't U+D800 generate a 2 length string?
The hexadecimal value D800 is 55296 in decimal, which is less than 65536, so given everything I've said above, this fits fine in 16 bits of memory. So if we ask charCodeAt to read 16 bits of memory, and it finds that number there, it's not going to have a problem.
Similarly, the .length property measures how many sets of 16 bits there are in the string. Since this string is stored in 16 bits of memory, there is no reason to expect any length other than 1.
The only unusual thing about this number is that in Unicode, that value is reserved - there isn't, and never will be, a character U+D800. That's because it's one of the magic numbers that tells a UTF-16 algorithm "this is only half a character". So a possible behaviour would be for any attempt to create this string to simply be an error - like opening a pair of brackets that you never close, it's unbalanced, incomplete.
The only way you could end up with a string of length 2 is if the engine somehow guessed what the second half should be; but how would it know? There are 1024 possibilities, from 0xDC00 to 0xDFFF, which could be plugged into the formula I show above. So it doesn't guess, and since it doesn't error, the string you get is 16 bits long.
Of course, you can supply the matching halves, and codePointAt will interpret them for you.
// Set up two 16-bit pieces of memory
var high=String.fromCharCode(55357), low=String.fromCharCode(56489);
// Note: String.fromCodePoint will give the same answer
// Glue them together (this + is string concatenation, not number addition)
var poop = high + low;
// Read out the memory as UTF-16
console.log(poop);
console.log(poop.codePointAt(0));
Well, it does this because the specification says it has to:
http://www.ecma-international.org/ecma-262/6.0/#sec-string.fromcodepoint
http://www.ecma-international.org/ecma-262/6.0/#sec-utf16encoding
Together these two say that if an argument is < 0 or > 0x10FFFF, a RangeError is thrown, but otherwise any codepoint <= 65535 is incorporated into the result string as-is.
As for why things are specified this way, I don't know. It seems like JavaScript doesn't really support Unicode, only UCS-2.
Unicode.org has the following to say on the matter:
http://www.unicode.org/faq/utf_bom.html#utf16-2
Q: What are surrogates?
A: Surrogates are code points from two special ranges of Unicode values, reserved for use as the leading, and trailing values of paired code units in UTF-16. Leading, also called high, surrogates are from D800₁₆ to DBFF₁₆, and trailing, or low, surrogates are from DC00₁₆ to DFFF₁₆. They are called surrogates, since they do not represent characters directly, but only as a pair.
http://www.unicode.org/faq/utf_bom.html#utf16-7
Q: Are there any 16-bit values that are invalid?
A: Unpaired surrogates are invalid in UTFs. These include any value in the range D800₁₆ to DBFF₁₆ not followed by a value in the range DC00₁₆ to DFFF₁₆, or any value in the range DC00₁₆ to DFFF₁₆ not preceded by a value in the range D800₁₆ to DBFF₁₆.
Therefore the result of String.fromCodePoint is not always valid UTF-16 because it can emit unpaired surrogates.
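If you need to detect such unpaired surrogates, one possible sketch is a regular expression over the code units (this assumes an engine with lookbehind support; newer engines also offer String.prototype.isWellFormed() for the same check):
// True if the string contains a lone (unpaired) surrogate code unit
const hasLoneSurrogate = s =>
  /[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?<![\uD800-\uDBFF])[\uDC00-\uDFFF]/.test(s);
console.log(hasLoneSurrogate(String.fromCodePoint(0xD800))); // true  (not valid UTF-16)
console.log(hasLoneSurrogate('๐Ÿ’ฉ'));                          // false (a proper pair)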

Difference between codePointAt and charCodeAt

What is the difference between String.prototype.codePointAt() and String.prototype.charCodeAt() in JavaScript?
'A'.codePointAt(); // 65
'A'.charCodeAt(); // 65
From the MDN page on charCodeAt:
The charCodeAt() method returns an integer between 0 and 65535 representing the UTF-16 code unit at the given index.
The UTF-16 code unit matches the Unicode code point for code points which can be represented in a single UTF-16 code unit. If the Unicode code point cannot be represented in a single UTF-16 code unit (because its value is greater than 0xFFFF) then the code unit returned will be the first part of a surrogate pair for the code point. If you want the entire code point value, use codePointAt().
TLDR;
charCodeAt() is UTF-16
codePointAt() is Unicode.
To add a few examples to ToxicTeacakes's answer, here is another one to help you see the difference:
"๐ ฎท".charCodeAt(0).toString(16);//d842
"๐ ฎท".charCodeAt(1).toString(16);//dfb7
"๐ ฎท".codePointAt(0);//20bb7
"๐ ฎท".codePointAt(1);//dfb7
console.log("\ud842\udfb7");//๐ ฎท, an example of hexadecimal digits
console.log("\u20bb7\udfb7");//โ‚ป7๏ฟฝ
console.log("\u{20bb7}");//๐ ฎท an unicode code point escapes the "\ud842\udfb7"
The following is the info about javascript string literals:
"\uXXXX"
The Unicode character specified by the four hexadecimal digits
XXXX. For example, \u00A9 is the Unicode sequence for the copyright
symbol.
"\u{XXXXX}"
Unicode code point
escapes. For example, \u{2F804} is the same as the simple Unicode
escapes \uD87E\uDC04.
see also msdn
Example in JS
In the example with strings and emojis below, I illustrate how things can go wrong when you do not know that some characters consist of 2 code units. Some characters take up more than one code unit. Prefer codePointAt() over charCodeAt(), or use the latter only if you are sure that your characters all lie between 0 and 65535 (2^16 - 1).
more about code units here
// charCodeAt() is UTF-16
// codePointAt() is Unicode
/* UTF-16 is generally considered a bad idea today */
const strings = ["o", "four", "to"];
const emojis = ["๐ŸŽ", "๐Ÿ‘Ÿ"];
function printItemsLength(arr) {
for (const item of arr) {
console.log(item, item.length);
}
}
printItemsLength(strings);
console.log('================================');
printItemsLength(emojis);
console.log('================================');
console.log("i.charCodeAt(0)", "i".charCodeAt(0)); // 105
console.log("i.charCodeAt(1)", "i".charCodeAt(1)); // 105
console.log("i.codePointAt(0)", "i".codePointAt(0)); // 105
console.log('=============EMOJIS=============');
// getting the decimal (dec) by which you can find them
console.log('===========charCodeAt===========');
// "surrogate pair"
console.log(emojis[0] + '.charCodeAt(0)', emojis[0].charCodeAt(0)); // only half a character - 55357 (high surrogate)
console.log(emojis[0] + '.charCodeAt(1)', emojis[0].charCodeAt(1)); // only half a character - 56334 (low surrogate)
console.log('===========codePointAt===========');
console.log(emojis[0] + '.codePointAt(0)', emojis[0].codePointAt(0)); // 128014
console.log('===========charCodeAt===========');
// "surrogate pair"
console.log(emojis[1] + '.charCodeAt(0)', emojis[1].charCodeAt(0)); // only half a character - 55357 (high surrogate)
console.log(emojis[1] + '.charCodeAt(1)', emojis[1].charCodeAt(1)); // only half a character - 56415 (low surrogate)
console.log('===========codePointAt===========');
// full-character
console.log(emojis[1] + '.codePointAt(0)', emojis[1].codePointAt(0)); // 128095
console.log(emojis[1] + '.codePointAt(1)', emojis[1].codePointAt(1)); // will return the low surrogate, 56415 (not displayable on its own)
// to find this emojis have a look here: https://www.w3schools.com/charsets/ref_emoji.asp
As you may have noticed, I tried to convert back from the char code to the emoji, and it did not work for one of the values (that is because a lone surrogate code unit is not a valid character on its own).
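A small sketch of that round trip with the horse emoji ๐ŸŽ (U+1F40E) from above:
console.log(String.fromCharCode(55357));        // "๏ฟฝ" - a lone high surrogate, not the emoji
console.log(String.fromCodePoint(128014));      // "๐ŸŽ" - the full code point works
console.log(String.fromCharCode(55357, 56334)); // "๐ŸŽ" - or supply both halves of the pair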
Introduction to Unicode and UTF-16
Please skip this section if you are already familiar with it.
Unicode is a set of characters used around the world. UTF-16 encodes each character as one or two 16-bit code units: 00000000 00100100 for "$" (one 16-bit unit); 11011000 01010010 11011111 01100010 for "๐คญข" (two 16-bit units).
read more
"surrogate pair" characters are emoji and some letters that consist of more than 1 character as it is explained here
The term "surrogate pair" refers to a means of encoding Unicode
characters with high code-points in the UTF-16 encoding scheme. In the
Unicode character encoding, characters are mapped to values between
0x0 and 0x10FFFF.
read more
Unicode - It assigns every character a unique number called a code point.
Differentiating charCodeAt() from codePointAt()
charCodeAt(pos) returns a code unit (not necessarily a full character).
If you need a character (that could be either one or two code units), you can use codePointAt(pos) to get its code.
charCodeAt() - returns an integer between 0 and 65535 representing the UTF-16 code unit at the given index link
codePointAt() - returns a non-negative integer that is the Unicode code point value at the given position link
where pos is the index of the character you want to check.
Quote from the book:
UTF-16 is generally considered a bad idea today. It seems almost
intentionally designed to invite mistakes. Itโ€™s easy to write programs
that pretend code units and characters are the same things.
read more
jsfiddle sandbox
Sources:
What is Unicode, UTF-8, UTF-16?
Marijn Haverbeke, Eloquent JavaScript, 3rd Edition: A Modern Introduction to Programming. No Starch Press, 2018. 447 p. (can be found here)
What is "surrogate pair"
To find these emojis, have a look at w3schools.com/charsets/ref_emoji
Chapter 5, p. 91 => Strings and character codes

How to correctly use Unicode and UTF-8 special characters in Node and browser js?

So I have this character:
๐Ÿ€€
MAHJONG TILE EAST WIND, which has the code point U+1F000 (surrogate pair U+D83C U+DC00) and the UTF-8 encoding F0 9F 80 80.
My question is how to I escape this in javascript?
I see \uff00 all the time, but that is for ASCII, as 8 bits will only take you up to 255. Just putting '\u1F000' returns the (incorrect) 'แผ€0', and trying to fill in the extra digits with 0s just returns '\u0001F000'. How do I escape values that are higher (such as my character above)?
And how do I escape not just the Unicode point but also the UTF-8 encoding?
Adding on to this, I have noticed that the Node REPL is able to show many Unicode values but not some (such as emoji), even when my terminal window (Mac) normally can. Is there any rhyme or reason to this?
You can escape the character using two \uXXXX escapes (a surrogate pair) for values above U+FFFF.
To use UTF-8 strings look into typed arrays and TextEncoder / TextDecoder. They are fairly new so you may need to use polyfill in some browsers.
Example
document.write('<h1>\uD83C\uDC00</h1>');
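For the UTF-8 side, a minimal sketch with TextEncoder / TextDecoder (available in modern browsers and Node):
// Encode the (internally UTF-16) string into UTF-8 bytes...
const bytes = new TextEncoder().encode('\uD83C\uDC00');
console.log([...bytes].map(b => b.toString(16))); // ["f0", "9f", "80", "80"]
// ...and decode UTF-8 bytes back into a JavaScript string
console.log(new TextDecoder().decode(bytes)); // "๐Ÿ€€"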
JavaScript does not support UTF-8 strings. All JavaScript strings are UCS-2 (but it supports UTF-16-style surrogate pairs). You can escape astral plane characters with two 16-bit characters: "\ud83c\udc00".
"๐Ÿ€€".charCodeAt(0).toString(16)
// => "d83c"
"๐Ÿ€€".charCodeAt(1).toString(16)
// => "dc00"
console.log("\ud83c\udc00")
// => ๐Ÿ€€
This also means that JavaScript doesn't know how to get the correct length of strings containing astrals, and that any indexing or substringing has a chance of being wrong:
"๐Ÿ€€".length
// => 2
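If you need a character count that treats a surrogate pair as one character, you can iterate the string instead of using .length, since the string iterator is code-point aware:
// The spread operator and Array.from iterate by code points, not code units
console.log([..."๐Ÿ€€"].length);          // 1
console.log(Array.from("a๐Ÿ€€b").length); // 3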

How to convert a long string of numbers to a short string?

I'm building a grid where each box has a different color value using HTML & Javascript.
Example: A 5x5 grid where I'd have 25 values to keep track of. I'd like to be able to send the grid's information via a 'grid' parameter so when any user wants to view this specific grid, they'd see the fully drawn grid. Initially, the URL would be: www.mysite.com?grid=0123401234012340123401234 Ideally, the parameter would be under 25 characters.
How would I go about converting '0123401234012340123401234' to a smaller string? Is it best using a compression algorithm or just using decimal to hex conversion?
You can just use a higher base. Something like 30, for example: (123401234012340123401234).toString(30) --> "8i015nb02ib0c4ik". Note that, for this to work properly, the 123401234012340123401234 has to be a number rather than a string; and because a value that long exceeds Number.MAX_SAFE_INTEGER, in practice you would want a BigInt (123401234012340123401234n) so no precision is lost.
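A possible sketch of that idea using BigInt, so the long digit string survives exactly (the helper names are just for illustration):
// Re-encode a decimal digit string in base 36, and reverse it
function compress(gridDigits) {
  return BigInt(gridDigits).toString(36);
}
function decompress(encoded, length) {
  const n = [...encoded].reduce((acc, ch) => acc * 36n + BigInt(parseInt(ch, 36)), 0n);
  return n.toString().padStart(length, '0'); // restore any leading zeros
}
const grid = '0123401234012340123401234';
const short = compress(grid);
console.log(short, decompress(short, grid.length) === grid); // shorter string, true
Since the example grid only uses the digits 0-4, you could squeeze it further by first interpreting the string as a base-5 number before re-encoding.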
If each cell contains a color, that means each cell needs 3 bytes (or 4 if you include changes in the alpha channel). This means that you need 6 hexadecimal characters for each cell if you were encoding in hexadecimal.
This isn't optimal, so I suggest using a-zA-Z0-9, which gives 26 + 26 + 10 = 62 characters. Then use the two characters - and _ as well, to get a full 64 characters. Using hexadecimal, you needed 6 characters to encode the 2^24 possible values (or 2^32 if you are using the alpha channel).
With base-64 characters, you only need 4 characters per cell. The reason I mention this is that a-z, A-Z, 0-9, - and _ are all valid URL characters.
All of this is assuming of course you are using all the colors. If you limit the domain, you can reduce the number of characters per cell drastically, but you should still use the same encoding scheme.
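A rough sketch of that per-cell packing, assuming 24-bit RGB values and the URL-safe alphabet described above:
const ALPHABET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_';
// Pack one 24-bit color (0x000000 - 0xFFFFFF) into 4 URL-safe characters
function encodeCell(color) {
  let out = '';
  for (let shift = 18; shift >= 0; shift -= 6) out += ALPHABET[(color >> shift) & 0x3F];
  return out;
}
function decodeCell(chars) {
  return [...chars].reduce((acc, ch) => (acc << 6) | ALPHABET.indexOf(ch), 0);
}
console.log(encodeCell(0xFF8800));                           // 4 characters per cell
console.log(decodeCell(encodeCell(0xFF8800)).toString(16));  // "ff8800"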
