I've read in some places that JavaScript strings are UTF-16, and in other places they're UCS-2. I did some searching around to try to figure out the difference and found this:
Q: What is the difference between UCS-2 and UTF-16?
A: UCS-2 is obsolete terminology which refers to a Unicode
implementation up to Unicode 1.1, before surrogate code points and
UTF-16 were added to Version 2.0 of the standard. This term should now
be avoided.
UCS-2 does not define a distinct data format, because UTF-16 and UCS-2
are identical for purposes of data exchange. Both are 16-bit, and have
exactly the same code unit representation.
Sometimes in the past an implementation has been labeled "UCS-2" to
indicate that it does not support supplementary characters and doesn't
interpret pairs of surrogate code points as characters. Such an
implementation would not handle processing of character properties,
code point boundaries, collation, etc. for supplementary characters.
via: http://www.unicode.org/faq/utf_bom.html#utf16-11
So my question is: is it the fact that the JavaScript string object's methods and indexes act on 16-bit data values instead of characters that makes some people consider it UCS-2? And if so, would a JavaScript string object oriented around characters instead of 16-bit data chunks be considered UTF-16? Or is there something else I'm missing?
Edit: As requested, here are some sources saying JavaScript strings are UCS-2:
http://blog.mozilla.com/nnethercote/2011/07/01/faster-javascript-parsing/
http://terenceyim.wordpress.com/tag/ucs2/
EDIT: For anyone who may come across this, be sure to check out this link:
http://mathiasbynens.be/notes/javascript-encoding
JavaScript, or strictly speaking ECMAScript, pre-dates Unicode 2.0, so in some cases you may find references to UCS-2 simply because that was correct at the time the reference was written. Can you point us to specific citations of JavaScript being "UCS-2"?
Specifications for ECMAScript versions 3 and 5 at least both explicitly declare a String to be a collection of unsigned 16-bit integers and that if those integer values are meant to represent textual data, then they are UTF-16 code units. See
section 8.4 of the ECMAScript Language Specification in version 5.1
or section 6.1.4 in version 13.0.
EDIT: I'm no longer sure my answer is entirely correct. See the excellent article mentioned above, which in essence says that while a JavaScript engine may use UTF-16 internally, and most do, the language itself effectively exposes those characters as if they were UCS-2.
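A minimal sketch of what "exposed as if they were UCS-2" means in practice (a hypothetical snippet, not taken from the linked article): the built-in indexing sees two 16-bit code units for a character outside the BMP.
// U+1F4A9 lies outside the BMP, so it is stored as a surrogate pair
var poop = "\uD83D\uDCA9";
poop.length // 2, counts 16-bit code units, not characters
poop.charCodeAt(0).toString(16) // "d83d", the high surrogate
poop.charCodeAt(1).toString(16) // "dca9", the low surrogate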
It's UTF-16/UCS-2. It can handle surrogate pairs, but charAt/charCodeAt return a 16-bit code unit rather than the Unicode code point. If you want it to handle surrogate pairs, I suggest a quick read through this.
It's just a 16-bit value with no encoding specified in the ECMAScript standard.
See section 7.8.4 String Literals in this document: http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf
Things have changed since 2012. JavaScript strings are now UTF-16 for real. Yes, the old string methods still work on 16-bit code units, but the language is now aware of UTF-16 surrogates and knows what to do about them if you use the string iterator. There's also Unicode regex support.
// Before
"๐๐๐ฉ".length // 6
// Now
[..."๐๐๐ฉ"].length // 3
[..."๐๐๐ฉ"] // [ '๐', '๐', '๐ฉ' ]
[... "๐๐๐ฉ".matchAll(/./ug) ] // 3 matches as above
// Regexes support unicode character classes
"cafรฉ".normalize("NFD").match(/\p{L}\p{M}/ug) // [ 'eฬ' ]
// Extract code points
[..."๐๐๐ฉ"].map(char => char.codePointAt(0).toString(16)) // [ '1f600', '1f602', '1f4a9' ]
According to this article:
Internally, JavaScript source code is treated as a sequence of UTF-16 code units.
And this IBM doc says that:
UTF-16 is based on 16-bit code units. Therefore, each character can be 16 bits (2 bytes) or 32 bits (4 bytes).
But I tested in Chrome's console, and English letters take only 1 byte, not 2 or 4.
new Blob(['a']).size === 1
I wonder why that is the case? Am I missing something here?
Internally, JavaScript source code is treated as a sequence of UTF-16 code units.
Note that this is referring to source code, not String values. String values are also described as UTF-16 later in the article:
When a String contains actual textual data, each element is considered to be a single UTF-16 code unit.
The discrepancy here is actually in the Blob constructor. From MDN:
Note that strings here are encoded as UTF-8, unlike the usual JavaScript UTF-16 strings.
UTF-8 has a varying character size.
a has a size of 1 byte, but ą, for example, has 2.
console.log('a', new Blob(['a']).size)
console.log('ą', new Blob(['ą']).size)
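If the runtime provides TextEncoder (standard in current browsers and Node, but an assumption nonetheless), the same UTF-8 byte counts can be read directly, without going through a Blob:
const utf8 = new TextEncoder(); // always encodes to UTF-8
console.log(utf8.encode('a').length) // 1 byte
console.log(utf8.encode('ą').length) // 2 bytes
console.log(utf8.encode('💩').length) // 4 bytes, even though '💩'.length is 2 code units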
Is it possible to keep trailing or leading zeroes on a number in javascript, without using e.g. a string instead?
const leading = 003; // literal, leading
const trailing = 0.10; // literal, trailing
const parsed = parseFloat('0.100'); // parsed or somehow converted
console.log(leading, trailing, parsed); // desired: 003 0.10 0.100
This question has been regularly asked (and still is), yet I don't have a place I'd feel comfortable linking to (did I miss it?).
Fully analogously would be keeping any other aspect of the representation a number literal was entered as, although asked nowhere near as often:
console.log(0x10); // 16 instead of potentially desired 0x10
console.log(1e1); // 10 instead of potentially desired 1e1
For disambiguation, this is not about the following topics, for some of which I'll add links, as they might be of interest as well:
Padding to a set amount of digits, formatting to some specific string representation, e.g. How can i pad a value with leading zeroes?, How to output numbers with leading zeros in JavaScript?, How to add a trailing zero to a price
Why a certain string representation will be produced for some number by default, e.g. How does JavaScript determine the number of digits to produce when formatting floating-point values?
Floating point precision/accuracy problems, e.g. console.log(0.1 + 0.2) producing 0.30000000000000004, see Is floating point math broken?, and How to deal with floating point number precision in JavaScript?
No. A number stores no information about the representation it was entered as, or parsed from. It only relates to its mathematical value. Perhaps reconsider using a string after all.
If I had to guess, much of the confusion comes from the thought that numbers and their textual representations are either the same thing, or at least tightly coupled, with some kind of bidirectional binding between them. This is not the case.
The representations like 0.1 and 0.10, which you enter in code, are only used to generate a number. They are convenient names for what you intend to produce, not the resulting value. In this case, they are names for the same number. It has a lot of other aliases, like 0.100, 1e-1, or 10e-2. The actual value contains no information about what it came from or how it was written. The conversion is a one-way street.
When displaying a number as text, by default (Number.prototype.toString), JavaScript uses an algorithm to construct one of the possible representations from the number. It can only use what's available, the number value, which also means it will produce the same result for two equal numbers. This implies that 0.1 and 0.10 will produce the same output.
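For example (nothing engine-specific here, just the default number-to-string conversion):
console.log(0.1 === 0.10); // true, they are the same number
console.log((0.10).toString()); // "0.1"
console.log((1e-1).toString()); // "0.1", another alias of the same value
console.log((0x10).toString()); // "16", the hexadecimal spelling is gone as well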
Concerning the number[1] value, JavaScript uses an IEEE754-2019 float64[2]. When source code is evaluated[3] and a number literal is encountered, the engine converts the mathematical value the literal represents to a 64-bit value, according to IEEE754-2019. This means any information about the original representation in code is lost[4].
There is another problem, which is somewhat unrelated to the main topic. JavaScript used to have an octal notation with a prefix of "0". This means that 003 is parsed as an octal literal, and would throw in strict mode. Similarly, 010 === 8 (or an error in strict mode), see Why JavaScript treats a number as octal if it has a leading zero.
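A short illustration (the legacy octal lines are sloppy-mode only; in strict mode or inside a module they are syntax errors):
console.log(010 === 8); // true, the leading zero triggered the legacy octal interpretation
console.log(003 === 3); // true, but only because 0 and 3 happen to be valid octal digits
console.log(0o10 === 8); // true, the modern, strict-mode-safe octal notation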
In conclusion, when trying to keep information about some representation for a number (including leading or trailing zeroes, whether it was written as decimal, hexadecimal, and so on), a number is not a good choice. For how to achieve some specific representation other than the default, which doesn't need access to the originally entered text (e.g. pad to some amount of digits), there are many other questions/articles, some of which were already linked.
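For completeness, a minimal sketch of producing such representations on demand; the output is a string, computed from the number each time, since the number itself cannot remember them:
const n = 3;
console.log(String(n).padStart(3, '0')); // "003", padded to a fixed width
console.log((0.1).toFixed(2)); // "0.10", fixed number of decimals
console.log((16).toString(16)); // "10", hexadecimal digits without the 0x prefix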
[1]: JavaScript also has BigInt, but while it uses a different format, the reasoning is completely analogous.
[2]: This is a simplification. Engines are allowed to use other formats internally (and do, e.g. to save space/time), as long as they are guaranteed to behave like an IEEE754-2019 float64 in every regard, when observed from JavaScript.
[3]: E.g. V8 would convert to bytecode earlier than evaluation, already exchanging the literal. The only relevant thing is that the information is lost before we could do anything with it.
[4]: JavaScript gives the ability to operate on code itself (e.g. Function.prototype.toString), which I will not discuss here much. Parsing the code yourself, and storing the representation, is an option, but has nothing to do with how number works (you would be operating on code, a string). Also, I don't immediately see any sane reason to do so, over alternatives.
I'm getting too confused. Why do code points from U+D800 to U+DBFF encode as a single (2 bytes) String element, when using the ECMAScript 6 native Unicode helpers?
I'm not asking how JavaScript/ECMAScript encodes Strings natively, I'm asking about an extra functionality to encode UTF-16 that makes use of UCS-2.
var str1 = '\u{D800}';
var str2 = String.fromCodePoint(0xD800);
console.log(
  str1.length, str1.charCodeAt(0), str1.charCodeAt(1)
);
console.log(
  str2.length, str2.charCodeAt(0), str2.charCodeAt(1)
);
Re-TL;DR: I want to know why the above approaches return a string of length 1. Shouldn't U+D800 generate a 2 length string, since my browser's ES6 implementation incorporates UCS-2 encoding in strings, which uses 2 bytes for each character code?
Both of these approaches return a one-element String for the U+D800 code point (char code: 55296, same as 0xD800). But for code points bigger than U+FFFF each one returns a two-element String, the lead and trail. lead would be a number between U+D800 and U+DBFF, and trail I'm not sure about, I only know it helps changing the result code point. For me the return value doesn't make sense, it represents a lead without trail. Am I understanding something wrong?
I think your confusion is about how Unicode encodings work in general, so let me try to explain.
Unicode itself just specifies a list of characters, each identified by a number called a "code point", in a particular order. It doesn't tell you how to convert those to bits, it just gives them all a number between 0 and 1114111 (in hexadecimal, 0x10FFFF). There are several different ways these numbers from U+0 to U+10FFFF can be represented as bits.
In an earlier version, it was expected that a range of 0 to 65535 (0xFFFF) would be enough. This can be naturally represented in 16 bits, using the same convention as an unsigned integer. This was the original way of storing Unicode, and is now known as UCS-2. To store a single code point, you reserve 16 bits of memory.
Later, it was decided that this range was not large enough; this meant that there were code points higher than 65535, which you can't represent in a 16-bit piece of memory. UTF-16 was invented as a clever way of storing these higher code points. It works by saying "if you look at a 16-bit piece of memory, and it's a number between 0xD800 and 0xDBFF (a "high surrogate"), then you need to look at the next 16 bits of memory as well". Any piece of code which is performing this extra check is processing its data as UTF-16, and not UCS-2.
It's important to understand that the memory itself doesn't "know" which encoding it's in, the difference between UCS-2 and UTF-16 is how you interpret that memory. When you write a piece of software, you have to choose which interpretation you're going to use.
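A rough sketch of that interpretation step, assuming the 16-bit units are already available as plain numbers; this mirrors what a UTF-16 reader does, while a UCS-2 reader would skip the surrogate check entirely:
// Decode an array of 16-bit code units as UTF-16, returning code points.
function decodeUtf16(units) {
  var codePoints = [];
  for (var i = 0; i < units.length; i++) {
    var unit = units[i];
    var next = units[i + 1];
    // A high surrogate followed by a low surrogate combines into one code point
    if (unit >= 0xD800 && unit <= 0xDBFF && next >= 0xDC00 && next <= 0xDFFF) {
      codePoints.push(0x10000 + ((unit - 0xD800) << 10) + (next - 0xDC00));
      i++; // the second unit of the pair has been consumed
    } else {
      codePoints.push(unit); // a BMP value (or an unpaired surrogate, passed through)
    }
  }
  return codePoints;
}
console.log(decodeUtf16([0xD83D, 0xDCA9])); // [ 128169 ], i.e. U+1F4A9
console.log(decodeUtf16([0x0061])); // [ 97 ], i.e. "a"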
Now, onto Javascript...
Javascript handles input and output of strings by interpreting its internal representation as UTF-16. That's great, it means that you can type in and display the famous 💩 character, which can't be stored in one 16-bit piece of memory.
The problem is that most of the built-in string functions actually handle the data as UCS-2 - that is, they look at 16 bits at a time, and don't care if what they see is a special "surrogate". The function you used, charCodeAt(), is an example of this: it reads 16 bits out of memory, and gives them to you as a number between 0 and 65535. If you feed it 💩, it will just give you back the first 16 bits; ask it for the next "character" after, and it will give you the second 16 bits (which will be a "low surrogate", between 0xDC00 and 0xDFFF).
In ECMAScript 6 (2015), a new function was added: codePointAt(). Instead of just looking at 16 bits and giving them to you, this function checks if they represent one of the UTF-16 surrogate code units, and if so, looks for the "other half" - so it gives you a number between 0 and 1114111. If you feed it 💩, it will correctly give you 128169.
var poop = '💩';
console.log('Treat it as UCS-2, two 16-bit numbers: ' + poop.charCodeAt(0) + ' and ' + poop.charCodeAt(1));
console.log('Treat it as UTF-16, one value cleverly encoded in 32 bits: ' + poop.codePointAt(0));
// The surrogates are 55357 and 56489, which encode 128169 as follows:
// 0x010000 + ((55357 - 0xD800) << 10) + (56489 - 0xDC00) = 128169
Your edited question now asks this:
I want to know why the above approaches return a string of length 1. Shouldn't U+D800 generate a 2 length string?
The hexadecimal value D800 is 55296 in decimal, which is less than 65536, so given everything I've said above, this fits fine in 16 bits of memory. So if we ask charCodeAt to read 16 bits of memory, and it finds that number there, it's not going to have a problem.
Similarly, the .length property measures how many sets of 16 bits there are in the string. Since this string is stored in 16 bits of memory, there is no reason to expect any length other than 1.
The only unusual thing about this number is that in Unicode, that value is reserved - there isn't, and never will be, a character U+D800. That's because it's one of the magic numbers that tells a UTF-16 algorithm "this is only half a character". So a possible behaviour would be for any attempt to create this string to simply be an error - like opening a pair of brackets that you never close, it's unbalanced, incomplete.
The only way you could end up with a string of length 2 is if the engine somehow guessed what the second half should be; but how would it know? There are 1024 possibilities, from 0xDC00 to 0xDFFF, which could be plugged into the formula I show above. So it doesn't guess, and since it doesn't error, the string you get is 16 bits long.
Of course, you can supply the matching halves, and codePointAt will interpret them for you.
// Set up two 16-bit pieces of memory
var high=String.fromCharCode(55357), low=String.fromCharCode(56489);
// Note: String.fromCodePoint will give the same answer
// Glue them together (this + is string concatenation, not number addition)
var poop = high + low;
// Read out the memory as UTF-16
console.log(poop);
console.log(poop.codePointAt(0));
Well, it does this because the specification says it has to:
http://www.ecma-international.org/ecma-262/6.0/#sec-string.fromcodepoint
http://www.ecma-international.org/ecma-262/6.0/#sec-utf16encoding
Together these two say that if an argument is < 0 or > 0x10FFFF, a RangeError is thrown, but otherwise any codepoint <= 65535 is incorporated into the result string as-is.
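A quick check of those two rules (a sketch; the exact error message varies by engine):
console.log(String.fromCodePoint(0xD800).length); // 1, a lone surrogate is emitted as-is
try {
  String.fromCodePoint(0x110000); // above the Unicode range
} catch (e) {
  console.log(e instanceof RangeError); // true
}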
As for why things are specified this way, I don't know. It seems like JavaScript doesn't really support Unicode, only UCS-2.
Unicode.org has the following to say on the matter:
http://www.unicode.org/faq/utf_bom.html#utf16-2
Q: What are surrogates?
A: Surrogates are code points from two special ranges of Unicode values, reserved for use as the leading, and trailing values of paired code units in UTF-16. Leading, also called high, surrogates are from D800₁₆ to DBFF₁₆, and trailing, or low, surrogates are from DC00₁₆ to DFFF₁₆. They are called surrogates, since they do not represent characters directly, but only as a pair.
http://www.unicode.org/faq/utf_bom.html#utf16-7
Q: Are there any 16-bit values that are invalid?
A: Unpaired surrogates are invalid in UTFs. These include any value in the range D800₁₆ to DBFF₁₆ not followed by a value in the range DC00₁₆ to DFFF₁₆, or any value in the range DC00₁₆ to DFFF₁₆ not preceded by a value in the range D800₁₆ to DBFF₁₆.
Therefore the result of String.fromCodePoint is not always valid UTF-16 because it can emit unpaired surrogates.
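If your runtime is recent enough to have String.prototype.isWellFormed / toWellFormed (an ES2024 addition, so this is an assumption about the engine), such unpaired surrogates can be detected and repaired:
var lone = String.fromCodePoint(0xD800); // unpaired high surrogate
console.log(lone.isWellFormed()); // false, not valid UTF-16
console.log(lone.toWellFormed()); // "\uFFFD", the replacement character
console.log("\uD83D\uDCA9".isWellFormed()); // true, a proper surrogate pair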
I have been trying the following code to get the ASCII equivalent character
String.fromCharCode("149")
but it seems to work only until 126 is passed as the parameter. For 149, the symbol generated should be
•
128 and beyond is not standard ASCII.
var s = "•";
alert(s.charCodeAt(0))
gives 8226
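8226 is 0x2022, the Unicode code point of the bullet, so the round trip looks like this:
(8226).toString(16) // "2022"
String.fromCharCode(0x2022) // "•"
String.fromCharCode(8226) // "•", the same thing in decimal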
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/String/fromCharCode
Getting it to work with higher values
Although most common Unicode
values can be represented with one 16-bit number (as expected early on
during JavaScript standardization) and fromCharCode() can be used to
return a single character for the most common values (i.e., UCS-2
values which are the subset of UTF-16 with the most common
characters), in order to deal with ALL legal Unicode values (up to 21
bits), fromCharCode() alone is inadequate. Since the higher code point
characters use two (lower value) "surrogate" numbers to form a single
character, String.fromCodePoint() (part of the ES6 draft) can be used
to return such a pair and thus adequately represent these higher
valued characters.
The fromCharCode() method converts Unicode values into characters.
To use Unicode, see the link for the Unicode table:
http://unicode-table.com/en/
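For example (the surrogate-pair spelling is only needed for code points above U+FFFF):
String.fromCharCode(0x2022) // "•", a BMP code point, one code unit is enough
String.fromCodePoint(0x1F4A9) // "💩", fromCodePoint handles astral code points directly
String.fromCharCode(0xD83D, 0xDCA9) // "💩", fromCharCode needs the two surrogate halves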
I got String.fromCodePoint(149) to show inside an alert in Firefox but not in IE & Chrome. It may be because of browser language settings.
But this looks correct according to the ASCII table.
http://www.asciitable.com/
This is the code I used
alert(String.fromCodePoint(149));
So I have this character:
🀀
MAHJONG TILE EAST WIND, which has the Unicode code point U+1F000 (U+D83C U+DC00 as a surrogate pair) and the UTF-8 encoding F0 9F 80 80.
My question is: how do I escape this in JavaScript?
I see \uff00 all the time, but that is for ASCII, as 8 bits will only take you up to 255. Just putting '\u1F000' returns the (incorrect) 'ἀ0', and trying to fill in the extra bytes with 0s just returns '\u0001F000'. How do I escape values that are higher (such as my above character)?
And how do I escape not just the Unicode point but also the UTF-8 encoding?
Adding on to this, I have noticed that the node REPL is able to show many Unicode values but not some (such as emoji), even when my terminal window (Mac) normally could. Is there any rhyme or reason to this?
You can escape the character using the \uXXXX format twice (a surrogate pair) for values above U+FFFF.
To work with UTF-8 data, look into typed arrays and TextEncoder / TextDecoder. They are fairly new, so you may need a polyfill in some browsers.
Example
document.write('<h1>\uD83C\uDC00</h1>');
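And for the UTF-8 side of the question, a minimal sketch using TextEncoder / TextDecoder (assuming a runtime that provides them):
var bytes = new TextEncoder().encode('\uD83C\uDC00'); // 🀀 as UTF-8
console.log(bytes); // Uint8Array [ 240, 159, 128, 128 ], i.e. F0 9F 80 80
console.log(new TextDecoder('utf-8').decode(bytes)); // "🀀"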
JavaScript does not support UTF-8 strings. All JavaScript strings are UCS-2 (but it supports UTF-16-style surrogate pairs). You can escape astral plane characters with two 16-bit characters: "\ud83c\udc00".
"๐".charCodeAt(0).toString(16)
// => "d83c"
"๐".charCodeAt(1).toString(16)
// => "dc00"
console.log("\ud83c\udc00")
// => ๐
This also means that JavaScript doesn't know how to get the correct length of strings containing astrals, and that any indexing or substringing has a chance of being wrong:
"๐".length
// => 2
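One way around that is to count code points instead of code units, using the ES6 string iterator (a small sketch):
var tile = "\uD83C\uDC00"; // 🀀
tile.length // 2, UTF-16 code units
[...tile].length // 1, the iterator walks whole code points
Array.from(tile).length // 1, same idea
tile.codePointAt(0).toString(16) // "1f000"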