Why is unary + incompatible with BigInt? [duplicate] - javascript

The following code throws an error in JavaScript:
console.log(String(+0n))
But this code runs successfully:
console.log(String(-0n))
Why does +0n throw an error while -0n does not?

So that it doesn't break asm.js:
Unary + followed by an expression is always either a Number, or results in throwing. For this reason, unfortunately, + on a BigInt
needs to throw, rather than being symmetrical with + on Number:
Otherwise, previously "type-declared" asm.js code would now be
polymorphic.
As Bergi highlights in the comments, this was the least bad of three options:
+BigInt -> BigInt: breaks asm.js, and anything else that made the assumption "unary plus gives a Number";
+BigInt -> Number: conflicts with the design decision to disallow implicit conversions between Number and BigInt; or
+BigInt -> error.
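A quick demonstration of the chosen behavior (note that the error is a runtime TypeError, not a syntax error):
try {
  console.log(+0n);            // throws: unary + never produces a BigInt
} catch (e) {
  console.log(e.name);         // "TypeError"
}
console.log((-0n).toString()); // "0" - unary minus is defined for BigInt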

+0n is treated as +(BigInt(0)); unary + means "convert to Number", and the language deliberately refuses to do that implicitly for a BigInt (for the asm.js reason above)
console.log(+(BigInt(0)));
-0n is treated as -(BigInt(0)); unary minus is defined for BigInt (negative numbers can be big integers, and BigInt has no negative zero), so it simply evaluates to 0n
(You need to check your browser console for this, since there seems to be a bug in Stack Snippets preventing BigInts from being converted to a string in the console.log call)
console.log(BigInt(-0));

Related

+ operator vs parseFloat

Example 1 of the Knockout extenders page describes a way of rounding user input and making sure it is only numeric.
It works great, but looking through the source they do a peculiar thing that I don't understand: on line 8 they do this:
parseFloat(+newValue)
newValue is a string.
When I initially asked this question I didn't know what + did. Some further poking, and a link to a different MDN page from one of the initial answers, indicate it is a unary operator equivalent to Number(str), and that there are some differences between +str and parseFloat(str) (treatment of strings ending in alpha characters and interpretation of hex seem to be the headlines).
I still don't understand why the + in this case needed to be wrapped in parseFloat, although I am starting to think it might be a typo...
Citing MDN docs for parseFloat:
parseFloat parses its argument, a string, and returns a floating point number. If it encounters a character other than a sign (+ or -), numeral (0-9), a decimal point, or an exponent, it returns the value up to that point and ignores that character and all succeeding characters. Leading and trailing spaces are allowed.
Using the unary plus operator you can be sure that `parseFloat` operates on a `Number`, which is only useful if you want to be stricter about the result but still want to use `parseFloat`:
parseFloat('0.32abcd') // -> 0.32
parseFloat(+'0.32abcd') // -> NaN
**Update:**
After a bit of digging in the docs and running some tests, it seems there is no reason to use parseFloat other than parsing strings that may contain a number with a non-numeric tail, e.g.:
parseFloat('31.5 miles') // -> 31.5
parseFloat('12.75em') // -> 12.75
For any other case where your string contains a number, + is the fastest and preferred way (citing the MDN docs for the unary plus operator):
unary plus is the fastest and preferred way of converting something into a number, because it does not perform any other operations on the number.
See the parseFloat versus unary plus test case for how much faster it is.
(The previous link broke, so here is a new test that shows how unary plus is faster.)
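To make the headline differences concrete, here is a short side-by-side comparison (results per the ECMAScript spec, same across modern engines):
parseFloat('31.5 miles') // -> 31.5 - trailing text is ignored
+'31.5 miles'            // -> NaN  - the whole string must be numeric
parseFloat('0x10')       // -> 0    - parseFloat stops at the 'x'
+'0x10'                  // -> 16   - unary + understands hex literals
parseFloat('')           // -> NaN
+''                      // -> 0    - empty string coerces to 0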

using division operator (/) on strings in javascript

I realized that in JavaScript, 101/100, "101"/100, 101/"100" and "101"/"100" all result in 1.01 (checked in Chrome, FF and IE11). But I cannot find any documentation for this behaviour.
Therefore my question is whether it is (cross-browser) safe to use this feature, and whether it is good practice to do so (or whether it is better to use parseInt before division if the variable can be a string)?
When you use / on strings, the strings are implicitly converted to numbers and then division operation is performed.
This may work in all browsers, but it's always good practice to convert to a number explicitly, using parseInt, parseFloat, or another method.
parseInt("101", 10) / 100
Or
parseFloat("101") / 100
ECMAScript Specifications for Division Operator
Therefore my question is if it is (cross-browser) safe to use this feature...
It depends on your definition of "safe." With the division operator, yes, it's specified behavior: Each operand is converted (implicitly coerced) to a number, and then numeric division is done.
But beware of generalizing this too far. You'll be okay with /, *, and - but it will bite you on +, because if either operand to + is a string, + does string concatenation, not addition.
Another way that it may or may not be "safe" depending on your point of view is the implicit coercion: It uses the browser's JavaScript engine's rules for converting strings to numbers. Some older browsers went beyond the specification (which they were allowed to in the past) and treated numbers starting with a 0 as octal (base 8) rather than decimal. Naturally, end users who type in, say, "0123" as a number probably mean the number 123, not the number 83 (123 in octal = 83 decimal). JavaScript engines are no longer allowed to do that, but some older ones do.
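For example, this is exactly why passing the radix to parseInt is recommended:
parseInt("0123");     // 123 in modern engines, but 83 (octal) in some very old ones
parseInt("0123", 10); // 123 everywhere - the explicit radix removes the ambiguity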
In general, it's probably best to explicitly coerce or convert those operands. Your choices for doing so:
The unary + operator: value = +value will coerce the string to a number using the JavaScript engine's standard rules for that. Anything in the string other than a sign, digits, a decimal point, an exponent (the e of scientific notation), or a hex prefix makes the result NaN. Also, +"" is 0, which may not be intuitive.
The Number function: value = Number(value). Does the same thing as +.
The parseInt function, usually with a radix (number base): value = parseInt(value, 10). The downside here is that parseInt converts any number it finds at the beginning of the string but ignores non-digits later in the string, so parseInt("100asdf", 10) is 100, not NaN. As the name implies, parseInt parses only a whole number.
The parseFloat function: value = parseFloat(value). Allows fractional values, and always works in decimal (never octal or hex). It does the same thing parseInt does with garbage at the end of the string: parseFloat("123.34alksdjf") is 123.34.
So, pick your tool to suit your use case. :-)
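As a quick recap, all four options on the same inputs:
const input = '100asdf';
+input                      // -> NaN - unary + rejects trailing garbage
Number(input)               // -> NaN - same rules as unary +
parseInt(input, 10)         // -> 100 - parses leading digits, ignores the rest
parseFloat('123.34alksdjf') // -> 123.34 - like parseInt, but allows fractions
+''                         // -> 0 - the non-obvious empty-string case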
Type coercion is at play here. Quoting @Barmar's answer from What exactly is Type Coercion in Javascript?
Type coercion means that when the operands of an operator are of different types, one of them will be converted to an "equivalent" value of the other operand's type.
The reason for your observation is valid for other operations too -
1 + "2" will give you "12"
1 - "2" will give you -1
(because the "-" operation is not defined for strings, just like division)
In the case of "101"/"100", the "/" operator decides the coercion: there is no "/" operation defined for strings, but there is one for numbers.
Using it is safe (at least in modern browsers) as long as you are clear how type coercion will play out in your operation.
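Putting the whole caveat in one place (note the + case):
"101" / 100 // 1.01  - both operands coerced to numbers
"101" * 1   // 101
"101" - 1   // 100
"101" + 1   // "1011" - + concatenates when either operand is a string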

How does asm.js handle divide-by-zero?

In JavaScript, division by zero with "integer" arguments acts the way floating-point division should:
1/0; // Infinity
-1/0; // -Infinity
0/0; // NaN
The asm.js spec says that division with integer arguments returns intish, which must be immediately coerced to signed or unsigned. If we do this in JavaScript, division by zero with "integer" arguments always returns zero after coercion:
(1/0)|0; // == 0, signed case.
(1/0) >>> 0; // == 0, unsigned case.
However, in languages with actual integer types like Java and C, dividing an integer by zero is an error and execution halts somehow (e.g., throws exception, triggers a trap, etc).
This also seems to violate the type signatures specified by asm.js. The type of Infinity and NaN is double and of / is supposedly (from the spec):
(signed, signed) → intish ∧
(unsigned, unsigned) → intish ∧
(double?, double?) → double ∧
(float?, float?) → floatish
However if any of these has a zero denominator, the result is double, so it seems like the type can only be:
(double?, double?) → double
What is expected to happen in asm.js code? Does it follow javascript and return 0 or does divide-by-zero produce a runtime error? If it follows javascript, why is it ok that the typing is wrong? If it produces a runtime error, why doesn't the spec mention it?
asm.js is a subset of JavaScript, so it has to return what JavaScript does: Infinity|0 → 0.
You point out that Infinity is double, but that mixes up the asm.js type system with the C one (in JavaScript those are number): asm.js uses JavaScript type coercion to make intermediate results the "right" type when they aren't. The same thing happens when a small integer in JavaScript would overflow to a double: it gets coerced back into an integer using bitwise operations.
The key here is that it gives the compiler a hint that it doesn't need to calculate all the things JavaScript would usually have it calculate: it doesn't matter if a small integer overflows because it's coerced back into an integer, so the compiler can omit overflow checks and emit straight-line integer arithmetic. Note that it still has to behave correctly for every possible value! The type system basically hints the compiler towards doing a bunch of strength reductions.
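For instance, a minimal asm.js-style function (the names here are illustrative, not from the spec) shows how the |0 coercions double as type annotations:
function AsmModule() {
  "use asm";
  function inc(x) {
    x = x | 0;          // parameter annotation: x is an int
    return (x + 1) | 0; // result coerced back to int32, so no overflow check is needed
  }
  return { inc: inc };
}
console.log(AsmModule().inc(41)); // 42 - also runs as plain JavaScript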
Now back to integer division: on x86 this causes a floating-point exception (yes! Integer division causes SIGFPE!). The compiler knows the output is an integer so it can do an integer division, but it can't halt the program if the denominator was zero. There are two options here:
Branch around the division if the input is zero, and return zero directly.
Do the division with the provided input, but at the start of the program install a signal handler catching SIGFPE. When it faults, look up the code location, and if the compiler's metadata says that's a division location, then modify the return value to be zero and continue executing.
The former is what V8 and OdinMonkey implement.
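A sketch of the first option as it would look written out by hand (intDiv is a hypothetical name, not part of asm.js):
function intDiv(n, d) {
  n = n | 0;
  d = d | 0;
  if ((d | 0) == 0) return 0;     // branch around the division: JS semantics require 0
  return ((n | 0) / (d | 0)) | 0; // safe to emit a native integer divide here
}
console.log(intDiv(1, 0)); // 0 - matches (1/0)|0 in plain JavaScript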
On ARM, the integer division instruction is defined to always return zero, except in the ARMv7-R profile, where it faults (the fault is an undefined-instruction fault, or it can be changed to return zero if SCTLR.DZ == 0). ARM only added the UDIV and SDIV instructions recently, with the ARMv7VE (virtualization) extension, and made them optional in ARMv7-A processors (most phones and tablets use these).
You can check for the instructions using /proc/cpuinfo, but note that some kernels are unaware of them! A workaround is to check for the instruction when the process starts by executing it and using sigsetjmp/siglongjmp to catch cases where it's not handled. That has a further caveat of also catching cases where the kernel is being "helpful" and emulating UDIV/SDIV on processors that don't support it!
If the instructions aren't present, then you have to use the C library's integer division function (libgcc and compiler_rt contain functions such as __udivmoddi4). Note that the behavior of this function on divide by zero may vary between implementations and has to be handled with a branch on a zero denominator, or checked at load time (same as outlined above for UDIV/SDIV).
I'll leave you off with a question: what happens in asm.js when executing the following C code: INT_MIN/-1?
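(Spoiler, per the JavaScript semantics asm.js must preserve; a native x86 idiv would trap here:)
console.log((-2147483648 / -1) | 0); // -2147483648 - the mathematical result 2147483648 wraps back to INT_MIN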

JavaScript concating with number as string

Why does x = "1"- -"1" work and set the value of x to 2?
Why doesn't x = "1"--"1" work?
This expression...
"1"- -"1"
... is processed as ...
"1"- (-"1")
... that is, subtract the result of the unary minus operation applied to "1" from "1". Now, both the unary and binary minus operations only make sense when applied to Numbers, so JS converts their operands to Numbers first. That essentially becomes:
Number("1") - (-Number("1"))
... which eventually evaluates to 2, as Number("1"), as you'd probably expect, evaluates to 1.
When trying to understand the "1"--"1" expression, the JS parser attempts to consume as many characters as possible (the "maximal munch" rule). That's why "1"-- is processed first.
But it makes no sense, as auto-increment/decrement operations are not defined for literals. Both ++ and -- (both in postfix and prefix forms) should change the value of some assignable ('left-value') expression - variable name, object property etc.
In this case, however, there's nothing to change: "1" literal is always "1". )
Actually, I got a bit different errors (for x = "1"--"1") both in Firefox:
SyntaxError: invalid decrement operand
... and Chrome Canary:
ReferenceError: Invalid left-hand side expression in postfix operation
And I think these messages actually show the reason of that error quite clearly. )
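The difference is easy to see with an assignable operand versus a literal:
let n = 2;
console.log(n--);        // 2 (n is now 1) - a variable is a valid decrement target
console.log("1" - -"1"); // 2 - the space makes this subtraction of a negation
// console.log("1"--"1"); // SyntaxError / ReferenceError - "1"-- has no assignable target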
Because -- is an operator in JavaScript.
When you separate the - characters in the first expression, it's unambiguous what you mean. When you put them together, JavaScript interprets them as one operator, and the following "1" as an unexpected string. (Or maybe it's the preceding "1"? I'm honestly not sure.)
"-1" = -1 (unary minus converts it to int)
So. "1" - (-1)
now, "+" is thje concatenation operator. not -. so JS returns the result 2 (instead of string concat).
also, "1"--"1" => here "--" is the decrement operator, for which the syntax is wrong as strings will not get converted automatically in this case.
Because -- is the decrement operator and cannot be applied to constant values.

Why does 10..toString() work, but 10.toString() does not? [duplicate]

Why does calling 152..toString(2) return a binary string value of "10011000", when a call to 152.toString(2) throws the following exception?
          "SyntaxError: identifier starts immediately after numeric literal"
It seems to me that it's intuitive to want to use the latter call to toString(), as it looks & feels correct. The first example just seems plain odd to me.
Does anyone know why JavaScript was designed to behave like this?
A . after a number might seem ambiguous. Is it a decimal or an object member operator?
However, the interpreter decides that it's a decimal, so you're missing the member operator.
It sees it as this:
(10.)toString(); // invalid syntax
When you include the second ., you have a decimal followed by the member operator.
(10.).toString();
To the pedants and downvoters:
The . character presents an ambiguity. It can be understood to be the member operator, or a decimal, depending on its placement. If there was no ambiguity, there would be no question to ask.
The specification's interpretation of the . character in that particular position is that it will be a decimal. This is defined by the numeric literal syntax of ECMAScript.
Just because the specification resolves the ambiguity for the JS interpreter, doesn't mean that the ambiguity of the . character doesn't exist at all.
The lexer (aka "tokenizer") when reading a new token, and upon first finding a digit, will keep consuming characters (i.e. digits or one dot) until it sees a character that is not part of a legal number.
<152.> is a legal token (the trailing 0 isn't required) but <152..> isn't, so your first example reduces to this series of tokens:
<152.> <.> <toString> <(> <2> <)>
which is the legal (and expected) sequence, whereas the second looks like
<152.> <toString> <(> <2> <)>
which is illegal - there's no period token separating the Number from the toString call.
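You can confirm the tokenization in the console:
console.log(152..toString(2));  // "10011000" - tokens: <152.> <.> <toString>
console.log((152).toString(2)); // same result, using parentheses instead
console.log(152 .toString(2));  // a space also keeps the dot out of the literal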
10. is a float number and you can use toString on a float, e.g.
parseFloat("10").toString() // "10"
