Why is IsNaN(x) different from x == NaN where x = NaN [duplicate] - javascript

This question already has answers here:
What is the rationale for all comparisons returning false for IEEE754 NaN values?
(12 answers)
Closed 10 years ago.
Why are these two different?
var x = NaN; //e.g. Number("e");
alert(isNaN(x)); //true (good)
alert(x == NaN); //false (bad)

Nothing is equal to NaN. Any equality comparison with it will always be false.
In both the strict and abstract comparison algorithms, if the types are the same, and either operand is NaN, the result will be false.
If Type(x) is Number, then
If x is NaN, return false.
If y is NaN, return false.
In the abstract algorithm, if the types are different, and a NaN is one of the operands, then the other operand will ultimately be coerced to a number, and will bring us back to the scenario above.
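A few concrete cases of those rules (my own examples):
var x = NaN;
console.log(x == NaN);   // false: same type, one operand is NaN
console.log(x === NaN);  // false: the strict algorithm has the same rule
console.log(x == 'NaN'); // false: the string is coerced to the Number NaN, same rule again
console.log(x == 'abc'); // false: 'abc' also coerces to NaN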

The equality and inequality predicates are non-signaling so x = x returning false can be used to test if x is a quiet NaN.
Source
This is the rule defined in IEEE 754 so full compliance with the specification requires this behavior.
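That self-comparison trick looks like this in JavaScript (the helper name is my own):
function isReallyNaN(x) {
  return x !== x; // only NaN is not equal to itself
}
console.log(isReallyNaN(NaN));   // true
console.log(isReallyNaN('foo')); // false (no coercion happens here)
console.log(isNaN('foo'));       // true (the global isNaN coerces 'foo' to a Number first)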

From Wikipedia, the following operations return NaN:
The divisions 0/0, ∞/∞, ∞/−∞, −∞/∞, and −∞/−∞
The multiplications 0×∞ and 0×−∞
The power 1^∞
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions.
Real operations with complex results:
The square root of a negative number
The logarithm of a negative number
The tangent of an odd multiple of 90 degrees (or π/2 radians)
The inverse sine or cosine of a number which is less than −1 or greater than +1.
These operations return NaN as the result of a numeric operation, which is why typeof NaN is "number". In mathematical terms, NaN is an undefined number: ∞ + (−∞) is not equal to ∞ + (−∞). But because it results from a numeric operation, its type is still number.
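For reference, the same cases written out in JavaScript (a quick sketch of my own):
console.log(0 / 0);                  // NaN
console.log(Infinity / -Infinity);   // NaN
console.log(0 * Infinity);           // NaN
console.log(Math.pow(1, Infinity));  // NaN
console.log(Infinity + (-Infinity)); // NaN
console.log(Math.sqrt(-1));          // NaN
console.log(Math.log(-1));           // NaN
console.log(Math.asin(2));           // NaN
console.log(typeof NaN);             // "number"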

Related

Number function is considering -0 as 0 in javascript [duplicate]

Reading through the ECMAScript 5.1 specification, +0 and -0 are distinguished.
Why then does +0 === -0 evaluate to true?
JavaScript uses the IEEE 754 standard to represent numbers. From Wikipedia:
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The IEEE 754 standard for floating point arithmetic (presently used by most computers and programming languages that support floating point numbers) requires both +0 and −0. The zeroes can be considered as a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞, division by zero is only undefined for ±0/±0 and ±∞/±∞.
The article contains further information about the different representations.
So this is the reason why, technically, both zeros have to be distinguished.
However, +0 === -0 evaluates to true. Why is that (...) ?
This behaviour is explicitly defined in section 11.9.6, the Strict Equality Comparison Algorithm (emphasis partly mine):
The comparison x === y, where x and y are values, produces true or false. Such a comparison is performed as follows:
(...)
If Type(x) is Number, then
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is −0, return true.
If x is −0 and y is +0, return true.
Return false.
(...)
(The same holds for +0 == -0 btw.)
It seems logical to treat +0 and -0 as equal. Otherwise we would have to take this into account in our code, and I, personally, don't want to do that ;)
Note:
ES2015 introduces a new comparison method, Object.is. Object.is explicitly distinguishes between -0 and +0:
Object.is(-0, +0); // false
I'll add this as an answer because I overlooked #user113716's comment.
You can test for -0 by doing this:
function isMinusZero(value) {
  return 1 / value === -Infinity;
}
isMinusZero(0); // false
isMinusZero(-0); // true
I just came across an example where +0 and -0 behave very differently indeed:
Math.atan2(0, 0); //returns 0
Math.atan2(0, -0); //returns Pi
Be careful: even when using Math.round on a negative number like -0.0001, it will actually be -0 and can screw up some subsequent calculations as shown above.
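For instance (my own check; results as printed by a modern console):
var r = Math.round(-0.0001);
console.log(r);                // -0 (some consoles display it as 0)
console.log(Object.is(r, -0)); // true
console.log(1 / r);            // -Infinity
console.log(Math.atan2(0, r)); // 3.141592653589793, not 0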
A quick and dirty way to fix this is to do something like:
if (x==0) x=0;
or just:
x+=0;
This converts the number to +0 in case it was -0.
2021's answer
Are +0 and -0 the same?
Short answer: it depends on which comparison operator you use.
Long answer:
Basically, we have had four comparison types until now:
‘loose’ equality
console.log(+0 == -0); // true
‘strict’ equality
console.log(+0 === -0); // true
‘Same-value’ equality (ES2015's Object.is)
console.log(Object.is(+0, -0)); // false
‘Same-value-zero’ equality (ES2016)
console.log([+0].includes(-0)); // true
As a result, only Object.is(+0, -0) behaves differently from the others.
const x = +0, y = -0;
console.log(x == y);          // true -> using ‘loose’ equality
console.log(x === y);         // true -> using ‘strict’ equality
console.log([x].indexOf(y));  // 0 (true) -> using ‘strict’ equality
console.log(Object.is(x, y)); // false -> using ‘Same-value’ equality
console.log([x].includes(y)); // true -> using ‘Same-value-zero’ equality
In the IEEE 754 standard used to represent the Number type in JavaScript, the sign is represented by a bit (a 1 indicates a negative number).
As a result, there exists both a negative and a positive value for each representable number, including 0.
This is why both -0 and +0 exist.
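A sketch of how to look at that bit directly (the helper name is mine):
// Write the Number into an 8-byte buffer (big-endian) and read back its sign bit.
function signBit(n) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, n);          // big-endian by default
  return view.getUint8(0) >>> 7;  // the top bit of the first byte is the sign
}
console.log(signBit(+0)); // 0
console.log(signBit(-0)); // 1 -> same value, different sign bit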
Answering the original title Are +0 and -0 the same?:
brainslugs83 (in comments of answer by Spudley) pointed out an important case in which +0 and -0 in JS are not the same - implemented as function:
var sign = function(x) {
  return 1 / x === 1 / Math.abs(x);
}
Unlike the standard Math.sign, this returns the correct sign for +0 and -0.
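For example, using the sign function defined just above and contrasting it with the built-in Math.sign:
console.log(sign(+0));      // true  (positive)
console.log(sign(-0));      // false (negative)
console.log(Math.sign(+0)); // 0
console.log(Math.sign(-0)); // -0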
We can use Object.is to distinguish +0 from -0, and, as one more thing, to compare NaN with NaN.
Object.is(+0,-0) //false
Object.is(NaN,NaN) //true
I'd blame it on the Strict Equality Comparison method ('===').
Look at step 4d:
see 7.2.13 Strict Equality Comparison in the specification
If you need a sign function that supports -0 and +0:
var sign = x => 1/x > 0 ? +1 : -1;
It acts like Math.sign, except that sign(0) returns 1 and sign(-0) returns -1.
There are two possible values (bit representations) for 0, so its representation is not unique. This can occur especially in floating point numbers, because they are actually stored as a kind of formula.
Integers can be stored in separate ways too. You can have a numeric value with an additional sign bit, so in a 16-bit space you can store a 15-bit integer value and a sign bit. In that representation, the values 8000 (hex) and 0000 are both 0, but one of them is +0 and the other is -0.
This could be avoided by offsetting the negative values by 1 so they ranged from -1 to -2^15, but this would be inconvenient.
A more common approach is to store integers in two's complement, but apparently ECMAScript has chosen not to. In this method positive numbers range from 0000 to 7FFF. Negative numbers start at FFFF (-1) and go down to 8000 (-32768).
Of course, the same rules apply to larger integers too, but I don't want my F to wear out. ;)
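A rough illustration of the difference using typed arrays (my own example): a two's complement Int16Array has no -0, while IEEE 754 doubles keep the sign bit.
// 16-bit two's complement: 0x8000 is -32768 and 0x0000 is the only zero.
const ints = new Int16Array([0x0000, 0x7FFF, 0x8000, 0xFFFF]);
console.log(Array.from(ints)); // [ 0, 32767, -32768, -1 ]
// IEEE 754 doubles keep a dedicated sign bit, so -0 survives the round trip.
const floats = new Float64Array([-0]);
console.log(Object.is(floats[0], -0)); // true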
Wikipedia has a good article to explain this phenomenon: http://en.wikipedia.org/wiki/Signed_zero
In brief, both +0 and -0 are defined in the IEEE floating point specification. Both of them are technically distinct from 0 without a sign, which is an integer, but in practice they all evaluate to zero, so the distinction can be ignored for all practical purposes.

Floating point numbers - no arithmetic operations and precision errors

Can I ever run into any floating number precision errors if I don't perform any arithmetic operations on the floats? The only operations I do with numbers in my program are limited to the following:
Getting numbers as strings from a web service and converting them to floats using parseFloat()
Comparing resulting floats using <= < == > >=
Example:
const input = ['1000.69', '1001.04' /*, ... */]
const x = parseFloat(input[0])
const y = parseFloat(input[1])
console.log(x < y)
console.log(x > y)
console.log(x == y)
As for the parseFloat() implementation, I'm using the latest Node.js.
The source of floats is prices in USD as strings, always two decimals.
As long as the source of your floats is reliable, your checks are safe, yes.
I'd still round them to an acceptable number of decimals after parsing, just to be 100% safe.
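A minimal sketch of that defensive rounding, assuming prices always carry exactly two decimals (the helper name is mine):
// Compare prices as integer cents to sidestep any float representation issues.
function toCents(priceString) {
  return Math.round(parseFloat(priceString) * 100);
}
const a = toCents('1000.69');
const b = toCents('1001.04');
console.log(a < b);   // true
console.log(a > b);   // false
console.log(a === b); // false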
As the MDN docs show in one of their examples
// these all return 3.14
parseFloat(3.14);
parseFloat('3.14');
parseFloat(' 3.14 ');
parseFloat('314e-2');
parseFloat('0.0314E+2');
parseFloat('3.14some non-digit characters');
parseFloat({ toString: function() { return "3.14" } });
//and of course
parseFloat('3.140000000') === 3.14
The parseFloat operation converts a string into its Number value. The spec says:
In this specification, the phrase “the Number value for x” where x represents an exact real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with -0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and -2^1024 (which is -1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x.
That reads as if two identical strings are always converted to the same closest Number. And except for NaN, two identical Numbers are equal.
6.1.6.1.13 Number::equal ( x, y )
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is -0, return true.
If x is -0 and y is +0, return true.
Return false.
emphasis mine
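So, under that reading, two equal price strings always compare as equal after parsing (a small check of my own, using one of the price strings from the question):
const x = parseFloat('1000.69');
const y = parseFloat('1000.69');
console.log(x === y);          // true: both strings map to the same closest Number value
console.log(x <= y && x >= y); // true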

NaN !== parseInt(undefined);

How can this be false?
console.log(parseInt(undefined));
//NaN
console.log(parseInt(undefined)===NaN);
//false
That seems dumb
NaN is not equal to anything, even itself. Use isNaN to detect NaN instead of an equality.
NaN === NaN // -> false
isNaN(NaN) // -> true (argument is coerced [ToNumber] as required)
x = NaN
x !== x // -> true (would be false for any other value of x)
NaN || "Hi" // -> "Hi" (NaN is a false-y value, but not false)
This is a result of JavaScript following IEEE 754 and its (quiet) NaN lack-of-ordering behavior (illustrated in the snippet below):
A comparison with a NaN always returns an unordered [not equal] result even when comparing with itself.
See also What is the rationale for all comparisons returning false for IEEE754 NaN values?
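To see that unordered behavior in full: every ordered comparison involving NaN is false, and only the inequality is true.
console.log(NaN < 1);    // false
console.log(NaN > 1);    // false
console.log(NaN <= NaN); // false
console.log(NaN >= NaN); // false
console.log(NaN == NaN); // false
console.log(NaN != NaN); // true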
It's because NaN === NaN is also false!
NaN is not equal to itself and the reason can be understood from the answer posted by Stephen here:
My understanding from talking to Kahan is that NaN != NaN originated
out of two pragmatic considerations:
that x == y should be equivalent to x - y == 0 whenever possible (beyond being a theorem of real arithmetic, this makes hardware
implementation of comparison more space-efficient, which was of utmost
importance at the time the standard was developed — note, however,
that this is violated for x = y = infinity, so it’s not a great reason
on its own; it could have reasonably been bent to x - y == 0 or
NaN).
more importantly, there was no isnan( ) predicate at the time that NaN was formalized in the 8087 arithmetic; it was necessary to provide
programmers with a convenient and efficient means of detecting NaN
values that didn’t depend on programming languages providing something
like isnan( ) which could take many years. I’ll quote Kahan’s own
writing on the subject:
Were there no way to get rid of NaNs, they would be as useless as Indefinites on CRAYs; as soon as one were encountered, computation
would be best stopped rather than continued for an indefinite time to
an Indefinite conclusion. That is why some operations upon NaNs must
deliver non-NaN results. Which operations? … The exceptions are C
predicates “ x == x ” and “ x != x ”, which are respectively 1 and 0
for every infinite or finite number x but reverse if x is Not a Number
( NaN ); these provide the only simple unexceptional distinction
between NaNs and numbers in languages that lack a word for NaN and a
predicate IsNaN(x).
Note that this is also the logic that rules out returning something
like a “Not-A-Boolean”. Maybe this pragmatism was misplaced, and
the standard should have required isnan( ), but that would have made
NaN nearly impossible to use efficiently and conveniently for several
years while the world waited for programming language adoption. I’m
not convinced that would have been a reasonable tradeoff.

+ operator before expression in javascript: what does it do?

I was perusing the underscore.js library and I found something I haven't come across before:
if (obj.length === +obj.length) { ... }
What is that + operator doing there? For context, here is a direct link to that part of the file.
The unary + operator can be used to convert a value to a number in JavaScript. Underscore appears to be testing that the .length property is a number, otherwise it won't be equal to itself-converted-to-a-number.
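A quick sketch of that self-comparison (the objects here are my own examples):
var realArray = [1, 2, 3];       // length is the number 3
var fakeArray = { length: '3' }; // length is the string '3'
console.log(realArray.length === +realArray.length); // true  (3 === 3)
console.log(fakeArray.length === +fakeArray.length); // false ('3' !== 3)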
According to MDN:
The unary plus operator precedes its operand and evaluates to its
operand but attempts to converts it into a number, if it isn't
already. For example, y = +x takes the value of x and assigns that to
y; that is, if x were 3, y would get the value 3 and x would retain
the value 3; but if x were the string "3", y would also get the value
3. Although unary negation (-) also can convert non-numbers, unary plus is the fastest and preferred way of converting something into a
number, because it does not perform any other operations on the
number. It can convert string representations of integers and floats,
as well as the non-string values true, false, and null. Integers in
both decimal and hexadecimal ("0x"-prefixed) formats are supported.
Negative numbers are supported (though not for hex). If it cannot
parse a particular value, it will evaluate to NaN.
It's a way of ensuring that obj.length is a number rather than a potential string. The reason for this is that the === will fail if the length (for whatever reason) is a string variable, e.g. "3".
It's a nice hack to check whether obj.length is of type number or not. You see, the + operator can be used to coerce a string to a number. For example:
alert(+ "3" + 7); // alerts 10
This is possible because the + operator coerces the string "3" to the number 3. Hence the result is 10 and not "37".
In addition, JavaScript has two types of equality and inequality operators:
Strict equality and inequality (e.g. 3 === "3" evaluates to false).
Normal equality and inequality (e.g. 3 == "3" evaluates to true).
Strict equality and inequality doesn't coerce the value. Hence the number 3 is not equal to the string "3". Normal equality and inequality does coerce the value. Hence the number 3 is equal to the string "3".
Now, the above code simply coerces obj.length to a number using the + operator, and strictly checks whether the value before and after the coercion are the same (i.e. obj.length of the type number). It's logically equivalent to the following code (only more succinct):
if (typeof obj.length === "number") {
  // code
}

Why is undefined == undefined but NaN != NaN? [duplicate]

This question already has answers here:
What is the rationale for all comparisons returning false for IEEE754 NaN values?
(12 answers)
Closed 8 years ago.
I am wondering why undefined == undefined but NaN != NaN.
Because that's how it is defined in both the Abstract Equality Comparison Algorithm, and the Strict Equality Comparison Algorithm.
If either operand to == or === is NaN, it returns false.
Abstract
If Type(x) is Number, then
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is −0, return true.
If x is −0 and y is +0, return true.
Return false.
EDIT: The motivation for the unequal comparison, as noted by @CMS, is compliance with the IEEE 754 standard.
From the Wikipedia link provided in the comment below:
...The normal comparison operations however treat NaNs as unordered and compare −0 and +0 as equal. The totalOrder predicate will order these cases, and it also distinguishes between different representations of NaNs and between the same decimal floating point number encoded in different ways.
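Putting the two cases side by side (a quick sketch):
console.log(undefined == undefined);  // true
console.log(undefined === undefined); // true
console.log(NaN == NaN);              // false
console.log(NaN === NaN);             // false
console.log(isNaN(NaN));              // true (use this to detect NaN)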
Because Math.sqrt(-5) !== Math.sqrt(-6): both produce NaN, but they clearly don't describe the same quantity, so it would make little sense for them to compare equal.
Not sure why it is like this, but in order to check whether a certain expression or variable is NaN, you should use the isNaN function.
I would assume because the IEEE standard allows for more than one representation of NaN. Not all NaNs are equal to each other...
The reasoning is that the creators wanted x == x returning false to mean that x is NaN, so NaN == NaN has to return false to be consistent.
