How does asm.js handle divide-by-zero? - javascript

In JavaScript, division by zero with "integer" arguments behaves the way floating-point division should:
1/0; // Infinity
-1/0; // -Infinity
0/0; // NaN
The asm.js spec says that division with integer arguments returns intish, which must be immediately coerced to signed or unsigned. If we do this in JavaScript, division by zero with "integer" arguments always returns zero after coercion:
(1/0)|0; // == 0, signed case.
(1/0) >>> 0; // == 0, unsigned case.
However, in languages with actual integer types like Java and C, dividing an integer by zero is an error and execution halts somehow (e.g., throws exception, triggers a trap, etc).
This also seems to violate the type signatures specified by asm.js. The type of Infinity and NaN is double, and the type of / is supposedly (from the spec):
(signed, signed) → intish ∧
(unsigned, unsigned) → intish ∧
(double?, double?) → double ∧
(float?, float?) → floatish
However, if the denominator is zero, the result (Infinity or NaN) is a double, so it seems the type can only be:
(double?, double?) → double
What is expected to happen in asm.js code? Does it follow javascript and return 0 or does divide-by-zero produce a runtime error? If it follows javascript, why is it ok that the typing is wrong? If it produces a runtime error, why doesn't the spec mention it?

asm.js is a subset of JavaScript, so it has to return what JavaScript does: Infinity|0 → 0.
You point out that Infinity is a double, but that mixes up the asm.js type system with the C one (in JavaScript both are just number): asm.js uses JavaScript's type coercion to force intermediate results back to the "right" type when they aren't. The same thing happens when a small integer in JavaScript would overflow to a double: it gets coerced back into an integer using bitwise operations.
The key here is that it gives the compiler a hint that it doesn't need to calculate all the things JavaScript would usually have it calculate: it doesn't matter if a small integer overflows because it's coerced back into an integer, so the compiler can omit overflow checks and emit straight-line integer arithmetic. Note that it still has to behave correctly for every possible value! The type system basically hints the compiler towards doing a bunch of strength reductions.
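For illustration, here is the kind of asm.js-style code this describes (a minimal sketch, not taken from the spec; the function is hypothetical): every parameter and result is coerced with |0, so a compiler may assume 32-bit integer arithmetic throughout and skip overflow checks:
function add(x, y) {
  x = x | 0;          // parameter coercion: x is int
  y = y | 0;          // parameter coercion: y is int
  return (x + y) | 0; // result coercion: overflow simply wraps to int32
}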
Now back to integer division: on x86, dividing by zero causes a floating-point exception (yes! integer division raises SIGFPE!). The compiler knows the output is an integer, so it can use an integer division instruction, but it can't let the program halt when the denominator is zero. There are two options here:
Branch around the division if the denominator is zero, and return zero directly.
Do the division with the provided input, but at the start of the program install a signal handler catching SIGFPE. When it faults, look up the faulting code location, and if the compiler's metadata says that's a division site, modify the return value to be zero and continue executing.
The former is what V8 and OdinMonkey implement.
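As a hedged sketch in plain JavaScript (not the actual V8 or OdinMonkey code; the function name is illustrative), the first option conceptually lowers a division a/b into:
function safeDiv(a, b) {
  a = a | 0;
  b = b | 0;
  if ((b | 0) === 0) return 0; // matches JS semantics: (a/0)|0 === 0
  // a real compiler must also guard INT_MIN / -1, which traps on x86
  return (a / b) | 0;
}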
On ARM, the integer division instruction is defined to always return zero on a zero denominator, except on the ARMv7-R profile, where it faults (the fault is an undefined instruction exception, or it can be changed to return zero if SCTLR.DZ == 0). ARM only added the UDIV and SDIV instructions recently, with the ARMv7VE (virtualization) extension, and made them optional in ARMv7-A processors (most phones and tablets use these). You can check for the instructions using /proc/cpuinfo, but note that some kernels are unaware of them! A workaround is to check for the instructions when the process starts by executing one and using sigsetjmp/siglongjmp to catch the case where it isn't handled. That has a further caveat of also catching cases where the kernel is being "helpful" and emulating UDIV/SDIV on processors that don't support them! If the instructions aren't present, then you have to use the C library's integer division functions (libgcc and compiler_rt contain functions such as __udivmoddi4). Note that the behavior of these functions on divide-by-zero may vary between implementations and has to be handled with a branch on a zero denominator or checked at load time (same as outlined above for UDIV/SDIV).
I'll leave you off with a question: what happens in asm.js when executing the following C code: INT_MIN/-1?

Keep trailing or leading zeroes on number

Is it possible to keep trailing or leading zeroes on a number in javascript, without using e.g. a string instead?
const leading = 003; // literal, leading
const trailing = 0.10; // literal, trailing
const parsed = parseFloat('0.100'); // parsed or somehow converted
console.log(leading, trailing, parsed); // desired: 003 0.10 0.100
This question has been regularly asked (and still is), yet I don't have a place I'd feel comfortable linking to (did I miss it?).
Fully analogous would be keeping any other aspect of the representation a number literal was entered as, although that is asked nowhere near as often:
console.log(0x10); // 16 instead of potentially desired 0x10
console.log(1e1); // 10 instead of potentially desired 1e1
For disambiguation, this is not about the following topics, for some of which I'll add links, as they might be of interest as well:
Padding to a set amount of digits, formatting to some specific string representation, e.g. How can I pad a value with leading zeroes?, How to output numbers with leading zeros in JavaScript?, How to add a trailing zero to a price
Why a certain string representation will be produced for some number by default, e.g. How does JavaScript determine the number of digits to produce when formatting floating-point values?
Floating point precision/accuracy problems, e.g. console.log(0.1 + 0.2) producing 0.30000000000000004, see Is floating point math broken?, and How to deal with floating point number precision in JavaScript?
No. A number stores no information about the representation it was entered as, or parsed from. It only relates to its mathematical value. Perhaps reconsider using a string after all.
If I had to guess, much of the confusion comes from the thought that numbers and their textual representations are either the same thing, or at least tightly coupled, with some kind of bidirectional binding between them. This is not the case.
The representations like 0.1 and 0.10, which you enter in code, are only used to generate a number. They are convenient names for what you intend to produce, not the resulting value. In this case, they are names for the same number. It has a lot of other aliases, like 0.100, 1e-1, or 10e-2. The actual value contains no information about what it came from or how it was written. The conversion is a one-way street.
When displaying a number as text, by default (Number.prototype.toString), JavaScript uses an algorithm to construct one of the possible representations from the number. It can only use what's available, the number value, which also means it will produce the same result for two equal numbers. This implies that 0.1 and 0.10 will produce the same output.
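You can see this directly in the console: equal numbers stringify identically, regardless of how they were written.
console.log((0.1).toString());  // "0.1"
console.log((0.10).toString()); // "0.1" (same number, same output)
console.log(0.1 === 0.10);      // true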
Concerning the number[1] value, JavaScript uses IEEE754-2019 float64[2]. When source code is being evaluated[3], and a number literal is encountered, the engine will convert the mathematical value the literal represents to a 64-bit value, according to IEEE754-2019. This means any information about the original representation in code is lost[4].
There is another problem, somewhat unrelated to the main topic: JavaScript used to have an octal notation, with a prefix of "0". This means that 003 is parsed as a legacy octal literal, and would throw in strict mode. Similarly, 010 === 8 (or an error in strict mode), see Why JavaScript treats a number as octal if it has a leading zero
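To see the legacy octal behavior (sloppy mode only; it is a SyntaxError in strict mode), compare it with the modern explicit prefix:
console.log(010);  // 8, not 10 (legacy octal)
console.log(0o10); // 8 (the modern, explicit octal prefix)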
In conclusion, when trying to keep information about some representation of a number (including leading or trailing zeroes, whether it was written as decimal, hexadecimal, and so on), a number is not a good choice. For how to achieve some specific representation other than the default, which doesn't need access to the originally entered text (e.g. padding to some amount of digits), there are many other questions/articles, some of which were already linked.
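If the entered representation matters, a small sketch of the string-based alternative (the object shape here is purely illustrative):
const price = { text: '0.10', value: Number('0.10') };
console.log(price.text);  // "0.10", displayed exactly as entered
console.log(price.value); // 0.1, used for arithmetic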
[1]: Javascript also has BigInt, but while it uses a different format, the reasoning is completely analogous.
[2]: This is a simplification. Engines are allowed to use other formats internally (and do, e.g. to save space/time), as long as they are guaranteed to behave like an IEEE754-2019 float64 in any regard, when observed from javascript.
[3]: E.g. V8 converts code to bytecode before evaluation, already replacing the literal at that point. The only relevant thing is that the information is lost before we could do anything with it.
[4]: JavaScript gives the ability to operate on code itself (e.g. Function.prototype.toString), which I will not discuss much here. Parsing the code yourself, and storing the representation, is an option, but has nothing to do with how number works (you would be operating on code, a string). Also, I don't immediately see any sane reason to do so over the alternatives.

C# bitwise XOR calculation that seems easy in R or nodeJS

In R if I do bitXor(2496638211, -1798328965) I get 120 returned.
In nodeJS 2496638211 ^ -1798328965; returns 120.
How do I do this in C#? (I think I'm struggling to understand C# type declarations and the way R & nodeJS presumably convert to 32 bits.)
In C# you would have to use unchecked and cast to int:
unchecked
{
int result = (int)2496638211 ^ -1798328965;
Console.WriteLine(result); // Prints 120
}
You could write a method to do this:
public static int XorAsInts(long a, long b)
{
unchecked
{
return (int)a ^ (int)b;
}
}
Then call it:
Console.WriteLine(XorAsInts(2496638211, -1798328965)); // Prints 120
Note that by casting to int, you are throwing away the top 32 bits of the long. This is what bitXor() is doing, but are you really sure this is the behaviour you want?
2496638211 is larger than a 32-bit signed integer can hold, so in C# the literal is typed as long; its hex representation is 0x94CFAD03. That leading 9 overlaps with what would be the sign bit of a 32-bit value. This also means that -1798328965, as a long, is represented as 0xFFFFFFFF94CFAD7B due to two's-complement negative number representation. So C# computes 0xFFFFFFFF00000078, which is the mathematically correct answer.
JS XOR uses 32-bit operands, so it truncates 2496638211. Thus JS gets 0x00000078, which is technically not the correct answer. My assumption is that R does the same or something similar.
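You can observe the ToInt32 truncation JS applies before bitwise operators directly in the console:
console.log(2496638211 | 0);           // -1798329085 (0x94CFAD03 reinterpreted as signed int32)
console.log(2496638211 ^ -1798328965); // 120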
In JS, use Number.isSafeInteger() and Number.MAX_SAFE_INTEGER to see which integers JS can represent exactly. In C#, compare int.MaxValue with long.MaxValue to see the difference from JS.
To recreate the JS behavior in C#, wrap the operation in an unchecked block as per Matthew Watson's post. The unchecked keyword suppresses overflow-checking for integral-type arithmetic operations and conversions.

How v8 Javascript engine performs bit operations on Int32Array values

As far as I know, the V8 JavaScript engine performs a double conversion (to i32 and back) to carry out bit operations.
Let's consider next example:
const int32 = new Int32Array(2);
int32[0] = 42;
int32[1] = 2;
const y = int32[0] | int32[1];
Does V8 make a double conversion in this case?
If not, does that mean bit operations are faster on Int32Array values?
UPDATE:
By double conversion I mean:
from 64-bit (53 bits of precision) -> 32-bit -> back to 64-bit
V8 developer here. The short answer is: there are no conversions here, bitwise operations are always performed on 32-bit integers, and an Int32Array also stores its elements as 32-bit integers.
The longer answer is "it depends". In unoptimized code, all values use a unified representation. In V8, that means number values are either represented as a "Smi" ("small integer") if they are in range (31 bits including sign, i.e. about -1 billion to +1 billion), or as "HeapNumbers" (64-bit double with a small object header) otherwise. So for an element load like int32[0], the 32-bit value is loaded from the array, inspected for range, and then either tagged as a Smi or boxed as a HeapNumber. The following | operation looks at the input, converts it to a 32-bit integer (which could be as simple as untagging a Smi, or as complex as invoking .valueOf on an object), and performs the bitwise "or". The result is then again either tagged as a Smi or boxed as a HeapNumber, so that the next operation can decide what to do with it.
If the function runs often enough to eventually get optimized, then the optimizing compiler makes use of type feedback (guarded by type checks where necessary) to elide all these conversions that are unnecessary in this case, and the simple answer will be true.
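A quick way to observe that Int32Array storage is already 32-bit (so no double round-trip is needed for the stored values):
const a = new Int32Array(1);
a[0] = 2 ** 31;        // out of int32 range
console.log(a[0]);     // -2147483648, wrapped on store
console.log(a[0] | 0); // -2147483648, the bitwise op needs no further conversion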

Math inaccuracy in Javascript: safe to use JS for important stuff?

I was bored, so I started fiddling around in the console, and stumbled onto this (ignore the syntax error):
Some variable "test" has a value, which I multiply by 10K; it suddenly changes into a different number (you could call it a rounding error, but that depends on how much accuracy you need). I then multiply that number by 10, and it changes back/again.
That raises a few questions for me:
How inaccurate is JavaScript? Has this been determined? I.e., is there a known error bound that can be taken into account?
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
I'm not sure whether this should be a separate question, but I was actually trying to round numbers to a certain amount after the decimal point. I've researched it a bit, and have found two methods:
> Method A
function roundNumber(number, digits) {
var multiple = Math.pow(10, digits);
return Math.floor(number * multiple) / multiple;
}
> Method B
function roundNumber(number, digits) {
return Number(number.toFixed(digits));
}
Intuitively I like method B more (it looks more efficient), but I don't know what's going on behind the scenes, so I can't really judge. Anyone have an idea on that? Or a way to benchmark this? And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
How inaccurate is JavaScript?
Javascript uses standard double precision floating point numbers, so the precision limitations are the same as for any other language that uses them, which is most languages. It's the native format used by the processor to handle floating point numbers.
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
No. The precision limitation lies in the way that the number is stored. Floating point numbers don't have complete accuracy, so no matter how you do the calculations you can't achieve absolute accuracy, as the result goes back into a floating point number.
If you want complete accuracy then you need to use a different data type.
Should the changed number after the second operation be interpreted as
'changing back to the original number' or 'changing again, because of
the inaccuracy'?
It's changing again.
When a number is converted to text to be displayed, it's rounded to a certain number of digits. The numbers that look like they are exact aren't; it's just that the limitations in precision don't show up.
When the number "changes back", it's just because the rounding again hides the limitations in the precision. Each calculation adds or subtracts a small inaccuracy, and sometimes it just happens to take the number closer to the value that you had originally. Even though it looks like it's more accurate, it's actually less accurate, as each calculation adds a bit of uncertainty.
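For example (the asker's exact values aren't shown, but the effect is the same):
console.log(0.1 + 0.2);         // 0.30000000000000004, the inaccuracy shows
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.1 * 10);          // 1, the tiny error in 0.1 happens to round away here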
Internally, JavaScript uses 64-bit IEEE 754 floating-point numbers, which are a widely used standard and usually guarantee about 16 significant digits of accuracy. The error you witnessed was in the 17th significant digit of the number and was really tiny.
Is there a way to [...] do math in Javascript with complete accuracy (within the limitations of its datatype).
I would say that JavaScript's math is completely accurate within the limitations of its datatype. The error you witnessed was outside of those limitations.
Are you working with calculations that require a higher degree of precision than that?
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
The number never really became more or less accurate than the original value. It was only when the value was converted into a decimal value that a rounding error became apparent. But this was not a case of the value "changing back" to an accurate number. The rounding error was just too small to display.
And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
"Why is the language this way" questions are not considered very productive here, but it is easy to get around this limitation (assuming you mean numbers and not integers). This answer has 337 upvotes: +numb.toFixed(digits);, but note that if you try to display a number produced with that expression, there's no guarantee that it will actually display with only six digits. That's probably one of the reasons why JavaScript's "round to N places" function produces a string and not a number.
I came across the same issue a few times, and with further research I was able to solve the little issues by using the library below.
Math.js Library
Sample
import {
atan2, chain, derivative, e, evaluate, log, pi, pow, round, sqrt
} from 'mathjs'
// functions and constants
round(e, 3) // 2.718
atan2(3, -3) / pi // 0.75
log(10000, 10) // 4
sqrt(-4) // 2i
pow([[-1, 2], [3, 1]], 2) // [[7, 0], [0, 7]]
derivative('x^2 + x', 'x') // 2 * x + 1
// expressions
evaluate('12 / (2.3 + 0.7)') // 4
evaluate('12.7 cm to inch') // 5 inch
evaluate('sin(45 deg) ^ 2') // 0.5
evaluate('9 / 3 + 2i') // 3 + 2i
evaluate('det([-1, 2; 3, 1])') // -7
// chaining
chain(3)
.add(4)
.multiply(2)
.done() // 14

NaN is a number? [duplicate]

Just out of curiosity.
It doesn't seem very logical that typeof NaN is number. Just like NaN === NaN or NaN == NaN returning false, by the way. Is this one of the peculiarities of JavaScript, or would there be a reason for this?
Edit: thanks for your answers. It's not an easy thing to get one's head around, though. Reading the answers and the wiki I understood more, but still, a sentence like
A comparison with a NaN always returns an unordered result even when comparing with itself. The comparison predicates are either signaling or non-signaling, the signaling versions signal an invalid exception for such comparisons. The equality and inequality predicates are non-signaling so x = x returning false can be used to test if x is a quiet NaN.
just keeps my head spinning. If someone can translate this in human (as opposed to, say, mathematician) readable language, I would be grateful.
Well, it may seem a little strange that something called "not a number" is considered a number, but NaN is still a numeric type, despite that fact :-)
NaN just means the specific value cannot be represented within the limitations of the numeric type (although that could be said for all numbers that have to be rounded to fit, but NaN is a special case).
A specific NaN is not considered equal to another NaN because they may be different values. However, NaN is still a number type, just like 2718 or 31415.
As to your updated question to explain in layman's terms:
A comparison with a NaN always returns an unordered result even when comparing with itself. The comparison predicates are either signalling or non-signalling, the signalling versions signal an invalid exception for such comparisons. The equality and inequality predicates are non-signalling so x = x returning false can be used to test if x is a quiet NaN.
All this means is (broken down into parts):
A comparison with a NaN always returns an unordered result even when comparing with itself.
Basically, a NaN is not equal to any other number, including another NaN, and even including itself.
The comparison predicates are either signalling or non-signalling, the signalling versions signal an invalid exception for such comparisons.
Attempting to do comparison (less than, greater than, and so on) operations between a NaN and another number can either result in an exception being thrown (signalling) or just getting false as the result (non-signalling or quiet).
The equality and inequality predicates are non-signalling so x = x returning false can be used to test if x is a quiet NaN.
Tests for equality (equal to, not equal to) are never signalling so using them will not cause an exception. If you have a regular number x, then x == x will always be true. If x is a NaN, then x == x will always be false. It's giving you a way to detect NaN easily (quietly).
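In code, the detection trick looks like this:
const x = 0 / 0;             // NaN
console.log(x === x);        // false, only NaN is not equal to itself
console.log(Number.isNaN(x)); // true, the modern non-coercing check
console.log(isNaN('foo'));   // true, but beware: the global isNaN coerces its argument first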
It means Not a Number. It is not a peculiarity of JavaScript but a common computer science principle.
From http://en.wikipedia.org/wiki/NaN:
There are three kinds of operation which return NaN:
Operations with a NaN as at least one operand
Indeterminate forms
The divisions 0/0, ∞/∞, ∞/−∞, −∞/∞, and −∞/−∞
The multiplications 0×∞ and 0×−∞
The power 1^∞
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions.
Real operations with complex results:
The square root of a negative number
The logarithm of a negative number
The tangent of an odd multiple of 90 degrees (or π/2 radians)
The inverse sine or cosine of a number which is less than −1 or greater than +1.
All these values may not be the same. A simple test for a NaN is to check whether value == value is false.
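A few of the listed cases, reproduced in JavaScript:
console.log(0 / 0);               // NaN (indeterminate form)
console.log(Infinity - Infinity); // NaN
console.log(0 * Infinity);        // NaN
console.log(Math.sqrt(-1));       // NaN (real square root of a negative number)
console.log(Math.log(-1));        // NaN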
The ECMAScript (JavaScript) standard specifies that Numbers are IEEE 754 floats, which include NaN as a possible value.
ECMA 262 5e Section 4.3.19: Number value
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value.
ECMA 262 5e Section 4.3.23: NaN
Number value that is a IEEE 754 "Not-a-Number" value.
IEEE 754 on Wikipedia
The IEEE Standard for Floating-Point Arithmetic is a technical standard established by the Institute of Electrical and Electronics Engineers and the most widely used standard for floating-point computation [...]
The standard defines
arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
[...]
typeof NaN returns 'number' because:
ECMAScript spec says the Number type includes NaN:
4.3.20 Number type
set of all possible Number values including the special “Not-a-Number”
(NaN) values, positive infinity, and negative infinity
So typeof returns accordingly:
11.4.3 The typeof Operator
The production UnaryExpression : typeof UnaryExpression is
evaluated as follows:
Let val be the result of evaluating UnaryExpression.
If Type(val) is Reference, then
If IsUnresolvableReference(val) is true, return "undefined".
Let val be GetValue(val).
Return a String determined by Type(val) according to Table 20.
Table 20 — typeof Operator Results
==================================================================
| Type of val | Result |
==================================================================
| Undefined | "undefined" |
|----------------------------------------------------------------|
| Null | "object" |
|----------------------------------------------------------------|
| Boolean | "boolean" |
|----------------------------------------------------------------|
| Number | "number" |
|----------------------------------------------------------------|
| String | "string" |
|----------------------------------------------------------------|
| Object (native and does | "object" |
| not implement [[Call]]) | |
|----------------------------------------------------------------|
| Object (native or host and | "function" |
| does implement [[Call]]) | |
|----------------------------------------------------------------|
| Object (host and does not | Implementation-defined except may |
| implement [[Call]]) | not be "undefined", "boolean", |
| | "number", or "string". |
------------------------------------------------------------------
This behavior is in accordance with IEEE Standard for Floating-Point Arithmetic (IEEE 754):
4.3.19 Number value
primitive value corresponding to a double-precision 64-bit binary
format IEEE 754 value
4.3.23 NaN
number value that is a IEEE 754 “Not-a-Number” value
8.5 The Number Type
The Number type has exactly 18437736874454810627 (that is, 2^64−2^53+3)
values, representing the double-precision 64-bit format IEEE 754
values as specified in the IEEE Standard for Binary Floating-Point
Arithmetic, except that the 9007199254740990 (that is, 2^53−2) distinct
“Not-a-Number” values of the IEEE Standard are represented in
ECMAScript as a single special NaN value. (Note that the NaN value
is produced by the program expression NaN.)
NaN != NaN because they are not necessarily the SAME non-number. Thus it makes a lot of sense...
This is also why floats have both +0.00 and -0.00, which are not the same: rounding may mean a value displayed as zero is not actually zero.
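The signed zeros are easy to observe:
console.log(0 === -0);         // true, == and === treat them as equal
console.log(Object.is(0, -0)); // false, they are distinct doubles
console.log(1 / -0);           // -Infinity, the sign survives division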
As for typeof, that depends on the language, and most languages will say that NaN is a float, double or number depending on how they classify it. I know of no language that will say it is an unknown type or null.
NaN is a valid floating point value (http://en.wikipedia.org/wiki/NaN)
and NaN === NaN is false because they're not necessarily the same non-number
NaN stands for Not a Number. It is a value of numeric data types (usually floating point types, but not always) that represents the result of an invalid operation such as dividing by zero.
Although its name says that it's not a number, the data type used to hold it is a numeric type. So in JavaScript, asking for the datatype of NaN will return number (as alert(typeof(NaN)) clearly demonstrates).
A better name for NaN, describing its meaning more precisely and less confusingly, would be a numerical exception. It is really another kind of exception object disguised as having a primitive type (by the language design), where at the same time it is not treated as primitive in its false self-comparison. Whence the confusion. And as long as the language "will not make up its mind" to choose between a proper exception object and a primitive numeral, the confusion will stay.
The infamous non-equality of NaN to itself, both == and === is a manifestation of the confusing design forcing this exception object into being a primitive type. This breaks the fundamental principle that a primitive is uniquely determined by its value. If NaN is preferred to be seen as exception (of which there can be different kinds), then it should not be "sold" as primitive. And if it is wanted to be primitive, that principle must hold. As long as it is broken, as we have in JavaScript, and we can't really decide between the two, the confusion leading to unnecessary cognitive load for everyone involved will remain. Which, however, is really easy to fix by simply making the choice between the two:
either make NaN a special exception object containing the useful information about how the exception arose, as opposed to throwing that information away as is currently implemented, leading to harder-to-debug code;
or make NaN an entity of the primitive type number (that could be less confusingly called "numeric"), in which case it should be equal to itself and cannot contain any other information; the latter is clearly an inferior choice.
The only conceivable advantage of forcing NaN into the number type is being able to throw it back into any numerical expression. That, however, makes it a brittle choice, because the result of any numerical expression containing NaN will either be NaN or lead to unpredictable results such as NaN < 0 evaluating to false, i.e. returning a boolean instead of preserving the exception.
And even if "things are the way they are", nothing prevents us from making that clear distinction for ourselves, to help make our code more predictable and more easily debuggable. In practice, that means identifying those exceptions and dealing with them as exceptions. Which, unfortunately, means more code, but hopefully will be mitigated by tools such as TypeScript or Flowtype.
And then we have the messy quiet vs noisy aka signalling NaN distinction. Which really is about how exceptions are handled, not the exceptions themselves, and nothing different from other exceptions.
Similarly, -Infinity and +Infinity are elements of the numeric type arising in the extension of the real line, but they are not real numbers. Mathematically, they can be represented by sequences of real numbers converging to either +Infinity or -Infinity.
Javascript uses NaN to represent anything it encounters that can't be represented any other way by its specifications. It does not mean it is not a number. It's just the easiest way to describe the encounter. NaN means that it or an object that refers to it could not be represented in any other way by javascript. For all practical purposes, it is 'unknown'. Being 'unknown' it cannot tell you what it is nor even if it is itself. It is not even the object it is assigned to. It can only tell you what it is not, and not-ness or nothingness can only be described mathematically in a programming language. Since mathematics is about numbers, javascript represents nothingness as NaN. That doesn't mean it's not a number. It means we can't read it any other way that makes sense. That's why it can't even equal itself. Because it doesn't.
This is simply because NaN is a property of the Number object in JS; it has nothing to do with it being a number.
The best way to think of NaN is that it's not a known number. That's why NaN != NaN, because each NaN value represents some unique unknown number. NaNs are necessary because floating point numbers have a limited range of values. In some cases rounding occurs, where the lower bits are lost, which leads to what appears to be nonsense like 1.0/11*11 != 1.0. Really large values that exceed this range lose their true value, with Infinity being the extreme example.
Given we only have ten fingers, any attempt to show values greater than 10 is impossible; such values must be NaNs, because we have lost the true value of this greater-than-10 value. The same is true of floating point values, where the value exceeds the limits of what can be held in a float.
NaN is still a numeric type, but it represents a value that could not be represented as a valid number.
Because NaN is a numeric data type.
NaN is a number from a type point of view, but it is not a normal number like 1, 2 or 329131. The name "Not A Number" refers to the fact that the value represented is special and belongs to the IEEE format spec's domain, not the JavaScript language's domain.
If using jQuery, I prefer isNumeric over checking the type:
console.log($.isNumeric(NaN)); // returns false
console.log($.type(NaN)); // returns number
http://api.jquery.com/jQuery.isNumeric/
Javascript has only one numeric data type, which is the standard 64-bit double-precision float. Everything is a double. NaN is a special value of double, but it's a double nonetheless.
All parseInt does is "cast" your string into a numeric data type, so the result is always of type "number"; only if the original string wasn't parseable will its value be NaN.
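For example:
console.log(typeof parseInt('123abc', 10)); // "number" (parses the leading 123)
console.log(parseInt('abc', 10));           // NaN, and its typeof is still "number"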
We could argue that NaN is a special-case object. In this case, the NaN object represents a number that makes no mathematical sense. There are some other special-case objects in math, like Infinity and so on.
You can still do some calculations with it, but that will yield strange behaviours.
More info here: http://www.concentric.net/~ttwang/tech/javafloat.htm (java based, not javascript)
You've got to love Javascript. It has some interesting little quirks.
http://wtfjs.com/page/13
Most of those quirks can be explained if you stop to work them out logically, or if you know a bit about number theory, but nevertheless they can still catch you out if you don't know about them.
By the way, I recommend reading the rest of http://wtfjs.com/ -- there's a lot more interesting quirks than this one to be found!
The value NaN is really Number.NaN, hence when you ask if it is a number, it will say yes. You did the correct thing by using the isNaN() call.
For information, NaN can also be returned by operations on Numbers that are not defined, such as 0/0 or the square root of a negative number.
It is a special value of the Number type, like POSITIVE_INFINITY.
Why? By design
An example
Imagine we are converting a string to a number:
Number("string"); // returns NaN
We changed the data type to number but its value is not a number!
