JavaScript - How To Detect Number As A Decimal (Including 1.0)

It feels like I am missing something obvious here. This has been asked a number of times - and the answer usually boils down to:
var num = 4.5;
num % 1 === 0; // false - 4.5 is a decimal
But, this fails for
var num = 1.0; // or 2.0, 3.0, ...
num % 1 // 0
Unfortunately, these don't work either:
num.toString() // "1"
typeof num // "number"
I am writing a JavaScript color parsing library, and I want to process input differently if it is given as 1.0 or 1. In my case, 1.0 really means 100% and 1 means 1.
Otherwise, both rgb 1 1 1 and rgb 1 255 255 will be parsed as rgb 255 255 255 (since I am right now taking anything <= 1 to mean a ratio).

Those numbers aren't actually decimals or integers. They're all floats. The only real difference between 1 and 1.0 is the notation that was used to create floats of equal values.
Edit: to help illustrate, consider:
1 === 1.0; // true
parseInt('1') == parseInt('1.0'); // true
parseFloat('1') === parseFloat('1.0'); // true
parseInt('1') === parseFloat('1'); // true
// etc...
Also, to demonstrate that they are really the same underlying data type:
typeof(1); // 'number'
typeof(1.0); // 'number'
Also, note that 'number' isn't ambiguous in JavaScript the way it would be in other languages, because numbers are always floats.
Edit 2: One more addition, since it's relevant. To the best of my knowledge, the only context in JavaScript in which you actually have "real and true" integers that aren't really represented as floats, is when you're doing bitwise operations. However, in this case, the interpreter converts all the floats to integers, performs the operation, and then converts the result back to a float before control is returned. Not totally pertinent to this question, but it helps to have a good understanding of Number handling in JS in general.
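As a quick illustration of that last point (this snippet is mine, not from the original answer):
// Bitwise operators truncate the float to a signed 32-bit integer first,
// then the result comes back as an ordinary Number (a float) again.
console.log(1.5 | 0);             // 1  - fractional part dropped
console.log(-1.5 | 0);            // -1 - truncation toward zero
console.log(Math.pow(2, 31) | 0); // -2147483648 - wraps around at 32 bits
console.log(typeof (1.5 | 0));    // "number" - still the same Number type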

Let your script parse the input as a string; then it's just a matter of checking whether there is a decimal point, like this:
mystring.indexOf('.');
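For example, a minimal sketch of that check (the variable names are just for illustration):
// Look at the raw token as a string before converting it to a number.
var token = "1.0";
var hasPoint = token.indexOf('.') !== -1; // true for "1.0", false for "1"
var value = parseFloat(token);            // 1 in both cases
console.log(hasPoint, value);             // true 1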

Number.isInteger(4.5)
Number.isInteger() is part of the ES6 standard and not supported in IE11.
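For example (note that Number.isInteger(1.0) is still true, since 1.0 and 1 are the same value, so this alone won't distinguish the two spellings). A rough fallback for older browsers, along the lines of the commonly cited polyfill:
console.log(Number.isInteger(4.5)); // false
console.log(Number.isInteger(4));   // true
console.log(Number.isInteger(1.0)); // true - 1.0 and 1 are the same value

// Fallback for environments without Number.isInteger (e.g. IE11):
if (!Number.isInteger) {
  Number.isInteger = function (value) {
    return typeof value === 'number' && isFinite(value) && Math.floor(value) === value;
  };
}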

You'll have to do it when parsing the string. If there's a decimal point in the string, treat it as percentage, otherwise it's just the integer value.
So, e.g.:
rgb 1 1 1 // same as #010101
rgb 1.0 1 1 // same as #ff0101
Since the rgb is there, you're parsing the string anyway. Just look for . in there as you're doing it.
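As a rough sketch of that idea (the function and variable names here are hypothetical, not from the library in question):
// Convert one channel token: a decimal point means "ratio of 255",
// otherwise the token is taken as a plain 0-255 value.
function parseChannel(token) {
  var n = parseFloat(token);
  return token.indexOf('.') !== -1 ? Math.round(n * 255) : n;
}

"rgb 1 1 1".split(' ').slice(1).map(parseChannel);   // [1, 1, 1]
"rgb 1.0 1 1".split(' ').slice(1).map(parseChannel); // [255, 1, 1]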

Well, as far as the compiler is concerned, there is no difference between 1.0 and 1, and because there is no difference, it is impossible to tell the difference between them. You should change it from 1.0 to 100 for the percentage thing. That might fix it.

var num = 1;
and
var num = 1.0;
are the same. You mention that you want to treat them differently when given from a user. You will want to parse the difference when it is still a string and convert it to the appropriate number.

Related

Should this number be subtracted by 1 when implementing the Mersenne Twister?

I found this snippet online along with this Stack Overflow post which converts it into a TypeScript class.
I basically copied and pasted it verbatim (because I am not qualified to modify this sort of cryptographic code), but I noticed that VS Code has a little underline in the very last function:
/**
 * generates a random number on [0,1) with 53-bit resolution
 */
nextNumber53(): number {
    let a = this._nextInt32() >>> 5;
    let b = this._nextInt32() >>> 6;
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
}
Specifically the 9007199254740992.0
VS Code says Numeric literals with absolute values equal to 2^53 or greater are too large to be represented accurately as integers.ts(80008)
I notice that if I subtract that number by one and instead make it 9007199254740991.0, then the warning goes away. But I don't necessarily want to modify the code and break it if this is indeed a significant difference.
Basically, I am unsure, because while my intuition says that having a numerical overflow is bad, my intuition also says that I shouldn't try to fix cryptographic code that was posted in several places, as it is probably correct.
But is it? Or should this number be subtracted by one?
9007199254740992 is the right value to use if you want uniform values in [0,1), i.e. 0.0 <= x < 1.0.
This is just the automatic warning going awry; this value can be represented accurately by a JavaScript Number, i.e. a 64-bit float. It's just 2^53, and binary IEEE 754 floats have no trouble with numbers of this form (it would even be represented accurately by a 32-bit float).
Using 9007199254740991 would make the range [0,1], i.e. 0.0 <= x <= 1.0. Most libraries generate uniform values in [0,1) and other distributions are derived from that, but you are obviously free to do whatever is best for your application.
Note that the actual chance of getting the maximum value back is 2^-53 (~1e-16), so you're unlikely to actually see it in practice.
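A quick way to convince yourself of the bound (just a sanity check, not part of the original snippet):
// a is at most 2^27 - 1 (after >>> 5) and b is at most 2^26 - 1 (after >>> 6),
// so the largest possible output is (2^53 - 1) / 2^53, which is strictly less than 1.
var maxA = Math.pow(2, 27) - 1;
var maxB = Math.pow(2, 26) - 1;
var max = (maxA * 67108864.0 + maxB) * (1.0 / 9007199254740992.0);
console.log(max);     // 0.9999999999999999
console.log(max < 1); // true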

Javascript scientific notation floats vs integers explained

I am working with js numbers and have lack of experience in that. So, I would like to ask few questions:
2.2932600144518896e+160
Is this a float or an integer? If it's a float, how can I round it to two decimals (to get 2.29)? And if it's an integer, I suppose it's a very large number, and then I have a different problem.
Thanks
Technically, as said in comments, this is a Number.
What you can do if you want the number (not its string representation):
var x = 2.2932600144518896e+160;
var magnitude = Math.floor(Math.log10(x)) + 1;
console.log(Math.round(x / Math.pow(10, magnitude - 3)) * Math.pow(10, magnitude - 3));
What's the problem with that? Floating point operations may not be precise, so digits other than 0 can appear after the rounded part.
To have this number really "rounded", you can only achieve it through a string (but then you can't do any arithmetic with it).
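If the string form is acceptable, toExponential keeps the E-notation and rounds the mantissa for you (this is just an alternative sketch, not what the answer above does):
var x = 2.2932600144518896e+160;
console.log(x.toExponential(2)); // "2.29e+160" - a string, so no further arithmetic on it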
JavaScript only has one Number type, so it is technically neither a float nor an integer.
However this isn't really relevant as the value (or rather representation of it) is not specific to JavaScript and uses E-Notation which is a standard way to write very large/small numbers.
Taking this into account, 2.2932600144518896e+160 is equivalent to 2.2932600144518896 * Math.pow(10, 160), which is approximately 229 followed by 158 zeroes, i.e. very flippin' big.

Math.sin() Different Precision between Node.js and C#

I have a precision problem in the last digit after the decimal point. The JavaScript code generates one fewer digit compared with the C# code.
Here is the simple Node.js code
var seed = 45;
var x = Math.sin(seed) * 0.5;
console.log(x);//0.4254517622670592
Here is the simple C# code
public String pseudorandom()
{
    int seed = 45;
    double num = Math.Sin(seed) * (0.5);
    return num.ToString("G15"); // 0.42545176226705922
}
How to achieve the same precision?
The JavaScript Number type is quite complex. It looks like the floating point format follows IEEE 754-2008, but some aspects are left to the implementation. See http://www.ecma-international.org/ecma-262/6.0/#sec-number-objects sec 12.7.
There is a note:
The output of toFixed may be more precise than toString for some
values because toString only prints enough significant digits to
distinguish the number from adjacent number values. For example,
(1000000000000000128).toString() returns "1000000000000000100", while
(1000000000000000128).toFixed(0) returns "1000000000000000128".
Hence to get full digit accuracy you need something like
var seed = 45;
var x = Math.sin(seed) * 0.5;
x.toFixed(17); // on my platform it's "0.42545176226705922"
Also, note that the specification for the implementation of sin and cos allows for some variety in the actual algorithm; it's only guaranteed to be within +/- 1 ULP.
Using Java, the printing algorithm is different. Even forcing 17 digits gives the result as 0.42545176226705920.
You can check you are getting the same bit patterns using x.toString(2) and Double.doubleToLongBits(x) in Java.
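On the JavaScript side, one way to get at the exact IEEE 754 bits (comparable to what Java's Double.doubleToLongBits gives you) is via a DataView; this is just a sketch, assuming BigInt support:
// Returns the 64 raw bits of a double as a 16-digit hex string.
function doubleToBits(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);                                   // big-endian by default
  return view.getBigUint64(0).toString(16).padStart(16, '0');
}
console.log(doubleToBits(Math.sin(45) * 0.5)); // compare against Java's bit pattern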
return num.ToString("G15");//0.42545176226705922
actually returns "0.425451762267059" (the leading 0 is not a significant digit, so that's 15 decimal places in this example), and not the precision shown in the comment after it.
So you would use:
return num.ToString("G16");
to get "0.4254517622670592"
(for your example, where the digit before the decimal point is always 0, G16 gives 16 decimal places).

What does Math.random() do in this JavaScript snippet?

I'm watching this Google I/O presentation from 2011 https://www.youtube.com/watch?v=M3uWx-fhjUc
At minute 39:31, Michael shows the output of the closure compiler, which looks like the code included below.
My question is what exactly is this code doing (how and why)
// Question #1 - floor & random? 2147483648?
Math.floor(Math.random() * 2147483648).toString(36);
var b = /&/g,
    c = /</g,
    d = />/g,
    e = /\"/g,
    f = /[&<>\"]/;
// Question #2 - sanitizing input, I get it...
// but f.test(a) && ([replaces]) ?
function g(a) {
    a = String(a);
    f.test(a) && (
        a.indexOf("&") != -1 && (a = a.replace(b, "&amp;")),
        a.indexOf("<") != -1 && (a = a.replace(c, "&lt;")),
        a.indexOf(">") != -1 && (a = a.replace(d, "&gt;")),
        a.indexOf('"') != -1 && (a = a.replace(e, "&quot;"))
    );
    return a;
};
// Question #3 - void 0 ???
var h = document.getElementById("submit-button"),
    i,
    j = {
        label: void 0,
        a: void 0
    };
i = '<button title="' + g(j.a) + '"><span>' + g(j.label) + "</span></button>";
h.innerHTML = i;
Edit
Thanks for the insightful answers. I'm still really curious about the reason why the compiler threw in that random string generation at the top of the script. Surely there must be a good reason for it. Anyone???
1) This code is pulled from the Closure Library. It is simply creating a random string. In later versions it has been replaced to simply create a large random integer that is then concatenated to a string:
'closure_uid_' + ((Math.random() * 1e9) >>> 0)
This simplified version is easier for the Closure Compiler to remove, so you won't see it left over like it was previously. Specifically, the compiler assumes "toString" with no arguments does not cause visible state changes. It doesn't make the same assumption about toString calls with parameters, however. You can read more about the compiler's assumptions here:
https://code.google.com/p/closure-compiler/wiki/CompilerAssumptions
2) At some point, someone determined it was faster to test for the characters that might need to be replaced before making the "replace" calls, on the assumption that most strings don't need to be escaped.
3) As others have stated the void operator always returns undefined, and "void 0" is simply a reasonable way to write "undefined". It is pretty useless in normal usage.
1) I have no idea what the point of number 1 is.
2) Looks to make sure that any symbols are properly converted into their corresponding HTML entities, so yes, basically sanitizing the input to make sure it is HTML-safe.
3) void 0 is essentially a really safe way to get undefined. Since the global undefined in JavaScript was historically mutable (i.e. it could be set to something else), it's not always safe to assume undefined is actually equal to the undefined value you expect.
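For example (an illustrative snippet, not from the answer):
console.log(void 0 === undefined); // true - void always evaluates to undefined
console.log(void "anything");      // undefined - the operand is evaluated, the result discarded
// In very old engines the global undefined could be reassigned, which is one
// reason minifiers prefer the shorter, tamper-proof void 0.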
When in doubt, check other bases.
2147483648 (base 10) = 0x80000000 (base 16). So it's just making a random number which is within the range of a 32-bit signed int. floor is converting it to an actual int, then toString(36) is converting it to a 36-character alphabet, which is 0-9 (10 characters) plus a-z (26 characters).
The end-result of that first line is a string of random numbers and letters. There will be 6 of them (36^6 = 2176782336), but the first one won't be quite as random as the others (won't be late in the alphabet). Edit: Adrian has worked this out properly in his answer; the first letter can be any of the 36 characters, but is slightly less likely to be Z. The other letters have a small bias towards lower values.
For question 2, if you mean this a = String(a); then yes, it is ensuring that a is a string. This is also a hint to the compiler so that it can make better optimisations if it's able to convert it to machine code (I don't know if they can for strings though).
Edit: OK you clarified the question. f.test(a) && (...) is a common trick which uses short-circuit evaluation. It's effectively saying if(f.test(a)){...}. Don't use it like that in real code because it makes it less readable (although in some cases it is more readable). If you're wondering about test, it's to do with regular expressions.
For question 3, it's new to me too! But see here: What does `void 0` mean? (quick google search. Turns out it's interesting, but weird)
There's a number of different questions rolled into one, but considering the question title I'll just focus on the first here:
Math.floor(Math.random() * 2147483648).toString(36);
In actual fact, this doesn't do anything - as the value is discarded rather than assigned. However, the idea of this is to generate a number between 0 and 2 ^ 31 - 1 and return it in base 36.
Math.random() returns a number from 0 (inclusive) to 1 (exclusive). It is then multiplied by 2^31 to produce the range mentioned. The .toString(36) then converts it to base 36, represented by 0 to 9 followed by a to z.
The end result ranges from 0 to zik0zj (the base-36 form of 2147483647).
As to why it's there in the first place ... well, examine the slide. This line appears right at the top. Although this is pure conjecture, I actually suspect that the code was cropped down to what's visible, and there was something immediately above it that this was assigned to.

How to output the number 2.00 as a complete string?

Before anything, I want to say sorry, but this is not a duplicate. Every answer on the other postings has the same problem: there is no float or int in JS (only Number), so when you write an isInt() function, 2.00 is always detected as an integer. I want 2.00 detected as a float, so I have to stringify it first.
function isInt(i) {
    if (i.toFixed) {
        if (i % 1 === 0) {
            return true; // the problem: 2.00 is always detected as an integer here
                         // I want 2.00 detected as a float
        } else {
            return false;
        }
    } else {
        return false;
    }
}
Then I thought I would stringify the 2.00 and split it with split('.'). But toString doesn't do it:
var i = 2.00;
alert(i.toString());
// Why does this always give 2? I want the characters behind the point.
So, how can I do that? I want 2.00 to give "2.00", not just "2".
Thank you for answering
You can use Number.toFixed(n);
var i = 2;
alert( i.toFixed(2) ); // "2.00"
var i = 1.2345;
alert( i.toFixed(2) ); // "1.23"
Also note that 2 === 2.00 but 2 !== "2.00".
Answer to revision:
Within JavaScript there is absolutely no way to distinguish between 2, 2.0, and 2.000. Therefore, without some additional decimal places supplied, you will never be able to detect from var a = 2.00 that the 2 was ever anything other than an integer (per your method) once it has been assigned.
Case in point, despite the [misleading] built-in methods:
typeof parseInt('2.00', 10) == typeof parseFloat('2.00')
'number' == 'number'
/* true */
Original Answer:
JavaScript doesn't have hard-typed scalar numbers, just a single Number type. For that reason, and because you really only have 1 significant figure, JavaScript takes your 2.00 and treats it as an "integer" [used loosely], so no decimal places are present. To JavaScript: 2 = 2.0 = 2.00 = 2.00000.
Case in point, if I gave you the number 12.000000000000 and asked you to remember it and give it to someone a week from now, would you spend the time remembering how many zeros there were, or focus on the fact that I handed you the number 12? (Twelve takes a lot less effort to remember than twelve with that many decimal places.)
As far as int vs float/double/real, you're really only describing the type of number from your perspective, not JavaScript's. Think of calling a number in JavaScript an int as giving it a label, not a definition. To outline:
Value:    To JavaScript:    To Us:
------    --------------    ------
1         Number            integer
1.00      Number            decimal
1.23      Number            decimal
No matter what we may classify it as, JavaScript still only sees it as a Number.
If you need to keep decimal places, Number.toFixed(n) is going to be your best bet.
For example:
// only 1 sig-fig
var a = 2.00;
console.log(''+a); // 2
console.log(a.toFixed(2)); // 2.00
// 3 sig-figs
var b = 2.01
console.log(''+b); // 2.01
console.log(b.toFixed(2)); // 2.01
BTW, prefixing the variable with ''+ is the same as calling .toString(); it's just shorthand for the cast. The same outcome would result if I had used a.toString() or b.toString().
To stringify (Chrome at least does so) use this:
i.toPrecision(3)
This will show 2 decimal digits.
Also NULL's solution does it very well without having to calculate the exact precision. i.e. .toFixed(decimals) is your best friend
You can't do exactly what you want to do.
2.00 gets converted to the number 2 in JavaScript, without decimal points. You can add it back in if you want using .toFixed(2), shown above.
I suggest keeping "2.00" as a string, as well as parsing it as a number, if necessary, for arithmetic. That way you can distinguish whether the number 2 was entered as "2", or "2.00", or "2.000000". Output the string, not the number, if you want to preserve the original number of decimal places.
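A minimal sketch of that string-first approach (the names here are just for illustration):
// Keep the raw string so the formatting survives, and derive the number from it.
function parseEntry(raw) {
  return {
    raw: raw,                             // "2.00" stays "2.00"
    value: parseFloat(raw),               // 2, for arithmetic
    hasDecimals: raw.indexOf('.') !== -1  // true for "2.00", false for "2"
  };
}

parseEntry("2.00"); // { raw: "2.00", value: 2, hasDecimals: true }
parseEntry("2");    // { raw: "2", value: 2, hasDecimals: false }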
