How fast is parseFloat vs. float literals in JavaScript?

I'm getting a huge dataset from a client's internal API. It contains a lot of numerical price data such as $31.23. He gives the values to me as {"spend":"21.23"}, which is fine, but I was concerned that running parseFloat() on 1000+ of those values (on top of graphing them) might be resource-heavy in the client's browser.
Has anyone done this?
==Update==
I'm sorry, my question was too vague. My concern was that each value is a string and I'm parsing it. My real question is whether parseFloat is slower than a plain number literal. I.e., is appending parseFloat("12.12") to a div faster than simply appending 12.12, and if so, how much faster?

On my work machine (Mac OS X, Intel 2 GHz Core i7), I saw the following results on jsperf.com:
Browser   | parseFloat calls per second
----------|----------------------------
Chrome 12 | 5.3 million
Firefox 6 | 21.7 million
IE 8      | 666.9 thousand  <- totally unfair, ran on a virtual machine
Opera 11  | 5.2 million
This is far from an exhaustive survey, but at a rough minimum of over 600 thousand calls per second (and that on a virtual machine), I think you should be good.
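For reference, the test boils down to something like the following microbenchmark (my own condensed sketch, not the exact jsperf test; the iteration count is arbitrary, and the sum accumulator is there so the engine can't optimize the loop away entirely):

var iterations = 1000000;
var sum = 0;
var start = +new Date;
for (var i = 0; i < iterations; i++) {
    sum += parseFloat("21.23"); // parse the string each time, as the API delivers it
}
alert((iterations / (+new Date - start)) * 1000 + " parseFloat calls per second");

Absolute numbers will differ wildly by browser and machine, as the table above shows; only the order of magnitude matters here.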

Regarding the speed of parseFloat or parseInt, MDN recommends using the unary operator + instead, as in
+"12.12"
=> 12.12
MDN Link
The unary plus operator precedes its operand and evaluates to its operand, but attempts to convert it into a number if it isn't already. Although unary negation (-) also can convert non-numbers, unary plus is the fastest and preferred way of converting something into a number, because it does not perform any other operations on the number.
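Note that the two are not interchangeable for all inputs. Some well-known differences (standard JavaScript behavior):

parseFloat("12.12")   // 12.12
+"12.12"              // 12.12
parseFloat("12.12px") // 12.12 -- parses the leading numeric portion, ignores the rest
+"12.12px"            // NaN   -- the whole string must be numeric
parseFloat("")        // NaN
+""                   // 0

So if the API might ever send malformed values, the two conversions fail in different ways.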

Bear in mind that parseFloat() performance depends on the browser. As far as I know, one browser could choke after 200 values while another works just fine after 10,000.
It depends on how many tabs the browser has open, what other scripts are running, how much CPU is free for processing, and of course which browser it is.
If your client uses Firefox with 1000 addons, it will never run your script smoothly.
Just my opinion, but if you want to do it properly you should preprocess on a server and then display the results.

Type
javascript:a = +new Date; x = 100000; while (--x) parseFloat("21.23"); alert(+new Date - a);
Into your url bar.
This is the only way to know for sure.
In honesty, you can't answer the question definitively. It depends on the browser; for example, Firefox 8 should be faster than Firefox 6, and so on.
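If you'd rather run it from the developer console than the URL bar, console.time gives the same rough measurement (same caveat: a clever engine may optimize the unused result away):

console.time("parseFloat");
for (var i = 0; i < 100000; i++) parseFloat("21.23");
console.timeEnd("parseFloat");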

Related

What is the maximum length of a string in JavaScript?

I thought this would be an easy Google search, but nothing came up.
Say I create a loop that continually appends one character at a time to a variable.
At what point will the program crash?
I expect it will take <6 seconds for someone to comment "Why don't you try it and find out?", but trying it wouldn't tell me the answer to the question. Is it implementation-specific? Is it defined anywhere? It doesn't seem to be in the ECMAScript spec.
Are there any known limitations beyond available memory? Does performance degrade with strings holding Mbs or Gbs of text?
For the record, I did try running that loop... and it just crashed my browser :( Maybe I'll make some adjustments and try again.
ECMAScript 2016 (ed. 7) established a maximum length of 2^53 - 1 elements. Previously, no maximum length was specified.
In Firefox, strings have a maximum length of 2^30 - 2 (~1GB). In versions prior to Firefox 65, the maximum length was 2^28 - 1 (~256MB).
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/length
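If you want to probe the limit empirically without hanging the tab outright, doubling with a try/catch is gentler than appending one character at a time (a sketch assuming String.prototype.repeat, i.e. a modern engine; allocating near the cap can still exhaust memory, so save your work first):

var len = 1;
try {
    for (;;) {
        "x".repeat(len); // throws RangeError once len exceeds the engine's cap
        len *= 2;
    }
} catch (e) {
    console.log("First failure at length " + len + ": " + e.name);
}

In engines that enforce the cap cleanly this reports a RangeError; others may simply run out of memory before reaching it.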

Implications of the unique binary representation for NaN in JavaScript

Would I be correct in assuming that the reason that JavaScript only supports one binary representation for NaN is that it allows interpreters to speed up operations involving NaNs by checking for that specific bit pattern as a 64 bit integer rather than rely upon the FPU handling them?
Stephen Canon's comment spurred me to run some timing tests, which I guess I should have done in the first place. I'm posting the results as an answer in case they're of interest to anyone else...
Using my Intel Core2 Quad CPU (2.33GHz) PC I compared
for(i=0;i<100000000;++i) x += 1.0;
in C++ and JavaScript with x first equal to 0.0 and then to NaN.
C++ x==0.0: 124ms
C++ x==NaN: 11888ms
JS x==0.0: 268ms
JS x==NaN: 432ms
So it seems that JavaScript is exploiting the fact that it already has to dynamically dispatch arithmetic operators to treat NaN as a special case.
I'm guessing that it's a hidden type that has no data, hence only one binary representation for NaN.
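For anyone who wants to reproduce the JavaScript half, the loop above wrapped in a timing harness looks like this (my own wrapper, runnable in a browser console or Node; absolute times will of course differ from the figures quoted):

function timeAddition(start) {
    var x = start;
    var t0 = +new Date;
    for (var i = 0; i < 100000000; ++i) x += 1.0;
    return (+new Date - t0) + "ms";
}
console.log("x == 0.0: " + timeAddition(0.0));
console.log("x == NaN: " + timeAddition(NaN));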

Has anyone faced Math.js's auto-approximation issue and found a workaround for it?

If I enter any number of more than 18 digits, the library returns an approximate value, not the exact value. Let's say the user enters "03030130000309293689"; it returns "3030130000309293600", and when the user enters "3030130000309293799" it still returns "3030130000309293600". Can we stop this approximation? Is this a bug, and if not, how can I avoid it?
Due to this approximation, if a user enters "03030130000309293695 == 03030130000309293799" it will always return true, which is totally wrong.
github -- https://github.com/josdejong/mathjs
You can try this at http://mathjs.org/ (in the demo notepad).
This is already released to production!
I think that when a user enters something like "03030130000309293695 == 03030130000309293799", with plain numbers on both sides, we could fall back to string comparison; approximation would take care of all the other cases. I say this because when I use the same library to compute "73712347274723714284 * 73712347274723713000", it gives the result in scientific notation.
03030130000309293695 and 03030130000309293799 are pretty much the same number.
HOW?
According to this answer, the limit of a JS number is 9007199254740992 (2^53). Both of your numbers are greater than that limit, so precision is lost. You probably need to use a library like Big.js.
It's not a library issue; it's a language architecture issue. You can even open your browser console and type in your equation to see that it is truthy.
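Indeed, you can see this without Math.js at all; the two values collapse to the same 64-bit double before any library code runs (leading zeros dropped here, since 0-prefixed number literals are legacy octal syntax):

Number("3030130000309293695") === Number("3030130000309293799") // true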
This is not really a problem with Math.js but a result of how numbers work in JavaScript. JavaScript uses 64-bit binary floating-point numbers (also known as a 64-bit double in C). As such, it has only 53 bits of significand in which to store your number's digits.
I've written an explanation here: Javascript number gets another value
You can read the wikipedia page for 64 bit doubles for more detail: http://en.wikipedia.org/wiki/Double_precision_floating-point_format
Now for the second part of your question:
If not then how can I avoid approximation?
There are several libraries in javascript that implements big numbers:
For the browser there's this: https://github.com/MikeMcl/bignumber.js
It is written in pure JavaScript and should also be usable in Node.js.
For node.js there's this: https://github.com/justmoon/node-bignum
That one is a wrapper around the big-number library used by OpenSSL. It's written in C, so it can't be loaded in the browser, but it should be faster and perhaps more memory-efficient in Node.js.
The latest version of math.js has support for bignumbers; see the docs:
https://github.com/josdejong/mathjs/blob/master/docs/datatypes/bignumbers.md
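A sketch of what that looks like, using the documented math.bignumber and math.config functions (exact option names may vary between versions, so check the docs linked above):

math.config({
    number: 'BigNumber', // use BigNumbers instead of native doubles by default
    precision: 64        // significant digits to keep
});
var a = math.bignumber('3030130000309293695');
var b = math.bignumber('3030130000309293799');
math.equal(a, b); // false -- all digits are preserved exactly

With native numbers the same comparison returns true, for the reasons described above.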

Why is Google Chrome's Math.random number generator not *that* random?

I ran into an odd "bug" today when I was running some unit tests in various browsers. I had run the tests in Firefox many times before today, and even IE but apparently not Chrome (v19-dev) yet. When I ran them in Chrome it consistently failed one test because two values I was calculating did not match.
When I really dug into what was happening I realized that the issue was that I was assuming that if I filled an array with 100,000 Math.random() values that they would all be unique (there wouldn't be any collisions). Turned out that in Chrome that is not true.
In Chrome I was consistently getting at least two pairs of values that matched out of 100,000. Firefox and IE9 never experience a collision. Here is a jsfiddle I wrote just for testing this that creates 1M Math.random() entries in an array: http://jsfiddle.net/pseudosavant/bcduj/
Does anyone know why the Chrome pseudo-random number generator that is used for Math.random is really not that random? It seems like this could have implications for any client-side js encryption routines that ever use Math.random.
Apparently Math.random() in V8 only works with 32 bit values (and didn't even correctly randomize all of those in the past). And with 32 bits, the probability of a collision reaches 50% around 2^16 = 65k values...
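The test from the question condenses to something like this (my own shortened version, not the linked jsfiddle):

var seen = {};
var collisions = 0;
for (var i = 0; i < 1000000; i++) {
    var v = Math.random();
    if (seen[v]) collisions++;
    seen[v] = true;
}
console.log(collisions + " collisions in 1,000,000 values");

On a generator with only 32 bits of state you should expect this to report collisions well before the millionth value.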
Other answers have explained the issue. If you're after better pseudo-random number generation in JavaScript, I'd recommend this page as a good place to start:
http://baagoe.com/en/RandomMusings/javascript/
I adapted one of the algorithms on this page for a script I'm using to generate UUIDs in the browser and had no collisions in my tests.
UPDATE 22 October 2013
The pages linked to above are no longer live. Here's a link to a snapshot from the Wayback Machine:
http://web.archive.org/web/20120502223108/http://baagoe.com/en/RandomMusings/javascript/
And here's a link to a Node.js module that includes Alea.js:
https://npmjs.org/package/alea
See https://medium.com/@betable/tifu-by-using-math-random-f1c308c4fd9d:
If we analyze the first sub-generator independently we see that it has 32 bits of internal state. It’s not a full-cycle generator — its actual cycle length is about 590 million (18,030*2¹⁵-1, the math is tricky but it’s explained here and here, or you can just trust me). So we can only produce a maximum of 590 million distinct request identifiers with this generator. If they were randomly selected there would be a 50% chance of collision after generating just 30,000 identifiers.
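Both figures line up with the standard birthday approximation: with N equally likely values, a 50% collision chance arrives near sqrt(2 * N * ln 2) draws. A quick check (my own arithmetic, not from the article):

function fiftyPercentPoint(N) {
    return Math.sqrt(2 * N * Math.LN2); // draws needed for ~50% collision odds
}
fiftyPercentPoint(Math.pow(2, 32));             // ~77,000 -- the "2^16 = 65k" ballpark above
fiftyPercentPoint(18030 * Math.pow(2, 15) - 1); // ~28,600 -- the "just 30,000" in the quote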

Greasemonkey Storage

Is there any limit on how much data can be stored using GM_setValue?
Greasemonkey stores the values in Firefox preferences. Open about:config and look for them.
According to http://diveintogreasemonkey.org/api/gm_getvalue.html, you can find them in the greasemonkey.scriptvals branch.
This sqlite info on its limits shows some default limits for strings and blobs, but they may be changed by Firefox.
More information is in the Greasespot Wiki:
The Firefox preference store is not designed for storing large amounts of data. There are no hard limits, but very large amounts of data may cause Firefox to consume more memory and/or run more slowly.
The link refers to a discussion on the Greasemonkey mailing list, where Anthony Lieuallen answers the same question you posted:
I've just tested this. Running up to a 32 meg string seems to work without major issues, but 64 or 128 starts to thrash the disk for virtual memory a fair deal.
According to the site you provided, "The value argument can be a string, boolean, or integer."
Obviously, a string can hold far more information than an integer or boolean.
Since Greasemonkey scripts are JavaScript, the max length for a GM_setValue string is the max length of a JavaScript string, and the JavaScript engine (which is browser-specific) determines that maximum.
I don't know the exact figure, but you could write a script to determine it: keep doubling the length until you get an error, then try a value halfway between maxGoodLen and minBadLen until maxGoodLen = minBadLen - 1.
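A sketch of that search (hypothetical probe code; it assumes GM_setValue throws on an oversized value, which may not hold in every Greasemonkey version, and writing very large values may thrash the disk as described above):

function fits(len) {
    try {
        GM_setValue('probe', new Array(len + 1).join('x')); // a len-character string
        return true;
    } catch (e) {
        return false;
    }
}
var maxGoodLen = 1;
while (fits(maxGoodLen * 2)) maxGoodLen *= 2;   // phase 1: double until failure
var minBadLen = maxGoodLen * 2;
while (minBadLen - maxGoodLen > 1) {            // phase 2: binary search
    var mid = Math.floor((maxGoodLen + minBadLen) / 2);
    if (fits(mid)) maxGoodLen = mid; else minBadLen = mid;
}
console.log('Max storable length: ' + maxGoodLen);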
