Are there any good JavaScript currency or decimal classes? - javascript

I am trying to deal with JavaScript values such as 23.45, but I want to be able to do mathematical operations on these values (addition, subtraction, multiplication, division) without running into floating point issues. Yes, I might need to round the results sometimes, but I would like it to give reasonable answers.
Consider this in JavaScript:
24.56 * .3
Yields
7.36799999999
I would like it to come out with 7.368.
Most languages have either a decimal or currency data type to deal with this. Has anyone built a class that can handle this sort of data effectively, or is there any other solution for dealing with these sorts of numbers without having to constantly adjust for floating point errors?

Integers.
There is no need to use floating point for currency. Use fixed-point arithmetic, where the number of decimal places is 0.
You count in pennies (or possibly in tenths of a penny).
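The counting-in-pennies approach can be sketched in a few lines of plain JavaScript (the helper names here are invented for illustration, not from any library):

```javascript
// A minimal sketch of fixed-point currency math: amounts are held as
// integer cents, and only converted to dollars for display.
function toCents(dollars) {
  return Math.round(dollars * 100); // round once, at the boundary
}

function formatCents(cents) {
  return (cents / 100).toFixed(2);
}

// 24.56 * 0.3 drifts in floating point, but in integer cents:
const price = toCents(24.56);               // 2456
const discounted = Math.round(price * 0.3); // 737 cents, rounded once

console.log(formatCents(discounted)); // "7.37"
```

The key discipline is that rounding happens at well-defined points (input and after each multiplication/division), instead of accumulating silently across every operation.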

Instead of using integers (which have their own problems),
I would use the bignumber.js library.

There is Math
The Math object is built into the JavaScript spec, so every browser has it natively.
As for data types, JavaScript has Number. That's it; we have no other number data type. The best thing to do is to try to work with integers.

Doing some more searching, I came across this.
https://stackoverflow.com/questions/744099/javascript-bigdecimal-library
It looks like none of them are ideal, but they do the job.

ku4jQuery-kernel contains both a money class and a math utility that provide arithmetic operations and rounding, including round, roundUp and roundDown. These methods are nice because you can pass a precision to round to. For example, $.math.round(3.4567, -2) will round 3.4567 to the nearest 10^-2. The same goes for money: $.money(100.87).divide(2).roundUp().toString() will yield "$50.44". You can go further and pass the denomination of the money as a second parameter, say "B" for Bitcoin: $.money(100.87, "B").divide(2).roundUp().toString(). You can find more about this library at ku4jQuery-kernel, and more libraries that you may find useful at kodmunki github. These libraries are closely maintained and used in many production projects. If you decide to try them, I hope that you find them useful! Happy coding :{)}

New kid on the block: moneysafe. It's open-source, and uses a functional approach that allows smart composition.
$(.1) + $(.2) === $(.3).cents;
https://github.com/ericelliott/moneysafe

The toFixed method can round a number to a given number of decimals.
There is also a JavaScript sprintf implementation.
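For display purposes, toFixed alone is often enough to hide the floating-point noise from the question's example:

```javascript
// toFixed rounds to a fixed number of decimal places and returns a string,
// which is usually sufficient at display time.
const raw = 24.56 * 0.3;       // not exactly 7.368 in binary floating point
console.log(raw.toFixed(3));   // "7.368"
console.log((0.1 + 0.2).toFixed(2)); // "0.30"
```

Note that the result is a string; converting it back with Number() and doing further arithmetic reintroduces binary floating-point representation, so toFixed is a formatting fix, not a computation fix.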

Related

V8 (or other JS engine) BigInt Implementation - Displaying as Decimal

I'm wondering if someone might be able to explain a specific aspect of the JavaScript BigInt implementation to me.
The overview implementation I understand - rather than operating in base 10, build an array representing digits effectively operating in base 2^32/2^64 depending on build architecture.
What I'm curious about is the display/console.log implementation for this type - it's incredibly fast for most common cases, to the point where if you didn't know anything about the implementation you'd probably assume it was native. But, knowing what I do about the implementation, it's incredible to me that it's able to do the decimal cast/string concatenation math as quickly as it can, and I'm deeply curious how it works.
A moderate look into bigint.cc and bigint.h in the Chromium source has only confused me further, as there are a number of methods whose signatures are defined, but whose implementations I can't seem to find.
I'd appreciate even being pointed to another spot in the Chromium source which contains the decimal cast implementation.
(V8 developer here.)
@Bergi basically provided the relevant links already, so just to sum it up:
Formatting a binary number as a decimal string is a "base conversion", and its basic building block is:
while (number > 0) {
  next_char = "0123456789"[number % 10];
  number = number / 10; // Truncating integer division.
}
(Assuming that next_char is also written into some string backing store; this string is being built up from the right.)
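That digit-peeling loop can be sketched in JavaScript using BigInt. This is only the schoolbook algorithm described above; V8's actual implementation operates on raw machine-word digits rather than on a BigInt value:

```javascript
// Schoolbook base conversion: peel off the last decimal digit with % 10
// and shrink the number with truncating division, building the string
// from the right.
function toDecimalString(n) {
  if (n === 0n) return "0";
  let digits = "";
  while (n > 0n) {
    digits = "0123456789"[Number(n % 10n)] + digits;
    n /= 10n; // BigInt division truncates
  }
  return digits;
}

console.log(toDecimalString(1234567890123456789012345n)); // "1234567890123456789012345"
```

Each pass through the loop is a division of the full number, which is why the naive algorithm scales quadratically with the number's length, as the answer goes on to explain.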
Special-cased for the common situation that the BigInt only had one 64-bit "digit" to begin with, you can find this algorithm in code here.
The generalization for more digits and non-decimal radixes is here; it's the same algorithm.
This algorithm runs sufficiently fast for sufficiently small BigInts; its problem is that it scales quadratically with the length of the BigInt. So for large BigInts (where some initial overhead easily pays for itself due to enabling better scaling), we have a divide-and-conquer implementation that's built on better-scaling division and multiplication algorithms.
When the requested radix is a power of two, then no such heavy machinery is necessary, because a linear-time implementation is easy. That's why some_bigint.toString(16) is and always will be much faster than some_bigint.toString() (at least for large BigInts), so when you need de/serialization rather than human readability, hex strings are preferable for performance.
if you didn't know anything about the implementation you'd probably assume it was native
What does that even mean?

JavaScript - display number as non standard index?

In my JS, I've got a generated number (fairly enormous, it's normally about 95^[5-10]).
How do I stop this number from being displayed as standard notation?
You can't. JavaScript cannot handle such large numbers natively, and the scientific notation helps emphasize that fact.
That said, you might be able to do some string manipulation on it, to strip out the . and process the exponent to find out how many zeroes to add to the end. Obviously it won't be accurate but that's because of the inability to handle such large numbers I mentioned.

Has any one faced Math.js auto approximation issue and got any work around for this?

Has any one faced Math.js auto approximation issue and got any work around for this?
If I enter any number of more than 18 digits, this library returns an approximate value, not the exact value. Let's say the user enters "03030130000309293689"; it returns "3030130000309293600", and even when the user enters "3030130000309293799" it still returns "3030130000309293600". Can we stop this approximation? Is this a bug, and if not, how can I avoid the approximation?
Due to this approximation, if a user enters "03030130000309293695 == 03030130000309293799", it will always return true, which is totally wrong.
github -- https://github.com/josdejong/mathjs
We can try this at http://mathjs.org/ ( in Demo notepad).
This is released for production!
I think that only when the user enters something like "03030130000309293695 == 03030130000309293799", with plain numbers on both sides, can we fall back to string comparison; all other cases will still be subject to approximation. I say this because if I use the same library for the computation "73712347274723714284 * 73712347274723713000", it gives the result in scientific notation.
03030130000309293695 and 03030130000309293799 are pretty much the same number.
HOW?
According to this answer, the limit of a JS number is 9007199254740992 (2^53). Both of your numbers are greater than that, so precision is lost. You probably need to use a library like Big.js.
It's not a library issue; it's a language architecture issue. You can even open your browser console and type in your equation to see that it behaves the same way.
This is not really a problem of Math.js but a result of how numbers work in JavaScript. JavaScript uses 64-bit binary floating-point numbers (also known as 64-bit double in C). As such, it has only 53 bits of significand to store your number.
I've written an explanation here: Javascript number gets another value
You can read the wikipedia page for 64 bit doubles for more detail: http://en.wikipedia.org/wiki/Double_precision_floating-point_format
Now for the second part of your question:
If not then how can I avoid approximation?
There are several libraries in JavaScript that implement big numbers:
For the browser there's this: https://github.com/MikeMcl/bignumber.js
It is written in pure JavaScript and should also be usable in Node.js.
For Node.js there's this: https://github.com/justmoon/node-bignum
It is a wrapper around the big-number library used by OpenSSL. It's written in C, so it can't be loaded in the browser, but it should be faster and maybe more memory-efficient on Node.js.
The latest version of math.js has support for bignumbers, see docs:
https://github.com/josdejong/mathjs/blob/master/docs/datatypes/bignumbers.md

Why can't I get big number libraries for javascript to work?

Looking for a library to work with large numbers in JavaScript (bigger than 2^53), I checked a couple of questions (JavaScript large number library? and Is there a bignum library for JavaScript?) and then tinkered a little with javascript-bignum.js and big.js, but the thing is that I am unable to represent odd numbers, since both
Big(9007199254740995);
and
SchemeNumber.fn["string->number"](9007199254740995);
return
9007199254740996
rather than
9007199254740995
as I would expect.
So, is it that I am doing something wrong, or is there no way to represent large odd numbers?
When you say this
Big(9007199254740995)
you are not giving the bignum library a chance! Your numeric literal is first parsed by pure JS, in which that number isn't exactly representable. You can see this simply with
window.alert(9007199254740995);
which alerts 9007199254740996.
In order to let your chosen bignum library successfully represent this number, you will need to pass it as a string, for example:
Big('9007199254740995')
should get you this exact number, as a bignum.
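The literal-parsing pitfall can be seen without any library at all; native BigInt illustrates the same string-based fix as Big('...'):

```javascript
// The numeric literal is rounded by the JS parser before any library
// sees it: 9007199254740995 is above 2^53 and not representable.
console.log(9007199254740995 === 9007199254740996); // true (!)

// Passing the digits as a string lets an exact representation survive.
console.log(BigInt("9007199254740995").toString()); // "9007199254740995"
```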

Python vs Javascript floating point arithmetic giving very different answers. What am I doing wrong?

Python version | Javascript version | Whitepaper
So, I'm working on a website to calculate Glicko ratings for two-player games. It involves a lot of floating-point arithmetic (square roots, exponents, division, all the nasty stuff), and for some reason I am getting a completely different answer from the Python implementation of the algorithm that I translated line for line. The Python version gives basically the expected answer for the example found in the original whitepaper describing the algorithm, but the JavaScript version is quite a bit off.
Have I made an error in translation, or is JavaScript's floating-point math just less accurate?
Expected answer: [1464, 151.4]
Python answer: [1462, 155.5]
Javascript answer: [1470.8, 89.7]
So the rating calculation isn't THAT bad, being 99.6% accurate, but the variance is off by 2/3!
Edit: People have pointed out that the default value of RD in the Pyglicko version is 200. This is a case of the original implementer leaving in test code I believe, as the test case is done on a person with an RD of 200, but clearly default is supposed to be 350. I did, however, specify 200 in my test case in Javascript, so that is not the issue here.
Edit: Changed the algorithm to use map/reduce. Rating is less accurate, variance is more accurate, both for no discernible reason. Wallbanging commence.
Typically you get errors like this where you are subtracting two similar numbers; the normally insignificant differences between values then become amplified. For example, if you have two values that are 1.2345 and 1.2346 in Python, but 1.2344 and 1.2347 in JavaScript, then the differences are 1e-4 and 3e-4 respectively (i.e. one is 3x the other).
So I would look at where you have subtractions in your code and check those values. You may find that you can either (1) rewrite the maths to avoid the subtraction (often it turns out that you can find an expression that calculates the difference in some other way) or (2) focus on why the values at that particular point differ between the two languages (perhaps the difference in pi that the other answer identified is being amplified in this way).
It's also possible, although less likely here, that you have a difference because something is treated as an integer in Python but as a float in JavaScript. In Python 2 there is a difference between integers and floats, and if you are not careful you can divide two integers and get another integer (e.g. 3/2 = 1 in Python 2), while in JavaScript all numbers are "really" floats, so this does not occur.
Finally, it's possible there are small differences in how the calculations are performed. But these are "normal"; to get such drastic differences you need something like what I described above to occur as well.
PS: also note what Daniel Baulig said about the initial value of the parameter rd in the comments above.
My guess is that it involves the approximations you're using for some of the constants in the JavaScript version. Your pi2 in particular seems a little... brief. I believe Python is using full doubles for those values.
