So I am adding and subtracting floats in JavaScript, and I need to know how to always take the ceiling of any number that has more than 2 decimal places. For example:
3.19 = 3.19
3.191 = 3.20
3.00000001 = 3.01
num = Math.ceil(num * 100) / 100;
Though, due to the way floats are represented, you may not get a clean number that has exactly two decimal places. For display purposes, always do num.toFixed(2).
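Putting the two together, a minimal sketch (the sample value is just an illustration):
var num = 3.00000001;
num = Math.ceil(num * 100) / 100;  // 3.01 as a number (may still carry float noise)
console.log(num.toFixed(2));       // "3.01" as a clean string for display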
Actually I don't think you want to represent dollar amounts as float, due to the same reason cited by Box9.
For example, 0.1*3 != 0.3 in my browser. It's better to represent them as integers (e.g. cents).
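A quick illustration of the difference (the values are just examples):
console.log(0.1 * 3 === 0.3);  // false: fractional dollars drift
console.log(10 * 3 === 30);    // true: the same amounts held as integer cents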
Related
Will I possibly lose any decimal digits (precision) when multiplying Number.MAX_SAFE_INTEGER by Math.random() in JavaScript?
I presume I won't, but it'd be nice to have a credible explanation as to why 😎
Edit: In layman's terms, we're dealing with two IEEE 754 double-precision floating-point numbers: one is the maximal integer (for double precision), the other one is fractional with quite a few digits after the decimal point. What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double-precision, would the result be any different?
const max = Number.MAX_SAFE_INTEGER;
const random = Math.random();
console.log(`\
MAX_SAFE_INTEGER: ${max}, \
random: ${random}, \
product: ${max * random}`);
For a more elaborate example, I use it to generate BigInt random numbers.
Your implementation should be safe - in theory, all numbers between 0 and MAX_SAFE_INTEGER should have a possibility of appearing, if the engine implementing Math.random uses a completely unbiased algorithm.
But an absolutely unbiased algorithm is not guaranteed by the specification - the numbers chosen are meant to be pseudo-random, not truly, completely random. (Does such a thing even exist? It's debatable...) Modern versions of V8 and some other implementations use an algorithm with a period on the order of 2 ** 128, larger than MAX_SAFE_INTEGER (2 ** 53 - 1) - but it'd be completely plausible for other implementations (especially older ones) to have a much smaller period, resulting in certain integers within the range being picked much more often than others.
If this is important for your script (which is pretty unlikely in most situations, I'd think), you might consider using a higher-quality random generator than Math.random - but it's almost certainly not worth worrying about.
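For completeness, a hedged sketch of one such alternative using the Web Crypto API (this assumes an environment where a global crypto object exists, i.e. browsers or recent Node versions, and is only an illustration):
// Build a uniform random integer in [0, 2**53 - 1] from cryptographically strong bits.
function cryptoRandomSafeInteger() {
    const words = new Uint32Array(2);
    crypto.getRandomValues(words);     // fills the array with strong random 32-bit words
    const high = words[0] >>> 11;      // keep 21 bits
    return high * 2 ** 32 + words[1];  // 21 + 32 = 53 bits: 0 .. Number.MAX_SAFE_INTEGER
}
console.log(cryptoRandomSafeInteger());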
What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double-precision, would the result be any different?
It could be in cases where the rounding behaves differently between multiplying two doubles vs converting quadruple to double, but the main problem remains the same. The spacing between representable doubles in the range from 2^n to 2^(n+1) is 2^(n-52). So between 2^52 and 2^53 only whole numbers can be represented, between 2^51 and 2^52 only every 0.5 can be represented, etc.
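A quick way to see that spacing in the console:
console.log(2 ** 53 + 1 === 2 ** 53);    // true: 1 is smaller than the spacing above 2^53
console.log(2 ** 52 + 0.5 === 2 ** 52);  // true: between 2^52 and 2^53 only whole numbers fit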
If you want more precision you could try decimal.js. The library is included on that documentation page so you can try these out in your console.
Number.MAX_SAFE_INTEGER*.9
8106479329266892
new Decimal(Number.MAX_SAFE_INTEGER).mul(new Decimal(0.9)).toString()
"8106479329266891.9"
Both answers are correct, but I couldn't help running this little experiment in C#, where double is the same thing as Number in JavaScript (fiddle):
using System;

public class Program
{
    public static void Main()
    {
        const double MAX_SAFE_INT = 9007199254740991;
        Decimal maxD = Convert.ToDecimal(MAX_SAFE_INT.ToString());
        var rng = new Random(Environment.TickCount);
        for (var i = 0; i < 1000; i++)
        {
            double random = rng.NextDouble();
            double product = MAX_SAFE_INT * random;
            // converting via string to work around the "15 significant digits" limitation of Decimal(Double)
            Decimal randomD = Decimal.Parse(String.Format("{0:F18}", random));
            Decimal productD = maxD * randomD;
            double converted = Convert.ToDouble(productD);
            if (Math.Floor(converted) != Math.Floor(product))
            {
                Console.WriteLine($"{maxD}, {randomD, 22}, products: decimal {productD, 32}, converted {converted, 20}, original {product, 20}");
            }
        }
    }
}
As far as I can tell, I'm still getting the desired distribution of the random numbers within the 0 - 9007199254740991 range.
Here is JavaScript playground code to check for possible recurrences.
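As a rough, hypothetical sketch of such a check (not the linked playground code): draw a large number of values and count how often the same integer shows up twice.
const seen = new Set();
let collisions = 0;
for (let i = 0; i < 1e6; i++) {
    const n = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);
    if (seen.has(n)) collisions++;
    seen.add(n);
}
console.log(`collisions after 1e6 draws: ${collisions}`);  // almost certainly 0 at this sample size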
In the interest of creating cross-platform code, I'd like to develop a simple financial application in JavaScript. The calculations required involve compound interest and relatively long decimal numbers. I'd like to know what mistakes to avoid when using JavaScript to do this type of math—if it is possible at all!
You should probably scale your decimal values by 100, and represent all the monetary values in whole cents. This is to avoid problems with floating-point logic and arithmetic. There is no decimal data type in JavaScript - the only numeric data type is floating-point. Therefore it is generally recommended to handle money as 2550 cents instead of 25.50 dollars.
Consider that in JavaScript:
var result = 1.0 + 2.0; // (result === 3.0) returns true
But:
var result = 0.1 + 0.2; // (result === 0.3) returns false
The expression 0.1 + 0.2 === 0.3 returns false, but fortunately integer arithmetic in floating-point is exact, so decimal representation errors can be avoided by scaling [1].
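Tying this back to the question, a hedged sketch of compound interest computed in whole cents (the starting balance, rate, term and per-period rounding policy are assumptions made up for this illustration):
let balanceCents = 100000;  // $1,000.00 held as 100000 cents
const monthlyRate = 0.005;  // 0.5% per month (illustrative)
for (let month = 0; month < 12; month++) {
    // round the interest to a whole number of cents each period
    balanceCents += Math.round(balanceCents * monthlyRate);
}
console.log((balanceCents / 100).toFixed(2)); // "1061.69" (compounding exactly and rounding once gives about 1061.68)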
Note that while the set of real numbers is infinite, only a finite number of them (18,437,736,874,454,810,627 to be exact) can be represented exactly by the JavaScript floating-point format. Therefore the representation of the other numbers will be an approximation of the actual number [2].
[1] Douglas Crockford: JavaScript: The Good Parts: Appendix A - Awful Parts (page 105).
[2] David Flanagan: JavaScript: The Definitive Guide, Fourth Edition: 3.1.3 Floating-Point Literals (page 31).
Scaling every value by 100 is the solution. Doing it by hand is probably useless, since you can find libraries that do that for you. I recommend moneysafe, which offers a functional API well suited for ES6 applications:
const { in$, $ } = require('moneysafe');
console.log(in$($(10.5) + $(.3))); // 10.8
https://github.com/ericelliott/moneysafe
Works both in Node.js and the browser.
There's no such thing as a perfectly "precise" financial calculation when you keep just two decimal fraction digits, but that's a more general problem.
In JavaScript, you can scale every value by 100 and use Math.round() every time a fraction can occur.
You could use an object to store the numbers and include the rounding in its prototypes valueOf() method. Like this:
var Money = function(amount) {
    this.amount = amount;
};

Money.prototype.valueOf = function() {
    return Math.round(this.amount * 100) / 100;
};

var m = new Money(50.42355446);
var n = new Money(30.342141);

console.log(m.amount + n.amount); // 80.76569546
console.log(m + n);               // 80.76
That way, every time you use a Money object, it will be represented as rounded to two decimals. The unrounded value is still accessible via m.amount.
You can build your own rounding algorithm into Money.prototype.valueOf(), if you like.
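For example, here is a rough sketch of what plugging in banker's rounding (round half to even) could look like; it assumes non-negative amounts and is only meant to show where such a policy would go:
Money.prototype.valueOf = function() {
    var scaled = this.amount * 100;
    var floor = Math.floor(scaled);
    var diff = scaled - floor;
    if (diff > 0.5) return (floor + 1) / 100;
    if (diff < 0.5) return floor / 100;
    // exactly halfway: round to the even cent
    return (floor % 2 === 0 ? floor : floor + 1) / 100;
};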
Unfortunately all of the answers so far ignore the fact that not all currencies have 100 sub-units (e.g., the cent is the sub-unit of the US dollar (USD)). Currencies like the Iraqi Dinar (IQD) have 1000 sub-units: an Iraqi Dinar has 1000 fils. The Japanese Yen (JPY) has no sub-units. So "multiply by 100 to do integer arithmetic" isn't always the correct answer.
Additionally for monetary calculations you also need to keep track of the currency. You can't add a US Dollar (USD) to an Indian Rupee (INR) (without first converting one to the other).
There are also limitations on the maximum amount that can be represented by JavaScript's integer data type.
In monetary calculations you also have to keep in mind that money has finite precision (typically 0-3 decimal places) and that rounding needs to be done in particular ways (e.g., "normal" rounding vs. banker's rounding). The type of rounding to be performed might also vary by jurisdiction/currency.
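As a hedged plain-JavaScript illustration of tracking minor units per currency (the helper function and table are made up for this example, not taken from any library):
const MINOR_UNIT_DIGITS = { USD: 2, IQD: 3, JPY: 0 };

function toMinorUnits(amount, currency) {
    return Math.round(amount * 10 ** MINOR_UNIT_DIGITS[currency]);
}

console.log(toMinorUnits(25.5, 'USD'));  // 2550 cents
console.log(toMinorUnits(1.25, 'IQD'));  // 1250 fils
console.log(toMinorUnits(300, 'JPY'));   // 300 yen (no sub-unit)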
How to handle money in javascript has a very good discussion of the relevant points.
In my searches I found the dinero.js library that addresses many of the issues wrt monetary calculations. Haven't used it yet in a production system so can't give an informed opinion on it.
Use decimal.js ... it's a very good library that solves a hard part of the problem ...
Just use it in all your operations.
https://github.com/MikeMcl/decimal.js/
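A minimal sketch of what that can look like (assuming the package is installed and required as decimal.js; the amounts are made up):
const Decimal = require('decimal.js');

// Pass amounts in as strings so nothing is lost before decimal.js sees them.
const total = new Decimal('149.95').times('0.30');
console.log(total.toString());                    // "44.985"
console.log(total.toDecimalPlaces(2).toString()); // "44.99" (default rounding is half up)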
Your problem stems from inaccuracy in floating point calculations. If you're just using rounding to solve this you'll have greater error when you're multiplying and dividing.
The solution is linked below; an explanation follows.
You'll need to think about the mathematics behind this to understand it. Some real numbers, like 1/3, cannot be written out with a finite number of decimal digits, since their expansion never ends (0.333333333333333...). In the same way, some numbers that are finite in decimal cannot be represented exactly in binary. For example, 0.1 cannot be represented exactly in binary with a limited number of digits.
For more detailed description look here: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Take a look at the solution implementation: http://floating-point-gui.de/languages/javascript/
Due to the binary nature of their encoding, some decimal numbers cannot be represented with perfect accuracy. For example
var money = 600.90;
var price = 200.30;
var total = price * 3;
// Outputs: false
console.log(money >= total);
// Outputs: 600.9000000000001
console.log(total);
If you need to use pure JavaScript then you need to think about a solution for every calculation. For the code above we can convert the decimals to whole integers.
var money = 60090;
var price = 20030;
var total = price * 3;
// Outputs: true
console.log(money >= total);
// Outputs: 60090
console.log(total);
Avoiding Problems with Decimal Math in JavaScript
There is a dedicated library for financial calculations with great documentation: Finance.js
Use this code for currency calculations to round numbers to two decimal places.
<!DOCTYPE html>
<html>
<body>

<h1>JavaScript Variables</h1>

<p id="test1"></p>
<p id="test2"></p>
<p id="test3"></p>

<script>
function setDecimalPoint(num) {
    if (isNaN(parseFloat(num))) {
        return 0;
    } else {
        // "value" avoids shadowing the global Number constructor
        var value = parseFloat(num);
        var multiplicator = Math.pow(10, 2);
        value = parseFloat((value * multiplicator).toFixed(2));
        return (Math.round(value) / multiplicator);
    }
}

document.getElementById("test1").innerHTML = "Without our method O/P is: " + (655.93 * 9) / 100;
document.getElementById("test2").innerHTML = "Calculator O/P: 59.0337, Our value is: " + setDecimalPoint((655.93 * 9) / 100);
document.getElementById("test3").innerHTML = "Calculator O/P: 32,888.175, Our value is: " + setDecimalPoint(756.05 * 43.5);
</script>

</body>
</html>
I am working with JS numbers and don't have much experience with them, so I would like to ask a few questions:
2.2932600144518896e+160
Is this a float or an integer number? If it's a float, how can I round it to two decimals (to get 2.29)? And if it's an integer, I suppose it's a very large number, and then I have another problem.
Thanks
Technically, as said in comments, this is a Number.
What you can do if you want the number (not its string representation):
var x = 2.2932600144518896e+160;
var magnitude = Math.floor(Math.log10(x)) + 1;
console.log(Math.round(x / Math.pow(10, magnitude - 3)) * Math.pow(10, magnitude - 3));
What's the problem with that? Floating-point operations may not be precise, so digits other than 0 may still appear in the result.
To get this number truly "rounded", you can only do it through a string (but then you can't do any further arithmetic with it).
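For the string route, toPrecision gives the three significant digits directly (a quick sketch):
var x = 2.2932600144518896e+160;
console.log(x.toPrecision(3));  // "2.29e+160" (a string, so no further arithmetic)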
JavaScript only has one Number type, so the value is technically neither a float nor an integer.
However, this isn't really relevant, as the value (or rather its representation) is not specific to JavaScript and uses E-notation, which is a standard way to write very large/small numbers.
Taking this into account, 2.2932600144518896e+160 is equivalent to 2.2932600144518896 * Math.pow(10, 160), approximately 229 followed by 158 zeroes, i.e. very flippin' big.
I am adding client-side sub-total calculations to my order page, so that the volume discount will show as the user makes selections.
I am finding that some of the calculations are off by one cent here or there. This wouldn't be a very big deal except for the fact that the total doesn't match the final total calculated server-side (in PHP).
I know that the rounding errors are an expected result when dealing with floating point numbers. For example, 149.95 * 0.15 = 22.492499999999996 and 149.95 * 0.30 = 44.98499999999999. The former rounds as desired, the latter does not.
I've searched on this topic and found a variety of discussions, but nothing that satisfactorily addresses the problem.
My current calculation is as follows:
discount = Math.round(price * factor * 100) / 100;
A common suggestion is to work in cents rather than fractions of dollars. However, this would require me to convert my starting numbers, round them, multiply them, round the result, and then convert it back.
Essentially:
discount = Math.round(Math.round(price * 100) * Math.round(factor * 100) / 100) / 100;
I was thinking of adding 0.0001 to the number before rounding. For example:
discount = Math.round(price * factor * 100 + 0.0001) / 100;
This works for the scenarios I've tried, but I am wondering about my logic. Will adding 0.0001 always be enough, and never too much, to force the desired rounding result?
Note: For my purposes here, I am only concerned with a single calculation per price (so not compounding the errors) and will never be displaying more than two decimal places.
EDIT: For example, I want to round the result of 149.95 * 0.30 to two decimal places and get 44.99. However, I get 44.98 because the actual result is 44.98499999999999 not 44.985. The error is not being introduced by the / 100. It is happening before that.
Test:
alert(149.95 * 0.30); // yields 44.98499999999999
Thus:
alert(Math.round(149.95 * 0.30 * 100) / 100); // yields 44.98
The 44.98 is expected considering the actual result of the multiplication, but not desired since it is not what a user would expect (and differs from the PHP result).
Solution: I'm going to convert everything to integers to do my calculations. As the accepted answer points out, I can simplify my original conversion calculation somewhat. My idea of adding the 0.0001 is just a dirty hack. Best to use the right tool for the job.
I don't think adding a small amount will work in your favor; I suspect there are cases where it is too much. It would also need to be properly documented, otherwise one could read it as a mistake.
working in cents […] would require me to convert my starting numbers, round them, multiply them, round the result, and then convert it back:
discount = Math.round(Math.round(price * 100) * Math.round(factor * 100) / 100) / 100;
I think it should work as well to round afterwards only. However, you should first multiply the result so that the number of decimal places kept is the sum of the two you started with, i.e. 2 + 2 = 4 decimal places in your example:
discount = Math.round(Math.round( (price * factor) * 10000) / 100) / 100;
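As a quick sanity check against the problematic numbers from the question (just re-running them through this formula):
var price = 149.95, factor = 0.30;
var discount = Math.round(Math.round((price * factor) * 10000) / 100) / 100;
console.log(discount);  // 44.99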
Adding a small amount to your numbers will not be very accurate. You can try using a library to get better results: https://github.com/jtobey/javascript-bignum.
Bergi’s answer shows a solution. This answer shows a mathematical demonstration that it is correct. In the process, it also establishes some bound on how much error in the input is tolerable.
Your problem is this:
You have a floating-point number, x, which already contains rounding errors. E.g., it is intended to represent 149.95 but actually contains 149.94999999999998863131622783839702606201171875.
You want to multiply this floating-point number x by a discount value d.
You want to know the result of the multiplication to the nearest penny, performed as if ideal mathematics were used with no errors.
Suppose we add two more assumptions:
x always represents some exact number of cents. That is, it represents a number that has an exact number of hundredths, such as 149.95.
The error in x is small, less than, say, .00004.
The discount value d represents an integer percentage (that is, also an exact number of hundredths, such as .25 for 25%) and is in the interval [0%, 100%].
The error in d is tiny, always the result of correct conversion of a decimal numeral with two digits after the decimal point to double-precision (64 bit) binary floating point.
Consider the value x*d*10000. Ideally, this would be an integer, since x and d are each ideally multiples of .01, so multiplying the ideal product of x and d by 10,000 produces an integer. Since the errors in x and d are small, then rounding x*d*10000 to an integer will produce that ideal integer. E.g., instead of the ideal x and d, we have x and d plus small errors, x+e0 and d+e1, and we are computing (x+e0)•(d+e1)•10000 = (x•d+x•e1+d•e0+e0•e1)•10000. We have assumed that e1 is tiny, so the dominant error is d•e0•10000. We assumed e0, the error in x, is less than .00004, and d is at most 1 (100%), so d•e0•10000 is less than .4. This error, plus the tiny errors from e1, are not enough to change the rounding of x*d*10000 from the ideal integer to some other integer. (This is because the error must be at least .5 to change how a result that should be an integer rounds. E.g., 3 plus an error of .5 would round to 4, but 3 plus .49999 would not.)
Thus, Math.round(x*d*10000) produces the integer desired. Then Math.round(x*d*10000)/100 is an approximation of x*d*100 that is accurate to much less than one cent, so rounding it, with Math.round(Math.round(x*d*10000)/100) produces exactly the number of cents desired. Finally, dividing that by 100 (to produce a number of dollars, with hundredths, instead of a number of cents, as an integer) produces a new rounding error, but the error is so small that, when the resulting value is correctly converted to decimal with two decimal digits, the correct value is displayed. (If further arithmetic is performed with this value, that might not remain true.)
We can see from the above that, if the error in x grows to .00005, this calculation can fail. Suppose the value of an order might grow to $100,000. The floating-point error in representing a value around 100,000 is at most 100,000•2^-53. If somebody ordered one hundred thousand items with this error (they could not, since the items would have smaller individual prices than $100,000, so their errors would be smaller), and the prices were individually added up, performing one hundred thousand (minus one) additions adding one hundred thousand new errors, then we have almost two hundred thousand errors of at most 100,000•2^-53, so the total error is at most 2•10^5•10^5•2^-53, which is about .00000222. Therefore, this solution should work for normal orders.
Note that the solution requires reconsideration if the discount is not an integer percentage. For example, if the discount is stated as “one third” instead of 33%, then x*d*10000 is not expected to be an integer.
I have always been coding in Java and have recently started coding in JavaScript (Node.js to be precise). One thing that's driving me crazy is the add operation on decimal numbers.
Consider the following code
var a=0.1, b=0.2, c=0.3;
var op1 = (a+b)+c;
var op2 = (b+c)+a;
To my amazement I found out that op1 != op2! Logging op1 and op2 prints out the following:
console.log(op1); // 0.6000000000000001
console.log(op2); // 0.6
This does not make sense. This looks like a bug to me because javascript simply cannot ignore rules of arithmetic. Could someone please explain why this happens?
This is a case of floating-point error.
The only fractional numbers that can be exactly represented by a floating-point number are those that can be written as a fraction whose denominator is a power of two (and whose numerator fits in the 53-bit significand).
For example, 0.5 can be represented exactly, because it can be written 1/2. 0.3 can't.
In this case, the rounding errors in your variables and expressions combined just right to produce an error in the first case, but not in the second case.
Neither result is represented exactly behind the scenes, but when you output a value, JavaScript converts it to the shortest decimal string that still identifies the underlying double (up to about 17 significant digits). In one case, the arithmetic rounded to a slightly larger number, and in the other, it rounded to the expected value.
The lesson you should learn is that you should never depend on a floating-point number having an exact value, especially after doing some arithmetic.
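A common way to act on that lesson is to compare with a tolerance instead of ===; a minimal sketch (the tolerance value here is an arbitrary choice):
function nearlyEqual(a, b, epsilon) {
    return Math.abs(a - b) < (epsilon || 1e-9);
}
console.log(nearlyEqual((0.1 + 0.2) + 0.3, (0.2 + 0.3) + 0.1));  // true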
That is not a JavaScript problem, but a standard problem in every programming language that uses binary floats. Your sample numbers 0.1, 0.2 and 0.3 cannot be represented as finite binary fractions, only as repeating ones, because their denominators contain the factor 5:
0.1 = 1/(2*5) and so on.
If you use decimals whose denominators are powers of two (like a=0.5, b=0.25 and c=0.125), everything is fine.
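A quick check of that claim in the console:
var a = 0.5, b = 0.25, c = 0.125;
console.log((a + b) + c === (b + c) + a);  // true: every value and sum here is exact
console.log(a + b + c);                    // 0.875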
When working with floats you might want to set the precision on the numbers before you compare.
(0.1 + 0.2).toFixed(1) == 0.3
However, in a few cases toFixed doesn't behave the same way across browsers.
Here's a fix for toFixed.
http://bateru.com/news/2012/03/reimplementation-of-number-prototype-tofixed/
This happens because not all decimal numbers can be accurately represented in binary floating point.
Multiply all your numbers by some number (for example, 100 if you're dealing with 2 decimal points).
Then do your arithmetic on the larger result.
After your arithmetic is finished, divide by the same factor you multiplied by in the beginning.
http://jsfiddle.net/SjU9d/
a = 10 * a;
b = 10 * b;
c = 10 * c;
op1 = (a+b)+c;
op2 = (b+c)+a;
op1 = op1 / 10;
op2 = op2 / 10;