I am creating a Fibonacci function that is producing unexpected and inaccurate results. What am I doing wrong here? Is there a simple way to correct this?
const fib=(c,f=[0,1],r=0)=>c>1?(f.push(f.reduce((a,b)=>a+b,0)),f.shift(),fib(c-1,f,1)):(r?f[1]:(c<0?NaN:f[c]));
fib(100); // -> 354224848179262000000
// The correct 100th Fibonacci number (where `fib(0) === 0`) would be 354224848179261915075
Yes, this is a fairly common issue with very large integers and very small/long decimal numbers. There are a few libraries out there to handle it for decimal numbers, but the good news is that JavaScript has native support for big integers, which solves this case immediately. Just use and return the BigInt type and the result will be exact.
Unfortunately, you cannot convert a BigInt back into a normal Number without it rounding to the incorrect value you mentioned, but it will stringify just fine, so it should work for almost all uses.
Use this:
const fib=(c,f=[0n,1n],r=0)=>c>1?(f.push(f.reduce((a,b)=>a+b,0n)),f.shift(),fib(c-1,f,1)):(r?f[1]:(c<0?NaN:f[c]));
console.log(fib(100).toString()); // -> "354224848179261915075"
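For instance, here is a quick sketch (reusing the BigInt version of fib above) of the stringify-versus-convert-back trade-off:
const result = fib(100);          // 354224848179261915075n
console.log(result.toString());   // "354224848179261915075" -- exact
console.log(`${result}`);         // template literals stringify BigInt too
console.log(Number(result));      // converting back to Number loses the trailing digits again
// JSON.stringify({ id: result });   // careful: this throws, JSON has no BigInt support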
You can read up more on the BigInt type here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
Related
When I tried BigInt, it returned a wrong result:
BigInt(123456789123456789*111111111111)
13717421013703702653171662848n
And this is what the actual result should be, by hand-written calculation:
13717421013703703578986282579
Is there a way to produce the correct result without this error? Thank you.
Instead of passing the multiplication to BigInt, convert both numbers to BigInt and multiply them after that:
const big = 123456789123456789n * 111111111111n;
console.log(big)
Note: check the output in your browser's console, not the snippet's one (also, this won't work in browsers without BigInt support, such as older versions of Safari).
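To make the difference concrete, here is a small side-by-side sketch using the values from the question:
// Too late: the Number literal is already rounded and the multiplication
// happens in floating point before BigInt ever sees the result.
console.log(BigInt(123456789123456789 * 111111111111));
// -> 13717421013703702653171662848n (wrong)
// With BigInt literals the whole calculation stays exact.
console.log(123456789123456789n * 111111111111n);
// -> 13717421013703703578986282579n (correct)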
I am having an issue with the way JavaScript is rounding numbers when they land exactly on a 5.
I am writing levy calculators, and am noticing a 0.1c discrepancy in the results.
The problem is that my calculation produces 21480.705, which my application translates into 21480.71, whereas the tariff says 21480.70.
This is what I am seeing with Javascript:
(21480.105).toFixed(2)
"21480.10"
(21480.205).toFixed(2)
"21480.21"
(21480.305).toFixed(2)
"21480.31"
(21480.405).toFixed(2)
"21480.40"
(21480.505).toFixed(2)
"21480.51"
(21480.605).toFixed(2)
"21480.60"
(21480.705).toFixed(2)
"21480.71"
(21480.805).toFixed(2)
"21480.81"
(21480.905).toFixed(2)
"21480.90"
Questions:
What the hell is going on with this erratic rounding?
What's the quickest easiest way to get a "rounded up" result (when hitting 0.5)?
So, as some of the others have already explained, the reason for the 'erratic' rounding is a floating-point precision problem. You can investigate this by using the toExponential() method of a JavaScript number.
(21480.905).toExponential(20)
#>"2.14809049999999988358e+4"
(21480.805).toExponential(20)
#>"2.14808050000000002910e+4"
As you can see here, 21480.905 gets a double representation that is slightly smaller than 21480.905, while 21480.805 gets a double representation slightly larger than the original value. Since the toFixed() method works with the double representation and has no idea of your original intended value, it does all it can and should do with the information it has.
One way to work around this is to shift the decimal point to the number of decimals you require by multiplication, then use the standard Math.round(), then shift the decimal point back again, either by division or by multiplication with the inverse. Finally, call the toFixed() method to make sure the output value gets correctly zero-padded.
var x1 = 21480.905;
var x2 = -21480.705;
function round_up(x,nd)
{
var rup=Math.pow(10,nd);
var rdwn=Math.pow(10,-nd); // Or you can just use 1/rup
return (Math.round(x*rup)*rdwn).toFixed(nd)
}
function round_down(x,nd)
{
var rup=Math.pow(10,nd);
var rdwn=Math.pow(10,-nd);
return (Math.round(x*-rup)*-rdwn).toFixed(nd)
}
function round_tozero(x,nd)
{
return x>0?round_down(x,nd):round_up(x,nd)
}
console.log(x1,'up',round_up(x1,2));
console.log(x1,'down',round_down(x1,2));
console.log(x1,'to0',round_tozero(x1,2));
console.log(x2,'up',round_up(x2,2));
console.log(x2,'down',round_down(x2,2));
console.log(x2,'to0',round_tozero(x2,2));
Finally:
Encountering a problem like this is usually a good time to sit down and have a long think about whether you are actually using the correct data type for your problem. Since floating-point errors can accumulate over iterative calculations, and since people are sometimes strangely sensitive to money magically disappearing/appearing in the CPU, you may be better off keeping monetary counters in integer 'cents' (or some other well-thought-out structure) rather than in floating-point 'dollars'.
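As a minimal sketch of that idea (the 5% levy rate below is made up for illustration, it is not from the question):
// Keep money as an integer number of cents; only format for display.
const priceCents = 2148070;                        // $21,480.70, stored exactly
const levyCents = Math.round(priceCents * 0.05);   // round intermediate results back to whole cents
console.log((priceCents / 100).toFixed(2));        // "21480.70"
console.log((levyCents / 100).toFixed(2));         // "1074.04" -- the hypothetical 5% levy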
The why -
You may have heard that in some languages, such as JavaScript, numbers with a fractional part are called floating-point numbers, and that floating-point numbers deal in approximations of numeric operations. Not exact calculations, approximations. Because how exactly would you expect to compute and store 1/3 or the square root of 2 with exact calculations?
If you had not, well, now you have.
That means that when you type in the number literal 21480.105, the actual value that ends up stored in computer memory is not actually 21480.105, but an approximation of it. The value closest to 21480.105 that can be represented as a floating-point number.
And since this value is not exactly 21480.105, that means it is either slightly more than that, or slightly less than that. More will be rounded up, and less will be rounded down, as expected.
The solution -
Your problem comes from approximations that, it seems, you cannot afford. The solution is to work with exact numbers, not approximations.
Use whole numbers. Those are exact. Add in the decimal point only when you convert your numbers to strings.
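A tiny illustration of that, assuming (hypothetically) that the levy amount arrives as an integer count of thousandths rather than as a float:
const thousandths = 21480705;                      // 21480.705 held exactly as an integer
const cents = Math.floor((thousandths + 5) / 10);  // half-up rounding done entirely in integers -> 2148071
console.log(`${Math.floor(cents / 100)}.${String(cents % 100).padStart(2, "0")}`); // "21480.71"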
This works in most cases (see the note below). The rounding problem can be avoided by using numbers represented in exponential notation:
function round(value, decimals) {
  return Number(Math.round(value + 'e' + decimals) + 'e-' + decimals);
}
console.log(round(21480.105, 2).toFixed(2));
Found at http://www.jacklmoore.com/notes/rounding-in-javascript/
NOTE: As pointed out by Mark Dickinson, this is not a general solution because it returns NaN in certain cases, such as round(0.0000001, 2) and with large inputs.
Edits to make this more robust are welcome.
You could round to an integer, then shift the decimal point back in when displaying:
function round(n, digits = 2) {
  // Rounding to an integer is accurate in more cases, so shift left by "digits" first
  const str = ("" + Math.round(n * 10 ** digits))
    .padStart(digits + 1, "0"); // ensure there are enough digits: 0 -> 000 -> 0.00
  return str.slice(0, -digits) + "." + str.slice(-digits); // reinsert the decimal point "digits" places from the end
}
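A quick check of the helper with a few sample values:
console.log(round(21480.705)); // "21480.71"
console.log(round(0.05));      // "0.05"
console.log(round(7, 3));      // "7.000"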
What the hell is going on with this erratic rounding?
Please reference the cautionary Mozilla Doc, which identifies the cause for these discrepancies. "Floating point numbers cannot represent all decimals precisely in binary which can lead to unexpected results..."
Also, please reference Is floating point math broken? (Thank you Robby Cornelissen for the reference)
What's the quickest easiest way to get a "rounded up" result (when hitting 0.5)?
Use a JS library like accounting.js to round, format, and present currency.
For example...
function roundToNearestCent(rawValue) {
return accounting.toFixed(rawValue, 2);
}
const roundedValue = roundToNearestCent(21480.105);
console.log(roundedValue);
<script src="https://combinatronics.com/openexchangerates/accounting.js/master/accounting.js"></script>
Also, consider checking out BigDecimal in JavaScript.
Hope that helps!
I need to take a user-input value and force it to 6 decimal places, even if the value is an integer. For example, the user types in 12, I need to convert that to 12.000000, as a number. This is not for display purposes - the system on the other end of my app requires decimal values, and there's nothing I can do about that.
As I've read elsewhere, numbers in Javascript are all 64-bit floating point numbers, so it doesn't seem like this should be so difficult.
Alas, toFixed is not an option here because that gives me a string value '12.000000'. Every other trick I've tried just yields the integer 12 with no decimal zeroes (e.g. wrapping toFixed with Number, dividing the string value by 1, and other such silliness).
Is it possible to represent an integer as a float in Javascript, without ending up with a string value?
UPDATE
Thanks for all the comments and answers. Unfortunately for me, @Enzey's comment actually answers my core question: forcing precision like this can only be done with a string. If he submits that as an answer I'll accept it. I kept the details of my implementation purposefully vague because I didn't want to get into why I'm doing what I'm doing; I just wanted to know if it was possible. But I guess I just ended up confusing people. Sorry about that.
Alas, there is no such thing as float or int in JavaScript. You only have Number, which does not have the slightest clue about a difference between 12 and 12.000000.
If you're sending it as stringified JSON, you can use .toFixed on the number, and then strip the surrounding quote characters from those numbers in the stringified JSON:
var result = JSON.stringify({
    number: (12).toFixed(6)
  })
  .replace(/"[\d]+\.\d{6}"/g, function(v) {
    return v.replace(/"/g, '');
  });
console.log(result);
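The logged string is {"number":12.000000}; just keep the caveat below in mind, since anything that parses the payload back into objects will collapse the zeros again:
console.log(JSON.parse(result).number); // 12 -- the six decimals only survive while the payload stays a string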
I am having a little problem with rounding numbers which are brought in from HTML.
For example, a value extracted from <input id="salesValue"> using var salesValue = $("#salesValue").val() would give me a text value.
So if I did something like var doubleSalesValue = salesValue + salesValue;, it would return a concatenation of the two values instead of their sum.
I could use var doubleSalesValue = salesValue * 2.0;, which does return a number, though possibly with many decimal places. However, if I did want to use the other approach, how should I go about it?
What methods do you use? I have created a function which I run on each number where I want to restrict the decimal places, along with converting the type to number:
function round(number, figure){
return Number(Number(number).toFixed(figure));
}
I have to run Number initially to make sure that the value is converted to type number and has the method toFixed, otherwise it would throw an error here. Then I have to round the number again to the number of decimal places as required by the function, and somehow after running the toFixed method the number would sometimes turn to a string.
So, I decided to run the Number function Number(number).toFixed(figure)
Is there anything else or any different paradigm that you follow?
EDIT: I want to know if what I am doing here is conventional or are there better methods for this in general?
If you want to round it to 2 decimals you can simply do this:
var roundedNum = Math.round(parseFloat(originalNum) * 100) / 100;
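For example (with a hypothetical salesValue string, not taken from the question):
var salesValue = "12.3456";                                     // text value straight from the input
var roundedNum = Math.round(parseFloat(salesValue) * 100) / 100;
console.log(roundedNum, typeof roundedNum);                     // 12.35 "number"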
Regarding your question:
and somehow after running the toFixed method the number would sometimes turn to a string.
I suggest that next time you read the docs a bit more carefully: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toFixed, which says:
Returns
A string representation of number that does not use exponential notation and has exactly digits digits after the decimal place. The number is rounded if necessary, and the fractional part is padded with zeros if necessary so that it has the specified length. If number is greater than 1e+21, this method simply calls Number.prototype.toString() and returns a string in exponential notation.
I have an interesting question. I have been doing some work with JavaScript, and a database ID came out as "3494793310847464221". This is being entered into JavaScript as a number, yet JavaScript treats it as a different value, both when it is output to an alert and when it is passed to another JavaScript function.
Here is some example code to show the error to its fullest.
<html><head><script language="javascript">
alert(3494793310847464221);
var rar = 3494793310847464221;
alert(rar);
</script></head></html>
This has completely baffled me, and for once Google is not my friend...
btw the number is 179 more than the number there...
Your number is larger than the largest integer value JavaScript can represent exactly (2^53). This has previously been covered by What is JavaScript's highest integer value that a Number can go to without losing precision?
In JavaScript, all numbers (even integral ones) are stored as IEEE-754 floating-point numbers. However, FPs have limited "precision" (see the Wikipedia article for more info), so your number isn't able to be represented exactly.
You will need to either store your number as a string or use some other "bignum" approach (unfortunately, I don't know of any JS bignum libraries off the top of my head).
Edit: After doing a little digging, it doesn't seem as if there's been a lot of work done in the way of JavaScript bignum libraries. In fact, the only bignum implementation of any kind that I was able to find is Edward Martin's JavaScript High Precision Calculator.
Use a string instead.
179 more is one way to look at it. Another way is that only the first 17 or so significant digits are kept, and every digit after that comes out as 0. I don't know all the details, but a 64-bit float can only hold roughly 15–17 significant decimal digits.
That number exceeds 2^53 − 1 (9,007,199,254,740,991), and that's the problem; JavaScript stores every number as a 64-bit float, which can only represent integers exactly up to that limit. Your best choice is to use strings, and create functions to manipulate the strings as numbers.
I wouldn't be all too surprised, if there already was a library that does what you need.
One possible solution is to use a BigInt library such as: http://www.leemon.com/crypto/BigInt.html
This will allow you to store integers of arbitrary precision, but it will not be as fast as standard arithmetic.
Since it's too big to be stored exactly, precision is lost. In JavaScript there are no explicit integer and float types; there's only the universal Number type, which is a 64-bit float.
"Can't increment and decrement a string easily..."
Really?
function incr_num(x) {
    var lastdigit = Number(x.charAt(x.length - 1));
    if (lastdigit != 9) return (x.substring(0, x.length - 1)) + "" + (lastdigit + 1);
    if (x == "9") return "10";
    return incr_num(x.substring(0, x.length - 1)) + "0";
}

function decr_num(x) {
    if (x == "0") return "(error: cannot decrement zero)";
    var lastdigit = Number(x.charAt(x.length - 1));
    if (lastdigit != 0) return (x.substring(0, x.length - 1)) + "" + (lastdigit - 1);
    if (x == "10") return "9"; // delete this line if you prefer a leading zero
    return decr_num(x.substring(0, x.length - 1)) + "9";
}
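For example, applying these to the ID string from the question:
console.log(incr_num("3494793310847464221")); // "3494793310847464222"
console.log(decr_num("3494793310847464221")); // "3494793310847464220"
console.log(decr_num("1000"));                // "999" -- the borrow propagates across the zeros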
Just guessing, but perhaps the number is stored as a floating-point type, and the difference might be due to a rounding error. If that is the case, it might work correctly if you use another interpreter (browser, or whatever you are running it in).