I have been trying to do something simple: change a positive number into a negative one.
It appears there are a number of ways to do this. There is the standard
x *= -1
or just placing a negative sign in front of the variable, i.e. if x = 5, then -x is equal to -5.
This seems like a great shorthand, but I wanted to know what the difference is. I can't find any documentation regarding this shorthand on MDN.
I assume there are other ways too.
Probably a basic question, but it is annoying not to understand this apparent shorthand.
Any ideas?
Unary operators in JavaScript are basically shorthand functions. You can find the documentation for the unary negation (-) operator here.
The - takes in one argument: the number you pass to it. Under the hood, I'm guessing it multiplies that number by -1 and returns the product. The function could be written along the lines of:
// Not legal syntax (a function can't be named "-"), but conceptually:
function unaryMinus(arg) {
    return arg * -1;
}
This is conjecture though. Will need to go through V8's codebase to know for sure.
Update:
So from further research, I figure that instead of a multiplication by -1, it could be a simple sign change. I tried to refer to V8's implementation, but that proved to be a dead end because I suck at C++. However, upon checking the ECMA spec and the IEEE 754 spec described here in Steve Hollasch's wonderful blog, I am leaning towards an inversion of the sign bit. All JavaScript numbers are 64-bit IEEE 754 floating-point values, which can be represented like so:
SEEEEEEE EEEEMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
where S is the sign bit, the Es are the exponent bits, and the Ms are the mantissa bits. So it looks like the unary - just flips the sign bit.
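You can check this from JavaScript itself. A minimal sketch (assuming a little-endian host, which covers most common hardware): view the same eight bytes as both a double and as raw bytes, negate, and compare.
const f64 = new Float64Array(1);
const bytes = new Uint8Array(f64.buffer); // the same 8 bytes, viewed raw

f64[0] = 5;
const before = bytes.join(' ');
f64[0] = -f64[0];
const after = bytes.join(' ');

console.log(before); // 0 0 0 0 0 0 20 64
console.log(after);  // 0 0 0 0 0 0 20 192
The two dumps differ only in the last byte printed, by exactly 128 (0x80): the sign bit.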
Related
I am having an issue with the way JavaScript rounds numbers when hitting 0.5.
I am writing levy calculators, and am noticing a 0.1c discrepancy in the results.
The problem is that the result for them is 21480.705 which my application translates into 21480.71, whereas the tariff says 21480.70.
This is what I am seeing with Javascript:
(21480.105).toFixed(2)
"21480.10"
(21480.205).toFixed(2)
"21480.21"
(21480.305).toFixed(2)
"21480.31"
(21480.405).toFixed(2)
"21480.40"
(21480.505).toFixed(2)
"21480.51"
(21480.605).toFixed(2)
"21480.60"
(21480.705).toFixed(2)
"21480.71"
(21480.805).toFixed(2)
"21480.81"
(21480.905).toFixed(2)
"21480.90"
Questions:
What the hell is going on with this erratic rounding?
What's the quickest easiest way to get a "rounded up" result (when hitting 0.5)?
So as some of the others already explained, the reason for the 'erratic' rounding is a floating-point precision problem. You can investigate this by using the toExponential() method of a JavaScript number.
(21480.905).toExponential(20)
"2.14809049999999988358e+4"
(21480.805).toExponential(20)
"2.14808050000000002910e+4"
As you can see here, 21480.905 gets a double representation that is slightly smaller than 21480.905, while 21480.805 gets a double representation slightly larger than the original value. Since the toFixed() method works with the double representation and has no idea of your originally intended value, it does all it can and should do with the information it has.
One way to work around this is to shift the decimal point to the number of decimals you require by multiplication, then use the standard Math.round(), then shift the decimal point back again, either by division or by multiplication with the inverse. Finally, we call the toFixed() method to make sure the output value gets correctly zero-padded.
var x1 = 21480.905;
var x2 = -21480.705;

// Rounds halves toward positive infinity ("up"), to nd decimals.
function round_up(x, nd)
{
    var rup = Math.pow(10, nd);
    var rdwn = Math.pow(10, -nd); // Or you can just use 1/rup
    return (Math.round(x * rup) * rdwn).toFixed(nd);
}

// Rounds halves toward negative infinity ("down"): negate, round, negate back.
function round_down(x, nd)
{
    var rup = Math.pow(10, nd);
    var rdwn = Math.pow(10, -nd);
    return (Math.round(x * -rup) * -rdwn).toFixed(nd);
}

// Rounds halves toward zero, by picking the right direction for the sign.
function round_tozero(x, nd)
{
    return x > 0 ? round_down(x, nd) : round_up(x, nd);
}

console.log(x1, 'up', round_up(x1, 2));
console.log(x1, 'down', round_down(x1, 2));
console.log(x1, 'to0', round_tozero(x1, 2));
console.log(x2, 'up', round_up(x2, 2));
console.log(x2, 'down', round_down(x2, 2));
console.log(x2, 'to0', round_tozero(x2, 2));
Finally:
Encountering a problem like this is usually a good time to sit down and have a long think about whether you are actually using the correct data type for your problem. Since floating-point errors can accumulate over iterative calculations, and since people are sometimes strangely sensitive about money magically disappearing/appearing in the CPU, maybe you would be better off keeping monetary counters in integer 'cents' (or some other well-thought-out structure) rather than floating-point 'dollars'.
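As a minimal sketch of that idea (the levy amounts below are hypothetical, chosen only for illustration): keep every amount as an integer number of cents, do all the arithmetic on integers, and divide by 100 only when formatting for display.
var levyCents = [1074035, 1074035]; // hypothetical amounts, in whole cents
var totalCents = levyCents.reduce(function (sum, c) { return sum + c; }, 0);

// Integer arithmetic is exact up to 2^53 - 1, so no rounding errors accumulate here.
console.log((totalCents / 100).toFixed(2)); // "21480.70"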
The why -
You may have heard that in some languages, such as JavaScript, numbers with a fractional part are called floating-point numbers, and that floating-point numbers are about approximations of numeric operations. Not exact calculations, approximations. Because how exactly would you expect to compute and store 1/3, or the square root of 2, with exact calculations?
If you had not, then now you've heard of it.
That means that when you type in the number literal 21480.105, the actual value that ends up stored in computer memory is not actually 21480.105, but an approximation of it. The value closest to 21480.105 that can be represented as a floating-point number.
And since this value is not exactly 21480.105, that means it is either slightly more than that, or slightly less than that. More will be rounded up, and less will be rounded down, as expected.
The solution -
Your problem comes from approximations that, it seems, you cannot afford. The solution is to work with exact numbers, not approximations.
Use whole numbers. Those are exact. Add the decimal point only when you convert your numbers to strings.
This works in most cases. (See note below.)
The rounding problem can be avoided by using numbers represented in
exponential notation:
function round(value, decimals) {
    // '21480.105e2' parses to exactly 2148010.5; Math.round gives 2148011;
    // '2148011e-2' parses back to 21480.11, avoiding a lossy float multiply.
    return Number(Math.round(value + 'e' + decimals) + 'e-' + decimals);
}
console.log(round(21480.105, 2).toFixed(2));
Found at http://www.jacklmoore.com/notes/rounding-in-javascript/
NOTE: As pointed out by Mark Dickinson, this is not a general solution, because it returns NaN in certain cases, such as round(0.0000001, 2) (there (0.0000001).toString() is "1e-7", so the concatenated string "1e-7e2" is not a valid number) and with large inputs.
Edits to make this more robust are welcome.
You could round to an integer, then shift in a decimal point while displaying:
function round(n, digits = 2) {
    // Rounding to an integer is accurate in more cases; shift left by "digits"
    // so the digits we want end up in the integer part. (Assumes n >= 0.)
    const str = ("" + Math.round(n * 10 ** digits))
        .padStart(digits + 1, "0"); // ensure there are enough digits: 5 -> 005 -> 0.05
    // Insert the decimal point "digits" counted from the end:
    return str.slice(0, -digits) + "." + str.slice(-digits);
}
What the hell is going on with this erratic rounding?
Please reference the cautionary Mozilla doc, which identifies the cause of these discrepancies: "Floating point numbers cannot represent all decimals precisely in binary which can lead to unexpected results..."
Also, please reference Is floating point math broken? (Thank you Robby Cornelissen for the reference)
What's the quickest easiest way to get a "rounded up" result (when hitting 0.5)?
Use a JS library like accounting.js to round, format, and present currency.
For example...
function roundToNearestCent(rawValue) {
return accounting.toFixed(rawValue, 2);
}
const roundedValue = roundToNearestCent(21480.105);
console.log(roundedValue);
<script src="https://combinatronics.com/openexchangerates/accounting.js/master/accounting.js"></script>
Also, consider checking out BigDecimal in JavaScript.
Hope that helps!
I was making a calculator (something like Excel in JavaScript) and I found some strange behavior with parseFloat.
parseFloat(999999999999999) //999999999999999
parseFloat(9999999999999999) //10000000000000000
parseFloat(9999999999999899) //9999999999999900
Is there a limit with the parseFloat function in JavaScript? Going by the ECMA standard, there is no mention of such a limit.
A float is not an endless container. Consider this example:
console.log(0.1 + 0.2 == 0.3) // Prints... FALSE!
Or, another case:
console.log(99999999999999999999999999999999999) // Prints 1e+35
...while 1e+35 is just a 1 with 35 zeroes. The original number (9999...) is so long that JS starts cutting off the lower digits in order to store at least something: the source is too big to fit in a float.
This happens because of the internal float conversions made by the JavaScript engine, and because the philosophy of the float type is that the higher digits are more important than the lower ones.
Your case is similar. Floating-point accuracy depends on the length of the value, so if your value is too big or too small, you will lose precision in the lower digits.
Thus you should never fully trust a float, and never compare floats to other values using '==' or '===': the result may be anything.
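The cutoff observed in the question falls exactly at the limit of exact integers in a 64-bit double, which JavaScript exposes as a constant:
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991 (2^53 - 1)
console.log(Number.isSafeInteger(999999999999999));  // true: 15 digits, stored exactly
console.log(Number.isSafeInteger(9999999999999999)); // false: 16 digits, not exact
console.log(9999999999999999 === 10000000000000000); // true: both parse to the same double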
I did some investigation and found that there is a whole website explaining the correct way to use floats: http://floating-point-gui.de/
In Java, for example, I always used BigDecimal for floating-point values, just to make sure that everything would work correctly without confusing me. For example:
BigDecimal a = new BigDecimal("0.1");
BigDecimal b = new BigDecimal("0.2");
BigDecimal c = a.add(b); // returns a BigDecimal representing exactly 0.3
// instead of this number: 0.30000000000000004 that can
// easily confuse me
However, in JavaScript I realized that there is no such built-in library (at least not on the Math object that I've looked at).
So the best way I have found so far is to use a JavaScript library that does exactly that! In my projects I am using this one: https://github.com/dtrebbien/BigDecimal.js
Although I think this is the best library I could find, the specific library doesn't really matter so much. My main questions are:
Is using a library like BigDecimal the best possible way to work with floats in JavaScript, or am I missing something? I want to do basic calculations like adding, multiplying, etc.
Is there any other suggested way, for example, to add two floats in JavaScript?
For example, let's say that I want to compute 0.1 + 0.2. With the BigDecimal library, I will have:
var a = new BigDecimal("0.1");
var b = new BigDecimal("0.2");
console.log(a.add(b).toString()); //returns exactly 0.3
So is there any other way to add 0.1 + 0.2 and get exactly 0.3 in JavaScript, without having to actually round the number?
For reference, the example below in JavaScript will not work:
var a = 0.1;
var b = 0.2;
console.log(a + b); //This will have as an output: 0.30000000000000004
As all numbers in JavaScript are 64-bit, in general the best way to do floating-point arithmetic in JavaScript is to simply use the numbers directly.
However, if you have a specific problem where you need higher precision than 64 bits will provide, then you need to do something like that.
I urge you, however, to strongly consider whether you have such a use case or not.
If your problem is with some far-down decimals affecting your comparisons, there are functions to deal with that sort of thing specifically. I urge you to look up the Number.prototype.toFixed(n) function, and also to see this discussion on almostEquals, which proposes that you incorporate an epsilon for float comparisons.
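A minimal sketch of such an epsilon comparison (the tolerance below is an assumption; choose one that suits your domain). Number.EPSILON is the gap between 1 and the next representable double, scaled here by the magnitude of the operands:
function almostEquals(a, b) {
    var epsilon = Number.EPSILON * Math.max(Math.abs(a), Math.abs(b));
    return Math.abs(a - b) <= epsilon;
}

console.log(0.1 + 0.2 === 0.3);            // false
console.log(almostEquals(0.1 + 0.2, 0.3)); // true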
You could use the toFixed(n) method if you are not relying on high precision:
var a = 0.1;
var b = 0.2;
var sum = a + b;
console.log(sum.toFixed(1));
Your calculation shows a precision loss at the 17th significant digit, which is no big issue in most cases.
I would advise you to go with toFixed() if you want to get the output right.
There are a few things to consider here.
A lot of the time, people use the term 'float' when they really mean to use a fixed decimal.
Fixed Decimal
US currency, for example, is a fixed decimal.
$12.40
$0.90
In this case, there will always be two decimal points.
If your values fit into the range of JavaScript's exact integers (2^53 - 1, i.e. 9007199254740991), then you can simply work in cents and store all your values that way.
1240
90
Floating point decimal
Now, floating point is where you deal with extreme ranges of numbers and the decimal point actually moves, or floats.
12.394
1294856.9458566
.0000000998984
49586747435893
With floating point, you get accuracy to 53 significant bits (which means around 15 decimal digits of accuracy). For a lot of things, that is good enough.
Big Decimal Classes
You should only look at big-decimal classes if you need something beyond the range of JavaScript's native numbers. BigDecimal classes are much slower than native math, and you have to use a functional style of programming rather than the usual math operators.
JavaScript does not support operator overloading, so there is no built-in way to do natural calculations like '0.1 + 0.2' with BigNumbers.
What you can do is use math.js, which has an expression parser and support for BigNumbers:
math.config({
number: 'bignumber', // Default type of number: 'number' or 'bignumber'
precision: 64 // Number of significant digits for BigNumbers
});
math.eval('0.1 + 0.2'); // returns a BigNumber, 0.3
See docs: http://mathjs.org/docs/datatypes/bignumbers.html
You could convert the operands to integers beforehand:
(0.1 * 10 + 0.2 * 10) / 10
This will give the exact answer, but people probably don't want to deal with that floating-point rounding stuff themselves.
I was bored, so I started fiddling around in the console, and stumbled onto this (ignore the syntax error):
Some variable "test" has a value, which I multiply by 10K; it suddenly changes into a different number (you could call it a rounding error, but that depends on how much accuracy you need). I then multiply that number by 10, and it changes back/again.
That raises a few questions for me:
How inaccurate is JavaScript? Has this been determined, i.e. is there a bound that can be taken into account?
Is there a way to fix this? I.e. to do math in JavaScript with complete accuracy (within the limitations of its data type).
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
I'm not sure whether this should be a separate question, but I was actually trying to round numbers to a certain number of digits after the decimal point. I've researched it a bit, and have found two methods:
> Method A
function roundNumber(number, digits) {
    var multiple = Math.pow(10, digits);
    // Note: Math.floor truncates, so this rounds toward negative infinity.
    return Math.floor(number * multiple) / multiple;
}
> Method B
function roundNumber(number, digits) {
return Number(number.toFixed(digits));
}
Intuitively I like method B more (it looks more efficient), but I don't know what's going on behind the scenes, so I can't really judge. Anyone have an idea on that? Or a way to benchmark this? And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
How inaccurate is JavaScript?
JavaScript uses standard double-precision floating-point numbers, so the precision limitations are the same as for any other language that uses them, which is most languages. It's the native format used by the processor to handle floating-point numbers.
Is there a way to fix this? I.e. to do math in JavaScript with complete accuracy (within the limitations of its data type).
No. The precision limitation lies in the way the number is stored. Floating-point numbers don't have complete accuracy, so no matter how you do the calculations, you can't achieve absolute accuracy, as the result goes back into a floating-point number.
If you want complete accuracy, then you need to use a different data type.
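For exact whole-number arithmetic, for instance, BigInt (added to the language in ES2020) is one such data type:
console.log(9007199254740993n + 1n); // 9007199254740994n: exact
console.log(9007199254740993 + 1);   // 9007199254740992: the literal itself has already
                                     // rounded to the nearest double, 2^53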
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
It's changing again.
When a number is converted to text to be displayed, it's rounded to a certain number of digits. The numbers that look exact aren't; it's just that the limitations in precision don't show up.
When the number "changes back", it's just because the rounding again hides the limitations in the precision. Each calculation adds or subtracts a small inaccuracy, and sometimes it just happens to take the number closer to the number you had originally. Even though it looks more accurate, it's actually less accurate, as each calculation adds a bit of uncertainty.
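You can make the hidden digits visible. The default string conversion picks the shortest numeral that round-trips to the same double, which is why the error usually doesn't show, while toPrecision exposes more of the stored value:
console.log(0.3);                   // 0.3 (the shortest round-tripping form)
console.log((0.3).toPrecision(21)); // 0.299999999999999988898 (closer to what is stored)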
Internally, JavaScript uses 64-bit IEEE 754 floating-point numbers, which are a widely used standard and usually guarantee about 16 digits of accuracy. The error you witnessed was on the 17th significant digit of the number, and was reeeally tiny.
Is there a way to [...] do math in JavaScript with complete accuracy (within the limitations of its data type).
I would say that JavaScript's math is completely accurate within the limitations of its datatype. The error you witnessed was outside of those limitations.
Are you working with calculations that require a higher degree of precision than that?
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
The number never really became more or less accurate than the original value. It was only when the value was converted into a decimal representation that a rounding error became apparent. But this was not a case of the value "changing back" to an accurate number; the rounding error was just too small to display.
And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
"Why is the language this way" questions are not considered very productive here, but it is easy to get around this limitation (assuming you mean numbers and not integers). This answer has 337 upvotes: +numb.toFixed(digits);, but note that if you try to display a number produced with that expression, there's no guarantee that it will actually display with only six digits. That's probably one of the reasons why JavaScript's "round to N places" function produces a string and not a number.
I came across the same issue a few times, and with further research I was able to solve these little issues by using the library below
Math.js Library
Sample
import {
atan2, chain, derivative, e, evaluate, log, pi, pow, round, sqrt
} from 'mathjs'
// functions and constants
round(e, 3) // 2.718
atan2(3, -3) / pi // 0.75
log(10000, 10) // 4
sqrt(-4) // 2i
pow([[-1, 2], [3, 1]], 2) // [[7, 0], [0, 7]]
derivative('x^2 + x', 'x') // 2 * x + 1
// expressions
evaluate('12 / (2.3 + 0.7)') // 4
evaluate('12.7 cm to inch') // 5 inch
evaluate('sin(45 deg) ^ 2') // 0.5
evaluate('9 / 3 + 2i') // 3 + 2i
evaluate('det([-1, 2; 3, 1])') // -7
// chaining
chain(3)
.add(4)
.multiply(2)
.done() // 14
Let N(x) be the value of the decimal numeral with the fewest significant digits
such that x is the double value nearest the value of the numeral.
Given double values a and b, how can we compute the double value nearest N(b)-N(a)?
E.g.:
If a and b are the double values nearest .2 and .3,
the desired result is the double value nearest .1,
0.1000000000000000055511151231257827021181583404541015625,
rather than the result of directly subtracting a and b,
0.09999999999999997779553950749686919152736663818359375.
As a baseline: in Java, Double.toString() provides the N(x) function described in the question, returning its value as a numeral. One could take the strings for a and b, subtract them with the elementary-school method, and convert the resulting string back to a double.
This demonstrates that solving the problem is quite feasible using existing library routines. That leaves the task of improving the solution. I suggest exploring:
Is there a function D(x) that returns the number of significant digits after the decimal place for the numeral described in N(x)? If so, can we multiply a and b by a power of ten determined by D(a) and D(b), round as necessary to produce the correct integer results (for situations where they are representable as double values), subtract them, and divide by the power of ten? (A sketch of this appears after the next suggestion.)
Can we establish criteria for which b-a or some simple expression can be quickly rounded to something near a decimal numeral, bypassing the code that would be necessary for harder cases? E.g., could we prove that for numbers within a certain range, (round(10000*b)-round(10000*a))/10000 always produces the desired result?
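A rough JavaScript sketch of the first suggestion, under two assumptions stated up front: the default toString (which in JavaScript already produces the shortest round-tripping numeral) does not fall back to exponential notation, and the scaled values stay below 2^53 so the intermediate integers are exact.
function decimals(x) {
    // Number of digits after the decimal point in the shortest numeral for x.
    var s = x.toString(); // assumption: no exponential forms such as "1e-7"
    var dot = s.indexOf('.');
    return dot < 0 ? 0 : s.length - dot - 1;
}

function subtractNumerals(b, a) {
    var scale = Math.pow(10, Math.max(decimals(a), decimals(b)));
    // Round after scaling so both operands become exact integers before subtracting.
    return (Math.round(b * scale) - Math.round(a * scale)) / scale;
}

console.log(subtractNumerals(0.3, 0.2)); // 0.1
console.log(0.3 - 0.2);                  // 0.09999999999999998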
You can convert to 'integers' by multiplying then dividing by a power of ten:
(10*.3 - 10*.2)/10 == 0.1000000000000000055511151231257827021181583404541015625
It may be possible to work out the appropriate power of ten from the string representation of the number. @PatriciaShanahan suggests looking for repeated 0's or 9's.
Consider using a BigDecimal library such as javascript-bignum instead.
You could also inquire in Smalltalk Pharo 2.0, where your request translates to:
^(b asMinimalDecimalFraction - a asMinimalDecimalFraction) asFloat
The code could be found as an attachment to issue 4957 at code.google.com/p/pharo/issues - alas, that is now a dead link, and the new bug tracker requires a login...
https://pharo.fogbugz.com/f/cases/5000/Let-asScaledDecimal-use-the-right-number-of-decimals
The source code is also on GitHub, currently:
https://github.com/pharo-project/pharo-core/blob/6.0/Kernel.package/Float.class/instance/printing/asMinimalDecimalFraction.st
The algorithm is based on:
Robert G. Burger and R. Kent Dybvig, "Printing Floating-Point Numbers Quickly and Accurately", Proceedings of the ACM SIGPLAN 1996 Conference on Programming Language Design and Implementation (PLDI), June 1996. http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf