Compute the double value nearest preferred decimal result - javascript

Let N(x) be the value of the decimal numeral with the fewest significant digits
such that x is the double value nearest the value of the numeral.
Given double values a and b, how can we compute the double value nearest N(b)-N(a)?
E.g.:
If a and b are the double values nearest .2 and .3,
the desired result is the double value nearest .1,
0.1000000000000000055511151231257827021181583404541015625,
rather than the result of directly subtracting a and b,
0.09999999999999997779553950749686919152736663818359375.

As a baseline: in Java, Double.toString() provides the N(x) function described in the question, returning its value as a numeral. One could take the strings for a and b, subtract them with the elementary-school method, and convert the resulting string to a double.
This demonstrates that solving the problem is quite feasible using existing library routines. That leaves the task of improving the solution. I suggest exploring:
Is there a function D(x) that returns the number of significant digits after the decimal place for the numeral described in N(x)? If so, can we multiply a and b by a power of ten determined by D(a) and D(b), round as necessary to produce the correct integer results (for situations where they are representable as double values), subtract them, and divide by the power of ten?
Can we establish criteria for which b-a or some simple expression can be quickly rounded to something near a decimal numeral, bypassing the code that would be necessary for harder cases? E.g., could we prove that for numbers within a certain range, (round(10000*b)-round(10000*a))/10000 always produces the desired result?
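A minimal sketch of that last idea in JavaScript (the helper names are made up for illustration; the approach assumes String(x) yields the shortest non-exponential decimal, which plays the role of N(x), and that the scaled values stay within Number.MAX_SAFE_INTEGER):

function decimalPlaces(x) {
  // String(x) gives the shortest decimal that round-trips to x (exponential forms are not handled here)
  const s = String(x);
  const dot = s.indexOf(".");
  return dot === -1 ? 0 : s.length - dot - 1;
}

function subtractDecimals(b, a) {
  const scale = Math.pow(10, Math.max(decimalPlaces(a), decimalPlaces(b)));
  // Rounding recovers the exact scaled integers before the subtraction
  return (Math.round(b * scale) - Math.round(a * scale)) / scale;
}

console.log(subtractDecimals(0.3, 0.2)); // 0.1
console.log(0.3 - 0.2);                  // 0.09999999999999998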

You can convert to 'integers' by multiplying then dividing by a power of ten:
(10*.3 - 10*.2)/10 == 0.1000000000000000055511151231257827021181583404541015625
It may be possible to work out the appropriate power of ten from the string representation of the number. @PatriciaShanahan suggests looking for repeated 0's or 9's.
Consider using a BigDecimal library such as javascript-bignum instead.

You could also inquire in Smalltalk Pharo 2.0, where your request translates to:
^(b asMinimalDecimalFraction - a asMinimalDecimalFraction) asFloat
The code could be found as an attachment to issue 4957 at code.google.com/p/pharo/issues - alas, that is now a dead link, and the new bug tracker requires a login...
https://pharo.fogbugz.com/f/cases/5000/Let-asScaledDecimal-use-the-right-number-of-decimals
The source code is also on GitHub, currently:
https://github.com/pharo-project/pharo-core/blob/6.0/Kernel.package/Float.class/instance/printing/asMinimalDecimalFraction.st
The algorithm is based on:
Robert G. Burger and R. Kent Dybvig
Printing Floating Point Numbers Quickly and Accurately
ACM SIGPLAN 1996 Conference on Programming Language Design and Implementation
June 1996.
http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf


How does Javascript store a numeric value?

I am new to JavaScript programming and referring to Eloquent JavaScript, 3rd Edition by Marijn Haverbeke.
There is a statement in this book which reads like,
"JavaScript uses a fixed number of bits, 64 of them, to store a single number value. There are only so many patterns you can make with 64 bits, which means that the number of different numbers that can be represented is limited. With N decimal digits, you can represent 10^N numbers. Similarly, given 64 binary digits, you can represent 2^64 different numbers, which is about 18 Quintilian (an 18 with 18 zeros after it). That’s a lot."
Can someone help me with the actual meaning of this statement? I am confused as to how values of more than 2^64 are stored in computer memory.
Can someone help me with the actual meaning of this statement? I am confused as to how values of more than 2^64 are stored in computer memory.
Your question relates to more general concepts in computer science; for this question, JavaScript sits at a higher level.
Please make sure you understand the basic concepts of memory and storage first:
https://study.com/academy/lesson/how-do-computers-store-data-memory-function.html
https://www.britannica.com/technology/computer-memory
https://www.reddit.com/r/askscience/comments/2kuu9e/how_do_computers_handle_extremely_large_numbers/
How do computers evaluate huge numbers?
Also, for JavaScript, please see this section of the ECMAScript specification:
Ref: https://www.ecma-international.org/ecma-262/5.1/#sec-8.5
The Number type has exactly 18437736874454810627 (that is, 2^64−2^53+3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53−2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
There are two other special values, called positive Infinity and negative Infinity. For brevity, these values are also referred to for expository purposes by the symbols +∞ and −∞, respectively. (Note that these two infinite Number values are produced by the program expressions +Infinity (or simply Infinity) and -Infinity.)
The other 18437736874454810624 (that is, 2^64−2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Note that there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and −0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
The 18437736874454810622 (that is, 2^64−2^53−2) finite nonzero values are of two kinds:
18428729675200069632 (that is, 2^64−2^54) of them are normalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^53 but not less than 2^52, and e is an integer ranging from −1074 to 971, inclusive.
The remaining 9007199254740990 (that is, 2^53−2) values are denormalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^52, and e is −1074.
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).
A finite number has an odd significand if it is nonzero and the integer m used to express it (in one of the two forms shown above) is odd. Otherwise, it has an even significand.
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754 “round to nearest” mode.)
Some ECMAScript operators deal only with integers in the range −2^31 through 2^31−1, inclusive, or in the range 0 through 2^32−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2^32 integer values. See the descriptions of the ToInt32 and ToUint32 operators in 9.5 and 9.6, respectively.
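To relate this back to the original question, a few quick checks (assuming a standard IEEE 754 double implementation) show how values larger than 2^53, and even larger than 2^64, are still storable, only with a growing gap between representable integers:

console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991, i.e. 2^53 - 1
console.log(2 ** 53 === 2 ** 53 + 1);  // true: 2^53 + 1 is not representable and rounds back to 2^53
console.log(2 ** 64);                  // 18446744073709552000 (the stored double is exactly 2^64, printed as its shortest decimal)
console.log(Number.MAX_VALUE);         // 1.7976931348623157e+308, far beyond 2^64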
You have probably learned about big numbers in mathematics.
For example, the Avogadro constant is 6.022 × 10^23.
Computers can also store numbers in this format, except for two things:
They store it as a binary number.
They would store Avogadro's constant as 0.6022 × 10^24; more precisely, they store:
the precision: a value that is between 0 and 1 (here 0.6022); this is usually 2-8 bytes,
the size/magnitude of the number (here 24); this usually needs only 1 byte, because 2^256 is already a very big number.
As you can see, this method can be used to store an inexact value of a very big or very small number.
An example of the inaccuracy: 0.1 + 0.2 == 0.30000000000000004
For performance reasons, most engines often use the normal format when it makes no difference to the results.
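If you want to see this sign/exponent/fraction layout directly, here is a rough sketch (the helper name doubleBits is made up; it just reinterprets the 8 bytes of a stored double):

function doubleBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order by default
  let bits = "";
  for (let i = 0; i < 8; i++) bits += view.getUint8(i).toString(2).padStart(8, "0");
  // 1 sign bit, 11 exponent bits, 52 fraction bits
  return bits.slice(0, 1) + " " + bits.slice(1, 12) + " " + bits.slice(12);
}

console.log(doubleBits(0.1));
// should print: 0 01111111011 1001100110011001100110011001100110011001100110011010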

What are the chances of Math.random returning 0?

Like the asker of this question, I was wondering why Math.ceil(Math.random() * 10) was not preferred over Math.floor(Math.random() * 10) + 1, and found that it was because Math.random has a tiny (but relevant) chance of returning 0 exactly. But how tiny?
Further research told me that this random number is accurate to 16 decimal places... well, sort of. And it's the "sort of" that I'm curious about.
I understand that floating point numbers work differently to decimals. I struggle with the specifics though. If the number were a strict decimal value, I believe the chances would be one in ten billiard (or ten quadrillion, in the American system) - 1:10^16.
Is this correct, or have I messed up, or does the floating point thing make a difference?
JavaScript is a dialect of ECMAScript. The ECMA-262 standard fails to specify Math.random precisely. The relevant clause says:
Math.random ( )
Returns a Number value with positive sign, greater than or equal to +0𝔽 but strictly less than 1𝔽, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-defined algorithm or strategy. This function takes no arguments.
Each Math.random function created for distinct realms must produce a distinct sequence of values from successive calls.
In the absence of a complete specification, no definitive statement can be made about the probability of Math.random returning zero. Each ECMAScript implementation may choose a different algorithm and need not provide a truly uniform distribution.
ECMAScript uses the IEEE-754 basic 64-bit binary floating-point format for its Number type. In this format, the significand (fraction portion) of the number has 53 bits. Every floating-point number has the form s • f • 2^e, where s (for sign) is +1 or −1, f (for fraction) is the significand and is an integer in [0, 2^53), and e (for exponent) is an integer in [−1074, 971]. The number is said to be normalized if the high bit of f is set (so f is in [2^52, 2^53)). Since negative numbers are not a concern in this answer, let s be implicitly +1 for the rest of this answer.
One issue with distributing random numbers in [0, 1) is that the representable values are not evenly spaced. There are 2^52 representable values in [½, 1)—all those with f in [2^52, 2^53) and e = −53. And there are the same number of values in [¼, ½)—all those with f in [2^52, 2^53) and e = −54. Since there are the same number of numbers in this interval but the interval is half as long, the numbers are more closely spaced. Similarly, in [⅛, ¼), the spacing halves again. This continues until the exponent reaches −1074, at which point the normal numbers end with f = 2^52. The numbers smaller than that are said to be subnormal (or zero), with f in [0, 2^52) and e = −1074, and they are evenly spaced.
One choice about how to distribute the numbers for Math.random is to use only the set of evenly spaced numbers f • 2^−53 for f in [0, 2^53). This uses all the representable values in [½, 1), but only half the values in [¼, ½), one-fourth the values in [⅛, ¼), and so on. This is simple and avoids some oddities in the distribution. If implemented correctly, the probability zero is produced is one in 2^53.
Another choice is to use all the representable values in [0, 1), each with probability proportional to the distance from it to the next higher representable value. Thus, each representable number in [½, 1) would be chosen with probability 1/2^53, each representable number in [¼, ½) would be chosen with probability 1/2^54, each representable number in [⅛, ¼) would be chosen with probability 1/2^55, and so on. This distribution approximates a uniform distribution on the reals and provides finer precision where the floating-point format is finer. If implemented correctly, the probability zero is produced is one in 2^1074.
Another choice is to use all the representable values in [0, 1), each with probability proportional to the length of the segment in which the representable value is the nearest representable value of all the real numbers in the segment. I will omit discussion of some details of this distribution except to say it mimics the results one would get by choosing a real number with uniform distribution and then rounding it to a representable value using the round-to-nearest-ties-to-even rule. If implemented correctly, the probability zero is produced is one in 2^1075. (One problem with this distribution is that a uniform distribution over the reals in [0, 1) will sometimes produce a number so close to 1 that rounding produces 1. This then requires either that Math.random be allowed to return 1 or that the distribution be fudged in some way, perhaps by returning the next lower representable value instead of 1.)
I will note that the ECMAScript specification is sufficiently lax that one might assert that Math.random may distribute the numbers with equal probability for each representable value, ignoring the spacing between them. This would not mimic a uniform distribution over the real numbers at all, and I expect very few people would favor it. However, if implemented, the probability zero is returned is one in 1021 • 2^52, because there are 2^52 normalized numbers for each exponent from −53 to −1074 (1020 values of e), and 2^52 subnormal or zero numbers.
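For illustration, here is a sketch of the first choice described above (this is not how any particular engine is specified to work; crypto.getRandomValues is used only as a convenient source of random bits):

function random53() {
  const words = new Uint32Array(2);
  crypto.getRandomValues(words);
  const high = words[0] >>> 11;            // 21 random bits
  const low = words[1];                     // 32 random bits
  return (high * 2 ** 32 + low) / 2 ** 53;  // f / 2^53 with f uniform in [0, 2^53)
}

console.log(random53()); // a value in [0, 1); it is exactly 0 with probability 1 in 2^53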

Lack of precision of the toFixed method in javascript

I have done some tests with the Number.prototype.toFixed method in the Chrome (v60.0.3112.101) console and found something that puzzled me.
Why does 1.15.toFixed(1) return "1.1" and not "1.2"?
Why does 1.05.toFixed(1) return "1.1" and not "1.0"?
And so on...
I did some research in the ECMAScript specification.
NOTE 1
toFixed returns a String containing this Number value represented in decimal fixed-point notation with fractionDigits digits after the decimal point. If fractionDigits is undefined, 0 is assumed.
I know what fixed-point notation is, but I can't explain the puzzles above. Could someone give a clear explanation?
BTW, I think the detailed arithmetic in the specification should be improved.
Taking 1.105 for instance, the relevant arithmetic is the following:
Let n be an integer for which the exact mathematical value of n ÷ 10^f - x is as close to zero as possible. If there are two such n, pick the larger n.
According to "pick the larger n", 111 should be taken into consideration rather than 110, which contradicts what actually happens.
I'll try my best to clarify the points around that question. First of all, the fixed-point notation:
I know what fixed-point notation is. But I can't explain it well.
The fixed-point notation is opposed to the floating-point notation. The floating-point notation allows better precision most of the time, but it is also more difficult to understand and to compute.
Well, let's go back to the fixed-point notation. This is also an arithmetic notation for real numbers. The difference is that a number in fixed-point notation is represented by an integer with a scaling factor.
For example :
If you want to write 4.56 and you got a 1/1000 scaling factor, your number will be represented by 4560. Indeed, 4560 * (1/1000) = 4.56
Now that we know how fixed-point notation works, we can better understand the results of the toFixed(n) function. Let's say, for the example, that the scaling factor is 1/1000 (that is not the real value, but it makes the results easier to visualise).
1.15.toFixed(1)
Will keep one decimal and then represent the number in fixed-point notation, so it does not care about the '5'. The number you get in memory is 1100. This is why the number is rounded down to the closest lower value.
Now, as you can see in the MDN documentation of the toFixed function, you can keep up to 20 decimals. From that information we can say that the scaling factor is 1/10^20.
I hope that answers your questions.
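As a side note, you can also look at the double values actually stored for those literals; toPrecision with extra digits reveals them, which is enough to see why both round to "1.1" and why 110 wins in the 1.105 case:

console.log((1.15).toPrecision(21));  // 1.14999999999999991118 -> toFixed(1) rounds down to "1.1"
console.log((1.05).toPrecision(21));  // 1.05000000000000004441 -> toFixed(1) rounds up to "1.1"
console.log((1.105).toPrecision(21)); // 1.10499999999999998224 -> n = 110 is closest, not 111, so there is no tie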

Odd behavior with ParseFloat when my string is too long

I was making a calculator (something like Excel in JavaScript) and I found a strange behavior with parseFloat.
parseFloat(999999999999999) //999999999999999
parseFloat(9999999999999999) //10000000000000000
parseFloat(9999999999999899) //9999999999999900
Is there a limit with the parseFloat function in JavaScript? According to the ECMA standard, there is no issue with this.
Float is not an endless container. Consider this example:
console.log(0.1 + 0.2 == 0.3) // Prints... FALSE!
Or, another case:
console.log(99999999999999999999999999999999999) // Prints 1e+35
...while 1e+35 is just a 1 with 35 zeroes. The original number (9999...) is so large and precise that JS starts cutting off lower digits to store at least something; the source is too big to fit in a float.
This actually happens because of internal float conversions made by the JavaScript engine, and the philosophy of the float type is that higher digits are more important than lower ones.
Your case is somewhat similar. This is because floating-point accuracy depends on the magnitude of the value. So, if your value is too big or too small, you will lose precision in the lower digits.
Thus you should never trust floats, and never compare them with other values using '==' or '===' - the result may be anything.
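A quick way to check whether an integer parsed this way survived intact is Number.isSafeInteger, which flags exactly the range where every integer is representable; BigInt is one exact alternative (a small sketch, not part of the original answer's code):

const n = parseFloat("9999999999999999");
console.log(n);                           // 10000000000000000, the nearest representable double
console.log(Number.isSafeInteger(n));     // false: beyond 2^53, not every integer exists as a double
console.log(BigInt("9999999999999999"));  // 9999999999999999n, stored exactly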

Math inaccuracy in Javascript: safe to use JS for important stuff?

I was bored, so I started fiddling around in the console, and stumbled onto this (ignore the syntax error):
Some variable "test" has a value, which I multiply by 10K, and it suddenly changes into a different number (you could call it a rounding error, but that depends on how much accuracy you need). I then multiply that number by 10, and it changes back/again.
That raises a few questions for me:
How inaccurate is JavaScript? Has this been determined, i.e. as a number that can be taken into account?
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
I'm not sure whether this should be a separate question, but I was actually trying to round numbers to a certain amount after the decimal point. I've researched it a bit, and have found two methods:
> Method A
function roundNumber(number, digits) {
    var multiple = Math.pow(10, digits);
    return Math.floor(number * multiple) / multiple;
}
> Method B
function roundNumber(number, digits) {
    return Number(number.toFixed(digits));
}
Intuitively I like method B more (it looks more efficient), but I don't know what's going on behind the scenes, so I can't really judge. Anyone have an idea on that? Or a way to benchmark this? And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
How inaccurate is JavaScript?
Javascript uses standard double precision floating point numbers, so the precision limitations are the same as for any other language that uses them, which is most languages. It's the native format used by the processor to handle floating point numbers.
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
No. The precision limitation lies in the way the number is stored. Floating point numbers don't have complete accuracy, so no matter how you do the calculations, you can't achieve absolute accuracy, as the result goes back into a floating point number.
If you want complete accuracy then you need to use a different data type.
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
It's changing again.
When a number is converted to text to be displayed, it's rounded to a certain number of digits. The numbers that look like they are exact aren't; it's just that the limitations in precision don't show up.
When the number "changes back", it's just because the rounding again hides the limitations in the precision. Each calculation adds or subtracts a small inaccuracy, and sometimes it just happens to take the number closer to the number that you had originally. Even though it looks like it's more accurate, it's actually less accurate, as each calculation adds a bit of uncertainty.
Internally, JavaScript uses 64-bit IEEE 754 floating-point numbers, which are a widely used standard and usually guarantee about 16 digits of accuracy. The error you witnessed was on the 17th significant digit of the number and was really tiny.
Is there a way to [...] do math in Javascript with complete accuracy (within the limitations of its datatype).
I would say that JavaScript's math is completely accurate within the limitations of its datatype. The error you witnessed was outside of those limitations.
Are you working with calculations that require a higher degree of precision than that?
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
The number never really became more or less accurate than the original value. It was only when the value was converted into a decimal value that a rounding error became apparent. But this was not a case of the value "changing back" to an accurate number. The rounding error was just too small to display.
And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
"Why is the language this way" questions are not considered very productive here, but it is easy to get around this limitation (assuming you mean numbers and not integers). This answer has 337 upvotes: +numb.toFixed(digits);, but note that if you try to display a number produced with that expression, there's no guarantee that it will actually display with only six digits. That's probably one of the reasons why JavaScript's "round to N places" function produces a string and not a number.
I came across the same issue a few times, and with further research I was able to solve these little issues by using the library below.
Math.js Library
Sample
import {
  atan2, chain, derivative, e, evaluate, log, pi, pow, round, sqrt
} from 'mathjs'
// functions and constants
round(e, 3) // 2.718
atan2(3, -3) / pi // 0.75
log(10000, 10) // 4
sqrt(-4) // 2i
pow([[-1, 2], [3, 1]], 2) // [[7, 0], [0, 7]]
derivative('x^2 + x', 'x') // 2 * x + 1
// expressions
evaluate('12 / (2.3 + 0.7)') // 4
evaluate('12.7 cm to inch') // 5 inch
evaluate('sin(45 deg) ^ 2') // 0.5
evaluate('9 / 3 + 2i') // 3 + 2i
evaluate('det([-1, 2; 3, 1])') // -7
// chaining
chain(3)
  .add(4)
  .multiply(2)
  .done() // 14
