JavaScript Math.min returns wrong value

I'm using John Resig's function to find the min value in an array, but it somehow returns a floored value.
Here's a demo and here's the code.
var arr = Math.min.apply(Math, [310127563311820800, 310127563190202368, 310127563110502401, 310127562443595776, 310127562326163457, 310127561751556097]);
document.write(arr);
Can you explain what happens and why it's returning the wrong (floored) value?

The problem isn't in finding the min but in handling those numbers at all. You can't represent them exactly as JavaScript numbers: integers are only kept exactly between -9007199254740992 and +9007199254740992.
That's because all numbers in JavaScript are double-precision IEEE 754 floats, and the mantissa is 53 bits wide.
See the ECMAScript specification on the Number type for more details:
Note that all the positive and negative integers whose magnitude is no greater than 2⁵³ are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).
To deal with those numbers (and find their min), you need a representation other than the native JavaScript number. Fortunately, there are many libraries for dealing with big numbers, for example bignum (but you should google around and pick the one you like).
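If your environment supports BigInt (added to JavaScript after this question was asked), a minimal sketch of the same min-finding without any library might look like this; note the values have to be written as BigInt literals or strings, because plain number literals would already have been rounded before BigInt could see them:

// Values as BigInt literals (the n suffix); plain number literals would
// already have lost precision at this magnitude.
const values = [
  310127563311820800n, 310127563190202368n, 310127563110502401n,
  310127562443595776n, 310127562326163457n, 310127561751556097n
];

// Math.min does not accept BigInt values, so reduce with < instead.
const min = values.reduce((a, b) => (b < a ? b : a));
console.log(min.toString()); // "310127561751556097"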

You will lose precision in JavaScript unless you keep the input as strings of digits.
If they are all positive integers, you can sort them and return the lowest-indexed element, remembering that a string with more digits must represent a larger number than one with fewer:
var input = ['310127563311820800', '310127563190202368', '310127563110502401',
             '310127562443595776', '310127562326163457', '310127561751556097'];
var minim = input.slice(0).sort(function (a, b) {
    if (a === b) return 0;
    if (a.length != b.length) return a.length - b.length;
    return a > b ? 1 : -1;
})[0];
alert(minim);
/* returned value: (String)
310127561751556097
*/
You can skip the slice(0) bit if you don't need to keep the original in its order.


Will I possibly lose any decimal digits (precision) when multiplying Number.MAX_SAFE_INTEGER by Math.random()?

Will I possibly lose any decimal digits (precision) when multiplying Number.MAX_SAFE_INTEGER by Math.random() in JavaScript?
I presume I won't, but it'd be nice to have a credible explanation as to why 😎
Edited: In layman's terms, we're dealing with two IEEE 754 double-precision floating-point numbers; one is the maximal integer (for double precision), the other one is fractional with quite a few digits after the decimal point. What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double-precision, would the result be any different?
const max = Number.MAX_SAFE_INTEGER;
const random = Math.random();
console.log(`\
MAX_SAFE_INTEGER: ${max}, \
random: ${random}, \
product: ${max * random}`);
For a more elaborate example, I use this to generate random BigInt numbers.
Your implementation should be safe - in theory, all numbers between 0 and MAX_SAFE_INTEGER should have a possibility of appearing, if the engine implementing Math.random uses a completely unbiased algorithm.
But an absolutely unbiased algorithm is not guaranteed by the specification - the numbers chosen are meant to be pseudo-random, not truly, completely random (does such a thing even exist? it's debatable...). Modern versions of V8 and some other implementations use an algorithm with a period on the order of 2 ** 128, larger than MAX_SAFE_INTEGER (2 ** 53 - 1) - but it'd be completely plausible for other implementations (especially older ones) to have a much smaller period, resulting in certain integers within the range being picked much more often than others.
If this is important for your script (which is pretty unlikely in most situations, I'd think), you might consider using a higher-quality random generator than Math.random - but it's almost certainly not worth worrying about.
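Purely as an illustration (not something the answer prescribes), a higher-quality generator could be sketched like this in browsers and recent Node versions using crypto.getRandomValues; the helper name randomSafeInteger is mine:

// Hypothetical helper: a random integer in [0, Number.MAX_SAFE_INTEGER],
// built from 53 cryptographically strong random bits.
function randomSafeInteger() {
  const buf = new Uint32Array(2);
  crypto.getRandomValues(buf);   // two 32-bit words = 64 random bits
  const hi = buf[0] & 0x1fffff;  // keep 21 bits of the first word
  const lo = buf[1];             // all 32 bits of the second word
  return hi * 2 ** 32 + lo;      // 21 + 32 = 53 bits, at most 2 ** 53 - 1
}

console.log(randomSafeInteger());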
What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double-precision, would the result be any different?
It could be, in cases where the rounding behaves differently between multiplying two doubles vs. converting the quadruple-precision product back to double, but the main problem remains the same. The spacing between representable doubles in the range from 2ⁿ to 2ⁿ⁺¹ is 2ⁿ⁻⁵². So between 2⁵² and 2⁵³ only whole numbers can be represented, between 2⁵¹ and 2⁵² only multiples of 0.5 can be represented, etc.
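You can see that spacing directly in a console; this small check assumes nothing beyond standard IEEE 754 doubles:

// Between 2^52 and 2^53 the spacing is 1, so an added 0.5 is rounded away.
console.log(2 ** 52 + 0.5 === 2 ** 52);   // true
// Between 2^51 and 2^52 the spacing is 0.5, so 0.5 survives but 0.25 does not.
console.log(2 ** 51 + 0.5 === 2 ** 51);   // false
console.log(2 ** 51 + 0.25 === 2 ** 51);  // true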
If you want more precision you could try decimal.js. The library is included on that documentation page so you can try these out in your console.
Number.MAX_SAFE_INTEGER*.9
8106479329266892
new Decimal(Number.MAX_SAFE_INTEGER).mul(new Decimal(0.9)).toString()
"8106479329266891.9"
Both answers are correct, but I couldn't help running this little experiment in C#, where double is the same thing as Number in JavaScript (fiddle):
using System;

public class Program
{
    public static void Main()
    {
        const double MAX_SAFE_INT = 9007199254740991;
        Decimal maxD = Convert.ToDecimal(MAX_SAFE_INT.ToString());
        var rng = new Random(Environment.TickCount);
        for (var i = 0; i < 1000; i++)
        {
            double random = rng.NextDouble();
            double product = MAX_SAFE_INT * random;
            // converting via string to work around the "15 significant digits" limitation of Decimal(Double)
            Decimal randomD = Decimal.Parse(String.Format("{0:F18}", random));
            Decimal productD = maxD * randomD;
            double converted = Convert.ToDouble(productD);
            if (Math.Floor(converted) != Math.Floor(product))
            {
                Console.WriteLine($"{maxD}, {randomD, 22}, products: decimal {productD, 32}, converted {converted, 20}, original {product, 20}");
            }
        }
    }
}
As far as I'm concerned, I'm still getting the desired distribution of the random numbers within the 0 - 9007199254740991 range.
Here is a JavaScript playground code to check for possible recurrences.

Math.min and Math.max Not Working for Large (Floating-Point) Numbers

This relates to a problem with how JavaScript handles large (floating-point) numbers.
What is JavaScript's highest integer value that a number can go to without losing precision? refers to the highest possible number. I was after a way to bypass that for getting the min and max in the example below.
var lowest = Math.min(131472982990263674, 131472982995395415);
console.log(lowest);
Will return:
131472982990263680
To get the min and max values, would I need to write a custom function, or is there a way I can get it working with the Math.min and Math.max functions?
The closest solution I've found was this, but I couldn't manage to get it working since I can't use the BigInt function - it isn't exposed in my environment.
Large numbers erroneously rounded in JavaScript
You can try to convert the numbers to BigInt:
const n1 = BigInt("131472982990263674"); // or const n1 = 131472982990263674n;
const n2 = BigInt("131472982995395415"); // or const n1 = 131472982995395415n;
Then find the min and max using this post:
const n1 = BigInt("131472982990263674");
const n2 = BigInt("131472982995395415");
function BigIntMinAndMax (...args){
return args.reduce(([min,max], e) => {
return [
e < min ? e : min,
e > max ? e : max,
];
}, [args[0], args[0]]);
};
const [min, max] = BigIntMinAndMax(n1, n2);
console.log(`Min: ${min}`);
console.log(`Max: ${max}`);
Math.min and Math.max are working just fine.
When you write 131472982990263674 in a JavaScript program, it is rounded to the nearest IEEE 754 binary64 floating-point number, which is 131472982990263680.
Written in hexadecimal or binary, you can see that 131472982990263674 = 0x1d315fb40b2157a = 0b111010011000101011111101101000000101100100001010101111010 takes 56 bits of precision to represent.
If you round that to the nearest number with only 53 bits of precision, what you get is 0b111010011000101011111101101000000101100100001010110000000 = 0x1d315fb40b21580 = 131472982990263680.
Similarly, when you write 131472982995395415 in JavaScript, what you get back is 131472982995395410.
So when you write the code Math.min(131472982990263674, 131472982995395415), you pass the numbers 131472982990263680 and 131472982995395410 into the Math.min function.
Given that, it should come as no surprise that Math.min returns 131472982990263680.
> 131472982990263674
131472982990263680
> 131472982995395415
131472982995395410
> Math.min(131472982990263674, 131472982995395415)
131472982990263680
It's not clear what your original goal is.
Are you given two JavaScript numbers to begin with, and are you trying to find the min or max?
If so, Math.min and Math.max are the right thing.
Are you given two strings, and are you trying to order them by the numbers they represent?
If so, it depends on the notation you want to support.
If you only want to support decimal notation for integers (with no scientific notation, like 123e4), then you can chop leading zeros, compare lengths first (more digits means a larger integer), and compare equal-length strings lexicographically with < or > in JavaScript.
> function strmin(x, y) { return x < y ? x : y }
> strmin("131472982990263674", "131472982995395415")
'131472982990263674'
If you want to support arbitrary-precision decimal notation (including non-integers and perhaps scientific notation), and you want to maintain distinctions between, for instance, 1.00000000000000001 and 1.00000000000000002, then you probably want a general arbitrary-precision decimal arithmetic library.
Are you trying to do arithmetic with integers in a range that might exceed 2⁵³, and need the computation to be exact, requiring >53 bits of precision?
If so, you may need some kind of wider-precision or arbitrary-precision arithmetic than JavaScript numbers alone provide, like BigInt, recently added to JavaScript.
If you only need a little more than 53 bits of precision, as is often the case inside numerical algorithms for transcendental elementary functions, there's also T.J. Dekker's algorithm for extending (say) binary64 arithmetic into double-binary64 or “double-double” arithmetic: a double-binary64 number is the sum 𝑥 + 𝑦 of two binary64 floating-point numbers 𝑥 and 𝑦, where typically 𝑥 holds the higher-order bits and 𝑦 holds the lower-order bits so together they can store 106 bits of precision.
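As a rough illustration of the building block such double-double arithmetic rests on, here is a sketch of the error-free addition step (often called TwoSum); the function name is mine, and a real double-double library needs considerably more than this:

// Knuth's TwoSum: returns [s, e] such that a + b == s + e exactly,
// where s is the rounded double sum and e is the rounding error.
function twoSum(a, b) {
  const s = a + b;
  const bv = s - a;
  const e = (a - (s - bv)) + (b - bv);
  return [s, e];
}

const [s, e] = twoSum(2 ** 53, 1);  // 2^53 + 1 is not representable as one double
console.log(s, e);                  // 9007199254740992 1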

How to get Power of big number in decimal?

How do I get a big power of 2 in decimal, or
how do I convert a big exponential value into a decimal value?
I want 2 to the power of 128 in decimal, not exponential notation.
What I did till now:
toFixed(+exponent)
which again gave me the same value.
var num = Math.pow(2, 128);
Actual result: 3.402823669209385e+38
Expected: some decimal value, not an exponential value.
You could use BigInt, if implemented.
var num = BigInt(2) ** BigInt(128);
console.log(num.toString());
console.log(BigInt(2 ** 128).toString());
3.402823669209385e+38 is a decimal number (in string form, because it's been output as a string). It's in scientific notation, specifically E-notation. It's the number 3.402823669209385 times 100000000000000000000000000000000000000.
If you want a string that isn't in scientific notation, you can use Intl.NumberFormat for that:
console.log(new Intl.NumberFormat().format(Math.pow(2, 128)));
Note: Although that number is well outside the range that JavaScript's number type can represent with precision in general (any integer above Number.MAX_SAFE_INTEGER [9,007,199,254,740,991] may be the result of rounding), it's one of the values that is held precisely, even at that magnitude, because it's a power of 2. But operations on it that would have a true mathematical result that wasn't a power of 2 would almost certainly get rounded.
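A quick sketch of that caveat, assuming a BigInt-capable engine for the exact version:

// The power of two itself is exact as a double, but adding 1 falls far below
// the spacing (2^76) at that magnitude, so the sum rounds straight back.
console.log(2 ** 128 === 2 ** 128 + 1);    // true
// With BigInt the same sum stays exact.
console.log((2n ** 128n + 1n).toString()); // "340282366920938463463374607431768211457"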
I think the default power function won't be able to give you the results you want.
You can refer to the article below to understand how to create a power function for big numbers yourself.
The demo code is not JS but is still quite understandable.
Writing power function for large numbers
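For reference, here is a rough JavaScript sketch of the digit-array approach such articles typically describe (my own adaptation, not the linked article's code): keep the number as an array of decimal digits and apply schoolbook multiplication.

// Multiply a big number held as an array of decimal digits (most significant
// digit first) by a small ordinary number, carrying as in schoolbook arithmetic.
function multiplyDigits(digits, factor) {
  const result = [];
  let carry = 0;
  for (let i = digits.length - 1; i >= 0; i--) {
    const prod = digits[i] * factor + carry;
    result.unshift(prod % 10);
    carry = Math.floor(prod / 10);
  }
  while (carry > 0) {
    result.unshift(carry % 10);
    carry = Math.floor(carry / 10);
  }
  return result;
}

// 2^128 by repeated doubling of the digit array.
let digits = [1];
for (let i = 0; i < 128; i++) digits = multiplyDigits(digits, 2);
console.log(digits.join('')); // "340282366920938463463374607431768211456"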

Issue with combining large array of numbers into one single number

I am trying to convert an array of numbers into one single number, for example
[1,2,3] to 123.
However, my code can't handle big arrays since it can't return an exact number. For example,
[6,1,4,5,3,9,0,1,9,5,1,8,6,7,0,5,5,4,3] returns 6145390195186705000.
Is there any way that I could properly convert this into a single number? I would really appreciate any help.
var integer = 0;
var digits = [1,2,3,4];
// combine the array of digits into an int
digits.forEach((num, index, self) => {
    integer += num * Math.pow(10, self.length - index - 1);
});
The biggest integer value JavaScript can hold is +/- 9007199254740991. Note that the bitwise and shift operators operate on 32-bit ints, so in that case, the max safe integer is 2^31-1, or 2147483647.
In my opinion, you can choose one of the following:
store the numbers as strings and manipulate them as numbers; you might have to implement special functions to add/subtract/multiply/divide them (these are classic algorithmic problems - a rough sketch of string addition is included at the end of this answer)
use BigInt; BigInts are a new numeric primitive in JavaScript that can represent integers with arbitrary precision. With BigInts, you can safely store and operate on large integers even beyond the safe integer limit. Unfortunately, they work only in Chrome right now. If you want to work with other browsers, you might check this or even this if you work with angularjs or nodejs.
Try the following code in Chrome's console:
let x = BigInt([6,1,4,5,3,9,0,1,9,5,1,8,6,7,0,5,5,4,3].join(''));
console.log(x);
This will print 6145390195186705543n. The n suffix marks that it is a big integer.
Cheers!
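If you go with the first option above (keeping the numbers as strings), you end up writing the schoolbook operations yourself; here is a minimal sketch of string addition, assuming non-negative decimal inputs:

// Add two non-negative integers given as decimal strings, right to left
// with a carry, exactly as done by hand.
function addStrings(a, b) {
  let i = a.length - 1, j = b.length - 1, carry = 0, out = '';
  while (i >= 0 || j >= 0 || carry) {
    const sum = (i >= 0 ? +a[i--] : 0) + (j >= 0 ? +b[j--] : 0) + carry;
    out = (sum % 10) + out;
    carry = sum >= 10 ? 1 : 0;
  }
  return out;
}

console.log(addStrings('6145390195186705543', '1')); // "6145390195186705544"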
You can use the JavaScript Array join() method and parse the result into an integer.
Example:
parseInt([6,1,4,5,3,9,0,1,9,5,1,8,6,7,0,5].join(''))
results:
6145390195186705
Edited: Use BigInt instead of parseInt, but it works only in the Chrome browser.
The largest safe integer in JavaScript is
+/- 9007199254740991
Use BigInt. Join all the digits into a string and pass it to the BigInt global function to convert it into an integer:
var digits = [1,2,3,4];
// combine the array of digits into one string, then convert it to a BigInt
var integer = BigInt(digits.join(''));
console.log(integer); // 1234n
Note: Works only on Chrome as of now. You can use other libraries like BigInteger.js or MathJS.

Math.pow() not giving precise answer

I've used Math.pow() to calculate exponential values in my project.
Now, for specific values like Math.pow(3, 40), it returns 12157665459056929000.
But when I tried the same value using a scientific calculator, it returns 12157665459056928801.
Then I tried computing it in a loop up to the exponent:
function calculateExpo(base, power) {
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; // base + '^' + power + ' = ';
    for (var i = 0; i < power; i++) {
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on the int type in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so it cannot be accurately computed using Math.pow(3, 40) in JavaScript. Only numbers with at most 53 significant figures in their base-two expansion (the 52 stored fraction bits plus the implicit leading bit), and whose magnitude fits within the 11-bit exponent, can be represented exactly by a double-precision floating-point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
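If BigInt is available, you can verify both the exact value and that bit count directly:

// 3^40 computed exactly with BigInt, then rendered in base two.
const exact = 3n ** 40n;
console.log(exact.toString());          // "12157665459056928801"
console.log(exact.toString(2).length);  // 64 (from the leading 1 to the last digit)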
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
JavaScript (and Erlang's math:pow) gives 12157665459056929000 because it computes with IEEE floating point. Haskell (and Erlang's crypto:mod_exp) gives 12157665459056928801 because it computes with arbitrary-precision (bignum) integers.
JavaScript can only represent distinct integers up to 2⁵³ (or ~16 significant decimal digits). This is because all JavaScript numbers have an internal representation of IEEE 754 base-2 doubles.
As a consequence, the result from Math.pow (even if it was accurate internally) is brutally "rounded" such that the result is still a JavaScript integer (every representable Number at that magnitude is an integer) - and the resulting number is thus not the correct value, but the closest integer approximation of it that JavaScript can handle.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff so it can be see how this would affect the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2⁵³ are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40)            |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.
