Why do these binary representations yield the same number? - javascript

According to the documentation, one can convert a binary representation of a number to the number itself using parseInt(string, base).
For instance,
var number = parseInt("10100", 2);
// number is 20
However, take a look at the next example:
var number = parseInt("1000110011000011101000010100000110011110010111111100011010000", 2);
// number is 1267891011121314000
var number = parseInt("1000110011000011101000010100000110011110010111111100100000000", 2);
// number is 1267891011121314000
How is this possible?
Note that the binary numbers are almost the same, except for the last 9 bits.

1267891011121314000 is way over Number.MAX_SAFE_INTEGER (9007199254740991), so JavaScript cannot represent it exactly in memory.
Take a look at this classic example:
1267891011121314000 == 1267891011121314001 // true

Because that's a 61-bit number, and the highest-precision numeric type JavaScript will store is a 64-bit floating-point value (an IEEE-754 double).
A 64-bit float does not have 61 bits of integral precision, since the 64 bits are split between the fraction and the exponent. As the IEEE-754 bit layout shows, only 52 bits are available for the mantissa (or fraction), which is what holds the integer part.
Many bignum libraries exist to solve this sort of problem, as it's very common in scientific applications. They tend not to be as fast as hardware-supported math, but allow much more precision. Judging by GitHub results, bignumber.js may be what you need to support these values.
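As a quick illustration (assuming an engine with built-in BigInt support, ES2020+), parsing the two bit strings from the question with arbitrary precision shows that they really are distinct values which merely round to the same double:

const a = BigInt("0b1000110011000011101000010100000110011110010111111100011010000");
const b = BigInt("0b1000110011000011101000010100000110011110010111111100100000000");
console.log(a === b);                 // false - the exact integers differ
console.log(Number(a) === Number(b)); // true  - both round to the same 64-bit double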

There are upper and lower limits on numbers in JavaScript. JavaScript represents numbers as 64-bit floating point (IEEE 754).
Look into a BigDecimal implementation in JavaScript if you need support for larger numbers.

Related

Will I possibly lose any decimal digits (precision) when multiplying Number.MAX_SAFE_INTEGER by Math.random()?

Will I possibly lose any decimal digits (precision) when multiplying Number.MAX_SAFE_INTEGER by Math.random() in JavaScript?
I presume I won't but it'd be nice to have a credible explanation as to why šŸ˜Ž
Edited: in layman's terms, we're dealing with two IEEE 754 double-precision floating-point numbers; one is the maximal integer (for double precision), the other is fractional with quite a few digits after the decimal point. What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double precision, would the result be any different?
const max = Number.MAX_SAFE_INTEGER;
const random = Math.random();
console.log(`\
MAX_SAFE_INTEGER: ${max}, \
random: ${random}, \
product: ${max * random}`);
For more elaborate examples, I use it to generate BigInt random numbers.
Your implementation should be safe: in theory, every number between 0 and MAX_SAFE_INTEGER should have a chance of appearing, provided the engine's Math.random uses a completely unbiased algorithm.
But an absolutely unbiased algorithm is not guaranteed by the specification - the numbers chosen are meant to be pseudo-random, not truly, completely random (does such a thing even exist? it's debatable...). Modern versions of V8 and some other implementations use an algorithm with a period on the order of 2 ** 128, larger than MAX_SAFE_INTEGER (2 ** 53 - 1), but it would be completely plausible for other implementations (especially older ones) to have a much smaller period, resulting in certain integers within the range being picked much more often than others.
If this is important for your script (which is pretty unlikely in most situations, I'd think), you might consider using a higher-quality random generator than Math.random, but it's almost certainly not worth worrying about.
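If you do decide you need something stronger, here is a minimal sketch (assuming an environment that exposes crypto.getRandomValues, i.e. browsers or recent Node; the helper name randomSafeInteger is made up for illustration) that builds a uniform 53-bit integer from CSPRNG output:

// Sketch: uniform integer in [0, 2**53) built from two 32-bit CSPRNG words.
function randomSafeInteger() {
  const words = new Uint32Array(2);
  crypto.getRandomValues(words);  // assumes a global Web Crypto object
  const hi = words[0] & 0x1fffff; // keep 21 bits
  const lo = words[1];            // full 32 bits
  return hi * 2 ** 32 + lo;       // 21 + 32 = 53 bits, always a safe integer
}

console.log(randomSafeInteger());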
What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double-precision, would the result be any different?
It could be in cases where the rounding behaves differently between multiplying two doubles vs converting quadruple to double, but the main problem remains the same. The spacing between representable doubles in the range from 2^n to 2^(n+1) is 2^(n-52). So between 2^52 and 2^53 only whole numbers can be represented, between 2^51 and 2^52 only every 0.5 can be represented, etc.
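A couple of console checks make that spacing visible (purely illustrative):

console.log(2 ** 52 + 0.5 === 2 ** 52);  // true  - the gap here is 1, so 0.5 is rounded away (ties to even)
console.log(2 ** 51 + 0.5 === 2 ** 51);  // false - the gap here is 0.5, so 0.5 survives
console.log(2 ** 51 + 0.25 === 2 ** 51); // true  - 0.25 is below the gap and is rounded away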
If you want more precision you could try decimal.js. The library is included on that documentation page so you can try these out in your console.
Number.MAX_SAFE_INTEGER*.9
8106479329266892
new Decimal(Number.MAX_SAFE_INTEGER).mul(new Decimal(0.9)).toString()
"8106479329266891.9"
Both answers are correct, but I couldn't help running this little experiment in C#, where double is the same thing as Number in JavaScript (fiddle):
using System;

public class Program
{
    public static void Main()
    {
        const double MAX_SAFE_INT = 9007199254740991;
        Decimal maxD = Convert.ToDecimal(MAX_SAFE_INT.ToString());
        var rng = new Random(Environment.TickCount);
        for (var i = 0; i < 1000; i++)
        {
            double random = rng.NextDouble();
            double product = MAX_SAFE_INT * random;
            // converting via string to work around the "15 significant digits" limitation for Decimal(Double)
            Decimal randomD = Decimal.Parse(String.Format("{0:F18}", random));
            Decimal productD = maxD * randomD;
            double converted = Convert.ToDouble(productD);
            if (Math.Floor(converted) != Math.Floor(product))
            {
                Console.WriteLine($"{maxD}, {randomD, 22}, products: decimal {productD, 32}, converted {converted, 20}, original {product, 20}");
            }
        }
    }
}
As far as I'm concerned, I'm still getting the desired distribution of the random numbers within the 0 - 9007199254740991 range.
Here is some JavaScript playground code to check for possible recurrences.
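A rough sketch of such a check (an assumed equivalent, not the linked playground code): sample a batch of scaled values and count duplicates with a Set.

const seen = new Set();
let collisions = 0;
for (let i = 0; i < 1_000_000; i++) {
  const n = Math.floor(Number.MAX_SAFE_INTEGER * Math.random());
  if (seen.has(n)) collisions++;
  seen.add(n);
}
console.log(`collisions: ${collisions}`); // expected to be 0 over a 2**53-sized range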

javascript/jQuery VAR maximum length

I tried to Google it, but all the keywords led to functions or solutions working with the content of a variable.
My question is simple.
If variable represents a number,
var a = 1;
what is its maximum bit length? I mean, what is the highest number it can contain before overflow happens?
Is it int32? Is it int64? Is it a different length?
Thanks in advance
As the spec says, numbers in JavaScript are IEEE-754 double-precision floating point:
They're 64 bits in size.
Their range is -1.7976931348623157e+308 through 1.7976931348623157e+308 (the latter is available as Number.MAX_VALUE), which is to say the positive and negative versions of (2 - 2^-52) Ɨ 2^1023, but they can't perfectly represent all of the values in that range. Famously, 0.1 + 0.2 comes out as 0.30000000000000004; see Is floating-point math broken?
The max "safe" integer value (whole number value that won't be imprecise) is 9,007,199,254,740,991, which is available as Number.MAX_SAFE_INTEGER on ES2015-compliant JavaScript engines.
Similarly, MIN_SAFE_INTEGER is -9,007,199,254,740,991
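These limits are easy to confirm in a console, for example:

console.log(Number.MAX_VALUE);        // 1.7976931348623157e+308
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2**53 - 1)
console.log(Number.MIN_SAFE_INTEGER); // -9007199254740991
console.log(0.1 + 0.2);               // 0.30000000000000004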
Numbers in JavaScript are IEEE 754 double-precision floating-point values, which have a 53-bit mantissa. See MDN:
Integer range for Number
The following example shows minimum and maximum integer values that can be represented as a Number object (for details, refer to the ECMAScript standard, chapter 8.5, The Number Type):
var biggestInt = 9007199254740992;
var smallestInt = -9007199254740992;
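Note that 2^53 itself (9007199254740992) is representable but already ambiguous, which is why the "safe" limits stop one short of it; a quick check:

console.log(9007199254740992 === 9007199254740993);  // true - 2**53 and 2**53 + 1 collapse to the same double
console.log(Number.isSafeInteger(9007199254740992)); // false
console.log(Number.isSafeInteger(9007199254740991)); // true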

Math.pow() not giving precise answer

I've used Math.pow() to calculate the exponential value in my project.
Now, for specific values like Math.pow(3,40), it returns 12157665459056929000.
But when I tried the same calculation on a scientific calculator, I got 12157665459056928801.
Then I tried computing it in a loop up to the exponent:
function calculateExpo(base, power) {
  // gameObj is defined elsewhere in the asker's code
  base = parseInt(base);
  power = parseInt(power);
  var output = 1;
  gameObj.OutPutString = ''; // base + '^' + power + ' = ';
  for (var i = 0; i < power; i++) {
    output *= base;
    gameObj.OutPutString += base + ' x ';
  }
  // remove the trailing ' x '
  gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
  gameObj.OutPutString += ' = ' + output;
  return output;
}
This also returns 12157665459056929000.
Is there any restriction on the Int type in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the binary point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, 64 in a discrete sense), so it cannot be computed accurately as Math.pow(3, 40) in JavaScript. Only numbers with at most 53 significant figures in their base-two expansion (52 stored bits plus the implicit leading bit, with a similar restriction that their order of magnitude fit within the 11-bit exponent) have a chance of being represented accurately by a double-precision floating point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
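If your engine supports BigInt (ES2020+), you can confirm both the exact value and the bit count directly; this is just an illustration:

const exact = 3n ** 40n;
console.log(exact.toString());         // 12157665459056928801
console.log(exact.toString(2).length); // 64 - more bits than the 53-bit significand of a double can hold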
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 because it uses IEEE floating point for computation. You get 12157665459056928801 because it uses arbitrary precision (bignum) for computation.
JavaScript can only represent distinct integers up to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE-754 base-2 doubles.
As a consequence, the result from Math.pow (even if it were accurate internally) is brutally "rounded" to the nearest value a JavaScript number can actually hold, and the resulting number is thus not the correct value, but the closest approximation of it JavaScript can handle.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff so it can be seen how this affects the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From the The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
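Alternatively, on engines with built-in BigInt support no library is needed for exact integer powers; a minimal sketch:

const exactPow = 3n ** 40n;
console.log(exactPow.toString()); // "12157665459056928801"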
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.

Floating point representations seem to do integer arithmetic correctly - why?

I've been playing around with floating point numbers a little bit, and based on what I've learned about them in the past, the fact that 0.1 + 0.2 ends up being something like 0.30000000000000004 doesn't surprise me.
What does surprise me, however, is that integer arithmetic always seems to work just fine and not have any of these artifacts.
I first noticed this in JavaScript (Chrome V8 in node.js):
0.1 + 0.2 == 0.3 // false, NOT surprising
123456789012 + 18 == 123456789030 // true
22334455667788 + 998877665544 == 23333333333332 // true
1048576 / 1024 == 1024 // true
C++ (gcc on Mac OS X) seems to have the same properties.
The net result seems to be that integer numbers just, for lack of a better word, work. It's only when I start using decimal numbers that things get wonky.
Is this a feature of the design, a mathematical artifact, or some optimisation done by compilers and runtime environments?
Is this a feature of the design, a mathematical artifact, or some optimisation done by compilers and runtime environments?
It's a feature of the real numbers. A theorem from modern algebra (modern algebra, not high school algebra; math majors take a class in modern algebra after their basic calculus and linear algebra classes) says that for some positive integer b, any positive real number r can be expressed as r = a * b^p, where a is in [1, b) and p is some integer. For example, 1024 = 1.024 * 10^3 in base 10. It is this theorem that justifies our use of scientific notation.
That number a can be classified as terminating (e.g. 1.0), repeating (1/3 = 0.333...), or non-repeating (the representation of pi). There's a minor issue here with terminating numbers. Any terminating number can also be represented as a repeating number. For example, 0.999... and 1 are the same number. This ambiguity in representation can be resolved by specifying that numbers that can be represented as terminating are represented as such.
What you have discovered is a consequence of the fact that all integers have a terminating representation in any base.
There is an issue here with how the reals are represented in a computer. Just as int and long long int don't represent all of the integers, float and double don't represent all of the reals. The scheme used on most computers to represent a real number r is to represent it in the form r = a*2^p, but with the mantissa (or significand) a truncated to a certain number of bits and the exponent p limited to some finite range. What this means is that some integers cannot be represented exactly. For example, even though a googol (10^100) is an integer, its floating point representation is not exact. The base 2 representation of a googol is a 333 bit number, and this 333 bit mantissa is truncated to 52+1 bits.
One consequence of this is that double precision arithmetic is no longer exact, even for integers, if the integers in question are greater than 2^53. Try your experiment using the type unsigned long long int on values between 2^53 and 2^64. You'll find that double precision arithmetic is no longer exact for these large integers.
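The same experiment can be run in JavaScript itself by comparing plain number arithmetic with BigInt above 2^53 (illustrative only):

const d = 2 ** 60;         // a double; exact because it's a power of two
const b = 2n ** 60n;       // a BigInt
console.log(d + 1 === d);  // true  - the +1 falls below the gap between doubles and is lost
console.log(b + 1n === b); // false - BigInt keeps every bit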
I'm writing that under assumption that Javascript uses double-precision floating-point representation for all numbers.
Some numbers have an exact representation in the floating-point format, in particular, all integers such that |x| < 2^53. Some numbers don't, in particular, fractions such as 0.1 or 0.2 which become infinite fractions in binary representation.
If all operands and the result of an operation have an exact representation, then it would be safe to compare the result using ==.
Related questions:
What number in binary can only be represented as an approximation?
Why can't decimal numbers be represented exactly in binary?
Integers within the representable range are exactly representable by the machine; floats are not (well, most of them aren't).
If by "basic integer math" you understand "feature", then yes, you can assume correctly implementing arithmetic is a feature.
The reason is that you can represent every whole number (1, 2, 3, ...) exactly in binary format (0001, 0010, 0011, ...).
That is why integer arithmetic is always correct: 0011 - 0001 is always 0010.
The problem with floating point numbers is that the part after the point cannot always be converted exactly to binary.
All of the cases that you say "work" are ones where the numbers you have given can be represented exactly in the floating point format. You'll find that adding 0.25 and 0.5 and 0.125 works exactly too because they can also be represented exactly in a binary floating point number.
It's only values that can't be, such as 0.1, where you'll get what appear to be inexact results.
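For example, in a console (illustrative):

console.log(0.25 + 0.5 + 0.125 === 0.875); // true  - every term is a sum of powers of two
console.log(0.1 + 0.2 === 0.3);            // false - 0.1 and 0.2 have no finite binary expansion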
Integers are exact because the imprecision results mainly from the way we write decimal fractions, and secondarily because many rational numbers simply don't have non-repeating representations in any given base.
See: https://stackoverflow.com/a/9650037/140740 for the full explanation.
That method only works when you are adding a small enough integer to a very large integer -- and even in that case you are not representing both of the integers in the 'floating point' format.
Not all floating point numbers can be represented exactly; it's due to the way they are encoded. The wiki page explains it better than I can: http://en.wikipedia.org/wiki/IEEE_754-1985.
So when you are trying to compare floating point numbers, you should use a delta:
Math.abs(myFloat - expectedFloat) < delta
You can use the machine epsilon (Number.EPSILON) as a starting point for delta, scaled to the magnitude of your values.
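A minimal sketch of such a comparison, assuming a tolerance based on Number.EPSILON suits the magnitude of your values (the helper name approximatelyEqual is made up for illustration):

function approximatelyEqual(a, b, delta = Number.EPSILON) {
  return Math.abs(a - b) < delta;
}

console.log(0.1 + 0.2 === 0.3);                  // false
console.log(approximatelyEqual(0.1 + 0.2, 0.3)); // true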

Converting a string to base36: inconsistencies between languages

I have noticed some inconsistencies between Python and JavaScript when converting a string to base36.
Python Method:
>>> print int('abcdefghijr', 36)
Result: 37713647386641447
Javascript Method:
<script>
document.write(parseInt("abcdefghijr", 36));
</script>
Result: 37713647386641450
What causes the different results between the two languages? What would be the best approach to produce the same results regardless of the language?
Thank you.
That number takes 56 bits to represent. JavaScript's numbers are actually double-precision binary floating point numbers, or double for short. These are 64 bits in total and can represent a far wider range of values than a 64-bit integer, but due to how they achieve that (they represent a number as mantissa * 2^exponent), they cannot represent all numbers in that range, just the ones that are a multiple of 2^exponent where the multiple fits into the mantissa (which includes 2^0 = 1, so you get all integers the mantissa can handle directly). The mantissa is 53 bits, which is insufficient for this number. So it gets rounded to a number which can be represented.
What you can do is use an arbitrary precision number type defined by a third party library like gwt-math or Big.js. These numbers aren't hard to implement if you know your school arithmetic. Doing it efficiently is another matter, but also an area of extensive research. And not your problem if you use an existing library.
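One way to get identical results on the JavaScript side is to do the base-36 parse with arbitrary precision yourself; here is a hedged sketch using the built-in BigInt (the helper name parseBase36BigInt is made up, and it handles only unsigned alphanumeric input):

// Sketch: exact base-36 parse into a BigInt.
function parseBase36BigInt(str) {
  const digits = "0123456789abcdefghijklmnopqrstuvwxyz";
  let result = 0n;
  for (const ch of str.toLowerCase()) {
    const value = digits.indexOf(ch);
    if (value === -1) throw new Error(`invalid base-36 digit: ${ch}`);
    result = result * 36n + BigInt(value);
  }
  return result;
}

console.log(parseBase36BigInt("abcdefghijr").toString()); // "37713647386641447", matching Python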
