javascript/jQuery var maximum length - javascript

I tried to Google it, but all the keywords led to functions or solutions for working with the content of a variable.
My question is simple.
If a variable represents a number,
var a = 1;
what is its maximum bit length? I mean, what is the highest number it can contain before an overflow happens?
Is it int32? Is it int64? Is it a different length?
Thanks in advance

As the spec says, numbers in JavaScript are IEEE-754 double-precision floating point:
They're 64 bits in size.
Their range is -1.7976931348623157e+308 through 1.7976931348623157e+308 (the latter is available via Number.MAX_VALUE), which is to say the positive and negative versions of (2 - 2^-52) × 2^1023, but they can't perfectly represent all of the values in between. Famously, 0.1 + 0.2 comes out as 0.30000000000000004; see Is floating-point math broken?
The max "safe" integer value (whole number value that won't be imprecise) is 9,007,199,254,740,991, which is available as Number.MAX_SAFE_INTEGER on ES2015-compliant JavaScript engines.
Similarly, Number.MIN_SAFE_INTEGER is -9,007,199,254,740,991.
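For illustration, these limits are easy to check in any modern engine's console (expected output in the comments):
console.log(Number.MAX_VALUE);        // 1.7976931348623157e+308
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1
console.log(Number.MIN_SAFE_INTEGER); // -9007199254740991
console.log(0.1 + 0.2);               // 0.30000000000000004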

Numbers in JavaScript are IEEE 754 double-precision floating point values, which have a 53-bit significand (52 stored bits plus one implied leading bit). See MDN:
Integer range for Number
The following example shows the minimum and maximum integer values that can be represented as a Number object (for details, refer to the ECMAScript standard, chapter 8.5, The Number Type):
var biggestInt = 9007199254740992;
var smallestInt = -9007199254740992;
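A quick console check shows why 2^53 is the edge (this snippet is illustrative; outputs are in the comments):
var biggestInt = 9007199254740992;                 // 2^53
console.log(biggestInt + 1 === biggestInt);        // true: 2^53 + 1 rounds back down
console.log(biggestInt + 2);                       // 9007199254740994 (representable values step by 2 here)
console.log(Number.isSafeInteger(biggestInt));     // false
console.log(Number.isSafeInteger(biggestInt - 1)); // true: 2^53 - 1 is the last safe integer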

Related

JS: how to increase a big number [duplicate]

This question already has answers here:
What is JavaScript's highest integer value that a number can go to without losing precision?
(21 answers)
Closed 4 months ago.
How can I increase this number (you can try it in the browser console):
36893488147419103000 + 1
The result of this is:
36893488147419103000
The number stays the same, with no change to it. Why is that? And how can I increase it by 1?
For big integers you should use the BigInt (Big Integer) type.
Note 1: you almost always cannot mix BigInt numbers with Numbers (e.g. for math operations) without first performing an explicit conversion.
Note 2: JSON does not currently natively support BigInt values. As a workaround you can use strings (e.g. '1n') for the values and then use a reviver function when calling JSON.parse.
JavaScript currently has only two numeric types: double-precision IEEE 754 floating point Numbers, and Big Integers, which can represent arbitrarily large integers. You can declare a BigInt number literal using the suffix n, e.g. 1n.
IEEE 754 Numbers can "only" accurately represent integers up to and including Number.MAX_SAFE_INTEGER, which has a value of 2^53 - 1 or 9,007,199,254,740,991 or ~9 quadrillion.
From MDN:
Double precision floating point format only has 52 bits to represent the mantissa, so it can only safely represent integers between -(2^53 - 1) and 2^53 - 1. "Safe" in this context refers to the ability to represent integers exactly and to compare them correctly. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
A "Decimal" number type, which will be able to represent arbitrarily precise decimal numbers, is under development.
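A minimal sketch of the BigInt approach, applied to the number from the question above (the n suffix marks BigInt literals):
const big = 36893488147419103000n + 1n;
console.log(big.toString()); // 36893488147419103001
// Mixing the types throws (1n + 1 is a TypeError); convert explicitly instead:
console.log(1n + BigInt(1)); // 2n
console.log(Number(1n) + 1); // 2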
Obviously the number is internally represented as a floating point number. When the value you want to add to this number is less than the value of its least significant bit, the addition will not change the value.
The only way would be to use numbers with a higher resolution, i.e. with a higher number of significant bits.
Double precision floating point format only has 52 bits to represent the mantissa, so it can only safely represent integers between -(2^53 - 1) and 2^53 - 1. See Number.MAX_SAFE_INTEGER. Larger numbers may not be representable exactly.

Advice on converting 8 byte (u64) unsigned integer into javascript

I have a u64 (unsigned 64-bit integer) stored in 8 bytes of memory. Clearly the range is the integers from 0 to 2^64 - 1.
I am converting it to a javascript number by turning each byte into hex and making a hex string:
let s = '0x'
s += buffer.slice(0,1).toString("hex")
s += buffer.slice(1,2).toString("hex")
...
n = parseInt(s)
Works great for everything I have done so far.
But when I look at how JavaScript stores numbers, I become unsure. JavaScript uses 8 bytes for numbers, but treats all numbers the same. This internal JavaScript "number" representation can also hold floating point numbers.
Can a javascript number store all integers from 0 to 2^64? seems not.
At what point do I get into trouble?
What do people do to get round this?
An unsigned 64-bit integer has a range of 0 to 18,446,744,073,709,551,615.
You could use the Number wrapper object with the .MAX_VALUE property; it represents the maximum numeric value representable in JavaScript.
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#.
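What people typically do today is skip the hex-string round trip and read the 8 bytes directly as a BigInt, which holds the full u64 range exactly. A sketch, assuming a Node.js Buffer as in the question:
// Example buffer holding 2^64 - 1, the largest u64
const buffer = Buffer.from([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff]);
// Node.js 12+ has a built-in big-endian reader:
console.log(buffer.readBigUInt64BE(0)); // 18446744073709551615n
// The portable equivalent using a DataView:
const view = new DataView(buffer.buffer, buffer.byteOffset, 8);
console.log(view.getBigUint64(0, false)); // 18446744073709551615n (false means big-endian)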
General Info:
Integers in JS:
JavaScript has only floating-point numbers. Integers appear internally in two ways. First, most JavaScript engines store a small enough number without a decimal fraction as an integer (with, for example, 31 bits) and maintain that representation as long as possible. They have to switch back to a floating point representation if a number’s magnitude grows too large or if a decimal fraction appears.
Second, the ECMAScript specification has integer operators: namely, all of the bitwise operators. Those operators convert their operands to 32-bit integers and return 32-bit integers. For the specification, integer only means that the numbers don’t have a decimal fraction, and 32-bit means that they are within a certain range. For engines, 32-bit integer means that an actual integer (non-floating-point) representation can usually be introduced or maintained.
Ranges of integers
Internally, the following ranges of integers are important in JavaScript:
Safe integers [1], the largest practically usable range of integers that JavaScript supports:
53 bits plus a sign, range (−2^53, 2^53), i.e. up to ±9,007,199,254,740,992 (exclusive)
Array indices [2]:
32 bits, unsigned
Maximum length: 2^32−1
Range of indices: [0, 2^32−1) (excluding the maximum length!)
Bitwise operands [3]:
Unsigned right shift operator (>>>): 32 bits, unsigned, range [0, 2^32)
All other bitwise operators: 32 bits, including a sign, range [−2^31, 2^31)
“Char codes”, UTF-16 code units as numbers:
Accepted by String.fromCharCode()
Returned by String.prototype.charCodeAt()
16 bits, unsigned
References:
[1] Safe integers in JavaScript
[2] Arrays in JavaScript
[3] Bitwise operators in JavaScript
Source: https://2ality.com/2014/02/javascript-integers.html
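A few console checks of the 32-bit ranges listed above (outputs in the comments):
console.log(2 ** 32 | 0);  // 0: bitwise operators coerce through ToInt32, so 2^32 wraps
console.log(2 ** 31 | 0);  // -2147483648: wraps to the signed 32-bit minimum
console.log(-1 >>> 0);     // 4294967295: >>> produces an unsigned 32-bit result
console.log(new Array(2 ** 32 - 1).length); // 4294967295, the maximum array length
// new Array(2 ** 32) throws "RangeError: Invalid array length"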

Why do these binary representations yield the same number?

According to the documentation, one can convert a binary representation of a number to the number itself using parseInt(string, base).
For instance,
var number = parseInt("10100", 2);
// number is 20
However, take a look at the next example:
var number = parseInt("1000110011000011101000010100000110011110010111111100011010000", 2);
// number is 1267891011121314000
var number = parseInt("1000110011000011101000010100000110011110010111111100100000000", 2);
// number is 1267891011121314000
How is this possible?
Note that the binary numbers are almost the same, except for the last 9 bits.
1267891011121314000 is way over Number.MAX_SAFE_INTEGER (9007199254740991). JavaScript can't represent it exactly in memory.
Take a look at this classic example:
1267891011121314000 == 1267891011121314001 // true
Because that's a 61-bit number, and the highest-precision type that JavaScript will store is a 64-bit floating point value (an IEEE-754 double).
A 64-bit float does not have 61 bits of integral precision, since the 64 bits are split between the fraction and the exponent. In the IEEE-754 bit layout, only 52 bits are available to the mantissa (or fraction), which is the part used when storing an integer.
Many bignum libraries exist to solve this sort of thing, as it's a very common problem in scientific applications. They tend not to be as fast as math with hardware support, but allow much more precision. Pulling from GitHub results, bignumber.js may be what you need to support these values.
There are upper and lower limits on numbers in JavaScript. JavaScript represents numbers as 64-bit floating point (IEEE 754).
Look into a BigDecimal implementation in JavaScript if you need larger support.
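If you need the exact values, BigInt accepts the same 0b-prefixed strings and keeps every bit. A sketch using the two strings from the question:
const a = BigInt("0b1000110011000011101000010100000110011110010111111100011010000");
const b = BigInt("0b1000110011000011101000010100000110011110010111111100100000000");
console.log(a === b); // false: the differing low bits survive
console.log(b - a);   // 48n, exactly the difference that parseInt rounded away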

JavaScript Highest Representable Number

This question asks about the highest number in JavaScript without losing precision. Here, I ask about the highest representable number in JavaScript. Some answers to the other question reference the answer to this question, but they do not answer that question, so I hope I am safe asking here.
I tried to find the answer, but I got lost halfway through. The highest number representable in JavaScript seems to be somewhere between 2^1023 and 2^1024. I went further (in the iojs REPL) with
var highest = Math.pow(2, 1023);
for (let i = 1022; i > someNumber; i--) {
  highest += Math.pow(2, i);
}
The highest number here seems to be when someNumber is between 969 and 970. This means it is between (2^1023 + 2^1022 + ... + 2^970) and (2^1023 + 2^1022 + ... + 2^969). I'm not sure how to go further without running out of memory and/or waiting years for a loop or function to finish.
What is the highest number representable in JavaScript? Does JavaScript store all digits of this number, or just some, because whenever I see numbers of 10^21 or higher they are represented in scientific notation? Why can JavaScript represent these extremely high numbers, especially if it can "remember" all the digits? Isn't JavaScript a base 64 language?
Finally, why is this the highest representable number? I am asking because it is not an integer exponent of 2. Is it because it is a floating point number? If we took the highest floating point number, would that be related to some exponent of 2?
ECMAScript uses IEEE 754 floating points to represent all numbers (integers and floating points) in memory. It uses double precision (64 bit), so the largest possible number would be the following (in binary):
(-1)^0 * (1.1111111111111111111111111111111111111111111111111111)_2 * 2^1023
where the first factor is the sign bit (0 = positive), the middle factor is the significand (1 followed by 52 binary digits), and 1023 is the largest possible exponent.
That is 1.9999999999999997779553950749686919152736663818359375 * 2^1023, which is exactly 2^1024 − 2^971, approximately 1.7976931348623157 × 10^308. That number is also available in JavaScript as Number.MAX_VALUE.
JavaScript uses IEEE 754 double-precision floating point numbers, a.k.a. binary64. This format has 1 sign bit, 11 bits of exponent, and 52 bits of mantissa.
The highest possible number is the one encoded with the highest possible exponent and mantissa and a 0 sign bit, except that the exponent value of 7ff (base 16) is reserved to encode Infinity and NaNs. The largest finite number is therefore encoded as 7fef ffff ffff ffff, and its value is (1 + (1 − 2^(−52))) × 2^1023.
Refer to the linked article for further details about the formula.
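These values are easy to verify in a console (outputs in the comments):
console.log((2 - 2 ** -52) * 2 ** 1023 === Number.MAX_VALUE); // true
console.log(Number.MAX_VALUE);     // 1.7976931348623157e+308
console.log(Number.MAX_VALUE + 1); // unchanged: 1 is far below the gap between adjacent doubles up here
console.log(Number.MAX_VALUE * 2); // Infinity: past the largest finite double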

1.265 * 10000 = 126499.99999999999? [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
When I multiply 1.265 by 10000, I get 126499.99999999999 when using JavaScript.
Why is this so?
Floating point numbers can't handle decimals correctly in all cases. Check out
http://en.wikipedia.org/wiki/Floating-point_number#Accuracy_problems
http://www.mredkj.com/javascript/nfbasic2.html
You should be aware that all information in computers is in binary and the expansions of fractions in different bases vary.
For instance, 1/3 in base 10 = 0.333333333333333333..., while 1/3 in base 3 is equal to 0.1, and in base 2 it is equal to 0.0101010101010101....
In case you don't have a complete understanding of how different bases work, here's an example:
The base 4 number 301.12 is equal to 3 * 4^2 + 0 * 4^1 + 1 * 4^0 + 1 * 4^-1 + 2 * 4^-2 = 48 + 1 + 0.25 + 0.125 = 49.375 in base 10.
Now, the accuracy problems in floating point come from the limited number of bits in the significand. Floating point numbers have three parts: a sign bit, an exponent, and a mantissa. JavaScript uses the 64-bit IEEE 754 standard, but for a simpler calculation we'll use 32-bit. So 1.265 in floating point would have:
a sign bit of 0 (0 for positive, 1 for negative); an exponent of 0, which with the 127 offset (i.e. exponent + offset) is 127, or 01111111 in unsigned binary; and finally the significand of 1.265. The IEEE floating point standard uses a hidden-1 representation, so the binary representation of 1.265 is 1.01000011110101110000101, and ignoring the leading 1 the stored mantissa is 01000011110101110000101.
So our final IEEE 754 single-precision (32-bit) representation of 1.265 is:
Sign bit (+): 0 | Exponent (0): 01111111 | Mantissa (1.265): 01000011110101110000101
Now 1000 would be:
Sign bit (+): 0 | Exponent (9): 10001000 | Mantissa (1000): 11110100000000000000000
Now we have to multiply these two numbers. Floating point multiplication consists of re-adding the hidden 1 to both mantissas, multiplying the two mantissas, and adding the two exponents together (subtracting one offset so the bias isn't counted twice). After this the mantissa has to be normalized again.
First, 1.01000011110101110000101 * 1.11110100000000000000000 = 10.0111100001111111111111111000100000000000000000
(this multiplication is a pain).
Now obviously we have an exponent of 9 + an exponent of 0 so we keep 10001000 as our exponent, and our sign bit remains, so all that is left is normalization.
We need the mantissa to be of the form 1.xxxx, so we have to shift it right once, which also means we have to increment the exponent, bringing it up to 10001001. Our mantissa is now normalized to 1.00111100001111111111111111000100000000000000000. It must be truncated to 23 bits, so we are left with 1.00111100001111111111111 (not including the leading 1, because it will be hidden in the final representation). So the final answer we are left with is:
Sign bit (+): 0 | Exponent (10): 10001001 | Mantissa: 00111100001111111111111
Finally, if we convert this answer back to decimal, we get (+) 2^10 * (1 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-11 + 2^-12 + 2^-13 + 2^-14 + 2^-15 + 2^-16 + 2^-17 + 2^-18 + 2^-19 + 2^-20 + 2^-21 + 2^-22 + 2^-23) ≈ 1264.99987792.
While I simplified the problem by multiplying 1.265 by 1000 instead of 10000, and by using single-precision floating point instead of double, the concept stays the same. You lose accuracy because the floating point representation has only so many bits in the mantissa with which to represent any given number.
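To see the same effect at double precision, you can dump the actual 64 bits JavaScript stores for 1.265. A sketch using a DataView (the helper name doubleToBits is made up for illustration):
function doubleToBits(x) {
  // Write the number into an 8-byte buffer, then read the raw bits back out big-endian
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  let bits = "";
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, "0");
  }
  return bits; // 1 sign bit, 11 exponent bits, 52 mantissa bits
}
console.log(doubleToBits(1.265));     // sign 0, exponent 01111111111, mantissa 0100001111010111... cut off at 52 bits
console.log((1.265).toPrecision(20)); // prints a value just below 1.265, the same rounding error derived above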
Hope this helps.
It's a result of floating point representation error. Not all numbers that have finite decimal representation have a finite binary floating point representation.
Have a read of this article. Essentially, computers and floating-point numbers do not go together perfectly!
On the other hand, 126500 IS equal to 126499.99999999.... :)
Just like 1 is equal to 0.99999999....
Because 1 = 3 * 1/3 = 3 * 0.333333... = 0.99999999....
Purely due to the inaccuracies of floating point representation.
You could try using Math.round:
var x = Math.round(1.265 * 10000);
These small errors are usually caused by the precision of the floating points as used by the language. See this wikipedia page for more information about the accuracy problems of floating points.
Here's a way to overcome your problem, although arguably not very pretty:
var correct = parseFloat((1.265*10000).toFixed(3));
// Here's a breakdown of the line of code:
var result = (1.265*10000);
var rounded = result.toFixed(3); // Gives a string representation with three decimals
var correct = parseFloat(rounded); // Convert the string back into a number (trailing zeros are dropped)
If you need a solution, stop using floats or doubles and start using BigDecimal.
Check the BigDecimal implementation at stz-ida.de/html/oss/js_bigdecimal.html.en.
Even simple arithmetic on the MS JScript engine shows it:
WScript.Echo(1083.6 - 1023.6) gives 59.9999999
