JavaScript numbers representing limits of inequality comparisons

I am working with a set of values such that 0 < value < 1. I want to validate some data such that a value outside this range is modified to the closest valid value. After some experimentation, I discovered Number.MIN_VALUE as the best way of representing the lower limit. However, the upper limit, 1 - Number.MIN_VALUE, has unexpected behaviour:
Whereas:
Number.MIN_VALUE;         // 5e-324
Number.MIN_VALUE > 0;     // true
1 - Number.MIN_VALUE;     // 1
1 - Number.MIN_VALUE < 1; // false
So, what is the best way of generating the smallest and largest numbers in JavaScript that satisfy an inequality of the form a < value < b?

The mantissa of a JavaScript float has 52 bits. With denormalization, the minimum float can be extremely small, 5e-324 -- this is represented as something like 51 zero bits, 1 one bit, and the most negative exponent.
But you can't get that close to 1: the largest double below 1 already has a mantissa of all one bits, and subnormals only help near zero, not near 1, so the gap is limited by the floating-point precision.
So the largest representable value below 1 is:
console.log(1 - .5**53);
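For the original question, here is a minimal clamping sketch built on these two bounds: the smallest positive double, Number.MIN_VALUE, and the largest double below 1, 1 - 2 ** -53 (the constant and function names are just for illustration).
const LOWER = Number.MIN_VALUE; // 5e-324
const UPPER = 1 - 2 ** -53;     // 0.9999999999999999
function clampToOpenUnitInterval(value) {
  // Pull out-of-range values to the nearest representable value inside (0, 1)
  return Math.min(Math.max(value, LOWER), UPPER);
}
console.log(clampToOpenUnitInterval(0) > 0); // true
console.log(clampToOpenUnitInterval(1) < 1); // true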

Related

How to convert a BigInt, to two's complement binary in JavaScript?

With "normal" numbers(32bit range), i'm using zero fill right shift operator to convert to binary, which works both with positive and negative numbers(results in the two's complement binary):
const numberToConvert = -100
(numberToConvert >>> 0).toString(2);
//Result is correct, in two's complement: '11111111111111111111111110011100'
But how can this be done with a negative BigInt?
If I do:
(-1000000000000000000n >>> 0).toString(2)
I get an error "Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions"
So then I try to use 0 as a BigInt:
(-1000000000000000000n >>> 0n).toString(2)
I get the following error: Uncaught TypeError: BigInts have no unsigned right shift, use >> instead
Doing so results in the non-two's-complement binary, with "-" prepended to it:
(-1000000000000000000n >> 0n).toString(2)
//Result is:'-110111100000101101101011001110100111011001000000000000000000'
How can I get the two's complement binary, of a negative bigint?
The bitwise operators are for 32-bit integers anyway. As for why it doesn't work with BigInt, quoting JavaScript: The Definitive Guide, 7th Ed., David Flanagan, O'Reilly, p. 78:
Shift right with zero fill (>>>): This is the only one of the JavaScript bitwise operators that cannot be used with BigInt values. BigInt does not represent negative numbers by setting the high bit the way that 32-bit integers do, and this operator only makes sense for that particular two’s complement representation.
Also note that it looks like it is giving you two's complement, but in fact the negative number is converted to a 32-bit unsigned integer and then printed as binary, giving you the impression that it is two's complement:
console.log(-100 >>> 0); // => 4294967196
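That value is simply 2 ** 32 - 100, the unsigned reinterpretation of the 32-bit two's-complement pattern of -100:
console.log(2 ** 32 - 100);               // 4294967196
console.log((2 ** 32 - 100).toString(2)); // '11111111111111111111111110011100'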
The two's complement has this property:
You have a number, say 123, which is 01111011 in 8-bit binary, and you want the negative of it, which is -123.
Two's complement says: take the answer you want, treat it as a positive number, and add it to the original number 123; you get all 0s plus an overflow out of the 8-bit number.
As an example, treating everything as positive, 123 + theAnswerYouWant is 01111011 + 10000101, which is exactly 00000000 with an overflow, i.e. 100000000 (note the extra 1 in front). In other words, you want 256 - 123, which is 133, and if you render 133 in 8 bits, that's the answer you want.
As a result, you can subtract the original number from 2^8, treat the result as a positive number, and display it using .toString(2), which you already have.
The following is for 64 bits:
function getBinary(a, nBits) {
  [a, nBits] = [BigInt(a), BigInt(nBits)];
  // Reject values that don't fit in a signed nBits-bit integer
  if ((a > 0 && a >= 2n ** (nBits - 1n)) || (a < 0 && -a > 2n ** (nBits - 1n))) {
    throw new RangeError("overflow error");
  }
  // Non-negative: plain binary, zero-padded; negative: 2^nBits + a gives the two's complement
  return a >= 0
    ? a.toString(2).padStart(Number(nBits), "0")
    : (2n ** nBits + a).toString(2);
}
console.log(getBinary(1000000000000000000n, 64));
console.log(getBinary(-1000000000000000000n, 64));
console.log(getBinary(-1, 64));
console.log(getBinary(-2, 64));
console.log(getBinary(-3, 64));
console.log(getBinary(-4, 64n)); // trying the nBits as a BigInt as a test
console.log(getBinary(2n ** 63n - 1n, 64));
console.log(getBinary(-(2n ** 63n), 64));
// console.log(getBinary(2n ** 63n, 64)); // throw Error
// console.log(getBinary(-(2n ** 63n) - 1n, 64)); // throw Error
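As a quick check against the 8-bit example in the prose above:
console.log(getBinary(-123, 8)); // '10000101' -- i.e. 256 - 123 = 133 in binary
console.log(getBinary(123, 8));  // '01111011'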
Note that you don't have to pad it when a is negative, because, for example, with 8 bits the number being displayed is anywhere from 11111111 down to 10000000, and that is always 8 bits.
Some more details:
1. You may already know that ones' complement is simply flipping the bits (0 to 1, and 1 to 0). Another way to think of it is: add the two numbers together and the result is all 1s.
2. The usual way two's complement is described is to flip the bits and add 1. You see, if you start with 11111111 and subtract 01111011 (which is 123 in decimal), you get 10000100, which is exactly the same as flipping the bits. (This actually follows from point 1: adding them gives all 1s, so subtracting one of them from all 1s gives the other.)
3. So if you start with 11111111, subtract that number, and then add 1, isn't that the same as taking 11111111, adding 1, and then subtracting that number? Well, 11111111 plus 1 is 100000000 (note the extra 1 in front) -- that is exactly 2^n, where n is the bit width -- from which you then subtract the number. So you can see why the property at the beginning of this post is true.
4. In fact, two's complement is designed with exactly this purpose: if we want to compute 2 - 1, we only need to treat the "two's complement" values as positive numbers and add them with the processor's addition circuitry: 00000010 plus 11111111. We get 00000001 plus a carry (the overflow). If we handle the overflow correctly by discarding it, we get the answer: 1. If we used ones' complement instead, we couldn't use the same addition circuitry: 00000010 + 11111110 gives 00000000, which is 0, not 1.
5. Another way to think about (4): if a car's odometer reads 000002 miles so far, how do you subtract 1 from it? Well, if you represent -1 as 999999, you just add 999999 to the 2 and get 1000001, but the leftmost 1 does not show on the odometer, so it now reads 000001. In decimal, representing -1 as 999999 is ten's complement. In binary, representing -1 as 11111111 is called two's complement.
Two's complement only makes sense with fixed bit lengths. Numbers are converted to 32-bit integers (this is an old convention from back when javascript was messier). BigInt doesn't have that kind of conversion as the length is considered arbitrary. So, in order to use two's complement with BigInt, you'll need to figure out what length you want to use then convert it. Conversion to two's complement is described many places including Wikipedia.
Here, we use the LSB to MSB method since it's pretty easy to implement as string processing in javascript:
const toTwosComplement = (n, len) => {
  // `n` must be an integer
  // `len` must be a positive integer greater than bit-length of `n`
  n = BigInt(n);
  len = Number(len);
  if (!Number.isInteger(len)) throw '`len` must be an integer';
  if (len <= 0) throw '`len` must be greater than zero';
  // If non-negative, a straight conversion works
  if (n >= 0) {
    n = n.toString(2);
    if (n.length >= len) throw 'out of range';
    return n.padStart(len, '0');
  }
  n = (-n).toString(2); // make positive and convert to bit string
  if (!(n.length < len || n === '1'.padEnd(len, '0'))) throw 'out of range';
  // Start at the LSB and work up. Copy bits up to and including the
  // first 1 bit then invert the remaining
  let invert = false;
  return n.split('').reverse().map(bit => {
    if (invert) return bit === '0' ? '1' : '0';
    if (bit === '0') return bit;
    invert = true;
    return bit;
  }).reverse().join('').padStart(len, '1');
};
console.log(toTwosComplement( 1000000000000000000n, 64));
console.log(toTwosComplement(-1000000000000000000n, 64));
console.log(toTwosComplement(-1, 64));
console.log(toTwosComplement(2n**63n-1n, 64));
console.log(toTwosComplement(-(2n**63n), 64));
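For comparison, the built-in BigInt.asUintN reinterprets a BigInt as an unsigned n-bit value, which yields the same bit pattern for negative inputs (positive values would come back without the zero padding):
console.log(BigInt.asUintN(64, -1000000000000000000n).toString(2));
console.log(BigInt.asUintN(64, -1n).toString(2)); // 64 ones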

Firefox/Chrome JSON parsing numeric value in JSON as numeric value -1 for 1 specific value [duplicate]

Consider this code (node v5.0.0)
const a = Math.pow(2, 53)
const b = Math.pow(2, 53) + 1
const c = Math.pow(2, 53) + 2
console.log(a === b) // true
console.log(a === c) // false
Why is a === b true?
What is the maximum integer value JavaScript can handle?
I'm implementing a random integer generator up to 2^64. Is there any pitfall I should be aware of?
How does JavaScript treat large integers?
JS does not have integers. JS numbers are 64-bit floats. They are stored as a mantissa and an exponent.
The precision is given by the mantissa, the magnitude by the exponent.
If your number needs more precision than what can be stored in the mantissa, the least significant bits will be truncated.
9007199254740992; // 9007199254740992
(9007199254740992).toString(2);
// "100000000000000000000000000000000000000000000000000000"
// \ \ ... /\
// 1 10 53 54
// The 54th bit is not stored, but that's not a problem because it's 0
9007199254740993; // 9007199254740992
(9007199254740993).toString(2);
// "100000000000000000000000000000000000000000000000000000"
// \ \ ... /\
// 1 10 53 54
// The 54th bit should be 1, but the mantissa only has 53 bits!
9007199254740994; // 9007199254740994
(9007199254740994).toString(2);
// "100000000000000000000000000000000000000000000000000010"
// \ \ ... /\
// 1 10 53 54
// The 54th bit is not stored, but that's not a problem because it's 0
Then, you can store all these integers:
-9007199254740992, -9007199254740991, ..., 9007199254740991, 9007199254740992
The second one is called the minimum safe integer:
The value of Number.MIN_SAFE_INTEGER is the smallest integer n such
that n and n − 1 are both exactly representable as a Number value.
The value of Number.MIN_SAFE_INTEGER is −9007199254740991
(−(2^53 − 1)).
The second last one is called the maximum safe integer:
The value of Number.MAX_SAFE_INTEGER is the largest integer n such
that n and n + 1 are both exactly representable as a Number value.
The value of Number.MAX_SAFE_INTEGER is 9007199254740991
(2^53 − 1).
Answering your second question, here is your maximum safe integer in JavaScript:
console.log( Number.MAX_SAFE_INTEGER );
All the rest is written in MDN:
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The
reasoning behind that number is that JavaScript uses double-precision
floating-point format numbers as specified in IEEE 754 and can only
safely represent numbers between -(2 ** 53 - 1) and 2 ** 53 - 1.
Safe in this context refers to the ability to represent integers
exactly and to correctly compare them. For example,
Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will
evaluate to true, which is mathematically incorrect. See
Number.isSafeInteger() for more information.
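You can see the boundary directly in the console:
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false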
.:: JavaScript only supports 53-bit integers ::.
All numbers in JavaScript are floating point which means that integers are always represented as
sign × mantissa × 2^exponent
The mantissa has 53 bits. You can use the exponent to get higher integers, but then they won't be contiguous any more. For example, you generally need to multiply the mantissa by two (exponent 1) in order to reach the 54th bit.
However, if you multiply by two, you will only be able to represent every second integer:
Math.pow(2, 53)     // 9007199254740992 (54 bits)
Math.pow(2, 53) + 1 // 9007199254740992
Math.pow(2, 53) + 2 // 9007199254740994
Math.pow(2, 53) + 3 // 9007199254740996
Math.pow(2, 53) + 4 // 9007199254740996
Rounding effects during the addition make things unpredictable for odd increments (+1 versus +3). The actual representation is a bit more complicated but this explanation should help you understand the basic problem.
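To see the widening gaps concretely (Number.EPSILON is 2 ** -52, the spacing between representable numbers at 1):
console.log(Number.EPSILON * 2 ** 52); // 1 -- spacing in [2 ** 52, 2 ** 53)
console.log(Number.EPSILON * 2 ** 53); // 2 -- spacing in [2 ** 53, 2 ** 54)
console.log(2 ** 54 + 2 === 2 ** 54);  // true -- the tie rounds to the even neighbour, 2 ** 54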
You can safely use the strint library to encode large integers in strings and perform arithmetic operations on them too.
Here is the full article.
Number.MAX_VALUE will tell you the largest floating-point value representable in your JS implementation. The answer will likely be: 1.7976931348623157e+308. But that doesn't mean that every integer up to 10^308 can be represented exactly. As your example code shows, beyond 2^53 only even numbers can be represented, and as you go farther out on the number line the gaps get much wider.
If you need exact integers larger than 2^53, you probably want to work with a bignum package, which allows for arbitrarily large integers (within the bounds of available memory). Two packages that I happen to know are:
BigInt by Leemon
and
Crunch
To supplement the other answers here, it's worth mentioning that BigInt exists. This allows JavaScript to handle arbitrarily large integers.
Use the n suffix on your numbers and use regular operators, like 2n ** 53n + 2n. It's important to point out that a BigInt is not a Number, but you can do range-limited interoperation with Number via explicit conversions.
Some examples at the Node.js REPL:
> 999999999999999999999999999999n + 1n
1000000000000000000000000000000n
> 2n ** 53n
9007199254740992n
> 2n ** 53n + 1n
9007199254740993n
> 2n ** 53n == 2n ** 53n + 1n
false
> typeof 1n
'bigint'
> 3 * 4n
TypeError: Cannot mix BigInt and other types, use explicit conversions
> BigInt(3) * 4n
12n
> 3 * Number(4n)
12
> Number(2n ** 53n) == Number(2n ** 53n + 1n)
true
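Since the question mentions building a random integer generator up to 2^64, here is one hedged sketch (not from any of the answers above) that assembles a random 64-bit BigInt from two cryptographically random 32-bit words; it assumes a runtime where globalThis.crypto is available (modern browsers, recent Node):
function randomUint64() {
  const words = new Uint32Array(2);
  crypto.getRandomValues(words); // fills the array with random 32-bit values
  return (BigInt(words[0]) << 32n) | BigInt(words[1]); // 0n .. 2n ** 64n - 1n
}
console.log(randomUint64());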

max integer value in JavaScript

I'm reading the second chapter of the book Eloquent JavaScript. The author states that:
Any whole number less than 2^52 (which is more than 10^15) will safely fit in a JavaScript number.
I grabbed the value of 2^52 from wikipedia.
4,503,599,627,370,496
The value has to be less than 2^52, so I've subtracted 1 from the initial value:
var max = 4503599627370495;
After defining the max variable I'm checking what's the value (I'm using Chrome 32.0.1700.77).
console.log(max); // 4503599627370495
I'd like to see what happens when I go over this limit, so I'm adding one a couple of times.
Unexpectedly:
max += 1;
console.log(max); // 4503599627370496
max += 1;
console.log(max); // 4503599627370497
max += 1;
console.log(max); // 4503599627370498
I went over the limit and the calculations are still precise.
I tried the next power of two, 2^53, instead; I didn't subtract 1 this time:
9,007,199,254,740,992
var max = 9007199254740992;
This one seems to be a bigger limit; it seems that I can quite safely add and subtract numbers:
max += 1;
console.log(max); // 9007199254740992
max += 1;
console.log(max); // 9007199254740992
max -= 1;
console.log(max); // 9007199254740991
max += 1;
console.log(max); // 9007199254740992
max -= 900;
console.log(max); // 9007199254740092
max += 900;
console.log(max); // 9007199254740992
I can assign an even bigger value to max; however, it loses precision and I can't safely add or subtract numbers any more.
Could you please explain precisely the mechanism that sits under the hood? An example of what happens with the bits after going above 2^52 would be really helpful.
JavaScript is not a strongly typed programming language; it has a single Number type for all numbers. You can even get an infinite number: document.write(Math.exp(1000)).
document.write(Number.MIN_VALUE + "<br>");
document.write(Number.MAX_VALUE + "<br>");
document.write(Number.POSITIVE_INFINITY + "<br>");
document.write(Number.NEGATIVE_INFINITY + "<br>");
alert([
Number.MAX_VALUE/(1e293),
Number.MAX_VALUE/(1e292),
Number.MAX_VALUE/(1e291),
Number.MAX_VALUE/(1e290),
].join('\n'))
Hope it's a useful answer. Thanks!
UPDATE:
The max integer is ±9007199254740992.
You can find some information on JavaScript's Number type here: ECMA-262 5th Edition: The Number Type.
As it mentions, numbers are represented as 64-bit floating-point values, with 52 stored mantissa bits (plus an implied leading bit, giving 53 significant bits), 11 exponent bits, and a sign bit (IEEE 754). The value is then obtained as mantissa * 2^exponent.
This means that up to 2^53 integer values can be represented directly in the mantissa (the sign bit covers the positive and negative cases), while special values such as NaN and Infinity are encoded with reserved exponent patterns.
The number 2^53 (9007199254740992) can't be represented in the mantissa, and you have to use an exponent. As an example, you can represent 2^53 as (9007199254740992 / 2) * 2^1, ie. mantissa = 9007199254740992 / 2 = 4503599627370496 and exponent = 1.
Let's check what happens with 2^53+1 (9007199254740993). Here we have to do the same, mantissa = 9007199254740993 / 2 = 4503599627370496. Oops, isn't this the same mantissa we had for 2^53? Looks like there's been some rounding error! :)
(Note: the above examples are not actually how it really works: the mantissa is always interpreted as having a dot after the first digit, which means that eg. the number 3 is actually stored as 1.5*2. I omitted this in the above explanation to make it easier to follow.)
You can find some more information on floating-point numbers (in general) here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You can think of integers in JS as being 52 bits wide, but remember that the bitwise logical operators (&, |, >>, etc.) only deal with the 32 least significant bits, discarding the rest.
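For example:
console.log(2 ** 40 | 0);       // 0 -- 2 ** 40 is a multiple of 2 ** 32, so the low 32 bits are zero
console.log((2 ** 32 + 5) | 0); // 5 -- only the low 32 bits survive the ToInt32 conversion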

Math.random and arithmetic shift

If I have the following code in JavaScript:
var index1 = (Math.random() * 6) >> 0;
var index2 = Math.floor(Math.random() * 6);
The results for index1 or index2 are anywhere between 0 and 6.
I must be confused with my understanding of the >> operator. I thought that by using the arithmetic shift, the results for index1 would be anywhere between 1 and 6.
I am noticing, however, that I don't need to use Math.floor() or Math.round() for index1 if I use the >> operator.
I know I can achieve this by adding 1 to both indexes, but I was hoping there was a better way of ensuring results are from 1 to 6 instead of adding 1.
I'm aware that bitwise operators treat their operands as a sequence of 32 bits (zeros and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
UPDATE:
I saw the original usage in this CAAT tutorial on line 26 and was wondering whether it would actually return a random number between 1 and 6; it seems it would only ever return a random number between 0 and 6. So you would never actually see the anim1.png fish image!
Thank you in advance for any enlightenment.
Wikipedia says '(Arithmetic right shifts for negative numbers would be equivalent to division using rounding towards 0 in one's complement representation of signed numbers as was used by some historic computers.)'
Not exactly an answer, but the idea is that >> 0 is really specific and shouldn't be used in general for getting a range between 1 and 6.
Most people would tell you to do:
Math.floor(Math.random() * 6) + 1;
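For what it's worth, >> 0 and Math.floor only differ for negative (or out-of-range) inputs; for the non-negative results of Math.random() they agree, and neither shifts the range up to 1-6:
console.log(3.7 >> 0, Math.floor(3.7));   // 3 3
console.log(-3.7 >> 0, Math.floor(-3.7)); // -3 -4 -- >> truncates toward zero after ToInt32
console.log(((Math.random() * 6) >> 0) + 1); // an integer from 1 to 6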

Highest and lowest possible result of this javascript expression?

What is the highest number this javascript expression can evaluate to? What is the lowest number? Why?
+(''+Math.random()).substring(2)
Extra credit: How many different values can the expression evaluate to? Can it be every value from the minimum to the maximum, or are some intermediate values not obtainable due to rounding issues?
Response to Daniel's answer (deleted, was 10000000000000000 max, 0 min):
I was playing around in Chrome's console and got this:
Math.random();
>> 0.00012365682050585747
'12365682050585747'.length
>> 17
12365682050585747 > 10000000000000000
>> true
... so 10000000000000000 can't be the max!
It depends on how the random number is generated, and how the number will be converted to string. The ECMAScript spec doesn't specify both of these.
In practice, the number will have at most 17 significant figures, so the maximum should be at most 10^17.
The spec does specify that a number will be displayed in decimal form (instead of scientific form) when the exponent is between -6 and 20 (10^-6 ≤ x < 10^21), so we just need to restrict our attention to numbers in [10^-6, 1) when trying to seek the maximum exhaustively.
However, in this range a number must be representable as s × 2^e, where 1 ≤ s ≤ 2 − 2^-52 with a precision of Δs = 2^-52, and −20 ≤ e ≤ −1. The spec recommends that ToNumber(ToString(x)) == x, so the number should be precise down to 2^(e−52) for a given e. Thus the "17-digit" number (2 − n × 2^-52) × 2^e with the smallest n will be the biggest number representable for a given e, after chopping the initial "0.".
(-20) 0.0000019073486328124998
(-19) 0.0000038146972656249996
(-18) 0.0000076293945312499975 (n=3)
(-17) 0.000015258789062499998
(-16) 0.000030517578124999997
(-15) 0.000061035156249999986 (n=2)
(-14) 0.00012207031249999999
(-13) 0.00024414062499999997
(-12) 0.00048828124999999995
(-11) 0.0009765624999999999 (always 16-digit?)
(-10) 0.0019531249999999998
(-9) 0.0039062499999999996
(-8) 0.0078124999999999965 (n=4)
(-7) 0.015624999999999998
(-6) 0.031249999999999997
(-5) 0.062499999999999986 (n=2)
(-4) 0.12499999999999999
(-3) 0.24999999999999997
(-2) 0.49999999999999994
(-1) 0.9999999999999999 (always 16-digit?)
From here we know that the absolute maximum is 78,124,999,999,999,965.
Math.random() can return any nonnegative number in the interval [0, 1), so the safe minimum is −324, coming from 5e-324: its string form is "5e-324", and chopping the first two characters leaves "-324" (the smallest subnormal number in double precision is 4.94 × 10^-324).
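Plugging the extremes back into the original expression (assuming the table values above round-trip exactly as printed):
console.log(+('' + 0.0078124999999999965).substring(2)); // 78124999999999965
console.log(+('' + 5e-324).substring(2));                // -324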
For me the highest number is 1/0 (===Infinity) and the lowest obviously -1/0 (in Chromium browser).
Edit: You can also try parsing a number from a string to see when it evaluates to Infinity.
var a = "1";
while (parseInt(a) !== Infinity) a = a + "0";
alert("Length of the highest number is: " + (a.length - 1));
309 for me.
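That count is consistent with Number.MAX_VALUE being about 1.8 × 10^308:
console.log(Number.MAX_VALUE);                             // 1.7976931348623157e+308
console.log(parseInt('1' + '0'.repeat(308)) === Infinity); // false (1e308 still fits)
console.log(parseInt('1' + '0'.repeat(309)) === Infinity); // true  (1e309 overflows)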
