I'm trying to understand how JS engines convert a JS Number (Float64) to a 32-bit signed integer. I read that one can quickly convert a 64-bit float to a 32-bit signed integer with bitwise OR, like:
-8589934590 | 0 // which gives 2
I can't understand where the 2 comes from. According to the spec, the ToInt32 algorithm does this (the bold text is mine, not the spec's):
Let number be ? ToNumber(argument): -8589934590 is already a Number
If number is NaN, +0, -0, +∞, or -∞, return +0.: No
Let int be the Number value that is the same sign as number and whose magnitude is floor(abs(number)): -8589934590 is already an integer
Let int32bit be int modulo 2³²: Since 2³² is positive, the result should also be positive. In JS the remainder operator uses the sign of the left operand, so to get a modulo in this case (where -8589934590 is negative), we negate it: let int32bit = 8589934590 % 2**32 // 4294967294, which has the 32-bit pattern 0b11111111111111111111111111111110
If int32bit ≥ 2³¹, return int32bit - 2³²; otherwise return int32bit.: int32bit is smaller than 2³¹ (since it's negative), so I use int32bit, which equals -2. (Even if we consider 0b11111111111111111111111111111110 an unsigned integer, then it's greater than 2³¹ and int32bit - 2³² still equals -2.)
Could someone please explain whether I correctly understand the ToInt32 algorithm and the bitwise OR operator?
Your step 4 is wrong. Modulo is defined by the spec as:
The notation “x modulo y” (y must be finite and nonzero) computes a value k of the same sign as y (or zero) such that abs(k) < abs(y) and x-k = q × y for some integer q.
So -8589934590 is our x and 2**32 is our y; from that we also know that k must be positive. If we choose q = -1, we can solve the equation to k = -4294967294. That is, however, not a valid solution, as k (negative) does not have the same sign as y (positive). If we choose q = -2 instead, we get k = 2.
So for negative x and positive y, q × y always has to be smaller than x for k to be positive. Thus, if we transform that to positive numbers (like you did), we are looking for the larger multiple of the number, not the smaller one. E.g. 2 % 3 returns 2 (2 - 2 = 3 × 0), whereas -2 modulo 3 returns 1 (-2 - 1 = 3 × -1).
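To make the corrected step concrete, here is a small sketch (not how engines actually implement it, and the helper name toInt32 is just for illustration) that follows the spec steps from the question, using BigInt so the modulo arithmetic stays exact:
function toInt32(number) {
  if (!Number.isFinite(number) || number === 0) return 0;                  // step 2: NaN, ±0, ±∞ -> +0
  const int = BigInt(Math.trunc(number));                                  // step 3: same sign, floor of abs
  const int32bit = ((int % 2n ** 32n) + 2n ** 32n) % 2n ** 32n;            // step 4: true modulo, result in [0, 2**32)
  return Number(int32bit >= 2n ** 31n ? int32bit - 2n ** 32n : int32bit);  // step 5
}
console.log(toInt32(-8589934590)); // 2
console.log(-8589934590 | 0);      // 2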
Related
With "normal" numbers(32bit range), i'm using zero fill right shift operator to convert to binary, which works both with positive and negative numbers(results in the two's complement binary):
const numberToConvert = -100;
(numberToConvert >>> 0).toString(2);
//Result is correct, in two's complement: '11111111111111111111111110011100'
But how can this be done with a negative BigInt?
If I do:
(-1000000000000000000n >>> 0).toString(2)
I get an error "Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions"
So then I try to use 0 as a BigInt:
(-1000000000000000000n >>> 0n).toString(2)
I get the following error: Uncaught TypeError: BigInts have no unsigned right shift, use >> instead
Doing so results in a non-two's-complement binary string, with "-" prepended to it:
(-1000000000000000000n >> 0n).toString(2)
//Result is:'-110111100000101101101011001110100111011001000000000000000000'
How can I get the two's complement binary, of a negative bigint?
The bitwise operators are for 32-bit integers anyway. As for why it doesn't work with BigInt, quoting JavaScript: The Definitive Guide, 7th Ed., David Flanagan, O'Reilly, p. 78:
Shift right with zero fill (>>>): This is the only one of the JavaScript bitwise operators that cannot be used with BigInt values. BigInt does not represent negative numbers by setting the high bit the way that 32-bit integers do, and this operator only makes sense for that particular two’s complement representation.
Also note that it looks like it is giving you two's complement, but in fact the negative number is converted to a 32-bit unsigned integer and then printed as binary, giving you the impression that it is two's complement:
console.log(-100 >>> 0); // => 4294967196
The two's complement has this property:
You have a number, say 123, which is 01111011 in 8-bit binary, and you want the negative of that, which is -123.
Two's complement says: take the answer you want, treat it as a positive number, and add it to the original number 123; you will get all 0's plus an overflow out of the 8-bit number.
As an example, treating everything as positive, 123 + theAnswerYouWant is 01111011 + 10000101, which is exactly 00000000 with an overflow, i.e. 100000000 (note the extra 1 in front). In other words, you want 256 - 123, which is 133, and if you render 133 as 8 bits, that's the answer you want.
As a result, you can subtract the original number from 2⁸, treat the result as a positive number, and display it using .toString(2), which you already have.
The following is for 64 bits:
function getBinary(a, nBits) {
  [a, nBits] = [BigInt(a), BigInt(nBits)];
  // Reject values that don't fit into an nBits-wide two's complement integer
  if ((a > 0 && a >= 2n ** (nBits - 1n)) || (a < 0 && -a > 2n ** (nBits - 1n))) {
    throw new RangeError("overflow error");
  }
  return a >= 0
    ? a.toString(2).padStart(Number(nBits), "0") // non-negative: just pad with leading zeros
    : (2n ** nBits + a).toString(2);             // negative: 2**nBits + a is its two's complement value
}
console.log(getBinary(1000000000000000000n, 64));
console.log(getBinary(-1000000000000000000n, 64));
console.log(getBinary(-1, 64));
console.log(getBinary(-2, 64));
console.log(getBinary(-3, 64));
console.log(getBinary(-4, 64n)); // trying the nBits as a BigInt as a test
console.log(getBinary(2n ** 63n - 1n, 64));
console.log(getBinary(-(2n ** 63n), 64));
// console.log(getBinary(2n ** 63n, 64)); // throw Error
// console.log(getBinary(-(2n ** 63n) - 1n, 64)); // throw Error
Note that you don't have to pad it when a is negative, because, for example, if it is 8 bits, the number being displayed is anywhere from 11111111 to 10000000 and it is always 8 bits.
Some more details:
You may already know that ones' complement is simply flipping the bits (from 0 to 1, and 1 to 0). Another way to think of it: add the two numbers together and the result becomes all 1s.
The usual way two's complement is described is to flip the bits and add 1. You see, if you start with 11111111 and subtract 01111011 (which is 123 decimal), you get 10000100, and that is exactly the same as flipping the bits. (This actually follows from the above: adding them gives all 1s, so subtracting one of them from all 1s gives the other one.)
Well, so if you start with 11111111, subtract that number, and then add 1, isn't that the same as taking 11111111, adding 1, and subtracting that number? Well, 11111111 plus 1 is 100000000 (note the extra 1 in front) -- that's exactly starting with 2ⁿ for an n-bit integer and then subtracting that number. So you see why the property at the beginning of this post is true.
In fact, two's complement is designed with exactly this purpose: if we want to find out 2 - 1, to make the computer calculate that, we only need to treat these "two's complement" values as positive numbers and add them together using the processor's "add circuitry": 00000010 plus 11111111. We get 00000001 with a carry (the overflow). If we handle the overflow correctly by discarding it, we get the answer: 1. If we used ones' complement instead, we couldn't use the same addition circuitry to carry out 00000010 + 11111110 and get 1, because the result is 00000000, which is 0.
Another way to think about the previous point is this: if you have a car's odometer, and it says 000002 miles so far, how do you subtract 1 from it? Well, if you represent -1 as 999999, then you just add 999999 to the 2 and get 1000001, but the leftmost 1 does not show on the odometer, so the odometer now reads 000001. In decimal, representing -1 as 999999 is 10's complement. In binary, representing -1 as 11111111 is called two's complement.
Two's complement only makes sense with fixed bit lengths. Numbers are converted to 32-bit integers (this is an old convention from back when JavaScript was messier). BigInt doesn't have that kind of conversion, as its length is considered arbitrary. So, in order to use two's complement with BigInt, you'll need to figure out what length you want to use and then convert it. Conversion to two's complement is described in many places, including Wikipedia.
Here, we use the LSB-to-MSB method since it's pretty easy to implement as string processing in JavaScript:
const toTwosComplement = (n, len) => {
// `n` must be an integer
// `len` must be a positive integer greater than bit-length of `n`
n = BigInt(n);
len = Number(len);
if(!Number.isInteger(len)) throw '`len` must be an integer';
if(len <= 0) throw '`len` must be greater than zero';
// If non-negative, a straight conversion works
if(n >= 0){
n = n.toString(2)
if(n.length >= len) throw 'out of range';
return n.padStart(len, '0');
}
n = (-n).toString(2); // make positive and convert to bit string
if(!(n.length < len || n === '1'.padEnd(len, '0'))) throw 'out of range';
// Start at the LSB and work up. Copy bits up to and including the
// first 1 bit then invert the remaining
let invert = false;
return n.split('').reverse().map(bit => {
if(invert) return bit === '0' ? '1' : '0';
if(bit === '0') return bit;
invert = true;
return bit;
}).reverse().join('').padStart(len, '1');
};
console.log(toTwosComplement( 1000000000000000000n, 64));
console.log(toTwosComplement(-1000000000000000000n, 64));
console.log(toTwosComplement(-1, 64));
console.log(toTwosComplement(2n**63n-1n, 64));
console.log(toTwosComplement(-(2n**63n), 64));
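As an aside (not part of the original answer): for a fixed width you can also let the built-in BigInt.asUintN do the wrap-around, since it reduces a BigInt modulo 2**bits. A minimal sketch, with toTwos as a hypothetical helper name:
// BigInt.asUintN(bits, n) returns n modulo 2**bits, which for a negative n
// is exactly its two's complement value; padStart supplies leading zeros for positives.
const toTwos = (n, bits) => BigInt.asUintN(bits, n).toString(2).padStart(bits, '0');
console.log(toTwos(-1000000000000000000n, 64));
console.log(toTwos(-1n, 64)); // 64 ones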
Can I ever run into any floating number precision errors if I don't perform any arithmetic operations on the floats? The only operations I do with numbers in my program are limited to the following:
Getting numbers as strings from a web service and converting them to floats using parseFloat()
Comparing resulting floats using <= < == > >=
Example:
const input = ['1000.69', '1001.04' /*, ... */]
const x = parseFloat(input[0])
const y = parseFloat(input[1])
console.log(x < y)
console.log(x > y)
console.log(x == y)
As for the parseFloat() implementation, I'm using the latest Node.js.
The source of floats is prices in USD as strings, always two decimals.
As long as the source of your floats is reliable, your checks are safe, yes.
I'd still round them to an acceptable decimal number after the parsing, just to be 100% safe.
As the MDN docs show in one of their examples:
// these all return 3.14
parseFloat(3.14);
parseFloat('3.14');
parseFloat(' 3.14 ');
parseFloat('314e-2');
parseFloat('0.0314E+2');
parseFloat('3.14some non-digit characters');
parseFloat({ toString: function() { return "3.14" } });
//and of course
parseFloat('3.140000000') === 3.14
The parseFloat operation converts a string into its Number value. The spec says:
In this specification, the phrase “the Number value for x” where x represents an exact real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with -0 removed and with two additional values added to it that are not representable in the Number type, namely 2¹⁰²⁴ (which is +1 × 2⁵³ × 2⁹⁷¹) and -2¹⁰²⁴ (which is -1 × 2⁵³ × 2⁹⁷¹). Choose the member of this set that is closest in value to x.
That reads as if two identical strings are always converted to the same closest number. And except for NaN, the same Number value always compares as equal:
6.1.6.1.13 Number::equal ( x, y )
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is -0, return true.
If x is -0 and y is +0, return true.
Return false.
emphasis mine
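A small check along those lines (the price strings are made up, in the two-decimal format from the question):
const a = parseFloat('1000.69');
const b = parseFloat('1000.69');
const c = parseFloat('1001.04');
console.log(a === b); // true: the same string always maps to the same closest Number
console.log(a < c);   // true: for prices of everyday magnitude, distinct two-decimal
                      // strings parse to distinct doubles, so ordering is preserved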
I am working with software (Oracle Siebel) that only supports JavaScript expressions with operators multiply, divide, subtract, add, and XOR (*, /, -, +, ^). I don't have other operators such as ! or ? : available.
Using the above operators, is it possible to convert a number to 1 if it is non-zero and leave it 0 if it's already zero? The number may be positive, zero, or negative.
Example:
var c = 55;
var d; // d needs to be set to 1
I tried c / c , but it evaluates to NaN when c is 0. d needs to be 0 when c is 0.
c is a currency value, and it will have a maximum of two trailing digits and 12 leading digits.
I am trying to emulate an if condition by converting a number to a Boolean 0 or 1, and then multiplying other parts of the expression.
Use the expression n/n^0.
If n is not zero:
Step     Explanation
-------  -----------------------------------------------------------------
n/n^0    Original expression.
1^0      Any number divided by itself equals 1. Therefore n/n becomes 1.
1        1 xor 0 equals 1.
If n is zero:
Step     Explanation
-------  -----------------------------------------------------------------
n/n^0    Original expression.
0/0^0    Since n is 0, n/n is 0/0.
NaN^0    Zero divided by zero is mathematically undefined. Therefore 0/0 becomes NaN.
0^0      In JavaScript, before any bitwise operation occurs, both operands are
         normalized. This means NaN becomes 0.
0        0 xor 0 equals 0.
As you can see, all non-zero values get converted to 1, and 0 stays at 0. This leverages the fact that in JavaScript, NaN^0 is 0.
Demo:
[0, 1, 19575, -1].forEach(n => console.log(`${n} becomes ${n/n^0}.`))
c / (c + 5e-324) should work. (The constant 5e-324 is Number.MIN_VALUE, the smallest representable positive number.) If c is 0, the result is exactly 0, and if c is nonzero (technically, if c is at least 4.45014771701440252e-308, which the smallest non-zero value allowed in the question, 0.01, certainly is), JavaScript's floating-point math is too imprecise for the answer to be different from 1, so it comes out as exactly 1.
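A quick check of that expression (not from the original answer), using values in the range the question allows:
const toFlag = c => c / (c + 5e-324); // 5e-324 is Number.MIN_VALUE
[0, 0.01, 55, -123.45].forEach(c => console.log(c, '->', toFlag(c))); // 0 -> 0, everything else -> 1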
(((c/c)^c) - c) * (((c/c)^c) - c) will always return 1 for negative and positive 32-bit integer values of c, and 0 for 0.
It is definitely more confusing than the chosen answer and longer. However, I feel like it is less hacky and not relying on constants.
EDIT: As @JosephSible mentions, a more compact version of mine and @CRice's version which does not use constants is:
c/c^c-c
A very complicated answer, but one that doesn't depend on limited precision: if you take x^(2**n), this will always be equal to x+2**n if the nth bit of x is zero, but it will be equal to x-2**n if x has a one in the nth place. Thus, for x=0, (x^(2**n)-x+2**n)/(2**(n+1)) will always be 1, but it will sometimes be zero for x != 0. So if you take the product of (x^(2**n)-x+2**n)/(2**(n+1)) over all n, then XOR that with 1, you will get your desired function. You'll have to manually code each factor, though. And you'll have to modify this if you're using floating points.
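A sketch of that construction (my own illustration, not from the answer), assuming x is a non-negative integer below 2**8 so only eight factors are needed; the returned expression uses only the allowed operators *, /, -, + and ^:
// Each factor ((x ^ 2**n) - x + 2**n) / 2**(n+1) is 1 when bit n of x is clear
// and 0 when it is set, so the whole product is 1 only for x = 0; XOR with 1 flips that.
function isNonZero8(x) {
  return ((x ^ 1) - x + 1) / 2
       * ((x ^ 2) - x + 2) / 4
       * ((x ^ 4) - x + 4) / 8
       * ((x ^ 8) - x + 8) / 16
       * ((x ^ 16) - x + 16) / 32
       * ((x ^ 32) - x + 32) / 64
       * ((x ^ 64) - x + 64) / 128
       * ((x ^ 128) - x + 128) / 256 ^ 1;
}
[0, 1, 19, 255].forEach(x => console.log(x, '->', isNonZero8(x))); // 0 -> 0, others -> 1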
If you have the == operator, then (x==0)^1 works.
Javascript evaluates the following code snippet to -1.
-5 % 4
I understand that the remainder theorem states a = bq + r such that 0 ≤ r < b.
Given the definition above should the answer not be 3? Why does JavaScript return -1?
Because it's a remainder operator, not a modulo. But there's a proposal for a proper one.
A quote from ECMAScript 5.1:
remainder r from a dividend n and a divisor d is defined by the mathematical relation r = n − (d × q) where q is an integer that is negative only if n/d is negative and positive only if n/d is positive
Most programming languages use a symmetric modulo, which is different from the mathematical one for negative values.
The mathematical modulo can be computed using the symmetric modulo like this:
a mod b = ((a % b) + b) % b
mod mathematical modulo
% symmetric modulo
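As a sketch (the helper name mod is just for illustration), that formula in code:
// Mathematical (floored) modulo built on top of JavaScript's remainder operator.
const mod = (a, b) => ((a % b) + b) % b;
console.log(-5 % 4);     // -1 (remainder: takes the sign of the dividend)
console.log(mod(-5, 4)); //  3 (modulo: takes the sign of the divisor)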
The reason is that % is not a modulus operator but a remainder operator.
If you're using % to do modular arithmetic, it doesn't matter (conceptually, at least) whether -5 % 4 evaluates to –1 or 3, because those two numbers are congruent modulo 4: for the purposes of modular arithmetic, they're the same.
...if the remainder is nonzero, there are two possible choices for the remainder, one negative and the other positive, and there are also two possible choices for the quotient. Usually, in number theory, the positive remainder is always chosen, but programming languages choose depending on the language and the signs of a and n. (http://en.wikipedia.org/wiki/Modulo_operation)
In Python, which takes the sign of the divisor:
-5 % 4 == 3  # -5 = (-2) * 4 + 3
In JavaScript, which takes the sign of the dividend:
-5 % 4 == -1 // -5 = (-1) * 4 - 1
What is the highest number this javascript expression can evaluate to? What is the lowest number? Why?
+(''+Math.random()).substring(2)
Extra credit: How many different values can the expression evaluate to? Can it be every value from the minimum to the maximum, or are some intermediate values not obtainable due to rounding issues?
Response to Daniel's answer (deleted, was 10000000000000000 max, 0 min):
I was playing around in Chrome's console and got this:
Math.random();
>> 0.00012365682050585747
'12365682050585747'.length
>> 17
12365682050585747 > 10000000000000000
>> true
... so 10000000000000000 can't be the max!
It depends on how the random number is generated, and how the number is converted to a string. The ECMAScript spec doesn't fully specify either of these.
In practice, the number will have at most 17 significant figures, so the maximum should be at most 10¹⁷.
The spec does specify that a number will be displayed in decimal form (instead of scientific form) when the exponent is between -6 and 20 (10⁻⁶ ≤ x < 10²¹), so we just need to restrict our attention to numbers in [10⁻⁶, 1) when seeking the maximum exhaustively.
However, in this range a number must be representable as s × 2^e, where 1 ≤ s ≤ 2 − 2⁻⁵² with a precision of Δs = 2⁻⁵², and -20 ≤ e ≤ -1. The spec recommends that ToNumber(ToString(x)) == x, so the number should be precise down to 2^(e−52) for a given e. Thus the "17-digit" number of the form (2 − n × 2⁻⁵²) × 2^e with the smallest n will be the biggest number representable with a given e, after chopping the initial "0.".
(e)   value
(-20) 0.0000019073486328124998
(-19) 0.0000038146972656249996
(-18) 0.0000076293945312499975 (n=3)
(-17) 0.000015258789062499998
(-16) 0.000030517578124999997
(-15) 0.000061035156249999986 (n=2)
(-14) 0.00012207031249999999
(-13) 0.00024414062499999997
(-12) 0.00048828124999999995
(-11) 0.0009765624999999999 (always 16-digit?)
(-10) 0.0019531249999999998
(-9) 0.0039062499999999996
(-8) 0.0078124999999999965 (n=4)
(-7) 0.015624999999999998
(-6) 0.031249999999999997
(-5) 0.062499999999999986 (n=2)
(-4) 0.12499999999999999
(-3) 0.24999999999999997
(-2) 0.49999999999999994
(-1) 0.9999999999999999 (always 16-digit?)
From here we know that the absolute maximum is 78,124,999,999,999,965.
Math.random() can return any nonnegative number in the interval [0, 1), so the safe minimum is -324, from 5e-324 (the smallest subnormal number in double precision is 4.94 × 10⁻³²⁴).
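A quick way to reproduce both extremes in a console (assuming the table above is accurate):
console.log(+('' + 0.0078124999999999965).substring(2)); // 78124999999999965 (the maximum)
console.log(+('' + 5e-324).substring(2));                // -324 (the minimum: "5e-324".substring(2) is "-324")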
For me the highest number is 1/0 (===Infinity) and the lowest obviously -1/0 (in Chromium browser).
Edit: You can also try parsing a number from a string to see which one evaluates to Infinity.
var a = "1";
while(parseInt(a)!==Infinity) a=a+"0";
alert("Length of the highest number is: " + (a.length-1));
309 for me.