JavaScript and Python bitwise OR operations give different results

JS
console.log(1 | 1); // 1
console.log(1 | 0x8); // 9
console.log(1 | 0x80000000); // -2147483647
Python
print(1 | 1)          # 1
print(1 | 0x8)        # 9
print(1 | 0x80000000) # 2147483649
Why are the results of the last examples different?

The JavaScript behavior is described on MDN:
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format, except for zero-fill right shift which results in an unsigned 32-bit integer.
So you get negative numbers in JavaScript because it treats the value as a 32-bit signed number. The 0x80000000 bit is the sign bit.
The qualifier at the end of the above quote points the way to get the same result as Python:
console.log((1 | 0x80000000) >>> 0); // 2147483649
>>> is the zero-fill right shift operator. Shifting by 0 bits doesn't change the value, but it gets converted to unsigned.
Python integers have infinite precision, so they don't wrap around to negative numbers when they get to 32 bits.
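Incidentally, if you want Python-style arbitrary-precision bitwise results in JavaScript, BigInt (ES2020) is one option, since BigInt bitwise operators do not truncate to 32 bits. A minimal sketch:
console.log(1n | 0x80000000n); // 2147483649n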

Related

Javascript's Shift right with zero-fill operator (>>>) yielding unexpected result

First, (-1 >>> 0) === (2**32 - 1), which I expect is due to adding a new zero to the left, thus converting the number into a 33-bit number?
But why is (-1 >>> 32) === (2**32 - 1) as well, while I expect it (after shifting the 32-bit number 32 times and replacing the most significant bits with zeros) to be 0?
Shouldn't it be ((-1 >>> 31) >>> 1) === 0? Or am I missing something?
When you execute (-1 >>> 0) you are executing an unsigned right shift. The unsigned here is key. Per the spec, the result of >>> is always unsigned. -1 is represented as the two's complement of 1. This in binary is all 1s (in an 8-bit system it'd be 11111111).
So now you are making it unsigned by executing >>> 0. You are saying, "shift the binary representation of -1, which is all 1s, by zero bits (make no changes), but return an unsigned number." So, you get the value of all 1s. Go to any JavaScript console in a browser and type:
console.log(2**32 - 1) //4294967295
// 0b means binary representation, and it can have a negative sign
console.log(0b11111111111111111111111111111111) //4294967295
console.log(-0b1 >>> 0) //4294967295
Remember 2 ** any number minus 1 is always all ones in binary. It's the same number of ones as the power you raised two to. So 2**32 - 1 is 32 1s. For example, two to the 3rd power (eight) minus one (seven) is 111 in binary.
So for the next one, (-1 >>> 32) === (2**32 - 1)... let's look at a few things. We know the binary representation of -1 is all 1s. Shift it right one bit and you get the same pattern but preceded by a zero (and returned as an unsigned number).
console.log(-1 >>> 1) //2147483647
console.log(0b01111111111111111111111111111111) //2147483647
And keep shifting until you have 31 zeros and a single 1 at the end.
console.log(-1 >>> 31) //1
This makes sense to me, we have 31 0s and a single 1 now for our 32 bits.
So then you hit the weird case, shifting one more time should make zero right?
Per the spec:
6.1.6.1.11 Number::unsignedRightShift ( x, y )
1. Let lnum be ! ToInt32(x).
2. Let rnum be ! ToUint32(y).
3. Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
4. Return the result of performing a zero-filling right shift of lnum by shiftCount bits. Vacated bits are filled with zero. The result is an unsigned 32-bit integer.
So we know we already have -1, which is all 1s in two's complement. And we are going to shift it, per the last step of the spec, by shiftCount bits (which we think is 32). And shiftCount is:
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
So what is rnum & 0x1F? Well & means a bitwise AND operation. lnum is the number left of the >>> and rnum is the number right of it. So we are saying 32 AND 0x1F. Remember 32 is 100000. 0x is hexadecimal where each character can be represented by 4 bits. 1 is 0001 and F is 1111. So 0x1F is 00011111 or 11111 (31 in base 10, 2**5 - 1 also).
console.log(0x1F) //31 (which is 11111)
  32: 100000 &
0x1F: 011111
      ------
      000000
The number of bits to shift is zero. This is because the leading 1 in 32 is not among the least significant 5 bits! 32 is six bits long. So we take 32 1s and shift them zero bits! That's why. The answer is still 32 1s.
On the example -1 >>> 31 this made sense because 31 fits in 5 bits. So we did
  31: 11111 &
0x1F: 11111
      -----
      11111
And shifted it 31 bits... as expected.
Let's test this further.... let's do
console.log(-1 >>> 33) //2147483647
console.log(-1 >>> 1) //2147483647
That makes sense, just shift it one bit.
  33: 100001 &
0x1F: 011111
      ------
      000001
So, exceed 5 bits in the shift count and things get confusing. Want to play stump-the-dummy with someone who hasn't researched the ECMAScript spec to answer a Stack Overflow post? Just ask why these are the same.
console.log(-1 >>> 24033) //2147483647
console.log(-1 >>> 1) //2147483647
Well of course it's because
console.log(0b101110111100001) // 24033
console.log(0b000000000000001) // 1
//                        ^^^^^ I only care about these bits!!!
When you do (-1 >>> 0), the 32 bits stay exactly the same; they are just reinterpreted as an unsigned number, therefore ending up as 2**32 - 1.
The next behaviour is documented in the ECMAScript specification. The actual number of shifts is going to be "the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F".
Since 32 & 0x1F === 0, both of your results will be identical.
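To make the masking rule easy to experiment with, here is a tiny helper mirroring the spec's shiftCount computation (the helper name is made up for illustration):
function effectiveShift(rnum) {
  return rnum & 0x1F; // only the least significant 5 bits count
}
console.log(effectiveShift(32));    // 0
console.log(effectiveShift(33));    // 1
console.log(effectiveShift(24033)); // 1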

Left shift results in negative numbers in Javascript

I'm having trouble understanding how shifting works. I would expect that a and b would be the same but that's not the case:
a = 0xff000000;
console.log(a.toString(16));
b = 0xff << 24;
console.log(b.toString(16));
resulting in:
ff000000
-1000000
I came to this code while trying to create a 32-bit number from 4 bytes.
Bitwise operators convert their operands to signed 32-bit integers. That means the most significant bit is the sign bit, which gives you only 31 bits for the number value.
0xff000000 by itself is interpreted as a 64-bit floating point value. But truncating this to a 32-bit signed integer produces a negative value since the most significant bit is 1:
0xff000000.toString(2);
> "11111111000000000000000000000000"
(0xff000000 | 0).toString(16)
> -1000000
According to "Bitwise operations on 32-bit unsigned ints?" you can use >>> 0 to convert the value back to an unsigned value:
0xff << 24 >>> 0
> 4278190080
From the spec (describing >>>):
The result is an unsigned 32-bit integer.
So it turns out this is as per the spec: bit shift operators return signed, 32-bit integer results. From the latest ECMAScript spec:
The result is a signed 32-bit integer.
Because your number is already 8 bits long, shifting it left by 24 bits and then interpreting that as a signed integer means that the leading 1 bit is seen as making it a negative number.
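Since the question came from assembling a 32-bit number out of 4 bytes, here is one way to do that and still get an unsigned result, using the >>> 0 trick from above (a sketch; it assumes big-endian byte order and byte values 0-255, and the function name is made up):
function bytesToUint32(b0, b1, b2, b3) {
  // << and | produce a signed 32-bit result; >>> 0 reinterprets it as unsigned
  return ((b0 << 24) | (b1 << 16) | (b2 << 8) | b3) >>> 0;
}
console.log(bytesToUint32(0xff, 0, 0, 0).toString(16)); // "ff000000"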

`Math.trunc` vs `|0` vs `<<0` vs `>>0` vs `&-1` vs `^0`

I have just found that in ES6 there's a new math method: Math.trunc.
I have read its description in the MDN article, and it sounds like using |0.
Moreover, <<0, >>0, &-1, ^0 also do similar things (thanks #kojiro & #Bergi).
After some tests, it seems that the only differences are:
Math.trunc returns -0 for numbers in the interval (-1, -0]. Bitwise operators return 0.
Math.trunc returns NaN for non-numbers. Bitwise operators return 0.
Are there more differences (among all of them)?
n       | Math.trunc | Bitwise operators
--------+------------+------------------
42.84   | 42         | 42
13.37   | 13         | 13
0.123   | 0          | 0
0       | 0          | 0
-0      | -0         | 0
-0.123  | -0         | 0
-42.84  | -42        | -42
NaN     | NaN        | 0
"foo"   | NaN        | 0
void(0) | NaN        | 0
How about Math.trunc(Math.pow(2,31)) vs. Math.pow(2,31) | 0?
Bitwise operations are performed on signed 32-bit integers. So, when you do Math.pow(2, 31) you get this representation in bits: "10000000000000000000000000000000". Because this number has to be converted to signed 32-bit form, we now have a 1 in the sign bit position. This means that we are looking at a negative number in signed 32-bit form. Then when we do the bitwise OR with 0 we get the same thing in signed 32-bit form. In decimal it is -2147483648.
Side note: in signed 32-bit form, the representable range in binary is [10000000000000000000000000000000, 01111111111111111111111111111111]. In decimal (base 10) this range is [-2147483648, 2147483647].
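A quick console check of the wrap-around described above:
console.log(Math.trunc(Math.pow(2, 31))); // 2147483648
console.log(Math.pow(2, 31) | 0);         // -2147483648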
In many programming languages with bitwise operators, attempting to do a bitwise operation on a non-integer is a type error:
>>> # Python
>>> 1 << 0; 1.2 << 0
1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for <<: 'float' and 'int'
In ECMA-262, a Number is a double-precision 64-bit binary format IEEE 754 value. In other words, there are no integers in JavaScript. As long as the values you're dealing with fit between -(Math.pow(2,31)) and Math.pow(2,31), the bitwise operations are a fast way to truncate floating point values. All of the different bitwise operations do different things, but in every example here they're essentially doing an identity operation. The critical difference is that JavaScript does a ToInt32 operation on the value before doing the "nothing else" part.
Bitwise identity operations:
i | 0 // For each bit that is 1, return 1|0. For each bit that is 0, return 0|0.
i ^ 0 // xor, but effectively same as previous.
i << 0 // Shift the value left zero bits.
i >> 0 // Shift the value right zero bits.
i & -1 // Identity mask
~~i // Not not - I had forgotten this one above.
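For completeness, the -0 and NaN differences from the table can be checked directly in the console (Object.is distinguishes -0 from +0):
console.log(Object.is(Math.trunc(-0.5), -0)); // true: Math.trunc preserves -0
console.log(Object.is(-0.5 | 0, -0));         // false: bitwise yields +0
console.log(Math.trunc(NaN), NaN | 0);        // NaN 0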

Javascript & and | symbols

I know that && and || are used in comparisons as AND and OR, respectively.
But what about the & and | operators? What are they used for?
Here's the code I'm testing:
var a = 2;
var b = 3;
var c = a & b;//2
var d = a | b;//3
They are bitwise operators that operate on the bits of the number. You have to think the bits of the numbers in order for the result to make sense:
a = 2 //0010
b = 3 //0011
a & b; //0010 & 0011 === 0010 === 2
https://developer.mozilla.org/en/JavaScript/Reference/Operators/Bitwise_Operators
They are mainly used for reading and manipulating binary data, such as .mp3 files.
They are bitwise AND and OR respectively. They are mainly used for low-level operations (not sure why you would want that here, but there may be many applications).
For example:
Decimal Number   Binary Form
20               10100
30               11110

  20 ==> 10100
& 30 ==> 11110
--------------
  20     10100 (when both bits are 1)

  20 ==> 10100
| 30 ==> 11110
--------------
  30     11110 (when either of the bits is 1)
Similarly there are other operators too. For example, XOR (^) sets a bit only when exactly one of the corresponding bits is 1:
  1101
^ 1001
------
  0100
I could provide the whole list but that would be of no use. Already many answers contain links to excellent resources. You might want to look at those. My answer just gives some idea.
JavaScript has the same set of bitwise operators as Java:
& and
| or
^ xor
~ not
>> signed right shift
>>> unsigned right shift
<< left shift
In Java, the bitwise operators work with integers. JavaScript doesn't have integers. It only has double
precision floating-point numbers. So, the bitwise operators convert their number operands into integers, do
their business, and then convert them back.
e.g.:
a = 2 //0010
b = 3 //0011
a & b; //0010 & 0011 === 0010 === 2
a | b; //0010 | 0011 === 0011 === 3
In most languages, these operators are very close to the hardware
and very fast. In JavaScript, they are very far from the hardware and very slow. JavaScript is rarely used for
doing bit manipulation.
As a result, in JavaScript programs, it is more likely that & is a mistyped && operator. The presence of the bitwise operators reduces some of the language's redundancy, making it easier for bugs to hide.
(JavaScript: The Good Parts by Douglas Crockford, plus Esailija's answer)
The && and || are logical operators, while & and | are bitwise operators
Bitwise Operators - Bitwise operators treat their operands as a sequence of 32 bits (zeros and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
assume the variable 'a' to be 13 (binary 1101) and 'b' to be 9 (binary 1001)
Bitwise AND Operator (&) (JavaScript) - This is the bitwise AND operator which returns a 1 for each bit position where the corresponding bits of both its operands are 1. The following code would return 9 (1001):
Code:
result = a & b;
Bitwise OR Operator (|) (JavaScript) - This is the bitwise OR operator and returns a one for each bit position where one or both of the corresponding bits of its operands is a one. This example would return 13 (1101):
Code:
result = a | b;
OPERATORS: & | ^ ~ << >> >>>
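A typical practical use of & and | in JavaScript is packing boolean flags into a single number (the flag names below are made up for illustration):
var READ  = 4; // 100
var WRITE = 2; // 010
var EXEC  = 1; // 001

var perms = READ | WRITE;          // 110 (6): set two flags
console.log((perms & READ) !== 0); // true: READ flag is set
console.log((perms & EXEC) !== 0); // false: EXEC flag is not set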

What is the best method to convert floating point to an integer in JavaScript?

There are several different methods for converting floating point numbers to integers in JavaScript. My question is what method gives the best performance, is most compatible, or is considered the best practice?
Here are a few methods that I know of:
var a = 2.5;
window.parseInt(a); // 2
Math.floor(a); // 2
a | 0; // 2
I'm sure there are others out there. Suggestions?
According to this website:
parseInt is occasionally used as a means of turning a floating point number into an integer. It is very ill suited to that task because if its argument is of numeric type it will first be converted into a string and then parsed as a number...
For rounding numbers to integers one of Math.round, Math.ceil and Math.floor are preferable...
Apparently double bitwise-not is a very fast way to truncate a number toward zero (note it truncates rather than floors, so it only matches Math.floor for positive numbers):
var x = 2.5;
console.log(~~x); // 2
Used to be an article here, getting a 404 now though: http://james.padolsey.com/javascript/double-bitwise-not/
Google has it cached: http://74.125.155.132/search?q=cache:wpZnhsbJGt0J:james.padolsey.com/javascript/double-bitwise-not/+double+bitwise+not&cd=1&hl=en&ct=clnk&gl=us
But the Wayback Machine saves the day! http://web.archive.org/web/20100422040551/http://james.padolsey.com/javascript/double-bitwise-not/
From "Javascript: The Good Parts" from Douglas Crockford:
Number.prototype.integer = function () {
  return Math[this < 0 ? 'ceil' : 'floor'](this);
};
Doing that you are adding a method to every Number object.
Then you can use it like that:
var x = 1.2, y = -1.2;
x.integer(); // 1
y.integer(); // -1
(-10 / 3).integer(); // -3
The 'best' way depends on:
rounding mode: what type of rounding (of the float to integer) you expect/require
for positive and/or negative numbers that have a fractional part.
Common examples:
float | trunc | floor | ceil | near (half up)
------+-------+-------+-------+---------------
+∞ | +∞ | +∞ | +∞ | +∞
+2.75 | +2 | +2 | +3 | +3
+2.5 | +2 | +2 | +3 | +3
+2.25 | +2 | +2 | +3 | +2
+0 | +0 | +0 | +0 | +0
NaN | NaN | NaN | NaN | NaN
-0 | -0 | -0 | -0 | -0
-2.25 | -2 | -3 | -2 | -2
-2.5 | -2 | -3 | -2 | -2
-2.75 | -2 | -3 | -2 | -3
-∞ | -∞ | -∞ | -∞ | -∞
For float to integer conversions we commonly expect "truncation"
(aka "round towards zero" aka "round away from infinity").
Effectively this just 'chops off' the fractional part of a floating point number.
Most techniques and (internally) built-in methods behave this way.
input: how your (floating point) number is represented:
String
Commonly radix/base: 10 (decimal)
floating point ('internal') Number
output: what you want to do with the resulting value:
(intermediate) output String (default radix 10) (on screen)
perform further calculations on resulting value
range:
in what numerical range do you expect input/calculation-results
and for which range do you expect corresponding 'correct' output.
Only after these considerations are answered can we think about appropriate method(s) and speed!
Per ECMAScript 262 spec: all numbers (type Number) in javascript are represented/stored in:
"IEEE 754 Double Precision Floating Point (binary64)" format.
So integers are also represented in the same floating point format (as numbers without a fraction).
Note: most implementations do use more efficient (for speed and memory-size) integer-types internally when possible!
As this format stores 1 sign bit, 11 exponent bits and the first 53 significant bits ("mantissa"), we can say that: only Number-values between -2^52 and +2^52 can have a fraction.
In other words: all representable positive and negative Number-values between 2^52 and (almost) 2^1024 (the exponent maxes out at 2^11/2 = 1024, at which point the format calls it a day: Infinity) are already integers (internally rounded, as there are no bits left to represent the remaining fractional and/or least significant integer digits).
And there is the first 'gotcha':
You can not control the internal rounding-mode of Number-results for the built-in Literal/String to float conversions (rounding-mode: IEEE 754-2008 "round to nearest, ties to even") and built-in arithmetic operations (rounding-mode: IEEE 754-2008 "round-to-nearest").
For example:
2^52+0.25 = 4503599627370496.25 is rounded and stored as: 4503599627370496
2^52+0.50 = 4503599627370496.50 is rounded and stored as: 4503599627370496
2^52+0.75 = 4503599627370496.75 is rounded and stored as: 4503599627370497
2^52+1.25 = 4503599627370497.25 is rounded and stored as: 4503599627370497
2^52+1.50 = 4503599627370497.50 is rounded and stored as: 4503599627370498
2^52+1.75 = 4503599627370497.75 is rounded and stored as: 4503599627370498
2^52+2.50 = 4503599627370498.50 is rounded and stored as: 4503599627370498
2^52+3.50 = 4503599627370499.50 is rounded and stored as: 4503599627370500
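You can observe this tie-to-even rounding directly in the console:
console.log(Math.pow(2, 52) + 0.5); // 4503599627370496 (tie rounded to even)
console.log(Math.pow(2, 52) + 1.5); // 4503599627370498 (tie rounded to even)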
To control rounding your Number needs a fractional part (and at least one bit to represent that), otherwise ceil/floor/trunc/near returns the integer you fed into it.
To correctly ceil/floor/trunc a Number up to x significant fractional decimal digit(s), we only care if the corresponding lowest and highest decimal fractional value will still give us a binary fractional value after rounding (so not being ceiled or floored to the next integer).
So, for example, if you expect 'correct' rounding (for ceil/floor/trunc) up to 1 significant fractional decimal digit (x.1 to x.9), we need at least 3 bits (not 4) to give us a binary fractional value:
0.1 is closer to 1/(2^3=8)=0.125 than it is to 0, and 0.9 is closer to 1-1/(2^3=8)=0.875 than it is to 1.
Only up to ±2^(53-3)=±2^50 will all representable values have a non-zero binary fraction for no more than the first significant decimal fractional digit (values x.1 to x.9).
For 2 decimals ±2^(53-6)=±2^47, for 3 decimals ±2^(53-9)=±2^44, for 4 decimals ±2^(53-13)=±2^40, for 5 decimals ±2^(53-16)=±2^37, for 6 decimals ±2^(53-19)=±2^34, for 7 decimals ±2^(53-23)=±2^30, for 8 decimals ±2^(53-26)=±2^27, for 9 decimals ±2^(53-29)=±2^24, for 10 decimals ±2^(53-33)=±2^20, for 11 decimals ±2^(53-36)=±2^17, etc.
A "Safe Integer" in javascript is an integer:
that can be exactly represented as an IEEE-754 double precision number, and
whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation
(even though ±2^53 (as an exact power of 2) can exactly be represented, it is not a safe integer because it could also have been ±(2^53+1) before it was rounded to fit into the maximum of 53 most significant bits).
This effectively defines a subset range of (safely representable) integers between -2^53 and +2^53:
from: -(2^53 - 1) = -9007199254740991 (inclusive)
(a constant provided as static property Number.MIN_SAFE_INTEGER since ES6)
to: +(2^53 - 1) = +9007199254740991 (inclusive)
(a constant provided as static property Number.MAX_SAFE_INTEGER since ES6)
Trivial polyfill for these 2 new ES6 constants:
Number.MIN_SAFE_INTEGER || (Number.MIN_SAFE_INTEGER =
  -(Number.MAX_SAFE_INTEGER = 9007199254740991) // Math.pow(2,53)-1
);
Since ES6 there is also a complementary static method Number.isSafeInteger() which tests if the passed value is of type Number and is an integer within the safe integer range (returning a boolean true or false).
Note: will also return false for: NaN, Infinity and obviously String (even if it represents a number).
Polyfill example:
Number.isSafeInteger || (Number.isSafeInteger = function(value){
  return typeof value === 'number' &&
         value === Math.floor(value) &&
         value < 9007199254740992 &&
         value > -9007199254740992;
});
ECMAScript 2015 / ES6 provides a new static method Math.trunc()
to truncate a float to an integer:
Returns the integral part of the number x, removing any fractional digits. If x is already an integer, the result is x.
Or put simpler (MDN):
Unlike the other three Math methods: Math.floor(), Math.ceil() and Math.round(), the way Math.trunc() works is very simple and straightforward:
just truncate the dot and the digits behind it, no matter whether the argument is a positive number or a negative number.
We can further explain (and polyfill) Math.trunc() as such:
Math.trunc || (Math.trunc = function(n){
  return n < 0 ? Math.ceil(n) : Math.floor(n);
});
Note, the above polyfill's payload can potentially be better pre-optimized by the engine compared to:
Math[n < 0 ? 'ceil' : 'floor'](n);
Usage: Math.trunc(/* Number or String */)
Input: (Integer or Floating Point) Number (but will happily try to convert a String to a Number)
Output: (Integer) Number (but will happily try to convert Number to String in a string-context)
Range: -2^52 to +2^52 (beyond this we should expect 'rounding errors' (and at some point scientific/exponential notation), simply because our Number input in IEEE 754 has already lost fractional precision: Numbers between ±2^52 and ±2^53 are already internally rounded integers (for example 4503599627370509.5 is internally already represented as 4503599627370510) and beyond ±2^53 the integers also lose precision (powers of 2)).
Float to integer conversion by subtracting the Remainder (%) of a division by 1:
Example: result = n-n%1 (or n-=n%1)
This should also truncate floats. Since the Remainder operator has a higher precedence than Subtraction we effectively get: (n)-(n%1).
For positive Numbers it's easy to see that this floors the value: (2.5) - (0.5) = 2,
for negative Numbers this ceils the value: (-2.5) - (-0.5) = -2 (because --=+ so (-2.5) + (0.5) = -2).
Since the input and output are Number we should get the same useful range and output compared to ES6 Math.trunc() (or its polyfill).
Note: though I fear (not sure) there might be differences: because we are doing arithmetic (which internally uses rounding mode "round to nearest, ties to even", aka Banker's Rounding) on the original Number (the float) and a second derived Number (the fraction), this seems to invite compounding representation and arithmetic rounding errors, thus potentially returning a float after all.
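A minimal sketch of the remainder technique (the function name is illustrative):
function truncByRemainder(n) {
  return n - n % 1; // % keeps the sign of the dividend, so this truncates
}
console.log(truncByRemainder(2.5));  // 2
console.log(truncByRemainder(-2.5)); // -2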
Float to integer conversion by (ab-)using bitwise operations:
This works by internally forcing a (floating point) Number conversion (truncation and overflow) to a signed 32-bit integer value (two's complement) by using a bitwise operation on a Number (and the result is converted back to a (floating point) Number which holds just the integer value).
Again, input and output is Number (and again silent conversion from String-input to Number and Number-output to String).
More importantly though (and usually forgotten and not explained):
depending on bitwise operation and the number's sign, the useful range will be limited between:
-2^31 to +2^31 (like ~~num or num|0 or num>>0) OR 0 to +2^32 (num>>>0).
This should be further clarified by the following lookup-table (containing all 'critical' examples):
n | n>>0 OR n<<0 OR | n>>>0 | n < 0 ? -(-n>>>0) : n>>>0
| n|0 OR n^0 OR ~~n | |
| OR n&0xffffffff | |
----------------------------+-------------------+-------------+---------------------------
+4294967298.5 = (+2^32)+2.5 | +2 | +2 | +2
+4294967297.5 = (+2^32)+1.5 | +1 | +1 | +1
+4294967296.5 = (+2^32)+0.5 | 0 | 0 | 0
+4294967296 = (+2^32) | 0 | 0 | 0
+4294967295.5 = (+2^32)-0.5 | -1 | +4294967295 | +4294967295
+4294967294.5 = (+2^32)-1.5 | -2 | +4294967294 | +4294967294
etc... | etc... | etc... | etc...
+2147483649.5 = (+2^31)+1.5 | -2147483647 | +2147483649 | +2147483649
+2147483648.5 = (+2^31)+0.5 | -2147483648 | +2147483648 | +2147483648
+2147483648 = (+2^31) | -2147483648 | +2147483648 | +2147483648
+2147483647.5 = (+2^31)-0.5 | +2147483647 | +2147483647 | +2147483647
+2147483646.5 = (+2^31)-1.5 | +2147483646 | +2147483646 | +2147483646
etc... | etc... | etc... | etc...
+1.5 | +1 | +1 | +1
+0.5 | 0 | 0 | 0
0 | 0 | 0 | 0
-0.5 | 0 | 0 | 0
-1.5 | -1 | +4294967295 | -1
etc... | etc... | etc... | etc...
-2147483646.5 = (-2^31)+1.5 | -2147483646 | +2147483650 | -2147483646
-2147483647.5 = (-2^31)+0.5 | -2147483647 | +2147483649 | -2147483647
-2147483648 = (-2^31) | -2147483648 | +2147483648 | -2147483648
-2147483648.5 = (-2^31)-0.5 | -2147483648 | +2147483648 | -2147483648
-2147483649.5 = (-2^31)-1.5 | +2147483647 | +2147483647 | -2147483649
-2147483650.5 = (-2^31)-2.5 | +2147483646 | +2147483646 | -2147483650
etc... | etc... | etc... | etc...
-4294967294.5 = (-2^32)+1.5 | +2 | +2 | -4294967294
-4294967295.5 = (-2^32)+0.5 | +1 | +1 | -4294967295
-4294967296 = (-2^32) | 0 | 0 | 0
-4294967296.5 = (-2^32)-0.5 | 0 | 0 | 0
-4294967297.5 = (-2^32)-1.5 | -1 | +4294967295 | -1
-4294967298.5 = (-2^32)-2.5 | -2 | +4294967294 | -2
Note 1: the last column extends the negative range down to -4294967295 using (n < 0 ? -(-n>>>0) : n>>>0).
Note 2: bitwise introduces its own conversion-overhead(s) (severity vs Math depends on actual implementation, so bitwise could be faster (often on older historic browsers)).
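The extended-range variant from the table's last column, written out as a function (a sketch; the name is made up):
function truncViaUint32(n) {
  // >>> 0 covers 0 to 2^32-1; negating twice extends the negative side
  return n < 0 ? -(-n >>> 0) : n >>> 0;
}
console.log(truncViaUint32(4294967295.5));  // 4294967295
console.log(truncViaUint32(-4294967295.5)); // -4294967295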
Obviously, if your 'floating point' number was a String to begin with,
parseInt(/*String*/, /*Radix*/) would be an appropriate choice to parse it into an integer Number.
parseInt() will truncate as well (for positive and negative numbers).
The range is again limited to IEEE 754 double precision floating point as explained above for the Math method(s).
Finally, if you have a String and expect a String as output, you could also chop off the radix point and fraction (which also gives you a larger accurate truncation range compared to IEEE 754 double precision floating point (±2^52))!
EXTRA:
From the info above you should now have all you need to know.
If for example you'd want round away from zero (aka round towards infinity) you could modify the Math.trunc() polyfill, for example:
Math.intToInf || (Math.intToInf = function(n){
  return n < 0 ? Math.floor(n) : Math.ceil(n);
});
The answer has already been given, but just to be clear:
Use the Math library for this: the round, ceil or floor functions.
parseInt is for converting a string to an int, which is not what is needed here.
toFixed is for converting a float to a string, also not what is needed here.
Since the Math functions do not do any conversions to or from a string, they will be faster than any of the other choices, which are wrong anyway.
You can use Number(a).toFixed(0);
Or even just a.toFixed(0);
Edit:
That's rounding to 0 places, slightly different than truncating, and as someone else suggested, toFixed returns a string, not a raw integer. Useful for display purposes.
var num = 2.7; // typeof num is "Number"
num.toFixed(0) == "3"
var i = parseInt(n, 10);
If you don't specify a radix, values like '010' could be treated as octal in older engines (so the result would be 8, not 10); ES5 removed the octal interpretation from parseInt, but passing the radix explicitly is still the safe habit.
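For example:
console.log(parseInt('010', 10)); // 10
console.log(parseInt('2.5', 10)); // 2 (parsing stops at the '.')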
Using bitwise operators. It may not be the clearest way of converting to an integer, but it works on any kind of datatype.
Suppose your function takes an argument value, and the function works in such a way that value must always be an integer (and 0 is accepted). Then any of the following will assign value as an integer:
value = ~~value;
value = value | 0;
value = value & 0xFF; // one byte; use this if you want to limit the integer to
                      // a predefined number of bits/bytes
The best part is that this works with strings (what you might get from a text input, etc.) that are numbers: ~~("123.45") === 123. Any non-numeric values result in 0, i.e.,
~~(undefined) === 0
~~(NaN) === 0
~~("ABC") === 0
It does work with hexadecimal numbers as strings (with a 0x prefix)
~~("0xAF") === 175
There is some type coercion involved, I suppose. I'll do some performance tests to compare these to parseInt() and Math.floor(), but I like having the extra convenience of no errors being thrown and getting a 0 for non-numbers.
So I made a benchmark: on Chrome, when the input is already a number, the fastest would be ~~num and num|0, Math.floor runs at half their speed, and the slowest would be parseInt (see here).
EDIT: it seems another person has already made a rounding benchmark (more results) with additional ways: num>>0 (as fast as |0) and num - num%1 (sometimes fast).
The question appears to be asking specifically about converting from a float to an int. My understanding is that the way to do this is to use toFixed. So...
var myFloat = 2.5;
var myInt = myFloat.toFixed(0);
Does anyone know if Math.floor() is more or less performant than Number.toFixed()?
you could also do it this way:
var string = '1';
var integer = string * 1;
parseInt() is probably the best one. a | 0 doesn't do what you really want for all inputs (it just yields 0 if a is an undefined or null value, which means an empty object or array passes the test), and Math.floor rounds toward negative infinity, which differs from truncation for negative numbers.
