What is the use of "|" (pipe) symbol in a JS array - javascript

I have a JS array which is being used as follows in our existing code:
temp = charArray[0 | Math.random() * 26];
Wanted to know what exactly is the usage of "|" symbol in the above code and are there more such operators?

From the MDN:
Bitwise operators treat their operands as a set of 32 bits (zeros and
ones) and return standard JavaScript numerical values.
As the 32 bit part is (a part of) the integer part of the IEEE754 representation of the number, this is just a trick to remove the non integer part of the number (be careful that it also breaks big integers not fitting in 32 bits!).
It's equivalent to
temp = charArray[Math.floor(Math.random() * 26)];
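A minimal sketch of both behaviors (the values are illustrative):

```javascript
// `0 | x` truncates toward zero via 32-bit integer conversion,
// while Math.floor rounds toward negative infinity.
var r = 25.9;
console.log(0 | r);            // 25
console.log(Math.floor(r));    // 25 -- same result for non-negative numbers

// They differ for negative numbers:
console.log(0 | -2.5);         // -2
console.log(Math.floor(-2.5)); // -3

// ...and `| 0` breaks outside the signed 32-bit range:
console.log(0 | Math.pow(2, 32));         // 0
console.log(Math.floor(Math.pow(2, 32))); // 4294967296
```

So the two are only interchangeable for non-negative numbers below 2^31.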

| is bitwise OR, which means, that all bits that are 1 in either of the arguments will be 1 in the result. A bitwise OR with 0 returns the given input interpreted as an integer.
In your code it is mainly used to truncate the result of
Math.random() * 26
to an integer. The bottom line is:
var a = 5.6 | 0 //a=5
Explanation:
Let's take
var a = 5; //binary - 101
var b = 6; //binary - 110

  a|b      a|a      a|0
  101      101      101
  110      101      000
------   ------   ------
111-->7  101-->5  101-->5

Related

Why does | bitwise OR operator evaluate both sides when 1 is found in first side? [duplicate]

I'm someone who writes code just for fun and haven't really delved into it in either an academic or professional setting, so stuff like these bitwise operators really escapes me.
I was reading an article about JavaScript, which apparently supports bitwise operations. I keep seeing this operation mentioned in places, and I've tried reading about it to figure out what exactly it is, but I just don't seem to get it at all. So what are they? Clear examples would be great! :D
Just a few more questions - what are some practical applications of bitwise operations? When might you use them?
Since nobody has broached the subject of why these are useful:
I use bitwise operations a lot when working with flags. For example, if you want to pass a series of flags to an operation (say, File.Open(), with Read mode and Write mode both enabled), you could pass them as a single value. This is accomplished by assigning each possible flag its own bit in a bitset (byte, short, int, or long). For example:
Read: 00000001
Write: 00000010
So if you want to pass read AND write, you would pass (READ | WRITE) which then combines the two into
00000011
Which can then be decoded on the other end like:
if ((flag & Read) != 0) { //...
which checks
00000011 &
00000001
which returns
00000001
which is not 0, so the flag does specify READ.
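The same flag pattern in JavaScript (READ and WRITE are illustrative constants, not a real File API):

```javascript
var READ  = 1; // 00000001
var WRITE = 2; // 00000010

// Combine flags with OR:
var mode = READ | WRITE; // 00000011 === 3

// Check individual flags with AND:
console.log((mode & READ) !== 0);  // true  -- READ is set
console.log((mode & WRITE) !== 0); // true  -- WRITE is set
console.log((mode & 4) !== 0);     // false -- a hypothetical third flag is not set
```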
You can use XOR to toggle various bits. I've used this when using a flag to specify directional inputs (Up, Down, Left, Right). For example, if a sprite is moving horizontally, and I want it to turn around:
Up: 00000001
Down: 00000010
Left: 00000100
Right: 00001000
Current: 00000100
I simply XOR the current value with (LEFT | RIGHT) which will turn LEFT off and RIGHT on, in this case.
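A sketch of that toggle, using the bit values listed above:

```javascript
var LEFT  = 4; // 00000100
var RIGHT = 8; // 00001000

var current = LEFT;             // sprite is currently moving left
current ^= (LEFT | RIGHT);      // XOR flips both horizontal bits at once
console.log(current === RIGHT); // true -- LEFT turned off, RIGHT turned on
```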
Bit Shifting is useful in several cases.
x << y
is the same as
x * 2^y
if you need to quickly multiply by a power of two, but watch out for shifting a 1-bit into the top bit - this makes the number negative unless it's unsigned. It's also useful when dealing with different sizes of data. For example, reading an integer from four bytes:
int val = (A << 24) | (B << 16) | (C << 8) | D;
Assuming that A is the most-significant byte and D the least. It would end up as:
A = 01000000
B = 00000101
C = 00101011
D = 11100011
val = 01000000 00000101 00101011 11100011
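The same byte-packing works directly in JavaScript:

```javascript
var A = 0x40; // 01000000
var B = 0x05; // 00000101
var C = 0x2B; // 00101011
var D = 0xE3; // 11100011

// Shift each byte into position and OR them together:
var val = (A << 24) | (B << 16) | (C << 8) | D;
console.log(val); // 1074080739, i.e. 01000000 00000101 00101011 11100011
```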
Colors are often stored this way (with the most significant byte either ignored or used as Alpha):
A = 255 = 11111111
R = 21 = 00010101
G = 255 = 11111111
B = 0 = 00000000
Color = 11111111 00010101 11111111 00000000
To find the values again, just shift the bits to the right until it's at the bottom, then mask off the remaining higher-order bits:
Int Alpha = Color >> 24
Int Red = Color >> 16 & 0xFF
Int Green = Color >> 8 & 0xFF
Int Blue = Color & 0xFF
0xFF is the same as 11111111. So essentially, for Red, you would be doing this:
Color >> 16 = (filled in 00000000 00000000)11111111 00010101 (removed 11111111 00000000)
00000000 00000000 11111111 00010101 &
00000000 00000000 00000000 11111111 =
00000000 00000000 00000000 00010101 (The original value)
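In JavaScript specifically, the unsigned shift >>> is the safer choice for the alpha byte: with the top bit set, the packed color is negative as a 32-bit integer, so >> would sign-extend. A sketch of the extraction:

```javascript
var color = (255 << 24) | (21 << 16) | (255 << 8) | 0; // packed ARGB

var alpha = (color >>> 24) & 0xFF;
var red   = (color >>> 16) & 0xFF;
var green = (color >>> 8)  & 0xFF;
var blue  =  color         & 0xFF;

console.log(alpha, red, green, blue); // 255 21 255 0
console.log(color >> 24);             // -1, not 255 -- why >>> matters here
```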
It is worth noting that the single-bit truth tables listed as other answers work on only one or two input bits at a time. What happens when you use integers, such as:
int x = 5 & 6;
The answer lies in the binary expansion of each input:
5 = 0 0 0 0 0 1 0 1
& 6 = 0 0 0 0 0 1 1 0
---------------------
0 0 0 0 0 1 0 0
Each pair of bits in each column is run through the "AND" function to give the corresponding output bit on the bottom line. So the answer to the above expression is 4. The CPU has done (in this example) 8 separate "AND" operations in parallel, one for each column.
I mention this because I still remember having this "AHA!" moment when I learned about this many years ago.
Bitwise operators are operators that work on a bit at a time.
AND is 1 only if both of its inputs are 1.
OR is 1 if one or more of its inputs are 1.
XOR is 1 only if exactly one of its inputs are 1.
NOT is 1 only if its input is 0.
These can be best described as truth tables. Inputs possibilities are on the top and left, the resultant bit is one of the four (two in the case of NOT since it only has one input) values shown at the intersection of the two inputs.
AND | 0 1     OR | 0 1
----+----     ---+----
 0  | 0 0      0 | 0 1
 1  | 0 1      1 | 1 1

XOR | 0 1     NOT | 0 1
----+----     ----+----
 0  | 0 1         | 1 0
 1  | 1 0
One example is if you only want the lower 4 bits of an integer, you AND it with 15 (binary 1111) so:
203: 1100 1011
AND 15: 0000 1111
------------------
IS 11: 0000 1011
These are the bitwise operators, all supported in JavaScript:
op1 & op2 -- The AND operator compares two bits and generates a result of 1 if both bits are 1; otherwise, it returns 0.
op1 | op2 -- The OR operator compares two bits and returns 1 if either of the bits is 1; it returns 0 only when both bits are 0.
op1 ^ op2 -- The EXCLUSIVE-OR operator compares two bits and returns 1 if exactly one of the bits is 1; it returns 0 if both bits are the same.
~op1 -- The COMPLEMENT operator inverts all of the bits of the operand.
op1 << op2 -- The SHIFT LEFT operator moves the bits to the left, discarding the leftmost bit and assigning the rightmost bit a value of 0. Each move to the left effectively multiplies op1 by 2.
op1 >> op2 -- The SHIFT RIGHT operator moves the bits to the right, discarding the rightmost bit and copying the leftmost (sign) bit into the vacated position, so the sign is preserved. Each move to the right effectively divides op1 in half.
op1 >>> op2 -- The SHIFT RIGHT - ZERO FILL operator moves the bits to the right, discarding the rightmost bit and assigning the leftmost bit a value of 0, so the sign bit is not preserved. Each move to the right effectively divides op1 in half (for non-negative numbers).
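A quick run through all seven operators with small numbers:

```javascript
var a = 13; // 1101
var b = 9;  // 1001

console.log(a & b);    // 9   (1001)
console.log(a | b);    // 13  (1101)
console.log(a ^ b);    // 4   (0100)
console.log(~a);       // -14 (two's complement: ~n === -(n + 1))
console.log(a << 1);   // 26
console.log(-9 >> 1);  // -5  (sign bit copied in from the left)
console.log(-9 >>> 1); // 2147483643 (zero filled in from the left)
```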
In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the processor, and is used to manipulate values for comparisons and calculations.
operations:
bitwise AND
bitwise OR
bitwise NOT
bitwise XOR
etc
AND | 0 1     OR | 0 1
----+----     ---+----
 0  | 0 0      0 | 0 1
 1  | 0 1      1 | 1 1

XOR | 0 1     NOT | 0 1
----+----     ----+----
 0  | 0 1         | 1 0
 1  | 1 0
Eg.
203: 1100 1011
AND 15: 0000 1111
------------------
= 11: 0000 1011
Uses of bitwise operator
The left-shift and right-shift operators are equivalent to multiplication and division by 2^y, respectively (x << y is x * 2^y; x >> y is x / 2^y, truncated).
Eg.
#include <stdio.h>
int main()
{
    int x = 19;
    printf("x << 1 = %d\n", x << 1);
    printf("x >> 1 = %d\n", x >> 1);
    return 0;
}
// Output: x << 1 = 38, x >> 1 = 9
The & operator can be used to quickly check if a number is odd or even
Eg.
#include <stdio.h>
int main()
{
    int x = 19;
    (x & 1) ? printf("Odd") : printf("Even");
    return 0;
}
// Output: Odd
Quick find minimum of x and y without if else statement
Eg.
int min(int x, int y)
{
    return y ^ ((x ^ y) & -(x < y));
}
Decimal to binary
conversion
Eg.
#include <stdio.h>
int main()
{
    int n, c, k;
    printf("Enter an integer in decimal number system\n");
    scanf("%d", &n);
    printf("%d in binary number system is:\n", n);
    for (c = 31; c >= 0; c--)
    {
        k = n >> c;
        if (k & 1)
            printf("1");
        else
            printf("0");
    }
    printf("\n");
    return 0;
}
XOR-based encryption is a popular technique because of its simplicity and its rare use by programmers.
The bitwise XOR operator is the most useful operator from a technical interview perspective.
Be careful when shifting negative numbers: >> propagates the sign bit while >>> does not.
There is also a wide range of other uses for bitwise logic.
To break it down a bit more, it has a lot to do with the binary representation of the value in question.
For example (in decimal):
x = 8
y = 1
would come out to (in binary):
x = 1000
y = 0001
From there, you can do computational operations such as 'and' or 'or'; in this case:
x | y =
1000
0001 |
------
1001
or...9 in decimal
Hope this helps.
When the term "bitwise" is mentioned, it is sometimes to clarify that it is not a "logical" operator.
For example in JavaScript, bitwise operators treat their operands as a sequence of 32 bits (zeros and ones); meanwhile, logical operators are typically used with Boolean (logical) values but can work with non-Boolean types.
Take expr1 && expr2 for example.
Returns expr1 if it can be converted
to false; otherwise, returns expr2.
Thus, when used with Boolean values,
&& returns true if both operands are
true; otherwise, returns false.
a = "Cat" && "Dog" // t && t returns Dog
a = 2 && 4 // t && t returns 4
As others have noted, 2 & 4 is a bitwise AND, so it will return 0.
You can copy the following to test.html or something and test:
<html>
<body>
<script>
alert("\"Cat\" && \"Dog\" = " + ("Cat" && "Dog") + "\n"
+ "2 && 4 = " + (2 && 4) + "\n"
+ "2 & 4 = " + (2 & 4));
</script>
</body>
</html>
It might help to think of it this way. This is how AND (&) works:
It basically asks, for each bit position, whether both bits are ones. So if you have the two numbers 5 and 3, they will be converted into binary and the computer will think
5: 00000101
3: 00000011
are both one: 00000001
0 is false, 1 is true
So the AND of 5 and 3 is one. The OR (|) operator does the same thing except only one of the numbers must be one to output 1, not both.
I kept hearing about how slow JavaScript bitwise operators were. I did some tests for my latest blog post and found out they were 40% to 80% faster than the arithmetic alternative in several tests. Perhaps they used to be slow. In modern browsers, I love them.
I have one case in my code that will be faster and easier to read because of this. I'll keep my eyes open for more.

Division and Power in Javascript

I have the following code to divide a variable by 100 and power it.
var a = 1;
var b = (a / 100) ^ 2;
The value in 'b' becomes 2 when it should be 0.01 ^ 2 = 0.0001.
Why is that?
^ is not the exponent operator. It's the bitwise XOR operator. To apply a power to a number, use Math.pow():
var b = Math.pow(a / 100, 2);
As to why you get 2 as the result when you use ^, bitwise operators compare the individual bits of two numbers to produce a result. This first involves converting both operands to integers by removing the fractional part. Converting 0.01 to an integer produces 0, so you get:
00000000 XOR 00000010 (0 ^ 2)
00000010 (2)
Try this:
2 ^ 10
It gives you 8. This is easily explained: JS does not have a power operator, but an XOR: MDN.
You are looking for Math.pow (MDN)
Power in JavaScript is computed with the Math.pow(x, y) function, not by typing ^ between the operands.
http://www.w3schools.com/jsref/jsref_pow.asp
Update 2021:
Exponentiation operator is available since ECMAScript 2016.
So, you can do something like:
var b = (a / 100) ** 2;
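Putting the three together (the exact printed value of the squares may carry tiny floating-point noise):

```javascript
var a = 1;
console.log((a / 100) ^ 2);        // 2 -- 0.01 truncates to 0, and 0 XOR 2 is 2
console.log(Math.pow(a / 100, 2)); // ~0.0001
console.log(Math.pow(2, 10));      // 1024 -- what 2 ^ 10 was probably meant to be
console.log(2 ^ 10);               // 8 -- XOR: 0010 ^ 1010 === 1000
```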

Endianness of JavaScript integer literals

If I write
var n = 0x1234
in Javascript, is
n == 4660
always true? The question can also be stated this way: Does 0x1234 denote a sequence of bytes, with 0x12 being the first and 0x34 being the last byte? Or does 0x1234 denote a number in base 16, with the left digit being the most significant?
In the first case 0x1234 might be 4660 if interpreted as big endian and 13330 if interpreted as little endian.
In the latter case 0x1234 always equals 1 * 4096 + 2 * 256 + 3 * 16 + 4 = 4660.
The 0x notation in JS always represents a number with base 16 with the left digit being the most significant.

Javascript & and | symbols

I know that && and || are used in comparisons as AND and OR, respectively.
But what about the & and | operators? What are they used for?
Here's the code I'm testing:
var a = 2;
var b = 3;
var c = a & b;//2
var d = a | b;//3
Difference between && , || and &, | ....
They are bitwise operators that operate on the bits of the number. You have to think the bits of the numbers in order for the result to make sense:
a = 2 //0010
b = 3 //0011
a & b; //0010 & 0011 === 0010 === 2
https://developer.mozilla.org/en/JavaScript/Reference/Operators/Bitwise_Operators
They are mainly used for reading and manipulating binary data, such as .mp3 files.
They are bitwise AND and OR respectively. They are mainly used for low-level operations (it's not obvious why you would want that here, but there are many applications).
For example:
Decimal Number Binary Form
20 10100
30 11110
20 ==> 10100
& 30 ==> 11110
----- ----------
20 10100 (when both bits are 1)
20 ==> 10100
| 30 ==> 11110
----- ----------
30 11110 (when either of the bits is 1)
Similarly there are other operators too:
operator   meaning                example
xor (^)    exactly one bit is 1     1101
                                  ^ 1001
                                  ------
                                    0100
I could provide the whole list but that would be of no use. Already many answers contain links to excellent resources. You might want to look at those. My answer just gives some idea.
JavaScript has the same set of bitwise operators as Java:
& and
| or
^ xor
~ not
>> signed right shift
>>> unsigned right shift
<< left shift
In Java, the bitwise operators work with integers. JavaScript doesn't have integers. It only has double
precision floating-point numbers. So, the bitwise operators convert their number operands into integers, do
their business, and then convert them back.
e.g.:
a = 2 //0010
b = 3 //0011
a & b; //0010 & 0011 === 0010 === 2
a | b; //0010 | 0011 === 0011 === 3
In most languages, these operators are very close to the hardware
and very fast. In JavaScript, they are very far from the hardware and very slow. JavaScript is rarely used for
doing bit manipulation.
As a result, in JavaScript programs, it is more likely that & is a mistyped && operator. The presence of the bitwise operators reduces some of the language's redundancy, making it easier for bugs to hide.
Javascript: The Good Parts by Douglas Crockford + Esailija answer
The && and || are logical operators, while & and | are bitwise operators
Bitwise Operators - Bitwise operators treat their operands as a sequence of 32 bits (zeros and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.
assume the variable 'a' to be 13 (binary 1101) and 'b' to be 9 (binary 1001)
Bitwise AND Operator (&) (JavaScript) - This is the bitwise AND operator which returns a 1 for each bit position where the corresponding bits of both its operands are 1. The following code would return 9 (1001):
Code:
result = a & b;
Bitwise OR Operator (|) (JavaScript) - This is the bitwise OR operator and returns a one for each bit position where one or both of the corresponding bits of its operands is a one. This example would return 13 (1101):
Code:
result = a | b;
Read more
OPERATORS: & | ^ ~ << >> >>>

What is the best method to convert floating point to an integer in JavaScript?

There are several different methods for converting floating point numbers to Integers in JavaScript. My question is what method gives the best performance, is most compatible, or is considered the best practice?
Here are a few methods that I know of:
var a = 2.5;
window.parseInt(a); // 2
Math.floor(a); // 2
a | 0; // 2
I'm sure there are others out there. Suggestions?
According to this website:
parseInt is occasionally used as a means of turning a floating point number into an integer. It is very ill suited to that task because if its argument is of numeric type it will first be converted into a string and then parsed as a number...
For rounding numbers to integers one of Math.round, Math.ceil and Math.floor are preferable...
Apparently double bitwise-not is the fastest way to truncate a number:
var x = 2.5;
console.log(~~x); // 2
Used to be an article here, getting a 404 now though: http://james.padolsey.com/javascript/double-bitwise-not/
Google has it cached: http://74.125.155.132/search?q=cache:wpZnhsbJGt0J:james.padolsey.com/javascript/double-bitwise-not/+double+bitwise+not&cd=1&hl=en&ct=clnk&gl=us
But the Wayback Machine saves the day! http://web.archive.org/web/20100422040551/http://james.padolsey.com/javascript/double-bitwise-not/
From "Javascript: The Good Parts" from Douglas Crockford:
Number.prototype.integer = function () {
    return Math[this < 0 ? 'ceil' : 'floor'](this);
};
Doing that, you are adding a method to every Number object.
Then you can use it like that:
var x = 1.2, y = -1.2;
x.integer(); // 1
y.integer(); // -1
(-10 / 3).integer(); // -3
The 'best' way depends on:
rounding mode: what type of rounding (of the float to integer) you expect/require
for positive and/or negative numbers that have a fractional part.
Common examples:
float | trunc | floor | ceil | near (half up)
------+-------+-------+-------+---------------
+∞ | +∞ | +∞ | +∞ | +∞
+2.75 | +2 | +2 | +3 | +3
+2.5 | +2 | +2 | +3 | +3
+2.25 | +2 | +2 | +3 | +2
+0 | +0 | +0 | +0 | +0
NaN | NaN | NaN | NaN | NaN
-0 | -0 | -0 | -0 | -0
-2.25 | -2 | -3 | -2 | -2
-2.5 | -2 | -3 | -2 | -2
-2.75 | -2 | -3 | -2 | -3
-∞ | -∞ | -∞ | -∞ | -∞
For float to integer conversions we commonly expect "truncation"
(aka "round towards zero" aka "round away from infinity").
Effectively this just 'chops off' the fractional part of a floating point number.
Most techniques and (internally) built-in methods behave this way.
input: how your (floating point) number is represented:
String
Commonly radix/base: 10 (decimal)
floating point ('internal') Number
output: what you want to do with the resulting value:
(intermediate) output String (default radix 10) (on screen)
perform further calculations on resulting value
range:
in what numerical range do you expect input/calculation-results
and for which range do you expect corresponding 'correct' output.
Only after these considerations are answered we can think about appropriate method(s) and speed!
Per ECMAScript 262 spec: all numbers (type Number) in javascript are represented/stored in:
"IEEE 754 Double Precision Floating Point (binary64)" format.
So integers are also represented in the same floating point format (as numbers without a fraction).
Note: most implementations do use more efficient (for speed and memory-size) integer-types internally when possible!
As this format stores 1 sign bit, 11 exponent bits and 53 significant bits ("mantissa"), we can say that: only Number-values between -2^52 and +2^52 can have a fraction.
In other words: all representable positive and negative Number-values between 2^52 and (almost) 2^1024 (at which point the format calls it a day: Infinity) are already integers (internally rounded, as there are no bits left to represent the remaining fractional and/or least significant integer digits).
And there is the first 'gotcha':
You can not control the internal rounding-mode of Number-results for the built-in Literal/String to float conversions (rounding-mode: IEEE 754-2008 "round to nearest, ties to even") and built-in arithmetic operations (rounding-mode: IEEE 754-2008 "round-to-nearest").
For example:
2^52+0.25 = 4503599627370496.25 is rounded and stored as: 4503599627370496
2^52+0.50 = 4503599627370496.50 is rounded and stored as: 4503599627370496
2^52+0.75 = 4503599627370496.75 is rounded and stored as: 4503599627370497
2^52+1.25 = 4503599627370497.25 is rounded and stored as: 4503599627370497
2^52+1.50 = 4503599627370497.50 is rounded and stored as: 4503599627370498
2^52+1.75 = 4503599627370497.75 is rounded and stored as: 4503599627370498
2^52+2.50 = 4503599627370498.50 is rounded and stored as: 4503599627370498
2^52+3.50 = 4503599627370499.50 is rounded and stored as: 4503599627370500
To control rounding your Number needs a fractional part (and at least one bit to represent that), otherwise ceil/floor/trunc/near returns the integer you fed into it.
To correctly ceil/floor/trunc a Number up to x significant fractional decimal digit(s), we only care if the corresponding lowest and highest decimal fractional value will still give us a binary fractional value after rounding (so not being ceiled or floored to the next integer).
So, for example, if you expect 'correct' rounding (for ceil/floor/trunc) up to 1 significant fractional decimal digit (x.1 to x.9), we need at least 3 bits (not 4) to give us a binary fractional value:
0.1 is closer to 1/2^3 = 0.125 than it is to 0, and 0.9 is closer to 1 - 1/2^3 = 0.875 than it is to 1.
Only up to ±2^(53-3) = ±2^50 will all representable values have a non-zero binary fraction for no more than the first significant decimal fractional digit (values x.1 to x.9).
For 2 decimals ±2^(53-6) = ±2^47, for 3 decimals ±2^(53-9) = ±2^44, for 4 decimals ±2^(53-13) = ±2^40, for 5 decimals ±2^(53-16) = ±2^37, for 6 decimals ±2^(53-19) = ±2^34, for 7 decimals ±2^(53-23) = ±2^30, for 8 decimals ±2^(53-26) = ±2^27, for 9 decimals ±2^(53-29) = ±2^24, for 10 decimals ±2^(53-33) = ±2^20, for 11 decimals ±2^(53-36) = ±2^17, etc..
A "Safe Integer" in javascript is an integer:
that can be exactly represented as an IEEE-754 double precision number, and
whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation
(even though ±2^53 (as an exact power of 2) can exactly be represented, it is not a safe integer because it could also have been ±(2^53+1) before it was rounded to fit into the maximum of 53 most significant bits).
This effectively defines a subset range of (safely representable) integers between -2^53 and +2^53:
from: -(2^53 - 1) = -9007199254740991 (inclusive)
(a constant provided as static property Number.MIN_SAFE_INTEGER since ES6)
to: +(2^53 - 1) = +9007199254740991 (inclusive)
(a constant provided as static property Number.MAX_SAFE_INTEGER since ES6)
Trivial polyfill for these 2 new ES6 constants:
Number.MIN_SAFE_INTEGER || (Number.MIN_SAFE_INTEGER=
-(Number.MAX_SAFE_INTEGER=9007199254740991) //Math.pow(2,53)-1
);
Since ES6 there is also a complementary static method Number.isSafeInteger() which tests if the passed value is of type Number and is an integer within the safe integer range (returning a boolean true or false).
Note: will also return false for: NaN, Infinity and obviously String (even if it represents a number).
Polyfill example:
Number.isSafeInteger || (Number.isSafeInteger = function(value){
return typeof value === 'number' &&
value === Math.floor(value) &&
value < 9007199254740992 &&
value > -9007199254740992;
});
ECMAScript 2015 / ES6 provides a new static method Math.trunc()
to truncate a float to an integer:
Returns the integral part of the number x, removing any fractional digits. If x is already an integer, the result is x.
Or put simpler (MDN):
Unlike the other three Math methods, Math.floor(), Math.ceil() and Math.round(), the way Math.trunc() works is very simple and straightforward:
just truncate the dot and the digits behind it, no matter whether the argument is a positive number or a negative number.
We can further explain (and polyfill) Math.trunc() as such:
Math.trunc || (Math.trunc = function(n){
return n < 0 ? Math.ceil(n) : Math.floor(n);
});
Note, the above polyfill's payload can potentially be better pre-optimized by the engine compared to:
Math[n < 0 ? 'ceil' : 'floor'](n);
Usage: Math.trunc(/* Number or String */)
Input: (Integer or Floating Point) Number (but will happily try to convert a String to a Number)
Output: (Integer) Number (but will happily try to convert Number to String in a string-context)
Range: -2^52 to +2^52 (beyond this we should expect 'rounding-errors' (and at some point scientific/exponential notation), plainly and simply because our Number input in IEEE 754 already lost fractional precision: Numbers between ±2^52 and ±2^53 are already internally rounded integers (for example 4503599627370509.5 is internally already represented as 4503599627370510) and beyond ±2^53 the integers also lose precision (only multiples of powers of 2 remain representable)).
Float to integer conversion by subtracting the Remainder (%) of a division by 1:
Example: result = n-n%1 (or n-=n%1)
This should also truncate floats. Since the Remainder operator has a higher precedence than Subtraction we effectively get: (n)-(n%1).
For positive Numbers it's easy to see that this floors the value: (2.5) - (0.5) = 2,
for negative Numbers this ceils the value: (-2.5) - (-0.5) = -2 (because --=+ so (-2.5) + (0.5) = -2).
Since the input and output are Number, we should get the same useful range and output compared to ES6 Math.trunc() (or its polyfill).
Note: though I fear (not sure) there might be differences: because we are doing arithmetic (which internally uses rounding mode "nearTiesEven" (aka Banker's Rounding)) on the original Number (the float) and a second derived Number (the fraction), this seems to invite compounding representation and arithmetic rounding errors, thus potentially returning a float after all.
Float to integer conversion by (ab-)using bitwise operations:
This works by internally forcing a (floating point) Number conversion (truncation and overflow) to a signed 32-bit integer value (two's complement) by using a bitwise operation on a Number (and the result is converted back to a (floating point) Number which holds just the integer value).
Again, input and output is Number (and again silent conversion from String-input to Number and Number-output to String).
More importantly, though (and usually forgotten and not explained):
depending on bitwise operation and the number's sign, the useful range will be limited between:
-2^31 to +2^31 (like ~~num or num|0 or num>>0) OR 0 to +2^32 (num>>>0).
This should be further clarified by the following lookup-table (containing all 'critical' examples):
n | n>>0 OR n<<0 OR | n>>>0 | n < 0 ? -(-n>>>0) : n>>>0
| n|0 OR n^0 OR ~~n | |
| OR n&0xffffffff | |
----------------------------+-------------------+-------------+---------------------------
+4294967298.5 = (+2^32)+2.5 | +2 | +2 | +2
+4294967297.5 = (+2^32)+1.5 | +1 | +1 | +1
+4294967296.5 = (+2^32)+0.5 | 0 | 0 | 0
+4294967296 = (+2^32) | 0 | 0 | 0
+4294967295.5 = (+2^32)-0.5 | -1 | +4294967295 | +4294967295
+4294967294.5 = (+2^32)-1.5 | -2 | +4294967294 | +4294967294
etc... | etc... | etc... | etc...
+2147483649.5 = (+2^31)+1.5 | -2147483647 | +2147483649 | +2147483649
+2147483648.5 = (+2^31)+0.5 | -2147483648 | +2147483648 | +2147483648
+2147483648 = (+2^31) | -2147483648 | +2147483648 | +2147483648
+2147483647.5 = (+2^31)-0.5 | +2147483647 | +2147483647 | +2147483647
+2147483646.5 = (+2^31)-1.5 | +2147483646 | +2147483646 | +2147483646
etc... | etc... | etc... | etc...
+1.5 | +1 | +1 | +1
+0.5 | 0 | 0 | 0
0 | 0 | 0 | 0
-0.5 | 0 | 0 | 0
-1.5 | -1 | +4294967295 | -1
etc... | etc... | etc... | etc...
-2147483646.5 = (-2^31)+1.5 | -2147483646 | +2147483650 | -2147483646
-2147483647.5 = (-2^31)+0.5 | -2147483647 | +2147483649 | -2147483647
-2147483648 = (-2^31) | -2147483648 | +2147483648 | -2147483648
-2147483648.5 = (-2^31)-0.5 | -2147483648 | +2147483648 | -2147483648
-2147483649.5 = (-2^31)-1.5 | +2147483647 | +2147483647 | -2147483649
-2147483650.5 = (-2^31)-2.5 | +2147483646 | +2147483646 | -2147483650
etc... | etc... | etc... | etc...
-4294967294.5 = (-2^32)+1.5 | +2 | +2 | -4294967294
-4294967295.5 = (-2^32)+0.5 | +1 | +1 | -4294967295
-4294967296 = (-2^32) | 0 | 0 | 0
-4294967296.5 = (-2^32)-0.5 | 0 | 0 | 0
-4294967297.5 = (-2^32)-1.5 | -1 | +4294967295 | -1
-4294967298.5 = (-2^32)-2.5 | -2 | +4294967294 | -2
Note 1: the last column extends the negative range down to -4294967295 using (n < 0 ? -(-n>>>0) : n>>>0).
Note 2: bitwise introduces its own conversion-overhead(s) (severity vs Math depends on actual implementation, so bitwise could be faster (often on older historic browsers)).
Obviously, if your 'floating point' number was a String to begin with,
parseInt(/*String*/, /*Radix*/) would be an appropriate choice to parse it into an integer Number.
parseInt() will truncate as well (for positive and negative numbers).
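For comparison (note the string round trip when parseInt is handed a number instead of a string):

```javascript
console.log(parseInt("2.7", 10)); // 2
console.log(parseInt(-2.7, 10));  // -2 -- the number is stringified first, then parsed
console.log(Math.trunc(-2.7));    // -2
console.log(Math.floor(-2.7));    // -3 -- floor is not truncation for negatives
```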
The range is again limited to IEEE 754 double precision floating point as explained above for the Math method(s).
Finally, if you have a String and expect a String as output, you could also chop off the radix point and fraction (which also gives you a larger accurate truncation range compared to IEEE 754 double precision floating point (±2^52))!
EXTRA:
From the info above you should now have all you need to know.
If for example you'd want round away from zero (aka round towards infinity) you could modify the Math.trunc() polyfill, for example:
Math.intToInf || (Math.intToInf = function(n){
return n < 0 ? Math.floor(n) : Math.ceil(n);
});
The answer has already been given but just to be clear.
Use the Math library for this. round, ceil or floor functions.
parseInt is for converting a string to an int which is not what is needed here
toFixed is for converting a float to a string also not what is needed here
Since the Math functions will not be doing any conversions to or from a string it will be faster than any of the other choices which are wrong anyway.
You can use Number(a).toFixed(0);
Or even just a.toFixed(0);
Edit:
That's rounding to 0 places, slightly different than truncating, and as someone else suggested, toFixed returns a string, not a raw integer. Useful for display purposes.
var num = 2.7; // typeof num is "Number"
num.toFixed(0) == "3"
var i = parseInt(n, 10);
If you don't specify a radix values like '010' will be treated as octal (and so the result will be 8 not 10).
Using bitwise operators. It may not be the clearest way of converting to an integer, but it works on any kind of datatype.
Suppose your function takes an argument value, and the function works in such a way that value must always be an integer (and 0 is accepted). Then any of the following will assign value as an integer:
value = ~~(value)
value = value | 0;
value = value & 0xFF; // one byte; use this if you want to limit the integer to
// a predefined number of bits/bytes
The best part is that this works with strings (what you might get from a text input, etc.) that contain numbers: ~~("123.45") === 123. Any non-numeric values result in 0, i.e.,
~~(undefined) === 0
~~(NaN) === 0
~~("ABC") === 0
It does work with hexadecimal numbers as strings (with a 0x prefix)
~~("0xAF") === 175
There is some type coercion involved, I suppose. I'll do some performance tests to compare these to parseInt() and Math.floor(), but I like having the extra convenience of no Errors being thrown and getting a 0 for non-numbers
So I made a benchmark: on Chrome, when the input is already a number, the fastest are ~~num and num|0, Math.floor runs at about half their speed, and parseInt is the slowest (see here).
EDIT: it seems someone else has already made a rounding benchmark (with more results) and found additional ways: num>>0 (as fast as |0) and num - num%1 (sometimes fast).
The question appears to be asking specifically about converting from a float to an int. My understanding is that the way to do this is to use toFixed. So...
var myFloat = 2.5;
var myInt = myFloat.toFixed(0);
Does anyone know if Math.floor() is more or less performant than Number.toFixed()?
you could also do it this way:
var string = '1';
var integer = string * 1;
parseInt() is probably the best one for string input. a | 0 doesn't always do what you really want: it yields 0 for undefined or null (so an empty object or array passes the test) and it truncates to a signed 32-bit integer. Math.floor works on numbers directly and rounds toward negative infinity rather than truncating.
