I was checking out an online game physics library today and came across the ~~ operator. I know a single ~ is a bitwise NOT; would that make ~~ the NOT of a NOT, which would give back the same value?
It removes everything after the decimal point because the bitwise operators implicitly convert their operands to signed 32-bit integers. This works whether the operands are (floating-point) numbers or strings, and the result is a number.
In other words, it yields:
function(x) {
    if (x < 0) return Math.ceil(x);
    else return Math.floor(x);
}
only if x is between -(2³¹) and 2³¹ - 1. Otherwise, overflow will occur and the number will "wrap around".
This may be considered useful to convert a function's string argument to a number, but both because of the possibility of overflow and because it is incorrect for non-integers, I would not use it that way except for "code golf" (i.e. pointlessly trimming bytes off the source code of your program at the expense of readability and robustness). I would use +x or Number(x) instead.
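For illustration, here is how the two approaches differ on the problematic inputs mentioned above (overflow and non-integer strings):
console.log(~~"2147483648");  // -2147483648 - wrapped around, since 2^31 does not fit in a signed 32-bit int
console.log(+"2147483648");   //  2147483648 - unary plus keeps the full value
console.log(~~"3.75");        //  3          - fractional part silently discarded
console.log(Number("3.75"));  //  3.75
console.log(~~"abc");         //  0          - invalid input becomes 0 rather than NaN
console.log(+"abc");          //  NaN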
How this is the NOT of the NOT
The number -43.2, for example, is:
-43.2₁₀ = 11111111111111111111111111010101₂
as a signed (two's complement) 32-bit binary number. (JavaScript ignores what is after the decimal point.) Inverting the bits gives:
NOT -43₁₀ = 00000000000000000000000000101010₂ = 42₁₀
Inverting again gives:
NOT 42₁₀ = 11111111111111111111111111010101₂ = -43₁₀
This differs from Math.floor(-43.2) in that negative numbers are rounded toward zero, not away from it. (The floor function, which would give -44 here, always rounds down to the next lower integer, regardless of whether the number is positive or negative.)
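A quick comparison of the truncating and flooring behaviours on a negative non-integer:
console.log(~~-43.2);           // -43 - truncates toward zero
console.log(Math.trunc(-43.2)); // -43
console.log(Math.floor(-43.2)); // -44 - always rounds down
console.log(Math.ceil(-43.2));  // -43 - always rounds up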
The first ~ operator coerces the operand to a 32-bit signed integer (converting it from a string or a boolean to a number first if necessary), then inverts all 32 bits. Officially ECMAScript numbers are all floating-point, but some numbers are implemented as 31-bit integers internally by the SpiderMonkey engine.
You can use it to turn a 1-element array into an integer. Floating-point values are converted according to the C rule, i.e. truncation of the fractional part.
The second ~ operator then inverts the bits back, so you know that you will have an integer. This is not the same as coercing a value to boolean in a conditional, because an empty object {} is truthy, whereas ~~{} evaluates to 0, which is falsy.
js>~~"yes"
0
js>~~3
3
js>~~"yes"
0
js>~~false
0
js>~~""
0
js>~~true
1
js>~~"3"
3
js>~~{}
0
js>~~{a:2}
0
js>~~[2]
2
js>~~[2,3]
0
js>~~{toString: function() {return 4}}
4
js>~~NaN
0
js>~~[4.5]
4
js>~~5.6
5
js>~~-5.6
-5
In ECMAScript 6, the closest equivalent of ~~ is Math.trunc:
Returns the integral part of a number by removing any fractional digits. It does not round any numbers.
Math.trunc(13.37) // 13
Math.trunc(42.84) // 42
Math.trunc(0.123) // 0
Math.trunc(-0.123) // -0
Math.trunc("-1.123")// -1
Math.trunc(NaN) // NaN
Math.trunc("foo") // NaN
Math.trunc() // NaN
The polyfill:
function trunc(x) {
    return x < 0 ? Math.ceil(x) : Math.floor(x);
}
The ~ operator seems to do -(N+1). So ~2 == -(2 + 1) == -3. If you do it again on -3 it turns it back: ~-3 == -(-3 + 1) == 2. It probably just converts a string to a number in a round-about way.
See this thread: http://www.sitepoint.com/forums/showthread.php?t=663275
Also, more detailed info is available here: http://dreaminginjavascript.wordpress.com/2008/07/04/28/
Given that ~N is -(N+1), ~~N is then -(-(N+1) + 1), which simplifies back to N and, evidently, leads to a neat trick.
Just a bit of a warning. The other answers here got me into some trouble.
The intent is to remove anything after the decimal point of a floating point number, but it has some corner cases that make it a bug hazard. I'd recommend avoiding ~~.
First, ~~ doesn't work on very large numbers.
~~1000000000000 == -727379968
As an alternative, use Math.trunc() (as Gajus mentioned, Math.trunc() returns the integer part of a floating point number but is only available in ECMAScript 6 compliant JavaScript). You can always make your own Math.trunc() for non-ECMAScript-6 environments by doing this:
if (!Math.trunc) {
    Math.trunc = function(value) {
        // Note: Math.sign() is itself an ECMAScript 6 addition, so avoid relying on it here.
        return value < 0 ? Math.ceil(value) : Math.floor(value);
    };
}
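A quick sanity check of Math.trunc (native or polyfilled as above) against the ~~ trick shows where the 32-bit limit bites:
console.log(Math.trunc(1000000000000.9)); //  1000000000000
console.log(~~1000000000000.9);           // -727379968 (wrapped around 2^32)
console.log(Math.trunc(-2.7));            // -2
console.log(~~-2.7);                      // -2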
I wrote a blog post on this for reference: http://bitlords.blogspot.com/2016/08/the-double-tilde-x-technique-in.html
Converting Strings to Numbers
console.log(~~-1); // -1
console.log(~~0); // 0
console.log(~~1); // 1
console.log(~~"-1"); // -1
console.log(~~"0"); // 0
console.log(~~"1"); // 1
console.log(~~true); // 1
console.log(~~false); // 0
~-1 is 0, which is falsy. That is why ~ is sometimes used to test the result of indexOf():
if (~someStr.indexOf("a")) {
// Found it
} else {
// Not Found
}
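To see why this works, note how ~ maps the possible return values of indexOf (a quick illustrative check):
const s = "hello";
console.log(~s.indexOf("h")); // -1 (found at index 0 -> truthy)
console.log(~s.indexOf("l")); // -3 (found at index 2 -> truthy)
console.log(~s.indexOf("x")); //  0 (not found, index -1 -> falsy)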
~~ can be used as a shorthand for Math.trunc()
~~8.29 // output 8
Math.trunc(8.29) // output 8
Here is an example of how this operator can be used efficiently, where it makes sense to use it:
leftOffset = -(~~$('html').css('padding-left').replace('px', '') + ~~$('body').css('margin-left').replace('px', '')),
Source: see the section "Interacting with points".
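The ~~ here quietly converts the CSS strings to numbers and collapses anything non-numeric to 0 (an illustrative sketch; the actual strings returned by .css() depend on the page):
console.log(~~'16');   // 16  - e.g. "16px" after the suffix has been stripped
console.log(~~'');     // 0   - an empty string becomes 0 rather than NaN
console.log(~~'auto'); // 0   - non-numeric values also collapse to 0
console.log(+'auto');  // NaN - unary plus would propagate NaN into the sum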
Tilde (~) has the algorithm -(N+1).
For example:
~0 = -(0+1) = -1
~5 = -(5+1) = -6
~-7 = -(-7+1) = 6
Double tilde is -(-(N+1)+1)
For example:
~~5 = -(-(5+1)+1) = 5
~~-3 = -(-(-3+1)+1) = -3
Triple tilde is -(-(-(N+1)+1)+1)
For example:
~~~2 = -(-(-(2+1)+1)+1) = -3
~~~3 = -(-(-(3+1)+1)+1) = -4
Use Math.abs(Math.trunc(-0.123)) if you want to make sure the minus sign is also removed (0 instead of -0).
In addition to truncating real numbers, ~~ can also be used as an operator for updating counters in an object. The ~~ applied to an undefined object property will resolve to zero, and will resolve to the same integer if that counter property already exists, which you then increment.
let words=["abc", "a", "b", "b", "bc", "a", "b"];
let wordCounts={};
words.forEach( word => wordCounts[word] = ~~wordCounts[word] + 1 );
console.log("b count == " + wordCounts["b"]); // 3
The following two assignments are equivalent.
wordCounts[word] = (wordCounts[word] ? wordCounts[word] : 0) + 1;
wordCounts[word] = ~~wordCounts[word] + 1;
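If newer syntax is available, the nullish coalescing operator (ES2020) expresses the same fallback more explicitly; this is a matter of taste rather than correctness:
let words = ["abc", "a", "b", "b", "bc", "a", "b"];
let wordCounts = {};
words.forEach(word => wordCounts[word] = (wordCounts[word] ?? 0) + 1);
console.log(wordCounts["b"]); // 3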
Related
I'm translating a library from JavaScript to C#, and came across this case:
// Javascript
var number = 3144134277.518717 | 0;
console.log(number); // -> -1150833019
From what I read in other posts, it might have been used to round values, but in this case the value isn't what I expect it to be (if it was rounded), and I can't reproduce the same behavior in C# with:
// C#
3144134277.5187168 | 0 // -> Operator '|' cannot be applied to operands
// of type 'double' and 'int'
// or
Convert.ToInt64(3144134277.5187168) | 0 // -> 3144134278
Thanks for any help!
The way | works in JavaScript is detailed in the spec, but primarily what you're seeing is that | implicitly converts its operands to 32-bit ints before doing the bitwise OR (which is a no-op here because the second operand is 0). So really what you're seeing is the result of the ToInt32 operation:
Let number be ? ToNumber(argument). (You can largely ignore this bit.)
If number is NaN, +0, -0, +∞, or -∞, return +0.
Let int be the mathematical value that is the same sign as number and whose magnitude is floor(abs(number)).
Let int32bit be int modulo 2^32.
If int32bit ≥ 2^31, return int32bit - 2^32; otherwise return int32bit.
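For reference, here is a direct JavaScript transcription of those steps (a sketch for illustration only; in JavaScript itself, value | 0 already performs this conversion, and for very large magnitudes double precision limits how faithfully the modulo step matches the spec's exact arithmetic):
function toInt32(value) {
    var number = Number(value);
    if (!Number.isFinite(number) || number === 0) return 0; // NaN, ±0, ±Infinity -> +0
    // Truncate toward zero: same sign as number, magnitude floor(abs(number)).
    var truncated = Math.sign(number) * Math.floor(Math.abs(number));
    // The spec's "modulo 2^32" always yields a non-negative result.
    var int32bit = ((truncated % 4294967296) + 4294967296) % 4294967296;
    return int32bit >= 2147483648 ? int32bit - 4294967296 : int32bit;
}
console.log(toInt32(3144134277.518717));                             // -1150833019
console.log(toInt32(3144134277.518717) === (3144134277.518717 | 0)); // true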
So in C#, I think that's roughly:
double value = 3144134277.5187168;
bool negative = value < 0;
long n = Convert.ToInt64(Math.Floor(Math.Abs(value)));
n = n % 4294967296;
n = n > 2147483648 ? n - 4294967296 : n;
int i = (int)n;
i = negative ? -i : i;
Console.WriteLine(i); // -1150833019
...written verbosely for clarity. Note that the sign isn't added back to the result in quite the same place as the spec; when I did that, it didn't work correctly, which probably has to do with differing definitions between the spec's "modulo" and C#'s % operator.
And just double-checking, those steps with -3144134277.5187168 give you 1150833019, which is as it's supposed to be as well.
Using JavaScript, I want to check whether a certain character is 32-bit or not. How can I do it? I have tried charCodeAt(), but it didn't work out for 32-bit characters.
Any suggestions/help will be much appreciated.
According to the docs, charCodeAt() returns an integer between 0 and 65535 (0xFFFF) representing a single UTF-16 code unit.
If you want the entire code point value, use codePointAt(). You can use string.codePointAt(pos) to easily check whether a character is represented by 1 or 2 code units.
Code point values greater than 0xFFFF mean the character takes 2 code units, for a total of 32 bits.
function is32Bit(c) {
return c.codePointAt(0) > 0xFFFF;
}
console.log(is32Bit("𠮷")); // true
console.log(is32Bit("a")); // false
console.log(is32Bit("₩")); // false
Note: codePointAt() is provided in ECMAScript 6, so this might not work in every browser. For ECMAScript 6 support, check the compatibility tables for Firefox and Chrome.
function characterInfo(ch) {
function is32Bit(ch) {
return ch.codePointAt(0) > 0xFFFF;
}
let result = `character: ${ch}\n` +
`CPx0: ${ch.codePointAt(0)}\n`;
if(ch.codePointAt(1)) {
result += `CPx1: ${ch.codePointAt(1)}\n`;
}
console.log( result += is32Bit(ch) ?
'Is 32 bit character.' :
'Is 16 bit character.');
}
//For testing
let ch16 = String.fromCodePoint(10020);
let ch32 = String.fromCodePoint(134071);
characterInfo(ch16);
characterInfo(ch32);
I have this:
"ctypes.UInt64("7")"
It is returned by this:
var chars = SendMessage(hToolbar, TB_GETBUTTONTEXTW, local_tbb.idCommand, ctypes.voidptr_t(0));
so
console.log('chars=', chars, chars.toString(), uneval(chars));
gives
'chars=' 'UInt64 { }' "7" 'ctypes.UInt64("7")'
So I can get the value by calling chars.toString(), but then I have to run parseInt on that. Is there any way to read it like a property, e.g. chars.UInt64?
The problem with 64-bit integers in js-ctypes is that Javascript lacks a compatible type. All Javascript numbers are IEEE double precision floating point numbers (double), and those can represent 53-bit integers at most. So you shouldn't even be trying to parse the int yourself, unless you know for a fact that the result would fit into a double. E.g. You cannot know this for pointers.
E.g. consider the following:
// 6 * 8-bit = 48 bit; 48 < 53, so this is OK
((parseInt("0xffffffffffff", 16) + 2) == parseInt("0xffffffffffff", 16)) == false
// However, 7 * 8-bit = 56 bit; 56 > 53, so this is not OK
((parseInt("0xffffffffffffff", 16) + 2) == parseInt("0xffffffffffffff", 16)) == true
// Oops, that compared equal, because a double precision floating point
// cannot actually hold the parseInt result, which is still well below 64-bit!
Let's deal with 64-bit integers in JS properly...
If you just want comparisons, use UInt64.compare()/Int64.compare(), e.g.
// number == another number
ctypes.UInt64.compare(ctypes.UInt64("7"), ctypes.UInt64("7")) == 0
// number != another number
ctypes.UInt64.compare(ctypes.UInt64("7"), ctypes.UInt64("6")) != 0
// number > another number
ctypes.UInt64.compare(ctypes.UInt64("7"), ctypes.UInt64("6")) > 0
// number < another number
ctypes.UInt64.compare(ctypes.UInt64("7"), ctypes.UInt64("8")) < 0
If you need the result but are not sure it is a 32-bit unsigned integer, you can detect whether you're dealing with a 32-bit unsigned integer that is just packed into a UInt64:
ctypes.UInt64.compare(ctypes.UInt64("7"), ctypes.UInt64("0xffffffff")) < 0
And the analog for 32-bit signed integers in Int64, but you need to compare minimum and maximum:
ctypes.Int64.compare(ctypes.Int64("7"), ctypes.Int64("2147483647")) < 0 &&
ctypes.Int64.compare(ctypes.Int64("7"), ctypes.Int64("-2147483648")) > 0
So, once you know or detected that something will fit into a JS double, it is safe to call parseInt on it.
var number = ...;
if (ctypes.UInt64.compare(number, ctypes.UInt64("0xffffffff")) > 0) {
throw Error("Whoops, unexpectedly large value that our code would not handle correctly");
}
chars = parseInt(chars.toString(), 10);
(For the sake of completeness, there is also UInt64.hi()/Int64.hi() and UInt64.lo()/Int64.lo() to get the high and low 32 bits of real 64-bit integers and do 64-bit integer math yourself (e.g.), but beware of endianness.)
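As an illustrative sketch of that last point, assuming hi() and lo() return ordinary JavaScript numbers and that you are in a js-ctypes environment where ctypes is available (the helper below is hypothetical, not part of js-ctypes), you could reassemble the two halves yourself and bail out when the value would not fit a double exactly:
// Hypothetical helper, not part of js-ctypes itself.
function uint64ToNumber(value) {
    var hi = ctypes.UInt64.hi(value); // upper 32 bits
    var lo = ctypes.UInt64.lo(value); // lower 32 bits
    // Conservative check: from 2^21 * 2^32 = 2^53 upward a double can no longer hold every integer.
    if (hi >= 0x200000) {
        throw new Error("Value too large to represent exactly as a JS number");
    }
    return hi * 0x100000000 + lo; // hi * 2^32 + lo
}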
PS: The return value of SendMessage is intptr_t not uintptr_t, which is important here because SendMessage(hwnd, TB_GETBUTTONTEXT, ...) may return -1 on failure!
So putting all this together (untested):
var SendMessage = user32.declare(
'SendMessageW',
ctypes.winapi_abi,
ctypes.intptr_t,
ctypes.voidptr_t, // HWND
ctypes.uint32_t, // MSG
ctypes.uintptr_t, // WPARAM
ctypes.intptr_t // LPARAM
);
// ...
var chars = SendMessage(hToolbar, TB_GETBUTTONTEXTW, local_tbb.idCommand, ctypes.voidptr_t(0));
if (ctypes.Int64.compare(chars, ctypes.Int64("0")) < 0) {
throw new Error("TB_GETBUTTONTEXT returned a failure (negative value)");
}
if (ctypes.Int64.compare(chars, ctypes.Int64("32768")) > 0) {
throw new Error("TB_GETBUTTONTEXT returned unreasonably large number > 32KiB");
}
chars = parseInt(chars.toString());
In C# the following code returns 2:
double d = 2.9;
int i = (int)d;
Debug.WriteLine(i);
In Javascript, however, the only way of converting a "double" to an "int" that I'm aware of is by using Math.round/floor/toFixed etc. Is there a way of converting to an int in Javascript without rounding? I'm aware of the performance implications of Number() so I'd rather avoid converting it to a string if at all possible.
Use parseInt().
var num = 2.9
console.log(parseInt(num, 10)); // 2
You can also use |.
var num = 2.9
console.log(num | 0); // 2
I find the "parseInt" suggestions to be pretty curious, because "parseInt" operates on strings by design. That's why its name has the word "parse" in it.
A trick that avoids a function call entirely is
var truncated = ~~number;
The double application of the "~" unary operator will leave you with a truncated version of a double-precision value. However, the value is limited to 32 bit precision, as with all the other JavaScript operations that implicitly involve considering numbers to be integers (like array indexing and the bitwise operators).
edit — In an update quite a while later, another alternative to the ~~ trick is to bitwise-OR the value with zero:
var truncated = number|0;
Similar to C# casting to (int) with just using standard lib:
Math.trunc(1.6) // 1
Math.trunc(-1.6) // -1
Just use parseInt() and be sure to include the radix so you get predictable results:
parseInt(d, 10);
There is no such thing as an int in Javascript. All Numbers are actually doubles behind the scenes* so you can't rely on the type system to issue a rounding order for you as you can in C or C#.
You don't need to worry about precision issues (since doubles correctly represent any integer up to 2^53) but you really are stuck with using Math.floor (or other equivalent tricks) if you want to round to the nearest integer.
*Most JS engines use native ints when they can but all in all JS numbers must still have double semantics.
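To make the precision claim concrete (a quick check you can run in any console):
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);   // true - integers above 2^53 are no longer exact
console.log(Number.isInteger(2.9 | 0)); // true - the bitwise tricks still yield real integers (here 2)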
A trick to truncate that avoids a function call entirely is
var number = 2.9
var truncated = number - number % 1;
console.log(truncated); // 2
To round a floating-point number to the nearest integer, use the addition/subtraction trick. This works for numbers with absolute value < 2 ^ 51.
var number = 2.9
var rounded = number + 6755399441055744.0 - 6755399441055744.0; // (2^52 + 2^51)
console.log(rounded); // 3
Note:
Halfway values are rounded to the nearest even using "round half to even" as the tie-breaking rule. Thus, for example, +23.5 becomes +24, as does +24.5. This variant of the round-to-nearest mode is also called bankers' rounding.
The magic number 6755399441055744.0 is explained in the stackoverflow post "A fast method to round a double to a 32-bit int explained".
// Round to whole integers using arithmetic operators
let trunc = (v) => v - v % 1;
let ceil = (v) => trunc(v % 1 > 0 ? v + 1 : v);
let floor = (v) => trunc(v % 1 < 0 ? v - 1 : v);
let round = (v) => trunc(v < 0 ? v - 0.5 : v + 0.5);
let roundHalfEven = (v) => v + 6755399441055744.0 - 6755399441055744.0; // (2^52 + 2^51)
console.log("number floor ceil round trunc");
var array = [1.5, 1.4, 1.0, -1.0, -1.4, -1.5];
array.forEach(x => {
let f = x => (x).toString().padStart(6," ");
console.log(`${f(x)} ${f(floor(x))} ${f(ceil(x))} ${f(round(x))} ${f(trunc(x))}`);
});
As @Quentin and @Pointy pointed out in their comments, it's not a good idea to use parseInt() because it is designed to convert a string to an integer. When you pass a decimal number to it, the number is first converted to a string and then parsed back, which is wasteful. I suggest you use Math.trunc(), Math.floor(), ~~num, num | 0, num << 0, or num >> 0 depending on your needs.
This performance test demonstrates the difference in parseInt() and Math.floor() performance.
Also, this post explains the difference between the proposed methods.
What about this:
if (stringToSearch.IndexOfAny( ".,;:?!".ToCharArray() ) == -1) { ... }
I think that the easiest solution is using the bitwise not operator twice:
const myDouble = -66.7;
console.log(myDouble); // -66.7
const myInt = ~~myDouble;
console.log(myInt); // -66
const myPositiveInt = ~~-myDouble;
console.log(myPositiveInt); // 66
Are there any side effects if I convert a string to a number like below?
var numb=str*1;
If I check with the below code, it says this is a number:
var str="123";
str=str*1;
if(!isNaN(str))
{
alert('Hello');
}
Please let me know if there are any concerns in using this method..
When you use parseFloat or parseInt, the conversion is less strict: '1b5' -> 1.
Using 1*number or +number to convert will result in NaN when the input is not a valid number. Unlike parseInt, though, floating-point numbers will be parsed correctly.
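A quick illustration of the difference in strictness:
console.log(parseFloat('1b5'));   // 1    - parsing stops at the first invalid character
console.log(+'1b5');              // NaN  - the whole string must be a valid number
console.log(parseInt('2.9', 10)); // 2    - integer part only
console.log(+'2.9');              // 2.9  - full floating-point value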
Table covering all possible relevant options.
//  Input             parseInt    parseFloat    + 1* /1        ~~ |0 ^0 & <<0 >>0    >>>0
var a = '123,',   //   123         123           NaN            0                     0
    b = '1.e3',   //   1           1000          1000           1000                  1000
    c = '1.21',   //   1           1.21          1.21           1                     1
    d = '0020',   //   16          20            20             20                    20
    e = '0x10',   //   16          0             16             16                    16
    f = '3e9',    //   3           3000000000    3000000000     -1294967296           3000000000
    g = '3e10',   //   3           30000000000   30000000000    -64771072             4230196224
    h = 3e25,     //   3           3e+25         3e+25          0                     0
    i = '3e25',   //   3           3e+25         3e+25          0                     0
    j = 'a123',   //   NaN         NaN           NaN            0                     0
    k = ' 1 ',    //   1           1             1              1                     1
    l = ' ',      //   NaN         NaN           0              0                     0
    m = '.1 ',    //   NaN         0.1           0.1            0                     0
    n = '1. ',    //   1           1             1              1                     1
    o = '1e999',  //   1           Infinity      Infinity       0                     0
    p = '1e-999', //   1           0             0              0                     0
    q = false,    //   NaN         NaN           0              0                     0
    r = void 0,   //   NaN         NaN           NaN            0                     0
    _ = function(){return 1;}, /* Function _ used below */
    s = {valueOf: _},  //  NaN     NaN           1              1                     1
    t = {toString: _}; //  1       1             1              1                     1
// Intervals:          (-1e+20,   (-∞, +∞)      (-∞, +∞)       (-2³¹, +2³¹)          [0, 2³²)
//                      +1e+20)
// In FF9 and Chrome 17, Infinity === Math.pow(2, 1024); the largest finite double is approx. 1.7976e+308
// In FF9 and Chrome 17, bitwise operators always return 0 after about ±1e+25
Notes on number conversion methods:
The number conversion always fails if the first character, after trimming white-space, is not a number.
parseInt returns an integer representation of the first argument. When the radix (second argument) is omitted, the radix depends on the given input.
0_ = octal (base-8), 0x_ = hexadecimal (base-16). Default: base-10.
parseInt ignores everything from the first non-digit character onward, even if the argument was actually a number: see h, i.
To avoid unexpected results, always specify the radix, usually 10: parseInt(number, 10).
parseFloat is the most tolerant converter. It always interprets input as base-10, regardless of the prefix (unlike parseInt). For the exact parsing rules, see here.
The following methods require the entire string to be a valid number; otherwise they will not return a meaningful value. (Valid examples: 1.e+0, .1e-1.)
+n, 1*n, n*1, n/1 and Number(n) are equivalent.
~~n, 0|n, n|0, n^0, 0^n, n&n, n<<0 and n>>0 are equivalent. These are signed bitwise operations and will always return a numeric value (zero instead of NaN).
n>>>0 is also a bitwise operation, but it does not reserve a sign bit. Consequently, only non-negative numbers can be represented, and the upper bound is 2³² instead of 2³¹.
When passed an object, parseFloat and parseInt will only look at the .toString() method. The other methods first look for .valueOf(), then .toString(). See q - t.
NaN, "Not A Number":typeof NaN === 'number'
NaN !== NaN. Because of this awkwardness, use isNaN() to check whether a value is NaN.
When to use which method?
parseFloat( x ) when you want to get a numeric result from as many strings as possible.
parseFloat( (x+'').replace(/^[^0-9.-]+/,'') ) when you want even more numeric results.
parseInt( x, 10 ) if you want to get integers.
+x, 1*x .. if you're only concerned about getting the true numeric value of an object, rejecting any invalid numbers (as NaN).
~~, 0| .. if you want to always get a numeric result (zero for invalid).
>>>0 if negative numbers cannot occur.
The last two methods have a limited range. Have a look at the footer of the table.
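A quick demonstration of those range limits, and of how the bitwise tricks swallow invalid input:
console.log(2147483648 | 0);   // -2147483648 - wraps: 2^31 does not fit in a signed 32-bit int
console.log(2147483648 >>> 0); //  2147483648 - fits: >>>0 goes up to 2^32 - 1
console.log(-1 >>> 0);         //  4294967295 - negative input wraps around
console.log(~~'abc');          //  0           - invalid input becomes 0, not NaN
console.log(+'abc');           //  NaN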
The shortest way to test whether a given parameter is a real number is explained at this answer:
function isNumber(n) {
    return typeof n == 'number' && !isNaN(n - n);
}
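A few example calls (assuming the isNumber function above):
console.log(isNumber(42));       // true
console.log(isNumber(Infinity)); // false - Infinity - Infinity is NaN
console.log(isNumber('42'));     // false - strings are rejected
console.log(isNumber(NaN));      // false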