How to do a decimal bitwise operation in C# from JavaScript code - javascript

I'm translating a library from JavaScript to C#, and ran into this case:
// Javascript
var number = 3144134277.518717 | 0;
console.log(number); // -> -1150833019
From what I read in other posts, it might have been used to round values, but in this case the value isn't what I'd expect it to be (if it were being rounded), and I can't reproduce the same behavior in C# with:
// C#
3144134277.5187168 | 0 // -> Operator '|' cannot be applied to operands
// of type 'double' and 'int'
// or
Convert.ToInt64(3144134277.5187168) | 0 // -> 3144134278
Thanks for any help!

The way | works in JavaScript is detailed in the spec, but primarily what you're seeing is that | implicitly converts its operands to 32-bit ints before doing the bitwise OR (which is a no-op here because the second operand is 0). So really what you're seeing is the result of the ToInt32 operation:
Let number be ? ToNumber(argument). (You can largely ignore this bit.)
If number is NaN, +0, -0, +∞, or -∞, return +0.
Let int be the mathematical value that is the same sign as number and whose magnitude is floor(abs(number)).
Let int32bit be int modulo 2^32.
If int32bit ≥ 2^31, return int32bit - 2^32; otherwise return int32bit.
So in C#, I think that's roughly:
double value = 3144134277.5187168;
bool negative = value < 0;
// Truncate toward zero; the spec works on the absolute value (step 3)
long n = Convert.ToInt64(Math.Floor(Math.Abs(value)));
// Reduce modulo 2^32 (step 4)
n = n % 4294967296;
// Values at or above 2^31 wrap around to negative (step 5)
n = n >= 2147483648 ? n - 4294967296 : n;
int i = (int)n;
// Reapply the original sign (see the note below about the spec's "modulo")
i = negative ? -i : i;
Console.WriteLine(i); // -1150833019
...written verbosely for clarity. Note that the sign isn't added back to the result in quite the same place as in the spec; when I did that, it didn't work correctly, which comes down to differing definitions: the spec's "modulo" always yields a non-negative result, while C#'s % operator keeps the sign of the dividend.
And just double-checking: those steps with -3144134277.5187168 give you 1150833019, which is also what the spec produces.
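For cross-checking the port, here's a hedged sketch of the same ToInt32 steps written out in plain JavaScript (the helper name toInt32 is mine, not from the spec); it should agree with x | 0 for any finite input:
// Reference ToInt32 in JavaScript, following the spec steps quoted above.
function toInt32(value) {
  if (!Number.isFinite(value) || value === 0) return 0;        // step 2
  const int = Math.sign(value) * Math.floor(Math.abs(value));  // step 3
  // JS % keeps the dividend's sign, while the spec's "modulo" is
  // always non-negative, so normalise into [0, 2^32) first.
  let int32bit = int % 2 ** 32;
  if (int32bit < 0) int32bit += 2 ** 32;                       // step 4
  return int32bit >= 2 ** 31 ? int32bit - 2 ** 32 : int32bit;  // step 5
}
console.log(toInt32(3144134277.518717));  // -1150833019, same as (3144134277.518717 | 0)
console.log(toInt32(-3144134277.518717)); //  1150833019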

Related

How to implement unsigned right shift for BigInt in JavaScript?

I tried this sort of implementation, but it doesn't appear to be working.
function urs32(n, amount) {
  const mask = (1 << (32 - amount)) - 1
  return (n >> amount) & mask
}

function flip32(n) {
  const mask = (1 << 32) - 1
  return ~n & mask
}

log(~0b10101010 >>> 0, urs32(~0b10101010, 0))
log(~0b10101010 >>> 0, flip32(0b10101010))

function log(a, b) {
  console.log(a.toString(2), b.toString(2))
}
I would expect for a to equal b in both cases, if done right. Basically I am trying to flip 32-bits (so 1's become 0s, 0's become 1s). I see that 1 << 32 === 0, so to get the value, I do 2 ** 32, but still doesn't work.
How do you implement the equivalent of ~n >>> 0 on a BigInt?
Basically what I am trying to do is create the countLeadingOnes functions (out of the countLeadingZeroes functions), like so:
const LEADING_ZERO_BIT_TABLE = makeLeadingZeroTable()

function makeLeadingZeroTable() {
  let i = 0
  const table = new Uint8Array(256).fill(0)
  while (i < 256) {
    let count = 8
    let index = i
    while (index > 0) {
      index = (index / 2) | 0
      count--
    }
    table[i] = count
    i++
  }
  return table
}
function countLeadingZeroes32JS(n) {
  let accum = LEADING_ZERO_BIT_TABLE[n >>> 24];
  if (accum === 8) {
    accum += LEADING_ZERO_BIT_TABLE[(n >>> 16)]
  }
  if (accum === 16) {
    accum += LEADING_ZERO_BIT_TABLE[(n >>> 8)]
  }
  if (accum === 24) {
    accum += LEADING_ZERO_BIT_TABLE[ n ]
  }
  return accum;
}

function countLeadingZeroes16JS(n) {
  let accum = LEADING_ZERO_BIT_TABLE[n >>> 8]
  if (accum === 8) {
    accum += LEADING_ZERO_BIT_TABLE[n]
  }
  return accum;
}

function countLeadingZeroes8JS(n) {
  return LEADING_ZERO_BIT_TABLE[n]
}
console.log('countLeadingZeroes32JS', countLeadingZeroes32JS(0b10100010001000100010001000100010))
console.log('countLeadingZeroes32JS', countLeadingZeroes32JS(0b00100010001000100010001000100010))
console.log('countLeadingZeroes32JS', countLeadingZeroes32JS(0b00000010001000100010001000100010))
console.log('countLeadingZeroes16JS', countLeadingZeroes16JS(0b1010001000100010))
console.log('countLeadingZeroes16JS', countLeadingZeroes16JS(0b0010001000100010))
console.log('countLeadingZeroes16JS', countLeadingZeroes16JS(0b0000001000100010))
console.log('countLeadingZeroes16JS', countLeadingZeroes16JS(0b0000000000100010))
console.log('countLeadingZeroes8JS', countLeadingZeroes8JS(0b10100010))
console.log('countLeadingZeroes8JS', countLeadingZeroes8JS(0b00100010))
console.log('countLeadingZeroes8JS', countLeadingZeroes8JS(0b00000010))
function countLeadingOnes32JS(n) {
  return countLeadingZeroes32JS(~n >>> 0)
}

function countLeadingOnes16JS(n) {
  return countLeadingZeroes16JS(~n >>> 0)
}

function countLeadingOnes8JS(n) {
  return countLeadingZeroes8JS(~n >>> 0)
}
console.log('countLeadingOnes32JS', countLeadingZeroes32JS(0b00100010001000100010001000100010))
console.log('countLeadingOnes32JS', countLeadingZeroes32JS(0b11100010001000100010001000100010))
console.log('countLeadingOnes32JS', countLeadingZeroes32JS(0b11111100001000100010001000100010))
console.log('countLeadingOnes16JS', countLeadingOnes16JS(0b0100001000100010))
console.log('countLeadingOnes16JS', countLeadingOnes16JS(0b1111110000100010))
console.log('countLeadingOnes16JS', countLeadingOnes16JS(0b1111111111000010))
console.log('countLeadingOnes8JS', countLeadingOnes8JS(0b01000010))
console.log('countLeadingOnes8JS', countLeadingOnes8JS(0b11000010))
console.log('countLeadingOnes8JS', countLeadingOnes8JS(0b11111100))
But it appears that ~n >>> 0 doesn't work on 32-bit integers. How to get this working properly?
How to implement unsigned right shift for BigInt in JavaScript?
Unsigned right-shift is difficult to define meaningfully for arbitrary-size integers, so before you (or anyone) can implement it, you'll have to decide how you want it to behave.
That said, considering the rest of this question, I don't see why you would even need this.
I would expect for a to equal b in both cases
Why would it? Unsigned right-shift and bit flipping are different operations and produce different results.
I see that 1 << 32 === 0
Nope, 1 << 32 === 1. JavaScript (like x86 CPUs) performs an implicit &31 on the shift amount, so since 32 & 31 === 0, ... << 32 is the same as ... << 0.
How do you implement the equivalent of ~n >>> 0 on a BigInt?
The equivalent of ~n is ~n. (That's not a typo. It's literally the same thing.)
The equivalent of ... >>> 0 is BigInt.asUintN(32, ...). (Note that neither the Number version nor the BigInt version shifts anything, so this doesn't answer your headline question "how to implement USR for BigInt".)
it appears that ~n >>> 0 doesn't work on 32-bit integers.
It sure does work. In fact, it only works on 32-bit integers.
The >>> 0 part is completely unnecessary though, you could just drop it.
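Put together, a minimal sketch of the BigInt version of ~n >>> 0 (the helper name flip32BigInt is mine):
// Flip the low 32 bits of a BigInt, analogous to (~n >>> 0) on Numbers.
function flip32BigInt(n) {          // n is a BigInt
  return BigInt.asUintN(32, ~n);    // ~ flips, asUintN keeps the low 32 bits unsigned
}
console.log((~0b10101010 >>> 0).toString(2));        // Number version
console.log(flip32BigInt(0b10101010n).toString(2));  // same bit pattern via BigInt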
The reason why this line:
console.log('countLeadingOnes32JS', countLeadingZeroes32JS(0b00100010001000100010001000100010))
isn't producing the number of leading ones is because the function it's calling is ...Zeroes...; an apparent copy-paste bug.
The reason why countLeadingOnes16JS isn't working correctly is because ~ in JavaScript always flips 32 bits. Since a 16-bit number's 32-bit representation has (at least) 16 leading zeros, those all become ones after flipping, and countLeadingZeroes16JS gets an input that's far bigger than it can handle: LEADING_ZERO_BIT_TABLE[n >>> 8] looks up an element that doesn't exist in the table, because the result of n >>> 8 is a 24-bit number in this case, not an 8-bit number. The solution is to use a mask after flipping; a valid implementation of clo16 might be:
function countLeadingOnes16(n) {
  return countLeadingZeroes16(~n & 0xFFFF);
}
No BigInts and no >>> 0 required.
countLeadingOnes8 is similar.
You may want to read https://en.wikipedia.org/wiki/Two%27s_complement (or some other description of that concept) to understand what's going on with bitwise operations on negative numbers.
You may also want to learn how to debug your own code. There's a range of techniques: for example, you could have:
inserted console.log statements for intermediate results,
or stepped through execution in a debugger,
or simply evaluated small snippets in the console,
any of which would have made it very easy for you to see what's happening on the path from input number to end result.
For anyone else reading this: there's Math.clz32, which is highly efficient because it gets compiled to a machine instruction, so implementing countLeadingZeros by hand is unnecessary and wasteful. For smaller widths, just subtract: function clz8(n) { return Math.clz32(n) - 24; }
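For illustration, here's a sketch of the leading-zero/leading-one helpers built on Math.clz32 (the names clz16, clo32, etc. are mine, chosen to mirror the functions above):
// Leading-zero and leading-one counts via Math.clz32 (illustrative helper names).
const clz32 = n => Math.clz32(n);
const clz16 = n => clz32(n) - 16;
const clz8  = n => clz32(n) - 24;
const clo32 = n => clz32(~n >>> 0);     // ~ flips all 32 bits; >>> 0 makes the result unsigned
const clo16 = n => clz16(~n & 0xFFFF);  // flip, then mask back down to 16 bits
const clo8  = n => clz8(~n & 0xFF);
console.log(clz32(0b00100010001000100010001000100010)); // 2
console.log(clo16(0b1111110000100010));                 // 6
console.log(clo8(0b11111100));                          // 6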

How to convert number start with 0 to string equivalent of the value?

I want to convert a number start with 0 to string equivalent of the value.
If I run
var num = 12;
var int = num.toString();
console.log(int);
it logs 12 as expected but if I apply the toString() to a number start with 0 like,
var num = 012;
var int = num.toString();
console.log(int);
it logs 10, why?
A number literal starting with 0 is interpreted as octal (base 8).
In sloppy mode (the default), numbers starting with 0 are interpreted as being written in octal (base 8) instead of decimal (base 10). It has been like that since the first released version of JavaScript, and the syntax is shared with other programming languages. It is confusing and has led to many hard-to-detect bugs.
You can enable strict mode by adding "use strict" as the first non-comment statement in your script or function. It removes some of the quirks. It is still possible to write octal numbers in strict mode, but you have to use the same scheme as with hexadecimal and binary: 0o20 is the octal representation of decimal 16.
The same problem exists with the function parseInt, which takes up to two parameters, the second being the radix. If the radix isn't specified, numbers starting with 0 could be treated as octal up to ECMAScript 5, where the default was changed to decimal. So if you use parseInt, specify the radix to be sure that you get what you expect (see the parseInt snippet after the example below).
"use strict";
// Diffrent ways to write the same number:
const values = [
0b10000, // binary
0o20, // octal
16, // decimal,
0x10 // hexadecimal
];
console.log("As binary:", values.map( value => value.toString(2)).join());
console.log("As decimal:", values.join());
console.log("As ocal", values.map( value => value.toString(8)).join());
console.log("As hexadecimal:", values.map( value => value.toString(16)).join());
console.log("As base36:", values.map( value => value.toString(36)).join());
All you have to do is wrap the number in String(), like this:
var num = 12;
var int = String(num);
console.log(int);
And if you want it to look like 0012, all you have to do is
var num = 12;
var int = String(num).padStart(4, '0');
console.log(int);

Syntax of "return +(test) && recursion" [duplicate]

I was wondering what the = +_ operator means in JavaScript. It looks like it does assignments.
Example:
hexbin.radius = function(_) {
  if (!arguments.length)
    return r;
  r = +_;
  dx = r * 2 * Math.sin(Math.PI / 3);
  dy = r * 1.5;
  return hexbin;
};
r = +_;
+ tries to cast whatever _ is to a number.
_ is only a variable name (not an operator); it could just as well be a, foo, etc.
Example:
+"1"
cast "1" to pure number 1.
var _ = "1";
var r = +_;
r is now 1, not "1".
Moreover, according to the MDN page on Arithmetic Operators:
The unary plus operator precedes its operand and evaluates to its operand but attempts to convert it into a number, if it isn't already. [...] It can convert string representations of integers and floats, as well as the non-string values true, false, and null. Integers in both decimal and hexadecimal ("0x"-prefixed) formats are supported. Negative numbers are supported (though not for hex). If it cannot parse a particular value, it will evaluate to NaN.
It is also noted that
unary plus is the fastest and preferred way of converting something into a number
It is not an assignment operator.
_ is just a parameter passed to the function.
hexbin.radius = function(_) {
// ^ It is passed here
// ...
};
On the next line, r = +_;, the + in front casts that variable (_) to a number and assigns it to the variable r.
DO NOT CONFUSE IT WITH the += operator.
=+ is actually two operators: = is assignment and + is the unary plus; _ is a variable name.
like:
i = + 5;
or
j = + i;
or
i = + _;
The following examples show how = followed by unary + converts a string into a number.
example:
y = +'5'
x = y +5
alert(x);
outputs 10
So here y is the number 5 because of the unary +.
otherwise:
y = '5'
x = y +5
alert(x);
outputs 55
Whereas here _ is just a variable:
_ = + '5'
x = _ + 5
alert(x)
outputs 10
Additionally,
It is also worth knowing that you can achieve the same thing with ~ applied twice (if the string is an integer string; a float will be truncated to an int):
y = ~~'5' // notice used two time ~
x = y + 5
alert(x);
also outputs 10
~ is bitwise NOT: it inverts the bits of its operand. I applied it twice so the magnitude is unchanged.
It's not =+. In JavaScript, unary + converts its operand into a number.
+'32' returns 32.
+'a' returns NaN.
So you may use isNaN() to check if it can be changed into number.
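For example, a minimal sketch of that check (the input value here is made up):
const input = "32a";   // hypothetical input string
const n = +input;
if (isNaN(n)) {
  console.log("not a number");  // this branch runs: +"32a" is NaN
} else {
  console.log("number:", n);
}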
It's a sneaky one.
The important thing to understand is that the underscore character here is actually a variable name, not an operator.
The plus sign in front of that is getting the numerical value of underscore, that is, effectively casting the underscore variable to a number. You could achieve a similar effect with Number() (or parseInt() for integer strings), but the plus sign is likely used here because it's more concise.
And that just leaves the equals sign as just a standard variable assignment.
It's probably not deliberately written to confuse, as an experienced JavaScript programmer will generally recognise underscore as a variable. But if you don't know that, it is definitely very confusing. I certainly wouldn't write it like that; I'm not a fan of short, meaningless variable names at the best of times. If you want short variable names in JS code to save space, use a minifier; don't write it with short variables to start with.
= +_ will cast _ into a number.
So
var _ = "1",
r = +_;
console.log(typeof r)
would output number.
I suppose you mean r = +_;? In that case, it's conversion of the parameter to a Number. Say _ is '12.3', then +'12.3' returns 12.3. So in the quoted statement +_ is assigned to r.
_ is just a variable name, passed as a parameter of the function hexbin.radius, and + casts it into a number.
Let me make an example similar to your function.
var hexbin = {}, r;

hexbin.radius = function(_) {
  if (!arguments.length)
    return r;
  console.log( _ , typeof _ )
  r = +_;
  console.log( r , typeof r , isNaN(r) );
}
Running this example function produces the following output:
hexbin.radius( "1");
1 string
1 number false
hexbin.radius( 1 );
1 number
1 number false
hexbin.radius( [] );
[] object
0 number false
hexbin.radius( 'a' );
a string
NaN number true
hexbin.radius( {} );
Object {} object
NaN number true
hexbin.radius( true );
true boolean
1 number false
It will assign a numeric value to the variable on the left-hand side.
var a = 10;
var b = "asg";
var c = +a; // returns 10
var d = -a; // returns -10
var f = "10";
var e = +b;
var g = -f;
console.log(e); // NaN
console.log(g); // -10
Simply put, +_ is equivalent to calling Number().
In fact, it even works on dates:
var d = new Date('03/27/2014');
console.log(Number(d)) // returns 1395903600000
console.log(+d) // returns 1395903600000
DEMO:
http://jsfiddle.net/dirtyd77/GCLjd/
More information can also be found on MDN - Unary plus (+) section:
The unary plus operator precedes its operand and evaluates to its operand but attempts to convert it into a number, if it isn't already. Although unary negation (-) also can convert non-numbers, unary plus is the fastest and preferred way of converting something into a number, because it does not perform any other operations on the number. It can convert string representations of integers and floats, as well as the non-string values true, false, and null. Integers in both decimal and hexadecimal ("0x"-prefixed) formats are supported. Negative numbers are supported (though not for hex). If it cannot parse a particular value, it will evaluate to NaN.
+_ is almost equivalent to parseFloat(_). Observe that parseInt stops at a non-numeric character such as a dot, whereas parseFloat does not.
EXP:
parseFloat(2.4) = 2.4
vs
parseInt(2.4) = 2
vs
+"2.4" = 2.4
Exp:
var _ = "3";
_ = +_;
console.log(_); // will show an integer 3
There are only a few differences:
An empty string "" evaluates to 0 with unary +, while parseInt("") evaluates to NaN.
For more info look here: parseInt vs unary plus - when to use which
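A few of those differences side by side (illustrative only):
console.log(+"", parseInt("", 10));          // 0 NaN
console.log(+"2.4", parseInt("2.4", 10));    // 2.4 2   - parseInt stops at the dot
console.log(+"12px", parseInt("12px", 10));  // NaN 12  - parseInt stops at the first non-digit
console.log(+null, parseInt(null, 10));      // 0 NaN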
In this expression:
r = +_;
+ acts here as a unary operator that tries to convert the value of its right-hand operand. It doesn't convert the operand itself, only the evaluated value, so _ will stay "1" if that's what it was originally, but r will become a pure number.
Consider these cases when deciding whether to apply + for numeric conversion:
+"-0" // 0, not -0
+"1" //1
+"-1" // -1
+"" // 0, in JS "" is converted to 0
+null // 0, in JS null is converted to 0
+undefined // NaN
+"yack!" // NaN
+"NaN" //NaN
+"3.14" // 3.14
var _ = "1"; +_;_ // "1"
var _ = "1"; +_;!!_ //true
var _ = "0"; +_;!!_ //true
var _ = null; +_;!!_ //false
Though it's the fastest numeric converter, I'd hardly recommend overusing it, if using it at all. parseInt/parseFloat are good, more readable alternatives.

Converting a double to an int in Javascript without rounding

In C# the following code returns 2:
double d = 2.9;
int i = (int)d;
Debug.WriteLine(i);
In Javascript, however, the only way of converting a "double" to an "int" that I'm aware of is by using Math.round/floor/toFixed etc. Is there a way of converting to an int in Javascript without rounding? I'm aware of the performance implications of Number() so I'd rather avoid converting it to a string if at all possible.
Use parseInt().
var num = 2.9
console.log(parseInt(num, 10)); // 2
You can also use |.
var num = 2.9
console.log(num | 0); // 2
I find the "parseInt" suggestions to be pretty curious, because "parseInt" operates on strings by design. That's why its name has the word "parse" in it.
A trick that avoids a function call entirely is
var truncated = ~~number;
The double application of the "~" unary operator will leave you with a truncated version of a double-precision value. However, the value is limited to 32 bit precision, as with all the other JavaScript operations that implicitly involve considering numbers to be integers (like array indexing and the bitwise operators).
Edit: in an update quite a while later, another alternative to the ~~ trick is to bitwise-OR the value with zero:
var truncated = number|0;
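Both tricks truncate toward zero but are limited to the signed 32-bit range; a quick illustration:
console.log(~~2.9, 2.9 | 0);                    // 2 2
console.log(~~-2.9, -2.9 | 0);                  // -2 -2  (truncation, not flooring)
console.log(~~2147483648.5, 2147483648.5 | 0);  // -2147483648 -2147483648  (32-bit overflow)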
Similar to C#'s cast to (int), using just the standard library:
Math.trunc(1.6) // 1
Math.trunc(-1.6) // -1
Just use parseInt() and be sure to include the radix so you get predictable results:
parseInt(d, 10);
There is no such thing as an int in JavaScript. All Numbers are actually doubles behind the scenes*, so you can't rely on the type system to truncate for you the way a cast does in C or C#.
You don't need to worry about precision issues (since doubles correctly represent any integer up to 2^53) but you really are stuck with using Math.floor (or other equivalent tricks) if you want to round to the nearest integer.
*Most JS engines use native ints when they can but all in all JS numbers must still have double semantics.
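A short illustration of that integer-precision limit:
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);   // true - adding 1 is lost past 2^53
console.log(Math.floor(2.9));           // 2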
A trick to truncate that avoids a function call entirely is
var number = 2.9
var truncated = number - number % 1;
console.log(truncated); // 2
To round a floating-point number to the nearest integer, use the addition/subtraction trick. This works for numbers with absolute value < 2 ^ 51.
var number = 2.9
var rounded = number + 6755399441055744.0 - 6755399441055744.0; // (2^52 + 2^51)
console.log(rounded); // 3
Note:
Halfway values are rounded to the nearest even using "round half to even" as the tie-breaking rule. Thus, for example, +23.5 becomes +24, as does +24.5. This variant of the round-to-nearest mode is also called bankers' rounding.
The magic number 6755399441055744.0 is explained in the stackoverflow post "A fast method to round a double to a 32-bit int explained".
// Round to whole integers using arithmetic operators
let trunc = (v) => v - v % 1;
let ceil  = (v) => trunc(v % 1 > 0 ? v + 1 : v);
let floor = (v) => trunc(v % 1 < 0 ? v - 1 : v);
let round = (v) => trunc(v < 0 ? v - 0.5 : v + 0.5);
let roundHalfEven = (v) => v + 6755399441055744.0 - 6755399441055744.0; // (2^52 + 2^51)

console.log("number floor ceil round trunc");
var array = [1.5, 1.4, 1.0, -1.0, -1.4, -1.5];
array.forEach(x => {
  let f = x => (x).toString().padStart(6, " ");
  console.log(`${f(x)} ${f(floor(x))} ${f(ceil(x))} ${f(round(x))} ${f(trunc(x))}`);
});
As @Quentin and @Pointy pointed out in their comments, it's not a good idea to use parseInt() because it is designed to convert a string to an integer. When you pass a decimal number to it, it first converts the number to a string, then parses it back to an integer. I suggest you use Math.trunc(), Math.floor(), ~~num, num | 0, num << 0, or num >> 0 depending on your needs.
This performance test demonstrates the difference in parseInt() and Math.floor() performance.
Also, this post explains the difference between the proposed methods.
I think that the easiest solution is using the bitwise NOT operator twice:
const myDouble = -66.7;
console.log(myDouble); // -66.7

const myInt = ~~myDouble;
console.log(myInt); // -66

const myPositiveInt = ~~-myDouble; // negate first, then truncate
console.log(myPositiveInt); // 66

What does ~~ ("double tilde") do in Javascript?

I was checking out an online game physics library today and came across the ~~ operator. I know a single ~ is a bitwise NOT, would that make ~~ a NOT of a NOT, which would give back the same value, wouldn't it?
It removes everything after the decimal point because the bitwise operators implicitly convert their operands to signed 32-bit integers. This works whether the operands are (floating-point) numbers or strings, and the result is a number.
In other words, it yields:
function(x) {
  if (x < 0) return Math.ceil(x);
  else return Math.floor(x);
}
only if x is between -(2^31) and 2^31 - 1. Otherwise, overflow will occur and the number will "wrap around".
This may be considered useful to convert a function's string argument to a number, but both because of the possibility of overflow and that it is incorrect for use with non-integers, I would not use it that way except for "code golf" (i.e. pointlessly trimming bytes off the source code of your program at the expense of readability and robustness). I would use +x or Number(x) instead.
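To make the difference concrete (illustrative values):
console.log(~~"12.7", +"12.7");              // 12 12.7              - ~~ silently truncates
console.log(~~"5000000000", +"5000000000");  // 705032704 5000000000 - ~~ overflows 32 bits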
How this is the NOT of the NOT
The number -43.2, for example, is:
-43.2₁₀ = 11111111111111111111111111010101₂
as a signed (two's complement) 32-bit binary number. (JavaScript ignores what is after the decimal point.) Inverting the bits gives:
NOT -43₁₀ = 00000000000000000000000000101010₂ = 42₁₀
Inverting again gives:
NOT 42₁₀ = 11111111111111111111111111010101₂ = -43₁₀
This differs from Math.floor(-43.2) in that negative numbers are rounded toward zero, not away from it. (The floor function, which would equal -44, always rounds down to the next lower integer, regardless of whether the number is positive or negative.)
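In code (a minimal comparison):
console.log(~~-43.2);                  // -43  (rounds toward zero)
console.log(Math.floor(-43.2));        // -44  (rounds toward negative infinity)
console.log(~~43.2, Math.floor(43.2)); // 43 43 (identical for positive numbers)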
The first ~ operator coerces its operand to a 32-bit integer (converting a string or boolean value to a number first), then inverts all 32 bits. Officially ECMAScript numbers are all floating-point, but some numbers are implemented as 31-bit integers internally in the SpiderMonkey engine.
You can use it to turn a 1-element array into an integer. Floating-point values are converted according to the C rule, i.e. truncation of the fractional part.
The second ~ operator then inverts the bits back, so you know that you will have an integer. This is not the same as coercing a value to boolean in a condition statement, because an empty object {} evaluates to true, whereas ~~{} evaluates to 0 (which is falsy).
js>~~"yes"
0
js>~~3
3
js>~~"yes"
0
js>~~false
0
js>~~""
0
js>~~true
1
js>~~"3"
3
js>~~{}
0
js>~~{a:2}
0
js>~~[2]
2
js>~~[2,3]
0
js>~~{toString: function() {return 4}}
4
js>~~NaN
0
js>~~[4.5]
4
js>~~5.6
5
js>~~-5.6
-5
In ECMAScript 6, the equivalent of ~~ is Math.trunc:
Returns the integral part of a number by removing any fractional digits. It does not round any numbers.
Math.trunc(13.37) // 13
Math.trunc(42.84) // 42
Math.trunc(0.123) // 0
Math.trunc(-0.123) // -0
Math.trunc("-1.123")// -1
Math.trunc(NaN) // NaN
Math.trunc("foo") // NaN
Math.trunc() // NaN
The polyfill:
function trunc(x) {
  return x < 0 ? Math.ceil(x) : Math.floor(x);
}
The ~ does -(N+1). So ~2 == -(2 + 1) == -3. If you do it again on -3, it turns back: ~-3 == -(-3 + 1) == 2. It probably just converts a string to a number in a round-about way.
See this thread: http://www.sitepoint.com/forums/showthread.php?t=663275
Also, more detailed info is available here: http://dreaminginjavascript.wordpress.com/2008/07/04/28/
Given ~N is -(N+1), ~~N is then -(-(N+1) + 1). Which, evidently, leads to a neat trick.
Just a bit of a warning. The other answers here got me into some trouble.
The intent is to remove anything after the decimal point of a floating point number, but it has some corner cases that make it a bug hazard. I'd recommend avoiding ~~.
First, ~~ doesn't work on very large numbers.
~~1000000000000 == -727379968
As an alternative, use Math.trunc() (as Gajus mentioned, Math.trunc() returns the integer part of a floating point number but is only available in ECMAScript 6 compliant JavaScript). You can always make your own Math.trunc() for non-ECMAScript-6 environments by doing this:
if (!Math.trunc) {
  Math.trunc = function(value) {
    // Split off the sign, floor the magnitude, then reapply the sign: truncation toward zero.
    // (Note: Math.sign is itself ES6; on older engines use a ternary on value < 0 instead.)
    return Math.sign(value) * Math.floor(Math.abs(value));
  };
}
I wrote a blog post on this for reference: http://bitlords.blogspot.com/2016/08/the-double-tilde-x-technique-in.html
Converting Strings to Numbers
console.log(~~-1); // -1
console.log(~~0); // 0
console.log(~~1); // 1
console.log(~~"-1"); // -1
console.log(~~"0"); // 0
console.log(~~"1"); // 1
console.log(~~true); // 1
console.log(~~false); // 0
~-1 is 0
if (~someStr.indexOf("a")) {
// Found it
} else {
// Not Found
}
source
~~ can be used as a shorthand for Math.trunc()
~~8.29 // output 8
Math.trunc(8.29) // output 8
Here is an example of how this operator can be used efficiently, where it makes sense to use it:
leftOffset = -(~~$('html').css('padding-left').replace('px', '') + ~~$('body').css('margin-left').replace('px', '')),
Source:
See section Interacting with points
Tilde (~) implements the rule -(N+1).
For example:
~0 = -(0+1) = -1
~5 = -(5+1) = -6
~-7 = -(-7+1) = 6
Double tilde is -(-(N+1)+1)
For example:
~~5 = -(-(5+1)+1) = 5
~~-3 = -(-(-3+1)+1) = -3
Triple tilde is -(-(-(N+1)+1)+1)
For example:
~~~2 = -(-(-(2+1)+1)+1) = -3
~~~3 = -(-(-(3+1)+1)+1) = -4
Same as Math.abs(Math.trunc(-0.123)) if you want to make sure the - is also removed.
In addition to truncating real numbers, ~~ can also be used as an operator for updating counters in an object. The ~~ applied to an undefined object property will resolve to zero, and will resolve to the same integer if that counter property already exists, which you then increment.
let words=["abc", "a", "b", "b", "bc", "a", "b"];
let wordCounts={};
words.forEach( word => wordCounts[word] = ~~wordCounts[word] + 1 );
console.log("b count == " + wordCounts["b"]); // 3
The following two assignments are equivalent.
wordCounts[word] = (wordCounts[word] ? wordCounts[word] : 0) + 1;
wordCounts[word] = ~~wordCounts[word] + 1;
