If x is a floating point number, does
y = x & 1
have a purpose other than checking whether x is odd?
I just read Using bitwise OR 0 to floor a number and wonder: are there other interesting bitwise manipulations?
If x is logically floating point (that is, not guaranteed to hold an integer value before the test), & 1 has exactly one purpose: to determine whether the truncated value of x is odd. It's a combined truncation and bitwise test. It does not tell you whether x itself is odd (-2.9 is neither odd nor even, nor is -3.9, yet they give opposite results), and it's not just truncating (it throws away all but one bit of data); the test intrinsically combines both effects, and as such is not useful for anything else when x is an arbitrary floating point value.
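To make the -2.9 vs. -3.9 point concrete (a quick console check; the bitwise operator first converts its operand with ToInt32, which truncates toward zero):

-2.9 & 1 // 0: truncates to -2, whose low bit is 0
-3.9 & 1 // 1: truncates to -3, whose low bit is 1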
As other answer(s) mention, bitwise operations have other legitimate uses, e.g. cryptography, or repurposing an integer as a vector of boolean flags, but that's only relevant for logically integer values; performing floating point math and then relying on specific integer values after truncation, without rounding, is a terrible idea, and will bite you when the result ends up being X.9999999999999999999999994 when you expected it to be (and under "grade school" math, it would be) X+1.
You can use bitwise operators to encode options and flags.
e.g., encoding car features:
const automaticWindows = 0b1; // 1
const manualTransmission = 0b10; // 2
const hasAC = 0b100; // 4
const hasHeater = 0b1000; // 8
If my car has automatic windows and AC, but nothing else, then I'd do this:
let myCarOptions = automaticWindows | hasAC;
Then, to check if some random car has AC, I can do:
if (randomCarOption & hasAC) {
  // do something... like turn on AC
}
At the end of the day, bitwise operators simply allow you to do logic on individual bits, which is the core of how computing works.
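Beyond setting and testing flags, the same operators clear and toggle them; a small sketch continuing the car example above:

myCarOptions |= hasHeater; // set: add the heater
myCarOptions &= ~hasAC; // clear: ~ flips every bit, & keeps all the others
myCarOptions ^= manualTransmission; // toggle: flip just that one flag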
I've noticed that people use different ways to calculate the midpoint of an ordered array and its subarrays. This is often used in binary search.
The first method seems a bit better as it is simpler. Does the second way offer any advantages?
const mid = Math.round((left + right) / 2);
and
const mid = left + Math.round((right - left) / 2);
and (per an answer):
const mid = ( left + right ) >>> 1;
First of all, I don't think that Math.round is what you will find most often, because it rounds any value of the form integer + 0.5 upwards, e.g. Math.round(3.5) === 4. This is not the most common way to find the integer midpoint. The main reason is that the alternative ways involving integers (as opposed to floating point numbers) all round downwards:
(left + right) >>> 1 (>>> is the unsigned right shift)
~~((left + right) * 0.5) (~~ is a way to cast to integer)
in other languages: integer division by 2 of left + right
These integer-based ways are probably faster than calling Math.round(); check it in your browser. Furthermore, JavaScript is an exception among languages in that it doesn't differentiate between integers and floats, except within operations. Most languages explicitly use integers for array indices, so for them the right shift above is obviously the fastest way to divide by 2. This has also influenced the literature, so you will find ⌊(left+right)/2⌋ much more often than ⌈(left+right)/2⌉.
If you insist on using the Math library, this means using Math.floor((left + right) / 2).
The only reason not to use integer operations could be that integers are signed 32-bit integers in JavaScript, while floats are 64-bit IEEE 754 floats, which have 53 significant bits (including an implied 1 bit). If your array indices can exceed 2147483647 (2^31 - 1), you cannot use integer operations. This situation is unlikely, though, also because the length of a JavaScript array cannot exceed 4294967295 (2^32 - 1) anyway.
As for your question, the second form is not only a bit longer on screen, it also implies one more math operation. The second form would have an advantage over the first if indices could get close to Number.MAX_SAFE_INTEGER, which is 9007199254740991 (2^53 - 1), because the subtraction would allow indices closer to that limit, but as we have seen this is not possible for JavaScript arrays.
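For completeness, here's a quick console check of how the variants differ on a tie (indices chosen arbitrarily for illustration):

const left = 3, right = 8; // true midpoint is 5.5
Math.round((left + right) / 2); // 6: rounds the tie upwards
Math.floor((left + right) / 2); // 5: rounds downwards, the common convention
(left + right) >>> 1; // 5: same result, integer-only
left + Math.floor((right - left) / 2); // 5: the overflow-safe variant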
I'm developing a 3D space game which uses a lot of math: formulas, navigation, easing effects, rotations, huge distances between planets, object masses, and so on...
My question is: what would be the best way of doing this math? Should I calculate everything as integers and end up with really large integers (over 20 digits), or use small numbers with decimals?
In my experience, math with decimal numbers is not exact, causing strange behavior when using large numbers with decimals.
I would avoid using decimals. They have known issues with precision: http://floating-point-gui.de/
I would recommend using integers, though if you need to work with very large integers I would suggest using a big number or big integer library such as one of these:
http://jsfromhell.com/classes/bignumber
https://silentmatt.com/biginteger/
The downside is you have to use these number objects and their methods rather than the primitive Number type and standard JS operators, but you'll have a lot more flexibility with operating on large numbers.
Edit:
As le_m pointed out, another downside is speed. The library methods won't run as fast as the native operators. You'll have to test for yourself to see if the performance is acceptable.
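For a sense of that trade-off, here is exact integer math with the native BigInt type, which plays the same role as those libraries (note the n suffix, and that BigInt and Number values can't be mixed in one expression):

console.log(9007199254740992 + 1 === 9007199254740992); // true: Number has run out of integer precision
console.log(9007199254740992n + 1n === 9007199254740992n); // false: BigInt stays exact
console.log((2n ** 64n).toString()); // "18446744073709551616", exact at any size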
Use the JavaScript Number Object
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number
Number.MAX_SAFE_INTEGER
The maximum safe integer in JavaScript (2^53 - 1).
Number.MIN_SAFE_INTEGER
The minimum safe integer in JavaScript (-(2^53 - 1)).
var biggestInt = 9007199254740991; // Number.MAX_SAFE_INTEGER
var smallestInt = -9007199254740991; // Number.MIN_SAFE_INTEGER
var biggestNum = Number.MAX_VALUE;
var smallestNum = Number.MIN_VALUE;
var infiniteNum = Number.POSITIVE_INFINITY;
var negInfiniteNum = Number.NEGATIVE_INFINITY;
var notANum = Number.NaN;
console.log(biggestInt); // 9007199254740991
console.log(smallestInt); // -9007199254740991
console.log(biggestNum); // 1.7976931348623157e+308
console.log(smallestNum); // 5e-324
console.log(infiniteNum); // Infinity
console.log(negInfiniteNum); // -Infinity
console.log(notANum); // NaN
debugger;
I can only imagine that this is a sign of a bigger problem with your application, complicating something that could be very simple.
Please read the section on numeric literals:
http://www.ecma-international.org/ecma-262/5.1/#sec-7.8.3
Once the exact MV for a numeric literal has been determined, it is then rounded to a value of the Number type. If the MV is 0, then the rounded value is +0; otherwise, the rounded value must be the Number value for the MV (as specified in 8.5), unless the literal is a DecimalLiteral and the literal has more than 20 significant digits, in which case the Number value may be either the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit or the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit and then incrementing the literal at the 20th significant digit position. A digit is significant if it is not part of an ExponentPart and
it is not 0;
or there is a nonzero digit to its left and there is a nonzero digit, not in the ExponentPart, to its right.
Clarification
I should add that the Number object wrapper supposedly offers precision up to 100 significant digits in some browsers (going above that gives a RangeError); however, most environments currently implement only the required 21 significant digits.
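You can poke at that boundary with toPrecision (output from a typical engine; the trailing digits expose the underlying binary value):

(1/3).toPrecision(21); // "0.333333333333333314830"
(1/3).toPrecision(100); // a 100-digit string on modern engines, a RangeError on older ones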
Reading through the OP's original question, I believe skyline provided the best answer by recommending a library which offers well over 100 significant digits (some of the tests that I got to pass were using 250 significant digits). In reality, it would be interesting to see someone revive one of those projects again.
The distance from our Sun to Alpha Centauri is 4.153×10^18 cm. You can represent this value well with the Number datatype, which stores values up to 1.7977×10^308 with about 17 significant figures.
However, what if you want to model a spaceship stationed at Alpha Centauri?
Due to the limited precision of Number, you can either store the value 4153000000000000000 or 4153000000000000500, but nothing in between. This means that you would have a maximum spatial resolution of 500 cm at Alpha Centauri. Your spaceship would look really clunky.
Could we use a datatype other than Number? Of course you could use a library such as BigNumber.js, which provides support for nearly unlimited precision. You can park your spaceship one millimeter next to the hot core of Alpha Centauri without (numerical) issues:
var pos_acentauri = new BigNumber(4153000000000000000);
var pos_spaceship = pos_acentauri.plus(0.1); // one millimeter from Alpha Centauri
console.log(pos_spaceship.toString()); // 4153000000000000000.1
<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/2.3.0/bignumber.min.js"></script>
However, not only would the captain of that ship burn to death, but your 3D engine would be slow as hell, too. That is because Number allows for really fast arithmetic computations in constant time, whereas e.g. BigNumber addition time grows with the size of the stored value.
Solution: Use Number for your 3D engine. You could use different local coordinate systems, e.g. one for Alpha Centauri and one for our solar system. Only use BigNumber for things like the HUD, game stats, and so on.
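A minimal sketch of that split (all names here are illustrative, not from any library): the engine works with small Number offsets relative to a per-system origin, and only the HUD touches BigNumber:

var systems = {
  sol: { originCm: new BigNumber(0) },
  alphaCentauri: { originCm: new BigNumber('4153000000000000000') }
};
var shipLocalX = 0.1; // engine-side position: a plain Number, 1 mm from the local origin
function hudAbsolutePosition(systemName, localX) {
  // BigNumber only at the display boundary, where speed doesn't matter
  return systems[systemName].originCm.plus(localX).toString();
}
console.log(hudAbsolutePosition('alphaCentauri', shipLocalX)); // 4153000000000000000.1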
The problem with BigNumber is precision loss from using numeric literals with more than 15 significant digits.
My solution would be a combination of BigNumber and web3.js:
var web3 = new Web3();
let one = new BigNumber("1234567890121234567890123456789012345");
let two = new BigNumber("1000000000000000000");
let three = new BigNumber("1000000000000000000");
const minus = two.times(three).minus(one);
const plus = one.plus(two.times(three));
const compare = minus.comparedTo(plus);
const results = {
  minus: web3.toBigNumber(minus).toString(10),
  plus: web3.toBigNumber(plus).toString(10),
  compare
};
console.log(results); // {minus: "-234567890121234567890123456789012345", plus: "2234567890121234567890123456789012345", compare: -1}
I was making a calculator (something like Excel in JavaScript) and I found some strange behavior with parseFloat.
parseFloat(999999999999999) //999999999999999
parseFloat(9999999999999999) //10000000000000000
parseFloat(9999999999999899) //9999999999999900
Is there a limit to the parseFloat function in JavaScript? Going by the ECMA standard, there shouldn't be an issue here.
Float is not an endless container. Consider this example:
console.log(0.1 + 0.2 == 0.3) // Prints... FALSE!
Or, another case:
console.log(99999999999999999999999999999999999) // Prints 1e+35
...while 1e+35 is just a 1 with 35 zeroes. The original number (9999...) is so large and precise that JS starts cutting off the lower digits in order to store at least something; the source is too big to fit in a float.
This actually happens because of the internal float conversions made by the JavaScript engine, and the philosophy of the float type is that higher digits are more important than lower ones.
Your case is similar: floating point accuracy depends on the length of the value, so if your value is too big or too small, you will lose precision in the lower digits.
Thus you should never fully trust floats, and never compare them to other values using '==' or '===': they may be anything.
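A common workaround (a sketch; the scale factor below is one simple choice among several) is to compare with a tolerance instead of ===:

function nearlyEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON * Math.max(1, Math.abs(a), Math.abs(b));
}
console.log(0.1 + 0.2 === 0.3); // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true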
I am not an expert in bitwise operators, but I often see a pattern used by programmers of 256k demos at competitions: instead of the Math.floor() function, the double bitwise NOT operator ~~ is used (maybe because it's faster?).
Like this:
Math.floor(2.1); // 2
~~2.1 // 2
A search revealed that there are more patterns used the same way:
2.1 | 0 // 2
2.1 >> 0 // 2
When playing with this in the dev console, I noticed a behavior that I'm not sure I fully understand.
Math.floor(2e+21); // 2e+21
~~2e+21; // -1119879168
2e+21 | 0; // -1119879168
What is happening under the hood?
As Felix King pointed out, the numbers are being converted to 32-bit signed integers. 2e9 is less than the maximum positive value of a signed int, so this works:
~~(2e9) //2000000000
But when you go to 2e10, it can't use all the bits, so it just takes the lowest 32 bits and converts them to an int:
~~(2e10) //-1474836480
You can verify this by using another bitwise operator and confirming that it's grabbing the lowest 32 bits:
2e10 & 0xFFFFFFFF // also -1474836480
~~(2e10 & 0xFFFFFFFF) // also -1474836480
Math.floor is built to account for large numbers, so if accuracy over a big range is important, you should use it.
Also of note: ~~ does truncation, which is the same as flooring for positive numbers only. It won't work for negatives:
Math.floor(-2.1) // -3
~~(-2.1) // -2
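As an aside, modern engines also offer Math.trunc, which truncates the way ~~ does but without the 32-bit limit:

Math.trunc(-2.1); // -2, same as ~~(-2.1)
Math.trunc(2e21); // 2e+21, unlike ~~2e21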
As stated in the MDN Docs, and here I quote,
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
This means that when you apply a bitwise operator, for instance ~, to 2.1, it is first converted to an integer, and only then is the operator applied. This effectively achieves the rounding-down (floor) effect for positive numbers.
As to why these operators are used, instead of the much nicer to grasp Math.floor, there are two main reasons. For one, these operators may be considerably faster to achieve the same result. Besides performance, some people just want the shortest code possible. All three operators you mentioned achieve the same effect, but ~~ just happens to be the shortest, and arguably the easiest to remember.
Given that the float-to-integer conversion happens before the bitwise operators are applied, let's see what happens with ~~. I'll represent our target number (2, after the conversion from 2.1) using 8 bits instead of 32, for brevity.
2: 0000 0010
~2: 1111 1101 (-3)
~~2: 0000 0010
So, you see, we apply an operator to keep only the integer part, but we can't apply just one bitwise NOT, because it would mess up the result; we revert it to the desired value by applying the operator a second time.
Regarding your last example, take into account that the number you're testing with, 2e+21, is a relatively big number: a 2 followed by twenty-one zeroes. It simply doesn't fit in a 32-bit integer (the data type it is converted to when you apply the bitwise operators). Just look at the difference between your number and what a 32-bit signed integer can represent.
Max. Integer: 2147483647
2e+21: 2000000000000000000000
How about binary?
Max. Integer: 01111111111111111111111111111111
2e+21: 11011000110101110010011010110111000101110111101010000000000000000000000
Quite big, huh?
What really happens under the hood is that JavaScript truncates your big number to what it can represent in 32 bits.
110110001101011100100110101101110001011 10111101010000000000000000000000
^---------------- Junk ---------------^ ^------ lowest 32 bits --------^
When we convert our truncated number to decimal, we get back what you're seeing.
Bin: 10111101010000000000000000000000
Dec: -1119879168
Conversely, Math.floor accounts for big numbers and avoids truncating them, which is one possible reason for it being slower, although it stays accurate.
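You can reproduce that truncation by hand; here is a sketch of the ToInt32 idea (ignoring NaN/Infinity handling): reduce modulo 2^32, then reinterpret bit 31 as the sign bit.

function toInt32(x) {
  var n = Math.trunc(x) % Math.pow(2, 32);
  if (n < 0) n += Math.pow(2, 32); // normalize into [0, 2^32)
  return n >= Math.pow(2, 31) ? n - Math.pow(2, 32) : n;
}
console.log(toInt32(2e21)); // -1119879168, same as ~~2e21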
I was bored, so I started fiddling around in the console, and stumbled onto this (ignore the syntax error):
Some variable "test" has a value; when I multiply it by 10K, it suddenly changes into a different number (you could call it a rounding error, but that depends on how much accuracy you need). I then multiply that number by 10, and it changes back/again.
That raises a few questions for me:
How inaccurate is JavaScript? Has this been determined, i.e., is there a known error margin that can be taken into account?
Is there a way to fix this, i.e., to do math in JavaScript with complete accuracy (within the limitations of its datatype)?
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
I'm not sure whether this should be a separate question, but I was actually trying to round numbers to a certain number of digits after the decimal point. I've researched it a bit and have found two methods:
> Method A
function roundNumber(number, digits) {
  var multiple = Math.pow(10, digits);
  return Math.round(number * multiple) / multiple;
}
> Method B
function roundNumber(number, digits) {
  return Number(number.toFixed(digits));
}
Intuitively I like method B more (it looks more efficient), but I don't know what's going on behind the scenes, so I can't really judge. Does anyone have an idea on that, or a way to benchmark the two? And why is there no native round_to_this_many_decimals function? (One that returns an integer, not a string.)
How inaccurate is JavaScript?
JavaScript uses standard double-precision floating point numbers, so the precision limitations are the same as in any other language that uses them, which is most languages. It's the native format used by the processor to handle floating point numbers.
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
No. The precision limitation lies in the way the number is stored. Floating point numbers don't have complete accuracy, so no matter how you do the calculations, you can't achieve absolute accuracy, as the result goes back into a floating point number.
If you want complete accuracy then you need to use a different data type.
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
It's changing again.
When a number is converted to text to be displayed, it's rounded to a certain number of digits. The numbers that look exact aren't; it's just that the limitations in precision don't show up.
When the number "changes back", it's just that the rounding once again hides the limitations in precision. Each calculation adds or subtracts a small inaccuracy, and sometimes it just happens to take the number closer to the one you had originally. Even though it looks more accurate, it's actually less accurate, as each calculation adds a bit of uncertainty.
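You can make the hidden inaccuracy visible by asking for more digits than the default display shows (output from a typical engine):

var x = 0.1;
console.log(x); // 0.1 (rounded for display)
console.log(x.toPrecision(25)); // 0.1000000000000000055511151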
Internally, JavaScript uses 64-bit IEEE 754 floating-point numbers, which are a widely used standard and usually guarantee about 16 digits of accuracy. The error you witnessed was in the 17th significant digit of the number and was really tiny.
Is there a way to [...] do math in JavaScript with complete accuracy (within the limitations of its datatype)?
I would say that JavaScript's math is completely accurate within the limitations of its datatype. The error you witnessed was outside of those limitations.
Are you working with calculations that require a higher degree of precision than that?
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
The number never really became more or less accurate than the original value. It was only when the value was converted into a decimal value that a rounding error became apparent. But this was not a case of the value "changing back" to an accurate number. The rounding error was just too small to display.
And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
"Why is the language this way" questions are not considered very productive here, but it is easy to get around this limitation (assuming you mean numbers and not integers). This answer has 337 upvotes: +numb.toFixed(digits);, but note that if you try to display a number produced with that expression, there's no guarantee that it will actually display with only six digits. That's probably one of the reasons why JavaScript's "round to N places" function produces a string and not a number.
I came across the same issue a few times, and with further research I was able to solve it by using the library below.
Math.js Library
Sample
import {
  atan2, chain, derivative, e, evaluate, log, pi, pow, round, sqrt
} from 'mathjs'
// functions and constants
round(e, 3) // 2.718
atan2(3, -3) / pi // 0.75
log(10000, 10) // 4
sqrt(-4) // 2i
pow([[-1, 2], [3, 1]], 2) // [[7, 0], [0, 7]]
derivative('x^2 + x', 'x') // 2 * x + 1
// expressions
evaluate('12 / (2.3 + 0.7)') // 4
evaluate('12.7 cm to inch') // 5 inch
evaluate('sin(45 deg) ^ 2') // 0.5
evaluate('9 / 3 + 2i') // 3 + 2i
evaluate('det([-1, 2; 3, 1])') // -7
// chaining
chain(3)
  .add(4)
  .multiply(2)
  .done() // 14