Best way to calculate the midpoint of an array? - javascript

I've noticed that people use different ways to calculate the midpoint of an ordered array and its subarrays. This comes up often in binary search.
The first method seems a bit better as it is simpler. Does the second way offer any advantages?
const mid = Math.round((left + right) / 2);
and
const mid = left + Math.round((right - left) / 2);
and (per an answer below)
const mid = ( left + right ) >>> 1;

First of all, I don't think that Math.round is what you will find most often, because it rounds any value of the form n + 0.5 upwards; e.g., Math.round(3.5) === 4. It is not the most common way to find the integer midpoint, mainly because the alternative ways involving integers (as opposed to floating-point numbers) all round downwards:
(left + right) >>> 1 (>>> is the unsigned right shift)
~~((left + right) * 0.5) (~~ is a way to cast to integer)
in other languages: integer division by 2 of left + right
These integer-based ways are probably faster than calling Math.round(); check it in your browser. Furthermore, JavaScript is an exception among languages in that it doesn't differentiate between integers and floats, except within certain operations. Most languages explicitly use integers for array indices, so for them the right shift above is obviously the fastest way to divide by 2. This has also influenced the literature, so you will find ⌊(left+right)/2⌋ much more often than ⌈(left+right)/2⌉.
If you insist on using the Math library, this means using Math.floor((left + right) / 2).
The only reason not to use integer operations could be that integers in JavaScript's bitwise operations are signed 32-bit values, while floats are 64-bit IEEE 754 doubles, which have 53 significant bits (including an implied 1 bit). If your array indices can exceed 2147483647 (2^31 − 1), you cannot use the integer operations. This situation is unlikely, though, also because the length of a JavaScript array cannot exceed 4294967295 (2^32 − 1) anyway.
As for your question, the second form is not only a bit longer on screen, it also implies one more arithmetic operation. The second form would have an advantage over the first if indices could get close to Number.MAX_SAFE_INTEGER, which is 9007199254740991 (2^53 − 1), because the subtraction would allow indices to be closer to that limit, but as we have seen this is not possible for JavaScript arrays.
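For reference, here is a minimal sketch of a binary search using the floor-based midpoint; the commented-out lines show the integer-shift and overflow-safe forms discussed above (the function name and test data are only illustrative):

function binarySearch(arr, target) {
    let left = 0;
    let right = arr.length - 1;
    while (left <= right) {
        // Floor-based midpoint, as recommended above.
        const mid = Math.floor((left + right) / 2);
        // const mid = (left + right) >>> 1;                  // same result for valid array indices
        // const mid = left + Math.floor((right - left) / 2); // overflow-safe form (matters in other languages)
        if (arr[mid] === target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3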

Related

Javascript scientific notation floats vs integers explained

I am working with JS numbers and lack experience with them, so I would like to ask a few questions:
2.2932600144518896e+160
Is this a float or an integer? If it's a float, how can I round it to two decimals (to get 2.29)? And if it's an integer, I suppose it's a very large number and I have a different problem then.
Thanks
Technically, as said in comments, this is a Number.
What you can do if you want the number (not its string representation):
var x = 2.2932600144518896e+160;
var magnitude = Math.floor(Math.log10(x)) + 1;
console.log(Math.round(x / Math.pow(10, magnitude - 3)) * Math.pow(10, magnitude - 3));
What's the problem with that? Floating-point operations may not be precise, so some digits other than 0 may still appear after the rounding.
To get this number truly "rounded", you can only do it through its string representation (but then you can't do any arithmetic with it).
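If a string result is acceptable, toExponential (or toPrecision) performs that rounding directly; a small sketch:

var x = 2.2932600144518896e+160;

console.log(x.toExponential(2)); // "2.29e+160"
console.log(x.toPrecision(3));   // "2.29e+160"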
JavaScript only has one Number type, so it is technically neither a float nor an integer.
However, this isn't really relevant, as the value (or rather its representation) is not specific to JavaScript and uses E notation, which is a standard way to write very large or very small numbers.
Taking this into account, 2.2932600144518896e+160 is equivalent to 2.2932600144518896 * Math.pow(10, 160), which is approximately 229 followed by 158 zeroes, i.e. very flippin' big.

Using a bitwise AND 1

If x is a floating point number, does
y = x & 1
have a purpose other than checking whether x is odd?
I just read Using bitwise OR 0 to floor a number and wonder, are there other interesting bitwise manipulations?
If x is logically floating point (not intended to be an integer value before the test), & 1 has exactly one purpose: To determine if the truncated value of x is odd (it's a combined truncation and bitwise test). It's not saying if x is odd (-2.9 is neither odd nor even, nor is -3.9, and they give opposite results), and it's not just truncating (because it throws away all but one bit of data); the test is intrinsically combining both effects, and as such, is not useful for anything else when x is an arbitrary floating point value.
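A quick illustration of that combined truncate-and-test behaviour:

// The & operator first truncates its operands to 32-bit integers (toward zero),
// then tests the low bit:
console.log(-2.9 & 1); // 0 (truncates to -2, which is even)
console.log(-3.9 & 1); // 1 (truncates to -3, which is odd)
console.log( 7.9 & 1); // 1 (truncates to 7)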
As other answer(s) mention, bitwise operations have other legitimate uses, e.g. cryptography, or repurposing an integer as a vector of boolean flags, but that's only relevant for logically integer values; performing floating point math and then relying on specific integer values after truncation, without rounding, is a terrible idea, and will bite you when the result ends up being X.9999999999999999999999994 when you expected it to be (and under "grade school" math, it would be) X+1.
You can use bitwise operators to encode options and flags.
i.e. encoding car features:
automaticWindows = 0b1; // 1
manualTransmission = 0b10; // 2
hasAC = 0b100; // 4
hasHeater = 0b1000; // 8
If my car has automatic windows and AC, but nothing else, then I'd do this:
myCarOptions = automaticWindows | hasAC;
Then to check if some random car has AC I can do:
if (randomCarOption & hasAC) {
// do something... like turn on AC
}
At the end of the day, bitwise operators simply allow you to do logic on the individual bits, which is the core of how computing works.
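A minimal, self-contained version of that idea (using the same flag names as above):

const automaticWindows = 0b1;    // 1
const manualTransmission = 0b10; // 2
const hasAC = 0b100;             // 4
const hasHeater = 0b1000;        // 8

// Combine flags with OR, test them with AND:
const myCarOptions = automaticWindows | hasAC; // 0b101 === 5

console.log(Boolean(myCarOptions & hasAC));     // true
console.log(Boolean(myCarOptions & hasHeater)); // false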

A safe way to divide two floating point numbers?

What is the safest way to divide two IEEE 754 floating point numbers?
In my case the language is JavaScript, but I guess this isn't important. The goal is to avoid the normal floating point pitfalls.
I've read that one could use a "correction factor" (cf), e.g. 10 raised to some power, for instance 10^10, like so:
(a * cf) / (b * cf)
But I'm not sure this makes any difference for division?
Incidentally, I've already looked at the other floating-point posts on Stack Overflow and still haven't found a single post on how to divide two floating-point numbers. If the answer is that there is no difference between the workarounds for addition and those for division, then just answer that, please.
Edit:
I've been asked in the comments which pitfalls I'm referring to, so I thought I'd just add a quick note here as well for the people who don't read the comments:
When adding 0.1 and 0.2, you would expect to get 0.3, but with floating point arithmetic you get 0.30000000000000004 (at least in JavaScript). This is just one example of a common pitfall.
The above issue is discussed many times here on Stack Overflow, but I don't know what can happen when dividing and whether it differs from the pitfalls found when adding or multiplying. It might be that there are no risks, in which case that would be a perfectly good answer.
The safest way is to simply divide them. Any prescaling will either do nothing, or increase rounding error, or cause overflow or underflow.
If you prescale by a power of two you may cause overflow or underflow, but will otherwise make no difference in the result.
If you prescale by any other number, you will introduce additional rounding steps on the multiplications, which may lead to increased rounding error on the division result.
If you simply divide, the result will be the closest representable number to the ratio of the two inputs.
IEEE 754 64-bit floating point numbers are incredibly precise. A difference in one part in almost 10^16 can be represented.
There are a few operations, such as floor and exact comparison, that make even extremely low significance bits matter. If you have been reading about floating point pitfalls you should have already seen examples. Avoid those. Round your output to an appropriate number of decimal places. Be careful adding numbers of very different magnitude.
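A small sketch of those two points (the values mirror the program below):

// Division returns the closest representable double to the true ratio:
const b = 1 / 3;    // 0.3333333333333333, slightly below one third
console.log(2 / b); // 6 -- already the closest double to the exact ratio

// Adding numbers of very different magnitude silently drops the small one:
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true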
The following program demonstrates the effects of using each power of 10 from 10 through 1e20 as scale factor. Most get the same result as not multiplying, 6.0, which is also the rational number arithmetic result. Some get a slightly larger result.
You can experiment with different division problems by changing the initializers for a and b. The program prints their exact values, after rounding to double.
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        double mult = 10;
        double a = 2;
        double b = 1.0 / 3.0;
        System.out.println("a=" + new BigDecimal(a));
        System.out.println("b=" + new BigDecimal(b));
        System.out.println("No multiplier result=" + (a / b));
        for (int i = 0; i < 20; i++) {
            System.out.println("mult=" + mult + " result=" + ((a * mult) / (b * mult)));
            mult *= 10;
        }
    }
}
Output:
a=2
b=0.333333333333333314829616256247390992939472198486328125
No multiplier result=6.0
mult=10.0 result=6.000000000000001
mult=100.0 result=6.000000000000001
mult=1000.0 result=6.0
mult=10000.0 result=6.000000000000001
mult=100000.0 result=6.000000000000001
mult=1000000.0 result=6.0
mult=1.0E7 result=6.000000000000001
mult=1.0E8 result=6.0
Floating point division will produce exactly the same "pitfalls" as addition or multiplication operations, and no amount of pre-scaling will fix it - the end result is the end result and it's the internal representation of that in IEEE-754 that causes the "problem".
The solution is to completely forget about these precision issues during calculations themselves, and to perform rounding as late as possible, i.e. only when displaying the results of the calculation, at the point at which the number is converted to a string using the .toFixed() function provided precisely for that purpose.
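A minimal illustration of rounding only at display time (the number of decimal places is just an example):

const raw = 0.1 + 0.2;         // 0.30000000000000004
console.log(raw.toFixed(2));   // "0.30" -- formatted only when shown to the user

const share = 4.11 / 100;      // 0.041100000000000005
console.log(share.toFixed(4)); // "0.0411"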
.toFixed() is not a good solution for dividing float numbers.
Using JavaScript, try 4.11 / 100 and you will be surprised:
4.11 / 100 = 0.041100000000000005
Not all browsers get the same results.
The right solution is to convert the floats to integers:
parseInt(4.11 * Math.pow(10, 10)) / (100 * Math.pow(10, 10)) = 0.0411

Summing numbers javascript [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 9 years ago.
'-15.48' - '43'
Just wrote this in console and result is the following:
-58.480000000000004
Why is it so? And what can I do to get the correct result?
Because all floating-point math is like this; it is based on the IEEE 754 standard. JavaScript uses a 64-bit floating-point representation, which is the same as Java's double.
To fix it you may try:
(-15.48 - 43).toFixed(2);
Fiddle demo
Use toFixed():
var num = 5.56789;
var n = num.toFixed(2);
result: 5.57
http://en.wikipedia.org/wiki/Machine_epsilon
Humans count in decimal numbers, machines mostly use binary. 10 == 2 × 5; 2 and 5 are mutually prime numbers. That trivial fact has an unpleasant consequence, though.
In most calculations (except lucky degenerate cases) with "simple" decimal numbers that involve division, the result is an infinitely repeating binary fraction.
In most calculations (except lucky degenerate cases) with "simple" binary numbers that involve division, the result is an infinitely repeating decimal fraction.
One can check this with pen and paper as described at http://en.wikipedia.org/wiki/Repeating_decimal#Every_rational_number_is_either_a_terminating_or_repeating_decimal
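You can see the repeating binary expansion directly from JavaScript; a small sketch:

// 1/10 has no finite binary representation; the 0011 pattern repeats forever
// and is cut off at the precision of the stored double:
console.log((0.1).toString(2)); // something like "0.000110011001100110011..."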
That means that almost every time you see the result of a floating-point calculation on your screen, as a finite number, the computer is cheating you a little and showing some approximation instead of the real result.
That means that almost every time you store the result of a calculation in a variable, which has a finite size no larger than the computer's available memory, the computer is cheating you a little and retaining some approximation instead of the real result.
The typical gotchas then include the following (see the sketch after this list).
A program is accumulating the sum of some long sequence, in a loop of the AVG := AVG + X[i] kind, where 0 < X[i] < const. If the loop runs long enough, you will see that at some point AVG no longer changes; all the elements from that point on are effectively discarded.
A program calculates some value twice using different formulas and then makes a safety check like Value_1 == Value_2. In theoretical mathematics the values are equal; on real computers they are not.
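A small sketch of both gotchas:

// Gotcha 1: once the accumulator is large enough, small additions are lost entirely.
let sum = 1e16;            // neighbouring doubles are already 2 apart here
for (let i = 0; i < 1000; i++) {
    sum += 0.5;            // each 0.5 is rounded away
}
console.log(sum === 1e16); // true -- none of the 1000 additions changed the sum

// Gotcha 2: formulas that agree in exact arithmetic need not compare equal.
console.log(0.1 + 0.2 === 0.3); // false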

Math.pow() not giving precise answer

I've used Math.pow() to calculate the exponential value in my project.
Now, For specific values like Math.pow(3,40), it returns 12157665459056929000.
But when I tried the same value using a scientific calculator, it returned 12157665459056928801.
Then I tried computing it by looping up to the exponent:
function calculateExpo(base, power) {
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; // base + '^' + power + ' = ';
    for (var i = 0; i < power; i++) {
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on integer values in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, even on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so it cannot be computed accurately using Math.pow(3, 40) in JavaScript. Only numbers with at most 53 significant figures in their base-two expansion (and a similar restriction on their order of magnitude fitting within 11 bits) can be represented exactly by a double-precision floating-point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
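For what it's worth, modern JavaScript's BigInt (not available when this answer was written) can reproduce both the exact value and that base-two expansion; a small sketch:

const exact = 3n ** 40n;               // exact integer exponentiation
console.log(exact.toString());         // "12157665459056928801"
console.log(exact.toString(2).length); // 64 -- the binary string quoted above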
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 because it uses IEEE floating point for computation. You get 12157665459056928801 because it uses arbitrary precision (bignum) for computation.
JavaScript can only represent distinct integers up to 2^53 (about 16 significant digits). This is because all JavaScript numbers are internally represented as IEEE 754 base-2 doubles.
As a consequence, the result of Math.pow (even if it were accurate internally) is brutally "rounded" to the nearest value the Number type can represent, and the resulting number is thus not the correct value but the closest approximation of it that JavaScript can handle.
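A quick sketch of that cutoff (nothing here is specific to Math.pow):

const p = Math.pow(3, 40);
console.log(p);                       // 12157665459056929000
console.log(Number.isSafeInteger(p)); // false -- beyond the 2^53 exact-integer range
console.log(p + 1 === p);             // true  -- neighbouring doubles are 2048 apart at this magnitude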
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff, so you can see how this affects the result:
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ...
... but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
Can't say I know for sure, but this does look like a range problem.
I believe it is common for math libraries to implement exponentiation using logarithms. This requires that both values be turned into floats, and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.
