Convert Excel Formula to JavaScript

I have this formula in Excel:
=(1130000000000*F11^1.85)/(F19^1.85*(F13*(1-2/F15))^4.8655)
when
F11 = q
F19 = 150 (Constant)
F13 = d
F15 = sdr
I converted it to this:
Math.round((1130000000000*Math.pow(q,1.85))/(Math.pow(150,1.85)*Math.pow(d*(1-2/sdr)),4.8655))
but the results are wrong
when
q = 120
d = 200
sdr = 17
the result should be 8.76,
but I am getting long numbers.
Any help?
Thanks

From YUI blog
JavaScript has a single number type: IEEE 754 Double Precision floating point. Having a single number type is one of JavaScript’s best features. Multiple number types can be a source of complexity, confusion, and error. A single type is simplifying and stabilizing.
Unfortunately, a binary floating point type has some significant disadvantages. The worst is that it cannot accurately represent decimal fractions, which is a big problem because humanity has been doing commerce in decimals for a long, long time. There would be advantages to switching to a binary-based number system, but that is not going to happen. As a consequence, 0.1 + 0.2 === 0.3 is false, which is the source of a lot of confusion.
http://www.yuiblog.com/blog/2009/03/10/when-you-cant-count-on-your-numbers/

Completely ignore my previous answer. Although large-number precision is a problem you will definitely encounter should you continue with numbers this big, your actual issue is a misplaced bracket in your code. If you use this (simply replacing the variable names with values):
Math.round((1130000000000*Math.pow(120,1.85))/(Math.pow(150,1.85)*Math.pow((200*(1-2/17)),4.8655)))
It returns 9 (a rounded 8.76). The misplaced bracket is the one closing Math.pow: in your version it closes right after (1-2/sdr), so the exponent 4.8655 never makes it into the call.
For future reference, the largest integer JavaScript can represent without losing precision is +/- 9007199254740992 (2^53); beyond that, not every integer has an exact representation.
References:
What is JavaScript's highest integer value that a Number can go to without losing precision?
ECMA-262 - The Number Type
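For reference, here is the corrected expression wrapped in a small helper function (the name pressureLoss and the two-decimal formatting are just for illustration):
// Same formula as the spreadsheet: F11 = q, F13 = d, F15 = sdr, F19 = 150 (constant).
function pressureLoss(q, d, sdr) {
  return (1130000000000 * Math.pow(q, 1.85)) /
         (Math.pow(150, 1.85) * Math.pow(d * (1 - 2 / sdr), 4.8655));
}
console.log(pressureLoss(120, 200, 17).toFixed(2));  // "8.76"
console.log(Math.round(pressureLoss(120, 200, 17))); // 9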

Will I possibly lose any decimal digits (precision) when multiplying Number.MAX_SAFE_INTEGER by Math.random() in JavaScript?
I presume I won't, but it'd be nice to have a credible explanation as to why 😎
Edited: In layman's terms, we're dealing with two IEEE 754 double-precision floating-point numbers; one is the maximal integer (for double precision), the other is fractional with quite a few digits after the decimal point. What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double precision, would the result be any different?
const max = Number.MAX_SAFE_INTEGER;
const random = Math.random();
console.log(`\
MAX_SAFE_INTEGER: ${max}, \
random: ${random}, \
product: ${max * random}`);
For more elaborate examples, I use it to generate BigInt random numbers.
Your implementation should be safe - in theory, all numbers between 0 and MAX_SAFE_INTEGER should have a possibility of appearing, if the engine implementing Math.random uses a completely unbiased algorithm.
But an absolutely unbiased algorithm is not guaranteed by the specification - the numbers chosen are meant to be pseudo-random, not truly, completely random (does such a thing even exist? it's debatable...). Modern versions of V8 and some other implementations use an algorithm with a period on the order of 2 ** 128, larger than MAX_SAFE_INTEGER (2 ** 53 - 1) - but it'd be completely plausible for other implementations (especially older ones) to have a much smaller period, resulting in certain integers within the range being picked much more often than others.
If this is important for your script (which is pretty unlikely in most situations, I'd think), you might consider using a higher-quality random generator than Math.random - but it's almost certainly not worth worrying about.
What if (say) I first converted them to quadruple-precision format, then multiplied, and then converted the product back to double-precision, would the result be any different?
It could be in cases where the rounding behaves differently between multiplying two doubles vs converting quadruple to double, but the main problem remains the same. The spacing between representable doubles in the range from 2^n to 2^(n+1) is 2^(n-52). So between 2^52 and 2^53 only whole numbers can be represented, between 2^51 and 2^52 only every 0.5 can be represented, etc.
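You can see that spacing directly in a modern console (a minimal sketch; the literals are only there to illustrate the step sizes):
console.log(2 ** 51 + 0.5); // 2251799813685248.5 - spacing is 0.5 between 2^51 and 2^52
console.log(2 ** 52 + 0.5); // 4503599627370496   - the 0.5 is rounded away: spacing is 1 here
console.log(2 ** 53 + 1);   // 9007199254740992   - the +1 is lost: spacing is 2 from 2^53 up
console.log(2 ** 53 + 2);   // 9007199254740994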
If you want more precision you could try decimal.js. The library is included on that documentation page so you can try these out in your console.
Number.MAX_SAFE_INTEGER*.9
8106479329266892
new Decimal(Number.MAX_SAFE_INTEGER).mul(new Decimal(0.9)).toString()
"8106479329266891.9"
Both answers are correct, but I couldn't help running this little experiment in C#, where double is the same thing as Number in JavaScript (fiddle):
using System;

public class Program
{
    public static void Main()
    {
        const double MAX_SAFE_INT = 9007199254740991;
        Decimal maxD = Convert.ToDecimal(MAX_SAFE_INT.ToString());
        var rng = new Random(Environment.TickCount);
        for (var i = 0; i < 1000; i++)
        {
            double random = rng.NextDouble();
            double product = MAX_SAFE_INT * random;
            // converting via string to workaround the "15 significant digits" limitation for Decimal(Double)
            Decimal randomD = Decimal.Parse(String.Format("{0:F18}", random));
            Decimal productD = maxD * randomD;
            double converted = Convert.ToDouble(productD);
            if (Math.Floor(converted) != Math.Floor(product))
            {
                Console.WriteLine($"{maxD}, {randomD, 22}, products: decimal {productD, 32}, converted {converted, 20}, original {product, 20}");
            }
        }
    }
}
As far as I'm concerned, I'm still getting the desired distribution of the random numbers within the 0 - 9007199254740991 range.
Here is JavaScript playground code to check for possible recurrences.
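A minimal JavaScript sketch of the same kind of distribution check (the sample size and the number of buckets are arbitrary):
// Bucket MAX_SAFE_INTEGER * Math.random() into deciles and check they come out roughly even.
const buckets = new Array(10).fill(0);
const N = 1e6;
for (let i = 0; i < N; i++) {
  const x = Number.MAX_SAFE_INTEGER * Math.random();
  buckets[Math.min(9, Math.floor(x / Number.MAX_SAFE_INTEGER * 10))]++;
}
console.log(buckets.map(c => (c / N).toFixed(3))); // each entry should be close to "0.100"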

Large numbers - Math in JavaScript

I'm developing a 3D space game which uses a lot of math: formulas, navigation, easing effects, rotations, huge distances between planets, object masses, and so on...
My question is what the best way of doing this math would be. Should I calculate everything as integers and end up with really large integers (over 20 digits), or use small numbers with decimals?
In my experience, math with decimal fractions is not accurate, causing strange behavior when using large numbers that have decimals.
I would avoid using decimals. They have known issues with precision: http://floating-point-gui.de/
I would recommend using integers, though if you need to work with very large integers I would suggest using a big number or big integer library such as one of these:
http://jsfromhell.com/classes/bignumber
https://silentmatt.com/biginteger/
The downside is you have to use these number objects and their methods rather than the primitive Number type and standard JS operators, but you'll have a lot more flexibility with operating on large numbers.
Edit:
As le_m pointed out, another downside is speed. The library methods won't run as fast as the native operators. You'll have to test for yourself to see if the performance is acceptable.
Use the JavaScript Number Object
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number
Number.MAX_SAFE_INTEGER
The maximum safe integer in JavaScript (2^53 - 1).
Number.MIN_SAFE_INTEGER
The minimum safe integer in JavaScript (-(2^53 - 1)).
var biggestInt = 9007199254740992;
var smallestInt = -9007199254740992;
var biggestNum = Number.MAX_VALUE;
var smallestNum = Number.MIN_VALUE;
var infiniteNum = Number.POSITIVE_INFINITY;
var negInfiniteNum = Number.NEGATIVE_INFINITY;
var notANum = Number.NaN;
console.log(biggestInt); // http://www.miniwebtool.com/scientific-notation-to-decimal-converter/?a=1.79769313&b=308
console.log(smallestInt); // http://www.miniwebtool.com/scientific-notation-to-decimal-converter/?a=5&b=-32
console.log(biggestNum);
console.log(smallestNum);
console.log(infiniteNum);
console.log(negInfiniteNum);
console.log(notANum);
debugger;
I can only imagine that this is a sign of a bigger problem with your application complicating something that could be very simple.
Please read the section on numeric literals:
http://www.ecma-international.org/ecma-262/5.1/#sec-7.8.3
Once the exact MV for a numeric literal has been determined, it is then rounded to a value of the Number type. If the MV is 0, then the rounded value is +0; otherwise, the rounded value must be the Number value for the MV (as specified in 8.5), unless the literal is a DecimalLiteral and the literal has more than 20 significant digits, in which case the Number value may be either the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit or the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit and then incrementing the literal at the 20th significant digit position. A digit is significant if it is not part of an ExponentPart and
it is not 0;
or there is a nonzero digit to its left and there is a nonzero digit, not in the ExponentPart, to its right.
Clarification
I should add that the Number object wrapper supposedly offers precision up to 100 significant digits in some browsers (going above that throws a RangeError), however most environments currently only implement the precision to the required 21 significant digits.
Reading through the OP's original question, I believe skyline provided the best answer by recommending a library which offers well over 100 significant digits (some of the tests that I got to pass were using 250 significant digits). In reality, it would be interesting to see someone revive one of those projects again.
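Literal rounding of this kind is easy to see in the console, even well below 20 significant digits (a quick check; the literals are arbitrary):
console.log(9007199254740993);                      // 9007199254740992 - the literal is rounded to the nearest double before any math happens
console.log(9999999999999999);                      // 10000000000000000
console.log(9007199254740993 === 9007199254740992); // true - both literals denote the same Number value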
The distance from our Sun to Alpha Centauri is 4.153×10^18 cm. You can represent this value well with the Number datatype, which stores values up to 1.7977×10^308 with about 17 significant figures.
However, what if you want to model a spaceship stationed at Alpha Centauri?
Due to the limited precision of Number, you can either store the value 4153000000000000000 or 4153000000000000512 (the next representable double), but nothing in between. This means that you would have a maximal spatial resolution of roughly 500 cm at Alpha Centauri. Your spaceship would look really clunky.
Could we use another datatype than Number? Of course you could use a library such as BigNumber.js which provides support for nearly unlimited precision. You can park your spaceship one millimeter away from the hot core of Alpha Centauri without (numerical) issues:
pos_acentauri = new BigNumber(4153000000000000000);
pos_spaceship = pos_acentauri.add(0.1); // one millimeter from Alpha Centauri
console.log(pos_spaceship.toString()); // 4153000000000000000.1
<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/2.3.0/bignumber.min.js"></script>
However, not only would the captain of that ship burn to death, but your 3D engine would be slow as hell, too. That is because Number allows for really fast arithmetic computations in constant time, whereas e. g. the BigNumber addition computation time grows with the size of the stored value.
Solution: Use Number for your 3D engine. You could use different local coordinate systems, e. g. one for Alpha Centauri and one for our solar system. Only use BigNumber for things like the HUD, game stats and so on.
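A minimal sketch of that local-coordinate idea (all names and the frame layout here are made up for illustration):
// Keep positions relative to the origin of the local star system, so the numbers
// the 3D engine crunches stay small and precise (plain Numbers).
const frames = {
  sol:           { originCm: 0 },
  alphaCentauri: { originCm: 4.153e18 }   // distance from Sol in cm, HUD use only
};
function localToGlobal(frameName, localCm) {
  // Only for coarse things like the HUD - precision is limited at this scale.
  return frames[frameName].originCm + localCm;
}
const shipLocalCm = 0.1; // 1 mm from the Alpha Centauri origin - exact as a local Number
console.log(shipLocalCm);                                 // 0.1
console.log(localToGlobal('alphaCentauri', shipLocalCm)); // 4153000000000000000 - the millimeter is lost globally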
The problem with BigNumber is its "Precision loss from using numeric literals with more than 15 significant digits" error.
My solution would be a combination of BigNumber and web3.js:
var web3 = new Web3();
let one = new BigNumber("1234567890121234567890123456789012345");
let two = new BigNumber("1000000000000000000");
let three = new BigNumber("1000000000000000000");
const minus = two.times(three).minus(one);
const plus = one.plus(two.times(three));
const compare = minus.comparedTo(plus);
const results = {
  minus: web3.toBigNumber(minus).toString(10),
  plus: web3.toBigNumber(plus).toString(10),
  compare
};
console.log(results); // {minus: "-234567890121234567890123456789012345", plus: "2234567890121234567890123456789012345", compare: -1}

Buffer reading and writing floats

When I write a float to a buffer, it does not read back the same value:
> var b = new Buffer(4);
undefined
> b.fill(0)
undefined
> b.writeFloatBE(3.14159,0)
undefined
> b.readFloatBE(0)
3.141590118408203
>
Why?
EDIT:
My working theory is that because javascript stores all numbers as double precision, it's possible that the buffer implementation does not properly zero the other 4 bytes of the double when it reads the float back in:
> var b = new Buffer(4)
undefined
> b.fill(0)
undefined
> b.writeFloatBE(0.1,0)
undefined
> b.readFloatBE(0)
0.10000000149011612
>
I think it's telling that we have zeros for 7 digits past the decimal (well, 8 actually) and then there's garbage. I think there's a bug in the node buffer code that reads these floats. That's what I think. This is node version 0.10.26.
Floating point numbers ("floats") are usually not an exact representation of a decimal number; this is common behavior across languages, not just JavaScript / Node.js. For example, I encountered something similar in C# when using float instead of double.
Double-precision floating point numbers are more accurate and should better meet your expectations. Try changing the above code to write to the buffer as a double instead of a float:
var b = new Buffer(8);
b.fill(0);
b.writeDoubleBE(3.14159, 0);
b.readDoubleBE(0);
This will return:
3.14159
EDIT:
Wikipedia has some pretty good articles on floats and doubles, if you're interested in learning more:
http://en.wikipedia.org/wiki/Floating_point
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
SECOND EDIT:
Here is some code that illustrates the limitation of the single-precision vs. double-precision float formats, using typed arrays. Hopefully this can act as proof of this limitation, as I'm having a hard time explaining in words:
var floats32 = new Float32Array(1),
    floats64 = new Float64Array(1),
    n = 3.14159;
floats32[0] = n;
floats64[0] = n;
console.log("float", floats32[0]);
console.log("double", floats64[0]);
This will print:
float 3.141590118408203
double 3.14159
Also, if my understanding is correct, single-precision floating point numbers can store up to 7 significant digits in total, not 7 digits after the decimal point. This means that they should be accurate up to about 7 significant digits, which lines up with your results (3.14159 has 6 significant digits; taking the first 7 significant digits of 3.141590118408203 gives 3.141590, which equals 3.14159).
readFloat in Node is implemented in C++ and the bytes are interpreted exactly the way your compiler stores/reads them. I doubt there is a bug here. What I think is that "7 digits" is an incorrect assumption for float. This answer suggests 6 digits (it is the value of std::numeric_limits<float>::digits10), so the result of readFloatBE is within the expected error.
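As a quick cross-check (assuming a reasonably recent Node or browser; Math.fround is ES2015), you can reproduce the single-precision rounding without a Buffer at all:
console.log(Math.fround(3.14159));                // 3.141590118408203
console.log(Math.fround(0.1));                    // 0.10000000149011612
// Limiting the output to single precision's ~7 significant digits recovers the written value:
console.log(Math.fround(3.14159).toPrecision(7)); // "3.141590"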

Math.pow() not giving precise answer

I've used Math.pow() to calculate the exponential value in my project.
Now, for specific values like Math.pow(3,40), it returns 12157665459056929000.
But when I tried the same value using a scientific calculator, it returns 12157665459056928801.
Then I tried computing the power with a loop:
function calculateExpo(base, power){
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; //base + '^' + power + ' = ';
    for(var i = 0; i < power; i++){
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on integer values in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, even on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), hence it cannot be computed accurately using Math.pow(3, 40) in JavaScript. Only numbers with 53 or fewer significant figures in their base-two expansion (the 52 stored bits plus the implied leading 1), and a similar restriction on their order of magnitude fitting within 11 bits, have a chance to be represented accurately by a double-precision floating point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
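A couple of quick console checks consistent with the numbers above (Number.isSafeInteger is ES2015):
console.log(Math.log2(Math.pow(3, 40)));            // ~63.3985 - 3^40 needs 64 bits
console.log(Number.isSafeInteger(Math.pow(3, 40))); // false - the result is above 2^53 - 1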
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 because it uses IEEE floating point for computation. You get 12157665459056928801 because it uses arbitrary precision (bignum) for computation.
JavaScript can only represent distinct integers up to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE-754 base-2 doubles.
As a consequence, the result from Math.pow (even if it was accurate internally) is brutally "rounded" so that it still fits in a JavaScript Number - and the resulting number is thus not the correct value, but the closest approximation of it that JavaScript can represent.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff so it can be seen how this affects the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From The Number Type section of the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
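In engines that support it, the built-in BigInt (which postdates this question) also gives the exact integer result, without any library:
const exact = 3n ** 40n;
console.log(exact.toString());                // "12157665459056928801"
console.log(exact === 12157665459056928801n); // true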
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.

Why does adding two decimals in Javascript produce a wrong result? [duplicate]

Possible Duplicate:
Is JavaScript’s Math broken?
Why does JS screw up this simple math?
console.log(.1 + .2) // 0.30000000000000004
console.log(.3 + .6) // 0.8999999999999999
The first example is greater than the correct result, while the second is less. ???!! How do you fix this? Do you have to always convert decimals into integers before performing operations? Do I only have to worry about adding (* and / don't appear to have the same problem in my tests)?
I've looked in a lot of places for answers. Some tutorials (like shopping cart forms) pretend the problem doesn't exist and just add values together. Gurus provide complex routines for various math functions or mention JS "does a poor job" in passing, but I have yet to see an explanation.
It's not a JS problem but a more general computer one. Floating-point numbers can't store all decimal numbers exactly, because they store things in binary.
For example:
0.5 is stored as 0.1b
but 0.1 = 1/10, so it's 1/16 + (1/10 - 1/16) = 1/16 + 0.0375
0.0375 = 1/32 + (0.0375 - 1/32) = 1/32 + 0.00625 ... etc
so in binary 0.1 is 0.000110011...
but that's endless.
Except the computer has to stop at some point. So if in our example we stop at 0.00011, we get 0.09375 instead of 0.1.
Anyway the point is, that doesn't depend on the language but on the computer. What depends on the language is how you display numbers. Usually, the language rounds numbers to an acceptable representation. Apparently JS doesn't.
So what you have to do (the number in memory is accurate enough) is simply tell JS to round the number "nicely" when converting it to text.
You may try the sprintf function, which gives you fine control over how to display a number.
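In plain JavaScript you can do that display-time rounding with toFixed, or with a scale-and-round if you want to keep working with numbers (a minimal sketch):
console.log((0.1 + 0.2).toFixed(2));              // "0.30" - a string, rounded for display
console.log((0.3 + 0.6).toFixed(1));              // "0.9"
console.log(Math.round((0.1 + 0.2) * 100) / 100); // 0.3 - still a number, rounded to 2 decimals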
From The Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2, add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
The site has detailed explanations as well as information on how to fix the problem (and how to decide whether it is a problem at all in your case).
This is not a JavaScript-only limitation; it applies to all floating point calculations. The problem is that 0.1, 0.2 and 0.3 are not exactly representable as JavaScript (or C or Java etc.) floats. Thus the output you are seeing is due to that inaccuracy.
In particular, only certain sums of powers of two are exactly representable. 0.5 = 0.1b = 2^(-1), 0.25 = 0.01b = 2^(-2), 0.75 = 0.11b = 2^(-1) + 2^(-2) are all OK. But 1/10 = 0.000110011001100...b can only be expressed as an infinite sum of powers of 2, which the language chops off at some point. It's this chopping that is causing these slight errors.
This is normal for all programming languages because not all decimal values can be represented exactly in binary. See What Every Computer Scientist Should Know About Floating-Point Arithmetic
It has to do with how computers handle floating-point numbers. You can read more about it here: http://docs.sun.com/source/806-3568/ncg_goldberg.html
