Method to compute decimal power - javascript

I've written a JS math library for very large numbers that preserves precision and can be used for any base. The entire library is custom because none of the default math functions can handle numbers of this magnitude.
This means the power handler has to be custom as well. It's simple enough for integer powers, but the only method for calculating decimal powers I've found is to repeatedly take sqrt(num^0 * num^1) while averaging the exponents, (0 + 1) / 2, and keep repeating the process until the second number equals the decimal portion of the power. Is this the basic concept behind most decimal powers, or is there a non-guess-and-check method?
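Here is a rough sketch of that repeated-square-root (bisection) idea, using plain Number and Math.sqrt purely for illustration; a real arbitrary-precision library would substitute its own sqrt, multiply and comparison routines, and the function name here is made up:

// Bisect the exponent: maintain num^expLo and num^expHi and repeatedly
// take the geometric mean, since sqrt(num^a * num^b) = num^((a + b) / 2).
function powFractionBisect(num, frac, iterations = 60) {
  let expLo = 0, expHi = 1;
  let valLo = 1, valHi = num;
  for (let i = 0; i < iterations; i++) {
    const expMid = (expLo + expHi) / 2;
    const valMid = Math.sqrt(valLo * valHi);
    if (expMid < frac) { expLo = expMid; valLo = valMid; }
    else { expHi = expMid; valHi = valMid; }
  }
  return (valLo + valHi) / 2;
}

console.log(powFractionBisect(2, 0.5)); // ≈ 1.4142135623730951
console.log(Math.pow(2, 0.5));          // 1.4142135623730951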

Decimal powers use logarithms, since b^x = e^(x ln b). Naturally this requires arbitrary-precision logarithm and exponentiation capability, but choosing a suitable logarithm base can make this manageable.
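As a simple illustration of that identity using the built-in Math functions (the library in question would need its own arbitrary-precision ln and exp, e.g. computed from series expansions):

// Illustration only: b^x via the identity b^x = e^(x * ln b).
// Math.log and Math.exp are double precision; an arbitrary-precision
// library would replace them with its own ln/exp routines.
function powViaLog(b, x) {
  return Math.exp(x * Math.log(b));
}

console.log(powViaLog(2, 0.5));  // ≈ 1.4142135623730951
console.log(powViaLog(10, 2.5)); // ≈ 316.22776601683796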

Related

How does floating-point number precision work in JavaScript?

Hello guys, I'm working on a JavaScript file for astronomical calculations, so I need more precision.
I have a lot of operations on each line and I want more precision in the results (ten or twenty decimal places after the point).
In JavaScript, if I do not declare the number of decimals (for example using .toFixed(n)), how many positions after the point does the language consider during the calculation?
For example:
var a= 1.123456789*3.5; // result 3.9320987615
var b= a/0.321;
var c= b*a;
// c => 48.16635722800571
Will that be the result?
Does JavaScript use all the decimal places of each variable, or does it approximate?
Why do users in other questions suggest using decimal.js or other libraries?
I'm sorry if my question seems stupid, but it's important to me.
I hope you can help me.
Sorry for my English!
JavaScript uses IEEE-754 double precision, which means it only keeps around 15-17 significant decimal digits; anything beyond that is lost. If you need more precision you have to use decimal.js or another similar library. Other questions recommend decimal.js or similar libraries because they are easy to add to your program and can provide as much precision as you want.
The reason more precision isn't available by default is that the CPU's floating-point hardware works on fixed 64-bit values; calculating to 20 digits instead of 15 has to be done in software and costs considerably more. If you want to read more on it, I would recommend reading Arbitrary-precision arithmetic on Wikipedia.
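For example (the decimal.js lines assume its documented Decimal constructor, set and times methods; they are commented out so the snippet runs without the dependency):

// Plain Number: only ~15-17 significant digits survive.
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log((1 / 3).toPrecision(25)); // extra digits describe the nearest double, not 1/3

// With decimal.js, precision is configurable:
// const Decimal = require('decimal.js');
// Decimal.set({ precision: 30 });
// console.log(new Decimal('1.123456789').times('3.5').toString()); // '3.9320987615'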
Simply because the toFixed() method converts a number into a string, keeping a specified number of decimals. So, the returned value will be a string not a number.
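A quick check of that:

const n = 3.9320987615;
const s = n.toFixed(4);
console.log(s);        // "3.9321"
console.log(typeof n); // "number"
console.log(typeof s); // "string"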

Native support for 64 bit floating point in Javascript

As the question here states, even the newest ECMAScript 8 has no support for 64-bit integers.
But the stage 3 proposal for BigInt looks promising and I expect it will be added to the JS spec sometime soon.
However, even according to the proposal, we have to use a special constructor for declaring big numbers. (Q1) What is the technical reason behind being unable to represent big numbers in the general way?
let bigNum = 2 ** 64 // Why can't JS do this without losing precision? (at least in future)
I know that JavaScript represents all numbers using IEEE-754 double-precision (64 bit) floating points and that this causes the problem.
(Q2) Why can't Javascript represent all numbers using some other standard which doesn't lose precision?
(Q3) What complications would arise if Javascript actually did that?
Edit: As stated by T.J, we can use a suffix instead of a constructor, but I still feel that is not exactly the general notation
(Q1) What is the technical reason behind, being unable to represent big numbers in the general way?
Breaking the web. The fundamentals of JavaScript numbers cannot be changed now, 20-odd years after they were originally defined. There's also the issue of performance: JavaScript's current numbers (IEEE-754 binary double [64-bit] precision) are very fast floating point thanks to being built into CPUs and math coprocessors. The cost of that speed is precision; the cost of arbitrary precision (or a dramatically larger precise range) is performance.
Someday in the future, perhaps JavaScript will get IEEE-754 64-bit or even 128-bit decimal floating point numbers (see here and here), if those formats (introduced in 2008) diffuse into the ecosystem and get hardware support. But that's speculation on my part. :-)
(Q2) Why can't Javascript represent all numbers using some other standard which doesn't lose precision?
See Q1. :-)
(Q3) What complications would arise if Javascript actually did that?
See Q1. :-)
Even according to the proposal, we have to use a special constructor for declaring big numbers.
If you want 64-bit specifically. If you just want BigInts, the proposal includes a new notation, the n suffix, for that: 2n is a BigInt 2. So with BigInts, your example would be
let bigNum = 2n ** 64n;
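For comparison, here is how the two forms behave in an engine with BigInt support:

// Number: above 2 ** 53 not every integer is representable,
// so adding 1 can be silently lost.
console.log(2 ** 64);                 // 18446744073709552000
console.log(2 ** 64 + 1 === 2 ** 64); // true

// BigInt: exact; note the n suffix and that both operands must be BigInt.
console.log(2n ** 64n);               // 18446744073709551616n
console.log(2n ** 64n + 1n);          // 18446744073709551617n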

How does number precision affect performance in JavaScript, or does it?

In JavaScript, there's only one type for all different kinds of numbers. Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Generally no. There are some possible performance implications when a number doesn't fit in a 31-bit signed int.
A tour of V8: object representation explains
According to the spec, all numbers in JavaScript are 64-bit floating point doubles. We frequently work with integers though, so V8 represents numbers with 31-bit signed integers whenever possible (the low bit is always 0; this helps the garbage collector distinguish numbers from pointers). So objects with the fast small integers elements kind only contain this type of number. If we want to store a fractional number or a larger integer or a special value like -0, then we need to upgrade the array to fast doubles. This involves a potentially expensive copy-and-convert operation, but it doesn't happen often in practice. fast doubles objects are still pretty fast because all of the numbers are stored in an unboxed representation. If we want to store any other kind of value, e.g., a string or an object, we must upgrade to a general array of fast elements.
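The transitions described in that quote are internal to V8 and not directly observable from JavaScript, but roughly speaking they correspond to this (element-kind names as used in the article above):

const a = [1, 2, 3]; // small integers only: "fast small integers" (Smi) elements
a.push(4.5);         // a fractional value forces an upgrade to "fast doubles"
a.push('five');      // a non-number forces another upgrade to generic "fast elements"
// Each upgrade is a one-way, potentially costly copy-and-convert step,
// but afterwards reads and writes are fast again.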
Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
No. The number type in JavaScript is a 64-bit floating-point value with base 2 and always has the same precision. The computer works on that data bit by bit and it doesn't matter whether that data represents something that looks simple to a human like 1.0 or something seemingly complicated like 123423.5645632. In fact, for base 2 floats, the 'human' values are just as 'hard', because 1.1 is truly represented by a much longer number (something like 1.1000000000000000888…). All this doesn't matter, because the computer truly operates on 64 ones and zeros. There are always some arcane exceptions in floating point, but those usually don't matter in practice.
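You can see the stored value of 1.1 directly:

console.log((1.1).toFixed(20));              // "1.10000000000000008882"
console.log(1.1 === 1.10000000000000008882); // true: both literals parse to the same double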
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Decimal numbers are stored as doubles (64-bit), and it doesn't matter whether that is 1.0 or 1.1221234423. Again, the number of bits is constant for these data types.
The same is true for ints, but MongoDB has support for both 32- and 64-bit ints. So NumberLong is indeed larger than regular 32-bit ints and just as large as doubles.

What's the first number that JavaScript can't represent accurately?

I'm working with currency values, so it's important to calculate accurately.
My current code breaks down a string into tokens then evaluates them. For decimal values, it first converts them to integers, does the calculation then converts back to a decimal.
For example if I had the expression
"0.1 * 0.2"
The first step would be to break it down into the tokens 0.1, * and 0.2. It then does some other malarkey and figures it needs to multiply 0.1 and 0.2 together. The calculation would be
1 * 2 / 100
The calculation is done as integers to prevent JavaScript rounding error, i.e.
0.1 * 0.2 == 0.020000000000000004
My colleague's argument is that by converting from a string to a float initially, you've already lost precision. So my question is: what are the upper and lower bounds on either side of 0 where a number cannot be represented by JavaScript exactly? So that I can check for this and handle it, if that's the right approach.
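A minimal sketch of the scale-to-integers approach described above (the function name is made up for illustration, and it assumes the scaled values stay below 2^53 - 1):

// Multiply two decimal strings by scaling them to integers first.
function multiplyDecimalStrings(a, b) {
  const decimals = s => (s.split('.')[1] || '').length;
  const scale = decimals(a) + decimals(b);
  const intA = Number(a.replace('.', '')); // "0.1" -> 1
  const intB = Number(b.replace('.', '')); // "0.2" -> 2
  return (intA * intB) / 10 ** scale;
}

console.log(0.1 * 0.2);                            // 0.020000000000000004
console.log(multiplyDecimalStrings('0.1', '0.2')); // 0.02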
The problem you describe isn't a bounds problem. IEEE-754 double-precision binary floating point numbers can represent the value 0.5 perfectly, but cannot represent, say, 0.1 perfectly. Note that those have the same number of digits. The issue isn't the number of places of precision, it's the fact that the number type uses a different number base than we do. It uses base 2, rather than our base 10.
Just as we can't accurately represent 1 / 3 in our base 10 system, certain numbers cannot be accurately represented in IEEE-754's base 2 system.
In 2008, the IEEE came out with a revision adding a new format to IEEE-754 (it defines several formats; the "double-precision binary" one used by JS is just one of them) called "decimal64" which uses base 10 rather than base 2, for applications that need to handle rounding the same way we do (financial apps and such). That may start seeping into programming languages and such; for now, IEEE-754 single precision and double precision are the typical formats used, alongside decimal types that aren't based on that 2008 revision, like C#'s decimal.
In the meantime, there are "big decimal" libraries for JavaScript, such as big.js (haven't used it, no affiliation). If you search for "bignumber in JavaScript" or "JavaScript exact floating point" you should find multiple options.
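For instance, with big.js (this assumes its documented Big constructor, plus and times methods; install it with npm install big.js):

const Big = require('big.js');

console.log(0.1 + 0.2);                              // 0.30000000000000004 (plain Number)
console.log(new Big('0.1').plus('0.2').toString());  // "0.3"
console.log(new Big('0.1').times('0.2').toString()); // "0.02"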
I don't think there is any single number you can point to as the first number that JavaScript can't represent accurately. It is all about decimal numbers and loss of precision.
Also, there is no decimal data type in JavaScript; the only numeric data type is floating-point. JavaScript uses a 64-bit floating point representation.
Floating point rounding errors. 0.1 cannot be represented as accurately in base 2 as in base 10 due to the missing prime factor of 5. Note also that all floating-point math is like this and is based on the IEEE 754 standard.
If I'm not mistaken, all JS engines use IEEE 754 doubles to handle floating point numbers.
Take a look at the http://en.wikipedia.org/wiki/Double-precision_floating-point_format page, section 'Double-precision examples'. For numbers close to 0, take a closer look at the subnormal formula.
So, the closest-to-zero number that can be represented in the JavaScript floating point datatype is 2^(1-1023) * 2^(-52) = 2^(-1022) * 2^(-52) = 2^(-1074).
Empirically:
Math.pow(2,-1074) = 5e-324
Math.pow(2,-1075) = 0
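The same limit is exposed as Number.MIN_VALUE:

console.log(Number.MIN_VALUE);                        // 5e-324
console.log(Number.MIN_VALUE === Math.pow(2, -1074)); // true
console.log(Number.MIN_VALUE / 2);                    // 0 (underflows to zero)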
I notice you are doing currency calculations. You may find it better to use integers here. Use an integer to represent the price in cents. This eliminates the problems with floating point, which is better suited to scientific calculations. This would basically be equivalent to using a BigDecimal in Java.
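For example, keeping prices in whole cents:

// Work in whole cents so every stored value is an exact integer.
const priceCents = 1999;                        // $19.99
const taxCents = Math.round(priceCents * 0.08); // round to a whole cent once
const totalCents = priceCents + taxCents;

console.log(totalCents);                    // 2159
console.log((totalCents / 100).toFixed(2)); // "21.59" for display only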

JavaScript: Efficient integer arithmetic

I'm currently writing a compiler for a small language that compiles to JavaScript. In this language, I'd quite like to have integers, but JavaScript only supports Number, which is a double-precision floating point value. So, what's the most efficient way to implement integers in JavaScript? And how efficient is this compared to just using Number?
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
So, what's the most efficient way to implement integers in JavaScript?
The primitive number type is as efficient as it gets. Many modern JS engines support JIT compilation, so it should be almost as efficient as native floating-point arithmetic.
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
You can achieve the semantics of standard 32-bit integer arithmetic by noting that JavaScript converts "numbers" to 32-bit integers for bitwise operations. >>> (unsigned right shift) converts its operand to an unsigned 32-bit integer while the rest (all other shifts and bitwise AND/OR) convert their operand to a signed 32-bit integer. For example:
0xFFFFFFFF | 0 yields -1 (signed cast)
(0xFFFFFFFF + 1) | 0 yields 0 (overflow)
-1 >>> 0 yields 0xFFFFFFFF (unsigned cast)
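Building on that, wrapping 32-bit operations can be written directly (Math.imul, added in ES2015, gives the exact low 32 bits of a multiplication):

// 32-bit signed arithmetic with C-style wrap-around.
function add32(a, b) { return (a + b) | 0; }
function sub32(a, b) { return (a - b) | 0; }
function mul32(a, b) { return Math.imul(a, b); }

const INT_MAX = 0x7FFFFFFF;           // 2147483647
console.log(add32(INT_MAX, 1));       // -2147483648 (INT_MIN, as the question requires)
console.log(mul32(0x10000, 0x10000)); // 0 (2^32 wraps around to 0)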
I found this implementation of BigIntegers in Javascript: http://www-cs-students.stanford.edu/~tjw/jsbn/
Perhaps this will help?
Edit: Also, the Google Closure library implements 64-bit integers:
http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/math/long.js
These essentially just provide convenience objects, though, and won't do anything to improve the efficiency of the fundamental data type.
On a modern CPU, if you restrict your integer values to the range ±2^53 then using a double will be barely less efficient than using a long.
The IEEE 754 double type has 53 bits of mantissa, so you can easily represent the 32-bit integer range and then some.
In any event, the rest of Javascript will be much more of a bottleneck than the individual CPU instructions used to handle arithmetic.
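The exact-integer range referred to above:

console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);       // true: gaps between representable integers appear
console.log(Number.isSafeInteger(2 ** 53)); // false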
All Numbers are Numbers. There is no way around this. JavaScript has no byte or int type. Either deal with the limitations or use a lower-level language to write your compiler in.
The only sensible option if you want to achieve this is to edit one of the JavaScript interpreters (say V8) and extend JS to allow access to native C bytes.
Well, you can use JavaScript's number type, which is most likely computed using your CPU's native primitives, or you can layer a complete package of operators and functions on top of an emulated series of bits...?
... If performance and efficiency are your concern, stick with the doubles.
The most efficient would be to use numbers, and add operations to make sure that operations on the simulated integers would give an integer result. A division for example would have to be rounded down, and a multiplication would either have to be checked for overflow or masked down to fit in the integer range.
This of course means that floating point operations in your language will be considerably faster than integer operations, defeating most of the purpose of having an integer type in the first place.
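For example, the rounding mentioned above for division (note that | 0 truncates toward zero, which is what C-style integer division does, while Math.floor rounds toward negative infinity):

console.log(Math.floor(7 / 2));  // 3
console.log((7 / 2) | 0);        // 3
console.log(Math.floor(-7 / 2)); // -4 (floor)
console.log((-7 / 2) | 0);       // -3 (truncation)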
Note that ECMA-262 Edition 3 added Number.prototype.toFixed, which takes a precision argument telling how many digits after the decimal point to show. Use this method well and you won't mind the disparity between finite precision base 2 and the "arbitrary" or "appropriate" precision base 10 that we use every day.
- Brendan Eich
