Converting a number into another number system in JS - javascript

I have a string and I need to convert it into another number system.
1959113774617397110401052 - from decimal notation (radix 10) to base 36.
If I try to use this code:
var master = 1959113774617397110401052;
parseInt(master, 10).toString(36);
// 8v0wc05bcz000000
it doesn't work properly.
Can you help me see where my mistake is and how to do this correctly?
Thank you!

The maximum integer JavaScript can safely handle is 9007199254740991. This is what we get if we call Number.MAX_SAFE_INTEGER. Your number, on the other hand, is significantly larger than this:
9007199254740991
1959113774617397110401052
Because of this, JavaScript isn't able to safely perform mathematical calculations with this number and there's no guarantee that you'll get an accurate result.
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
— MDN's notes on Number.MAX_SAFE_INTEGER
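A quick console check illustrates both points (a minimal demo using only standard Number methods):
var big = Number.MAX_SAFE_INTEGER;           // 9007199254740991
console.log(big + 1 === big + 2);            // true - precision is already lost
console.log(Number.isSafeInteger(big));      // true
console.log(Number.isSafeInteger(big + 1));  // false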

You would need to use an arbitrary-precision library like Decimal.js for integer calculations that exceed the range of a signed Int32 or, more to the point, the contiguous integer range representable by 64-bit floats. As an example:
var astr = '1959113774617397110401052';
var a = new Decimal(astr);
var out = '';
while (a.gt(0)) {                 // loop until every digit has been consumed
    var d = a.mod(36).toNumber(); // least significant base-36 digit
    a = a.divToInt(36);           // integer-divide by the radix
    if (d > 9) d = d + 39;        // 48+d then lands in 'a'-'z' (use d+7 for upper case)
    out = String.fromCharCode(48 + d) + out;
}
var my_div = document.getElementById("my_div");
my_div.innerHTML += astr + " in base 36 is " + out;
<script src="https://raw.githubusercontent.com/MikeMcl/decimal.js/master/decimal.min.js"></script>
<div id="my_div"></div>
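On engines with native BigInt support (which postdates this answer), no library is needed, because BigInt.prototype.toString accepts a radix up to 36:
var out = BigInt('1959113774617397110401052').toString(36);
console.log(out); // the exact base-36 representation, in lower case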

Related

Number.prototype.toString (for floats) implementation

I recently found out that you can call toString() on floating-point numbers in JS with a radix, but I'm confused by its implementation. For example, 3.1459.toString(16) in JS returns 3.2559b3d07c84c. From my testing it seems the integer part is a straightforward conversion to the equivalent radix, but this doesn't seem to hold for the fractional part. How exactly are the characters determined?
I've found what I believe is the source for that in V8. It is similar to what was posted in the comments about converting the fraction to hexadecimal, but with additional rounding and precision differences.
If you are looking to convert a decimal fraction to a different radix, you can multiply the fractional part by the radix, take the integer part of the result as the next digit, and repeat with the remaining fraction until it reaches zero or you hit a digit limit.
let frac10 = 0.1459,
    frac16 = '',
    limit = 15;

while (frac10 && --limit) {
    const integer = Math.floor(frac10 *= 16); // next hex digit of the fraction
    frac16 += integer.toString(16);
    frac10 -= integer;                        // keep only the remaining fraction
}
console.log(frac16);
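For comparison, the built-in conversion applies correct rounding rather than plain truncation (per the V8 source mentioned above), so this truncating loop can differ from the output of 0.1459.toString(16) in the last digit or so.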

Math.min and Math.max Not Working for Large (Floating-Point) Numbers

This relates to a problem with how JavaScript handles large (floating-point) numbers.
What is JavaScript's highest integer value that a number can go to without losing precision? refers to the highest possible number. I was after a way to bypass that when getting the min and max in the example below.
var lowest = Math.min(131472982990263674, 131472982995395415);
console.log(lowest);
will return:
131472982990263680
To get the min and max values, would it be required to write a function to suit, or is there a way to get it working with the Math.min and Math.max functions?
The closest solution I've found was this, but I couldn't manage to get it working, as the BigInt function is not exposed in my version:
Large numbers erroneously rounded in JavaScript
You can try to convert the numbers to BigInt:
const n1 = BigInt("131472982990263674"); // or const n1 = 131472982990263674n;
const n2 = BigInt("131472982995395415"); // or const n2 = 131472982995395415n;
Then find the min and max using this post:
const n1 = BigInt("131472982990263674");
const n2 = BigInt("131472982995395415");
function BigIntMinAndMax(...args) {
    return args.reduce(([min, max], e) => {
        return [
            e < min ? e : min,
            e > max ? e : max,
        ];
    }, [args[0], args[0]]);
}
const [min, max] = BigIntMinAndMax(n1, n2);
console.log(`Min: ${min}`);
console.log(`Max: ${max}`);
Math.min and Math.max are working just fine.
When you write 131472982990263674 in a JavaScript program, it is rounded to the nearest IEEE 754 binary64 floating-point number, which is 131472982990263680.
Written in hexadecimal or binary, you can see that 131472982990263674 = 0x1d315fb40b2157a = 0b111010011000101011111101101000000101100100001010101111010 takes 56 bits of precision to represent.
If you round that to the nearest number with only 53 bits of precision, what you get is 0b111010011000101011111101101000000101100100001010110000000 = 0x1d315fb40b21580 = 131472982990263680 (only the leading 53 bits survive; the trailing bits become zeros).
Similarly, when you write 131472982995395415 in JavaScript, what you get back is 131472982995395410.
So when you write the code Math.min(131472982990263674, 131472982995395415), you pass the numbers 131472982990263680 and 131472982995395410 into the Math.min function.
Given that, it should come as no surprise that Math.min returns 131472982990263680.
> 131472982990263674
131472982990263680
> 131472982995395415
131472982995395410
> Math.min(131472982990263674, 131472982995395415)
131472982990263680
It's not clear what your original goal is.
Are you given two JavaScript numbers to begin with, and are you trying to find the min or max?
If so, Math.min and Math.max are the right thing.
Are you given two strings, and are you trying to order them by the numbers they represent?
If so, it depends on the notation you want to support.
If you only want to support decimal notation for integers (with no scientific notation, like 123e4), then you can chop leading zeros, compare lengths first (a shorter decimal integer is the smaller number), and compare the strings lexicographically with < or > only when the lengths are equal:
> function strmin(x, y) { if (x.length !== y.length) return x.length < y.length ? x : y; return x < y ? x : y }
> strmin("131472982990263674", "131472982995395415")
'131472982990263674'
If you want to support arbitrary-precision decimal notation (including non-integers and perhaps scientific notation), and you want to maintain distinctions between, for instance, 1.00000000000000001 and 1.00000000000000002, then you probably want a general arbitrary-precision decimal arithmetic library.
Are you trying to do arithmetic with integers in a range that might exceed 2⁵³, and need the computation to be exact, requiring >53 bits of precision?
If so, you may need some kind of wider-precision or arbitrary-precision arithmetic than JavaScript numbers alone provide, like BigInt, recently added to JavaScript.
If you only need a little more than 53 bits of precision, as is often the case inside numerical algorithms for transcendental elementary functions, there's also T.J. Dekker's algorithm for extending (say) binary64 arithmetic into double-binary64 or “double-double” arithmetic: a double-binary64 number is the sum 𝑥 + 𝑦 of two binary64 floating-point numbers 𝑥 and 𝑦, where typically 𝑥 holds the higher-order bits and 𝑦 holds the lower-order bits so together they can store 106 bits of precision.
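As an illustration of the core building block (a minimal sketch, not from the original answer): Dekker's fast two-sum captures the exact rounding error of a single addition, which double-double arithmetic then carries along.
function fastTwoSum(a, b) {   // requires |a| >= |b|
    const s = a + b;          // rounded sum
    const err = b - (s - a);  // the part of b that the rounding discarded
    return [s, err];          // s + err equals a + b exactly
}

fastTwoSum(1e16, 1); // [10000000000000000, 1] - together they represent 1e16 + 1 exactly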

Math.sin() Different Precision between Node.js and C#

I have a problem with precision in the last digit after the decimal point. The JavaScript code generates one digit fewer than the C# code.
Here is the simple Node.js code
var seed = 45;
var x = Math.sin(seed) * 0.5;
console.log(x);//0.4254517622670592
Here is the simple C# code
public String pseudorandom()
{
    int seed = 45;
    double num = Math.Sin(seed) * (0.5);
    return num.ToString("G15"); // 0.42545176226705922
}
How to achieve the same precision?
The JavaScript Number type is quite complex. Floating-point numbers follow IEEE 754-2008, but some aspects of number-to-string conversion are left to the implementation. See http://www.ecma-international.org/ecma-262/6.0/#sec-number-objects sec 12.7.
There is a note:
The output of toFixed may be more precise than toString for some values, because toString only prints enough significant digits to distinguish the number from adjacent number values. For example, (1000000000000000128).toString() returns "1000000000000000100", while (1000000000000000128).toFixed(0) returns "1000000000000000128".
Hence to get full digit accuracy you need something like
var seed = 45;
var x = Math.sin(seed) * 0.5;
x.toFixed(17);
// on my platform it's "0.42545176226705922"
Also, note the specification for how the implementation of sin and cos allow for some variety in the actual algorithm. It's only guaranteed to within +/- 1 ULP.
In Java the printing algorithm is different: even forcing 17 digits gives the result as 0.42545176226705920.
You can check you are getting the same bit patterns using x.toString(2) and Double.doubleToLongBits(x) in Java.
return num.ToString("G15");//0.42545176226705922
actually returns "0.425451762267059" (15 significant digits, which here is also 15 decimal places), and not the precision shown in the comment.
So you would use:
return num.ToString("G16");
to get "0.4254517622670592"
(For your example, where the integer part is always 0, G16 amounts to 16 decimal places.)
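On the JavaScript side, toPrecision gives the analogous control over significant digits (a small sketch; the exact digits assume the same Math.sin result as above, which may vary by one ULP across engines):
var x = Math.sin(45) * 0.5;
console.log(x.toString());      // "0.4254517622670592" - shortest round-trip form
console.log(x.toPrecision(17)); // "0.42545176226705922" - 17 significant digits forced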

Math.pow() not giving precise answer

I've used Math.pow() to calculate exponential values in my project.
Now, for specific values like Math.pow(3, 40), it returns 12157665459056929000.
But when I tried the same value using a scientific calculator, I get 12157665459056928801.
Then I tried a loop running up to the exponent value:
function calculateExpo(base, power) {
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; // base + '^' + power + ' = ';
    for (var i = 0; i < power; i++) {
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on the size of integers in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine Here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the significand, which carries the precision of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the binary point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so it cannot be computed accurately using Math.pow(3, 40) in JavaScript. Only numbers with 53 or fewer significant figures in their base-two expansion (and an order of magnitude that fits within the 11 exponent bits) have a chance of being represented exactly by a double-precision floating-point value.
Take note that how large the number is matters less than how many significant figures are needed to represent it in base two. There are many numbers as large as or larger than 3^40 which can be represented exactly by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
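A quick check (not part of the original answer) makes the size-versus-precision point concrete: 2^60 is larger than 3^40 but has a single significant bit, so on typical engines it is exact:
console.log(Math.pow(2, 60) === 1152921504606846976); // true - exactly representable
console.log(Number.isSafeInteger(Math.pow(3, 40)));   // false - exceeds 2^53, so exactness is not guaranteed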
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 where the computation uses IEEE floating point (JavaScript's Math.pow, Erlang's math:pow). You get 12157665459056928801 where it uses arbitrary-precision (bignum) arithmetic (Haskell's ^, Erlang's crypto:mod_exp).
JavaScript can only represent distinct integers up to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE-754 base-2 doubles.
As a consequence, the result from Math.pow (even if it were accurate internally) is brutally "rounded" to the nearest representable double, which at this magnitude is always an integer, so the result is not the correct value but the closest approximation JavaScript can hold.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff, so it can be seen how this affects the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From the The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3);
var r = z.pow(40);
var str = r.toString();
// str === "12157665459056928801"
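Today, native BigInt (which postdates this answer) handles the computation directly with the ** operator:
console.log((3n ** 40n).toString()); // "12157665459056928801"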
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.

javascript float from/to bits

I am trying to perform something that is brain-dead simple in any other language but not in JavaScript: get the bits out of a float (and the other way around).
In C/C++ it would be something like
float a = 3.1415;
int b = *((int*)&a);
and vice versa
int a = 1000;
float b = *((float*)&a);
In C# you can use the BitConverter, and in Java Float.floatToIntBits or something alike. Even in VB6, for Christ's sake, you can memcpy a float32 into an int32. How on earth can I translate between an int and a float in JavaScript?
function DoubleToIEEE(f)
{
    var buf = new ArrayBuffer(8);
    (new Float64Array(buf))[0] = f;
    // Note: the order of the two 32-bit halves depends on platform endianness.
    return [ (new Uint32Array(buf))[0], (new Uint32Array(buf))[1] ];
}
You certainly don't get anything low-level like that in JavaScript. It would be extremely dangerous to allow recasting and pointer-frobbing in a language that has to be safe for untrusted potential-attacker web sites to use.
If you want to get a 32-bit IEEE754 representation of a single-precision value in a Number (which remember is not an int either; the only number type you get in JavaScript is double), you will have to make it yourself by fiddling the sign, exponent and mantissa bits together. There's example code here.
function FloatToIEEE(f)
{
    var buf = new ArrayBuffer(4);
    (new Float32Array(buf))[0] = f;
    return (new Uint32Array(buf))[0];
}
Unfortunately, this doesn't work with doubles and in old browsers.
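The reverse direction (bits back to a float) follows the same typed-array pattern; a minimal sketch, not part of the original answer:
function IEEEToFloat(bits)
{
    var buf = new ArrayBuffer(4);
    (new Uint32Array(buf))[0] = bits;  // write the raw 32-bit pattern
    return (new Float32Array(buf))[0]; // reinterpret it as a single-precision float
}

IEEEToFloat(0x40490fdb); // 3.1415927410125732, the float closest to pi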
JavaScript uses double (IEEE 754) to represent all numbers
double consists of [sign, exponent(11bit), mantissa(52bit)] fields.
The value of a number is computed using the formula (-1)^sign * (1.mantissa) * 2^(exponent - 1023). ("1.mantissa" means that we take the bits of the mantissa, prepend a 1, and treat that as a binary number; e.g. if mantissa = 101, we get the number 1.101 (bin) = 1 + 1/2 + 1/8 = 1.625 (dec).)
We can get the value of the sign bit by testing whether the number is greater than zero. There is a small issue with 0 here, because doubles have both +0 and -0, but we can distinguish the two by computing 1/value and checking whether the result is +Infinity or -Infinity.
Since 1 <= 1.mantissa < 2, we can get the unbiased exponent using Math.log2, e.g. Math.floor(Math.log2(666.0)) = 9, so exponent - 1023 = 9 and the stored exponent field is 1032, which in binary is (1032).toString(2) = "10000001000".
After we get the exponent, we can scale the number to a zero exponent without changing the mantissa: value = value / Math.pow(2, Math.floor(Math.log2(value))); now value represents (-1)^sign * (1.mantissa). If we ignore the sign and multiply by 2^52, we get an integer that has the same bits as 1.mantissa: ((666 / Math.pow(2, Math.floor(Math.log2(666)))) * Math.pow(2, 52)).toString(2) = "10100110100000000000000000000000000000000000000000000" (we must ignore the leading 1).
After some string concatenation you will get what you want.
This is only a proof of concept; we didn't discuss denormalized numbers or special values such as NaN, but I think it can be extended to account for these cases too.
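Putting those steps together (a proof-of-concept sketch of the procedure above; it handles normal, finite numbers only, and the function name is just illustrative):
function doubleBits(value) {
    // 1/value distinguishes -0 from +0, as noted above.
    var sign = (value < 0 || 1 / value === -Infinity) ? 1 : 0;
    var abs = Math.abs(value);
    var exp = Math.floor(Math.log2(abs));  // unbiased exponent (log2 can be off by
                                           // one right at power-of-two boundaries)
    var frac = abs / Math.pow(2, exp);     // scale so that 1 <= frac < 2
    var mantissa = (frac * Math.pow(2, 52)).toString(2).slice(1); // drop the leading 1
    return sign + ' ' + (exp + 1023).toString(2).padStart(11, '0') + ' ' + mantissa;
}

doubleBits(666); // "0 10000001000 0100110100000000000000000000000000000000000000000000"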
bensiu's answer is fine, but if you find yourself using some old JS interpreter you can use this approach.
Like the other posters have said, JavaScript is loosely typed, so there is no differentiation in data types from float to int or vice versa.
However, what you're looking for is
float to int:
Math.floor( 3.9 ); // result: 3 (truncates everything past the point) or
Math.round( 3.9 ); // result: 4 (rounds to the nearest whole number)
depending on which you'd like. (In C/C++, a cast to int truncates toward zero, which matches Math.trunc; for positive numbers that is the same as Math.floor.)
int to float:
var a = 10;
a.toFixed( 3 ); // result: "10.000" (note: toFixed returns a string)
