Project Euler #16 not working? - javascript

I was certain I had my code right, but it returns 1364 rather than the correct answer, 1366.
I tested my code with 2^15 and it returned the correct answer, 26. I also tried a few other numbers, all of which gave the correct answer.
I know my code is very elementary. I don't know much JavaScript and am open to alternate solutions, but I am mainly looking for an explanation of why my code does not work.
//adds the digits of this number
sum(Math.pow(2, 1000));

function sum(x) {
  //the index of the highest digit
  var c = Math.floor(Math.log10(x));
  //the running sum
  var s = 0;
  //the ith digit
  var a;
  for (var i = c; i >= 0; i--) {
    //this works; test by removing the slashes before console.log(a)
    a = Math.floor((x % Math.pow(10, i + 1)) / Math.pow(10, i));
    s += a;
    //console.log(a);
  }
  console.log(s);
}

Expanding on guest271314's comment:
JavaScript numbers have IEEE 754 double-precision floating-point behavior. This means that JavaScript is unable to accurately represent all integers above 2^53.
> Number.isSafeInteger(Math.pow(2, 15))
< true
> Number.isSafeInteger(Math.pow(2, 1000))
< false
To solve this problem in JavaScript, you will need a different representation of 2^1000. Since the only operations you need are exponentiation and reading out digits in base 10, one possible representation is a decimal string (or digit array), implementing the multiplication yourself.
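A sketch of that digit-array approach, with helper names of our own choosing (`powerOfTwoDigits`, `digitSum`): keep the decimal digits in an array, least significant first, and double the number n times, carrying by hand.

```javascript
// Represent the number as an array of base-10 digits, least significant first,
// and compute 2^n by doubling n times with manual carries.
function powerOfTwoDigits(n) {
  let digits = [1]; // represents 1
  for (let i = 0; i < n; i++) {
    let carry = 0;
    for (let j = 0; j < digits.length; j++) {
      const d = digits[j] * 2 + carry;
      digits[j] = d % 10;
      carry = Math.floor(d / 10);
    }
    if (carry) digits.push(carry);
  }
  return digits;
}

// Sum the digits of 2^n.
function digitSum(n) {
  return powerOfTwoDigits(n).reduce((s, d) => s + d, 0);
}

console.log(digitSum(15));   // 26, matching the working small case
console.log(digitSum(1000)); // 1366
```

On modern engines you could instead write `String(2n ** 1000n)` with BigInt, but BigInt postdates this question.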

Related

Translation into another number system JS

I have a string and I need to convert it to another number system:
1959113774617397110401052, from decimal (base 10) to base 36.
If i try to use this code:
var master = 1959113774617397110401052;
parseInt(master, 10).toString(36);
//8v0wc05bcz000000
It doesn't work properly.
Can you help me find my mistake and show how to do this correctly?
Thank you!
The maximum integer JavaScript can safely handle is 9007199254740991. This is what we get if we call Number.MAX_SAFE_INTEGER. Your number, on the other hand, is significantly larger than this:
9007199254740991
1959113774617397110401052
Because of this, JavaScript isn't able to safely perform mathematical calculations with this number and there's no guarantee that you'll get an accurate result.
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
— MDN's notes on Number.MAX_SAFE_INTEGER
You would need to use an arbitrary-precision library like Decimal.js for integer calculations that exceed the contiguous integer range representable by 64-bit floats (up to 2^53 - 1). As an example:
var astr = '1959113774617397110401052';
var a = new Decimal(astr);
var out = '';
while (a.greaterThan(0)) {
  var d = a.mod(36).toNumber();
  a = a.divToInt(36);
  if (d > 9) d = d + 39; // d + 7 for upper case
  out = String.fromCharCode(48 + d) + out;
}
var my_div = document.getElementById("my_div");
my_div.innerHTML += astr + " in base 36 is " + out;
<script src="https://raw.githubusercontent.com/MikeMcl/decimal.js/master/decimal.min.js"></script>
<div id="my_div"></div>
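On modern engines (ES2020+), built-in BigInt makes this conversion a one-liner without any library; BigInt postdates the original answer, and the helper name here is ours:

```javascript
// BigInt parses the decimal string exactly, and toString(36) performs the
// base conversion (lower-case digits, like the Decimal.js loop above).
function toBase36(decimalString) {
  return BigInt(decimalString).toString(36);
}

console.log(toBase36('1959113774617397110401052'));
```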

Math.sin() Different Precision between Node.js and C#

I have a problem with precision in the last digit after the decimal point. The JavaScript code generates one fewer digit compared with the C# code.
Here is the simple Node.js code
var seed = 45;
var x = Math.sin(seed) * 0.5;
console.log(x);//0.4254517622670592
Here is the simple C# code
public String pseudorandom()
{
int seed = 45;
double num = Math.Sin(seed) * (0.5);
return num.ToString("G15");//0.42545176226705922
}
How to achieve the same precision?
The JavaScript Number type is quite complex. It looks like the floating-point behavior follows IEEE 754-2008, but some aspects are left to the implementation. See http://www.ecma-international.org/ecma-262/6.0/#sec-number-objects, sec 12.7.
There is a note
The output of toFixed may be more precise than toString for some
values because toString only prints enough significant digits to
distinguish the number from adjacent number values. For example,
(1000000000000000128).toString() returns "1000000000000000100", while
(1000000000000000128).toFixed(0) returns "1000000000000000128".
Hence to get full digit accuracy you need something like
seed = 45;
x = Math.sin(seed) * 0.5;
x.toFixed(17);
// on my platform its "0.42545176226705922"
Also, note the specification for how the implementation of sin and cos allow for some variety in the actual algorithm. It's only guaranteed to within +/- 1 ULP.
In Java the printing algorithm is different. Even forcing 17 digits gives the result as 0.42545176226705920.
You can check you are getting the same bit patterns using x.toString(2) and Double.doubleToLongBits(x) in Java.
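For the JavaScript side of that comparison, one way to read out the exact 64-bit pattern (the analogue of Java's `Double.doubleToLongBits`) is a `DataView`; the helper name here is ours:

```javascript
// Write the number into an 8-byte buffer as an IEEE 754 double, then read
// the same bytes back as an unsigned 64-bit integer, formatted in hex.
function doubleToBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  return view.getBigUint64(0).toString(16).padStart(16, '0');
}

console.log(doubleToBits(Math.sin(45) * 0.5)); // hex bit pattern to compare with Java
console.log(doubleToBits(1));                  // '3ff0000000000000'
```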
return num.ToString("G15");//0.42545176226705922
actually returns "0.425451762267059" (15 significant digits; with a leading 0, that is 15 decimal places in this example), and not the precision shown in the comment after it.
So you would use:
return num.ToString("G16");
to get "0.4254517622670592"
(For your example, where the leading digit is always 0, G16 amounts to 16 decimal places.)

Buffer reading and writing floats

When I write a float to a buffer, it does not read back the same value:
> var b = new Buffer(4);
undefined
> b.fill(0)
undefined
> b.writeFloatBE(3.14159,0)
undefined
> b.readFloatBE(0)
3.141590118408203
Why?
EDIT:
My working theory is that because javascript stores all numbers as double precision, it's possible that the buffer implementation does not properly zero the other 4 bytes of the double when it reads the float back in:
> var b = new Buffer(4)
undefined
> b.fill(0)
undefined
> b.writeFloatBE(0.1,0)
undefined
> b.readFloatBE(0)
0.10000000149011612
>
I think it's telling that we have zeros for 7 digits past the decimal (well, 8 actually) and then there's garbage. I think there's a bug in the node buffer code that reads these floats. That's what I think. This is node version 0.10.26.
Floating-point numbers ("floats") cannot represent every decimal value exactly; this is common across languages, not just JavaScript / Node.js. For example, I encountered something similar in C# when using float instead of double.
Double-precision floating point numbers are more accurate and should better meet your expectations. Try changing the above code to write to the buffer as a double instead of a float:
var b = new Buffer(8);
b.fill(0);
b.writeDoubleBE(3.14159, 0);
b.readDoubleBE(0);
This will return:
3.14159
EDIT:
Wikipedia has some pretty good articles on floats and doubles, if you're interested in learning more:
http://en.wikipedia.org/wiki/Floating_point
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
SECOND EDIT:
Here is some code that illustrates the limitation of the single-precision vs. double-precision float formats, using typed arrays. Hopefully this can act as proof of this limitation, as I'm having a hard time explaining in words:
var floats32 = new Float32Array(1),
    floats64 = new Float64Array(1),
    n = 3.14159;

floats32[0] = n;
floats64[0] = n;

console.log("float", floats32[0]);
console.log("double", floats64[0]);
This will print:
float 3.141590118408203
double 3.14159
Also, if my understanding is correct, single-precision floating-point numbers can store about 7 significant digits in total, not 7 digits after the decimal point. This lines up with your results: 3.14159 has 6 significant digits, and 3.141590118408203 agrees with it through the first 7 digits (3.141590).
readFloat in Node is implemented in C++, and the bytes are interpreted exactly the way your compiler stores/reads them, so I doubt there is a bug here. What I think is that "7 digits" is an incorrect assumption for float: this answer suggests 6 digits (it is the value of std::numeric_limits<float>::digits10), so the result of readFloatBE is within the expected error.
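As a side note, `Math.fround` rounds a number to the nearest single-precision value, so it predicts exactly what a float write/read round trip will give back, without needing a Buffer:

```javascript
// Math.fround(x) is x squeezed through a 32-bit float, i.e. the same value
// writeFloatBE/readFloatBE would return.
console.log(Math.fround(3.14159)); // 3.141590118408203
console.log(Math.fround(0.1));     // 0.10000000149011612
```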

numbers and toFixed , toPrecision in Javascript?

Regarding the famous issue of 1.01 + 1.02, which evaluates to 2.0300000000000002:
one of the workarounds is to use toFixed, e.g.
(1.01+1.02).toFixed(2) ---> "2.03"
But I saw a solution with toPrecision:
parseFloat((1.01+1.02).toPrecision(10)) --> 2.03 (parseFloat gives back a number, not a string)
But let's have a look at the n in
toFixed(n)
toPrecision(n)
How would I know what n is?
0.xxxxxxxxxxx
+
0.yyyyyyyyyyyyy
---------------------
0.zzzzzzzzzzzzzzzzzzzzzzzzz
^
|
-----??????------
each number being added can have a different number of decimal digits...
for example:
1.0002+1.01+1.03333 --> 3.0435300000000005
How would I calculate n here? What is the best practice for this (specific) issue?
For addition, as in this situation, I would check the number of decimal places in each operand.
In the simplest situations, n is the number of decimal places in the operand with the most decimal places.
Once you have this, use whichever method you like to truncate your value, then get rid of trailing zeros.
You may encounter trailing zeros in situations such as 1.06 + 1.04: the first step would take you to 2.10, then truncating the zero would give 2.1.
In your last example, 1.0002 + 1.01 + 1.03333, the greatest number of decimal places is 5, so you are left with 3.04353 and there are no trailing zeros to truncate.
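That rule can be sketched in code; the helper names (`decimalPlaces`, `addFixed`) are our own:

```javascript
// Count digits after the decimal point in a number's default string form.
function decimalPlaces(x) {
  const s = String(x);
  const i = s.indexOf('.');
  return i === -1 ? 0 : s.length - i - 1;
}

// n = the most decimal places among the operands; round the sum to n places,
// then let parseFloat drop any trailing zeros.
function addFixed(...nums) {
  const n = Math.max(0, ...nums.map(decimalPlaces));
  const sum = nums.reduce((a, b) => a + b, 0);
  return parseFloat(sum.toFixed(n));
}

console.log(addFixed(1.01, 1.02));            // 2.03
console.log(addFixed(1.0002, 1.01, 1.03333)); // 3.04353
console.log(addFixed(1.06, 1.04));            // 2.1
```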
This returns the expected output:
function add() {
  // Track the output and the longest argument's string length
  var length = 0;
  var output = 0;
  // Loop through all arguments supplied to this function (1, 4, 6 in the case of add(1,4,6))
  for (var i = 0; i < arguments.length; i++) {
    // If the current argument's length as a string is longer than the previous longest
    if (arguments[i].toString().length > length) {
      // Remember it (+1 accounts for the decimal point taking one character)
      length = arguments[i].toString().length + 1;
    }
    // Add the current argument to the output, with a precision set by the longest argument
    output = parseFloat((output + arguments[i]).toPrecision(length));
  }
  // Do whatever you want with the result here. Usually, you'd 'return output;'
  console.log(output);
}
add();                       // Logs 0
add(1, 2, 3);                // Logs 6
add(1.01, 2.01, 3.03);       // Logs 6.05
add(1.01, 2.0213, 3.3333);   // Logs 6.3646
add(11.01, 2.0213, 31.3333); // Logs 44.3646
parseFloat even gets rid of trailing zeros for you.
This function accepts as many numbers as parameters as you wish, then adds them together, taking each number's string length into account as it adds. The precision used in the addition is dynamically adjusted to fit the currently-added argument's length.
If you're doing calculations, you have a couple of choices:
multiply the numbers by e.g. 100 to convert them to integers, do the calculations, then convert back again
do the calculations, don't worry about the rounding errors, then round the result at display time
If you're dealing with money/currencies, the first option is probably not a bad one. If you're just doing scientific maths, I would personally not worry about it and just round the results at display time, e.g. to 6 significant figures, which is the default for my C++ compiler (gcc; not sure whether that is in the C++ standard, but if you print 1.234567890 in gcc C++, the output is 1.23457, and the problem is avoided).
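A minimal sketch of the first option for currency, assuming a scale factor of 100 (cents); the helper names are ours:

```javascript
// Convert to integer cents, add exactly in integer arithmetic,
// and convert back only for display.
const toCents = x => Math.round(x * 100);
const fromCents = c => c / 100;

console.log(toCents(1.01) + toCents(1.02));            // 203, exact integer math
console.log(fromCents(toCents(1.01) + toCents(1.02))); // 2.03
```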
var a = 216.57421;
a.toPrecision(1); // => '2e+2' (1 significant digit is fewer than the integer part has, so the result is in exponential form)
a.toPrecision(2); // => '2.2e+2' because 6 >= 5
a.toFixed(1); // => '216.6' because 7 >= 5
a.toFixed(2); // => '216.57' because 4 < 5

Mysterious 2 arising in the output of my JavaScript decimal to binary converter?

I am trying to implement a fairly simple Decimal to Binary converter using the following recursive function:
function dectobin(d) {
  if (0 >= d) { return 0; }
  else if (1 == d) { return 1; }
  else {
    return 10 * dectobin(Math.floor(d / 2)) + (d % 2);
  }
}
Now the problem is that when I test it with 70007, there seems to be an overflow of 1 at the last recursion, when the last entry is popped off the stack. Once dectobin(35003) returns 100010001011101, it is scaled by 10 to 1000100010111010, and a 1 is supposed to be added. Except, instead of adding 1, it adds a 2, so the answer becomes 1000100010111012. I have checked my logic and math and found no mistake, so I have a feeling something internal to the language is causing this error. If anyone can explain what the issue is, that would be most gratifying. Thanks in advance.
For the record: you can convert a decimal to binary (string) using:
(70007).toString(2)
There's no need to multiply by 10. Furthermore, for positive input, 0 >= d will never be met.
Rewriting your function to:
function dectobin(d) {
  function recurse(dd) {
    return dd > 1 ? recurse(Math.floor(dd / 2)) + '' + dd % 2 : dd;
  }
  return recurse(d);
}
Should deliver the right result
Numbers more than 15 digits long are not reliable in JavaScript due to the internal representation of numbers as IEEE doubles.
E.g.
10001000101110110+1 == 10001000101110112
10001000101110110+2 == 10001000101110112
10001000101110110+3 == 10001000101110112
10001000101110110+4 == 10001000101110114
10001000101110110+5 == 10001000101110116
They all evaluate to true, so you might want to check out one of the BigNumber libraries to make your code work.
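On modern engines, built-in BigInt (ES2020, so newer than this answer) is enough: the same divide-by-2 recursion stays exact if it builds a string instead of multiplying by 10. The function name here is ours:

```javascript
// Same recursion as the question, but with BigInt arithmetic and string
// concatenation, so no precision is lost for long results.
function dectobinBig(d) {
  d = BigInt(d);
  return d > 1n ? dectobinBig(d / 2n) + (d % 2n).toString() : d.toString();
}

console.log(dectobinBig(70007)); // matches (70007).toString(2)
```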
I could be wrong about this, but this could be a precision error arising from the fact that JS numbers are represented as IEEE doubles, and the number that you're describing is so large that it might actually be out of the range of integer values representable exactly with IEEE doubles. If that's the case, you should probably consider having your function return a string representation of the binary representation rather than a numerical representation of the 1s and 0s.
According to this earlier answer, the largest integer representable exactly by an IEEE double (with no gaps in the representable integers) is 2^53, which is about 9×10^15, i.e. 16 digits. The number you're describing the error with has 16 digits, so I would suspect that this is the problem.
Hope this helps!
