It's very close but just one digit off. If you can change anything here to make it better, it'd be appreciated. I'm comparing my number with Math.E to see if I'm close.
var e = (function() {
    var factorial = function(n) {
        var a = 1;
        for (var i = 1; i <= n; i++) {
            a = a * i;
        }
        return a;
    };
    for (var k = 0, b = []; k < 18; k++) {
        b.push(b.length ? b[k - 1] + 1 / factorial(k) : 1 / factorial(k));
    }
    return b[b.length - 1];
})();
document.write(e);document.write('<br />'+ Math.E);
My number: 2.7182818284590455
Math.E: 2.718281828459045
Work from higher numbers to lower numbers to minimize cancellation:
var e = (function() {
    var e = 1;
    for (var k = 17; k > 0; --k) {
        e = 1 + e / k;
    }
    return e;
})();
Evaluating the Taylor polynomial by Horner's rule even avoids the factorial and allows you to use more terms (won't make a difference beyond 17, though).
As far as I can see, your number is the same as Math.E and even shows more digits.
2.7182818284590455
2.718281828459045
What is the problem after all?
With JavaScript, you cannot calculate e exactly this way due to the limited precision of JavaScript's floating-point computations. See http://www.javascripter.net/faq/accuracy.htm for more info.
To demonstrate this problem, please take a look at the following fiddle, which calculates e with n starting at 50,000,000 and incrementing n by 1 every 10 milliseconds:
http://jsfiddle.net/q8xRs/1/
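The fiddle isn't reproduced here, but a rough sketch of the same kind of experiment (assuming it uses the limit definition (1 + 1/n)^n) shows the effect:
// Assumed reconstruction of the fiddle's idea, not the original code:
for (var n = 50000000; n < 50000005; n++) {
    console.log(n, Math.pow(1 + 1 / n, n), Math.E);
}
// Even with n this large, the printed values agree with Math.E to only
// about 7-8 decimal places; the rounding error in 1 + 1/n is amplified
// by the huge exponent, so increasing n further doesn't help much.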
I like using integer values to approximate real ones.
Possible approximations of e in order of increasing accuracy are:
11/4
87/32
23225/8544
3442297523731/1266350489376
That last one is fairly accurate, equating to:
2.7182818284590452213260834432
which doesn't diverge from Wikipedia's value until the 18th decimal place:
2.71828182845904523536028747135266249775724709369995
So there's that, if you're interested.
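If you want to check these against Math.E in the console, here is a quick sketch (plain Number division, so the comparison itself is limited to double precision):
var approximations = [
    [11, 4],
    [87, 32],
    [23225, 8544],
    [3442297523731, 1266350489376]
];
approximations.forEach(function(pair) {
    var value = pair[0] / pair[1];
    console.log(pair[0] + '/' + pair[1] + ' = ' + value +
                ', error vs Math.E: ' + Math.abs(value - Math.E));
});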
Related
I'm having performance issues when trying to check whether an integer n is a perfect square (i.e. its square root is a whole number) when using BigInt. Using normal numbers below Number.MAX_SAFE_INTEGER gives reasonable performance, but attempting to use BigInt even within the same number range causes a huge performance hit.
The program solves the Battle of Hastings perfect square riddle put forth by Sam Loyd, whereby my program iterates over integers n (in this example, up to 7,000,000) to find instances where the variable y (the square root of 13n^2 + 1 in the code below) is a whole number, i.e. a perfect square. I'm interested in the original square root of one of the 13 perfect squares where this condition is satisfied, which is what my code generates (there's more than one).
Assuming y^2 < Number.MAX_SAFE_INTEGER, which is 2^53 - 1, this can be done without BigInt and runs in ~60ms on my machine:
const limit = 7_000_000;
var a = [];
console.time('regular int');
for (let n = 1; n < limit; n++) {
    if (Math.sqrt(Math.pow(n, 2) * 13 + 1) % 1 === 0)
        a.push(n);
}
console.log(a.join(', '));
console.timeEnd('regular int');
Being able to use BigInt would mean I could test numbers much higher than the built-in Number limit of 2^53 - 1, but BigInt seems inherently slower; unusably so. To test whether a BigInt is a perfect square, I have to use a third-party library, as Math.sqrt doesn't exist for BigInt; the integer square root only returns a floored value, so I check whether squaring it gives back the original number. I adapted functions for this from the Node.js libraries bigint-isqrt and bigint-is-perfect-square.
Thus, using BigInt with the same limit of 7,000,000 runs 35x slower:
// Integer square root of a BigInt via Newton's iteration
// (adapted from the bigint-isqrt package mentioned above)
var integerSQRT = function(value) {
    if (value < 2n)
        return value;
    if (value < 16n)
        return BigInt(Math.sqrt(Number(value)) | 0);
    let x0, x1;
    if (value < 4503599627370496n) // 2n ** 52n, small enough to estimate via Number
        x1 = BigInt(Math.sqrt(Number(value)) | 0) - 3n;
    else {
        let vlen = value.toString().length;
        if (!(vlen & 1))
            x1 = 10n ** (BigInt(vlen / 2));
        else
            x1 = 4n * 10n ** (BigInt((vlen / 2) | 0));
    }
    do {
        x0 = x1;
        x1 = ((value / x0) + x0) >> 1n;
    } while ((x0 !== x1 && x0 !== (x1 - 1n)));
    return x0;
};

function perfectSquare(n) {
    // Divide n by 4 while divisible
    while ((n & 3n) === 0n && n !== 0n) {
        n >>= 2n;
    }
    // So, for now n is not divisible by 2
    // The only possible residual modulo 8 for such n is 1
    if ((n & 7n) !== 1n)
        return false;
    return n === integerSQRT(n) ** 2n;
}
const limit = 7_000_000;
var a = [];
console.time('big int');
for (let n = 1n; n < limit; n++) {
    if (perfectSquare(((n ** 2n) * 13n) + 1n))
        a.push(n);
}
console.log(a.join(', '));
console.timeEnd('big int');
Ideally I'm interested in doing this with a much higher limit than 7 million, but I'm unsure whether I can optimise the BigInt version any further. Any suggestions?
You may be pleased to learn that some recent improvements on V8 have sped up the BigInt version quite a bit; with a recent V8 build I'm seeing your BigInt version being about 12x slower than the Number version.
A remaining challenge is that implementations of BigInt square root are typically based on Newton iteration and hence need an estimate for a starting value, which should be near the final result, i.e. have about half as many bits as the input, something you would get from log2(X) or bitLength(X). Until this proposal gets anywhere, that can best be done by converting the BigInt to a string and taking that string's length, which is fairly expensive.
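For illustration only, here is a hypothetical helper (not part of the question's code) that reads the bit length off a base-2 string; the question's integerSQRT does the same kind of thing with the decimal string length:
// Hypothetical sketch: bit length via string conversion, which is the
// expensive step described above.
function bitLength(x) {
    return x === 0n ? 0 : x.toString(2).length;
}
// A Newton starting guess with roughly half as many bits as the input:
function sqrtGuess(x) {
    return 1n << BigInt(bitLength(x) >> 1);
}
console.log(sqrtGuess(1000000n)); // 1024n, close to the true root 1000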
To get faster right now, @Ouroborus' idea is great. I was curious how fast it would be, so I implemented it:
(function betterAlgorithm() {
    const limit = 7_000_000n;
    var a = [];
    console.time('better algorithm');
    let m = 1n;
    let m_squared = 1n;
    for (let n = 1n; n < limit; n += 1n) {
        let y_squared = n * n * 13n + 1n;
        while (y_squared > m_squared) {
            m += 1n;
            m_squared = m * m;
        }
        if (y_squared === m_squared) {
            a.push(n);
        }
    }
    console.log(a.join(', '));
    console.timeEnd('better algorithm');
})();
As a particular short-term detail, this uses += 1n instead of ++, because as of today, V8 hasn't yet gotten around to optimizing ++ for BigInts. This difference should disappear eventually (hopefully soon).
On my machine, this version takes only about 4x as much time as your original Number-based implementation.
For larger numbers, I would expect some gains from replacing the multiplications with additions (based on the observation that the delta between consecutive square numbers grows linearly), but for small-ish upper limits that appears to be a bit slower. If you want to toy around with it, this snippet describes the idea:
let m_squared = 1n;        // == 1*1
let m_squared_delta = 3n;  // == 2*2 - 1*1
let y_squared = 14n;       // == 1*1*13+1
let y_squared_delta = 39n; // == (2*2*13+1) - (1*1*13+1)
for (let n = 1; n < limit; n++) {
    while (y_squared > m_squared) {
        m_squared += m_squared_delta;
        m_squared_delta += 2n;
    }
    if (y_squared === m_squared) {
        a.push(n);
    }
    y_squared += y_squared_delta;
    y_squared_delta += 26n;
}
The earliest where this could possibly pay off is when the results exceed 2n**64n; I wouldn't be surprised if it wasn't measurable before 2n**256n or so.
I'm trying to solve all the lessons on Codility but I failed to do so on the following problem: Ladder by Codility.
I've searched all over the internet and I'm not finding an answer that satisfies me, because no one explains why the max variable impacts the result so much.
So, before posting the code, I'll explain my thinking.
By looking at it, I didn't need much time to understand that the total number of combinations is a Fibonacci number, and by removing the 0 from the Fibonacci array, I'd find the answer really fast.
Now, afterwards, they say that we should return the number of combinations modulo 2^B[i].
So far so good, and I decided to submit it without the var max; then I got a score of 37%. I searched all over the internet, and the 100% result was similar to mine except that they added max = Math.pow(2,30).
Can anyone explain to me how and why that max influences the score so much?
My Code:
// Raises 2 to the power num
function pow(num) {
    return Math.pow(2, num);
}

// Returns an array with all Fibonacci numbers except for 0
function fibArray(num) {
    // const max = pow(30); -> Adding this max to the Fibonacci array makes the answer score 100%
    const arr = [0, 1, 1];
    let current = 2;
    while (current <= num) {
        current++;
        // let next = (arr[current - 1] + arr[current - 2]) % max;
        let next = arr[current - 1] + arr[current - 2]; // Without the max it scores 30%
        arr.push(next);
    }
    arr.shift(); // remove 0
    return arr;
}

function solution(A, B) {
    let f = fibArray(A.length + 1);
    let res = new Array(A.length);
    for (let i = 0; i < A.length; ++i) {
        res[i] = f[A[i]] % (pow(B[i]));
    }
    return res;
}
console.log(solution([4,4,5,5,1],[3,2,4,3,1])); //5,1,8,0,1
// Note that the console.log won't differ in this solution whether max is set or not.
// Running the exercise on Codility shows the full log with all details
// of where it passed and where it failed.
The limits for input parameters are:
Assume that:
L is an integer within the range [1..50,000];
each element of array A is an integer within the range [1..L];
each element of array B is an integer within the range [1..30].
So the array f in fibArray can be 50,001 elements long.
Fibonacci numbers grow exponentially; according to this page, the 50,000th Fib number has over 10,000 digits.
JavaScript's Number type does not have built-in support for arbitrary-precision integers, and doubles only offer about 15-16 significant figures of precision. So with your code as posted (without the modulo), you will get "garbage" values for any significant value of L. This is why you only got 30%.
But why is max necessary? Modulo math tells us that:
(a + b) % c = ([a % c] + [b % c]) % c
So by applying % max to the iterative calculation step arr[current-1] + arr[current-2], every element in fibArray becomes its corresponding Fib number mod max, without any variable exceeding the value of max (or built-in integer types) at any time:
fibArray[2] = (fibArray[1] + fibArray[0]) % max = (F1 + F0) % max = F2 % max
fibArray[3] = (F2 % max + F1) % max = (F2 + F1) % max = F3 % max
fibArray[4] = (F3 % max + F2 % max) % max = (F3 + F2) % max = F4 % max
and so on ...
(Fn is the n-th Fib number)
Note that as B[i] will never exceed 30, pow(2, B[i]) <= max; therefore, since max is always divisible by pow(2, B[i]), applying % max does not affect the final result.
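Putting this together, a minimal sketch of the corrected fibArray (same structure as the question's code, using Math.pow(2, 30) directly and applying the modulo to every sum):
const max = Math.pow(2, 30);
function fibArray(num) {
    const arr = [0, 1, 1];
    let current = 2;
    while (current <= num) {
        current++;
        // Reduce each sum mod 2^30 so no entry ever exceeds 2^30
        arr.push((arr[current - 1] + arr[current - 2]) % max);
    }
    arr.shift(); // remove 0
    return arr;
}
Every entry now stays below 2^30, so each sum stays below 2^31 and remains exact in a double, and because every pow(2, B[i]) divides 2^30, the final res[i] = f[A[i]] % pow(2, B[i]) values are unchanged.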
Here is a Python 100% answer that I hope offers an explanation :-)
In a nutshell: the modulus operator % behaves like bitwise AND (&) for certain divisors,
e.g. any number % 10 gives its rightmost digit:
284%10 = 4
1994%10 = 4
FACTS OF LIFE:
for powers of 2 -> X % Y is equivalent to X & (Y - 1)
precomputing (2**i)-1 for i in range(1, 31) is faster than computing the masks on the fly for every element of B when very large arrays are passed in for this particular lesson.
Thus fib(A[i]) & pb[B[i]] will be faster to compute than the equivalent X % Y.
https://app.codility.com/demo/results/trainingEXWWGY-UUR/
And for completeness the code is here.
https://github.com/niall-oc/things/blob/master/codility/ladder.py
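In JavaScript terms (the trick itself is language-independent), the power-of-two equivalence and the precomputed masks look roughly like this:
// For a non-negative integer x below 2^31 (JS bitwise ops work on 32-bit values)
// and a power of two 2**k:  x % (2 ** k) === (x & ((2 ** k) - 1))
const x = 1994;
console.log(x % 8, x & 7);       // both print 2
console.log(x % 1024, x & 1023); // both print 970
// Precomputed masks, indexed so masks[B[i]] corresponds to 2**B[i] - 1:
const masks = Array.from({ length: 31 }, (_, i) => (2 ** i) - 1);
console.log(masks[3]); // 7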
Here is my explanation and solution in C++:
Compute the first L Fibonacci numbers. Each addition needs to be reduced modulo 2^30, because the 50,000th Fibonacci number is far too big to store even in a long double. Since INT_MAX is 2^31 - 1, the sum of two numbers that were already reduced modulo 2^30 cannot exceed it, so we do not need a bigger type or any casting.
Go through the arrays, executing the lookup and the modulo. We can be sure this gives the correct result, since reducing modulo 2^30 does not take away any information needed later; e.g. taking a number modulo 100 does not affect a subsequent modulo 10.
vector<int> solution(vector<int> &A, vector<int> &B)
{
    const int L = A.size();
    vector<int> fibonacci_numbers(L, 1);
    if (L > 1)
        fibonacci_numbers[1] = 2;
    static const int pow_2_30 = pow(2, 30);
    for (int i = 2; i < L; ++i) {
        fibonacci_numbers[i] = (fibonacci_numbers[i - 1] + fibonacci_numbers[i - 2]) % pow_2_30;
    }
    vector<int> consecutive_answers(L, 0);
    for (int i = 0; i < L; ++i) {
        consecutive_answers[i] = fibonacci_numbers[A[i] - 1] % static_cast<int>(pow(2, B[i]));
    }
    return consecutive_answers;
}
How do I write a program that prints out the bits of an integer?
I'm trying to do something like that:
function countBits(octet)
{
    var i;
    var c = "";
    var k = "";
    i = 128;
    while (i > 0)
    {
        c = "";
        if (octet < i)
        {
            c = '0';
            i = i / 2;
            k += c;
        }
        else
        {
            c = '1';
            k += c;
            octet = octet - i;
            i = i / 2;
        }
    }
    return k;
}
But if I try to print the bits with this program I get this output:
Input: 123
Output: 01111011 followed by an infinite number of zeros
How can I remove this bug?
P.S.: I want to do this program using only loops and basic operations, NOT functions like (n >>> 0).toString(2); or .map() or something like that.
Your variable i is always greater than 0, so the while loop keeps running forever. When you update it, you halve it. That will never reach 0; however, it will reach a value smaller than one, and at that point you want to stop.
Try using while (i >= 1) instead as your condition.
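A minimal corrected sketch (same logic as the question's code, with the halving moved after the if/else for brevity):
function countBits(octet) {
    var i = 128;
    var k = "";
    while (i >= 1) {     // stop once i drops below 1
        if (octet < i) {
            k += '0';
        } else {
            k += '1';
            octet = octet - i;
        }
        i = i / 2;       // 128, 64, ..., 1; the next halving ends the loop
    }
    return k;
}
console.log(countBits(123)); // "01111011"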
So I added a little logging to check the values of i, and I got this in the loop:
i = 64; i = 32; i = 16; ... i = 1; i = 0.5; i = 0.25
So yeah, do what AgataB said about using another condition, since in JavaScript
1 / 2 does not give you 0.
Unlike many other programming languages, JavaScript does not define
different types of numbers, like integers, short, long, floating-point
etc.
JavaScript numbers are always stored as double precision floating
point numbers, following the international IEEE 754 standard.
Quoted from: http://www.w3schools.com/js/js_numbers.asp
Experienced CodeFighters, I have just started using the CodeFights website to learn JavaScript. I have solved their task but the system does not accept it. The task is to sum all the individual digits of a number. For example, digitSum(111) = 3. What is wrong with my code? Please help me.
Code
function digitSum(n) {
    var emptyArray = [];
    var total = 0;
    var number = n.toString();
    var res = number.split("");
    for (var i = 0; i < res.length; i++) {
        var numberInd = Number(res[i]);
        emptyArray.push(numberInd);
    }
    var finalSum = emptyArray.reduce(add, total);
    function add(a, b) {
        return a + b;
    }
    console.log(finalSum);
    //console.log(emptyArray);
    //console.log(res);
}
Here's a faster trick for summing the individual digits of a number using only arithmetic:
var digitSum = function(n) {
    var sum = 0;
    while (n > 0) {
        sum += n % 10;
        n = Math.floor(n / 10);
    }
    return sum;
};
n % 10 is the remainder when you divide n by 10. Effectively, this retrieves the ones-digit of a number. Math.floor(n / 10) is the integer division of n by 10. You can think of it as chopping off the ones-digit of a number. That means that this code adds the ones digit to sum, chops off the ones digit (moving the tens digit down to where the ones-digit was) and repeats this process until the number is equal to zero (i.e. there are no digits left).
The reason why this is more efficient than your method is that it doesn't require converting the integer to a string, which is a potentially costly operation. Since CodeFights is mainly a test of algorithmic ability, they are most likely looking for the more algorithmic answer, which is the one I explained above.
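A quick check against the example from the question:
console.log(digitSum(111));   // 3
console.log(digitSum(12345)); // 15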
I'm trying to solve Project Euler Problem 20. While I think the solution is quite trivial, it turns out JavaScript doesn't give me the correct output:
var fact = 1;
for (var i = 100; i > 0; i--) {
    fact *= i;
}
var summed = 0;
while (fact > 0) {
    summed += Math.floor(fact % 10);
    fact = fact / 10;
}
console.log(summed); // 587
http://jsfiddle.net/9uEFj/
Now, let's try computing 100! from the bottom up (that is, 1 * 2 * 3 * ... * 100 instead of 100 * 99 * ... * 1 as before):
var fact = 1;
for (var i = 1; i <= 100; i++) {
    fact *= i;
}
var summed = 0;
while (fact > 0) {
    summed += Math.floor(fact % 10);
    fact = fact / 10;
}
console.log(summed); // 659
http://jsfiddle.net/YX4bu/
What is happening here? Why does a different multiplication order give me a different result? Also, why does neither result give me the correct answer to Problem 20 (648)?
The product of the first 100 integers is a large number, on the order of 1e158. It is handled as a floating-point number, with a consequent loss of precision. See The Floating Point Guide for a fuller explanation. The results of your two multiplications match to 15 significant figures, but differ at the 16th and beyond. This is enough to throw off your final result.
To do this properly you'll need to use integer arithmetic throughout, something the built-in Number type cannot do for values this large. You'll need to handle a number of 158 digits, and write a multiplication routine for it yourself.
If you use a string format to store the number, the second part of your script just needs to total the digits.
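For completeness: modern JavaScript has BigInt (as used in the BigInt answers earlier in this document), which handles the 158-digit product directly. A minimal sketch of Problem 20 with it:
// Compute 100! exactly as a BigInt, then total its decimal digits.
let fact = 1n;
for (let i = 2n; i <= 100n; i++) {
    fact *= i;
}
let summed = 0;
for (const digit of fact.toString()) {
    summed += Number(digit);
}
console.log(summed); // 648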