I am trying to solve this kata on Codewars: https://www.codewars.com/kata/56e56756404bb1c950000992/train/javascript, and I have a method that I think should be correct, but it throws a RangeError. This is my code:
function sumDifferencesBetweenProductsAndLCMs(pairs){
  return pairs.reduce((acc, curr) => {
    // LCM * GCD = product
    return acc + (curr[0] * curr[1] * (1 - 1 / GCD(curr)))
  }, 0)
}

function GCD(pair) {
  // Euclidean algorithm
  let a = Math.max(...pair);
  let b = Math.min(...pair);
  let r = a % b;
  if (r == 0) {
    return b;
  }
  return GCD([b, r]);
}
Where am I going wrong? How else can I implement the Euclidean Algorithm?
I'm not sure I get any of the maths stuff here, but skipping zeros seems to stop it from trying to recurse forever.
Anything % 0 gives NaN, anything % NaN gives NaN, and NaN == 0 is false, so once a zero value enters GCD the recursion has no way to terminate.
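A quick console check confirms those three facts:

console.log(5 % 0);    // NaN
console.log(NaN % 5);  // NaN
console.log(NaN == 0); // false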
function sumDifferencesBetweenProductsAndLCMs(pairs){
  return pairs.reduce((acc, curr) => {
    // LCM * GCD = product
    if(Math.min(...curr) === 0) return acc; // Skip zeros
    return acc + (curr[0] * curr[1] * (1 - 1 / GCD(curr)))
  }, 0)
}

function GCD(pair) {
  // Euclidean algorithm
  let a = Math.max(...pair);
  let b = Math.min(...pair);
  let r = a % b;
  if (r == 0) {
    return b;
  }
  return GCD([b, r]);
}
console.log(sumDifferencesBetweenProductsAndLCMs([[15,18], [4,5], [12,60]]),840);
console.log(sumDifferencesBetweenProductsAndLCMs([[1,1], [0,0], [13,91]]),1092);
console.log(sumDifferencesBetweenProductsAndLCMs([[15,7], [4,5], [19,60]]),0);
console.log(sumDifferencesBetweenProductsAndLCMs([[20,50], [10,10], [50,20]]),1890);
console.log(sumDifferencesBetweenProductsAndLCMs([]),0);
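As for the second question, "How else can I implement the Euclidean Algorithm?": here is a minimal iterative sketch that handles zeros directly via the identity GCD(a, 0) = a, so it cannot recurse forever. (The [0,0] pair still needs the skip in the reduce, since 1 / GCD would be a division by zero there.)

function GCD(pair) {
  let [a, b] = pair;
  // Euclidean algorithm, iteratively: swap until b is 0, then GCD(a, 0) = a
  while (b !== 0) {
    [a, b] = [b, a % b];
  }
  return a;
}
console.log(GCD([12, 60])); // 12
console.log(GCD([5, 0]));   // 5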
let power2 = (x,n) => {
  if(n == 0) return 1;
  let temp = power2(x,n/2);
  if(n%2 == 1) return temp * temp * x;
  return temp*temp;
}
console.log(power2(4,3));
This method has fewer call-tree nodes and lower time complexity, but it's giving the wrong output.
The problem with the original code is that n / 2 results in a fractional number when you need it to be treated as an integer. Bitwise operations are always performed on 32-bit integers, so n >> 1 correctly yields the halved integer. The modulo check, by contrast, does not truncate its operands; n % 2 == 1 is simply false for fractional n, which is why that part never misbehaved in your code.
let power2 = (x, n) => {
  if (n === 0) return 1;
  const temp = power2(x, (n >> 1));
  if (n % 2 === 1) return temp * temp * x;
  return temp * temp;
}
console.log(power2(4, 3));
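One caveat: n >> 1 coerces n to a 32-bit integer, so if the exponent could ever exceed that range, Math.trunc(n / 2) performs the same integer halving without the 32-bit coercion. A minimal sketch:

let power2 = (x, n) => {
  if (n === 0) return 1;
  const temp = power2(x, Math.trunc(n / 2)); // integer halving, no 32-bit coercion
  if (n % 2 === 1) return temp * temp * x;
  return temp * temp;
}
console.log(power2(4, 3)); // 64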
If you need a custom integer power function based on recursion, consider this snippet (note that it recurses once per decrement of i, so the call depth is O(i) rather than the O(log i) of the halving version):
// Custom pow
const myPow = (n, i) => i > 0 ? myPow(n, i - 1) * n : 1;
// Test
console.log(myPow(4, 3));
Straight out of CTCI, 8.14: Given a boolean expression consisting of the symbols 0 (false), 1 (true), & (AND), | (OR), and ^ (XOR), and a desired boolean result value result, implement a function to count the number of ways of parenthesizing the expression such that it evaluates to result.
I'm attempting a brute force approach that calculates every single possible combo; if one matches the desired result, I add it to an array (combos) and return that array's length. It seems to work for most expressions, but not the 2nd example given. What do I seem to be missing?
function countEval(s, goalBool, combos = []) {
  // on first call, make s into an array since they're easier to work with
  if (!(s instanceof Array)) {
    // and turn 1s and 0s into their bool equivalents
    s = s.split('').map((item) => {
      if (item === '1') {
        return true;
      } else if (item === '0') {
        return false;
      } else {
        return item;
      }
    });
  }
  if (s.length === 1 && s[0] === goalBool) {
    combos.push(s[0]); // can be anything really
  } else {
    for (let i = 0; i < s.length - 2; i = i + 2) {
      // splice out the next 3 items
      const args = s.splice(i, 3);
      // pass them to see what they evaluate to
      const result = evalHelper(args[0], args[1], args[2]);
      // splice that result back into the s array
      s.splice(i, 0, result);
      // recurse on that array
      countEval(s, goalBool, combos);
      // remove said item that was just put in
      s.splice(i, 1);
      // and reset the array for the next iteration
      s.splice(i, 0, ...args);
    }
  }
  return combos.length;
}

function evalHelper(a, op, b) {
  if (op === '|') {
    return a || b;
  } else if (op === '&') {
    return a && b;
  } else if (op === '^') {
    return a !== b;
  }
}
With the 2 examples given it works for the first one, but not the second...
console.log(countEval('1^0|0|1', false)); // 2, correct
console.log(countEval('0&0&0&1^1|0', true)); // 30, should be 10!?!?!
The Bug
Your program is not taking into account overlap.
Example
Consider your program when s = '1|1|1|1'.
In one of the depth-first search iterations, your algorithm will make the reduction s = (1|1)|1|1. Then in a deeper recursive level in the same search, your algorithm will make the reduction s = (1|1)|(1|1). Now s is fully reduced, so you increment the length of combos.
In a different depth-first search iteration, your algorithm will first make the reduction s = 1|1|(1|1). Then in a deeper recursive level in the same search, your algorithm will make the reduction s = (1|1)|(1|1). Now s is fully reduced, so you increment the length of combos.
Notice that in both cases s was parenthesized the same way, so this single parenthesization gets counted twice: your program does not take the overlap between search branches into account.
A Better Solution
When a problem asks for the number of ways something can be done, that is usually a strong indicator that dynamic programming could be a potential solution. The recurrence relation for this problem is a bit tricky.
We just need to pick a "principal" operator, then determine the number of ways the left and right sides could evaluate to true or false. Then, based on the "principal" operator and the goal boolean, we can derive a formula for the number of ways the expression could evaluate to the goal boolean given that the operator we picked was the "principal" operator.
Code
function ways(expr, res, i, j, cache) {
  if (i == j) {
    return parseInt(expr[i]) == res ? 1 : 0;
  } else if (!([i, j, res] in cache)) {
    var ans = 0;
    for (var k = i + 1; k < j; k += 2) {
      var op = expr[k];
      var leftTrue = ways(expr, 1, i, k - 1, cache);
      var leftFalse = ways(expr, 0, i, k - 1, cache);
      var rightTrue = ways(expr, 1, k + 1, j, cache);
      var rightFalse = ways(expr, 0, k + 1, j, cache);
      if (op == '|') {
        if (res) {
          ans += leftTrue * rightTrue + leftTrue * rightFalse + leftFalse * rightTrue;
        } else {
          ans += leftFalse * rightFalse;
        }
      } else if (op == '^') {
        if (res) {
          ans += leftTrue * rightFalse + leftFalse * rightTrue;
        } else {
          ans += leftTrue * rightTrue + leftFalse * rightFalse;
        }
      } else if (op == '&') {
        if (res) {
          ans += leftTrue * rightTrue;
        } else {
          ans += leftFalse * rightFalse + leftTrue * rightFalse + leftFalse * rightTrue;
        }
      }
    }
    cache[[i, j, res]] = ans;
  }
  return cache[[i, j, res]];
}

function countEval(expr, res) {
  return ways(expr, res ? 1 : 0, 0, expr.length - 1, {});
}
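Running the question's two examples through this version gives the expected counts:

console.log(countEval('1^0|0|1', false));    // 2
console.log(countEval('0&0&0&1^1|0', true)); // 10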
How could I write a function named unfactorial that takes a number and returns a string representing its base factorial in the form n!? This would undo the factorial operation, if one is possible. It should return null if no factorial is possible.
A factorial operation looks like this:
5! = 5 * 4 * 3 * 2 * 1 = 120
Function Output:
unfactorial(120) // '5!'
unfactorial(150) // null
unfactorial(5040) // '7!'
My current Solution
const unfactorial = (num) => {
  let d = 1
  while (num > 1 && num === Math.round(num)) {
    d += 1
    num /= d
  }
  if (num === 1)
    return `${d}!`
  else return null
}
Here's one way you could do it using an auxiliary recursive function.
Note, it will not work for numbers that are not actual factorials. This is left as an exercise for the asker.
const unfactorial = x => {
  const aux = (acc, x) => {
    if (x === 1)
      return acc - 1
    else
      return aux(acc + 1, x / acc)
  }
  return aux(2, x)
}
console.log(unfactorial(120)) // 5
console.log(unfactorial(5040)) // 7
How it works
Starting with the input number x and a counter, acc = 2, successively divide x by acc (then acc + 1, then acc + 2, etc) until x == 1. Then return acc - 1.
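To complete the exercise left for the asker, here is a minimal sketch of the same recursion that also restores the question's output format, returning an 'n!' string or null when the input is not a factorial (assuming a positive numeric input):

const unfactorial = x => {
  const aux = (acc, x) => {
    if (x === 1) return `${acc - 1}!`
    // a fractional or sub-1 quotient means x was not a factorial
    if (x < 1 || x !== Math.round(x)) return null
    return aux(acc + 1, x / acc)
  }
  return aux(2, x)
}
console.log(unfactorial(120))  // '5!'
console.log(unfactorial(150))  // null
console.log(unfactorial(5040)) // '7!'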
Or go bananas using only expressions. You'll probably like this, @Mr.Polywhirl
const U = f => f (f)
const Y = U (h => f => f (x => h (h) (f) (x)))
const unfactorial = Y (f => acc => x =>
  x === 1 ? acc - 1 : f (acc + 1) (x / acc)
) (2)
console.log(unfactorial(120)) // 5
console.log(unfactorial(5040)) // 7
I have a recurrence series, so I have converted it into recursive form.
But it fails with a maximum call stack size exceeded error.
The same code works correctly with n = 4 as fn(4), but it does not work with higher values. What is the problem with higher values such as n = Math.pow(10, 18)?
var fn = function(n){
  // take initial value of f(0) = 1 & f(1) = 1
  if (n === 1 || n === 0) return 1;
  // calculate on basis of initial values
  else if (n === -1) return (fn(1) - 3 * fn(0) - gn(-1) - 2 * gn(0));
  else
    return (3 * fn(n-1) + 2 * fn(n-2) + 2 * gn(n-1) + 3 * gn(n-2));
};
var gn = function(n){
  // take initial value of g(0) = 1 & g(1) = 1
  if (n === 1 || n === 0) return 1;
  // calculate on basis of initial values
  else if (n === -1) return ((gn(1) - gn(0)) / 2);
  else return (gn(n-1) + 2 * gn(n-2));
};
The problem with higher values, such as n = Math.pow(10, 18), is that the stack is simply not that big. (Not even close.) You can usually have a stack depth of a couple thousand, and not much more.
You should use iteration instead.
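A minimal iterative sketch of the same pair of recurrences for non-negative n. This removes the stack limit, though a value like 10^18 is still far too many loop steps, and the results exceed Number precision long before that, so an input that large would need a closed form or matrix exponentiation over BigInt:

var fn = function(n) {
  // f(0) = f(1) = 1, g(0) = g(1) = 1
  var f0 = 1, f1 = 1, g0 = 1, g1 = 1;
  for (var i = 2; i <= n; i++) {
    var f2 = 3 * f1 + 2 * f0 + 2 * g1 + 3 * g0;
    var g2 = g1 + 2 * g0;
    f0 = f1; f1 = f2;
    g0 = g1; g1 = g2;
  }
  return f1;
};
console.log(fn(4)); // 162, matches the recursive version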
I'm still a bit new to JavaScript, so would anyone care to explain how to solve this small issue?
Basically, I'm using different languages to solve Codility training tasks. I've encountered a small problem when working with JavaScript floating points. Here is an example of what I mean. The task in question is in Lesson 3, task one: CountDiv.
In Java my solution works perfectly; it scored 100/100. Here is the code:
class Solution {
    public int solution(int A, int B, int K) {
        int offset = A % K == 0 ? 1 : 0;
        return (B / K) - (A / K) + offset;
    }
}
Written in JavaScript, the code scores 75/100.
function solution(A, B, K) {
  var offset;
  if (A % K === 0) {
    offset = 1;
  } else {
    offset = 0;
  }
  var result = (B / K) - (A / K) + offset;
  return parseInt(result);
}
JavaScript solution fails on following test:
A = 11, B = 345, K = 17
(Returns 19, Expects 20)
I'm assuming it's something to do with how JavaScript converts floating-point numbers to integers?
Would anyone care to explain how to write the JS solution properly?
Thanks
Use parseInt on each division result.
When you use the division operator, the result is a floating-point number: B / K - A / K is 345 / 17 - 11 / 17 ≈ 20.294 - 0.647 = 19.647, and truncating only at the end gives 19. Truncate each quotient to an integer before subtracting instead. (Thanks to @ahmacleod.)
function solution(A, B, K) {
  var offset = A % K === 0 ? 1 : 0;
  return parseInt(B / K) - parseInt(A / K) + offset;
}
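This now passes the failing test from the question:

console.log(solution(11, 345, 17)); // 20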
My first attempt was to write it this way:
function solution(A, B, K) {
  return Math.floor(B / K) - Math.floor(A / K) + (A % K === 0 ? 1 : 0);
}
and it scores 100/100; as mentioned above, parseInt should do the trick as well.
Swift 100%
public func solution(_ A : Int, _ B : Int, _ K : Int) -> Int {
    // write your code in Swift 4.2.1 (Linux)
    let offset = A % K == 0 ? 1 : 0
    return (B / K) - (A / K) + offset
}
function solution(A, B, K) {
  return Math.floor(B / K) - Math.ceil(A / K) + 1;
}
Score 100/100. This works because Math.ceil(A / K) is the index of the first multiple of K at or above A and Math.floor(B / K) is the index of the last multiple at or below B, so their difference plus one counts the multiples in [A, B].
function solution(A, B, K) {
  // write your code in JavaScript (Node.js 8.9.4)
  let val;
  if (A % K === 0 || B % K === 0) {
    val = Math.floor(((B - A) / K + 1));
  } else {
    let x = A % K;
    let y = B % K;
    val = ((B - A) + (x - y)) / K;
  }
  if ((B === A) && (B === K)) {
    val = Math.floor(((B - A) / K + 1));
  }
  return val;
}
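For a quick sanity check, any of these variants can be run against the failing case from the question:

console.log(solution(11, 345, 17)); // 20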