I need to find the highest possible sum of numbers from an array passed to a function, such that the sum can be divided by a second parameter k with no remainder.
I am struggling to think of a way to iterate through the array, add up all the possible combinations of its elements, and test each sum for divisibility by k.
I thought of using a for loop and accumulating the result in a variable on each iteration.
The part I can't get my head around is how to add all the possible combinations of the numbers in the array. I can add them sequentially from the first element to the last, but not in arbitrary combinations such as the element at index 0 plus the element at index 3, and so on.
I am fairly new to coding, so explanations of how you would tackle this iteration challenge would be much appreciated.
function luckyCandies(prizes, k) {
let sum = 0;
let remainder = 0;
let maxCandies = 0;
let highestNumber = 0;
prizes.sort(function(a, b) {
return b - a;
});
for (let i = 0; i < prizes.length; i++) {
sum = sum + prizes[i];
}
for (let i = 0; i < prizes.length; i++) {
if (sum % k == 0) {
sum = sum - prizes[i];
}
}
console.log(sum);
return sum;
}
I implemented this solution for your use case based on the answers in this.
In the given link the solutions are for the highest possible sum divisible by 3, but that won't be a problem, since there is a proper, detailed explanation there.
const maxSumDivByNo = (A, no) => {
const K = Array(no).fill().map((v,i) => 0);
for (let x of A) {
let pre = [...K]; // create current from previous 🤔
for (let y of pre)
K[(x + y) % no] = Math.max(K[(x + y) % no], x + y); // add A[i] (ie. x) onto each previous bucket and update each current bucket to max of itself and the current sum (x + y)
}
return K[0]; // max sum of all N items of A which is evenly divisible by no 🎯
};
const A = [1, 2, 3, 4, 5];
const no = 5;
console.log(maxSumDivByNo(A, no)); // --> 15
const A1 = [1, 6, 2, 9, 5];
const no1 = 8
console.log(maxSumDivByNo(A1, no1)); // --> 16
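If you want to keep your original luckyCandies(prizes, k) signature, here is a minimal sketch of the same remainder-bucket idea adapted to it (my own adaptation of the answer above, assuming prizes holds non-negative integers and k > 0):
function luckyCandies(prizes, k) {
  const best = Array(k).fill(0); // best[r] = largest sum found so far whose remainder mod k is r
  for (const prize of prizes) {
    const previous = [...best];  // snapshot, so each prize is added at most once per pass
    for (const sum of previous) {
      const next = sum + prize;
      best[next % k] = Math.max(best[next % k], next);
    }
  }
  return best[0]; // largest sum that divides evenly by k
}
console.log(luckyCandies([1, 6, 2, 9, 5], 8)); // --> 16, same as maxSumDivByNo above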
Given a JavaScript function that takes in an array of numbers as the first and the only argument.
The function should remove one element from the array such that, after removal, the sum of the elements at odd indices equals the sum of the elements at even indices. It should count all the possible unique ways in which we can remove one element at a time to achieve this balance between the odd and even sums.
Example var arr = [2, 6, 4, 2];
Then the output should be 2, because there are two elements, 6 and 2 at indices 1 and 3 respectively, whose removal balances the two sums.
When we remove 6 from the array
[2, 4, 2] the sum at odd indexes = sum at even indexes = 4
if we remove 2
[2, 6, 4] the sum at odd indices = sum at even indices = 6
The code below works perfectly. There might be other solutions but I want to understand this one, because I feel there is a concept I have to learn here. Can someone explain the logic of this algorithm please?
const arr = [2, 6, 4, 2];
const check = (arr = []) => {
  var oddTotal = 0;
  var evenTotal = 0;
  var result = 0;
  const arraySum = []
  for (var i = 0; i < arr.length; ++i) {
    if (i % 2 === 0) {
      evenTotal += arr[i];
      arraySum[i] = evenTotal
    }
    else {
      oddTotal += arr[i];
      arraySum[i] = oddTotal
    }
  }
  for (var i = 0; i < arr.length; ++i) {
    if (i % 2 === 0) {
      if (arraySum[i]*2 - arr[i] + oddTotal === (arraySum[i - 1] || 0)*2 + evenTotal) {
        result = result + 1
      };
    } else if (arraySum[i]*2 - arr[i] + evenTotal === (arraySum[i - 1] || 0)*2 + oddTotal) {
      result = result + 1
    }
  }
  return result;
};
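Not an explanation of the algorithm above, but as a point of reference while reading it, here is a straightforward brute-force sketch of the same requirement (my own illustration): remove each index in turn and compare the even-index and odd-index sums of what remains.
const countBalancedRemovals = (arr) => {
  let count = 0;
  for (let removed = 0; removed < arr.length; removed++) {
    const rest = arr.filter((_, i) => i !== removed);  // array with one element removed
    const evenSum = rest.reduce((s, v, i) => (i % 2 === 0 ? s + v : s), 0);
    const oddSum = rest.reduce((s, v, i) => (i % 2 === 1 ? s + v : s), 0);
    if (evenSum === oddSum) count++;
  }
  return count;
};
console.log(countBalancedRemovals([2, 6, 4, 2])); // 2, matching check([2, 6, 4, 2])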
I am going through Codility questions and I am on the "CountNonDivisible" question. I tried the brute-force way; it worked, but it's not efficient at all.
I found the answers with no explanations, so if someone could take some time and walk me through this answer it would be highly appreciated.
function solution(A) {
  const lenOfA = A.length
  const counters = Array(lenOfA*2 + 1).fill(0)
  for(let j = 0; j<lenOfA; j++) counters[A[j]]++;
  return A.map(number=> {
    let nonDivisor = lenOfA
    for(let i = 1; i*i <= number; i++) {
      if(number % i !== 0) continue;
      nonDivisor -= counters[i];
      if(i*i !== number) nonDivisor -= counters[number/i]
    }
    return nonDivisor
  })
}
This is the question
Task description
You are given an array A consisting of N integers.
For each number A[i] such that 0 ≤ i < N, we want to count the number
of elements of the array that are not the divisors of A[i]. We say
that these elements are non-divisors.
For example, consider integer N = 5 and array A such that:
A[0] = 3
A[1] = 1
A[2] = 2
A[3] = 3
A[4] = 6
For the following elements:
A[0] = 3, the non-divisors are: 2, 6,
A[1] = 1, the non-divisors are: 3, 2, 3, 6,
A[2] = 2, the non-divisors are: 3, 3, 6,
A[3] = 3, the non-divisors are: 2, 6,
A[4] = 6, there aren't any non-divisors.
Write a function:
function solution(A);
that, given an array A consisting of N integers, returns a sequence of
integers representing the amount of non-divisors.
Result array should be returned as an array of integers.
For example, given:
A[0] = 3
A[1] = 1
A[2] = 2
A[3] = 3
A[4] = 6
the function should return [2, 4, 3, 2, 0], as explained above.
Write an efficient algorithm for the following assumptions:
N is an integer within the range [1..50,000];
each element of array A is an integer within the range [1..2 * N].
Here is a way I can explain the above solution.
From the challenge description, it states each element of the array is within the range [1 .. 2*N], where N is the length of the array; this means that no element in the array can be bigger than 2*N.
So an array of counters is created with length 2*N + 1 (so the max index equals the max possible value in the array), and every entry is initialized to 0. Then, for each value that actually occurs in the given array, the counter at that index is incremented, so counters[v] ends up holding how many times the value v appears.
Now we go through all the elements of the given array. For each one we start by assuming every element is a non-divisor (nonDivisor = N) and subtract, for each divisor we find, the number of times that divisor occurs in the array according to our counters; what is left is the actual number of non-divisors. During the loop, when a divisor is a value that doesn't occur in our array, 0 is subtracted (remember our initialized values), and when a divisor does occur in our array, its occurrence count is subtracted. Divisors come in pairs (if i divides the number, so does number / i), which is why the inner loop only has to run up to the square root of the number and subtracts both counters[i] and counters[number / i]. This is done for every element in the array to get each of their non-divisor counts.
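To make that concrete, here is a small worked illustration of my own using the sample input:
const A = [3, 1, 2, 3, 6];                        // N = 5
const counters = Array(A.length * 2 + 1).fill(0); // length 11
for (const value of A) counters[value]++;
console.log(counters); // [0, 1, 1, 2, 0, 0, 1, 0, 0, 0, 0]  (the value 3 occurs twice)
// For A[0] = 3: start with nonDivisor = 5, find the divisor pair (1, 3),
// subtract counters[1] = 1 and counters[3] = 2, leaving 5 - 1 - 2 = 2,
// which matches the expected output [2, 4, 3, 2, 0].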
The solution you posted makes use of map, which is just a concise way of transforming arrays. A simple for loop can also be used and will be easier to understand. Here is a for loop variation of the solution above:
function solution(A) {
  const lenOfA = A.length
  const counters = Array(lenOfA*2 + 1).fill(0)
  for(let i = 0; i<lenOfA; i++) counters[A[i]]++;
  const arrayOfNondivisors = [];
  for(let i = 0; i < A.length; i++) {
    let nonDivisor = lenOfA
    for(let j = 1; j*j <= A[i]; j++) {
      if(A[i] % j !== 0) continue;
      nonDivisor -= counters[j];
      if(j*j !== A[i]) nonDivisor -= counters[A[i]/j]
    }
    arrayOfNondivisors.push(nonDivisor);
  }
  return arrayOfNondivisors;
}
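Either version can be sanity-checked against the example from the task description:
console.log(solution([3, 1, 2, 3, 6])); // [2, 4, 3, 2, 0]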
I have a function which takes an array of numbers as an argument. I want to return a new array with the products of each number except for the number at the current index.
For example, if arr had 5 indexes and we were creating the value for index 1, the numbers at index 0, 2, 3 and 4 would be multiplied.
Here is the code I have written:
function getProducts(arr) {
let products = [];
for(let i = 0; i < arr.length; i++) {
let product = 0;
for(let value in arr.values()) {
if(value != arr[i]) {
product *= value;
}
}
products.push(product);
}
return products;
}
getProducts([1, 7, 3, 4]);
// Output ➞ [0, 0, 0, 0]
// Expected output ➞ [84, 12, 28, 21]
As you can see, the desired output is not produced. I did some experimenting and it appears that the second for loop never actually runs, as any code I put inside its block does not execute:
function getProducts(arr) {
let products = [];
for(let i = 0; i < arr.length; i++) {
let product = 0;
for(let value in arr.values()) {
console.log('hello!');
if(value != arr[i]) {
product *= value;
}
}
products.push(product);
}
return products;
}
getProducts([1, 7, 3, 4]);
// Output ➞
// Expected Output ➞ 'hello!'
What is wrong with my code?
You could take the product of all numbers and divide it by the number at each index to get the product of all values except the current one.
function getProducts(array) {
var product = array.reduce((a, b) => a * b, 1);
return array.map(p => product / p);
}
console.log(getProducts([1, 7, 3, 4]));
A more reliable approach for an array that contains a zero. (If an array has more than one zero, all products are zero anyway.)
The approach below effectively replaces the value at the current index with one: i === j || b evaluates to true at the current index, and true multiplies as 1.
function getProducts(array) {
return array.map((_, i, a) => a.reduce((a, b, j) => a * (i === j || b), 1));
}
console.log(getProducts([1, 7, 0, 4]));
console.log(getProducts([1, 7, 3, 4]));
You simply have to change the in keyword to the of keyword. A for...in loop is not the same as a for...of loop.
arr.values() returns an iterator, which has to be iterated with the of keyword.
Also, since product starts at 0, all your multiplications will return 0; it needs to start at 1.
By the way, this code is prone to error, because you don't check the current index; you check whether the value you are multiplying is different from the current value. This will lead to a problem if the same number appears more than once in the array.
And, now talking about good practices, it is a bit inconsistent to iterate through the array the first time with a for(var i...) loop and the second time with a for...in/of loop.
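To make the for...in / for...of difference concrete (a small illustration of my own, not part of the fix below):
const demo = [10, 20, 30];
for (const k in demo.values()) console.log(k);  // never runs: the iterator object has no enumerable keys
for (const v of demo.values()) console.log(v);  // 10, 20, 30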
I've fixed the code for you:
function getProducts(arr) {
  let products = [];
  for(let i = 0; i < arr.length; i++) {
    let product = 1;
    for(let ii = 0; ii < arr.length; ii++) {
      if(i != ii) {
        product *= arr[ii];
      }
    }
    products.push(product);
  }
  return products;
}
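Called with the input from the question, the fixed version produces the expected output:
console.log(getProducts([1, 7, 3, 4])); // [84, 12, 28, 21]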
A better way to do that is to get the total product and use map() to divide the total by each value:
function getProducts(arr){
let total = arr.reduce((ac,a) => ac * a,1);
return arr.map(x => x === 0 ? total : total/x);
}
console.log(getProducts([1, 7, 3, 4]))
Explanation: replace the number at index i with 1 so it doesn't interfere with the multiplication. Also, apply fill on a copy of a, hence the [...a]:
console.log( [2,3,4,5].map( (n,i,a) => [...a].fill(1,i,i+1).reduce( (a,b) => a*b ) ) )
My approach to this problem is most likely flawed, but I'm so close to finishing the solution. Given the numbers 2 and 10, I must find the least common multiple of the two numbers and of all the numbers within their range.
(2,3,4,5,6,7,8,9,10)
I've created a function that returns the prime factors of every number and pushes them into an array. This is where I'm lost: I don't know how to reduce/filter out the excess prime factors.
I should end up multiplying 2*2*2*3*3*5*7, but filtering for unique numbers would leave me with 2*3*5*7, or with 2*3*2*5*2*3*7*2*3*2*5 if I filtered each number's factors before the array was flattened.
function smallestCommons(arr) {
// factorize a number function
function factorization(num) {
let primesArr = [];
// i is what we will divide the number with
for (let i = 2; i <= Math.sqrt(num); i++) {
// if number is divisible by i (with no remainder)
if (num % i === 0) {
// begin while loop that lasts as long as num is divisible by i
while (num % i === 0) {
// change the value of num to be it divided by i
num = num / i;
// push the prime number used to divide num
primesArr.push(i);
}
}
}
// if num is not the number 1 after the for loop
// push num to the array because it is also a prime number
if (num != 1) {
primesArr.push(num);
}
return primesArr;
}
// sort from lowest to highest
arr.sort((a,b) => a - b);
let range = [];
let primeFacts = [];
// push range of numbers to fullArr
for (let i = arr[0]; i <= arr[1]; i++) {
range.push(i);
}
console.log(range); // [2,3,4,5,6,7,8,9,10]
// loop for iterating through range numbers
for (let i = 0; i < range.length; i++) {
// push the prime factors of each range number
primeFacts.push(factorization(range[i]));
}
console.log(primeFacts);
// flatten the array, then return the product of numbers
return primeFacts
.reduce((newArray, arr) => newArray = [...newArray,...arr] ,[])
.reduce((product, num) => product *= num);
};
console.log(smallestCommons([2,10]));
OUTPUT
[ 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
[ [ 2 ],[ 3 ],[ 2, 2 ],[ 5 ],[ 2, 3 ],[ 7 ],[ 2, 2, 2 ],[ 3, 3 ],[ 2, 5 ] ]
3628800
How do I emulate this and add it in my code? -> [example of the table I want to emulate]
Take this as a table of prime factors and their degrees:
      2   3   5   7
 2    1
 3        1
 4    2
 5            1
 6    1   1
 7                1
 8    3
 9        2
10    1       1
For the LCM, take the largest degree in each column:
      3   2   1   1
Multiply those powers together; that's your answer:
2^3 * 3^2 * 5^1 * 7^1 = 2520
EXTENSION
To get the GCD, take the smallest degree in each column (including 0).
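For what it's worth, here is a rough sketch of that "largest degree per prime" idea in code (my own illustration of the table above, not the implementation from the question):
function lcmOfRange(from, to) {
  const maxDegree = {}; // prime -> highest exponent seen in any factorization
  for (let n = from; n <= to; n++) {
    let rest = n;
    for (let p = 2; p * p <= rest; p++) {
      let degree = 0;
      while (rest % p === 0) { rest /= p; degree++; }
      if (degree > (maxDegree[p] || 0)) maxDegree[p] = degree;
    }
    if (rest > 1) maxDegree[rest] = Math.max(maxDegree[rest] || 0, 1); // leftover prime factor
  }
  return Object.entries(maxDegree)
    .reduce((product, [p, d]) => product * Math.pow(Number(p), d), 1);
}
console.log(lcmOfRange(2, 10)); // 2520 = 2^3 * 3^2 * 5 * 7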
Do you have to use prime factorization, or do you just need the lowest common multiple? Because there is another, much more efficient way to compute it, based on the Euclidean algorithm. You can read up on it here:
http://www.programming-algorithms.net/article/42865/Least-common-multiple
http://www.programming-algorithms.net/article/43434/Greatest-common-divisor
(note: calculating the LCM via the Euclidean algorithm requires the GCD, but there's a Euclidean algorithm for that too)
Now, the above-mentioned algorithm works for two numbers, but I think (I haven't mathematically verified it though, so you need to check this yourself) you can just use left reduction.
The end result would look something like this:
var getLCM = (a, b) => {
  // one possible minimal implementation: Euclidean GCD, then LCM = a * b / GCD
  var getGCD = (x, y) => y === 0 ? x : getGCD(y, x % y);
  return a * b / getGCD(a, b);
};
var getTotalLCM = (numbers) => {
return numbers.reduce((totalLCM, next) => getLCM(totalLCM, next), 1);
}
var result = getTotalLCM([ 2, 3, 4, 5, 6, 7, 8, 9, 10 ]);
What getTotalLCM does here is calculate the LCM of 1 and 2 (1 because that's the initial accumulator value we passed to reduce()), which is of course 2. Then it calculates the LCM of 2 and 3, which is 6; then 6 and 4, which is 12; then 12 and 5, which is 60; then 60 and 6, which is still 60; and so on, ending at 2520 for the whole range. I think this is what you're looking for?
More on how reduce() works here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce
This is the solution I found. The factorization function now returns an object instead, with primes as keys and exponents as values. Through for...in loops and the reduce method, the LCM is output.
function smallestCommons(arr) {
// factorize a number function
function factorization(num) {
let primesObj = {};
// i is what we will divide the number with
for (let i = 2; i <= Math.sqrt(num); i++) {
// if number is divisible by i (with no remainder)
if (num % i === 0) {
let exponent = 0;
// begin while loop that lasts as long as num is divisible by i
while (num % i === 0) {
// change the value of num to be it divided by i
num = num / i;
exponent++;
// create key value pair where exponent is the value
primesObj[i] = exponent;
}
}
}
// if num is not the number 1 after the for loop
// push num to the array because it is also a prime number
if (num != 1) {
primesObj[num] = 1;
}
return primesObj;
}
// sort from lowest to highest
arr.sort((a,b) => a - b);
let range = [];
let primeFacts = [];
// push range of numbers to fullArr
for (let i = arr[0]; i <= arr[1]; i++) {
range.push(i);
}
console.log(range); // [2,3,4,5,6,7,8,9,10]
// loop for iterating through range numbers
for (let i = 0; i < range.length; i++) {
// push the prime factors of each range number
primeFacts.push(factorization(range[i]));
}
console.log(primeFacts);
// create a filtered object with only the largest key value pairs
let primeExponents = primeFacts.reduce((newObj,currObj)=> {
for (let prime in currObj) {
// create new key value pair when key value pair does not exist
if (newObj[prime] === undefined) {
newObj[prime] = currObj[prime];
}
// overwrite key value pair when current Object value is larger
else if (newObj[prime] < currObj[prime]) {
newObj[prime] = currObj[prime];
}
}
return newObj;
},{});
let finalArr = [];
// push appropriate amount of primes to arr according to exponent
for (let prime in primeExponents) {
for (let i = 1; i <= primeExponents[prime]; i++) {
finalArr.push(parseInt([prime]));
}
}
return finalArr.reduce((product, num) => product *= num);
};
console.log(smallestCommons([2,10]));
To go from the prime factorizations to the LCM, you need to count how many of each prime are needed at maximum. So for each factorization I would create a map of primes to counts. And I'd keep track of the highest number of each factor needed:
function lcm(primeFacts){
  var maxPrimes = {}; // stores the highest number of each prime factor required
  for(var i = 0; i < primeFacts.length; i++){
    var map = {};
    var factors = primeFacts[i];
    for(var j = 0; j < factors.length; j++){
      // check to see whether the factor already exists in the map
      if(map[factors[j]]) map[factors[j]]++;
      else map[factors[j]] = 1;
      // check to make sure the max count exists
      if(!maxPrimes[factors[j]]) maxPrimes[factors[j]] = 1;
      if(maxPrimes[factors[j]] < map[factors[j]])
        maxPrimes[factors[j]] = map[factors[j]];
    }
  }
Then once we have all the counts for each factor, we just multiply them out:
  var multiple = 1;
  for(var prime in maxPrimes){
    // use Math.pow here: ^ is bitwise XOR in JavaScript, not exponentiation
    multiple *= Math.pow(prime, maxPrimes[prime]);
  }
  return multiple;
}
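As a quick check of my own, calling it with the flattened factor lists printed by the question's code gives the expected value:
var primeFacts = [[2], [3], [2, 2], [5], [2, 3], [7], [2, 2, 2], [3, 3], [2, 5]];
console.log(lcm(primeFacts)); // 2520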
I would like to create a two-dimensional m x n array in JavaScript based on the number of columns, which is passed as an argument to my function; the rows would be created from another argument, which is an array.
What I look to achieve - Desired Result:
var arr = [0,1,2,3,4,5,6,7,8,9]
function TwoDimensionalArray(numRows, numCols) {
//Magic happens here!
}
TwoDimensionalArray(arr, 4);
As you can see, the desired result below is a 3 x 4 matrix (the last row is only partially filled):
[[0,1,2,3], [4,5,6,7],[8,9]]
The input size doesn't make the difference; the number of columns is the key, determining factor.
What I have currently - Not Desired Result:
var arr = [0,1,2,3,4,5,6,7,8,9,10,11,12,13]
function TwoDimensionalArray(numRows, numColumns) {
var twoD = [];
for (var row = 0; row < numRows.length; ++row) {
var cardIndex = numRows[row]
// console.log(numRows[row]);
var columns = [];
for(var j =0; j < numColumns; ++j) {
columns[j] = cardIndex;
}
twoD[cardIndex] = columns;
}
return twoD;
};
var matrixTwoD = TwoDimensionalArray(arr, 4);
console.log(matrixTwoD);
console.log(matrixTwoD[0][0]);
console.log(matrixTwoD[0][1]);
console.log(matrixTwoD[0][2]);
console.log(matrixTwoD[0][3]);
My current code creates an array that repeats each of the elements 4 times, up to the number 13, with a column size of 4: [[0,0,0,0], [1,1,1,1], ..., [13,13,13,13]].
Maybe I am doing something wrong in my for loop or not approaching the problem correctly, but anything to point me in the right direction to get the above desired result would help.
Bonus
Also, would anyone be kind enough to point me to additional resources on matrix algebra pertaining to this sort of problem, and anything in general that would help for self-study?
Thanks a bunch!
Keep it simple, slice the input Array into sections of numCols length
function TwoDimensionalArray(arr, numCols) {
var arr2d = [],
i;
if (numCols) // safety first!
for (i = 0; i < arr.length; i += numCols)
arr2d.push(arr.slice(i, i + numCols));
return arr2d;
}
if (numCols) prevents an infinite loop in the case numCols was not provided or is 0
for (i = 0; i < arr.length; i += numCols) counts up from 0 in steps of numCols, e.g. i = 0, 4, 8, 12, ... until we reach a number greater than or equal to arr.length
arr.slice(i, i + numCols) creates a sub-Array of arr starting from (including) index i and ending at (excluding) index i + numCols, i.e. we get an Array of up to numCols items starting with the item at index i of arr (the last slice may be shorter)
arr2d.push appends a new item to the end of arr2d
Putting all these together, we can see that we are building a new Array arr2d from sections of the Array arr
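For example, with the asker's data (a quick usage check of the function above):
var arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
console.log(TwoDimensionalArray(arr, 4)); // [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]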
Calculate the number of rows required, then use the slice method of the array:
start index = numColumns * i
end index = numColumns * (i + 1)
var arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
function TwoDimensionalArray(numRows, numColumns) {
var columns = [];
for (var i = 0; i !== Math.ceil(numRows.length / numColumns); i++) {
columns[i] = numRows.slice((numColumns * i), numColumns * (i + 1))
//console.log((numColumns * i) + " " +numColumns * (i + 1))
}
return columns;
};
var matrixTwoD = TwoDimensionalArray(arr, 4);
console.log(matrixTwoD);
console.log(matrixTwoD[0][0]);
console.log(matrixTwoD[0][1]);
console.log(matrixTwoD[0][2]);
console.log(matrixTwoD[0][3]);