JavaScript for loop performance issue

What is the best way to loop over a collection of data? The problem I am facing is low performance. See the example code snippets; the following two methods have the performance issue.
interface Day {
  date: number;
  disabled: boolean;
}

// sample data
const monthDays: Day[] = Array.from({ length: 30 }, (v: unknown, k: number) => ({ date: k + 1, disabled: false }));
const disabledDates: number[] = Array.from({ length: 30 }, (v, k) => k + 1);

// set disabled dates
// method 1
let counter = 0;
for (let day of monthDays) {
  day.disabled = disabledDates.some(d => d === day.date);
  counter++;
}
console.log(counter); // logs 30

// method 2
counter = 0;
for (let day of monthDays) {
  for (let date of disabledDates) {
    counter++;
    if (day.date === date) {
      day.disabled = true;
      break;
    }
  }
}
console.log(counter); // logs 465
In the app I am working on, I need to iterate over the array 3 to 4 times, which results in low performance. Can anyone suggest the best way to loop?

Instead of checking each element again with .some, you can spread your values into a Set and check with set.has() whether it's inside, which is much faster.
Your time complexity drops from O(n^2) to O(n).
// sample data
let monthDays = Array.from({ length: 10000 }, (v, k) => ({ date: k + 1, disabled: false }));
const disabledDates = Array.from({ length: 10000 }, (v, k) => k + 1);

// set disabled dates
// method 1
let start = performance.now();
for (let day of monthDays) {
  day.disabled = disabledDates.some(d => d === day.date);
}
console.log(performance.now() - start);

// reset
monthDays = Array.from({ length: 10000 }, (v, k) => ({ date: k + 1, disabled: false }));

start = performance.now();
let set = new Set([ ...disabledDates ]);
for (let day of monthDays) {
  if (set.has(day.date)) {
    day.disabled = true;
  } else {
    day.disabled = false;
  }
}
console.log(performance.now() - start);

You have the exact same performance characteristics with both method 1 and method 2; however, your way of measuring them is flawed. With method 2, you count the outer and inner iterations made by the code, while with method 1, you only count the outer iterations. However, .some() still does an iteration over the data:
// sample data
const monthDays = Array.from({ length: 30 }, (v, k) => ({ date: k + 1, disabled: false }));
const disabledDates = Array.from({ length: 30 }, (v, k) => k + 1);

// set disabled dates
// method 1
let counter = 0;
for (let day of monthDays) {
  day.disabled = disabledDates.some(d => {
    counter++;
    return d === day.date;
  });
  counter++;
}
console.log(counter); // logs 495
Since you have two nested iterations, both methods exhibit an O(n*m) time complexity, where n is the length of monthDays and m is the length of disabledDates. This is basically quadratic complexity for close values of n and m and actually quadratic for n = m (as is the example).
The way to correct that is to eliminate the nested loops and only process the data once. This is easily achievable by first pre-computing a Set object that contains all disabledDates; that way you don't have to loop over an array just to check whether a value exists, and can use Set#has instead.
// sample data
const monthDays = Array.from({ length: 30 }, (v, k) => ({ date: k + 1, disabled: false }));
const disabledDates = new Set(Array.from({ length: 30 }, (v, k) => k + 1));

for (const day of monthDays) {
  day.disabled = disabledDates.has(day.date);
}
console.log(monthDays);
The one-time conversion to a Set has a complexity of O(m), and then each loop over the data is only an O(n) iteration.
If you can create the Set once, then repeatedly iterating monthDays doesn't need to include that conversion, so the complexity of each pass is O(n).
If you need to create the Set every time before you loop, the complexity becomes O(n+m), which is still better than quadratic and essentially linear.
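Since the app iterates the array 3 to 4 times, the Set can also be hoisted out of the repeated passes; a minimal sketch, assuming disabledDates is the original array of numbers and the disabled dates don't change between passes:
const disabledSet = new Set(disabledDates); // built once: O(m)
for (let pass = 0; pass < 4; pass++) {      // each pass is then O(n)
  for (const day of monthDays) {
    day.disabled = disabledSet.has(day.date);
  }
}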
As a note here, this assumes that .has() is an O(1) operation. This is a reasonable assumption that probably holds true most of the time. However, technically, that's not guaranteed - the specification only requires sub-linear complexity for set operations, so you might get O(log n) complexity, which means that iterating over monthDays and looking up disabledDates would be O(n * log m). Even then, using a Set is probably the easiest optimisation path. If performance is still a problem, then perhaps generating an object as a lookup table could be better:
// this should be generated
const lookup: Record<number, true> = {
  1: true,
  2: true,
  3: true,
  /*...*/
}

/*...*/

// fetch from lookup or set `false` if not in the lookup
const disabled: boolean = lookup[day.date] ?? false;
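For completeness, one way to generate that lookup from the disabledDates array (a small sketch, not part of the original snippet):
// build the lookup table once from the disabledDates array
const lookup: Record<number, true> = {};
for (const d of disabledDates) lookup[d] = true;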

Related

The most performant way of finding an item in an array within a range of indexes

I'm looking for something like the built-in Array.prototype.find(), but I want to be able to search starting from a given index, without creating a new shallow copy of a range of items from this array.
Possibilities:
using Array.prototype.find with no starting index.
using Array.prototype.slice() and then .find, something like arr.slice(startIndex).find(...), which creates a shallow copy.
writing my own find implementation using a for loop that starts looking from the given index.
using lodash.find(), but I care about bundle size and lodash is quite heavy; I'd actually prefer avoiding any kind of third-party packages.
Here are some performance test results:
const SIZE = 1000000
const START_I = 800000
const ItemVal = 900000
...
find: average time: 12.1ms
findSlice: average time: 2.48ms
findLodash: average time: 0.26ms
myFind: average time: 0.26ms
Surprisingly enough, it seems that the native .find performed worse even with a starting index of 0:
const SIZE = 1000000
const START_I = 0
const ItemVal = 900000
...
find: average time: 12.61ms
findSlice: average time: 17.51ms
findLodash: average time: 1.93ms
myFind: average time: 2.17ms
For an array of size 1000000, a starting index of 0, and the correct position at 900000, Array.prototype.find() performed at 12.61ms vs 2.17ms for the simple for loop search(!). Am I missing something?
The test code is:
const { performance } = require('perf_hooks');
const _ = require('lodash');

const SIZE = 1000000
const START_I = 0
const ItemVal = 900000

let arr = Array(SIZE).fill(0)
arr = arr.map((a, i) => i)

const myFind = (arr, func, startI) => {
  for (let i = startI; i < arr.length; i++) {
    if (func(arr[i])) return arr[i]
  }
  return -1
}

const functions = {
  find: () => arr.find(a => a === ItemVal), // looking at all array - no starting index
  findSlice: () => arr.slice(START_I).find(a => a === ItemVal),
  findLodash: () => _.find(arr, a => a === ItemVal, START_I),
  myFind: () => myFind(arr, a => a === ItemVal, START_I),
}

const repeat = 100
const test_find = () => {
  for (let [name, func] of Object.entries(functions)) {
    let totalTime = 0
    for (let i = 0; i < repeat; i++) {
      let t_current = performance.now()
      func()
      totalTime += performance.now() - t_current
    }
    console.log(`${name}: average time: ${Math.round(totalTime / repeat * 100) / 100}ms`)
  }
}
test_find()
What is the best way to find an item in an array, starting the search from a given index? And how does Array.prototype.find perform worse than my own simple for loop implementation of find?

How to find all non-repeating combinations from specific arrays using JavaScript

I am a JavaScript beginner, and I have a personal project to find all the possible non-repeating combinations from specific arrays.
Suppose I have 3 sets of products and 10 styles in each set, like this array:
[1,2,3,4..10,1,2,4...8,9,10]
①②③④⑤⑥⑦⑧⑨⑩
①②③④⑤⑥⑦⑧⑨⑩
①②③④⑤⑥⑦⑧⑨⑩
total array length = 30
I plan to randomly separate it into 5 children, but they can't repeat the same product style.
OK result:
①②③④⑤⑥ ✔
②③④⑤⑥⑦ ✔
①②③⑧⑨⑩ ✔
④⑥⑦⑧⑨⑩ ✔
①⑤⑦⑧⑨⑩ ✔
Every child can be evenly assigned products that are not duplicated.
NG:
①②③④⑤⑥ ✔
①②③④⑤⑦ ✔
①②⑥⑧⑨⑩ ✔
③④⑤⑦⑧⑨ ✔
⑥⑦⑧⑨⑩⑩ ✘ (because number 10 repeated)
My solution is to randomly assign 5 sets of arrays, then use "new Set(myArray[i]).size" to check whether the sum of the sizes is 30 or not, inside a do...while: while the sum is not 30, repeat the random assignment until the result has no duplicates.
like this:
function splitArray() {
  do {
    var productArray = [1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10]; // just for a sample; actually this array will be more than 30
    var productPerChildArray = [];
    for (var i = 0; i < 5; i++) {
      var GroupNum = [];
      for (var v = 0; v < 6; v++) {
        var selectNum = productArray[Math.floor(Math.random() * productArray.length)];
        GroupNum.push(selectNum);
        productArray = removeItemOnce(productArray, selectNum);
      }
      productPerChildArray.push(GroupNum);
    }
  } while (checkIfArrayIsUnique(productPerChildArray));
  return productPerChildArray;
}

//---------check repeat or not----------
function checkIfArrayIsUnique(myArray) {
  var countRight = 0;
  for (var i = 0; i < myArray.length; i++) {
    countRight += new Set(myArray[i]).size;
  }
  return (countRight != 5 * 6);
}

//----------update productArray status----------
function removeItemOnce(arr, value) {
  var index = arr.indexOf(value);
  if (index > -1) {
    arr.splice(index, 1);
  }
  return arr;
}

console.log(splitArray());
This seems to solve the problem, but the actual productArray is not necessarily 30 elements, and this solution spends too much time retrying until it hits a non-repeating combination. Low efficiency.
I believe there is a better solution to this problem than my idea.
Any help would be greatly appreciated.
My approach would be: just place the next number into a randomly selected array - of course, filtering out the arrays that already contain that number.
// the beginning dataset
const data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

// the size of the groups to be formed (must divide the
// size of beginning dataset without a remainder)
const ARR_LENGTH = 6

// creating the array that will be filled randomly
const getBaseArray = ({ data, arrLength }) => {
  const num = parseInt(data.length / arrLength, 10)
  return Array.from(Array(num), () => [])
}

// filter arrays that satisfy conditions
const conditions = ({ subArr, val }) => {
  if (subArr.includes(val)) return false
  if (ARR_LENGTH <= subArr.length) return false
  return true
}

const getArraysFiltered = ({ val, arr }) => {
  return arr
    .map((e, i) => ({
      origIdx: i,
      subArr: e,
    }))
    .filter(({ subArr }) => conditions({ subArr, val }))
}

// select a random array from a list of arrays
const getRandomArrIdx = ({ arr }) => {
  return Math.floor(Math.random() * arr.length)
}

// get the original array index from the filtered values
const getArrIdx = ({ val, arr }) => {
  const filtered = getArraysFiltered({ val, arr })
  if (!filtered.length) return -1
  const randomArrIdx = getRandomArrIdx({ arr: filtered })
  return filtered[randomArrIdx].origIdx
}

const getFinalArr = ({ data }) => {
  // short circuit: if the data cannot be placed in
  // arrays evenly, then it's a mistake (based on
  // the current ruleset)
  if (data.length % ARR_LENGTH) return [false]
  // creating the array that will hold the numbers
  const arr = getBaseArray({ data, arrLength: ARR_LENGTH })
  let i = 0;
  for (i; i < data.length; i++) {
    const idx = getArrIdx({
      val: data[i],
      arr,
    })
    // if there's no place that the next number could be
    // placed, then break (prematurely), so the algorithm
    // can be restarted as early as possible
    if (idx === -1) break;
    arr[idx].push(data[i])
  }
  if (i < data.length) {
    // restart algorithm if we couldn't place
    // all the numbers in the dataset
    return getFinalArr({ data })
  } else {
    return arr
  }
}

// constructing the final array of arrays & logging them:
console.log('res', getFinalArr({ data }).join(' '))
console.log('res', getFinalArr({ data }).join(' '))
console.log('res', getFinalArr({ data }).join(' '))
console.log('res', getFinalArr({ data }).join(' '))
I don't know if it's more efficient, but I found this question interesting. Now, the algorithm is:
- broken down into small logical steps that are easy to follow
- simple to tweak to needs
- works with all sizes of data (if the groups to be made can be filled equally)

linking two objects, value to value, and filling in any missing values in the sequence

I am trying to merge two objects into one array, and they must be linked by their values. The objects are 'Day' and 'Count', where Count is the count of the number of events that happen on a day.
Example Data
{"Day":[3,8,9,17,18,21,25,27,30,31],
"Count":[1,3,1,1,1,4,2,2,2,1]}
I can do that with this function:
var data = {{loanDates.data}}; // this query has two objects, 'Day' and 'Count', where Count is the number of events that happened on each day
return data.Day.map(function(day, key) {
  var count = data.Count[key];
  return {
    day, count
  };
});
result
[{"day":3,"count":1},
{"day":8,"count":3},
{"day":9,"count":1},
{"day":17,"count":1},
{"day":18,"count":1},
{"day":21,"count":4},
{"day":25,"count":2},
{"day":27,"count":2},
{"day":30,"count":2},
{"day":31,"count":1}]
This returns the correct things; however, I want all the days of the month. So, for instance, if the 10th day of the month doesn't exist in the object, I need a 10th day created with its count set to 0.
Outcome required
[{"day":1,"count":0},
{"day":2,"count":0},
{"day":3,"count":1},
{"day":4,"count":0},
{"day":5,"count":0},
{"day":6,"count":0},
{"day":7,"count":0},
{"day":8,"count":3},
{"day":9,"count":1},
{"day":10,"count":0},
{..................},
{"day":31,"count":0},]
I'd make an object mapping each Day to its Count from the input data, then give it keys 1-31 if it doesn't have them already. Afterwards, you can use Object.entries to map each entry to a value in the array:
const input = {
  "Day": [3, 8, 9, 17, 18, 21, 25, 27, 30, 31],
  "Count": [1, 3, 1, 1, 1, 4, 2, 2, 2, 1]
};

const countsByDay = {};
input.Day.forEach((day, i) => {
  countsByDay[day] = input.Count[i];
});
for (let i = 1; i <= 31; i++) {
  countsByDay[i] = countsByDay[i] || 0;
}
const output = Object.entries(countsByDay).map(
  // Object.entries yields string keys, so convert day back to a number
  ([day, count]) => ({ day: Number(day), count })
);
console.log(output);
Use Array.from() to generate entries for all 31 days with count: 0, spread them into an object keyed by day, override with entries for the days that have count values, and take the Object.values:
const data = {
  "Day": [3, 8, 9, 17, 18, 21, 25, 27, 30, 31],
  "Count": [1, 3, 1, 1, 1, 4, 2, 2, 2, 1]
}

const result = Object.values({
  ...Object.fromEntries(Array.from({ length: 31 }, (_, i) => [i + 1, { day: i + 1, count: 0 }])),
  ...Object.fromEntries(data.Day.map((day, i) => [day, { day, count: data.Count[i] }]))
})
console.log(result)

How can I easily measure the complexity of a JSON object?

If I want to compare a range of API responses for the complexity of the response (as a proxy for how much effort it likely takes to parse and validate the response), is there any existing tool or library that can do that pretty effectively? Or a simple bit of code?
Ideally something that prints out a quick report showing how deep and wide the whole structure is, along with any other metrics that might be useful.
A heuristic is to simply count the number of {, }, [, and ] characters. Of course this is only a heuristic; under this method a json object like { value: "{[}{}][{{}{}]{}{}{}[}}{}{" } would be considered overly complex even though its structure is very straightforward.
let guessJsonComplexity = (json, chars = '{}[]') => {
  let count = 0;
  // iterate the characters of the json string (for...of, not for...in)
  for (let char of json) if (chars.includes(char)) count++;
  return count / (json.length || 1);
};
You would go with this answer if speed is very important.
You'll almost certainly need to parse the json if you want a more precise answer!
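For example (a quick usage sketch; the score is the density of structural characters in the string):
guessJsonComplexity(JSON.stringify({ a: [1, 2, { b: 3 }] })); // ~0.32 (6 structural chars / 19)
guessJsonComplexity(JSON.stringify('just a plain string'));   // 0 (no structural chars)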
We could also consider another approach. Consider assigning a "complexity score" for every possible phenomenon that can happen in json. For example:
A string s is included; complexity score: Math.log(s.length)
A number n is included; complexity score: Math.log(n)
A boolean is included; complexity score: 1
An array is included; complexity score: average complexity of elements + 1
An object is included; complexity score: average complexity of values plus average complexity of keys + 1
We could even pick out distinct relationships, like "an object is included in an array", or "an array is included in an array", etc, if we want to consider some of these as being more "complicated" than others. We could say, for example, that negative numbers are twice as "complicated" as positive numbers, if that's how we feel.
We can also consider a "depth factor", which makes elements count for more the deeper they go.
If we define how to score all these phenomena, we can write a function that processes json and applies such a score:
let isType = (val, Cls) => val != null && val.constructor === Cls;
let getComplexity = (json, d = 1.05) => {
  // Here `d` is our "depth factor"
  return d * (() => {
    // Take the log of the length of a String
    if (isType(json, String)) return Math.log(json.length);
    // Take the log of (the absolute value of) any Number
    if (isType(json, Number)) return Math.log(Math.abs(json));
    // Booleans always have a complexity of 1
    if (isType(json, Boolean)) return 1;
    // Arrays are 1 + (average complexity of their child elements)
    if (isType(json, Array)) {
      let avg = json.reduce((o, v) => o + getComplexity(v, d), 0) / (json.length || 1);
      return avg + 1;
    }
    // Objects are 1 + (average complexity of their keys) + (average complexity of their values)
    if (isType(json, Object)) {
      // `getComplexity` for Arrays will add 1 twice, so subtract 1 to compensate
      return getComplexity(Object.keys(json), d) + getComplexity(Object.values(json), d) - 1;
    }
    throw new Error(`Couldn't get complexity for ${json.constructor.name}`);
  })();
};

console.log('Simple:', getComplexity([ 'very', 'simple' ]));
console.log('Object:', getComplexity({
  i: 'am',
  some: 'json',
  data: 'for',
  testing: 'purposes'
}));
console.log('Complex:', getComplexity([
  [ 111, 222, 333, 444 ],
  [ 'abc', 'def', 'ghi', 'jkl' ],
  [ [], [], {}, {}, 'abc', true, false ]
]));
console.log('Deep:', getComplexity([[[[[[ 'hi' ]]]]]]));
If you want to know more detailed information on children of a large json object, you can simply call getComplexity on those children as well.
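For instance (a small sketch; bigJson is a hypothetical parsed object standing in for your data):
// score each top-level key of `bigJson` separately
for (const [key, child] of Object.entries(bigJson)) {
  console.log(key, getComplexity(child));
}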
I'm using arbitrary values, but this is just to give you a starting point.
var data1 = { "a": { "b": 2 }, "c": [{}, {}, { "d": [1, 2, 3] }] }
var data2 = { "a": { "b": 2 }, "c": [{ "x": "y", "z": [0,1,2,3,4,5,6,7,8,9] }, {}, { "d": [1, 2, 3] }] }

function chkComplexity(obj) {
  let complexity = 0;
  let depth = 1;
  (function calc(obj) {
    for (const key of Object.keys(obj)) {
      if (typeof obj[key] !== "object") complexity += depth
      if (Array.isArray(obj)) {
        depth++
        complexity += depth * 2
        for (const item of obj) {
          calc(item)
        }
      }
      if (typeof obj[key] === "object") {
        depth++
        complexity += depth * 3
        calc(obj[key])
      }
    }
  })(obj);
  return complexity;
}

console.log(chkComplexity(data1));
console.log(chkComplexity(data2));

How to call reduce on an array of objects to sum their properties?

Say I want to sum a.x for each element in arr.
arr = [ { x: 1 }, { x: 2 }, { x: 4 } ];
arr.reduce(function(a, b){ return a.x + b.x; }); // => NaN
I have cause to believe that a.x is undefined at some point.
The following works fine
arr = [ 1, 2, 4 ];
arr.reduce(function(a, b){ return a + b; }); // => 7
What am I doing wrong in the first example?
A cleaner way to accomplish this is by providing an initial value as the second argument to reduce:
var arr = [{x:1}, {x:2}, {x:4}];
var result = arr.reduce(function (acc, obj) { return acc + obj.x; }, 0);
console.log(result); // 7
The first time the anonymous function is called, it gets called with (0, {x: 1}) and returns 0 + 1 = 1. The next time, it gets called with (1, {x: 2}) and returns 1 + 2 = 3. It's then called with (3, {x: 4}), finally returning 7.
This also handles the case where the array is empty, returning 0.
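For instance (a quick check of the empty case; without the initial value, reduce on an empty array throws instead):
[].reduce((acc, obj) => acc + obj.x, 0); // => 0
[].reduce((acc, obj) => acc + obj.x);    // => TypeError: Reduce of empty array with no initial value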
After the first iteration you're returning a number and then trying to get property x of it to add to the next object; that property is undefined, and maths involving undefined results in NaN.
Try returning an object containing an x property with the sum of the x properties of the parameters:
var arr = [{x:1},{x:2},{x:4}];
arr.reduce(function (a, b) {
  return {x: a.x + b.x}; // returns object with property x
});

// ES6
arr.reduce((a, b) => ({x: a.x + b.x}));
// -> {x: 7}
Explanation added from comments:
The return value of each iteration of [].reduce is used as the a variable in the next iteration.
Iteration 1: a = {x:1}, b = {x:2}, {x: 3} is assigned to a in iteration 2
Iteration 2: a = {x:3}, b = {x:4}.
The problem with your example is that you're returning a number literal.
function (a, b) {
  return a.x + b.x; // returns a number literal
}
Iteration 1: a = {x:1}, b = {x:2}, returns 3, which becomes a in the next iteration
Iteration 2: a = 3, b = {x:4}, returns NaN
A number literal 3 does not (typically) have a property called x, so a.x is undefined, and undefined + b.x returns NaN; NaN + <anything> is always NaN.
Clarification: I prefer my method over the other top answer in this thread, as I disagree with the idea that passing an optional parameter to reduce with a magic number to get out a number primitive is cleaner. It may result in fewer lines written, but imo it is less readable.
TL;DR, set the initial value
Using destructuring
arr.reduce( ( sum, { x } ) => sum + x , 0)
Without destructuring
arr.reduce( ( sum , cur ) => sum + cur.x , 0)
With Typescript
arr.reduce( ( sum, { x } : { x: number } ) => sum + x , 0)
Let's try the destructuring method:
const arr = [ { x: 1 }, { x: 2 }, { x: 4 } ]
const result = arr.reduce( ( sum, { x } ) => sum + x , 0)
console.log( result ) // 7
The key to this is setting the initial value. The return value becomes the first parameter of the next iteration.
The technique used in the top answer is not idiomatic
The accepted answer proposes NOT passing the "optional" value. This is wrong, as the idiomatic way is for the second parameter always to be included. Why? Three reasons:
1. Dangerous
-- Not passing in the initial value is dangerous and can create side-effects and mutations if the callback function is careless.
Behold
const badCallback = (a,i) => Object.assign(a,i)
const foo = [ { a: 1 }, { b: 2 }, { c: 3 } ]
const bar = foo.reduce( badCallback ) // bad use of Object.assign
// Look, we've tampered with the original array
foo // [ { a: 1, b: 2, c: 3 }, { b: 2 }, { c: 3 } ]
If however we had done it this way, with the initial value:
const bar = foo.reduce( badCallback, {})
// foo is still OK
foo // [ { a: 1 }, { b: 2 }, { c: 3 } ] - untouched
For the record, unless you intend to mutate the original object, set the first parameter of Object.assign to an empty object. Like this: Object.assign({}, a, b, c).
2 - Better Type Inference
-- When using a tool like TypeScript or an editor like VS Code, you get the benefit of telling the compiler the initial value, and it can catch errors if you're doing it wrong. If you don't set the initial value, in many situations it might not be able to guess and you could end up with creepy runtime errors.
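A minimal TypeScript sketch of that point (the names are illustrative):
const arr = [ { x: 1 }, { x: 2 }, { x: 4 } ];
// with the initial value 0, the accumulator is inferred as number
const total = arr.reduce( ( sum, { x } ) => sum + x, 0 );
// without the 0, the accumulator would be inferred as { x: number },
// and `sum + x` would not type-check inside the callback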
3 - Respect the Functors
-- JavaScript shines best when its inner functional child is unleashed. In the functional world, there is a standard on how you "fold" or reduce an array. When you fold or apply a catamorphism to the array, you take the values of that array to construct a new type. You need to communicate the resulting type--you should do this even if the final type is that of the values in the array, another array, or any other type.
Let's think about it another way. In JavaScript, functions can be passed around like data; this is how callbacks work. What is the result of the following code?
[1,2,3].reduce(callback)
Will it return a number? An object? This makes it clearer:
[1,2,3].reduce(callback,0)
Read more on the functional programming spec here: https://github.com/fantasyland/fantasy-land#foldable
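To illustrate "communicating the resulting type" (a small sketch, not from the original answer): folding numbers into a string, with the result type announced by the initial value:
// the '' initial value announces that this fold produces a string
const csv = [1, 2, 3].reduce((acc, n) => acc === '' ? String(n) : acc + ',' + n, '');
// => "1,2,3"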
Some more background
The reduce method takes two parameters,
Array.prototype.reduce( callback, initialItem )
The callback function takes the following parameters
(accumulator, itemInArray, indexInArray, entireArray) => { /* do stuff */ }
For the first iteration,
If initialItem is provided, the reduce function passes the initialItem as the accumulator and the first item of the array as the itemInArray.
If initialItem is not provided, the reduce function passes the first item in the array as the initialItem and the second item in the array as itemInArray, which can be confusing behavior.
I teach and recommend always setting the initial value of reduce.
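A quick demonstration of the two behaviors (a sketch):
[10, 20, 30].reduce((acc, item) => acc + item);    // acc starts as 10, item starts at 20
[10, 20, 30].reduce((acc, item) => acc + item, 0); // acc starts as 0, item starts at 10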
You can check out the documentation at:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce
Others have answered this question, but I thought I'd toss in another approach. Rather than go directly to summing a.x, you can combine a map (from a.x to x) and reduce (to add the x's):
arr = [{x:1},{x:2},{x:4}]
arr.map(function(a) { return a.x; })
   .reduce(function(a, b) { return a + b; });
Admittedly, it's probably going to be slightly slower, but I thought it worth mentioning it as an option.
To formalize what has been pointed out, a reducer is a catamorphism which takes two arguments which may be the same type by coincidence, and returns a type which matches the first argument.
function reducer (accumulator: X, currentValue: Y): X { }
That means that the body of the reducer needs to be about converting currentValue and the current value of the accumulator to the value of the new accumulator.
This works in a straightforward way, when adding, because the accumulator and the element values both happen to be the same type (but serve different purposes).
[1, 2, 3].reduce((x, y) => x + y);
This just works because they're all numbers.
[{ age: 5 }, { age: 2 }, { age: 8 }]
.reduce((total, thing) => total + thing.age, 0);
Now we're giving a starting value to the aggregator. The starting value should be the type that you expect the aggregator to be (the type you expect to come out as the final value), in the vast majority of cases.
While you aren't forced to do this (and shouldn't be), it's important to keep in mind.
Once you know that, you can write meaningful reductions for other n:1 relationship problems.
Removing repeated words:
const skipIfAlreadyFound = (words, word) => words.includes(word)
  ? words
  : words.concat(word);
const deduplicatedWords = aBunchOfWords.reduce(skipIfAlreadyFound, []);
Providing a count of all words found:
const incrementWordCount = (counts, word) => {
  counts[word] = (counts[word] || 0) + 1;
  return counts;
};
const wordCounts = words.reduce(incrementWordCount, { });
Reducing an array of arrays, to a single flat array:
const concat = (a, b) => a.concat(b);
const numbers = [
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9]
].reduce(concat, []);
Any time you're looking to go from an array of things, to a single value that doesn't match a 1:1, reduce is something you might consider.
In fact, map and filter can both be implemented as reductions:
const map = (transform, array) =>
  array.reduce((list, el) => list.concat(transform(el)), []);

const filter = (predicate, array) => array.reduce(
  (list, el) => predicate(el) ? list.concat(el) : list,
  []
);
I hope this provides some further context for how to use reduce.
The one addition to this, which I haven't broken into yet, is when there is an expectation that the input and output types are specifically meant to be dynamic, because the array elements are functions:
const compose = (...fns) => x =>
  fns.reduceRight((x, f) => f(x), x);

const hgfx = h(g(f(x)));
const hgf = compose(h, g, f);

const hgfy = hgf(y);
const hgfz = hgf(z);
For the first iteration, 'a' will be the first object in the array, hence a.x + b.x will return 1+2, i.e. 3.
In the next iteration, the returned 3 is assigned to a; a is a number now, so a.x is undefined and the addition gives NaN.
A simple solution is first mapping the numbers in the array and then reducing them, as below:
arr.map(a => a.x).reduce(function(a, b){ return a + b })
Here arr.map(a => a.x) provides an array of numbers [1,2,4]; now .reduce(function(a, b){ return a + b }) will simply add these numbers without any hassle.
Another simple solution is just to provide an initial sum of zero as the second argument, as below:
arr.reduce(function(a, b){ return a + b.x }, 0)
At each step of your reduce, you aren't returning a new {x:???} object. So you either need to do:
arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(a,b){return a + b.x})
or you need to do
arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(a,b){return {x: a.x + b.x}; })
If you have a complex object with a lot of data, like an array of objects, you can take a step by step approach to solve this.
For example:
const myArray = [{ id: 1, value: 10}, { id: 2, value: 20}];
First, you should map your array into a new array of your interest; in this example, a new array of values.
const values = myArray.map(obj => obj.value);
This callback function returns a new array containing only the values from the original array, which is stored in the values const. Now your values const is an array like this:
values = [10, 20];
And now you are ready to perform your reduce:
const sum = values.reduce((accumulator, currentValue) => { return accumulator + currentValue; }, 0);
As you can see, the reduce method executes the callback function multiple times. Each time, it takes the current value of the item in the array and sums it with the accumulator. So to properly sum it you need to set the initial value of your accumulator as the second argument of the reduce method.
Now you have your new const sum with the value of 30.
I did it in ES6 with a little improvement:
arr.reduce((a, b) => ({x: a.x + b.x})).x
which returns a number.
In the first step it will work fine, as the value of a.x is 1 and that of b.x is 2, so 1+2 is returned. In the next step, the value of a becomes the return value from step 1, i.e. 3, so a.x is undefined, and undefined + anyNumber is NaN; that is why you are getting that result.
Instead you can try this by giving an initial value of zero, i.e.:
arr.reduce(function(a, b){ return a + b.x }, 0);
I used to encounter this in my development; what I do is wrap my solution in a function to make it reusable in my environment, like this:
const sumArrayOfObject = (array, prop) => array.reduce((sum, n) => { return sum + n[prop] }, 0)
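Usage with the array from the question (a quick sketch):
sumArrayOfObject([ { x: 1 }, { x: 2 }, { x: 4 } ], 'x') // => 7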
Just my 2 cents on setting a default value with object literal.
let arr = [{
  duration: 1
}, {
  duration: 3
}, {
  duration: 5
}, {
  duration: 6
}];

const out = arr.reduce((a, b) => {
  return {
    duration: a.duration + b.duration
  };
}, {
  duration: 0
});

console.log(out);
let temp = [{x:1},
  {x:2},
  {x:3},
  {x:4}];
let sum = temp.map(element => element.x).reduce((a, b) => a + b, 0)
console.log(sum);
We can use this way to get the sum of x.
Output: 10
The reduce function iterates over a collection:
arr = [{x:1},{x:2},{x:4}] // is a collection
arr.reduce(function(a,b){return a.x + b.x})
translates to:
arr.reduce(
  // for each index in the collection, this callback function is called
  function (
    a, // a = accumulator; during each callback, the value of the accumulator is passed inside the variable "a"
    b, // currentValue; for example, currentValue is {x:1} in the 1st callback
    currentIndex,
    array
  ) {
    return a.x + b.x;
  },
  initialValue // the starting accumulator; the value returned by the last callback is returned at the end of the arr.reduce call
);
During each callback, the value of the "accumulator" is passed into the "a" parameter of the callback function. If we don't provide an initial value, the first array element is used as the accumulator and iteration starts at the second element; as soon as a callback returns a number, a.x becomes undefined and undefined + b.x gives NaN.
To solve this, initialize the accumulator with the value 0, as Casey's answer showed above.
To understand the ins and outs of the "reduce" function, I would suggest you look at its source code.
The Lodash library has a reduce function which works exactly the same as the "reduce" function in ES6.
Here is the link:
reduce source code
To return a sum of all x props:
arr.reduce(
  (a, b) => (a.x || a) + b.x
)
This works because on the first call a is {x:1}, so a.x is 1; on later calls a is the running total (a number), so a.x is undefined and || falls back to a. (Note this would break if an x could be 0.)
You can use the reduce method as below; if you change the 0 (zero) to 1 or another number, it is added to the total. For example, this example gives the total as 31, whereas if we change 0 to 1, the total will be 32.
const batteryBatches = [4, 5, 3, 4, 4, 6, 5];
let totalBatteries = batteryBatches.reduce((acc, val) => acc + val, 0)
function aggregateObjectArrayByProperty(arr, propReader, aggregator, initialValue) {
  const reducer = (a, c) => {
    return aggregator(a, propReader(c));
  };
  return arr.reduce(reducer, initialValue);
}

const data = [{a: 'A', b: 2}, {a: 'A', b: 2}, {a: 'A', b: 3}];

let sum = aggregateObjectArrayByProperty(data, function(x) { return x.b; }, function(x, y) { return x + y; }, 0);
console.log(`Sum = ${sum}`);
console.log(`Average = ${sum / data.length}`);

let product = aggregateObjectArrayByProperty(data, function(x) { return x.b; }, function(x, y) { return x * y; }, 1);
console.log(`Product = ${product}`);
Just wrote a generic function from previously given solutions. I am a Java developer, so apologies for any mistakes or non-javascript standards :-)
A generic typescript function:
const sum = <T>(array: T[], predicate: (value: T, index: number, array: T[]) => number) => {
  return array.reduce((acc, value, index, array) => {
    return acc + predicate(value, index, array);
  }, 0);
};
Example:
const s = sum(arr, (e) => e.x);
var arr = [{x:1}, {x:2}, {x:3}];
arr.map(function(a) { return a.x; })
   .reduce(function(a, b) { return a + b; });
console.log(arr);
// I tried using the code above and the result is the original data array,
// because map/reduce return new values and do not mutate arr:
// result = [{x:1}, {x:2}, {x:3}]

var arr2 = [{x:1}, {x:2}, {x:3}]
  .reduce((total, thing) => total + thing.x, 0);
console.log(arr2);
// I changed the code to log the returned value and it worked:
// result = 6
We can use the array reduce method to create a new object, and we can use this approach to sum or filter:
const FRUITS = ["apple", "orange"]
const fruitBasket = {banana: {qty: 10, kg:3}, apple: {qty: 30, kg:10}, orange: {qty: 1, kg:3}}
const newFruitBasket = FRUITS.reduce((acc, fruit) => ({ ...acc, [fruit]: fruitBasket[fruit]}), {})
console.log(newFruitBasket)
The reduce function takes a callback and an optional initialValue. The callback
receives the accumulator and the current value.
If no initialValue is provided, the first element of the array is used as the
initial accumulator, and iteration starts from the second element.
Let's see this in code.
var arr = [1,2,4];
arr.reduce((acc, currVal) => acc + currVal);
// (no initialValue, so the accumulator starts as the first element, 1)
// first iteration:  1 + 2 => now accumulator = 3
// second iteration: 3 + 4 => now accumulator = 7
No more array elements, so the loop breaks.
// solution = 7
Now the same example with an initial value:
var initialValue = 10;
var arr = [1,2,4];
arr.reduce((acc, currVal) => acc + currVal, initialValue);
// (the accumulator now starts at 10)
// first iteration:  10 + 1 => now accumulator = 11
// second iteration: 11 + 2 => now accumulator = 13
// third iteration:  13 + 4 => now accumulator = 17
No more array elements, so the loop breaks.
// solution = 17
The same applies to object arrays (the current Stack Overflow question), except that here the initial value must be provided, because once the accumulator becomes a number, acc.x would be undefined:
var arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(acc, currVal){ return acc + currVal.x }, 0)
// currVal is an object with all the object's properties, so currVal.x => 1
// first iteration:  0 + 1 => now accumulator = 1
// second iteration: 1 + 2 => now accumulator = 3
// third iteration:  3 + 4 => now accumulator = 7
No more array elements, so the loop breaks.
// solution = 7
ONE THING TO BEAR IN MIND: the initial value can be anything - a number, {}, or [] - and if it is omitted, the first array element is used as the accumulator instead.
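For instance, an object initial value lets reduce build something other than a number (a small sketch, not from the original answer):
// group numbers by parity; the {} initial value shapes the result
const byParity = [1, 2, 3, 4].reduce((acc, n) => {
  (n % 2 ? acc.odd : acc.even).push(n);
  return acc;
}, { odd: [], even: [] });
// => { odd: [1, 3], even: [2, 4] }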
// Array(n).fill() creates an array with n elements
// the reduce callback's third parameter (c) is the current index
var fibonacci = (n) => Array(n).fill().reduce((a, b, c) => {
  return a.concat(c < 2 ? c : a[c - 1] + a[c - 2])
}, [])
console.log(fibonacci(8))
You should not use a.x for the accumulator; instead you can do it like this:
arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(a,b){ return a + b.x }, 0)
