I'm trying to use the array reduce function to return an array of objects. The input is a comma-separated string whose first row is used as the title row. I'm analyzing the solution and I don't understand the notation. Specifically, I don't understand the "=> ((obj[title] = values[index]), obj), {})" portion in the code below, and I'm looking to have someone explain it to me. To me it seems like we're initializing obj to be an object. After that I'm lost.
const CSV_to_JSON = (data, delimiter = ',') => {
  const titles = data.slice(0, data.indexOf('\n')).split(delimiter);
  return data
    .slice(data.indexOf('\n') + 1)
    .split('\n')
    .map(v => {
      const values = v.split(delimiter);
      return titles.reduce(
        (obj, title, index) => ((obj[title] = values[index]), obj)
      , {});
    });
};
console.log(CSV_to_JSON('col1,col2\na,b\nc,d')); // [{'col1': 'a', 'col2': 'b'}, {'col1': 'c', 'col2': 'd'}];
console.log(CSV_to_JSON('col1;col2\na;b\nc;d', ';')); // [{'col1': 'a', 'col2': 'b'}, {'col1': 'c', 'col2': 'd'}]
It's an (ab)use of the comma operator, which takes a list of comma-separated expressions, evaluates the first expression(s), discards them, and then the whole (...) resolves to the value of the final expression. It's usually something only to be done in automatic minification IMO, because the syntax looks confusing.
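A minimal standalone sketch of the comma operator's behavior, which is all the confusing part is doing:

```javascript
// The comma operator evaluates its operands left to right, and the
// whole parenthesized expression takes the value of the LAST one:
const last = (1, 2, 3);
console.log(last); // 3

// So ((acc.foo = 'fooVal'), acc) performs the assignment first
// (its own value is discarded), then resolves to acc itself:
const acc = {};
const returned = ((acc.foo = 'fooVal'), acc);
console.log(returned === acc); // true
console.log(acc); // { foo: 'fooVal' }
```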
The .reduce there
return titles.reduce((obj, title, index) => ((obj[title] = values[index]), obj), {});
is equivalent to
return titles.reduce((obj, title, index) => {
  obj[title] = values[index];
  return obj;
}, {});
which makes a lot more sense - it takes an array of titles (eg ['foo', 'bar']) and an array of values (eg ['fooVal', 'barVal']), and uses .reduce to transform those into a single object, { foo: 'fooVal', bar: 'barVal' }.
The first argument to the .reduce callback is the accumulator: on the first iteration it's the initial value (the second argument to .reduce), and on later iterations it's whatever was returned by the previous iteration. The code above passes {} as the initial value, assigns a property to the object, and returns the object on every iteration. .reduce is usually the most appropriate method for turning an array into an object, but if you're more familiar with forEach, the code is equivalent to
const obj = {};
titles.forEach((title, index) => {
  obj[title] = values[index];
});
return obj;
While the comma operator can be useful when code-golfing, it's probably not something that should be used when trying to write good, readable code.
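As a side note, if your runtime supports ES2019, Object.fromEntries expresses the same titles-to-values pairing without reduce at all. This is a sketch of an alternative, not part of the original code:

```javascript
// Assuming ES2019+: build the row object from [title, value] pairs
const titles = ['col1', 'col2'];
const values = ['a', 'b'];
const row = Object.fromEntries(titles.map((title, index) => [title, values[index]]));
console.log(row); // { col1: 'a', col2: 'b' }
```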
Related
I'm currently learning about the reduce method in JS, and while I have a basic understanding of it, more complex code completely throws me off. I can't seem to wrap my head around how the code is doing what it's doing. Mind you, it's not that the code is wrong, it's that I can't understand it. Here's an example:
const people = [
  { name: "Alice", age: 21 },
  { name: "Max", age: 20 },
  { name: "Jane", age: 20 },
];

function groupBy(objectArray, property) {
  return objectArray.reduce((acc, obj) => {
    const key = obj[property];
    const curGroup = acc[key] ?? [];
    return { ...acc, [key]: [...curGroup, obj] };
  }, {});
}
const groupedPeople = groupBy(people, "age");
console.log(groupedPeople);
// {
//   20: [
//     { name: 'Max', age: 20 },
//     { name: 'Jane', age: 20 }
//   ],
//   21: [{ name: 'Alice', age: 21 }]
// }
Now the reduce method as I understand it, takes an array, runs some provided function on all the elements of the array in a sequential manner, and adds the result of every iteration to the accumulator. Easy enough. But the code above seems to do something to the accumulator as well and I can't seem to understand it. What does
acc[key] ?? []
do?
Code like this makes it seem like a breeze:
const array1 = [1, 2, 3, 4];
// 0 + 1 + 2 + 3 + 4
const initialValue = 0;
const sumWithInitial = array1.reduce(
  (accumulator, currentValue) => accumulator + currentValue,
  initialValue
);
console.log(sumWithInitial);
// Expected output: 10
But then I see code like the first block and I'm completely thrown off. Am I just too dumb, or is there something I'm missing? Can someone please take me through each iteration of the code above while explaining how it does what it does on each turn? Thanks a lot in advance.
You are touching on a big problem with reduce. While it is such a nice function, it often favors code that is hard to read, which is why I often end up using other constructs.
Your function groups a number of objects by a property:
const data = [
  {category: 'catA', id: 1},
  {category: 'catA', id: 2},
  {category: 'catB', id: 3}
]
console.log(groupBy(data, 'category'))
will give you
{
  catA: [{category: 'catA', id: 1}, {category: 'catA', id: 2}],
  catB: [{category: 'catB', id: 3}]
}
It does that by taking apart the acc object and rebuilding it with the new data in every step:
objectArray.reduce((acc, obj) => {
  const key = obj[property];       // get the data value (i.e. 'catA')
  const curGroup = acc[key] ?? []; // get collector from acc or new array
  // rebuild acc by copying all values, but replace the property stored
  // in key with an updated array
  return { ...acc, [key]: [...curGroup, obj] };
}, {});
You might want to look at spread syntax (...) and the nullish coalescing operator (??).
Here is a more readable version:
objectArray.reduce((groups, entry) => {
  const groupId = entry[property];
  if(!groups[groupId]){
    groups[groupId] = [];
  }
  groups[groupId].push(entry);
  return groups;
}, {});
This is a good example where I would favor a good old for:
function groupBy(data, keyProperty){
  const groups = {}
  for(const entry of data){
    const groupId = entry[keyProperty];
    if(!groups[groupId]){
      groups[groupId] = [];
    }
    groups[groupId].push(entry);
  }
  return groups;
}
Pretty much the same number of lines, same level of indentation, easier to read, even slightly faster (or a whole lot, depending on data size, which impacts spread, but not push).
That code is building an object in the accumulator, starting with {} (an empty object). Every property in the object will be a group of elements from the array: The property name is the key of the group, and the property value is an array of the elements in the group.
The code const curGroup = acc[key] ?? []; gets the current array for the group acc[key] or, if there isn't one, gets a new blank array. ?? is the "nullish coalescing operator." It evaluates to its first operand if that value isn't null or undefined, or its second operand if the first was null or undefined.
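A few quick cases showing how ?? behaves, and how it differs from ||, which treats every falsy value as a miss:

```javascript
const a = null ?? 'fallback';      // 'fallback' — null is nullish
const b = undefined ?? 'fallback'; // 'fallback' — undefined is nullish
const c = 0 ?? 'fallback';         // 0 — falsy but NOT nullish, so it's kept
const d = 0 || 'fallback';         // 'fallback' — || reacts to any falsy value
console.log(a, b, c, d);
```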
So far, we know that obj[property] determines the key for the object being visited, and curGroup is the current array of values for that key (created as necessary).
Then return { ...acc, [key]: [...curGroup, obj] }; uses spread notation to create a new accumulator object that has all of the properties of the current acc (...acc), and then adds or replaces the property with the name in key with a new array containing any previous values that the accumulator had for that key (curGroup) plus the object being visited (obj), since that object is in the group, since we got key from obj[property].
Here's that again, related to the code via comments. I've split out the part creating a new array [...curGroup, obj] from the part creating a new accumulator object for clarity:
function groupBy(objectArray, property) {
  return objectArray.reduce(
    (acc, obj) => {
      // Get the value for the grouping property from this object
      const key = obj[property];
      // Get the known values array for that group, if any, or
      // a blank array if there's no property with the name in
      // `key`.
      const curGroup = acc[key] ?? [];
      // Create a new array of known values, adding this object
      const newGroup = [...curGroup, obj];
      // Create and return a new object with the new array, either
      // adding a new group for `key` or replacing the one that
      // already exists
      return { ...acc, [key]: newGroup };
    },
    /* The starting point, a blank object: */ {}
  );
}
It's worth noting that this code is very much written with functional programming in mind. It uses reduce instead of a loop (when not using reduce, FP usually uses recursion rather than loops) and creates new objects and arrays rather than modifying existing ones.
Outside of functional programming, that code would probably be written very differently, but reduce is designed for functional programming, and this is an example of that.
Just FWIW, here's a version not using FP or immutability (more on immutability below):
function groupBy(objectArray, property) {
  // Create the object we'll return
  const result = {};
  // Loop through the objects in the array
  for (const obj of objectArray) {
    // Get the value for `property` from `obj` as our group key
    const key = obj[property];
    // Get our existing group array, if we have one
    let group = result[key];
    if (group) {
      // We had one, add this object to it
      group.push(obj);
    } else {
      // We didn't have one, create an array with this object
      // in it and store it on our result object
      result[key] = [obj];
    }
  }
  return result;
}
In a comment you said:
I understand the spread operator but it's use in this manner with the acc and the [key] is something I'm still confused about.
Yeah, there are a lot of things packed into return { ...acc, [key]: [...curGroup, obj] };. :-) It has both kinds of spread syntax (strictly speaking, ... isn't an operator, though the distinction isn't particularly important here) plus computed property name notation ([key]: ____). Let's separate it into two statements to make it easier to discuss:
const updatedGroup = [...curGroup, obj];
return { ...acc, [key]: updatedGroup };
TL;DR - It creates and returns a new accumulator object with the contents of the previous accumulator object plus a new or updated property for the current/updated group.
Here's how that breaks down:
[...curGroup, obj] uses iterable spread. Iterable spread spreads out the contents of an iterable (such as an array) into an array literal or a function call's argument list. In this case, it's spread into an array literal: [...curGroup, obj] says "create a new array ([]) spreading out the contents of the curGroup iterable at the beginning of it (...curGroup) and adding a new element at the end (, obj).
{ ...acc, ____ } uses object property spread. Object property spread spreads out the properties of an object into a new object literal. The expression { ...acc, _____ } says "create a new object ({}) spreading out the properties of acc into it (...acc) and adding or updating a property afterward (the part I've left as just _____ for now)
[key]: updatedGroup (in the object literal) uses computed property name syntax to use the value of a variable as the property name in an object literal's property list. So instead of { example: value }, which creates a property with the actual name example, computed property name syntax puts [] around a variable or other expression and uses the result as the property name. For instance, const obj1 = { example: value }; and const key = "example"; const obj2 = { [key]: value }; both create an object with a property called example with the value from value. The reduce code is using [key]: updatedGroup to add or update a property in the new accumulator whose name comes from key and whose value is the new group array.
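Here's that distinction as runnable code (the names are made up for illustration):

```javascript
// Literal vs computed property names — both create { example: 42 }:
const value = 42;
const literal = { example: value };     // property name is literally "example"
const key = 'example';
const computed = { [key]: value };      // property name taken from the variable `key`
console.log(literal.example === computed.example); // true

// Any expression can appear inside the brackets:
const prefixed = { ['my_' + key]: value };
console.log(prefixed.my_example); // 42
```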
Why create a new accumulator object (and new group arrays) rather than just updating the one that the code started with? Because the code is written such that it avoids modifying any object (array or accumulator) after creating it. Instead of modifying one, it always creates a new one. Why? It's "immutable programming," writing code that only ever creates new things rather than modifying existing things.

There are good reasons for immutable programming in some contexts. It reduces the possibility that a change in code in one place has unexpected ramifications elsewhere in the codebase. Sometimes it's necessary, because the original object is immutable (such as one from Mongoose) or must be treated as though it were immutable (such as state objects in React or Vue).

In this particular code it's pointless, it's just style. None of these objects is shared anywhere until the process is complete, and none of them is actually immutable. The code could just as easily use push to add objects to the group arrays and use acc[key] = updatedGroup; to add/update groups on the accumulator object. But again, while it's pointless in this code, there are good uses for immutable programming. Functional programming usually adheres to immutability (as I understand it; I haven't studied FP deeply).
I am having a little trouble differentiating between spread and rest. Could someone explain to me if either spread or rest is being used within the reduce functions return statement?
This is what I am not understanding:
return [currentValue, ...accumulator]
inside of
let str = 'KING';
const reverseStr = str => {
  return str
    .split('')
    .reduce((accumulator, currentValue) => {
      return [currentValue, ...accumulator];
    }, [])
    .join('');
};
Rest syntax always creates (or assigns to) a variable, eg:
const [itemOne, ...rest] = arr;
// ^^ created ^^^^
Spread syntax only results in another array (or object) expression - it doesn't put anything into variables. Here, you're using spread syntax to create a new array composed of currentValue and the values of accumulator.
return [currentValue, ...accumulator];
is like
return [currentValue, accumulator[0], accumulator[1], accumulator[2] /* ... */];
You're spreading the items of accumulator into the array that's being returned.
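Side by side, the two syntaxes look like this (a small standalone sketch):

```javascript
const letters = ['a', 'b', 'c'];

// REST: on the left side of an assignment — gathers leftover items INTO a variable
const [first, ...others] = letters;
console.log(first);  // 'a'
console.log(others); // ['b', 'c']

// SPREAD: inside an array literal — spreads items OUT of an existing array
const prepended = ['z', ...letters];
console.log(prepended); // ['z', 'a', 'b', 'c']
```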
I found the implementation of this function in JS on the Internet. The function recursively filters an array of objects; each object may have a property "children", which is an array of objects, and those objects may also have children, and so on. The function works correctly, but I don't understand it a bit.
This is my function:
getFilteredArray (array, key, searchString) {
  const res = array.filter(function iter(o) {
    if (o[key].toLowerCase().includes(searchString.toLowerCase())) {
      return true;
    }
    if(o.children){
      return (o.children = o.children.filter(iter)).length;
    }
  });
  this.setState({
    filteredArray: res
  });
}
I don't understand this part of the code:
if(o.children){
  return (o.children = o.children.filter(iter)).length;
}
Can we simplify the expression (o.children = o.children.filter(iter)).length?
Why do we return the length of the array and not the array itself?
Also, the function iter accepts one argument, an object. Why do we write o.children.filter(iter) without passing any arguments to iter? According to recursion tutorials, arguments are always passed if the function requires them, but here we don't pass any, which is strange.
Here's a re-write that strives for clarity and simplifies the logic a bit to remove distractions:
const recursivelyFilter = (arr, key, searchString) => {
  return arr.filter(function iter(obj) {
    if (obj[key].includes(searchString)) {
      return true;
    }
    if (obj.children) {
      obj.children = obj.children.filter(child => iter(child));
      return obj.children.length > 0;
    }
    return false;
  });
};
Array#filter is the meat of this code. filter accepts a callback which should return a boolean to determine whether an element will be included in the result array. It doesn't modify the array in place.
The base cases (terminating conditions for the recursion) are:
If the current object (a node in the tree) has a key key whose value matches searchString, return true.
If the current node doesn't match searchString and has no children, return false. In the original code, the implicit return of undefined is falsy, which has the same effect.
The recursive case is:
If the current node has children, recursively filter them using the boolean result of the iter function. If at least one descendant of the current node passes the filter condition, include the current node in its parent's children array, otherwise remove it. The code treats the length of the new child array as a boolean to achieve this.
return (o.children = o.children.filter(iter)).length; first performs an assignment to o.children, necessary because o.children.filter returns a fresh copy of the array. After the assignment is finished, the expression resolves to the new o.children and its length property is returned. The length is then treated as truthy/falsey according to the recursive case rule described above. This amounts to:
obj.children = obj.children.filter(child => iter(child));
return obj.children.length > 0;
If we returned the array itself, every node would be kept, because an array, even an empty one, is an object and therefore truthy. [].length, on the other hand, is 0, which is falsy, giving the desired outcome.
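You can verify that distinction directly:

```javascript
// An array (even an empty one) is an object, and all objects are truthy;
// a length of 0, however, is falsy:
const emptyTruthy = Boolean([]);          // true
const emptyLength = Boolean([].length);   // false (0 is falsy)
const oneLength = Boolean([1].length);    // true  (1 is truthy)
console.log(emptyTruthy, emptyLength, oneLength);
```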
As for o.children.filter(iter), Array#filter accepts a callback as its first parameter, which can be a function variable such as iter. Another option is creating an anonymous function directly in the argument list; this is usually how it's done. The version above adds an arrow wrapper, but it's an unnecessary extra layer of indirection, since the lone parameter is just passed through the wrapper. We could also use the function keyword here; whatever the form, the goal is the same: we pass a function into filter to call on each element.
By the way, the function assumes that key is set on all the nodes of the nested objects in array and that obj[key].includes is defined. Clearly, the author had a very specific data structure and purpose in mind and wasn't interested in prematurely generalizing.
Here's test code illustrating its operation. Playing around with it should help your understanding.
const recursivelyFilter = (arr, key, searchString) => {
  return arr.filter(function iter(obj) {
    if (obj[key].includes(searchString)) {
      return true;
    }
    if (obj.children) {
      obj.children = obj.children.filter(child => iter(child));
      return obj.children.length > 0;
    }
    return false;
  });
};
const arr = [
  {
    foo: "bar",
    children: [
      {
        foo: "baz",
        children: [
          {foo: "quux"},
          {foo: "quuz"},
        ]
      }
    ]
  },
  {
    foo: "corge",
    children: [
      {foo: "quux"}
    ]
  },
  {
    foo: "grault",
    children: [{foo: "bar"}]
  }
];
console.log(recursivelyFilter(arr, "foo", "quux"));
Perhaps some code changes will help you understand what is going on.
function iter(o){
  if (o[key].toLowerCase().includes(searchString.toLowerCase())) {
    return true;
  }
  if(o.children){
    o.children = o.children.filter(iter);
    return o.children.length;
  }
}

getObject (array, key, searchString) {
  const res = array.filter(iter);
  this.setState({
    filteredArray: res
  });
}
The iter function is executed by array.filter for each element in the array; if it returns a truthy value the element is added to the result, otherwise it is excluded.
In this scenario, if the item itself isn't a direct match but a child item is, we want to keep it. The function handles that by filtering the o.children array using the same criteria. The filtered version of the array is re-assigned to o.children.
The length of the filtered array is then returned as the true/false value that the previous array.filter is looking for. If the array is empty, the length is zero, which is false so the item is excluded. Otherwise a non-zero value is returned, which is true, so the item is kept.
class A {
  static getFilteredArray(array, key, searchString) {
    const query = searchString.toLowerCase()
    const res = array.filter(function searchText(item) {
      const text = item[key].toLowerCase()
      if (text.includes(query)) {
        return true
      }
      if (item.children) { // if object has children, do same filtering for children
        item.children = item.children.filter(searchText)
        return item.children.length
        // same as below, but shorter
        // item.children = item.children.filter(function (child) {
        //   return searchText(child)
        // })
      }
    })
    return res
  }
}
const input = [{
  name: 'John',
  children: [{
    name: 'Alice'
  }]
}]
const output1 = A.getFilteredArray(input, 'name', 'Alic')
const output2 = A.getFilteredArray(input, 'name', 'XY')
console.log('Alic => ', output1)
console.log('XY =>', output2)
The return is not for getObject. It is for the .filter() callback.
The answer is therefore simple: filter expects its callback to return a true/false value depending on whether you want to keep or remove the object from the resulting array. Therefore returning the length is enough, since 0 is falsy and all other (positive) numbers are truthy.
The answer from ggorlen admirably explains how this function works.
But this function and ggorlen's simplification of it do something I believe a filter should never do: they mutate the initial data structure. If you examine this value before and after the call in ggorlen's example, you will notice that it changes from 2 to 1:
arr[0].children[0].children.length
And this issue is present in the original code too. As far as I can see, there is no simple way to fix this with an implementation based on Array.prototype.filter. So another implementation makes some sense. Here's what I have come up with, demonstrated with ggorlen's test case:
const getFilteredArray = (arr, test) => arr .reduce
( ( acc
, {children = undefined, ...rest}
, _ // index, ignored
, __ // array, ignored
, kids = children && getFilteredArray (children, test)
) => test (rest) || (kids && kids .length)
? acc .concat ({
... rest,
...(kids && kids .length ? {children: kids} : {})
})
: acc
, []
)
const arr = [
{foo: "bar", children: [{foo: "baz", children: [{foo: "quux"}, {foo: "quuz"}]}]},
{foo: "corge", children: [{foo: "quux"}]},
{foo: "grault", children: [{foo: "bar"}]}
];
const test = (item) => item.foo == 'quux'
console .log (
getFilteredArray (arr, test)
)
Note that I made it a bit more generic than requested, testing with an arbitrary predicate rather than testing that key property matches the searchString value. This makes the code simpler and the breakdown in logic clearer. But if you want that exact API, you can make just a minor change.
const getFilteredArray = (arr, key, searchString) => arr .reduce
( ( acc
, {children = undefined, ...rest}
, _ // index, ignored
, __ // array, ignored
, kids = children && getFilteredArray (children, key, searchString)
) => rest[key] === searchString || (kids && kids .length)
? acc .concat ({
... rest,
...(kids && kids .length ? {children: kids} : {})
})
: acc
, []
)
One thing that might be missing is that the predicate runs against a value that does not include the children. If you wanted to be able to include them, it's not much more work. We'd have to pass item in place of {children = undefined, ...rest} and destructure it inside the body of the function. That would require changing the body from a single expression to a braced { ... } function body.
If I wasn't trying to closely match someone else's API, I would also change the signature to
const getFilteredArray = (test) => (arr) => arr .reduce ( /* ... */ )
This version would allow us to partially apply the predicate and get a function that we can run against different inputs.
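For example, under the same assumptions as the reduce version above, the curried signature could be used like this. This is a sketch: it uses a braced body with a local const rather than the original's default-parameter trick, but the logic is the same:

```javascript
// Curried sketch: fix the predicate once, reuse the resulting function
const getFilteredArray = (test) => (arr) =>
  arr.reduce((acc, { children, ...rest }) => {
    const kids = children && getFilteredArray(test)(children);
    return test(rest) || (kids && kids.length)
      ? acc.concat({ ...rest, ...(kids && kids.length ? { children: kids } : {}) })
      : acc;
  }, []);

// Partially apply the predicate, then run it against different inputs:
const keepQuux = getFilteredArray(item => item.foo === 'quux');
console.log(keepQuux([{ foo: 'quux' }, { foo: 'bar' }]));
// [{ foo: 'quux' }]
console.log(keepQuux([{ foo: 'bar', children: [{ foo: 'quux' }] }]));
// [{ foo: 'bar', children: [{ foo: 'quux' }] }]
```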
const objectFromPairs = arr => arr.reduce((a, v) => ((a[v[0]] = v[1]), a), {});
console.log(objectFromPairs([['a', 1], ['b', 2]])); // {a: 1, b: 2}
I can't wrap my head around this. What does the callback (a, v) => ((a[v[0]] = v[1]), a) do? Isn't a reducer's callback supposed to be just a function? Why is there an assignment followed by a comma and then the accumulator? How do I make sense of this?
When it's (a, v) => a[v[0]] = v[1], why does it return 2? Shouldn't it return {a: 1} on the first iteration, then {b: 2}, so shouldn't we end up with {b: 2} instead of just 2?
I can't wrap my head around this.
Understandable – it uses the relatively obscure (and proscribed!) comma operator. You can expand it to the equivalent statements to make it more readable:
const objectFromPairs = arr => arr.reduce((a, v) => {
  a[v[0]] = v[1];
  return a;
}, {});
Still kind of an abuse of reduce, though. This is a generic function; it doesn’t have to be golfed.
const objectFromPairs = pairs => {
  const object = {};
  pairs.forEach(([key, value]) => {
    object[key] = value;
  });
  return object;
};
Starting from the inside and working out:
(a, v) => ((a[v[0]] = v[1]), a)
That's a reduce callback that takes an "accumulator" parameter (a) and a "value" parameter (v). What the body of the function does is employ a comma operator expression statement to get around the need to use curly braces and an explicit return. The first subexpression in the comma operator expression, (a[v[0]] = v[1]), splits up a two-value array into a name for an object property and a value. That is, v[0] becomes a name for a property in the accumulator object, and v[1] is that property's value.
Now that's used in a .reduce() call made on an array, with {} as the initial value for the .reduce() accumulator. Thus, that function builds up properties with values taken from the source array, whose elements clearly need to be arrays themselves, because that's what the callback expects.
It looks like the key to your confusion is that comma operator. An arrow function in JavaScript only has an implied return value when its function body is just a single expression. The comma operator is a way to "cheat" a little bit: you can string together several expressions, with the overall result being the value of the last expression. For an arrow function, that's handy. That also answers your second question: with (a, v) => a[v[0]] = v[1], the implied return value is the value of the assignment expression itself, i.e. v[1], not the object, so the accumulator ends up being the last assigned value, 2.
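Worth noting: in ES2019+ environments the same pairs-to-object conversion exists as a built-in, so the golfed reduce isn't needed at all:

```javascript
// Built-in equivalent of objectFromPairs (assuming ES2019+)
const fromPairs = Object.fromEntries([['a', 1], ['b', 2]]);
console.log(fromPairs); // { a: 1, b: 2 }
```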
I want to create an array of objects like this using iteration:
opts = [{label:1, value:1}, {label:4, value:4}]
The values for these objects come from an array: portArray = [1,4].
I'm writing
const portArray = [1,4];
return {
portArray.map(value =>
({ label: value value: value }))
};
});
but it does not seem to work. I'm missing something but I don't know what. Any ideas?
The code you have provided is not valid. It contains an illegal return statement.
You can reach the desired result using e.g. Object.assign:
const portArray = [1, 4],
      res = portArray.map(v => Object.assign({}, { label: v, value: v }));
console.log(res);
The code you provided is quite confusing, but I think I got the point of your question. Since map returns a new array, assuming you have this array: var portArray = [1,4]; you can use map like this:
function manipulateArray(data) {
  return data.map((value) => {
    return {label: value, value: value};
  });
}

var newArray = manipulateArray(portArray);
This way newArray will equal [{label:1, value:1}, {label:4, value:4}].
You should probably read map's documentation on MDN.
Your code is missing a comma separating your object properties:
{
  label: value, // <-- comma between properties
  value: value
}
In addition, Array#map will return a new array containing your values mapped to objects, which you can store in a local variable:
const portArray = [1,4];
const newArray = portArray.map(value =>({ label: value, value: value }));
// remember this comma :) ----------------^
Side note about implicit vs explicit return values for arrow functions: the parentheses around the object literal following the arrow are necessary so that the object is parsed as an expression (not a block body) and returned implicitly.
Use of an explicit return statement in an arrow function (notice the addition of curly braces around the function body):
const newArray = portArray.map(value => {
  return {
    label: value,
    value: value
  };
});
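One more simplification, assuming ES2015 support: when the property value is a variable with the same name as the property, shorthand properties let you drop the repetition:

```javascript
const portArray = [1, 4];
// { label: value, value } is shorthand for { label: value, value: value }
const opts = portArray.map(value => ({ label: value, value }));
console.log(opts); // [{ label: 1, value: 1 }, { label: 4, value: 4 }]
```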