Is there a specific reason why it is better to use .map than for loops in React?
I'm working on a project where all arrays are being iterated with for loops, but I'm convinced that it is better and good practice to use .map because it creates a copy of the array. To my understanding that is better practice, but I can't find a specific reason.
Is there a specific reason why it is better to use .map than for loops in React?
If you're just iterating, map is the wrong tool. map is for mapping arrays: producing a new array based on the values from the previous array. Someone somewhere is teaching map as an iteration tool, unfortunately doing their students a disservice. (I wish I knew who it was so I could have a word.) Never do this:
// Don't do this
myArray.map(entry => {
    // ...do something with `entry`...
});
For iteration, it's a choice between a for loop, a for-of loop, and the forEach method. (Well, and a few other things; see my answer here for a thorough rundown.)
For instance, using forEach:
myArray.forEach(entry => {
    // ...do something with `entry`...
});
vs. using for-of:
for (const entry of myArray) {
    // ...do something with `entry`...
}
(Those aren't quite equivalent. The former has to be an array. The latter can be any iterable object.)
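For example (my illustration, not from the question): for-of works on any iterable, such as a Map's values, which the array forEach method can't iterate directly:
const users = new Map([
    [1, "Alice"],
    [2, "Bob"],
]);
for (const name of users.values()) {
    console.log(name); // "Alice", then "Bob"
}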
The reason you may see map a lot in React is that you're frequently mapping things in React, in at least two ways:
Mapping from raw data to JSX elements, like this:
return (
    <div>
        {myData.map(({id, name}) => <div key={id}>{name}</div>)}
    </div>
);
Since that's a mapping operation, with the array created by map being used to provide the contents of the outer div, map is the right choice there.
Mapping from old state to new state, like this:
const [myArray, setMyArray] = useState([]);
// ...
setMyArray(myArray.map(obj => ({...obj, update: "updated value"})));
Since that again is a mapping operation, creating a new array to set as the myArray state member, map is the right choice.
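Side note (a common React pattern, not something from the question): when the new state is derived from the old state, the callback form of the state setter is often preferred, since it always receives the latest state:
// Same mapping operation, but reading the freshest state
setMyArray(current => current.map(obj => ({...obj, update: "updated value"})));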
...but I'm convinced that it is better and good practice to use .map because it creates a copy of the array...
If you want a copy/updated version of the array, yes, map is a good choice. It's more concise than the equivalent for loop (or even for-of):
const newArray = oldArray.map(entry => /*...update...*/);
vs.
// Probably not best practice unless you have conditional logic
// in the loop body that may or may not `push` (or similar)
const newArray = [];
for (const entry of oldArray) {
    newArray.push(/*...update...*/);
}
.map() maps each array value to a new value and returns a brand-new array.
In a React context, .map() can be used to map each array item to a piece of JSX.
A for loop also iterates over an array, just like .map(). The major difference is that a for loop lets you perform arbitrary custom computation, whereas .map() is specifically designed for mapping each value to a new one.
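A quick illustration of that difference (hypothetical data, just for contrast):
const prices = [1, 2, 3];

// .map(): one new value per input value, collected into a new array
const doubled = prices.map(price => price * 2); // [2, 4, 6]

// for loop: arbitrary custom computation, e.g. accumulating a total
let total = 0;
for (const price of prices) {
    total += price;
}
console.log(total); // 6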
I have code like below and it is working fine.
for (let i = 0; i < this.Array.length; i++) {
    if (this.Array[i].propertyObject.hasOwnProperty('header'))
        this.Array[i].ColumnName = this.Array[i].propertyObject.header;
}
May I know how to achieve the same with Map? Thanks in advance.
May I know how to achieve the same with Map
I assume you mean map. map isn't the right tool for doing exactly what that loop does, because that loop modifies the array in place, but map creates a new array instead.
If you want a new array, perhaps also with new objects (e.g., functional programming or immutable programming):
// Replace `this.Array` with a new array
this.Array = this.Array.map(element => {
    // If we need to change this element...
    if (element.propertyObject.hasOwnProperty("header")) {
        // ...do a shallow copy along with the replacement
        element = {...element, ColumnName: element.propertyObject.header};
    }
    return element;
});
Note that that assumes the elements are simple objects. If they aren't, you'll need to handle constructing the replacement differently than just using {...original}.
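For instance (an illustrative sketch, assuming a hypothetical class; this is not from the question): spreading a class instance produces a plain object and loses the prototype, so you would rebuild via the constructor instead:
class Column {
    constructor(propertyObject, columnName) {
        this.propertyObject = propertyObject;
        this.ColumnName = columnName;
    }
}

// Rebuild instances rather than spreading them
this.Array = this.Array.map(element =>
    element.propertyObject.hasOwnProperty("header")
        ? new Column(element.propertyObject, element.propertyObject.header)
        : element
);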
But if you want to keep the same array as your current code does, your loop is just fine. You have other options (like forEach or for-of), but what you have is also fine. for-of is well-suited to what you're doing:
for (const element of this.Array) {
    if (element.propertyObject.hasOwnProperty("header")) {
        element.ColumnName = element.propertyObject.header;
    }
}
Side note: In new code, you might want to use Object.hasOwn rather than Object.prototype.hasOwnProperty (with a polyfill if needed for older environments; recent versions of all modern browsers support it natively, though).
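For example, the for-of version above with Object.hasOwn:
for (const element of this.Array) {
    if (Object.hasOwn(element.propertyObject, "header")) {
        element.ColumnName = element.propertyObject.header;
    }
}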
this.Array.forEach(item => {
    item.ColumnName = "header" in item.propertyObject ? item.propertyObject.header : item.ColumnName;
});
I am working on an Angular 9, RxJS 6 app and have a question regarding the different outcomes of piping subject values and doing unit conversion in that pipe.
Please have a look at this stackblitz. There, inside the backend.service.ts file, an observable is created that does some "unit conversion" and returns everything that is emitted to the _commodities Subject. If you look at the convertCommodityUnits function, please note that I commented out the working example and instead have the way I solved it initially.
My question: When you use the unsubscribe buttons on the screen and subscribe again, with the "conversion solution" that just overrides the object without making a copy, the values in the HTML are converted multiple times, so the pipe does not use the original data that the subject provides. If you use the other code, which creates a clone of the commodity object inside convertCommodityUnits, it works as expected.
Now, I don't understand why the two ways of converting the data behave so differently. I get that one manipulates the data directly, because JS does call by sharing, and one returns a new object. But the object that is passed to the convertCommodityUnits function is created by the Array.prototype.map function, so it should not overwrite anything, right? I expected that RxJS uses the original, last data that was emitted to the subject to pass into the pipe/map operators, but that does not seem to be the case in the example, right?
How/Why are the values converted multiple times here?
This is more or less a follow-up question on this: Angular/RxJS update piped subject manually (even if no data changed), "unit conversion in rxjs pipe", so it's the same setup.
When you use map you get a new reference for the array, but you don't get new objects in the newly generated array (it's a shallow copy of the array), so you're mutating the data inside each element.
In the destructuring solution, because you have only primitive types in each object in the array, you generate completely brand-new elements for your array each time the conversion method is called (this is important: not only a new array but also new elements in the array, i.e., you have performed a deep copy of the array). So you don't accumulate the values across successive subscriptions.
That doesn't mean a 1-level destructuring like the one in the provided stackblitz demo will work in all cases. I've seen this mistake made a lot, particularly in redux-pattern frameworks that require you not to mutate the stored data, like ngrx, ngxs, etc. If you had complex objects in your array, the 1-level destructuring would have left all the embedded objects in each element of the array untouched (still shared). I think it's easier to describe this behavior with examples:
const obj1 = {a: 1};
const array = [{b: 2, obj: obj1}];
// after every newArray assignment in the below code,
// console.log(newArray === array) prints false to the console
let newArray = [...array];
console.log(array[0] === newArray[0]); // true
newArray = array.map(item => item);
console.log(array[0] === newArray[0]); // true
newArray = array.map(item => ({...item}));
console.log(array[0] === newArray[0]); // false
console.log(array[0].obj === newArray[0].obj); // true
newArray = array.map(item => ({
    ...item,
    obj: {...item.obj}
}));
console.log(array[0] === newArray[0]); // false
console.log(array[0].obj === newArray[0].obj); // false
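As an aside (not part of the original answer): for a fully deep copy of plain data on a reasonably modern runtime, structuredClone is another option:
// Deep-clones plain objects and arrays, so nothing is shared
const deepCopy = structuredClone(array);
console.log(array[0] === deepCopy[0]);         // false
console.log(array[0].obj === deepCopy[0].obj); // false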
Recently I was doing some filtering on an array, and I came up with two approaches and I don't know which one to use.
In terms of performance, which is better to use?
arr.filter(someCondition).map(x => x.something)
Or
arr.map(x => {
    if (someCondition) {
        return x.something;
    }
});
One thing to note is that I'm using React, so having undefined values in the returned array (from not returning anything inside .map) is totally acceptable.
This involves a lot of variables, like how many elements the array has and how many of them will pass the condition, and that is what makes me wonder which one is better.
So, considering n elements, and cases where all elements pass the condition as well as cases where none do, which one has better performance?
.filter().map() OR .map with if inside?
First: It's really unlikely to matter.
But: since it's okay in your case to have undefined in the result array, the second one will be faster.
Let's compare:
The first way, filter has to make a pass through the entire array, creating a new array with the accepted entries, then map has to make a pass through the new array and make a new array of the something properties.
The second way, map makes one pass through the array, creating a new array with the something properties (or undefined for the ones that would have been filtered out).
Of course, this assumes you aren't just offloading the burden elsewhere (e.g., if React has to do something to ignore those undefineds).
But again: It's unlikely to matter.
I would probably use a third option: Build the result array yourself:
const result = [];
for (const x of arr) {
    if (someCondition) {
        result[result.length] = x.something;
    }
}
That still makes just one pass, but doesn't produce an array with undefined in it. (You can also shoehorn that into a reduce, but it doesn't buy you anything but complexity.)
(Don't worry about for-of requiring an iterator. It gets optimized away in "hot" code.)
You could use the reduce function instead of map and filter, and it wouldn't leave undefined in the result like when you use map with an if inside.
arr.reduce((acc, x) => (someCondition ? [...acc, x.something] : acc), [])
or
arr.reduce((acc, x) => {
    if (someCondition) {
        acc.push(x.something);
    }
    return acc;
}, []);
As @briosheje said, a smaller array would be a plus, since it reduces the number of re-renders in your React app wherever you use this array, and passing undefined around is unnecessary. The reduce approach would be more efficient, I would say.
If you are wondering why I wrote the first one with the spread operator and the second one without: the first one takes more execution time, and that is because the spread operator clones acc on every iteration. So if you want less execution time, go for the second one; if you want fewer lines of code, go for the first one.
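A rough way to see that cost yourself (a quick sketch, timings will vary by engine): the spread version re-copies the accumulator on every iteration, which is O(n²) overall, while push is O(n):
const arr = Array.from({ length: 10000 }, (_, i) => i);

console.time("spread"); // clones acc each step: ~n²/2 copies in total
arr.reduce((acc, x) => [...acc, x * 2], []);
console.timeEnd("spread");

console.time("push"); // appends in place: each element touched once
arr.reduce((acc, x) => { acc.push(x * 2); return acc; }, []);
console.timeEnd("push");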
I'm wondering what the time complexity of turning iterables (for example, a MapIterator) into an array is.
Let's say I have this code:
const Component = ({ map }) => {
    return (
        <Fragment>
            {Array.from(map.values()).map(item => <div key={item.key}>{item.name}</div>)}
        </Fragment>
    );
};
What is the time complexity of Array.from() here? Is it O(n), as I'm thinking, or, because it's a MapIterator, is it somehow converted from array-like to array more quickly?
My use case is that I want to save items (which need to be accessed) in a Map for performance reasons, but I have to run through them as an array.
For this question's purpose, I can save in state or use selectors or things like that.
What do you guys think?
You are correct that Array.from() is O(n). If you're concerned about performance, the easiest thing you can do to improve is to not iterate the values twice. Array.from() already accepts a map function as a second optional argument:
Array.from(map.values(), ({ key, name }) => <div key={key}>{name}</div>)
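Applied to the component from the question, that could look like this (same assumptions about the item shape as in your snippet):
const Component = ({ map }) => (
    <Fragment>
        {Array.from(map.values(), ({ key, name }) => <div key={key}>{name}</div>)}
    </Fragment>
);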
In my application, I have a very large array of objects on the front-end, and these objects all have some kind of long ID under the heading ["object_id"]. I'm using UnderscoreJS for all my manipulations of this list. This is a prototype for an application that will eventually be handling most of this effort on the backend.
Merging the list is a big part of the application's requirement. See, the list that I work with initially will have many distinct objects with identical object_ids. Initially I was merging them all in one go with a groupBy and a map-reduce, but now the requirements have changed and I'm merging them one at a time (about a second apart, to simulate a stream of input) into an initially empty array.
My naive implementation was something like this:
function(newObject, objects) {
    var obj_id = newObject["object_id"]; // id of object to merge, let's say
    var tempObject = null;
    var objectToMerge = _.find(objects, function(obj) {
        return obj_id == obj["object_id"];
    });
    if (objectToMerge) {
        tempObject = merge(objectToMerge, newObject);
        objects = _.reject(objects, /*same function as findWhere*/);
    } else {
        tempObject = newObject;
    }
    objects.push(tempObject);
    return objects;
}
This is ridiculously more efficient than before, when I was re-merging from the mock data "source" array every time a new object was supposed to be pushed, so it's down from what I think was at least O(N^2) to O(N), but N here is so large (for JavaScript, anyway!) that I'd like to optimize it. Currently the worst case, where the object_id is not redundant, is that the entire list is traversed twice. So what I'd like is to do a find-and-replace, an operation which would return a new version of the list, but with the merged object in place of the old one.
I could do a map where the iterator returns a new, merged object iff the object_id matches, but that doesn't have the short-circuit evaluation that _.find has, which is the difference between having a worst-case runtime and having that be the default runtime, and it doesn't easily account for pushing the object if there wasn't a match.
I'd also like to avoid mutating the original array in place. I know objects.push(tempObject) does that very thing, but for data-binding reasons I'm ignoring that and returning the mutated list as though it were new.
It's also unavoidable that I'll have to check the array to see if the new object was merged or whether it was appended. Using closures I could keep track of a flag to see if the merge happened, but I'm trying to be as idiomatically LISPy as possible for my own sanity. Also, past a certain point, most objects will be merged, so extra runtime overhead for adding new items isn't a huge problem, as long as it is only incurred when it has to happen.
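For reference, a minimal sketch of the map-with-flag approach described above (the function name addOrMerge is mine, and it uses the same hypothetical merge helper as the snippet, with plain Array.prototype.map rather than Underscore):
function addOrMerge(newObject, objects) {
    var merged = false;
    // One pass: replace the matching element with a merged copy, keep the rest
    var result = objects.map(function(obj) {
        if (obj["object_id"] == newObject["object_id"]) {
            merged = true;
            return merge(obj, newObject); // `merge` as in the snippet above
        }
        return obj;
    });
    // No match found: append the new object instead
    if (!merged) {
        result.push(newObject);
    }
    return result;
}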