I have the following code, which feels a bit redundant as I'm iterating over the same array many times; any suggestions to improve it while keeping it readable would be appreciated.
The array is a standard array of objects, [{}, {}, {}], where I am checking the keys in each object against certain conditions, e.g. every object contains a certain value, or only some of them do, etc.
const getValue = arrayOfObjects => {
  const hasA = arrayOfObjects.some(
    object => object.abc === 'val1'
  );
  const hasB = arrayOfObjects.every(
    object => object.abc === 'val2'
  );
  // the above 2 iterations are repeated about 4 more times for different checks
  // then there are a few versions of the below assignment depending on the above variables
  const hasC = hasA || hasB;
  // finally the function returns one of the values
  if (hasA) {
    return 'val10';
  } else if (hasB) {
    return 'val11';
  } else if (hasD) { // hasD comes from one of the omitted checks
    return 'val12';
  }
};
This sounds like a theoretical question: you're wondering whether using a few Array.prototype methods like some and every on the same array over and over has downsides, and whether there is a more readable way to do it.
It depends.
If the size of the array n is generally pretty small, in a practical sense, it doesn't matter. Choose what is more readable. Big O complexity comes more into play on a practical level if you're dealing with a lot of array items.
Boolean vars derived from some and every can be very readable in my opinion.
If you are in a situation where the array could be quite large, you could consider trying to do it in one step. Array.prototype.reduce would be a good tool for this: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce
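For instance, here is a minimal one-pass sketch of the hasA/hasB checks from the question, using the same hypothetical 'abc' values:
const { hasA, hasB } = arrayOfObjects.reduce(
  (flags, object) => ({
    hasA: flags.hasA || object.abc === 'val1', // some-style: true once anything matches
    hasB: flags.hasB && object.abc === 'val2', // every-style: false once anything misses
  }),
  { hasA: false, hasB: true }
);
Note that, unlike some and every, reduce cannot bail out early, so this trades short-circuiting for a single traversal.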
In general when you're making use of every and some (which I personally love, as they're very readable), you should stop and think about the necessity of the checks you're making.
To simplify the examples you had, let's say you have a bunch of fruits and you're checking whether:
There is at least one orange.
All of the fruits are apples.
It's clear that
If (1) is true, then (2) must be false
And the other way around:
If (2) is true, then (1) must be false
Then you can spare testing one of them when the other one is true, using else if for example.
Put another way: as soon as one of the checks comes back true, the other is guaranteed to be false, so you simply don't have to run it.
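A minimal sketch of that idea, with hypothetical fruit objects:
const hasOrange = fruits.some(fruit => fruit.name === 'orange'); // (1)
let allApples = false;
if (!hasOrange) {
  // (2) is only worth computing when (1) is false;
  // if there is an orange, "all apples" cannot possibly hold.
  allApples = fruits.every(fruit => fruit.name === 'apple');
}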
I am using this handy function:
function not(a, b) {
return a.filter((value) => b.indexOf(value) === -1);
}
Then I have this
let AvailableList = not(FullList, MyList);
To give an example of the first element in MyList and FullList, it looks like this (as printed in the console):
appName: "MyAppName*"
description: null
permissionID: MyAppID*
permissionName: "MyAppPermName*"
__proto__: Object
So initially AvailableList is full of items like the above. When the user makes a selection and saves it, I save it into MyList, and then I need this not function to calculate the remaining items. But even when I move the first item of AvailableList to MyList and then check with if (AvailableList === MyList) console.log(true); else console.log(false), I get false. Yet if I console.log both arrays' first items, they print identical values. I tried comparing after declaring variables using Object.assign() as well, but no help.
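(The short version of what's happening there: === on arrays and objects compares references, not contents, so two structurally identical values still compare as false. A minimal illustration:)
console.log({ a: 1 } === { a: 1 }); // false - two different objects
console.log([1, 2] === [1, 2]);     // false - two different arrays
const x = { a: 1 };
console.log(x === x);               // true - same reference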
Perhaps I can tackle this easily by checking a value like permissionID, but I don't want to go that route, since there will be other items later on that won't have a permissionID.
If you know of a different function that can do a better job, feel free to share. Please don't share library solutions like Lodash.
*entries are masked
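One possible by-value variant of not, sketched under the assumption that every item is JSON-serializable and its keys always appear in the same order (the function name is illustrative):
function notByValue(a, b) {
  // Serialize each item once so comparison is by content, not by reference.
  const seen = new Set(b.map(item => JSON.stringify(item)));
  return a.filter(item => !seen.has(JSON.stringify(item)));
}
This avoids pinning the comparison to a specific field like permissionID, at the cost of the serialization caveats above.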
Recently I was applying some sort of filter to an array, and I came up with two cases; I don't know which one to use.
In terms of performance, which is better?
arr.filter(someCondition).map(x => x.something)
Or
arr.map(x => {
  if (someCondition) {
    return x.something
  }
})
One thing to note is that I'm using React, so having undefined values in the resulting array (from not returning anything inside .map) is totally acceptable.
This involves a lot of questions, like how many elements you have in the array and how many of them will pass the condition, and that is what makes me wonder which one is better.
So, considering n elements, and cases where all elements pass the condition as well as cases where none do, which one has better performance?
.filter().map() OR .map with if inside?
First: It's really unlikely to matter.
But: the second one will be faster, and only because it's okay in your case to have undefined in the result array.
Let's compare:
The first way, filter has to make a pass through the entire array, creating a new array with the accepted entries, then map has to make a pass through the new array and make a new array of the something properties.
The second way, map makes one pass through the array, creating a new array with the something properties (or undefined for the ones that would have been filtered out).
Of course, this assumes you aren't just offloading the burden elsewhere (e.g., if React has to do something to ignore those undefineds).
But again: It's unlikely to matter.
I would probably use a third option: Build the result array yourself:
const result = [];
for (const x of arr) {
  if (someCondition) {
    result[result.length] = x.something;
  }
}
That still makes just one pass, but doesn't produce an array with undefined in it. (You can also shoehorn that into a reduce, but it doesn't buy you anything but complexity.)
(Don't worry about for-of requiring an iterator. It gets optimized away in "hot" code.)
You could use the reduce function instead of map and filter, and it wouldn't put undefined in the result like map with an if does.
arr.reduce((acc, x) => (someCondition ? [...acc, x.something] : acc), [])
or
arr.reduce((acc, x) => {
  if (someCondition) {
    acc.push(x.something);
  }
  return acc;
}, []);
As #briosheje said, a smaller array is a plus, since it reduces the number of re-renders in the parts of your React app that use this array, and passing undefined around is unnecessary. The reduce version would be more efficient, I would say.
If you are wondering why I wrote the 1st one with the spread operator and the 2nd one without: the 1st one takes more execution time, because the spread operator clones 'acc' on every iteration. So if you want less execution time, go for the 2nd one; if you want fewer lines of code, go for the 1st one.
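If you want to see that difference yourself, a rough timing sketch (the array, condition, and size here are made up; numbers will vary by engine):
const arr = Array.from({ length: 10000 }, (_, i) => ({ something: i }));
const someCondition = true; // stand-in for the real check

console.time('spread');
arr.reduce((acc, x) => (someCondition ? [...acc, x.something] : acc), []); // clones acc every iteration: O(n^2)
console.timeEnd('spread');

console.time('push');
arr.reduce((acc, x) => {
  if (someCondition) acc.push(x.something); // mutates one accumulator: O(n)
  return acc;
}, []);
console.timeEnd('push');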
I wrote this solution to a problem I had, but I don't feel entirely comfortable with it. It seems dangerous, or at least bad practice, to use an array to redefine itself, sort of like defining a word by using the word in the definition. Can anyone explain either why this is wrong, or why it's okay?
let array = []
// other stuff happens to fill the array
array = array.filter(element => element !== true)
The reason I'm doing it this way is that I need that same variable name (array, for these purposes) to remain consistent throughout the function. The contents of the array might be added or removed multiple times before the array gets returned, depending on other behavior happening around it.
Let me know if this is too vague, I'll try to clarify. Thanks!
It's perfectly fine. The right side (array.filter(element => element !== true)) of the assignment is evaluated first, generating a completely new array, and only then is it assigned back into array.
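A quick illustration of that ordering (with made-up values):
let array = [true, 1, 'a', true];
const before = array;                              // keep a reference to the original
array = array.filter(element => element !== true); // filter finishes first, then assignment
console.log(array);            // [1, 'a'] - a brand-new array
console.log(before);           // [true, 1, 'a', true] - the original is untouched
console.log(array === before); // false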
You can also loop through your array backwards and remove the items in place, so you don't allocate another array or reassign the variable:
for (let i = array.length - 1; i >= 0; --i) {
  if (array[i] === true) { // the same elements the filter above would drop
    array.splice(i, 1);
  }
}
This solution is:
Fast
Memory efficient
Will not interfere with any other code that has a reference to your original array.
In my application, I have a very large array of objects on the front-end, and these objects all have some kind of long ID under the key "object_id". I'm using UnderscoreJS for all my manipulations of this list. This is a prototype for an application that will eventually be handling most of this effort on the backend.
Merging the list is a big part of the application's requirements. The list that I work with initially has many distinct objects with identical object_ids. Initially I was merging them all in one go with a groupBy and a map-reduce, but now the requirements have changed and I'm merging them one at a time (about a second apart, to simulate a stream of input) into an initially empty array.
My naive implementation was something like this:
function(newObject, objects) {
  var obj_id = newObject["object_id"]; // id of object to merge, let's say
  var tempObject = null;
  var objectToMerge = _.find(objects, function(obj) {
    return obj_id == obj["object_id"];
  });
  if (objectToMerge) {
    tempObject = merge(objectToMerge, newObject);
    objects = _.reject(objects, /*same function as findWhere*/ );
  } else {
    tempObject = newObject;
  }
  objects.push(tempObject);
  return objects;
}
This is ridiculously more efficient than before, when I was re-merging from the mock data "source" array every time a new object was pushed; it's down from what I think was at least O(N^2) to O(N). But N here is so large (for JavaScript, anyway!) that I'd like to optimize it. The current worst case, where the object_id is not redundant, traverses the entire list twice. So what I'd like is a find-and-replace: an operation that returns a new version of the list, but with the merged object in place of the old one.
I could do a map where the iterator returns a new, merged object iff the object_id matches, but that doesn't have the short-circuit evaluation that _.find has, which makes the difference between having a worst-case runtime and having that be the default runtime, and it doesn't easily account for pushing the object when there was no match.
I'd also like to avoid mutating the original array in place. I know objects.push(tempObject) does that very thing, but for data-binding reasons I'm ignoring that and returning the mutated list as though it were new.
It's also unavoidable that I'll have to check the array to see whether the new object was merged or appended. Using closures I could keep track of a flag to see if the merge happened (see the sketch below), but I'm trying to be as idiomatically LISPy as possible for my own sanity. Also, past a certain point most objects will be merged, so extra runtime overhead for adding new items isn't a huge problem, as long as it is only incurred when it has to happen.
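For what it's worth, a sketch of that single-pass map with a closure-tracked flag, reusing the question's _ (Underscore) and merge (the function name is otherwise illustrative). It still traverses the whole list, but it merges at most once, returns a new array, and only appends when nothing matched:
function mergeOnePass(newObject, objects) {
  var merged = false;
  var result = _.map(objects, function (obj) {
    if (!merged && obj["object_id"] === newObject["object_id"]) {
      merged = true;
      return merge(obj, newObject); // the question's own merge()
    }
    return obj;
  });
  if (!merged) {
    result.push(newObject); // safe: result is a new array, not the input
  }
  return result;
}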
What are the downsides to doing:
var myArray = [];
myArray[myArray.length] = val1;
myArray[myArray.length] = val2;
instead of:
var myArray = [];
myArray.push(val1);
myArray.push(val2);
I'm sure the push method is much more "acceptable", but are there any differences in functionality?
push is way faster, almost 300% faster.
Proof: http://jsperf.com/push-vs-length-test
Since assigning at an index equal to the array's length never leaves a hole, the functionality of those two methods is equal. And yes, using .push() is much cleaner (and shorter).
I've generally thought length assignment was faster. I just found Index vs. push performance, which backs that up, for my Chrome 14 browser anyway, over a single test run. However, there is not much in it in Chrome.
There seems to be discrepancy about which approach is faster among the various JavaScript engines. The differences in speed may be negligible (unless an unholy number of pushes is needed). In that case, the prudent developer should always err on the side of readability. In this case, in my opinion and the opinion of #TheifMaster, [].push() is cleaner and easier to read. Maintenance of code is the most expensive part of coding.
As I tested, the first way is faster; I'm not sure why, and I'm still researching. Also, the ECMA spec doesn't say which one is better, so I think it depends on how each browser vendor implements it.
var b = new Array();
var bd1 = new Date().getTime();
for (var i = 0; i < 1000000; i++) {
  b[b.length] = i;
}
alert(new Date().getTime() - bd1);

var a = new Array();
var ad1 = new Date().getTime();
for (var i = 0; i < 1000000; i++) {
  a.push(i);
}
alert(new Date().getTime() - ad1);
In JS there are 3 different ways you can add an element to the end of an array, and all three have their own use cases.
1) a.push(v), a.push(v1, v2, v3), a.push(...[1, 2, 3, 4]), a.push(..."test")
push is not a very well-thought-out function in JS. It returns the length of the resulting array. How silly. So you can never chain push() in functional programming, unless you want to return the length at the very end. It should have returned a reference to the object it's called upon; it would then still be possible to get the length if needed, like a.push(..."idiot").length. Forget about push if you have intentions of doing something functional.
2) a[a.length] = "something"
This is the biggest rival of a.push("something"), and people fight over it. To me the only two differences are that:
It returns the value added to the end of the array.
It only accepts a single value; it's not as clever as push.
You should use it if the returned value is of use to you.
3) a.concat(v), a.concat(v1, v2, v3), a.concat(...[1, 2, 3, 4]), a.concat([1, 2, 3, 4])
Concat is unbelievably handy. You can use it exactly like push. If you pass the arguments in an array, it will spread them onto the end of the array it's called upon. If you pass them as separate arguments, it will still do the same, like a = a.concat([1, 2, 3], 4, 5, 6); // returns [1, 2, 3, 4, 5, 6]. However, don't do this; it's not so reliable. It's better to pass all arguments in an array literal.
The best thing about concat is that it returns a reference to the resulting array, so it's perfect for functional programming and chaining.
Array.prototype.concat() is my preference.
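For instance, chaining just works, because every step hands back the new array:
const result = [1, 2].concat([3, 4]).concat([5]).map(n => n * 10);
console.log(result); // [10, 20, 30, 40, 50]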
4) A new push() proposal
Actually, one other thing you can do is overwrite the Array.prototype.push() function, like so:
Array.prototype.push = function(...args) {
  return args.reduce(function(p, c) {
    p[p.length] = c; // append each argument in turn
    return p;        // keep passing the array itself along
  }, this);
};
so that it perfectly returns a reference to the array it's called upon.
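With that override in place, push chains the way concat does (though overwriting a built-in prototype is risky in shared code, so treat this purely as an illustration of the proposal):
const a = [1];
console.log(a.push(2, 3).push(4)); // [1, 2, 3, 4] - the same array comes back each time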
I have an updated benchmark here: jsbench.me
Feel free to check which is faster for your current engine. arr[arr.length] was about 40% faster than arr.push() on Chromium 86.