I am a little bit confused about every() and the for loop. Which is the best way to loop over array elements and test them against a condition? For example:
function isBigEnough(element, index, array) {
    return element >= 10;
}
[12, 5, 8, 130, 44].every(isBigEnough); // false
[12, 54, 18, 130, 44].every(isBigEnough); // true
or
var arr = [12, 5, 8, 130, 44];
for (var i = 0; i < arr.length; i++) {
    if (arr[i] >= 10)
        console.log(arr[i]);
}
Which one has the best performance for checking/filtering elements of an array? I hope you can clear up my doubts.
First, instead of every(), use the some() function; it is more apt in your case (or filter(), if you want to collect the matching elements rather than just test for them).
Second, using the native array functions is generally better in terms of code readability, and sometimes performance, as your app scales.
Plus, they have the additional benefit of fitting naturally into object-oriented code, since they are methods on the array itself.
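To illustrate the difference (a quick sketch; note that filter() is arguably the closest match to the console.log loop in the question, since it collects the passing elements rather than returning a boolean):

var arr = [12, 5, 8, 130, 44];

// every: do ALL elements pass the test?
arr.every(function (el) { return el >= 10; });  // false (5 and 8 fail)

// some: does AT LEAST ONE element pass the test?
arr.some(function (el) { return el >= 10; });   // true (12 passes)

// filter: collect the elements that pass
arr.filter(function (el) { return el >= 10; }); // [12, 130, 44]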
The best approach is to ignore issues like that unless you have a good reason to suspect that performance is going to be a factor.
Use the cleanest, most readable approach by default. Only consider performance when you think it's likely to be a bottleneck (and a bother). Iterating over ten elements isn't. And even then, the answer is that you need to profile the time for yourself: just try both possibilities, measure the difference, and if it's big enough, pick the faster one. Saving 5% of run time usually isn't worth the cost.
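If you do decide to measure, a rough sketch like this is enough (using console.time/console.timeEnd and the isBigEnough function from the question; the iteration count is arbitrary):

var arr = [12, 5, 8, 130, 44];

console.time('every');
for (var run = 0; run < 1000000; run++) {
    arr.every(isBigEnough);
}
console.timeEnd('every');

console.time('for loop');
for (var run2 = 0; run2 < 1000000; run2++) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] >= 10) { /* work */ }
    }
}
console.timeEnd('for loop');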
I would go for the simplest approach, and that is the for loop.
every() is an ES5 method that older browsers don't support, which means you would have to add more code to check for it or polyfill it.
Therefore the for loop is the best one to use.
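For instance, supporting every() in older browsers means shipping something like this first (a simplified sketch, not the full ES5-compliant polyfill, which also skips holes in sparse arrays and validates its arguments):

if (typeof Array.prototype.every !== 'function') {
    Array.prototype.every = function (callback, thisArg) {
        for (var i = 0; i < this.length; i++) {
            if (!callback.call(thisArg, this[i], i, this)) {
                return false;
            }
        }
        return true;
    };
}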
Thanks
Related
http://jsperf.com/loops/67
If you look, the following loop manages some insane benchmarks:
var i = 0;
var v;
for (i, v; v = arr[i++];) {
    v;
}
It scores ~700 million ops/sec in Firefox, ~20 million in Chrome, and ~50 million in IE10. The next fastest loop manages ~100k in Firefox, ~6k in IE10, and barely ~2k in Chrome.
Why is it so fast? I can see the obvious differences between the other loops and why one is faster than another, but I can't come up with anything that would explain the absolutely mind-blowing performance difference with this loop; 700 million to 100k is an insane gap.
Edit after answers:
Based on @Michael Geary's answer, I went back and edited the setup to include an actual, populated array, and the results fell back to reality: http://jsperf.com/loops/70
The reason is simple. The array arr is created with this code:
var arr = new Array(10000);
So it has a length of 10000, but all of the elements are undefined. This loop doesn't work off the array length, but terminates when it encounters a "falsy" value - the assumption being that the loop will stop when v receives an undefined value as a result of trying to read past the end of the array.
But in this particular array, all ten thousand elements have the value undefined. So the loop stops when it tests the very first element of the array. In other words, it doesn't loop at all! No wonder it's fast.
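You can sanity-check this in a console (a quick sketch):

var arr = new Array(10000);
arr.length;  // 10000
arr[0];      // undefined - "falsy"

var count = 0;
for (var i = 0, v; v = arr[i++];) {
    count++;
}
count;       // 0 - the loop body never ran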
But what about a more real-world case? How does this kind of loop fare with a lengthy JSON array of objects:
[
    { "id": 507674, "name": "Kolink" },
    { "id": 997356, "name": "DarkLord7854" },
    { "id": 1202830, "name": "Michael Geary" },
    /* and thousands more */
]
Here you don't have the problem of the loop terminating immediately, since the array elements are all "truthy".
With modern JavaScript engines this turns out to be a fairly poor way to write a loop, as I recently found out to my extreme embarrassment.
I was one of the authors of the jQuery Cookbook: I wrote most of Chapter 5, "Faster, Simpler, More Fun". Well, the "faster" part didn't turn out so well. I was recommending a loop very much like yours for iterating over a large array of objects like the one above:
for( var item, i = -1; item = array[++i]; ) {
    // do stuff with item
}
Turns out that in modern browsers, this is quite a bit slower than a conventional loop like this:
for( var i = 0, n = array.length; i < n; i++ ) {
    var item = array[i];
    // do stuff with item
}
Part of this is due to the fact that trying to read past the end of the array throws some JavaScript engines back into a deoptimized way of representing the array, as one of the V8 authors explained to me at Google I/O last year. Part of it may be due to the browsers optimizing the more common kinds of loops and not optimizing this less common approach.
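To make the problematic read concrete (a sketch):

var array = [1, 2, 3];

// The ++i-style loop's final test reads one slot past the end:
array[3]; // undefined - an out-of-bounds read, which (per the V8
          // explanation above) can push the engine into a slower
          // internal representation for the array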
Either way, the more conventional loop turns out to be faster in modern browsers:
http://jsperf.com/mikes-loops/2
But that's a different case from your loop. In yours, the insane performance increase is directly due to the fact that it doesn't run the loop at all. :-)
arr is initialised to an array containing 10000 lots of bugger all. Array(10000) prepares the array's length, but does not populate it in any way.
Therefore, arr[0] will be undefined, which is falsy, so the for loop terminates immediately.
Essentially, the code boils down to:
var i = 0;
var v;
i, v;          // doesn't do anything but access the variables
v = undefined; // the loop condition: v = arr[0], which is undefined, hence falsy
i++;
I have an array, and I want to check whether any two values within it sum to the value passed into the function; if two integers do, push them into a new array.
I solved this with two backwards while loops, caching the length in a variable, which seemed efficient. However, someone mentioned to me that there might be a way to remove the need for one of the loops, making it much more efficient and thus improving the Big-O complexity.
Any ideas how this could be done? This is what I have...
var intArray = [1, 3, 7, 8, 10, 4, 6, 13, 0],
    newArray = [],
    i = intArray.length;

function arrayCheck(k) {
    while (i--) {
        var z = i;
        while (z--) {
            if (intArray[i] + intArray[z] === k) {
                newArray.push(intArray[i]);
                newArray.push(intArray[z]);
            }
        }
    }
    alert(newArray);
}

arrayCheck(8);
There is an algorithm that solves this problem in linear [O(n)] time. I recommend you check out this SO answer.
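Roughly, the linear-time idea looks like this (my own sketch using a plain object as a "seen" set, not the linked answer's exact code):

function pairsForSum(intArray, k) {
    var seen = {};
    var result = [];
    for (var i = 0; i < intArray.length; i++) {
        var complement = k - intArray[i];
        // If we've already seen the complement, this pair sums to k
        if (seen[complement]) {
            result.push(complement, intArray[i]);
        }
        seen[intArray[i]] = true;
    }
    return result; // one pass over the array: O(n)
}

pairsForSum([1, 3, 7, 8, 10, 4, 6, 13, 0], 8); // [1, 7, 8, 0]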
Also, as others have stated, marking answers as accepted will make people more likely to answer your questions. You may wish to revisit questions you've previously asked and accept any answers that deserve it.
If you check for number N, and intArray[i] = M, then you need to find the value N - M in the array.
Build an efficient tree search to find the values N - M and you can solve this in O(n log n): building the tree (or sorting) costs O(n log n), and each of the n lookups costs O(log n).
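Sketched out (using a sorted copy plus binary search instead of an explicit tree, which gives the same O(log n) lookups; the function name is mine):

function pairsViaSearch(intArray, k) {
    var sorted = intArray.slice().sort(function (a, b) { return a - b; });
    var result = [];
    for (var i = 0; i < sorted.length; i++) {
        var target = k - sorted[i];      // the N - M we need to find
        var lo = i + 1, hi = sorted.length - 1;
        while (lo <= hi) {               // O(log n) binary search
            var mid = (lo + hi) >> 1;
            if (sorted[mid] === target) {
                result.push(sorted[i], target);
                break;
            }
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return result; // O(n log n) overall: the sort plus n searches
}

pairsViaSearch([1, 3, 7, 8, 10, 4, 6, 13, 0], 8); // [0, 8, 1, 7]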
According to this JSPerf, the "while not undefined" style of array loop is ~10x faster in all browsers. What are the disadvantages of this style of loop?
It is actually a bug in the test case: the iterator is not reset to zero for each test run (i.e., only the first run rolls through the full loop; subsequent runs start with the iterator already past the end of the array, and thus loop zero times). See the corrected test suite for the true benchmark.
(Note: I haven't inspected all test cases, others might be flawed as well)
As you can see, we're talking about 10x faster/slower on a scale of millions of operations per second, which is not significant enough to worry about.
A possible disadvantage of that style is readability to other developers, which is more important than the "performance boost".
Judge yourself, what's more readable?
// Chrome 21: 5M operations per second
var a;
while ((a = arr[i++]) !== undefined) {
    someFn(a);
}
// Chrome 21: 324k operations per second
for (var i = 0; i < arr.length; i++) {
    someFn(arr[i]);
}
The major disadvantage I can see is that you can't break out of the loop! You'll hit an unresponsive UI in no time with that.
Disadvantages:
1. If a[i] has been assigned a value, it is no longer undefined, so the loop may run over more elements than you anticipated.
2. Readability: it is difficult to know where the end point of the loop is (unless you put in some comments ;)
Nothing else.
The new revision still doesn't differ that much, but if it's speed you need, then comment well and test well.
If your function someFn(a) takes longer than the ones in these tests, I'd recommend benchmarking your own loop, if it is that important.
If not, always stick to tidy coding.
What's the best way to generate 5 random, non-duplicating integers from 0 to 20?
I'm thinking: use Math.random with Math.floor, loop it 5 times, check for duplicates, and if there's a duplicate, generate a new random number.
What's your way?
You could generate an array of numbers from 0 to 20, shuffle it and take the first 5 elements of the resulting array.
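For example (a sketch; pickUnique is just an illustrative name, and the shuffle is the Fisher-Yates algorithm mentioned elsewhere on this page):

function pickUnique(count, max) {
    // Build the pool [0, 1, ..., max]
    var pool = [];
    for (var i = 0; i <= max; i++) pool.push(i);

    // Fisher-Yates shuffle: swap each slot with a random earlier slot
    for (var j = pool.length - 1; j > 0; j--) {
        var r = Math.floor(Math.random() * (j + 1));
        var tmp = pool[j];
        pool[j] = pool[r];
        pool[r] = tmp;
    }

    return pool.slice(0, count);
}

pickUnique(5, 20); // e.g. [3, 17, 0, 12, 9]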
Late answer, I know, but:
var a = [];
while (a.length < 5) {
    var n = Math.floor(Math.random() * 21); // 0-20 inclusive; floor over 21 values keeps the distribution uniform
    if (a.indexOf(n) == -1) a.push(n);
}
// => e.g. [14, 17, 19, 2, 8]
Edit: A better solution than this one or the others posted here can be found in this answer to this question when it was asked back in 2008. Summarizing: generate an array (as Darin suggests in his answer below) and shuffle it using the Fisher-Yates (Knuth) shuffle. Don't use a naive shuffle; use one that's known to have good results.
That's pretty much how I'd do it, yes. I'd probably use an object to keep track of the integers I already had, since that's convenient. E.g.:
var ints = {};
Then once you've created a new random number, check it and possibly keep it:
if (!ints[number]) {
    // It's a keeper
    ints[number] = true;
    results.push(number);
}
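Put together, the whole thing might look like this (a sketch; the names are mine):

function randomUniqueInts(count, max) {
    var ints = {};     // tracks the integers we already have
    var results = [];
    while (results.length < count) {
        var number = Math.floor(Math.random() * (max + 1));
        if (!ints[number]) {
            ints[number] = true;
            results.push(number);
        }
    }
    return results;
}

randomUniqueInts(5, 20); // e.g. [7, 0, 19, 4, 11]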
That is, would I be better off using some kind of tree or skip-list data structure if I need to call this function a lot for individual array insertions?
You might consider whether you want to use an object instead; all JavaScript objects (including Array instances) are (highly optimized) sets of key/value pairs with an optional prototype. An implementation should (note I don't say "does") have a reasonably performant hashing algorithm. (Update: That was in 2010. Here in 2018, objects are highly optimized on all significant JavaScript engines.)
Aside from that, the performance of splice is going to vary a lot between implementations (e.g., vendors). This is one reason why "don't optimize prematurely" is even more appropriate advice for JavaScript applications that will run in multiple vendor implementations (web apps, for instance) than it is even for normal programming. Keep your code well modularized and address performance issues if and when they occur.
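For instance, if what you really need is fast insertion and membership testing rather than ordered storage, a plain object can replace the spliced array entirely (a sketch):

var set = {};

set[42] = true;   // "insert" is a property assignment - nothing to shift

if (set[42]) {    // membership test is a key lookup
    // present
}

delete set[42];   // removal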
Here's a good rule of thumb, based on tests done in Chrome, Safari and Firefox: Splicing a single value into the middle of an array is roughly half as fast as pushing/shifting a value to one end of the array. (Note: Only tested on an array of size 10,000.)
http://jsperf.com/splicing-a-single-value
That's pretty fast. So, it's unlikely that you need to go so far as to implement another data structure in order to squeeze more performance out.
Update: As eBusiness points out in the comments below, the test performs an expensive copy operation along with each splice, push, and shift, which means that it understates the difference in performance. Here's a revised test that avoids the array copying, so it should be much more accurate: http://jsperf.com/splicing-a-single-value/19
Move single value
// The splice version this replaces (splice is slow in FF):
// tmp = arr[1][i];
// arr[1].splice(i, 1);
// arr[1].splice(end0_1, 0, tmp);

// Manual version: shift each element down one slot, then drop the
// saved value into the vacated slot at position end0_1.
tmp = arr[1][i];
ii = i;
while (ii < end0_1) {
    // relies on JS evaluating the left-hand index before ++ii runs
    arr[1][ii] = arr[1][++ii];
    cycles++;
}
arr[1][end0_1] = tmp;