For loop vs forEach in JavaScript [duplicate]

What is the current standard in 2017 in JavaScript: for() loops or .forEach?
I am currently working my way through Colt Steele's "Web Dev Bootcamp" on Udemy, and he favours forEach over for in his teachings. However, while searching for various things during the course exercises, I find more and more recommendations to use a for loop rather than forEach. Most people seem to state that the for loop is more efficient.
Is this something that has changed since the course was written (circa 2015), or are there really pros and cons to each that one learns with more experience?
Any advice would be greatly appreciated.

for
for loops are much more efficient. The for loop is a looping construct specifically designed to iterate while a condition is true, while also offering a stepping mechanism (generally incrementing the iterator). Example:
for (var i = 0, n = arr.length; i < n; ++i) {
    ...
}
This isn't to suggest that for loops will always be more efficient, just that JS engines and browsers have optimized them to be so. Over the years there have been comparisons as to which looping construct is more efficient (for, while, reduce, reverse while, etc.); different browsers and JS engines have their own implementations that offer different methodologies to produce the same results. As browsers further optimize to meet performance demands, theoretically [].forEach could be implemented in such a way that it's faster than or comparable to a for loop.
Benefits:
efficient
early loop termination (honors break and continue; see the sketch after this list)
condition control (i<n can be anything and not bound to an array's size)
variable scoping (var i leaves i available after the loop ends)
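As a quick sketch of the last three benefits (early termination, a free-form condition, and var scoping), assuming a small sample array:
var arr = [3, 7, 42, 9];
var firstBigIndex = -1;
for (var i = 0, n = arr.length; i < n; ++i) {
    if (arr[i] > 10) {
        firstBigIndex = i;
        break; // early termination: the rest of the array is never visited
    }
}
console.log(firstBigIndex); // 2
console.log(i);             // 2, because var i is still in scope after the loop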
forEach
.forEach is a method that primarily iterates over arrays (other enumerables, such as Map and Set objects, provide their own forEach). It is newer and provides code that is subjectively easier to read. Example:
[].forEach((val, index) => {
    ...
});
Benefits:
does not involve variable setup (iterates over each element of the array)
functions/arrow-functions scope the variable to the block
In the example above, val is a parameter of the newly created function, so any variable named val before the loop keeps its value after the loop ends (see the sketch after this list).
subjectively more maintainable as it may be easier to identify what the code is doing -- it's iterating over an enumerable; whereas a for-loop could be used for any number of looping schemes
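A minimal sketch of that shadowing behavior, assuming a val variable also exists outside the loop:
var val = 'outside';
['a', 'b'].forEach((val, index) => {
    // this val is the callback's parameter; it shadows the outer val
    console.log(index, val); // 0 'a', then 1 'b'
});
console.log(val); // 'outside', because the outer variable is untouched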
Performance
Performance is a tricky topic, and reasoning about it ahead of time generally requires experience. To determine, while developing, how much optimization may be required, a programmer needs past experience with the problem case as well as a good understanding of the potential solutions.
Using jQuery may in some cases be too slow (an experienced developer may know that), whereas at other times it is a non-issue, in which case the library's cross-browser compliance and the ease of performing other tasks (e.g., AJAX, event handling) are worth the development (and maintenance) time saved.
Another example: if performance and optimization were everything, there would be no code other than machine or assembly. Obviously that isn't the case, as there are many different high-level and low-level languages, each with its own tradeoffs. These tradeoffs include, but are not limited to, specialization, development ease and speed, maintenance ease and speed, optimized code, error-free code, and so on.
Approach
If you don't have a good understanding of whether something will require optimized code, it's generally a good rule of thumb to write maintainable code first. From there, you can test and pinpoint what needs more attention when it's required.
That said, certain obvious optimizations should be part of general practice and not require any thought. For instance, consider the following loop:
for (var i = 0; i < arr.length; ++i) {}
For each iteration of the loop, JavaScript retrieves arr.length, a key lookup that costs operations on every cycle. There is no reason why this shouldn't be:
for (var i = 0, n = arr.length; i < n; ++i) {}
This does the same thing, but retrieves arr.length only once, caching the value in a variable and optimizing your code.
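If you want to see the difference yourself, here is a rough, engine-dependent sketch using console.time (the absolute numbers mean little; only the comparison does, and modern engines may optimize both forms equally):
var arr = new Array(1e7).fill(0);

console.time('uncached length');
for (var i = 0; i < arr.length; ++i) {} // length looked up every cycle
console.timeEnd('uncached length');

console.time('cached length');
for (var j = 0, n = arr.length; j < n; ++j) {} // length read once
console.timeEnd('cached length');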


Memory management in JavaScript [closed]

I learned programming with C/C++, so memory management in JavaScript has never been intuitive to me.
I know that all variables are located in heap memory rather than in stack memory, so memory operations are quite expensive. If all references to a variable become unreachable, it can be garbage collected, but it looks like V8 won't do garbage collection immediately? (Observed with --trace_gc.)
To free the memory of a global array object, I can set array = null; will array = [] have the same effect? (I need the variable to behave like an array even after I clear it.)
From my experience, String and Number are passed to functions by value, while Object and Array are passed by reference. If a String is very large and the function only reads it (so passing by reference would be safe), will V8 optimize it like that?
ES6 introduces the let keyword for block-scoped declarations, but a single use of let makes the whole function slower, so I still stick to var even though let/const are closer to the C/C++ I am familiar with. (Tested using d8 built right from the master branch; I am aware that V8 developers are actively working on this bug.)
I have been trying to use Chrome DevTools to learn about my code's memory management, but I couldn't figure out what those graphs and charts from the profiler actually mean.
Basically, you use C++ when you want to manage your memory and a whole bunch of quite technical stuff yourself.
If you don't want to, you go for C#/Java, because there a virtual machine manages the memory.
The same goes for JavaScript: the browser manages the memory, and unless you're loading a page with thousands of elements or writing a library to display or compute over thousands of data items, you won't have any memory problems.
Note that array = [] assigns a reference to a new empty array, whereas array = null drops the reference to the existing array. Since it's null, trying to use it as an array won't work, so go for array = [].
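A minimal sketch of the difference between the two assignments:
var a = [1, 2, 3];
a = null;  // [1, 2, 3] becomes collectable, but a.push(4) would now throw

var b = [1, 2, 3];
b = [];    // the old [1, 2, 3] is equally collectable (nothing references it),
           // and the variable still behaves like an array
b.push(4);
console.log(b); // [4]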
If you're still not convinced, then just use an appropriate library to do the work for you.

Why are native array functions so much slower than loops? [duplicate]

The question is in the title, but here is a longer explanation.
A long time ago I learned some nice JavaScript functions like reduce, filter, map, and so on. I really liked them and started to use them frequently (they look stylish, and I thought that because they are native functions they should be faster than my old for loops).
Recently I needed to perform some heavy JS computations, so I decided to check how much faster they are, and to my surprise they are not faster; they are much, much slower (3 to 25 times slower).
I have not checked every function, but here are my jsperf tests for:
filter (25 times slower)
reduce (3 times slower)
map (3 times slower)
So why are native functions so much slower than old-fashioned loops, and what was the point of creating them if they don't do anything better?
I assume the speed loss is due to the invocation of the callback inside them, but that still does not justify such a loss. Also, I cannot see why code written with these functions is more readable, not to mention that they are not supported in every browser.
I think at some point it comes down to the fact that these native functions are more sugar than they are optimizations.
It's not the same as, say, using Array.prototype.splice rather than looping over and doing it yourself, where the implementation is obviously going to be able to do far more under the hood (in memory) than you yourself would be able to.
At some point, with filter, reduce and map, the browser has to loop over your array and perform some operation on each value it contains (just as you do with a loop). It can't reduce the amount of work needed to achieve the same result (it's still looping and performing an operation), but it can give you a more pleasing API and provide error checking, etc., which increases the time.
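To make the extra work concrete, here is a simplified, spec-incomplete sketch of roughly what a filter implementation must do per element; the real native version performs even more checking:
function naiveFilter(arr, callback) {
    var result = [];
    for (var i = 0; i < arr.length; i++) {
        // per element: a hole check plus a full function call with
        // (value, index, array), the call overhead a plain loop avoids
        if (i in arr && callback(arr[i], i, arr)) {
            result.push(arr[i]);
        }
    }
    return result;
}

console.log(naiveFilter([1, 2, 3, 4], function (x) { return x % 2 === 0; })); // [2, 4]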

Why are "continue" statements bad in JavaScript? [closed]

In the book JavaScript: The Good Parts by Douglas Crockford, this is all the author has to say about the continue statement:
The continue statement jumps to the top of the loop. I have never seen a piece of code that was not improved by refactoring it to remove the continue statement.
This really confuses me. I know Crockford has some very opinionated views on JavaScript, but this just sounds entirely wrong to me.
First of all, continue does more than just jump to the top of a loop. By default, it also progresses to the next iteration. So isn't Crockford's statement just completely false information?
More importantly, I do not entirely understand why continue would even be considered to be bad. This post provides what seems to be the general assumption:
Why is continue inside a loop a bad idea?
Although I understand how continue may make code difficult to read in certain instances, I think it is just as likely that it can make code more readable. For instance:
var someArray = ['blah', 5, 'stuff', 7];
for (var i = 0; i < someArray.length; i++) {
    if (typeof someArray[i] === 'number') {
        for (var j = 0; j < someArray[i]; j++) {
            console.log(j);
        }
    }
}
This could be refactored into:
var someArray = ['blah', 5, 'stuff', 7];
for (var i = 0; i < someArray.length; i++) {
    if (typeof someArray[i] !== 'number') {
        continue;
    }
    for (var j = 0; j < someArray[i]; j++) {
        console.log(j);
    }
}
continue isn't particularly beneficial in this specific example, but it demonstrates that it reduces nesting depth. In more complex code, this could potentially increase readability.
Crockford provides no explanation as to why continue should not be used, so is there some deeper significance behind this opinion that I am missing?
The statement is ridiculous. continue can be abused, but it often helps readability.
Typical use:
for (somecondition)
{
    if (!firsttest) continue;
    some_provisional_work_that_is_almost_always_needed();
    if (!further_tests()) continue;
    do_expensive_operation();
}
The goal is to avoid 'lasagna' code, where you have deeply nested conditionals.
Edited to add:
Yes, this is ultimately subjective. Here's my metric for deciding.
Edited one last time:
This example is too simple, of course, and you can always replace nested conditionals with function calls. But then you may have to pass data into the nested functions by reference, which can create refactoring problems at least as bad as the ones you're trying to avoid.
Douglas Crockford may feel this way because he doesn't believe in assignment within a conditional. In fact, his program JSLint doesn't even let you do it, even though JavaScript allows it. He would never write:
Example 1
while (rec = getrec())
{
    if (condition1(rec))
        continue;
    doSomething(rec);
}
but, I'm guessing he would write something like:
Example 2
rec = getrec();
while (rec)
{
    if (!condition(rec))
        doSomething(rec);
    rec = getrec();
}
Both of these work, but if you accidentally mix these styles you get an infinite loop:
Example 3
rec = getrec();
while (rec)
{
    if (condition1(rec))
        continue;
    rec = getrec();
}
This could be part of why he doesn't like continues.
I am personally on the other side from the majority here. The problem is usually not with the continue patterns shown, but with more deeply nested ones, where possible code paths may become hard to see.
But even your example with a single continue does not, in my opinion, show a justifiable improvement. In my experience, a few continue statements are a nightmare to refactor later (even in static languages better suited for automated refactoring, like Java, especially when someone later puts a break in there too).
Thus, I would add a comment to the quote you gave:
Refactoring to remove a continue statement increases your ability to refactor further.
And inner loops are really good candidates for, e.g., an extract-function refactoring. Such refactoring is done when the inner loop becomes complex, and then continue may make it painful.
These are my honest opinions after working professionally on JavaScript projects in a team, where the rules that Douglas Crockford talks about really show their merits.
continue is an extremely useful tool for saving computation cycles in algorithms. Sure, it can be used improperly, but so can every other keyword or approach. When striving for performance, it can be useful to invert the condition of a path-divergence check so that a continue skips the less efficient paths whenever possible, as sketched below.
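A minimal sketch of that inverse approach, with a hypothetical expensiveWork function standing in for the costly path:
function expensiveWork(item) {
    // hypothetical costly computation
}

function processAll(items) {
    for (var i = 0; i < items.length; i++) {
        if (items[i] == null) continue;  // cheap rejection first...
        if (!items[i].active) continue;  // ...and again...
        expensiveWork(items[i]);         // ...so only qualifying items pay
    }
}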
Actually, from all the analysis it seems:
If you have shallow loops - feel free to use continue iff it improves readability (also, there may be some performance gains?).
If you have deep nested loops (which means you already have a hairball to untangle when you re-factor) avoiding continue may prove to be beneficial from a code reliability standpoint.
In defense of Douglas Crockford, I feel that his recommendations tend to lean towards defensive programming, which, in all honesty, seems like a good approach for 'idiot-proofing' code in the enterprise.
Personally, I have never heard anything bad about using the continue statement. It is true that it could (most of the time) be easily avoided, but there is no reason to not use it. I find that loops can be a lot cleaner looking and more readable with continue statements in place.

How can I write faster JavaScript? [closed]

I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot.
Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:
Limit function calls
When possible, use arrays instead of objects and properties
Use variables for math operation results as much as possible
Cache common math operations such as Math.PI / 180
Use sin and cos approximation functions instead of Math.sin() and Math.cos()
Reuse objects when passing around data instead of creating new ones
Replace Math.floor() with ~~ (truncation toward zero, so only equivalent for non-negative numbers)
Study jsperf.com until my eyes bleed
Use a preprocessor on my JavaScript to do some of the above operations
Update (post-closure): here are answers to what I thought I was asking. I'd like to add the following to my own question:
Efficient JavaScript - Dev.Opera
JavaScript Call Performance – Just Inline It
“I want to optimize my JS application on V8” checklist
Measure your performance, find the bottlenecks, then apply the appropriate techniques to help your specific bottlenecks. Premature optimization is fruitless and should be avoided at all costs.
Mind your DOM
– Limit repaint/reflow
Mind your recursion
– Consider iteration or memoization
Mind your loops
– Keep them small; sprinkle setTimeout() liberally if needed (see the sketch after this list)
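As a sketch of the setTimeout() advice above, here is one way to process a long array in chunks so the browser can breathe between them; processItem and chunkSize are hypothetical parameters:
function processInChunks(items, processItem, chunkSize) {
    var i = 0;
    function doChunk() {
        var end = Math.min(i + chunkSize, items.length);
        while (i < end) {
            processItem(items[i]);
            i++;
        }
        if (i < items.length) {
            setTimeout(doChunk, 0); // yield so the page can repaint
        }
    }
    doChunk();
}

// usage: processInChunks(bigArray, doStuff, 500);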
Loops:
Decrease amount of work per iteration
Decrease number of iterations
DOM:
Minimize property access - Cache DOM accessors/objects in local variables before performing operations - especially before loops.
If you need to access items in order frequently, copy into a regular array
Style Property:
Minimize changes on style property
Define CSS class with all changes and just change className property
Set cssText on the element directly
Group CSS changes to minimize repaint/reflow (sketched below)
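A sketch of the className and cssText points above, assuming an element with id 'box' and a pre-defined .highlighted CSS class:
var el = document.getElementById('box');

// instead of several separate writes, each a potential repaint/reflow...
// el.style.color = 'red';
// el.style.fontWeight = 'bold';

// ...swap in one class that defines all the changes:
el.className = 'highlighted';

// or set every change in a single assignment:
el.style.cssText = 'color: red; font-weight: bold;';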
String Matching:
If searching for simple string matches, indexOf should be used instead of regular expression matching wherever possible (see the sketch after this list).
Reduce the number of replace commands you use, and try to optimise into fewer, more efficient replace commands
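A sketch of the indexOf advice: for a literal substring there is no need to involve the regular expression engine.
var log = 'GET /index.html 200';

var withRegex = /index\.html/.test(log);            // compiles a pattern
var withIndexOf = log.indexOf('index.html') !== -1; // plain substring scan

console.log(withRegex, withIndexOf); // true true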
eval is evil:
The 'eval' method, and related constructs such as 'new Function', are extremely wasteful. They effectively require the browser to create an entirely new scripting environment (just like creating a new web page), import all variables from the current scope, execute the script, collect the garbage, and export the variables back into the original environment. Additionally, the code cannot be cached for optimisation purposes. eval and its relatives should be avoided if at all possible.
Only listen to what you need:
Adding an event listener for the BeforeEvent event is the most wasteful of all, since it causes all possible events to fire, even if they are not needed. In general, this can be several thousand events per second. BeforeEvent should be avoided at all costs, and replaced with the appropriate BeforeEvent.eventtype. Duplicate listeners can usually be replaced with a single listener that provides the functionality of several listener functions.
Timers take too much time:
Because a timer normally has to evaluate the given code in the same way as eval, it is best to have as little code as possible inside the evaluated statement. Instead of writing all of the code inside the timeout statement, put it in a separate function, and call the function from the timeout statement. This allows you to use the direct function reference instead of an evaluated string. As well as removing the inefficiency of eval, this will also help to prevent creating global variables within the evaluated code.
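A sketch contrasting the two timer styles just described, with a hypothetical tick function:
function tick() {
    console.log('tick');
}

setTimeout("console.log('tick');", 1000); // avoid: the string is evaluated like eval
setTimeout(tick, 1000);                   // prefer: a direct function reference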

Do modern JavaScript JITers need array-length caching in loops?

I find the practice of caching an array's length property inside a for loop quite distasteful. As in,
for (var i = 0, l = myArray.length; i < l; ++i) {
// ...
}
In my eyes at least, this hurts readability a lot compared with the straightforward
for (var i = 0; i < myArray.length; ++i) {
// ...
}
(not to mention that it leaks another variable into the surrounding function due to the nature of lexical scope and hoisting.)
I'd like to be able to tell anyone who does this "don't bother; modern JS JITers optimize that trick away." Obviously it's not a trivial optimization, since you could e.g. modify the array while it is being iterated over, but I would think given all the crazy stuff I've heard about JITers and their runtime analysis tricks, they'd have gotten to this by now.
Anyone have evidence one way or another?
And yes, I too wish it would suffice to say "that's a micro-optimization; don't do that until you profile." But not everyone listens to that kind of reason, especially when it becomes a habit to cache the length and they just end up doing so automatically, almost as a style choice.
It depends on a few things:
Whether you've proven your code is spending significant time looping
Whether the slowest browser you're fully supporting benefits from array length caching
Whether you or the people who work on your code find the array length caching hard to read
It seems from the benchmarks I've seen (for example, here and here) that performance in IE < 9 (which will generally be the slowest browsers you have to deal with) benefits from caching the array length, so it may be worth doing. For what it's worth, I have a long-standing habit of caching the array length and as a result find it easy to read. There are also other loop optimizations that can have an effect, such as counting down rather than up.
Here's a relevant discussion about this from the JSMentors mailing list: http://groups.google.com/group/jsmentors/browse_thread/thread/526c1ddeccfe90f0
My tests show that all major newer browsers cache the length property of arrays, so you don't need to cache it yourself unless you're concerned about IE6 or IE7 (I don't remember exactly which). However, I have been using another style of iteration since those days, because it gives me another benefit, which I'll describe in the following example:
var arr = ["Hello", "there", "sup"];
for (var i=0, str; str = arr[i]; i++) {
// I already have the item being iterated in the loop as 'str'
alert(str);
}
You must realize that this iteration style stops as soon as it hits a 'falsy' value, so it cannot be used if the array is allowed to contain falsy values.
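If the array may contain falsy values, a sketch of the same idea with an explicit bounds check instead of relying on truthiness:
var arr = ["Hello", "", 0, "sup"]; // contains falsy values

for (var i = 0, str; i < arr.length; i++) {
    str = arr[i]; // still get the element as a named variable
    alert(str);   // now runs for "" and 0 as well
}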
First of all, how is this harder to do or less legible?
var i = someArray.length;
while (i--) {
    // doStuff to someArray[i]
}
This is not some weird cryptic micro-optimization. It's just a basic work avoidance principle. Not using the '.' or '[]' operators more than necessary should be as obvious as not recalculating pi more than once (assuming you didn't know we already have that in the Math object).
[rantish elements yoinked]
If someArray is entirely internal to a function it's fair game for JIT optimization of its length property which is really like a getter that actually counts up the elements of the array every time you access it. A JIT could see that it was entirely locally scoped and skip the actual counting behavior.
But this involves a fair amount of complexity. Every time you do anything that mutates that Array you have to treat length like a static property and tell your array altering methods (the native code side of them I mean) to set the property manually whereas normally length just counts the items up every time it's referenced. That means every time a new array-altering method is added you have to update the JIT to branch behavior for length references of a locally scoped array.
I could see Chrome doing this eventually, but I don't think it does yet, based on some really informal tests. I'm not sure IE will ever make this level of performance fine-tuning a priority. As for the other browsers, you could make a strong argument that the maintenance burden of branching behavior for every new array method is more trouble than it's worth. At the very least, it would not get top priority.
Ultimately, accessing the length property on every loop cycle isn't going to cost you a ton, even in old browsers, for a typical JS loop. But I would advise getting in the habit of caching any property lookup done more than once, because with getter properties you can never be sure how much work is being done or which browsers optimize in what ways. Nor can you know what performance costs you could hit down the road when somebody decides to move someArray outside the function, which could lead to checking the call object in a dozen places before finding what it's looking for, every time you do that property access.
Caching property lookups and method returns is easy, cleans your code up, and ultimately makes it more flexible and performance-robust in the face of modification. Even if one or two JITs did make it unnecessary in circumstances involving a number of 'ifs', you couldn't be certain they always would or that your code would continue to make it possible to do so.
So yes, apologies for the anti-let-the-compiler-handle-it rant, but I don't see why you would ever not want to cache your properties. It's easy. It's clean. And it gives you better performance regardless of the browser, or of the object whose properties are examined moving to an outer scope.
But it really does piss me off that Word docs load as slowly now as they did back in 1995, and that people continue to write horrendously slow Java websites even though Java's VM supposedly beats all non-compiled contenders for performance. I think the notion that you can let the compiler sort out the performance details, and that "modern computers are SO fast", has a lot to do with that. We should always be mindful of work avoidance when the work is easy to avoid and doesn't threaten legibility/maintainability, IMO. Doing it differently has never helped me (or, I suspect, anybody) write code faster in the long term.
