How can I write faster JavaScript? [closed] - javascript

I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot.
Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:
Limit function calls
When possible, use arrays instead of objects and properties
Use variables for math operation results as much as possible
Cache common math operations such as Math.PI / 180 (see the sketch after this list)
Use sin and cos approximation functions instead of Math.sin() and Math.cos()
Reuse objects when passing around data instead of creating new ones
Replace Math.floor() with ~~ (double bitwise NOT) for non-negative numbers
Study jsperf.com until my eyes bleed
Use a preprocessor on my JavaScript to do some of the above operations
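For what it's worth, a minimal sketch of two of the ideas above (caching the degrees-to-radians constant and reusing a scratch object); the names here are made up for illustration:
// Cache the constant once instead of recomputing Math.PI / 180 every frame.
var DEG_TO_RAD = Math.PI / 180;

// Reuse one scratch object instead of allocating a new {x, y} per call.
var scratchPoint = { x: 0, y: 0 };

function rotatePoint(x, y, angleDegrees, out) {
    var a = angleDegrees * DEG_TO_RAD;
    var cos = Math.cos(a);
    var sin = Math.sin(a);
    out = out || scratchPoint;
    out.x = x * cos - y * sin;
    out.y = x * sin + y * cos;
    return out;
}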
Update post-closure: Here are answers to what I thought I was asking. I'd like to add an answer to my own question with the following:
Efficient JavaScript - Dev.Opera
JavaScript Call Performance – Just Inline It
“I want to optimize my JS application on V8” checklist

Measure your performance, find the bottlenecks, then apply the appropriate techniques to help your specific bottlenecks. Premature optimization is fruitless and should be avoided at all costs.
Mind your DOM
– Limit repaint/reflow
Mind your recursion
– Consider iteration or memoization
Mind your loops
– Keep small, sprinkle setTimeout() liberally if needed
Loops:
Decrease amount of work per iteration
Decrease number of iterations
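A small sketch of the first rule, assuming a hypothetical points array: hoist the length lookup and any loop-invariant work out of the body.
var points = [{ x: 1, y: 2 }, { x: 3, y: 4 }];
var scale = 2; // e.g. a device pixel ratio, computed once outside the loop

// Length and scale are evaluated once, so each iteration does less work.
for (var i = 0, len = points.length; i < len; i++) {
    points[i].x *= scale;
    points[i].y *= scale;
}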
DOM:
Minimize property access - Cache DOM accessors/objects in local variables before performing operations - especially before loops.
If you need to access items in order frequently, copy into a regular array
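A sketch of both DOM points, assuming a hypothetical #chart element:
// Cache the lookup once instead of calling document.getElementById in a loop.
var chart = document.getElementById("chart");
var children = chart.childNodes;

// Copy the live NodeList into a plain array before repeated ordered access.
var items = [];
for (var i = 0, len = children.length; i < len; i++) {
    items.push(children[i]);
}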
Style Property:
Minimize changes on style property
Define CSS class with all changes and just change className property
Set cssText on the element directly
Group CSS changes to minimize repaint/reflow
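Roughly what that looks like in code; the #box id and the highlighted class are assumptions:
var box = document.getElementById("box");

// One class change instead of several separate style writes...
box.className = "highlighted";

// ...or one cssText assignment that groups all the changes,
// so the browser can repaint/reflow once.
box.style.cssText = "color: red; background: yellow; border: 1px solid black;";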
String Matching:
If searching for simple string matches, indexOf should be used instead of regular expression matching wherever possible.
Reduce the number of replace commands you use, and try to optimise into fewer, more efficient replace commands
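For a plain substring test, for example, indexOf does the job without the regular expression machinery (a sketch):
var haystack = "canvas-visualization-frame";

// Regular expression version.
var found = /frame/.test(haystack);

// indexOf version: same answer, simpler match.
var alsoFound = haystack.indexOf("frame") !== -1;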
eval is evil:
The 'eval' method, and related constructs such as 'new Function', are extremely wasteful. They effectively require the browser to create an entirely new scripting environment (just like creating a new web page), import all variables from the current scope, execute the script, collect the garbage, and export the variables back into the original environment. Additionally, the code cannot be cached for optimisation purposes. eval and its relatives should be avoided if at all possible.
Only listen to what you need:
Adding an event listener for the BeforeEvent event is the most wasteful of all, since it causes all possible events to fire, even if they are not needed. In general, this can be several thousand events per second. BeforeEvent should be avoided at all costs, and replaced with the appropriate BeforeEvent.eventtype. Duplicate listeners can usually be replaced with a single listener that provides the functionality of several listener functions.
Timers take too much time:
Because a timer normally has to evaluate the given code in the same way as eval, it is best to have as little code as possible inside the evaluated statement. Instead of writing all of the code inside the timeout statement, put it in a separate function, and call the function from the timeout statement. This allows you to use the direct function reference instead of an evaluated string. As well as removing the inefficiency of eval, this will also help to prevent creating global variables within the evaluated code.
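In other words, a minimal sketch:
function updateVisualization() {
    // ...redraw work goes here...
}

// Avoid: the string is evaluated like eval every time the timer fires.
setTimeout("updateVisualization()", 100);

// Prefer: pass the function reference directly.
setTimeout(updateVisualization, 100);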

Related

Why are native array functions so much slower than loops? [duplicate]

This question already has an answer here: Why most JavaScript native functions are slower than their naive implementations? (1 answer). Closed 9 years ago.
The question is in the title, but here is a longer explanation.
A long time ago I learned some nice JavaScript functions like reduce, filter, map and so on. I really liked them and started to use them frequently (they look stylish, and I thought that because they are native functions they should be faster than my old for loops).
Recently I needed to perform some heavy JS computations, so I decided to check how much faster they are, and to my surprise they are not faster; they are much, much slower (from 3 to 25 times slower).
I haven't checked every function, but here are my jsperf tests for:
filter (25 times slower)
reduce (3 times slower)
map (3 times slower)
So why are native functions so much slower than old-fashioned loops, and what was the point of creating them if they don't do anything better?
I assume the speed loss is due to the invocation of the callback function inside them, but that still doesn't justify such a loss. Also, I can't see why the code written with these functions is more readable, not to mention that they aren't supported in every browser.
I think at some point it comes down to the fact that these native functions are more sugar than they are optimizations.
It's not the same as, say, using Array.prototype.splice rather than looping over the array and doing it yourself, where the implementation is obviously able to do far more under the hood (in memory) than you yourself would be able to.
At some point, with filter, reduce and map, the browser still has to loop over your array and perform some operation on each value (just as you do with a loop). It can't reduce the work needed to achieve the same end (it's still looping and performing an operation), but it can give you a more pleasing API and provide error checking, etc., and that extra work costs time.
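A rough illustration of the kind of comparison being measured (not the asker's exact jsperf cases; the numbers array is just an example):
var numbers = [];
for (var i = 0; i < 100000; i++) {
    numbers.push(i);
}

// Hand-rolled filter: one tight loop, no function call per element.
var evens = [];
for (var j = 0, len = numbers.length; j < len; j++) {
    if (numbers[j] % 2 === 0) {
        evens.push(numbers[j]);
    }
}

// Native filter: the same loop under the hood, plus a callback invocation
// and spec-mandated checks for every element.
var evensNative = numbers.filter(function (n) {
    return n % 2 === 0;
});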

JavaScript - What Level of Code Optimization can one expect?

So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, in my opinion the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
and secondly I know that the Java compiler's code optimization will take care of this anyway, i.e. the actual JVM code will end up being inlined, so performance-wise there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine, but as my code will end up running in many different browsers, let's just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point. There are plenty of constructs that are optimized quite effectively; I was just too lazy to write one down. Just imagine the above snippet inside a for loop which is executed 100,000 times.
Don't expect much from the optimizer; there won't be
tail-call optimization,
loop unrolling,
function inlining,
etc.
Since client-side JavaScript is not designed for heavy CPU work, such optimization wouldn't make a huge difference anyway.
There are some guidelines for writing high-performance JavaScript code; most are minor and technical, like:
Don't use certain constructs like eval(), arguments.callee, etc., which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones; e.g., don't write your own containers, JSON parser, etc.
Use local variables instead of global ones (sketched below).
Never use a for-in loop to iterate over an array.
Use parseInt() rather than Math.floor.
AND stay away from jQuery.
All these techniques are more experience-based things and may have reasonable explanations behind them, so you will have to spend some time searching around, or try jsPerf to help you decide which approach is better.
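As a concrete example of the local-variables guideline above, a sketch (not a benchmark) of the kind of pair you might compare on jsPerf:
var data = [1, 2, 3, 4, 5]; // imagine this lives in the global scope

function sumGlobal() {
    var total = 0;
    for (var i = 0; i < data.length; i++) { // resolves the global on every pass
        total += data[i];
    }
    return total;
}

function sumLocal() {
    var local = data; // alias the global once
    var total = 0;
    for (var i = 0, len = local.length; i < len; i++) {
        total += local[i];
    }
    return total;
}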
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; it won't boost your performance much, but it will make your code smaller.
Generally speaking, the final performance depends far more on how well your code is organized and how carefully your algorithm is designed than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, that Bar and FooFactory are stateless, and that someStaticStuff() and getFoo() don't change anything):
for (int i = 0; i < 10000000; i++)
    blub.doSomethingImportantWithAFooObject(
        FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless. So this kind of code should be avoided in any language.
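In JavaScript the fix is the same as anywhere else: hoist the invariant call out of the loop yourself, since the engine can't prove it is safe to do so (this sketch reuses the question's hypothetical names):
// getFoo() and someStaticStuff() now run once instead of 10,000,000 times.
var foo = FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42));
for (var i = 0; i < 10000000; i++) {
    blub.doSomethingImportantWithAFooObject(foo);
}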
You are right, the level of optimization differs from JS VM to VM. But there is a way of working around that: several tools will optimize/minimize your code for you. One of the most popular is Google's Closure Compiler; you can try out the web version, and there is a command-line version for build scripts etc. Besides that, there is not much I would try in the way of optimization, because after all JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, animating huge background images, or pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
Making only the modifications you have to in order to get the job done (i.e., rather than the "HULK SMASH (DOM)!" method of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, calling for JSON data and updating only the minimum number of elements needed), and other common-sense approaches, will get you a lot farther than hours of minor tweaking of a for loop (though, again, common sense will get you pretty far there, too).
Good luck!

Recursive function calling in JavaScript

I know you should tread lightly when making recursive calls to functions in JavaScript because your second call could be up to 10 times slower.
Eloquent JavaScript states:
There is one important problem: in most JavaScript implementations, this second version is about 10 times slower than the first one. In JavaScript, running a simple loop is a lot cheaper than calling a function multiple times.
John Resig even says this is a problem in this post.
My question is: Why is it so inefficient to use recursion? Is it just the way a particular engine is built? Will we ever see a time in JavaScript where this isn't the case?
Function calls are just more expensive than a simple loop due to all the overhead of changing the stack and setting up a new context and so on. In order for recursion to be very efficient, a language has to support some form of tail-call elimination, which basically means transforming certain kinds of recursive functions into loops. Functional languages like OCaml, Haskell and Scheme do this, but no JavaScript implementation I'm aware of does so (it would only be marginally useful unless they all did, so maybe we have a dining philosophers problem).
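To make that concrete, here is a sketch of the same sum written as a tail-recursive function and as the loop that tail-call elimination would effectively turn it into:
// Tail-recursive form: one stack frame per element unless the engine
// eliminates tail calls (which the engines discussed here don't).
function sumRecursive(values, index, acc) {
    if (index >= values.length) {
        return acc;
    }
    return sumRecursive(values, index + 1, acc + values[index]);
}

// The loop a tail-call-eliminating engine would effectively produce:
// one frame, and the "recursion" is just the loop counter.
function sumIterative(values) {
    var acc = 0;
    for (var i = 0; i < values.length; i++) {
        acc += values[i];
    }
    return acc;
}

console.log(sumRecursive([1, 2, 3], 0, 0)); // 6
console.log(sumIterative([1, 2, 3]));       // 6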
This is just a way the particular JS engines the browsers use are built, yes. Without tail call elimination, you have to create a new stack frame every time you recurse, whereas with a loop it's just setting the program counter back to the start of it. Scheme, for example, has this as part of the language specification, so you can use recursion in this manner without worrying about performance.
https://bugzilla.mozilla.org/show_bug.cgi?id=445363 indicates progress being made in Firefox (and Brendan Eich speaks in here about it possibly being made a part of the ECMAScript spec), but I don't think any of the current browsers have this implemented quite yet.

anonymous functions considered harmful? [closed]

The more I delve into JavaScript, the more I think about the consequences of certain design decisions and encouraged practices. In this case, I am looking at anonymous functions, a feature which JavaScript not only provides but which I see heavily used.
I think we can all agree on the following facts:
the human mind does not deal well with more than seven plus or minus two entities (Miller's law)
deep indentation is considered bad programming practice, and generally points to design issues if you indent more than three or four levels. This extends to nested entities, and it's well captured in the Python Zen entry "Flat is better than nested."
the idea of having a function name is both for reference, and for easy documentation of the task it performs. We know, or can expect, what a function called removeListEntry() does. Self-documenting, clear code is important for debugging and readability.
While anonymous functions appear to be a very nice feature, their use leads to deeply nested code. The code is quick to write but difficult to read. Instead of forcing you to invent a named context for a piece of functionality and flatten your hierarchy of callable objects, it encourages you to "go one level deeper", pushing your brain stack and quickly overflowing the 7 +/- 2 rule. A similar concept is expressed in Alan Cooper's "About Face", quoting loosely: "people don't understand hierarchies". As programmers we do understand hierarchies, but our biology still limits our grasp of deep nesting.
I'd like to hear your take on this point. Should anonymous functions be considered harmful, an apparently shiny syntactic sugar which we later find to be salt, or even rat poison?
CW as there's no correct answer.
As I see it, the problem you're facing is not anonymous functions, rather an unwillingness to factor out functionality into useful and reusable units. Which is interesting, because it's easier to reuse functionality in languages with first-class functions (and, necessarily, anonymous functions) than in languages without.
If you see a lot of deeply nested anonymous functions in your code, I would suggest that there may be a lot of common functionality that can be factored out into named higher-order functions (i.e. functions that take or return ("build") other functions). Even "simple" transformations of existing functions should be given names if they are used often. This is just the DRY principle.
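A small sketch of that refactoring; the loadUser/loadPosts/render helpers are invented and stubbed just so the example stands on its own:
// Hypothetical async helpers, stubbed so the sketch runs.
function loadUser(callback) { callback({ id: 1 }); }
function loadPosts(userId, callback) { callback([{ published: true }, { published: false }]); }
function render(posts) { console.log(posts); }

// Deeply nested anonymous callbacks...
loadUser(function (user) {
    loadPosts(user.id, function (posts) {
        render(posts.filter(function (p) { return p.published; }));
    });
});

// ...versus the same flow with the pieces factored out and named.
function publishedOnly(posts) {
    return posts.filter(function (p) { return p.published; });
}

function showPosts(posts) {
    render(publishedOnly(posts));
}

function showUserPosts(user) {
    loadPosts(user.id, showPosts);
}

loadUser(showUserPosts);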
Anonymous functions are more useful functionally than they are harmful legibly. I think that if you format your code well enough, you shouldn't have a problem. I don't have a problem with it, and I'm sure I can't handle 7 elements, let alone 7 + 2 :)
Actually, hierarchies help to overcome the 7±2 rule in the same way OOP does. When you're writing or reading a class, you read its contents and nothing of the outside code, so you are dealing with a relatively small set of entities. When you're looking at class hierarchies, you don't look inside them, so again you are dealing with a small number of entities.
The same is true for nested functions. By dividing your code into multiple levels of hierarchy, you keep each level small enough for the human brain to comprehend.
Closures (or anonymous functions) just help to break your code up in a slightly different way than OOP does, but they don't really create any hierarchies. They are there to help you execute your code in the context of another block of code. In C++ or Java you have to create a class for that; in JavaScript a function is enough. Granted, a standalone class is easier to understand, as it is simply easier for a human to look at it as a standalone block. A function seems much smaller in size, and the brain sometimes thinks it can comprehend it AND the code around it at the same time, which is usually a bad idea. But you can train your brain not to do that :)
So no, I don't think anonymous functions are at all harmful; you just have to learn to deal with them, as you learned to deal with classes.
Amusingly, JavaScript will let you name "anonymous" functions:
function f(x) {
    return function add(y) {
        return x + y;
    };
}
I think closures have enormous benefits which should not be overlooked. For example, Apple leverages "blocks"(closures for C) with GCD to provide really easy multithreading - you don't need to setup context structs, and can just reference variables by name since they're in scope.
I think a bigger problem with Javascript is that it doesn't have block scope(blocks in this case referring to code in braces, like an if statement). This can lead to enormous complications, forcing programmers to use unnecessary closures to get around this Javascript design limitation.
I also think anonymous functions (in newer languages often referred to as closures) have great benefits and often make code more readable and shorter. It sometimes drives me really nuts when I have to work with Java (where closures aren't a first-class language feature).
If indentation and too many encapsulated function variables are the problem, then you should refactor the code to make it more modular and readable.
Regarding JavaScript, I think function variables look quite ugly and clutter the code (the embedded function(...){} string often makes JavaScript code less readable). As an example, I much prefer the closure syntax of Groovy ('{}' and '->' characters).
If a function is not understandable without a name, the name is probably too long.
Use comments to explain cryptic code, don't rely on names.
Whoever came up with the idea of requiring functions to be bound to identifiers did every programmer a disservice. If you've never done functional programming and you're not familiar and comfortable with functions being first-class values, you're not a real programmer.
In fact, to counter your own argument, I would go so far as to consider functions bound to (global) names to be harmful! Check Crockford's article about private and public members and learn more.

What anti-patterns exist for JavaScript? [closed]

I find that what not to do is a harder lesson to learn than what should be done.
From my experience, what separates an expert from an intermediate is the ability to select from among various seemingly equivalent ways of doing the same thing.
So, when it comes to JavaScript what kinds of things should you not do and why?
I'm able to find lots of these for Java, but since JavaScript's typical context (in a browser) is very different from Java's I'm curious to see what comes out.
Language:
Polluting the global namespace by creating a large footprint of variables in the global context (see the sketch after this list).
Binding event handlers in the form 'foo.onclick = myFunc' (inextensible, should be using attachEvent/addEventListener).
Using eval in almost any non-JSON context
Almost every use of document.write (use the DOM methods like document.createElement)
Prototyping against the Object object (BOOM!)
A small one this, but doing large numbers of string concats with '+' (creating an array and joining it is much more efficient)
Referring to the non-existent undefined constant
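A sketch of the namespace point: hang everything off one application object instead of scattering globals (the MyApp name is made up):
// Instead of loose globals like these...
// var width = 640;
// var height = 480;
// function redraw() { /* ... */ }

// ...use a single global namespace object.
var MyApp = {
    width: 640,
    height: 480,
    redraw: function () {
        // ...drawing code...
    }
};

MyApp.redraw();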
Design/Deployment:
(Generally) not providing noscript support.
Not packaging your code into a single resource
Putting inline (i.e. body) scripts near the top of the body (they block loading)
Ajax specific:
not indicating the start, end, or error of a request to the user
polling
passing and parsing XML instead of JSON or HTML (where appropriate)
Many of these were sourced from the book Learning JavaScript Design Patterns by Addy Osmani: https://www.oreilly.com/library/view/learning-javascript-design/9781449334840/ch06.html
edit: I keep thinking of more!
Besides those already mentioned...
Using the for..in construct to iterate over arrays
(it iterates over all enumerable property names, including anything added to the array or its prototype, not just the numeric indices; see the sketch after this list)
Using Javascript inline like <body onload="doThis();">
(inflexible and prevents multiple event listeners)
Using the 'Function()' constructor
(bad for the same reasons eval() is bad)
Passing strings instead of functions to setTimeout or setInterval
(also uses eval() internally)
Relying on automatic semicolon insertion by omitting semicolons
(a bad habit to pick up, and it can lead to unexpected behavior)
Using /* .. */ to block out lines of code
(can interfere with regex literals, e.g.: /* /.*/ */)
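For the for..in point at the top of this list, the difference looks roughly like this:
var letters = ["a", "b", "c"];
letters.extra = "not an index"; // e.g. added by some other script

// for..in yields every enumerable property name (as strings), not just 0..length-1.
for (var key in letters) {
    console.log(key); // "0", "1", "2", "extra"
}

// A plain for loop only visits the numeric indices.
for (var i = 0; i < letters.length; i++) {
    console.log(letters[i]); // "a", "b", "c"
}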
<evangelism>
And of course, not using Prototype ;)
</evangelism>
The biggest for me is not understanding the JavaScript programming language itself.
Overusing object hierarchies and building very deep inheritance chains. Shallow hierarchies work fine in most cases in JS.
Not understanding prototype based object orientation, and instead building huge amounts of scaffolding to make JS behave like traditional OO languages.
Unnecessarily using OO paradigms when procedural / functional programming could be more concise and efficient.
Then there are those for the browser runtime:
Not using good established event patterns like event delegation or the observer pattern (pub/sub) to optimize event handling.
Making frequent DOM updates (like .appendChild in a loop) when the DOM nodes can be built in memory and appended in one go; see the sketch after this list. (HUGE performance benefit.)
Overusing libraries for selecting nodes with complex selectors when native methods can be used (getElementById, getElementsByTagName, etc.). This is becoming less of an issue these days, but it's worth mentioning.
Extending DOM objects when you expect third-party scripts to be on the same page as yours (you will end up clobbering each other's code).
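For the frequent-DOM-updates point, the usual fix is a DocumentFragment; a sketch, assuming a hypothetical #list element:
var list = document.getElementById("list");
var fragment = document.createDocumentFragment();

// Build the nodes in memory...
for (var i = 0; i < 100; i++) {
    var item = document.createElement("li");
    item.appendChild(document.createTextNode("Item " + i));
    fragment.appendChild(item);
}

// ...then touch the live DOM once.
list.appendChild(fragment);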
And finally the deployment issues.
Not minifying your files.
Web-server configs - not gzipping your files, not caching them sensibly.
<plug> I've got some client-side optimization tips which cover some of the things I've mentioned above, and more, on my blog.</plug>
browser detection (instead of testing whether the specific methods/fields you want to use exist; see the sketch below)
using alert() in most cases
see also Crockford's "JavaScript: The Good Parts" for various other things to avoid. (edit: warning, he's a bit strict in some of his suggestions, like the use of "===" over "==", so take them with whatever grain of salt works for you)
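Feature detection instead of browser detection looks something like this (a sketch; handleClick is a placeholder):
// Brittle: guessing capabilities from the user agent string.
var isOldIE = /MSIE [67]/.test(navigator.userAgent);

// Better: test for the method you actually want to use.
if (document.addEventListener) {
    document.addEventListener("click", handleClick, false);
} else if (document.attachEvent) {
    document.attachEvent("onclick", handleClick);
}

function handleClick() {
    // ...
}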
A few things right on top of my head. I'll edit this list when I think of more.
Don't pollute the global namespace. Organize things in objects instead.
Don't omit 'var' for variables. That pollutes the global namespace and might get you in trouble with other scripts on the page.
any use of 'with'
with (document.forms["mainForm"].elements) {
    input1.value = "junk";
    input2.value = "junk";
}
any reference to document.all in your code, unless it is within special code just for IE to overcome an IE bug (cough document.getElementById() cough)
Bad use of brace positioning when creating statements
You should always put the opening brace on the same line as the preceding statement, because of automatic semicolon insertion.
For example this:
function getPrice()
{
    return
    {
        price: 10
    }
}
differs greatly from this:
function getPrice(){
    return {
        price: 10
    }
}
Because in the first example, JavaScript will insert a semicolon for you, actually leaving you with this:
function getPrice()
{
    return; // oh dear!
    {
        price: 10
    }
}
Using setInterval for potentially long running tasks.
You should use setTimeout rather than setInterval for occasions where you need to do something repeatedly.
If you use setInterval but the function executed by the timer hasn't finished by the time the timer next ticks, that's bad. Instead, use the following pattern with setTimeout:
function doThisManyTimes(){
    alert("It's happening again!");
}

(function repeat(){
    doThisManyTimes();
    setTimeout(repeat, 500);
})();
This is explained very well by Paul Irish in his "10 Things I Learned from the jQuery Source" video.
Not using a community based framework to do repetitive tasks like DOM manipulation, event handling, etc.
Effective caching is seldom done:
Don't store a copy of the library (jQuery, Prototype, Dojo) on your server when you can use a shared API like Google's Libraries API to speed up page loads
Combine and minify all the scripts you can into one file
Use mod_expires to give all your scripts an infinite cache lifetime (so they are never re-downloaded)
Version your JavaScript filenames so that clients pick up a new release without having to clear their cache
(i.e. myFile_r231.js, or myFile.js?r=231)
