The question is in the title, but here is a longer explanation.
A long time ago I learned some nice JavaScript functions like reduce, filter, map and so on. I really liked them and started to use them frequently (they look stylish, and I assumed that because they are native functions they should be faster than my old for loops).
Recently I needed to perform some heavy JS computations, so I decided to check how much faster they actually are, and to my surprise they are not faster at all; they are much, much slower (from 3 to 25 times slower).
I have not checked every function, but here are my jsperf tests for:
filter (25 times slower)
reduce (3 times slower)
map (3 times slower)
So why are native functions so much slower than old loops, and what was the point of creating them if they don't do anything better?
I assume the speed loss is due to the function invocation inside each of them, but that still does not justify such a loss. Also, I can't see why code written with these functions is more readable, not to mention that they are not supported in every browser.
I think at some point it comes down to the fact that these native functions are more sugar than they are optimizations.
It's not the same as, say, using Array.prototype.splice rather than looping and doing it yourself, where the implementation can obviously do far more under the hood (in memory) than you would be able to.
At some point, with filter, reduce and map, the browser still has to loop over your array and perform some operation on each value (just as you do with a loop). It can't reduce the amount of work needed to achieve the same end (it's still looping and performing an operation), but it can give you a more pleasing API and provide error checking etc., and that extra work increases the time.
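To make that concrete, here is a rough sketch of the kind of micro-benchmark the jsperf links above run; the array size and the even-number predicate are illustrative, not the original test code.

// build some sample data
var numbers = [];
for (var i = 0; i < 100000; i++) {
    numbers.push(i);
}

// hand-rolled loop: one comparison and one conditional push per element
function filterLoop(arr) {
    var result = [];
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] % 2 === 0) {
            result.push(arr[i]);
        }
    }
    return result;
}

// native filter: the same loop, but with a callback invocation (plus the
// spec-mandated checks, e.g. for holes in the array) on every element
function filterNative(arr) {
    return arr.filter(function (x) {
        return x % 2 === 0;
    });
}

console.time('loop');
filterLoop(numbers);
console.timeEnd('loop');

console.time('native filter');
filterNative(numbers);
console.timeEnd('native filter');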
Related
Is it better to store js functions in Set or Array?
As I understand it, a Set is a binary tree, and to store things in a Set there must be a compare function.
By "better", I mean read and write performance, search and delete, and finally looping over all entries. I will call these functions in each frame, so it is very important to save time.
I don't care much about memory, because there are not many functions: around 100-500 functions per array, and 5-10 arrays. Overall, 1000-5000 functions.
I know that a Set is better for adding and removing, and an array is better for iterating over elements, but I can't understand how this works for functions.
In all cases of performance, you should profile both, since the performance characteristics can change from year to year. You're probably not going to notice any measurable performance difference until you get into the hundreds of thousands of items.
Based on the specs you defined in your question, the simple answer is: it doesn't matter. Functions are first-class citizens in JavaScript, so there is no difference between how an array or set works for data and how it works for functions.
You're not going to run into any performance issues iterating over a few thousand functions every frame; this is core JavaScript functionality and is highly optimized already. The place where you'll need to optimize is the functions themselves, since the more complex they are, the longer each one will take to run.
Rather than worry about it now, just use an array, build your project, and see if you even hit any performance bottlenecks. Then you can profile your code to see where those bottlenecks are happening. They most likely will not be due to the array manipulation but to the code run in the functions themselves.
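As a minimal sketch of that suggestion (the tick function and the placeholder callbacks are just examples, not code from your project):

var callbacks = [];

// register a few hundred update functions; real callbacks would update state
for (var i = 0; i < 500; i++) {
    callbacks.push(function (dt) {
        return dt * 2; // placeholder work
    });
}

// called once per frame; a plain indexed loop over a few hundred functions
// is cheap, the real cost is whatever each callback does internally
function tick(dt) {
    for (var i = 0; i < callbacks.length; i++) {
        callbacks[i](dt);
    }
}

tick(16); // e.g. roughly once per frame at 60fps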
I recently read this article, which tries to explain how JavaScript's ability to manipulate functions could be used to let every computer in the world do a small part of processing all the information on the internet. The way I understand it is this:
function map(fn, a)
{
    // apply fn to every element of a, in place
    // (note that "i" needs to be declared, otherwise it leaks as a global)
    for (var i = 0; i < a.length; i++)
    {
        a[i] = fn(a[i]);
    }
}
the function map allows you to quickly apply a function to every element in an array
map( function(x){return x*2;}, a );
and JS allows you to pass a function without declaring it separately. The premise is that if all the data on the internet were stored as an array, you could (somehow, using map) split the task of making some specific change to every item in the array between several CPUs, or all the computers of the world.
This is the part I do not understand - why do you need map or JS's array manipulation to do this? Couldn't you just send every computer a section of the array, send them the function to run on every element in the array, and have them convert the array without needing to execute map or any number of wacky function usage?
Sure, using a function as an object seems convenient, but why is this at all integral to the task of splitting tasks between CPUs?
No, you are jumping to the wrong conclusions here. Joel is not advocating using JavaScript to "let every computer in the world do a small part in processing all the information on the internet". He is using JavaScript as a language of choice to demonstrate the functionality of map and reduce functions (which, by the way, could be defined much more generically than only for arrays). He then leaves the realm of JavaScript entirely, musing that programming languages need a certain level of abstraction (first-class functions) to be of any help:
Programming languages with first-class functions let you find more opportunities for abstraction, which means your code is smaller, tighter, more reusable, and more scalable.
The reason map and reduce are so useful as concepts (independent of any particular language implementation) is that they are absolutely generic, able to express any kind of aggregation of data just by passing different functions. As long as those functions are pure, the work is trivially parallelizable and can run on multi-core machines or even internet-scale clusters without changing the algorithm or the result.
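To illustrate that genericity, here is a minimal sketch (not Joel's code) of a reduce defined once and reused for different aggregations simply by passing different pure functions:

// a generic reduce over arrays
function reduce(fn, initial, a) {
    var acc = initial;
    for (var i = 0; i < a.length; i++) {
        acc = fn(acc, a[i]);
    }
    return acc;
}

var values = [1, 2, 3, 4, 5];

// the same reduce expresses three different aggregations
var sum     = reduce(function (acc, x) { return acc + x; }, 0, values);                   // 15
var product = reduce(function (acc, x) { return acc * x; }, 1, values);                   // 120
var max     = reduce(function (acc, x) { return x > acc ? x : acc; }, -Infinity, values); // 5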
MapReduce was how Google did its search processing in the early years, leveraging lots of computers.
What I don't think is clearly communicated is this: if you don't do the iteration yourself with for loops, but instead give map a function that takes a value and produces a new value, then the map implementation itself can work out how to do the work in parallel.
A for loop can't work that out; you'd have to hand-roll your own parallel implementation. You can do parallel work both ways, nothing is stopping that, but it's more a question of which is easier / simpler / less error-prone.
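As a rough illustration (not a real parallel implementation), here is a sketch of why map's contract makes that possible: because the callback only ever sees a single element, a smarter map could split the array into independent chunks and hand them out without changing the result.

// split the work into chunks; each chunk.map(fn) call is independent and
// could be shipped to a Web Worker or a remote machine, then the partial
// results are concatenated in order
function chunkedMap(fn, a, chunkSize) {
    var results = [];
    for (var start = 0; start < a.length; start += chunkSize) {
        var chunk = a.slice(start, start + chunkSize);
        results.push(chunk.map(fn));
    }
    return [].concat.apply([], results);
}

var doubled = chunkedMap(function (x) { return x * 2; }, [1, 2, 3, 4, 5, 6], 2);
// [2, 4, 6, 8, 10, 12] – the same output a plain map would produce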
For a useful introduction to functional programming in JS, you may want to have a look at https://drboolean.gitbooks.io/mostly-adequate-guide/content/
According to this answer to 'Is object empty?':
// Speed up calls to hasOwnProperty
var hasOwnProperty = Object.prototype.hasOwnProperty;
I've seen several implementations of something similar in small JavaScript libraries, like:
var slice = Array.prototype.slice;
// or
function slice(collection) {
    return Array.prototype.slice.call(collection);
}
I did a quick jsperf to test this sort of thing, and caching looked a bit quicker overall than not caching, but my test could be flawed.
(I am using the word 'cache' to mean storing the method inside a variable.)
The context of this question is when a developer needs to call the native method multiple times, and what the observable difference would be.
Does caching the native method save the engine from having to look the method up on the object (and its prototype chain) every time it is called, thus making caching a faster way to call native methods whenever the developer needs the same native method more than once?
When you're using Array.prototype.slice a lot in, say, a library, it makes sense to create a variable holding that function (var slice = Array.prototype.slice;), because the variable name can be shortened by a JavaScript minifier while the full property path cannot.
Assigning the function to a variable also avoids having to traverse the object's prototype chain, which might result in a slightly better performance.
Note that this is micro-optimization, which you (generally speaking) shouldn't concern yourself too much with – leave that up to a modern JavaScript engine.
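For illustration, a minimal sketch of the caching pattern being discussed (the argsToArray helper is just an example name):

// the prototype lookup happens once, at definition time
var slice = Array.prototype.slice;

function argsToArray() {
    // calls the cached reference instead of walking Array.prototype each time
    return slice.call(arguments);
}

console.log(argsToArray(1, 2, 3)); // [1, 2, 3]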
Saving the value in a variable presents some optimization opportunities, because if it's a local variable the interpreter can do analysis to realize that the variable never gets mutated. On the other hand, it always has to dereference globals like Array, since anyone could potentially change them at any time.
That said, I have no idea whether this matters for performance, especially once you consider the JIT optimizations.
Usually, the biggest reason people use var slice is to keep the source code short.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 11 years ago.
I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot.
Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:
Limit function calls
When possible, use arrays instead of objects and properties
Use variables for math operation results as much as possible
Cache common math operations such as Math.PI / 180 (see the sketch after this list)
Use sin and cos approximation functions instead of Math.sin() and Math.cos()
Reuse objects when passing around data instead of creating new ones
Replace Math.floor() with ~~ (bitwise truncation toward zero)
Study jsperf.com until my eyes bleed
Use a preprocessor on my JavaScript to do some of the above operations
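As a small illustration of the cached-constant and object-reuse items above (the rotate helper and the scratch point object are assumptions, not code from the actual visualization):

var DEG_TO_RAD = Math.PI / 180; // computed once instead of on every call

// one scratch object reused every frame instead of allocating a new one
var point = { x: 0, y: 0 };

function rotate(px, py, angleDegrees, out) {
    var a = angleDegrees * DEG_TO_RAD;
    var cos = Math.cos(a);
    var sin = Math.sin(a);
    out.x = px * cos - py * sin;
    out.y = px * sin + py * cos;
    return out;
}

rotate(10, 0, 90, point); // overwrites point instead of creating a new object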
Update post-closure: Here are answers to what I thought I was asking. I'd like to add an answer to my own question with the following:
Efficient JavaScript - Dev.Opera
JavaScript Call Performance – Just Inline It
“I want to optimize my JS application on V8” checklist
Measure your performance, find the bottlenecks, then apply the appropriate techniques to help your specific bottlenecks. Premature optimization is fruitless and should be avoided at all costs.
Mind your DOM
– Limit repaint/reflow
Mind your recursion
– Consider iteration or memoization
Mind your loops
– Keep small, sprinkle setTimeout() liberally if needed
Loops:
Decrease amount of work per iteration
Decrease number of iterations
DOM:
Minimize property access - cache DOM accessors/objects in local variables before performing operations, especially before loops (see the sketch after this section).
If you need to access items in order frequently, copy into a regular array
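For example (the element id 'items' is an assumption), caching the DOM references and the length before the loop:

var list = document.getElementById('items'); // looked up once, not per iteration
var children = list.children;
var length = children.length;                // cached so the loop doesn't re-read it

var text = '';
for (var i = 0; i < length; i++) {
    text += children[i].textContent + '\n';  // only reads from the cached collection
}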
Style Property:
Minimize changes on style property
Define CSS class with all changes and just change className property
Set cssText on the element directly
Group CSS changes to minimize repaint/reflow (a sketch of the className / cssText approach follows)
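A sketch of the className / cssText approach (the element and the class name are assumptions):

var box = document.getElementById('box');

// instead of several individual writes, each of which may trigger a reflow:
//   box.style.width = '200px';
//   box.style.height = '100px';
//   box.style.backgroundColor = 'red';

// either switch to a predefined CSS class...
box.className = 'highlighted';

// ...or set everything in one go via cssText
box.style.cssText = 'width: 200px; height: 100px; background-color: red;';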
String Matching:
If searching for simple string matches, indexOf should be used instead of regular expression matching wherever possible.
Reduce the number of replace commands you use, and try to optimise into fewer, more efficient replace commands
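A small illustration of the indexOf point (the strings are illustrative):

var haystack = 'The quick brown fox';

// simple substring check: prefer indexOf
var hasFox = haystack.indexOf('fox') !== -1;

// the regex version gives the same answer but pays for compiling and
// running a pattern matcher
var hasFoxRegex = /fox/.test(haystack);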
eval is evil:
The 'eval' method, and related constructs such as 'new Function', are extremely wasteful. They effectively require the browser to create an entirely new scripting environment (just like creating a new web page), import all variables from the current scope, execute the script, collect the garbage, and export the variables back into the original environment. Additionally, the code cannot be cached for optimisation purposes. eval and its relatives should be avoided if at all possible.
Only listen to what you need:
Adding an event listener for the BeforeEvent event is the most wasteful of all, since it causes all possible events to fire, even if they are not needed. In general, this can be several thousand events per second. BeforeEvent should be avoided at all costs, and replaced with the appropriate BeforeEvent.eventtype. Duplicate listeners can usually be replaced with a single listener that provides the functionality of several listener functions.
Timers take too much time:
Because a timer normally has to evaluate the given code in the same way as eval, it is best to have as little code as possible inside the evaluated statement. Instead of writing all of the code inside the timeout statement, put it in a separate function, and call the function from the timeout statement. This allows you to use the direct function reference instead of an evaluated string. As well as removing the inefficiency of eval, this will also help to prevent creating global variables within the evaluated code.
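A sketch of the function-reference form (onTimeout is just an example name):

// avoid: setTimeout("doSomething(); doSomethingElse();", 100);
// the string form has to be evaluated like eval every time it fires

function onTimeout() {
    // the actual work lives in a named function, not inside the timer call
    console.log('tick');
}

setTimeout(onTimeout, 100); // pass the function reference directly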
I know you should tread lightly when making recursive calls to functions in JavaScript, because a recursive version of a routine can be something like 10 times slower than the equivalent loop.
Eloquent JavaScript states:
There is one important problem: In most JavaScript implementations, this second version is about 10 times slower than the first one. In JavaScript, running a simple loop is a lot cheaper than calling a function multiple times.
John Resig even says this is a problem in this post.
My question is: Why is it so inefficient to use recursion? Is it just the way a particular engine is built? Will we ever see a time in JavaScript where this isn't the case?
Function calls are just more expensive than a simple loop due to all the overhead of changing the stack and setting up a new context and so on. In order for recursion to be very efficient, a language has to support some form of tail-call elimination, which basically means transforming certain kinds of recursive functions into loops. Functional languages like OCaml, Haskell and Scheme do this, but no JavaScript implementation I'm aware of does so (it would only be marginally useful unless they all did, so maybe we have a dining philosophers problem).
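To make the loop-vs-recursion point concrete, here is a small sketch (a simple sum, not the example from Eloquent JavaScript) of a tail-recursive function and the loop a tail-call-eliminating engine could effectively rewrite it into:

// tail-recursive: the recursive call is the last thing the function does,
// but without tail-call elimination every call still gets its own stack frame
function sumRecursive(arr, i, acc) {
    if (i >= arr.length) return acc;
    return sumRecursive(arr, i + 1, acc + arr[i]);
}

// loop version: the same computation, with the program counter jumping back
// to the top of the loop instead of a new frame per element
function sumLoop(arr) {
    var total = 0;
    for (var i = 0; i < arr.length; i++) {
        total += arr[i];
    }
    return total;
}

sumRecursive([1, 2, 3], 0, 0); // 6
sumLoop([1, 2, 3]);            // 6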
This is just the way the particular JS engines the browsers use are built, yes. Without tail call elimination, you have to create a new stack frame every time you recurse, whereas with a loop it's just a matter of setting the program counter back to the start of it. Scheme, for example, has this as part of the language specification, so you can use recursion in this manner without worrying about performance.
https://bugzilla.mozilla.org/show_bug.cgi?id=445363 indicates progress being made in Firefox (and Brendan Eich speaks in here about it possibly being made a part of the ECMAScript spec), but I don't think any of the current browsers have this implemented quite yet.