I know you should tread lightly when making recursive calls to functions in JavaScript, because a recursive version of a function can be up to 10 times slower than one written as a plain loop.
Eloquent JavaScript states:
There is one important problem: In most JavaScript implementations, this second version is about 10 times slower than the first one. In JavaScript, running a simple loop is a lot cheaper than calling a function multiple times.
John Resig even says this is a problem in this post.
My question is: Why is it so inefficient to use recursion? Is it just the way a particular engine is built? Will we ever see a time in JavaScript where this isn't the case?
Function calls are just more expensive than a simple loop due to all the overhead of changing the stack and setting up a new context and so on. In order for recursion to be very efficient, a language has to support some form of tail-call elimination, which basically means transforming certain kinds of recursive functions into loops. Functional languages like OCaml, Haskell and Scheme do this, but no JavaScript implementation I'm aware of does so (it would only be marginally useful unless they all did, so maybe we have a dining philosophers problem).
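To make the difference concrete, here is a minimal sketch (the function names are purely illustrative) of the same computation written with plain recursion, with a tail-recursive accumulator (the form that tail-call elimination could turn into a loop), and as a loop:

// Plain recursion: every call pushes a new stack frame until n reaches 0.
function sumRecursive(n) {
  if (n === 0) return 0;
  return n + sumRecursive(n - 1);
}

// Tail-recursive form: the recursive call is the last thing that happens,
// so an engine with tail-call elimination could reuse the current frame.
function sumTail(n, acc) {
  if (n === 0) return acc;
  return sumTail(n - 1, acc + n);
}

// Loop form: a single stack frame, which is roughly what tail-call
// elimination would turn sumTail into behind the scenes.
function sumLoop(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

sumRecursive(1000); // 500500
sumTail(1000, 0);   // 500500
sumLoop(1000);      // 500500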
This is just a way the particular JS engines the browsers use are built, yes. Without tail call elimination, you have to create a new stack frame every time you recurse, whereas with a loop it's just setting the program counter back to the start of it. Scheme, for example, has this as part of the language specification, so you can use recursion in this manner without worrying about performance.
https://bugzilla.mozilla.org/show_bug.cgi?id=445363 indicates progress being made in Firefox (and Brendan Eich speaks in here about it possibly being made a part of the ECMAScript spec), but I don't think any of the current browsers have this implemented quite yet.
Related
I was learning the inner workings of V8 and found out that there is a JIT compiler which optimizes hot functions on the fly using an inline caching technique. I have only two questions: firstly, is a function considered a hot function as long as it is executed repeatedly, one call after another, several times? Secondly, after exactly how many repeated executions does a function become hot in V8?
V8 developer here. Function "hotness" is not simply determined by the number of calls to it. Instead, V8 tries to predict how useful it would be to optimize a given function by estimating the amount of time spent executing the unoptimized version of that function. The exact heuristics of how this works, which other factors are taken into account (e.g. completeness/stability of type feedback), and the threshold when optimized compilation is triggered can and do change over time.
The reason is that optimized compilation is fairly expensive, so you'd only want to do it when it's likely to pay off. ("likely" because it depends in particular on how much work the function will do in the future, and predicting the future accurately is of course impossible, so there's always some amount of guesswork and heuristics involved.)
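If you are curious, you can watch these decisions yourself. A rough sketch, assuming Node.js (whose command line passes V8 flags such as --trace-opt through to the engine); the file name and function are made up for the example:

// Run with: node --trace-opt hot.js
// V8 prints a line when it marks a function for optimized recompilation;
// the exact wording and the threshold vary between V8 versions.
function add(a, b) {
  return a + b;
}

var total = 0;
for (var i = 0; i < 1000000; i++) {
  total += add(i, 1);
}
console.log(total);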
If I write the code behind the layer of abstraction of a native method myself, instead of using that method (which behind the scenes does the same thing I would write manually), will that have a positive impact on the app's performance or speed?
There are (at least) three significant advantages of using a built-in function instead of implementing it yourself:
(1) Speed - built-in functions generally invoke lower-level code provided by the browser (not in Javascript), and said code often runs significantly faster than the same code would in Javascript. Polyfills are slower than native code.
(2) Readability - if another reader of your code sees ['foo', 'bar'].join(' '), they'll immediately know what it does, and how the join method works. On the other hand, if they see something like doJoin(['foo', 'bar'], ' '), where doJoin is your own implementation of the same method, they'll have to look up the doJoin method to be sure of what's happening there.
(3) Accuracy - what if you make a mistake while writing your implementation, but the mistake is not immediately obvious? That could be a problem. In contrast, the built-in methods almost never have bugs (and, when spotted, usually get fixed).
One could also argue that there's no point spending effort on a solved problem.
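For illustration, here is a minimal sketch of the kind of hand-rolled helper the readability point above has in mind (doJoin is the hypothetical name used there, not a real API):

// A hypothetical hand-rolled stand-in for Array.prototype.join.
function doJoin(arr, separator) {
  var result = '';
  for (var i = 0; i < arr.length; i++) {
    if (i > 0) result += separator;
    result += arr[i];
  }
  return result;
}

doJoin(['foo', 'bar'], ' ');  // 'foo bar', but the reader has to go check how it works
['foo', 'bar'].join(' ');     // 'foo bar', instantly recognizable and battle-tested

// Note the accuracy point too: the native join converts null and undefined
// elements to empty strings, which this naive version does not.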
Yes, there is a difference in efficiency in some cases. There are some examples in the docs for the library fast.js. To summarize what they say: you don't have to handle all of the cases laid out in the spec, so sometimes you can do some things faster than the built-in implementations.
I wouldn't take this as a license to make your code harder to read/maintain/reuse based on premature optimization, but yes you may gain some speed with your own implementation of a native method depending on your use case.
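As a sketch of the kind of shortcut such libraries take (this is not fast.js's actual code, just the general idea):

// A simplified forEach that assumes a dense array and a plain callback,
// skipping the spec-mandated hole checks and this-binding handling that
// Array.prototype.forEach has to perform.
function fastForEach(arr, fn) {
  for (var i = 0, len = arr.length; i < len; i++) {
    fn(arr[i], i, arr);
  }
}

fastForEach([1, 2, 3], function (value) { console.log(value); });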
In JavaScript: Understanding the Weird Parts the instructor explains that memory for variables is set up during a so-called creation phase (and that undefined is assigned); then the execution phase happens. But why is this useful when we don't know what value(s) the variable will later point to?
Clearly variables can point to many different things (from, e.g., a short string all the way to a deeply nested object structure) and I assume that they can vary wildly in the amount of memory they need.
If line-by-line execution (including variable assignment) happens only in the later, execution phase, how can the initial creation phase know how to set up memory? Or is memory set aside only for the name in each variable name/value pair, with memory for the value being managed differently?
The instructor is referring to Google Chrome's V8 engine (as is evidenced by his use of it in the video).
The V8 engine uses several optimization approaches in order to facilitate memory management. At first, it will compile the JavaScript code, and during compilation it will determine how many variables (hidden classes, more on these later) it needs to create. These determine the amount of memory originally allocated.
V8 compiles JavaScript source code directly into machine code when it is first executed. There are no intermediate byte codes, no interpreter. Property access is handled by inline cache code that may be patched with other machine instructions as V8 executes. [1]
The first set is created by navigating the JavaScript code to determine how many different object "shapes" there are. Anything without a prototype is considered to be a "transitioning object shape".
The main way objects are encoded is by separating the hidden class (description) from the object (content). When new objects are instantiated, they are created using the same initial hidden class as previous objects from the same constructor. As properties are added, objects transition from hidden class to hidden class, typically following previous transitions in the so-called “transition tree”. [2]
Conversely, if the object does have a prototype then it will have its particular shape tracked separately.
Prototypes have 2 main phases: setup and use. Prototypes in the setup phase are encoded as dictionary objects. Any direct access to the prototype, or access through a prototype chain, will transition it to use state, making sure that all such accesses from now on are fast. [2]
The compiler will essentially read all possible variables as being one of these two possible shapes and then allocate the amount of memory necessary to facilitate instantiating those shapes.
Once the first set of shapes is set up, V8 will then take advantage of what it calls "fast property access" in order to build on the first set of variables (hidden classes) that were set up during the build.
To reduce the time required to access JavaScript properties, V8 dynamically creates hidden classes behind the scenes. [3]
There are two advantages to using hidden classes: property access does not require a dictionary lookup, and they enable V8 to use the classic class-based optimization, inline caching. [3]
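A minimal sketch of how hidden classes and transitions play out in practice (the constructor and property names are just an example):

// Objects created from the same constructor, with properties added in the
// same order, end up sharing a hidden class, so property access can be
// served by an inline cache.
function Point(x, y) {
  this.x = x; // transition: empty shape -> {x}
  this.y = y; // transition: {x} -> {x, y}
}

var p1 = new Point(1, 2);
var p2 = new Point(3, 4); // same hidden class as p1

var p3 = new Point(5, 6);
p3.z = 7; // adding an extra property moves p3 to a different hidden class,
          // and code that only ever saw {x, y} objects may take a cache miss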
As a result, not all memory use is known during compilation, only how much to allocate for the core set of hidden classes. This allocation will grow as the code is executed, from things like assignment, inline cache misses, and conversion into dictionary mode (which happens when too many properties are assigned to an object, and several other nuanced factors).
1. Dynamic machine code generation, https://github.com/v8/v8/wiki/Design%20Elements#dynamic-machine-code-generation
2. Setting up prototypes in V8, https://medium.com/@tverwaes/setting-up-prototypes-in-v8-ec9c9491dfe2
3. Fast Property Access, https://github.com/v8/v8/wiki/Design%20Elements#fast-property-access
JavaScript is a bit of a problem, at least in my opinion.
In JavaScript there is the specification of the language, made by ECMAScript, and there are implementations made by engine developers.
You have to understand that what is usually taught about JavaScript, the so-called "under the hood" material, is the specification of JavaScript.
You might have heard the terms Execution Context and Lexical Environment.
They are only spec concepts describing how the language should work; they give developers an idea of how to build their JS engine so that it behaves like the spec.
An execution context is purely a specification mechanism and need not correspond to any particular artefact of an ECMAScript implementation. It is impossible for ECMAScript code to directly access or observe an execution context. (ECMAScript specification)
Every JavaScript engine is implemented differently, and they are all supposed to behave like the ECMAScript spec.
There is a concept that every time an execution context is created, it goes through two stages: a creation phase and an execution phase.
Every time the creation phase begins, it allocates memory for the variables of that execution context.
In reality it doesn't work like that at all.
There is no real "creation phase", at least not in the V8 engine (Google Chrome's JS engine), in the way that you think.
I can give you a hint: every time you call a function that doesn't have another function inside it, the variables inside that function basically reuse some "block" in memory.
I will give you a basic example. Let's say the V8 engine uses some address in memory, say 0x61FF1C:
function setNum() { var num = 5; }
Every time I call the setNum function, the value of num (5) gets stored at address 0x61FF1C, overwriting whatever content was at 0x61FF1C before.
That's just how the V8 engine works in that scenario. The example is only meant to get the idea across; I know it sounds a little vague.
There is much more to the V8 engine which I'm not going to discuss, because it's huge.
By the way, I'm not a V8 developer, so I don't know everything about that engine; I'm not even close to that level, but I know some things.
Anyway, I think every JS developer should think in terms of the spec, but also remember that while many engines behave like the spec, it doesn't mean they work exactly like the spec.
When any program executes, we call the time of execution its running time. During this running (or processing) time, the processor processes code and communicates with memory: it takes code from memory, processes it, gives the result back to memory, and takes the next piece of code. Throughout the running time some regions of memory grow, some shrink to nothing, some variables get new values, and some variables are deleted. The amount of working memory in use is changing all the time while the program runs.
A while ago I read that you shouldn't use Function.caller inside a function because it makes the function non-inlineable. To test this assertion I wrote the following benchmark:
Does Function.caller affect performance? · jsPerf.
The results prove that using Function.caller indeed makes a function execute slower than normal:
In Opera it is 16% slower.
In Chrome it is 80% slower.
In Firefox it is 100% slower.
Hence my question is this: what's the consensus on using Function.caller in JavaScript? Is it alright to use it sparingly? Should it be shunned altogether?
As far as I know, dynamically inspecting the execution stack with caller/callee/etc is not allowed in strict mode so you can kind of see that as a consensus to avoid this feature if possible.
Anyway, why do you even want to use Function.caller in the first place? It makes your code depend on something that usually doesn't matter (the call stack), and data gets passed around implicitly instead of via explicit arguments. The only real use I ever saw for this kind of feature is printing stack traces, and in that case you usually can pay the performance cost or can get around it with a debugger.
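To make the "implicit vs. explicit" point concrete, a minimal sketch (the function names are made up) contrasting caller-based code with passing the same information as an argument:

// Relies on the non-standard Function.caller; throws in strict mode and
// ties the function's behaviour to whoever happens to call it.
function logWithCaller() {
  var caller = logWithCaller.caller;
  console.log('called from', caller ? caller.name : '(top level)');
}

// The same information passed explicitly; works in strict mode and is
// easier for the engine to optimize.
function logExplicit(callerName) {
  console.log('called from', callerName);
}

function doWork() {
  logWithCaller();       // implicit dependency on the call stack
  logExplicit('doWork'); // explicit argument
}

doWork();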
If performance is your only concern, it's probably fine. While massively slower than not referencing caller, my machine can still do that 1.6 million times per second.
"Slow" can be a relative term. If you only need to call it rarely, it does it's magic fast enough most of the time. I just wouldn't put it in a big loop, iterated on every animation frame in my game.
However, this magic property has other problems. There are more concerns than just performance, as @missingno points out.
So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42)));
and secondly I know that the Java compiler's code optimization will take care of this anyway, i.e. the actual Java VM code will end up being inlined, so performance-wise there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine, but as my code will end up running in many different browsers, let's just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, would be small. But that is beside the point. There are plenty of constructs that compilers optimize quite effectively; I was just too lazy to write a better one down. Just imagine the above snippet inside a for loop which is executed 100'000 times.
Don't expect much from the optimizer; there won't be
tail-recursion optimization,
loop unrolling,
function inlining,
etc.
As client-side JavaScript is not designed to do heavy CPU work, such optimization wouldn't make a huge difference anyway.
There are some guidelines for writing high-performance JavaScript code; most are minor and technical, like:
Don't use certain features like eval(), arguments.callee, etc., which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones; for example, don't write your own containers, JSON parser, etc.
Use local variables instead of global ones.
Never use a for-each (for-in) loop over an array (see the sketch below).
Use parseInt() rather than Math.floor.
AND stay away from jQuery.
All these techniques are more like rules of thumb from experience, though there are usually reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf, to help you decide which approach is better.
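As a small sketch of two of the guidelines above (local variables and avoiding for-in over arrays); the numbers are arbitrary:

// Avoid: for-in enumerates string keys (and any inherited enumerable
// properties), which is slower and semantically different from indexing.
function sumForIn(arr) {
  var total = 0;
  for (var key in arr) {
    total += arr[key];
  }
  return total;
}

// Prefer: a plain indexed loop, with the length cached in a local variable.
function sumIndexed(arr) {
  var total = 0;
  for (var i = 0, len = arr.length; i < len; i++) {
    total += arr[i];
  }
  return total;
}

sumForIn([1, 2, 3, 4, 5]);   // 15
sumIndexed([1, 2, 3, 4, 5]); // 15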
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; this will not boost your performance much, but it will make your code smaller.
Generally speaking, the final performance depends far more on how well your code is organized and how carefully your algorithm is designed than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, that Bar and FooFactory are stateless, and that someStaticStuff() and getFoo() don't change anything):
for (int i = 0; i < 10000000; i++)
    blub.doSomethingImportantWithAFooObject(
        FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless or not. So this kind of code should be avoided in any language: hoist the loop-invariant call out of the loop yourself.
You are right, the level of optimization differs from one JS VM to another. But there is a way of working around that: there are several tools that will optimize/minify your code for you. One of the most popular, made by Google, is the Closure Compiler. You can try out the web version, and there is a command-line version for build scripts, etc. Beyond that there is not much I would attempt in the way of optimization, because after all JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, or animating huge background images, or pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
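As a small sketch of that advice (the element id and class name are hypothetical, and the CSS rule for "highlighted" is assumed to exist in your stylesheet):

var box = document.getElementById('box');

// More reflow-prone: several separate inline style writes, each of which
// can invalidate layout.
box.style.width = '200px';
box.style.height = '100px';
box.style.backgroundColor = 'red';

// Usually cheaper: toggle a single class defined once in your stylesheet,
// so the style change is applied in one go and presentation stays in CSS.
box.className += ' highlighted';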
Making only the modifications you have to in order to get the job done (i.e. rather than the "HULK SMASH (DOM)!" approach of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, calling for JSON data to update only the minimum number of elements needed), along with other common-sense approaches, will get you a lot farther than hours of minor tweaking of a for-loop (though, again, common sense will get you pretty far there, too).
Good luck!