This is a question about the typical behaviour of the JIT compiler in modern JavaScript engines. Let's say I have a class A with many fields, instances of which are heavily used from another class B, including within loops. Rather than expose the internals of A, there are a bunch of one-line access methods.
Individually, each method will make little difference to performance, but let's assume that collectively they make a big difference. Will a modern JIT inline these functions?
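For concreteness, here is a minimal sketch of the pattern in question (the class and method names are just placeholders):

class A {
  constructor(x, y) {
    this._x = x;
    this._y = y;
  }
  // One-line access methods instead of exposing the fields directly.
  getX() { return this._x; }
  getY() { return this._y; }
}

class B {
  // Instances of A are used heavily inside loops like this one.
  static sum(items) {
    let total = 0;
    for (const a of items) {
      total += a.getX() + a.getY();
    }
    return total;
  }
}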
Late answer, but I think this may help future viewers of the question.
It depends on the complexity of the methods: a side-effecting function may not be inlined, while a simple accessor usually will be.
However, as @Ry- mentioned in the comments, it is not truly predictable. JavaScript engines take many things into consideration before making an optimization -- checking whether or not it is actually worth it -- so there is no objective answer to your question.
The best thing to do is to take your code and profile it to see whether or not the functions are being inlined. This article also shows another way of doing this, if you are serious enough.
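If you just want a rough idea, a quick micro-benchmark is usually enough to tell whether the accessors cost anything in practice. The sketch below assumes a V8-based runtime such as Node.js or Chrome; the class has the same hypothetical shape as in the question, and the absolute numbers will vary with engine version and warm-up:

class A {
  constructor(x, y) { this._x = x; this._y = y; }
  getX() { return this._x; }
  getY() { return this._y; }
}

const items = Array.from({ length: 1e6 }, (_, i) => new A(i, i * 2));

console.time('via accessors');
let a = 0;
for (const item of items) a += item.getX() + item.getY();
console.timeEnd('via accessors');

console.time('via direct fields');
let b = 0;
for (const item of items) b += item._x + item._y;
console.timeEnd('via direct fields');

// On Node.js, V8 flags such as --trace-opt and --trace-deopt print what the
// optimizing compiler is doing; the exact output format is version-dependent.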
Related
So, fundamentally, this question is not opinion-based. I seriously pursue this issue objectively, without the sentiment that mostly arises from the predominant opinion - Why is extending native objects a bad practice?
and it is related to these unanswered questions:
If Object.prototype is extended with Symbol property, is it possible to break code without specifically designed APIs to access Symbol in JavaScript?
Should the extension of built-in Javascript prototypes through symbols also be avoided?
The first question has already been closed as opinion-based, and as you might know, in this community once a question is closed, however much we modify it, moderators will rarely bother to re-open it. That is how things work here.
As for the second question, for some unknown reason it has been taken more seriously and not considered opinion-based, although the context is identical.
There are two parts to the "don't modify something you don't own" rule:

1. You can cause name collisions and you can break their code.
By touching something you don't own, you may accidentally overwrite something used by some other library. This will break their code in unexpected ways.

2. You can create tight dependencies and they can break your code.
By binding your code so tightly to some other object, if they make some significant change (like removing or renaming the class, for example), your code might suddenly break.
Using symbols will avoid #1, but you still run into #2. Tight dependencies between classes like that are generally discouraged. If the other class is ever frozen, your code will still break. The answers on this question still apply, just for slightly different reasons.
Also, I've read opinions (how can we discuss such a thing here without some "opinion" basis?) claiming that:
a) library code using Symbols exists, and it may tweak the Symbol API (such as Object.getOwnPropertySymbols())
b) extending an object with a Symbol property is fundamentally no different from extending it with a non-Symbol property.
Here, the major rationale for not touching Object.prototype is #1; almost all the answers I saw claimed that, and we would not need this discussion at all if Symbols were not involved.
However, using Symbols avoids #1, as they say. So most of the traditional wisdom no longer applies.
Then, as #2 says,
By binding your code so tightly to some other object, if they make some significant change (like removing or renaming the class, for example), your code might suddenly break.
Well, in principle, any fundamental API upgrade can break any code. That well-known fact has nothing to do with this specific question, so #2 does not answer it.
The only considerable point remaining is that Object.freeze(Object.prototype) could be a problem. However, that is essentially someone else unexpectedly upgrading the basic API, in the same manner.
As API users, not API providers, the Object.prototype we expect is one that is not frozen.
If someone else touches the basic API and modifies it so that it is frozen, it is they who broke the code: they upgraded the basic API without notice.
For instance, in Haskell there are many language extensions. Presumably they solve the collision issue well, and most importantly, they won't "freeze" the basic API, because freezing the basic API would break their ecosystem.
Therefore, I observe that Object.freeze(Object.prototype) is the anti-pattern. It cannot be justified as a matter of course as a way to prevent Object.prototype extensions with Symbols.
So here is my question. Although I observe things this way, is it safe to say:
Provided that Object.freeze(Object.prototype) is not performed (which is an anti-pattern, and is detectable), is it safe to extend Object.prototype with Symbols?
If you don't think so, please provide a concrete example.
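For concreteness, here is a minimal sketch of the kind of extension this question is about (the Symbol name and the method body are hypothetical):

// A Symbol key cannot collide with any string-keyed property from another library.
const tap = Symbol('tap');

Object.defineProperty(Object.prototype, tap, {
  value: function (f) { f(this); return this; },
  writable: true,
  configurable: true,
  // Symbol keys are never visited by for...in, so enumerability is a non-issue here.
  enumerable: false,
});

({ a: 1 })[tap](o => console.log(o.a)); // logs 1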
Is it safe? Yes, if you are aware of all the hazards that come with it, and either choose to ignore them, shrug them off, or invest effort to ensure that they don't occur (testing, clear documentation of compatibility requirements), then it is safe. Feel free to do it in your own code where you can guarantee these things.
Is it a good idea? Still no. Don't introduce this in code that other people will (have to) work with.
If someone else touches the basic API and modifies it so that it is frozen, it is they who broke the code. Therefore, I observe that Object.freeze(Object.prototype) is the anti-pattern.
Not quite. If you both did something you shouldn't have done, you're both to blame - even if doing only one of these things gets away with working code. This is exactly what point #2 is about: don't couple your code tightly to global objects that are shared with others.
However, the difference between those things is that freezing the prototype is an established practice to harden an application against prototype pollution attacks and generally works well (except for one bit),
whereas extending the prototype with your own methods is widely discouraged as a bad practice (as you already found out).
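To make the contrast concrete, here is a rough sketch of both sides (the property names are hypothetical); it shows why the freeze is used for hardening, and how the same freeze also shuts the door on Symbol extensions:

// Hardening: freeze the shared prototype.
Object.freeze(Object.prototype);

// A typical prototype-pollution attempt now has no effect
// (the assignment is silently ignored, or throws in strict mode):
try {
  ({})['__proto__']['isAdmin'] = true;
} catch (e) { /* TypeError in strict mode */ }
console.log({}.isAdmin); // undefined - Object.prototype was not polluted

// ...but the very same freeze also rejects the Symbol extension discussed here:
const extra = Symbol('extra');
try {
  Object.defineProperty(Object.prototype, extra, { value: 42 });
} catch (e) {
  console.log(e instanceof TypeError); // true - the prototype is no longer extensible
}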
In Haskell, there are many language extensions. Presumably they solve the collision issue well, and most importantly, they won't "freeze" the basic API, because freezing the basic API would break their ecosystem.
Haskell doesn't have any global, shared, mutable object, so the whole problem is a bit different. The only collision issue is between identifiers from "star-imported" modules, including the prelude from the base API. However, this is per module, not global, so it doesn't break composability as you can resolve the same identifier to different functions in separate modules.
Also yes, their base API is frozen and versioned, so they can evolve it without breaking old applications (which can continue using old dependencies and old compilers). This is a luxury that JavaScript doesn't have.
Is it safe to extend Object.prototype with a pipe symbol so that something[pipe](f) does f(something), like something |> f in F# or in the earlier pipe-operator proposal?
No, it's not safe, not for arbitrary values of something. Some obvious values where this doesn't work are null and undefined.
However, it doesn't even work for all objects: there are objects that don't have Object.prototype on their prototype chain. One example is Object.create(null) (also done for security purposes); other examples are objects from other realms (e.g. iframes). This is also the reason why you shouldn't expect .toString() to work on all objects.
So for your pipe operator, better use a static standalone method, or just use a transpiler to get the syntax you actually want. An Object.prototype method is only a bad approximation.
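A rough sketch of both variants, with the values the prototype approach fails for (the names are hypothetical):

const pipe = Symbol('pipe');
Object.defineProperty(Object.prototype, pipe, {
  value(f) { return f(this); },
  writable: true,
  configurable: true,
});

'abc'[pipe](s => s.length);   // 3 - works for primitives too
// null[pipe](f);             // TypeError: null has no properties
// undefined[pipe](f);        // TypeError
Object.create(null)[pipe];    // undefined - Object.prototype is not on the chain

// A standalone helper covers every value, including null and undefined:
const pipeInto = (value, ...fns) => fns.reduce((acc, f) => f(acc), value);
pipeInto(null, x => x === null); // true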
Extending the Object prototype is a dangerous practice.
You have obviously done some research and found that the community of javascript developers overwhelmingly considers it to be a very bad practice.
If you're working on a personal project all by yourself, and you think the rest of us are all cowards for being unwilling to take the risk, then by all means: go ahead and modify your Object prototype! Nobody can stop you. It's up to you whether you will be guided by our advice. Sometimes the conventional wisdom is wrong. (Spoiler: in this case, the conventional wisdom is right.)
But if you are working in a shared repository, especially in any kind of professional setting, do not modify the Object prototype. Whatever you want to accomplish by this technique, there will be alternative approaches that avoid the dangers of modifying the base prototypes.
The number one job of code is to be understood by other developers (including yourself in the future), not just to work. Even if you manage to make this work, it is counterintuitive, and nobody who comes after you will expect to find this. That makes it unacceptable by definition, because what matters here is reasonable expectations, NOT what the language supports. Any person who fails to recognize that professional software development is a team effort has no place writing software professionally.
You are not going to find a technical limitation to extending the Object prototype. Javascript is a very flexible language -- it will give you plenty of rope with which to hang yourself. That does not mean it's a good idea to place your head in the noose.
I come from a background in Haskell. I'm very used to getting things done with recursive functions and the typical higher-order functions (folds, maps, filters, etc) and composing functions together. I'm developing in node.js now, and I'm seriously tempted to write my own modules implementing these functions so I can use them in my code in a way that makes sense to me.
My question is, basically: is Javascript set up to handle this type of burden? I understand that the aforementioned recursive functions can be easily refactored into iterative ones, but often times I find myself calling a lot of functions within functions, and I don't know if Javascript can handle this type of thing well. I know that things like Underscore exist and implement some FP principles, but my question basically boils down to: is it good practice to program functionally in Javascript? If not, why not?
I also apologize if this question is a little too soft for SO, but I don't want to start putting together my own tool set if it's just going to break everything once it gets too large.
In my opinion, the short answer to your question is yes -- applying functional programming principles is viable in Javascript! (I believe that this is also true for most other languages -- there's usually something to be gained from applying FP principles).
Here's an example of a functional parser combinator library I built in Javascript. (And here it is in action). It was important to be functional because: 1) it allows me to build parsers by composition, which means I can build and test small parsers independently, then put them together and have confidence that the behavior will be the same, and 2) it makes backtracking super easy to get right (which is important because the choice operator backtracks when an alternative fails).
So these are FP principles (note the absence of recursion, folds, maps, and filters from this list) that I found extremely useful in building my library:
avoiding mutable state
pure functions (i.e. output depends only on input)
composition: building complex apps by gluing together simple pieces
It's usually quite easy and pleasant to apply these in Javascript because of Javascript's support for:
first-class functions
anonymous functions
lexical closures
but here are some things to watch out for:
lack of popular library of efficient functional data structures
lack of tail-call optimization (at least at the moment)
partial application is more syntax-heavy than in Haskell
lots of popular libraries are not especially functional
the DOM is not functional (by design)
However, your last comment -- "I don't want to start putting together my own tool set if it's just going to break everything once it gets too large" -- is a good one. This is basically a problem for every approach, and I can't tell you whether FP in Javascript will be more problematic than "mainstream" techniques when things get too large. But I can tell you that in my experience, FP in Javascript helps me to prevent things from getting too large.
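To illustrate the composition and pure-function points above, here is a small sketch (compose and the helpers are hypothetical names, not from any particular library):

// Pure helpers: output depends only on input, nothing is mutated.
const compose = (...fns) => x => fns.reduceRight((acc, f) => f(acc), x);
const map = f => xs => xs.map(f);
const filter = p => xs => xs.filter(p);
const fold = (f, init) => xs => xs.reduce(f, init);

// Build a bigger piece by gluing small pieces together (applied right to left).
const sumOfEvenSquares = compose(
  fold((a, b) => a + b, 0),
  map(x => x * x),
  filter(x => x % 2 === 0)
);

sumOfEvenSquares([1, 2, 3, 4]); // 20

The curried helpers also show the point above about partial application being heavier on syntax than in Haskell.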
Like many folks I learned JavaScript by learning jQuery.
Lately I have been replacing bits like:
$(this).attr('title') with this.title
$(this).attr('id') with this.id
$(this).val() with this.value
$(this).parent() with this.parentNode
$(this).attr('class') with this.className
Not only is my code cleaner, but it is technically faster.
Is this type of reduction acceptable and encouraged?
Are there any other common practices I should be doing in raw plain JavaScript instead of jQuery?
Are there any potential cross browser issues with this type of reduction-ism?
Whilst using native JavaScript functions is generally faster than their jQuery counterparts, it does expose you to any browser compatibility issues that may arise from their use. this.value and such are unlikely to cause problems, but other similar attributes/functions may well not work in all browsers. Using a framework like jQuery means you don't have to deal with, or worry about, such things.
I would only ever use plain JavaScript if performance is an issue, i.e. you have a lot of tight loops and repeated operations.
I would recommend using the DOM properties wherever possible. Nearly all of them will cause no problem, performance will improve and you become less reliant on jQuery. For properties like checked, for example, you're much better off forgetting all about jQuery, which only serves to add confusion to a simple task.
If you're in any doubt for a particular property, you could have a look through the jQuery source to see whether it has any special handling for that property and view it as a learning exercise.
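For example, for a checkbox the DOM property is both simpler and faster than going through jQuery (the element id here is hypothetical):

var box = document.getElementById('agree'); // hypothetical element id

// Plain DOM property:
if (box.checked) { /* ... */ }
box.checked = true;

// jQuery equivalents that just wrap the same property:
// $('#agree').prop('checked')        // read
// $('#agree').prop('checked', true)  // write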
While many people reject such claims, I have also observed that avoiding/minimizing jQuery usage can yield significantly faster scripts. Avoid repeated/unnecessary $() calls in particular; instead, try to do things once, e.g. a = $(a);
One thing I have noticed as being particularly costly is $(e).css({a:b}).
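As a quick illustration of the "do things once" point (the selector is hypothetical):

// Each call re-runs the selector and builds a new jQuery object:
$('#menu').addClass('open');
$('#menu').css({ top: 0 });
$('#menu').fadeIn();

// Look it up once and reuse (and chain) the wrapper instead:
var $menu = $('#menu');
$menu.addClass('open').css({ top: 0 }).fadeIn();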
Google's optimizing Closure Compiler supposedly can inline such simple functions, too!
And in fact, it comes with a rather large library (closure library) that offers most of the cross-browser compatibility stuff without introducing an entirely new notion.
It takes a bit to get used to the closure way of exporting variables and functions (so they don't get renamed!) in full optimization mode. But at least in my cases, the generated code was quite good and small, and I bet it has received some further improvements since.
https://developers.google.com/closure/compiler/
How much overhead is there when use functions that have a huge body?
For example, consider this piece of code:
(function() {
// 25k lines
})();
How may it affect loading speed / memory consumption?
To be honest I'm not sure; the best way to answer your question is to measure.
You can use a JavaScript profiler, such as the one built into Google Chrome; here is a mini intro to the Google Chrome profiler.
You can also use Firebug profiler() and time(): http://www.stoimen.com/blog/2010/02/02/profiling-javascript-with-firebug-console-profile-console-time/
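As a rough sketch of the measuring approach (the console.time label is arbitrary, and the numbers will differ between browsers and runs):

console.time('wrapper load');
(function () {
  // ...imagine the 25k lines here; a loop stands in for real work...
  var n = 0;
  for (var i = 0; i < 1e6; i++) n += i;
})();
console.timeEnd('wrapper load');

// For a per-function breakdown, use the profiler instead:
// console.profile('wrapper'); ...code... console.profileEnd('wrapper');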
Overhead is negligible on a static function declaration regardless of size. The only performance loss comes from what is defined inside the function.
Yes, you will have large closures that contain many variables, but unless you're declaring several tens of thousands of private variables in the function, or executing that function tens of thousands of times, you won't notice a difference.
The real question here is: if you split that function up into multiple smaller functions, would you notice a performance increase? The answer is no; you should actually see a slight performance decrease from the added overhead, although the garbage collector should at least be able to reclaim some unused variables.
Either way, javascript is most often only bogged down by obviously expensive tasks, so I wouldn't bother optimizing until you see a problem.
Well that's almost impossible to answer.
If you really want to understand memory usage, automatic garbage collection and the other nitty-gritty of closures, start here: http://jibbering.com/faq/notes/closures/
Firstly, products like jQuery are built on using closures extremely heavily. jQuery is considered to be a very high-performance piece of JavaScript code. This should tell you a lot about the coding techniques it uses.
The exact performance of any given feature is going to vary between different browsers, as they all have their own scripting engines, which are all independently written and have different optimisations. But one thing they will all have done is tried to give the best optimisations to the most commonly used JavaScript features. Given the prevalence of jQuery and its like, you can bet that closures are very heavily optimised.
And in any case, with the latest round of browser releases, their scripting engines are all now sufficiently high performance that you'd be hard pushed to find anything in the basic language constructs which constitutes a significant performance issue.
I was wondering if there are any generalities (among all the JavaScript engines out there) in the cost of executing a given instruction vs another.
For instance, eval() is slower than calling a function that already has been declared.
I would like to get a table with several instructions/function calls vs an absolute cost, maybe a cost per engine.
Does such a document exists?
There's a page here (by one of our illustrious hosts, no less) that gives a breakdown by browser and by general class of instruction:
http://www.codinghorror.com/blog/archives/001023.html
The above page links to a more detailed breakdown here:
http://www.codinghorror.com/blog/files/sunspider-09-benchmark-results.txt
Neither of those pages breaks down performance to the level of individual function calls or arithmetic operations or what have you. Still, there is quite a bit of potentially useful information.
There is also a link to the benchmark itself:
http://www2.webkit.org/perf/sunspider-0.9/sunspider.html
By viewing the benchmark source you can get a better idea of what specific function calls are being tested.
It also seems like it might be a simple matter to create your own custom version of the benchmark that collects the more specific data you are interested in. You could then run the modified benchmark on a variety of browsers, perhaps even taking advantage of a service such as browsershots.org to test a wide spectrum of browsers. Not sure how well that would work, but it might be fun to try....
It is of course possible that the same operation executed in the same browser might take significantly different amounts of time depending on the context in which it's being used, in ways that might not be immediately obvious. For example, I could imagine a Javascript engine spending more time optimizing code that is executed frequently, with the result that the code executed in a tight loop might run faster than identical code executed infrequently. Of course, that might not matter much in practice. Still, I imagine that a good table of the sort you are looking for might also summarize any such effects if they turned out to be important.
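For instance, a tiny custom benchmark in the spirit of the eval() example from the question could look like the sketch below (all names are made up, and the absolute numbers only make sense relative to each other on the same engine):

function add(a, b) { return a + b; }

function time(label, fn, iterations) {
  var start = Date.now();
  var result = 0;
  for (var i = 0; i < iterations; i++) result = fn(i);
  console.log(label + ': ' + (Date.now() - start) + ' ms (result ' + result + ')');
}

time('declared function', function (i) { return add(i, 1); }, 1e6);
time('eval', function (i) { return eval('add(' + i + ', 1)'); }, 1e6);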