How much overhead is there when using functions that have a huge body?
For example, consider this piece of code:
(function() {
// 25k lines
})();
How may it affect loading speed / memory consumption?
To be honest I'm not sure; the best way to answer your question is to measure.
You can use a JavaScript profiler, such as the one built into Google Chrome; here is a mini intro to the Google Chrome profiler.
You can also use Firebug's console.profile() and console.time(): http://www.stoimen.com/blog/2010/02/02/profiling-javascript-with-firebug-console-profile-console-time/
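For example, a quick-and-dirty timing check looks something like this (a minimal sketch; buildPage is just a placeholder for whatever you want to measure):

// "buildPage" stands in for the code you want to measure.
console.time('buildPage');
buildPage();
console.timeEnd('buildPage'); // logs something like "buildPage: 12.3ms"

// The Chrome/Firebug profilers can also record a full call tree:
console.profile('buildPage');
buildPage();
console.profileEnd('buildPage');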
Overhead is negligible on a static function declaration regardless of size. The only performance loss comes from what is defined inside the function.
Yes, you will have large closures that contain many variables, but unless you're declaring several tens of thousands of private variables in the function, or executing that function tens of thousands of times, you won't notice a difference.
The real question here is: if you split that function up into multiple smaller functions, would you notice a performance increase? The answer is no; you should actually see a slight performance decrease from the extra overhead, although the garbage collector should at least be able to reclaim some variables that are no longer used. An example of the pattern is sketched below.
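To make the closure point concrete, here is a minimal sketch (the names and array size are made up): declaring the functions costs essentially nothing; it's the data the closure captures that occupies memory.

var app = (function () {
    // Private data retained by the closure for as long as the returned
    // functions are reachable -- this is what costs memory, not the
    // function declarations themselves.
    var bigTable = new Array(10000);

    function store(i, v) { bigTable[i] = v; }
    function lookup(i) { return bigTable[i]; }

    return { store: store, lookup: lookup };
})();

app.store(0, 'hello');
console.log(app.lookup(0)); // "hello"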
Either way, JavaScript is most often only bogged down by obviously expensive tasks, so I wouldn't bother optimizing until you see a problem.
Well that's almost impossible to answer.
If you really want to understand memory usage, automatic garbage collection and the other nitty-gritty of closures, start here: http://jibbering.com/faq/notes/closures/
Firstly, products like jQuery are built on using closures extremely heavily. jQuery is considered to be a very high-performance piece of JavaScript code. This should tell you a lot about the coding techniques it uses.
The exact performance of any given feature is going to vary between different browsers, as they all have their own scripting engines, which are all independently written and have different optimisations. But one thing they will all have done is tried to give the best optimisations to the most commonly used JavaScript features. Given the prevalence of jQuery and its like, you can bet that closures are very heavily optimised.
And in any case, with the latest round of browser releases, their scripting engines are all now sufficiently high performance that you'd be hard pushed to find anything in the basic language constructs which constitutes a significant performance issue.
I'm trying to think of the best solution for a NodeJS library I've been writing over the last year. In this project, performance definitely matters. The current performance is good (we've been able to reduce our response times by 30%), but I'm trying to find ways to make the library better.
After some research, I realized that I use Array.map in some critical functions. Since Array.map has to check for possible empty values, it has to be slower than any function that does the same thing without performing that check. I wanted to know how slow Array.map is, so I wrote a benchmark and ran it with the latest version of Google Chrome.
I expected Array.map to be slow, but not this much! Array.map seems to be 90% slower than equivalent code. So now I have a problem.
I'd love to get that extra performance for free, and I would probably reduce the response time by 1ms, but on the other hand the real bottleneck is in some data providers' response times, and I'm depending on benchmarks that could change with new V8 releases.
What should I do? I know I'm taking too much time in a micro optimization, but I feel I can learn an important lesson that could be extrapolated to many similar problems.
If performance definitely matters, it's not unreasonable to try out easy potential optimizations like these. In the first place, Array.map is an extra function call that invokes many extra function calls to do something that can already be done very succinctly with a simple iterator, so it's pretty common for it to perform slower than native for-loops - but as pointed out in comments below, V8 performs many optimizations, and the only way you'll get a good answer on whether it'll make your code appreciably faster is to profile it yourself.
If it's in some critical functions, it's perfectly reasonable to try swapping your .map()s with for-loops and run some benchmarks to see if it makes enough difference to warrant the potential losses in readability and maintainability.
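As a rough sketch of that kind of comparison (the array size is arbitrary, and the numbers will vary between engines and V8 versions, so run it in your own environment):

var input = [];
for (var i = 0; i < 1e6; i++) input.push(i);

// Array.map version
console.time('map');
var doubledMap = input.map(function (x) { return x * 2; });
console.timeEnd('map');

// plain for-loop version doing the same work
console.time('for');
var doubledFor = new Array(input.length);
for (var j = 0; j < input.length; j++) {
    doubledFor[j] = input[j] * 2;
}
console.timeEnd('for');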
This is a question about the typical behaviour of the JIT compiler in modern JavaScript engines. Let's say I have a class A with many fields, instances of which are heavily used from another class B, including within loops. Rather than expose the internals of A, there are a bunch of one-line access methods.
Individually, each method will make little difference to performance, but let's assume that collectively they make a big difference. Will a modern JIT inline these functions?
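For concreteness, the pattern in question looks something like this (a hypothetical sketch; the names are made up):

// A: many fields hidden behind one-line access methods.
function A(x, y) {
    this._x = x;
    this._y = y;
}
A.prototype.getX = function () { return this._x; };
A.prototype.getY = function () { return this._y; };

// B-style code: the accessors are called heavily inside a loop,
// so they are the obvious candidates for inlining.
function sum(points) {
    var total = 0;
    for (var i = 0; i < points.length; i++) {
        total += points[i].getX() + points[i].getY();
    }
    return total;
}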
Late answer, but I think this may help future viewers of the question.
It depends on the complexity of the methods: a side-effecting function may not be inlined, while a simple method will usually be inlined.
However, as #Ry- mentioned in the comments, it is not truly predictable. JavaScript engines take many things into consideration before making an optimization -- to see whether or not it is actually worth it, so there is no objective answer to your question.
The best thing to do is to take your code and profile it to see whether or not the functions are being inlined; this article also shows another way of doing this, if you are serious enough.
I have a node.js file with several functions. Suppose each of these functions requires the underscore module. What is the impact on performance of adding var underS = require("underscore"); in each and every function versus declaring a single var underS = require("underscore"); globally at the top? Which is better for performance?
I just want to add a generalized answer on top of a precise one.
Never let performance impact the broad organization of your codebase unless you seriously have a good (measured) reason to do so.
"Measured" in this context doesn't mean some micro-benchmark. It means profiled in the context of your actual project.
Ignoring this rule early in my career bit me hard. It didn't help me write more efficient code; it simply reduced my productivity to a minimum by making my codebase difficult to maintain, and difficult to stay in love with (a motivating human quality that I think is too often neglected).
So on top of the "no", you shouldn't worry about the performance of importing modules, I would suggest even more strongly, "no", you shouldn't worry about it anyway even if it did impact performance.
If you want to design for performance, performance-critical loopy code tends to make up a very small fraction of a codebase. It's tempting to think of it as a bunch of teeny inefficiencies accumulating to produce a bottleneck, but getting in the habit of profiling will quickly eliminate that superstition. In that little teeny section of your codebase, you might get very elaborate, maybe even implementing some of it in native code. And there, for this little teeny section of your codebase, you might study computer architecture, data-oriented design for cache locality, SIMD, etc. You can get extremely elaborate optimizing that little section of code which actually constitutes a profiling hotspot.
But the key to computational speed is first your own developer speed at producing and maintaining code efficiently by zooming out and focusing on more important priorities. That's what buys you time to zoom in on those little parts that actually do constitute measured hotspots, and affords you the time to optimize them as much as you want. First and foremost is productivity, and this is coming from one working in a field where competitive performance is often one of the sought-out qualities (mesh processing, raytracing, image processing, etc).
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
So basically no, it won't impact performance.
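A small sketch showing the caching in action (assuming underscore is installed locally):

// Both calls resolve to the same file, so they return the same cached object.
var underS1 = require('underscore');
var underS2 = require('underscore');
console.log(underS1 === underS2); // true -- the module is only loaded once

// The cache is visible on require.cache, keyed by resolved filename.
console.log(Object.keys(require.cache)
    .some(function (k) { return k.indexOf('underscore') !== -1; })); // true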
Am I correct to say that JavaScript code isn't compiled, not even JIT? If so, does that mean that comments have an effect on performance, and I should be very careful where I place my comments? Such as placing function comments above and outside the function definition when possible, and definitely avoiding comments inside loops, if I wanted to maximize performance? I know that in most cases (at least in non-loop cases), the change in performance would be negligible, but I think this is something that is good to know and be aware of, especially for front-end/JS developers. Also, a relevant question was asked on a JS assessment I recently took.
Am I correct to say that JavaScript code isn't compiled, not even JIT?
No. Although JavaScript is traditionally an "interpreted" language (although it needn't necessarily be), most JavaScript engines compile it on-the-fly whenever necessary. V8 (the engine in Chrome and NodeJS) used to compile immediately and quickly, then go back and aggressively optimize any code that was used a lot (the old FullCodegen+Crankshaft stack); a while back, having done lots of real-world measurement, they switched to initially parsing to bytecode and interpreting, and then compiling if code is reused much at all (the new Ignition+TurboFan stack), gaining significant memory savings by not compiling run-once setup code. Even engines that are less aggressive almost certainly at least parse the text into some form of bytecode, discarding comments early.
Remember that "interpreted" vs. "compiled" is usually more of an environmental thing than a language thing; there are C interpreters, and there are JavaScript compilers. Languages tend to be closely associated with environments (like how JavaScript tends to be associated with the web browser environment, even though it's always been used more widely than that, even back in 1995), but even then (as we've seen), there can be variation.
If so, does that mean that comments have an effect on performance...
A very, very, very minimal one, on the initial parsing stage. But comments are very easy to scan past, nothing to worry about.
If you're really worried about it, though, you can minify your script with tools like jsmin or the Closure Compiler (even with just simple optimizations). The former will just strip comments and unnecessary whitespace, stuff like that (still pretty effective); the latter does that and actually understands the code and does some inlining and such. So you can comment freely, and then use those tools to ensure that whatever minuscule impact those comments may have when the script is first loaded is bypassed by using minifying tools.
Of course, the thing about JavaScript performance is that it's hard to predict reliably cross-engine, because the engines vary so much. So experiments can be fun:
Here's an experiment which (in theory) reparses/recreates the function every time
Here's one that just parses/creates the function once and reuses it
Result? My take is that there's no discernable difference within the measurement error of the test.
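A rough reconstruction of that kind of experiment (a sketch only; the short comment below stands in for a much longer one):

var body = '/* imagine a very long comment here */ return a + b;';

// Re-parse the source (comment included) on every iteration.
console.time('reparse');
for (var i = 0; i < 10000; i++) {
    var f = new Function('a', 'b', body);
    f(i, i);
}
console.timeEnd('reparse');

// Parse once, reuse the compiled function.
console.time('reuse');
var g = new Function('a', 'b', body);
for (var j = 0; j < 10000; j++) {
    g(j, j);
}
console.timeEnd('reuse');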
The biggest effect that comments have is to bloat the file size and thereby slow down the download of the script. That's why all professional sites use a minifier for the production version to cut the JS down to as small as it gets.
It may have some effect. A very minimal effect, though (even IE6 handles comments correctly! to be confirmed...).
However, most people use a minifier that strips off comments. So it's okay.
Also:
V8 increases performance by compiling JavaScript to native machine code before executing it.
Source
It can prevent functions from being inlined, which affects performance, though this shouldn't really happen.
In some perhaps isolated circumstances, comments definitely somehow bog down code execution. I am writing a lengthy userscript, running in the latest Firefox on Mac via Tampermonkey, and several days of increasingly frustrated troubleshooting came to an end when I stripped the lengthy comments from the script and the script execution suddenly stopped hanging on Facebook. Multiple back-and-forth comparisons running the exact same script on a fresh user account, with the only difference being the comments, proved this to be the case.
How bad is it, to use JavaScript (CoffeeScript) for implementing a heavy computational task? I am concerned with an optimization problem, where an optimal solution cannot be computed that fast.
JavaScript was chosen in the first place, because visualization is required and instead of adding the overhead for communication between different processes the decision was to just implement everything in JavaScript.
I don't see a problem with that, especially when looking at the benchmarks game. But I often receive the question: Why on earth JavaScript?
I would argue in the following way: It is an optimization problem, NP-hard. It does not matter how much faster another language would be, since this only adds a constant factor to the running time - is that true?
Brendan Eich (Mozilla's CTO and creator of JavaScript) seems to think so.
http://brendaneich.com/2011/09/capitoljs-rivertrail/
I took time away from the Mozilla all-hands last week to help out on-stage at the Intel Developer Forum with the introduction of RiverTrail, Intel’s technology demonstrator for Parallel JS — JavaScript utilizing multicore (CPU) and ultimately graphics (GPU) parallel processing power, without shared memory threads (which suck).
See especially his demo of JS creating a scene graph:
Here is my screencast of the demo. Alas, since RiverTrail currently targets the CPU and its short vector unit (SSE4), and my screencast software uses the same parallel hardware, the frame rate is not what it should be. But not to worry, we’re working on GPU targeting too.
At CapitolJS and without ScreenFlow running, I saw frame rates above 35 for the Parallel demo, compared to 3 or 2 for Sequential.
If JavaScript is working for you and meeting your requirements, what do you care what other people think?
One way to answer the question would be to benchmark it against an implementation in a "good" language (your terms, not mine) and see how much of a difference it makes.
I don't buy the visualization argument. If your "good" language implementation was communicating with a front end you might be able to have faster performance and visualization. You might be overstating the cost of communication to make yourself feel better.
I also don't like your last argument. JavaScript is single-threaded; another language might offer parallelism that JavaScript can't. The algorithm can make a huge difference; perhaps you've settled on one that is far from optimal.
I can tell you that no one in their right mind would consider using JavaScript for computationally intensive tasks like scientific computing. SO did have a reference to a JavaScript linear algebra library, but I doubt that it could be used for analysis of non-linear systems with millions of degrees of freedom. I don't know what kind of optimization problem you're dealing with.
With that said, I'd wonder if it's possible to treat this question fairly in a forum like this. It could lead to a lot of back and forth and argument.
Are you seeking a justification for your views or do you want alternatives? It's hard to tell.
Well, it's not exactly a constant factor; JavaScript is usually measured as some multiple slower than Java, and as you can see from the benchmarks game results, how much slower really depends on the algorithm. Those results are for V8, so it's also going to depend on the browser you are running it in. V8 is the top performer here, but it can run dramatically slower on other VMs: ~2x-10x.
If your problem can be subdivided across parallel processors, then the new Workers API can dramatically improve the performance of JavaScript. So it's not single-threaded anymore, and it can be really fast.
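A minimal sketch of handing part of the work to a worker (the file name, message shape, and cost function are all assumptions for illustration):

// main.js -- hand a chunk of the search space to a worker.
var worker = new Worker('solver-worker.js');
worker.onmessage = function (e) {
    console.log('best candidate from worker:', e.data);
};
worker.postMessage({ candidates: [3, 17, 40, 99] });

// solver-worker.js -- runs on its own thread, no shared memory with the page.
function cost(candidate) {              // placeholder objective function
    return Math.abs(candidate - 42);
}
onmessage = function (e) {
    var best = null;
    e.data.candidates.forEach(function (c) {
        if (best === null || cost(c) < cost(best)) best = c;
    });
    postMessage(best);
};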
Visualization can be done from the server or from the client. If you think lots of people are going to be executing your program at once, you might not want to run it on the server. If one of these eats up that much processor time, think what 1000 of them would do to your server. With JavaScript you do get a cheap parallel processor by federating all the browsers. But as far as visualization goes, it could be done on the server and sent to the client as it works. It's just a question of what you think is easier.
The only way to answer this question is to measure and evaluate those measurements as every problem and application has different needs. There is no absolute answer that covers all situations.
If you implement your app/algorithm in javascript, profile that javascript to find out where the performance bottlenecks are and optimize them as much as possible and it's still too slow for your application, then you need a different approach.
If, on the other hand, you already know that this is a massively time draining problem and even in the fastest language possible, it will still be a meaningful bottleneck to the performance of your application, then you already know that javascript is not the best choice as it will seldom (if ever) be the fastest executing option. In that case, you need to figure out if the communication between some sort of native code implementation and the browser is feasible and will perform well enough and go from there.
As for NP-hard vs. a constant factor, I think you're fooling yourself. NP-hard means you need to first make the algorithm as smart as possible so you've reduced the computation to the smallest/fastest possible problem. But, even then, the constant factor can still be massively meaningful to your application. A constant factor could easily be 2x or even 10x, which would still be very meaningful even though it's constant. Imagine the NP-hard part was 20 seconds in native code and the constant factor for JavaScript was 10x slower. Now you're looking at 20 sec vs. 200 sec. That's probably the difference between something that might work for a user and something that might not.