The online closure compiler is amazing:
http://closure-compiler.appspot.com/home
However, when using the advanced option, will it affect the performance of the script at all? I.e., will it make it faster or slower in general, or does it depend on the script itself? Or is there no performance hit at all?
I only ask because some of the scripts I write will be performance-critical. I know the answer is probably "try it and see", but I'm not very good at running these sorts of tests and I don't know where to start.
Here are two points from the Closure Compiler FAQ that may interest you.
Does the compiler make any trade-off between my application's execution speed and download code size?
Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).
Does the compiler optimize for speed?
In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.
So it would seem that it will depend on the code you've written. Could be faster, but there's a chance it could be a little slower. Ultimately, testing will be required.
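If you do want a starting point for "try and see", a rough micro-benchmark like the sketch below is usually enough to spot a regression. Note that `criticalWork` is just a placeholder for whatever hot routine you care about, and `performance.now()` may need a `Date.now()` fallback in older browsers.

    // Load the original script on one page and the compiled script on another,
    // then time the same hot function in both builds.
    function benchmark(label, fn, iterations) {
      var start = performance.now();
      for (var i = 0; i < iterations; i++) {
        fn();
      }
      var elapsed = performance.now() - start;
      console.log(label + ': ' + elapsed.toFixed(1) + ' ms for ' + iterations + ' runs');
    }

    benchmark('criticalWork', criticalWork, 10000);

Run it several times per build and compare medians; single runs are too noisy to trust.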
Related
Are there any performance benefits of the latter compared to the former? When I tested it on my own, the time it took for one of my projects to load when already transpiled was much shorter than when it used babel/register. Apart from start-up time, though, I'm not entirely sure what I'd write to benchmark the two fairly. Does anyone know whether the overhead Babel adds comes only from babel/register transpiling code as it runs, or is it slow no matter what you do?
You are correct that it affects startup time (drastically, depending on how large your project is). As for actual runtime, it should make exactly zero difference unless there is a bug in Node itself (which would sadly be unlikely to be fixed, since it's deprecated).
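To make the comparison concrete, here is a minimal sketch of the two setups. The module name follows the current Babel packages (@babel/register); older projects used babel/register or babel-register, and the build command is only an example.

    // Option 1: transpile on the fly at startup (what babel/register does).
    // Every require() after this line is compiled in-process, which is the
    // startup cost mentioned above.
    require('@babel/register');
    require('./app');

    // Option 2: transpile ahead of time in your build step, e.g.
    //   babel src --out-dir lib
    // and then run the plain, already-transpiled output:
    // require('./lib/app');

Option 2 pays the transpilation cost once, at build time, instead of on every startup.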
I am stuck with a big problem: I am working on a large project, and the browser hangs automatically when the JavaScript executes.
"How can I detect how much memory JavaScript is using and clear the memory at a regular interval? Is that possible?"
You don't have any way to play with memory directly. JavaScript runs in a sandboxed environment, so you have no access to memory management at all. The garbage collector takes care of it; you can sometimes nudge it toward what you want, but its timing is unpredictable. Don't count on it.
Rather, for your problem, you can use Chrome Inspector's Profiler.
What does it do? Well... it profiles the webpage you're on. You can see how long each function takes and, most importantly, where your bottleneck is.
Try it in Chrome, specifically.
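If you prefer to measure from code rather than clicking around the Profiles panel, something like the following narrows things down quickly. Here `suspectFunction` is a stand-in for the code you believe is hanging the browser, and console.profile is Chrome-specific.

    // console.time / console.timeEnd print elapsed time to the console;
    // console.profile / console.profileEnd record a CPU profile that shows
    // up in the DevTools Profiles panel.
    console.profile('suspect');
    console.time('suspect');

    suspectFunction(); // placeholder for the code that hangs the page

    console.timeEnd('suspect');
    console.profileEnd('suspect');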
Chrome's V8 has a brilliant generational garbage collector: three threads constantly poll the three generation types, and I think they run at roughly 10, 50 and 200 millisecond intervals (I may have the figures wrong, but the principle stands, with the interval increasing for older generations).
This is aggressive, and ensures that memory usage remains low.
In spite of this, if your code is hogging memory in Chrome, then you can be sure that the issue is with the code. It could be that:
(a) Your code is really unoptimized, or
(b) It is really working on very large data that is probably not best suited for the client (e.g. an excessively heavy page with tons of widgets, DOM nodes, etc.)
Care to post some snippets?
I need an embeddable language for tasks similar to query execution in MongoDB. The language should be fast and should have both a JIT and an interpreter (JIT compilation for frequently run scripts, plain interpretation for one-time scripts). It should have an in-memory runtime that I populate with specific API functions (or classes, whatever) by hand, with nothing else "built in" such as gettime, thread spawning or the like. It should have a C API and it should work on ARM (MIPS would also be nice). A small footprint would also be nice, but that is not critical.
I have two candidates:
Google V8.
SpiderMonkey (IonMonkey's ARM support was announced, AFAIK).
I have no experience embedding languages into C projects, so I have a few questions: recently there was a rumor that V8 is not thread-safe; does this problem still exist? If so, where can that lack of thread safety cause problems?
I would also be glad if anyone suggested an embeddable language that is more suitable for my requirements (except Lua; I can't find any advantages over JS apart from a smaller footprint, which I don't care about).
I'm not sure how SpiderMonkey's multithreading embeddability compares to V8's, but I do know that it's possible to do with SpiderMonkey -- we have a few multiprogramming embedders on dev.tech.js-engine that you may want to post followup questions to.
Our web workers implementation in the browser uses one runtime instance per worker (you can instantiate the runtime multiple times in a single process) -- we've moved away from a multithread-safe single-runtime approach over the past few years because it's unnecessary for the web and adds a significant amount of complexity to the engine.
An alternative to multiprogramming is also the asynchronous, select-based, run-to-completion approach a la node.
A nit: I don't think an interpreter is really a requirement of yours -- your requirement is fast start-up times for one-off code. SpiderMonkey has an interpreter and V8 does not, but V8 has a fast-code-emission JIT compiler (which we tend to call "baseline") that offers comparable performance in that area. That capability is an important requirement for JS on the web in general. :-)
I want to ask if anyone has measured the impact of JavaScript obfuscators on the resulting code. I am targeting mobile users, so speed is crucial. In particular, I am running 2 or 3 different obfuscators on the same code in a row, which obfuscates the code very well, but I am afraid it will have some speed impact.
It shouldn't. Compilers and interpreters couldn't care less what your symbols are, as long as they are correct.
Most JavaScript obfuscators only minify and rename local variables, which is extremely easy to reverse-engineer with a beautifier.
The best combination I've found is the DojoToolkit and the Closure Compiler in Advanced Mode.
Closure in Advanced Mode makes JavaScript code almost impossible to reverse-engineer, even after passing it through a beautifier. Once your JavaScript code is obfuscated beyond recognition and beyond any realistic possibility of reverse-engineering, your HTML won't disclose much of your secrets.
This link describes using the Dojo Toolkit with the Closure Compiler in Advanced Mode for mobile applications:
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
The Closure Compiler in Advanced Mode actually makes JavaScript run faster in mobile environments due to its industrial-scale optimizations. For example, inlining of functions, virtualization of prototype methods, namespace folding, dead-code removal, etc. all make code run faster, so it is not only an obfuscator, it is an optimizing compiler as well.
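As a rough illustration of what Advanced Mode does (this is a sketch, not real compiler output; the exact renaming varies between compiler versions):

    // Input: a small namespace with one exported entry point.
    var app = {
      square: function (x) { return x * x; },
      unused: function () { return 'never called'; },
      run: function () { return app.square(21); }
    };
    window['run'] = app.run;

    // Roughly what Advanced Mode produces: app.unused is removed as dead code,
    // app.square is inlined into run, and the namespace is flattened away.
    window.run = function () { return 21 * 21; };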
In my own benchmarks, the compiled code runs around 10-20% faster on the iPad and around 30% faster on Android. Memory usage is also reduced.
Your question really needs an analysis of your own JavaScript in order to arrive at a useful answer.
Often, though, obfuscation actually speeds up JavaScript, since the file sizes are reduced (faster loading) and symbols get short names (less to compare against).
If the obfuscator does some encoding and calls eval, as some do, then there will be a performance penalty at script load time. After that has run, there should be no difference and, as stated before, it may speed up your code due to the smaller size.
That depends on what you mean by obfuscation.
If you're referring to minification, using a tool like JSMin, then the effect is nil.
If you're talking about something like Packer, the eval process actually does have an impact on how long it takes for the code to execute. On a slow device, that impact can be significant.
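To see why, here is the general shape of an eval-based packer (illustrative only, not actual Packer output): the payload ships as an encoded string that has to be decoded and re-parsed via eval before any of your code runs.

    // The whole script arrives as an encoded string
    // (base64 here purely for illustration; Packer uses its own scheme)...
    var packed = "YWxlcnQoJ2hlbGxvJyk=";
    // ...and at load time it is decoded and handed to eval, so the browser
    // effectively parses the script twice: once as a string literal, once inside eval.
    eval(atob(packed));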
Is it possible to run JavaScript code in parallel in the browser? I'm willing to sacrifice some browser support (IE, Opera, anything else) to gain some edge here.
If you don't have to manipulate the DOM, you could use Web Workers. There are a few other restrictions, but check it out at http://ejohn.org/blog/web-workers/
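A minimal sketch of the pattern (file names are placeholders):

    // main.js: spawn a worker and hand it some work.
    // Workers can't touch the DOM; they communicate only via postMessage.
    var worker = new Worker('worker.js');
    worker.onmessage = function (event) {
      console.log('result from worker:', event.data);
    };
    worker.postMessage({ numbers: [1, 2, 3, 4] });

    // worker.js: runs on a separate thread, in parallel with the page.
    self.onmessage = function (event) {
      var sum = event.data.numbers.reduce(function (a, b) { return a + b; }, 0);
      self.postMessage(sum);
    };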
Parallel.js from parallel.js.org (see also the GitHub source) is a single-file JS library with a nice API for multithreaded processing in JavaScript. It runs both in web browsers and in Node.js.
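A hedged sketch based on the API shown on parallel.js.org (check the current docs for the exact signatures):

    // Parallel.js splits the data across workers (or child processes in Node)
    // and applies the function to each element in parallel.
    var p = new Parallel([10, 20, 30, 40]);

    p.map(function (n) { return n * n; })   // runs in worker threads
     .then(function (squares) {
       console.log(squares);                // [100, 400, 900, 1600]
     });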
Perhaps it would be better to recode your JavaScript in something that generally runs faster, rather than trying to speed up the JavaScript by going parallel. (I expect you'll find the cost of forking off parallel JavaScript activities is pretty high too, and that may well wipe out any possible parallel gain; this is a common problem with parallel programming.)
JavaScript is interpreted in most browsers, IIRC, and it is dynamic on top of that, which means it, well, runs slowly.
I'm under the impression you can write Java code and run it under a browser plugin. Java is type-safe and JIT-compiles to machine code. I'd expect any big computation done in JavaScript to run a lot faster in Java. I'm not specifically suggesting Java; any compiled language for which you can get a plugin would do.
As an alternative, Google provides Closure, a JavaScript compiler. It is claimed to be a compiler, but it looks like an optimizer to me, and I don't know how much it "optimizes". But perhaps you can use that. I'd expect the Closure compiler to be built into Chrome (though I don't know that for a fact), and maybe just running Chrome would get you your JavaScript compiler "for free".
EDIT: After reading about what Closure does, as a compiler guy I'm not much impressed. Much of the emphasis seems to be on reducing code size, which minimizes download time but not necessarily execution time. The one good thing they do is function inlining. I doubt that will help as much as switching to a truly compiled language.
EDIT2: Apparently the "Closure" compiler is different from the engine that runs JavaScript in Chrome. I'm told, but don't know for a fact, that the Chrome engine has a real compiler.
Intel is coming up with an open-source project codenamed River Trail; check out http://www.theregister.co.uk/2011/09/17/intel_parallel_javascript/