I want to ask whether anyone has measured the impact of JavaScript obfuscators on the resulting code. I am targeting mobile users, so speed is crucial. In particular, I am trying to run 2 or 3 different obfuscators on the same code in a row, which obfuscates the code very well, but I am afraid it will have some speed impact.
It shouldn't. Compilers/interpreters couldn't care less what your symbols are, as long as they are correct.
Most JavaScript obfuscators only minify and rename local variables, which is extremely easy to reverse-engineer with a beautifier.
The best combination I've found is the DojoToolkit and the Closure Compiler in Advanced Mode.
Closure in Advanced Mode makes JavaScript code almost impossible to reverse-engineer, even after passing through a beautifier. Once your JavaScript code is obfuscated beyond recognition, with no realistic possibility of reverse-engineering, your HTML won't disclose much of your secrets.
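One caveat worth knowing before relying on Advanced Mode (an illustrative sketch, not actual compiler output): it renames dotted property accesses but never rewrites quoted string keys, so mixing the two styles on the same property breaks the compiled code.

```javascript
// Advanced Mode may rename `settings.timeout` (e.g. to `settings.a`), but the
// string "timeout" is left untouched, so the two accesses stop matching after
// compilation. Before compilation this prints 500; after, it would print undefined.
var settings = {};
settings.timeout = 500;        // dotted access: subject to renaming
var t = settings["timeout"];   // quoted access: never renamed

console.log(t);
```

The rule of thumb is to use one access style consistently for each property.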
Here is a link on using the Dojo Toolkit with the Closure Compiler in Advanced Mode for mobile applications:
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
The Closure Compiler in Advanced Mode actually makes JavaScript run faster in mobile environments thanks to its industrial-scale optimizations. For example, inlining of functions, devirtualization of prototype methods, namespace folding, dead-code removal, etc. all make the code run faster, so it is not only an obfuscator, it is an optimizing compiler as well.
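A hand-illustrated sketch of two of those optimizations (this is the idea, not real compiler output):

```javascript
// What you write: a helper plus a function nothing ever calls.
function square(x) { return x * x; }
function neverCalled() { return "dead code"; }
var area = square(5);

// Roughly what Advanced Mode emits: `neverCalled` is removed outright, and
// `square` is inlined at its single call site, so no call overhead remains.
var inlined = 5 * 5;

console.log(area, inlined); // both 25
```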
In my own benchmarks, the compiled code runs around 10-20% faster on the iPad and 30% faster on Android. Memory usage is also reduced.
Your question really needs an analysis of your own JavaScript in order to arrive at a useful answer.
Often, though, obfuscation actually speeds up JavaScript, since file sizes are reduced (faster loading) and symbols get short names (less to compare against).
If the obfuscator does some encoding and calls eval, like some do, then there will be a performance penalty at script load time. After that has run, there should be no difference and, as stated before, it may speed up your code due to the smaller size.
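A minimal sketch of that load-time cost (a stand-in for what eval-based packers do, not their actual output): the eval happens once when the script loads, and the resulting function then runs at full speed.

```javascript
// Packer-style tools ship the real code as an encoded string and eval it at
// load time; this sketch skips the decoding step but keeps the eval.
var packed = "(function (a, b) { return a + b; })";

var t0 = Date.now();
var add = eval(packed);            // one-time parse/evaluate cost at load
var loadPenaltyMs = Date.now() - t0;

console.log(add(2, 3));            // 5 -- runs at normal speed afterwards
console.log("load penalty (ms):", loadPenaltyMs);
```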
That depends on what you mean by obfuscation.
If you're referring to minification, using a tool like JSMin, then the effect is nil.
If you're talking about something like Packer, the eval process actually does have an impact on how long it takes for the code to execute. On a slow device, that impact can be significant.
Related
I am working on a JavaScript/HTML5 GIF editor and I am wondering how I can test it in a slower environment. It works great on my PC, but since it uses heavy JavaScript and algorithms, I need to see whether it works smoothly on a less powerful processor.
I recommend setting up a virtual machine using VMware Player or VirtualBox. You can adjust attributes like processor speed, number of processor cores, and memory. This will help you test your code in a slower environment.
What an odd request! Usually people want things faster and faster.
Using a virtual machine is a good solution here. It lets you allocate exactly how much power the computer gets, and it can make a very suitable testing environment for you.
If you want inefficiency, look no further than Internet Explorer. You just need to make sure it supports your HTML5 and your JS.
Using older versions of some of the more capable browsers might be a good idea too.
Also, and this is a bit of a hack, opening your program several times in the same browser might be something to look into. It will hog up the RAM, and your processor will be under more stress and will not perform as well as it would in a normal situation.
Just thought I'd throw out every suggestion I could think of :)
After much reading, it seems that when people say "browser engine", they are referring to the layout engine, such as Gecko or WebKit.
I also know that the layout engine is basically responsible for "painting" the screen and the javascript engine is used for interpreting.
My question, though: for a modern web app, which has a bigger impact on performance? And how related are these two? What uses do they have outside the browser, and what other functions do they serve?
Thank you very much.
Whichever engine your content taxes the most will have the biggest impact. If you have a gigantic, complex HTML document with thousands of complex nodes and elaborate CSS, you will be taxing the layout/rendering engine a lot, and therefore you might notice differences between the various browsers. However, for the most part I believe your content has to be pretty darn complex for significant differences to manifest.
On the javascript side, if your page is highly dynamic with lots of callbacks processing many rapid events and making big changes to the document in response to those events, the javascript engine will have a larger impact on your page's performance.
Outside of a browser, sometimes the layout/rendering engine will be used in a "headless" program such as PhantomJS. The Javascript engines can be used for interpreting javascript in non-browser environments as is done with node.js, Rhino, etc.
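The engine-reuse point is easy to see with any DOM-free script: because it touches no layout or document APIs, a standalone JavaScript engine (node.js, Rhino, etc.) can run it unchanged. A trivial example:

```javascript
// Pure JavaScript, no DOM: counts word occurrences. It runs identically in a
// browser console or under node.js, because only the language engine is needed.
var words = ["layout", "engine", "vs", "javascript", "engine"];
var counts = {};
words.forEach(function (w) { counts[w] = (counts[w] || 0) + 1; });
console.log(counts);
```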
The online closure compiler is amazing:
http://closure-compiler.appspot.com/home
However, when using the Advanced option, will it affect the performance of the script at all? I.e., will it make it faster or slower in general, or does it depend on the script itself? Or is there no performance impact at all?
I only ask because some scripts I write will be performance-critical, and while I know the answer to this question is "try and see", I'm not very good at running these sorts of tests and don't know where to start.
Here are two points from the Closure Compiler FAQ that may interest you.
Does the compiler make any trade-off between my application's execution speed and download code size?
Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).
Does the compiler optimize for speed?
In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.
So it would seem that it will depend on the code you've written. Could be faster, but there's a chance it could be a little slower. Ultimately, testing will be required.
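If you're unsure where to start with testing, a rough first step is a timing harness: run the same performance-critical function from both the original and the compiled script and compare the numbers. A minimal sketch (real benchmarks need warm-up runs and repetition to be trustworthy):

```javascript
// Time a function over many iterations so the measurement isn't dominated by
// timer resolution. Run this against both the original and the
// Closure-compiled version of your performance-critical code.
function timeIt(label, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) fn();
  var elapsed = Date.now() - start;
  console.log(label + ": " + elapsed + " ms");
  return elapsed;
}

// Example workload standing in for your real code.
var elapsed = timeIt("sample workload", function () {
  var sum = 0;
  for (var i = 0; i < 10000; i++) sum += i;
}, 100);
```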
Is it possible to run JavaScript code in parallel in the browser? I'm willing to sacrifice some browser support (IE, Opera, anything else) to gain some edge here.
If you don't have to manipulate the DOM, you could use Web Workers. There are a few other restrictions, but check it out at http://ejohn.org/blog/web-workers/
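A minimal inline-worker sketch (browser-only; the names and fallback are illustrative). Keeping the heavy function standalone means the same logic runs whether or not worker support is available:

```javascript
// The CPU-heavy work, kept as a standalone function with no DOM access.
function sumChunk(data) {
  return data.reduce(function (a, b) { return a + b; }, 0);
}

if (typeof Worker !== "undefined" && typeof Blob !== "undefined") {
  // Build the worker from a string so no separate .js file is needed.
  var src = "onmessage = function (e) { postMessage((" + sumChunk.toString() + ")(e.data)); };";
  var worker = new Worker(URL.createObjectURL(new Blob([src])));
  worker.onmessage = function (e) { console.log("worker result:", e.data); };
  worker.postMessage([1, 2, 3, 4]);
} else {
  // No Worker API (e.g. plain node.js): fall back to the main thread.
  console.log("main-thread result:", sumChunk([1, 2, 3, 4]));
}
```

Note the restriction mentioned above: the worker body can only see what is passed via postMessage, never the DOM.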
Parallel.js of parallel.js.org (see also github source) is a single file JS library that has a nice API for multithreaded processing in JavaScript. It runs both in web browsers and in Node.js.
Perhaps it would be better to recode your JavaScript in something that generally runs faster, rather than trying to speed up the JavaScript by going parallel. (I expect you'll find the cost of forking parallel JavaScript activities is pretty high, too, and that may well wipe out any possible parallel gain; this is a common problem with parallel programming.)
JavaScript is interpreted in most browsers, IIRC, and it is dynamic on top of that, which means it, well, runs slowly.
I'm under the impression you can write Java code and run it under browser plugins. Java is type safe and JIT compiles to machine code. I'd expect that any big computation done in Javascript would run a lot faster in Java. I'm not specifically suggesting Java; any compiled language for which you can get a plug in would do.
As an alternative, Google provides Closure, a JavaScript compiler. It is claimed to be a compiler, but it looks like an optimizer to me, and I don't know how much it "optimizes". But perhaps you can use it. I'd expect the Closure Compiler to be built into Chrome (though I don't know that for a fact), and maybe just running Chrome would get you your JavaScript compiler "for free".
EDIT: After reading about what Closure does, as a compiler guy I'm not much impressed. It looks like much of the emphasis is on reducing code size, which minimizes download time but not necessarily performance. The one good thing they do is function inlining. I doubt that will help as much as switching to a truly compiled language.
EDIT2: Apparently the "Closure" compiler is different from the engine that runs JavaScript in Chrome. I'm told, but don't know for a fact, that the Chrome engine has a real compiler.
Intel is coming up with an open-source project codenamed River Trail; check out http://www.theregister.co.uk/2011/09/17/intel_parallel_javascript/
What basic tips should we observe when designing web pages (HTML/CSS/JavaScript) for the highest compatibility with the major browsers (IE, Firefox, Opera, Chrome, Safari)?
thanks
Validate often and squash all validation errors by the time you make a public release. The purpose of validation, after all, is to parse the html as a standards-compliant browser would and then avoid the errors that a browser's parser would find.
Apply progressive-enhancement techniques. Often that means moving some of the complexity of dynamic pages to the back-end (e.g. php, django, what have you) so that you can have complex functionality that doesn't break in one of the thousands of different client environments in which a page's javascript will run. jQuery is excellent for narrowing the focus of your javascript development towards feature enhancement instead of open-ended features-in-javascript, and it'll help with cross-browser compatibility as well.
IE - Test in at least one live version of IE 7 or 8. Unfortunately, there really isn't any way around this, because even IE8 misbehaves like no other browser. If possible, limit your goal of support for IE6 to html/css (i.e. don't promise support for user-enhancement-features via javascript in ie6). If possible, drop support for IE 5.5 and below.
For JavaScript, use libraries that are intended to be platform-independent (e.g. jQuery, Prototype). Not everything will be, but it'll make your life much easier.
For CSS, I'd say follow standards, but IE tends to cause problems across the board.
Which means, you need to test, and test often. Selenium is awesome for automated functional testing, and it works with pretty much every browser. We use a Selenium RC server on a Windows machine to test IE and Firefox, which are then controlled from our standard Java JUnit tests.
Keep things simple.
The simpler your markup, CSS, and JavaScript, the easier it will be to track down incompatibilities. Try to limit yourself to CSS1 for as much as possible. Only use more modern CSS2/3 features when there is no easier way to accomplish your task.
Don't use tables for layout; they just add extra complexity. Using semantic markup not only makes things maintainable, but also gets you the best cross-browser support if done properly.
Keep in mind that floats are evil, but are also very powerful. Use them generously, but avoid trying to clear floats. Use overflow instead.
Use a JavaScript framework. Framework developers have smoothed out most of the cross-browser bugs for you. I recommend jQuery, but you can choose any framework your developers feel comfortable with. My advice is to:
Use a framework that doesn't alter the prototypes of native objects (like Prototype JS does)
Use a framework that doesn't introduce many global variables. Most frameworks follow this rule.
Aside from those 2 rules for JavaScript, try using closures to encapsulate code so you don't introduce your own global variables.
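The closure approach looks like this (a minimal sketch with made-up names): an immediately-invoked function expression keeps helpers private and exposes one namespace object instead of many globals.

```javascript
// An IIFE: `counter` and `increment` live inside the closure; only the
// returned `myApp` object is visible to the rest of the page.
var myApp = (function () {
  var counter = 0; // private state, unreachable from outside

  function increment() {
    counter += 1;
    return counter;
  }

  return { increment: increment }; // the single public surface
})();

console.log(myApp.increment()); // 1
console.log(myApp.increment()); // 2
console.log(typeof counter);    // "undefined" -- no global leaked
```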
One strategy I use is to start my CSS with a set of rules that blank everything out. Each browser may have different values for element attributes so ensuring that everything is consistent from the get-go can be handy. Here is an example reset.css
http://meyerweb.com/eric/tools/css/reset/
Take a look at this great article: Browser Compatibility Tutorial
Remember: sometimes something just won't work in a specific browser (maybe a left dotted border won't show in Chrome). Try not to be upset about it! :) Cross-compatibility is an art that takes a lot of time.