Difference between layout engine and JavaScript engine

After much reading, it seems that when people say "browser engine", they are referring to the layout engine, such as Gecko or WebKit.
I also know that the layout engine is basically responsible for "painting" the screen, while the JavaScript engine is responsible for interpreting and executing the script.
My question, though: for a modern web app, which has a bigger impact on performance? How related are these two? And what uses do they have outside the browser?
Thank you very much.

Whichever engine your content taxes the most will have the biggest impact. If you have a gigantic, complex HTML document with thousands of complex nodes and elaborate CSS, you will be taxing the layout/rendering engine a lot, and therefore you might notice differences between the various browsers. However, for the most part I believe your content has to be pretty darn complex for significant differences to manifest.
On the javascript side, if your page is highly dynamic with lots of callbacks processing many rapid events and making big changes to the document in response to those events, the javascript engine will have a larger impact on your page's performance.
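To make the split concrete, here is a minimal sketch (assuming a made-up 'list' element): both versions do the same JavaScript work, but the first taxes the layout engine far more by mutating the live document a thousand times.

    // Hypothetical 'list' element; both loops do the same JS work,
    // but the first hands the layout engine 1000 separate mutations.
    var list = document.getElementById('list');

    // Taxes the layout/rendering engine: one live DOM mutation per pass.
    for (var i = 0; i < 1000; i++) {
      var item = document.createElement('li');
      item.textContent = 'item ' + i;
      list.appendChild(item);
    }

    // Kinder to the layout engine: build off-document, insert once.
    var frag = document.createDocumentFragment();
    for (var j = 0; j < 1000; j++) {
      var item2 = document.createElement('li');
      item2.textContent = 'item ' + j;
      frag.appendChild(item2);
    }
    list.appendChild(frag);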
Outside of a browser, sometimes the layout/rendering engine will be used in a "headless" program such as PhantomJS. The Javascript engines can be used for interpreting javascript in non-browser environments as is done with node.js, Rhino, etc.
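As a small illustration of the second case, here is a JavaScript engine running with no layout engine and no DOM at all, as a Node.js script:

    // A JavaScript engine without a browser: no document, no window.
    // The host (Node.js here) supplies different globals instead.
    var os = require('os');

    console.log('platform: ' + os.platform());
    console.log('cores: ' + os.cpus().length);
    console.log('typeof document: ' + typeof document); // "undefined"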

Related

W3C DOM and JavaScript

I was reading about the W3C Document Object Model and was excited to learn that different programming languages have to implement its interfaces accordingly. Like other languages, JavaScript maintains a DOM implementation.
So I'm curious to know about the following questions:
1. Which versions of JavaScript implement DOM Level 1, 2, 3, and so forth?
2. Are they all implemented in JavaScript?
3. Are they implemented by JavaScript, or implemented by ECMAScript and then followed by JavaScript?
4. What are the IDL definitions described in the W3C DOM? Do JavaScript developers need to understand them, or are they a symbol of implementation by HTML?
The pedantic answer is "none".
There is no formal mapping between JS versions and DOM specifications.
In general, JS versioning has all but been abandoned (save for major overhauls), though version numbers can serve as signposts for when you might start to consider feature-checking.
This is because...
Not really, no.
That is to say, yes, the APIs which you will use to interface with the HTML DOM are all implemented in JavaScript...
However, no browser has a stable, feature-complete implementation of either JS or the HTML DOM[1-4].
Because both specs are so large, and ever-changing, different vendors have prioritized different features at different times, leading to patches of incompatibility.
To further this actual answer, the JS spec says nothing of the DOM or BOM ("Browser Object Model") APIs.
This is the reason #1 must be answered "none": different DOM/BOM combinations on different JS implementations lead to the fundamental inability to say "all JS1.7-compliant browsers are DOM3-compliant."
The truth is that no browser is wholly compliant with either spec, and neither spec is the latest, anyway. As for technical-implementation (the code behind the API), there are no rules, so long as the behaviour is well-defined. Some browsers defer to C/C++ for core JS/DOM/BOM functionality, while older IE browsers had an ActiveX layer between the browser and JS DOM access (making touching elements for any reason arbitrarily expensive).
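That cost is why a common defensive habit, sketched below with a made-up 'status' element, is to cross the JS-to-DOM boundary as rarely as possible and do the bulk of the work in plain JavaScript:

    // Made-up 'status' element; the idea is to touch the DOM once,
    // not once per iteration.

    // Costly where DOM access is expensive: 100 boundary crossings,
    // each of which may also invalidate layout.
    for (var i = 0; i < 100; i++) {
      document.getElementById('status').innerHTML += '.';
    }

    // Cheaper: look the node up once, build the string in pure JS,
    // and write it back in a single operation.
    var statusNode = document.getElementById('status');
    var dots = '';
    for (var j = 0; j < 100; j++) {
      dots += '.';
    }
    statusNode.innerHTML += dots;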
Here's the rub.
Most people would consider them to be different things.
Most people would think "JS is the thing that you use in the browser, to do your scripting.".
Really, ECMAScript and JavaScript are the same thing, and "JavaScript" is a Sun (now Oracle) trademark... how none of us are getting sued is a mystery.
JS/ECMAS knows nothing of DOM or BOM, and it's up to the vendors to include DOM-access in their browser (on a per-feature, rather than per-version basis). It should also be noted that while VendorA might implement a feature from the spec, and VendorB might omit it, VendorC might have an off-standard implementation of it, and also implement a similar but completely out-of-spec feature, as well.
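That per-feature, per-vendor patchwork is why so much client code probes for an API and falls back to prefixed or substitute versions. A minimal sketch, using the historically prefixed requestAnimationFrame as the example:

    // Probe for the standard API, then the vendor-prefixed variants,
    // then fall back to a timer if nothing else is available.
    var raf = window.requestAnimationFrame ||
              window.webkitRequestAnimationFrame ||  // older WebKit
              window.mozRequestAnimationFrame ||     // older Gecko
              function (cb) {                        // crude fallback
                return window.setTimeout(cb, 1000 / 60);
              };

    // Call with window as the receiver; some engines throw otherwise.
    raf.call(window, function () {
      console.log('next frame');
    });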
Don't worry about the DOM implementation specifics.
As a JS-dev, you won't need to know or care what a Java implementation of an HTML node might look like.
Even with WebIDL, and moving away from the old-world Java-centric view, as far as day-to-day usage of JS as a language, the DOM-node interfaces are as dry as toast, unbuttered, face-down in a sand dune, unless it's really what you're into.
Even then, it's more for people who make the browsers, and not the people who make things which run in them.
These aren't all of the answers. And while I've tried to remain objective, I'm sure there's a little subjectivity in there, as these things aren't wholly cut and dry. I've tried to be, at least, factual.
From an engineering perspective, being careful about how and when you use the DOM in client-side JS is important -- both for keeping code portable and for letting each language in the client-side stack access the HTML in question. Otherwise you end up doing somersaults in JS to accommodate a site built entirely through DOM construction in JS.
From a pragmatic standpoint, rather than trying to match features to versions, use sites like http://caniuse.com to match features to browser versions. It's much more productive.
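As a minimal sketch of that approach (the selector is a made-up example), test the capability itself rather than sniffing versions:

    // Test the feature, not the browser.
    if (document.querySelectorAll) {
      var cells = document.querySelectorAll('td.price');
      console.log('found ' + cells.length + ' cells');
    } else {
      // Fall back to older DOM traversal, or load a shim here.
    }

    // Anti-pattern by contrast: inferring features from a version.
    // if (navigator.userAgent.indexOf('MSIE 8') !== -1) { ... }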
And have fun.

Netscape Enterprise Server and Server-Side JavaScript (SSJS) vs Node.js

What are the major differences between the Netscape Enterprise Server implementation of Server-Side JavaScript (SSJS) and the node.js implementation?
Why did Netscape's implementation not gain attention, while Node.js seems to be far more popular?
Back in 1999/2000, I used to work at a company that used Netscape Server and SSJS. I don't know how popular it was at the time, but from first hand experience, I can tell you that almost everything about it was terrible:
It was a giant pain to debug (any changes to source files, even static files, required full reloading of the application, which was not a fast operation)
A simple error (such as an uncaught exception) would often lead to catastrophic server failure. Somewhat amusingly, this is also the default behavior of Node.js, although it is much easier to work around in Node (one way is shown in the sketch after this list).
Although the syntax was JavaScript, it failed to implement one key advantage of modern JavaScript: runtime interpretation. Server Side JS with Netscape Server required compilation before deployment, and therefore dictated a very slow development process.
It followed a multi-threaded execution model (whereas modern JS VMs are almost always event-loop based).
Possibly its biggest weakness was a lack of asynchronous programming support. All IO operations were blocking, and as such it required a heavyweight multithreaded model to support multiple clients. The execution model was more similar to a J2EE container than to modern event-driven JavaScript VMs (e.g., V8 under Node). In my opinion, this is the number one thing Node.js gets right: the async philosophy is deeply embedded in the Node.js development workflow, and it is the key to its lightweight, event-driven, extremely efficient concurrency model.
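A minimal Node.js sketch covering both points above (the file path is just an example): the read is non-blocking, and an uncaught exception can be intercepted rather than taking the server down.

    var fs = require('fs');

    // Non-blocking read: the callback fires when the data is ready,
    // and the process is free to serve other events meanwhile.
    fs.readFile('/etc/hosts', 'utf8', function (err, data) {
      if (err) {
        console.error('read failed: ' + err.message);
        return;
      }
      console.log('read ' + data.length + ' characters');
    });

    // Unlike Netscape's SSJS, the crash-by-default uncaught exception
    // can be intercepted as a last resort instead of killing the server.
    process.on('uncaughtException', function (err) {
      console.error('uncaught: ' + err.message);
    });

    console.log('this line runs before the file callback fires');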
Just for giggles, here's a link to the SSJS reference guide from version 1.2. Starting on page 21, you can see all the standard functions and synchronous APIs for file objects, database queries, etc...
My company ended up switching to ColdFusion shortly thereafter and never looked back.
The main difference would be the evolution of JavaScript over the past 15+ years. Node.js uses the V8 JavaScript engine, which is far more optimized for modern computers.
Wikipedia has a good list of the differences between various server-side JS solutions.
Here is a list of features for Netscape Enterprise Server - it provides a good idea of what makes modern SSJS solutions much better.
Why did it not gain attention? Realistically, client-side JS has only recently started to become the standard for web development, so it was unlikely anybody would have considered using it for server-side development when it wasn't even widely adopted for its original purpose. I say "widely adopted" in that it was previously always difficult to cater JavaScript solutions to all browsers.

Javascript engine (or other embeddable language) for mongodb-like query execution environment and multithreading

I need an embeddable language for tasks similar to query execution in MongoDB. The language should be fast, and it should have both a JIT and an interpreter (for frequent scripts that get JIT-compiled, and for one-time-run scripts). It should have an in-memory runtime that I populate with specific API functions (or classes, whatever) by hand, with nothing else "built-in" (no gettime, thread spawning, or similar). It should have a C API and work on ARM (MIPS would also be nice). A small footprint would also be nice (but this is not critical).
I have two candidates:
Google V8.
SpiderMonkey (IonMonkey's ARM support was announced, AFAIK).
I have not embedded languages into C projects before, so I have a few questions: recently there was a rumor that V8 is not thread-safe. Does this problem still exist? If so, where can that lack of thread safety cause problems?
Also, I would be glad if anyone suggested an embeddable language more suitable for my requirements (except Lua; I can't find any advantages over JS apart from a smaller footprint, which I don't care about).
I'm not sure how SpiderMonkey's multithreading embeddability compares to V8's, but I do know that it's possible to do with SpiderMonkey -- we have a few multiprogramming embedders on dev.tech.js-engine that you may want to post followup questions to.
Our web workers implementation in the browser uses one runtime instance per worker (you can instantiate the runtime multiple times in a single process) -- we've moved away from a multithread-safe single-runtime approach over the past few years because it's unnecessary for the web and adds a significant amount of complexity to the engine.
An alternative to multiprogramming is also the asynchronous, select-based, run-to-completion approach a la node.
A nit: I don't think an interpreter is really a requirement of yours -- your requirement is fast start-up for one-off code. SpiderMonkey has an interpreter and V8 does not, but V8 has a fast-code-emission JIT compiler (which we tend to call "baseline") that offers comparable performance in that area. That capability is an important requirement for JS on the web in general. :-)

What effect does JavaScript obfuscation have on speed in mobile browsers?

I want to ask if someone has measured the impact of JavaScript obfuscators on the resulting code. I am targeting mobile users, so speed is crucial. In particular, I am running 2 or 3 different obfuscators on the same code in a row, which obfuscates the code very well, but I am afraid it will have some speed impact.
It shouldn't. Compilers/interpreters couldn't care less what your symbols are, as long as they are correct.
Most JavaScript obfuscators only minify and rename local variables, which is extremely easy to reverse-engineer with a beautifier.
The best combination I've found is the DojoToolkit and the Closure Compiler in Advanced Mode.
Closure in Advanced Mode makes JavaScript code almost impossible to reverse-engineer, even after passing through a beautifier. Once your JavaScript code is obfuscated beyond any recognition and any possibility to reverse-engineer, your HTML won't disclose much of your secrets.
Here is a link on using the Dojo Toolkit with the Closure Compiler in Advanced Mode for mobile applications:
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
The Closure Compiler in Advanced Mode actually makes JavaScript run faster in mobile environments, thanks to its industrial-scale optimizations. For example, inlining of functions, virtualization of prototype methods, namespace folding, dead-code removal, etc. all make code run faster, so it is not only an obfuscator, it is an optimizing compiler as well.
In my own benchmarks, the compiled code runs around 10-20% faster on the iPad and 30% faster on Android. Memory usage is also reduced.
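For a feel of the kind of transformation involved, here is an illustrative sketch; it is not actual Closure output, which varies, but it shows the inlining and constant-folding at work:

    // Illustrative input (not a real benchmark):
    function computeTotal(price, taxRate) {
      var total = price + price * taxRate;
      return total;
    }
    console.log(computeTotal(100, 0.2));

    // Roughly the flavor of Advanced Mode output: the call is inlined,
    // the arithmetic constant-folded, and the unused names removed.
    console.log(120);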
Your question really needs an analysis of your own JavaScript in order to arrive at a useful answer.
Often, though, obfuscation actually speeds JavaScript up, since file sizes are reduced (faster loading) and symbols get short names (less to compare against).
If the obfuscator does some encoding and calls eval, as some do, then there will be a performance penalty at script load time. After that has run, there should be no difference and, as stated before, it may speed up your code due to the smaller size.
That depends on what you mean by obfuscation.
If you're referring to minification, using a tool like JSMin, then the effect is nil.
If you're talking about something like Packer, the eval process actually does have an impact on how long it takes for the code to execute. On a slow device, that impact can be significant.
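An illustrative sketch of the difference (the packed string below is a stand-in, not real Packer output):

    // A minifier's output is ordinary script, parsed once:
    function add(n, t) { return n + t; }

    // A packer ships an encoded payload plus a decoder; the engine
    // parses the decoder, runs it, then parses the result via eval.
    var packed = 'function add2(n,t){return n+t}';
    eval(packed); // the extra parse/compile step paid at load time
    console.log(add(1, 2) + ' ' + add2(3, 4));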

Parallel JavaScript Code

Is it possible to run JavaScript code in parallel in the browser? I'm willing to sacrifice some browser support (IE, Opera, anything else) to gain some edge here.
If you don't have to manipulate the DOM, you could use Web Workers... there are a few other restrictions, but check it out at http://ejohn.org/blog/web-workers/
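A minimal Web Workers sketch (the file names are made up): the worker runs on its own thread with no DOM access, and the two sides communicate only via messages.

    // main.js: spawn the worker and exchange messages.
    var worker = new Worker('worker.js');
    worker.onmessage = function (e) {
      console.log('sum from worker: ' + e.data);
    };
    worker.postMessage({ numbers: [1, 2, 3, 4] });

    // worker.js: runs on its own thread; no DOM access in here.
    onmessage = function (e) {
      var sum = 0;
      for (var i = 0; i < e.data.numbers.length; i++) {
        sum += e.data.numbers[i];
      }
      postMessage(sum);
    };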
Parallel.js of parallel.js.org (see also github source) is a single file JS library that has a nice API for multithreaded processing in JavaScript. It runs both in web browsers and in Node.js.
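A small sketch based on the map/then API the project documents (check parallel.js.org for the current interface):

    // The function passed to map is serialized and shipped to a
    // worker, so it must be self-contained (no closed-over state).
    var p = new Parallel([10, 20, 30, 40]);

    p.map(function (n) { return n * n; })
     .then(function (squares) {
       console.log(squares); // [100, 400, 900, 1600]
     });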
Perhaps it would be better to recode your JavaScript in something that generally runs faster, rather than trying to speed up the JavaScript by going parallel. (I expect you'll find the cost of forking parallel JavaScript activities is pretty high, too, and that may well wipe out any possible parallel gain; this is a common problem with parallel programming.)
JavaScript is interpreted in most browsers, IIRC, and it is dynamic on top of that, which means it, well, runs slowly.
I'm under the impression you can write Java code and run it under browser plugins. Java is type safe and JIT compiles to machine code. I'd expect that any big computation done in Javascript would run a lot faster in Java. I'm not specifically suggesting Java; any compiled language for which you can get a plug in would do.
As an alternative, Google provides Closure, a JavaScript compiler. It is claimed to be a compiler, but it looks like an optimizer to me, and I don't know how much it "optimizes". But perhaps you can use that. I'd expect the Closure Compiler to be built into Chrome (but I don't know that for a fact), and maybe just running Chrome would get you your JavaScript compiler "for free".
EDIT: After reading about what Closure does, as a compiler guy I'm not much impressed. It looks like much of the emphasis is on reducing code size, which minimizes download time but not necessarily performance. The one good thing they do is function inlining. I doubt that will help as much as switching to a truly compiled language.
EDIT2: Apparently the "Closure" compiler is different from the engine that runs JavaScript in Chrome. I'm told, but don't know for a fact, that the Chrome engine has a real compiler.
Intel is coming up with an open-source project codenamed River Trail; check out http://www.theregister.co.uk/2011/09/17/intel_parallel_javascript/
