V8 JavaScript for a 16 MB RAM ARM device

I'm part of a team developing embedded applications for ARM9 devices with 16 MB of RAM and their own OS. We're currently developing in C, but are all geared towards switching to another language.
Currently C++ and Haskell are good candidates, but I'm also thinking of CoffeeScript. The question is whether Chrome's V8 engine would use too much RAM for this to be a viable alternative. If so, is there any other engine that might fit the bill?
I forgot to mention: I need easy interop with the C libraries installed on the system. Since most of the code we have today is C and there will be a long rewriting period, calling C functions should not be a hassle (no elaborate bindings to create, etc.).
Unfortunately, we are also bound to an old compiler (GCC 3.4.3).

Any language with automatic memory management will always have memory overhead, and any dynamically typed language will always add some more on top. So if you are limited to 16 MiB and want to squeeze a lot out of it, go with something with static typing and explicit memory management, which means C++.
Modern C++ (OK, no C++11 features in GCC 3.4.3, but the standard library was already there and Boost should compile) will still do most memory management for you while keeping the overhead low. And being almost backward compatible with C makes interoperating with existing libraries trivial.
If you don't need to squeeze out that much, many languages will do. Mono seems quite promising, as it's one of the smallest managed runtimes, decently fast, portable, and targeted by multiple languages (C#, F#, Boo, etc.). But I suppose even JavaScript should do; its interpreter is very small, and if you don't need that many objects in memory, they will fit even with all the overhead of allocating everything separately.

Related

Learning (& teaching) assembly with WASM (WebAssembly)?

Given that WASM reached MVP in February, has anyone spent any time trying to work through the viability of using WASM to actually learn / teach higher-level aspects of assembly with WebAssembly?
After going through a bit of material, it seems that it's still aimed at C/C++ development (perhaps due to the state of flux that still exists?), and there is no real material that talks about using WASM directly as a way to learn assembly programming principles.
WASM is (or at least could be) uniquely suited to learning, and teaching, assembly itself in a very universal manner that could later be extended to specific hardware, if desired. Learning WASM itself could be valuable for the general effort going forward and for writing interesting and uniquely optimized programs.
It might even be neat to see interest in things like the old demo scene resurrected with WASM...
I'm not sure if Wasm is ideal for learning first things about assembly language. While it distills many of the basic operations available in modern CPUs, it is also a somewhat higher-level abstraction. For example:
It is a stack machine.
It has an infinite set of virtual registers.
It has structured control flow.
It provides no access to the stack.
It provides no access to code.
It is typed.
It will probably gain other more high-level features in the future.
Many of these are prerequisites to making Wasm safe and portable, which in itself is very unlike ordinary assembly languages.
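To make the stack-machine and typing points concrete, here is a sketch: the standard minimal "add" module, hand-assembled into bytes and driven from JavaScript. Note that the function body contains no registers or raw jumps, only typed stack operations:

    // A minimal WebAssembly module, written out byte-by-byte, that
    // exports add(a, b). The body is pure stack-machine code: push
    // local 0, push local 1, then i32.add pops both and pushes the sum.
    const bytes = new Uint8Array([
      0x00, 0x61, 0x73, 0x6d,             // magic: "\0asm"
      0x01, 0x00, 0x00, 0x00,             // binary format version 1
      0x01, 0x07, 0x01,                   // type section, 1 entry:
      0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, //   func (i32, i32) -> i32
      0x03, 0x02, 0x01, 0x00,             // function section: func 0 uses type 0
      0x07, 0x07, 0x01,                   // export section, 1 entry:
      0x03, 0x61, 0x64, 0x64, 0x00, 0x00, //   "add" -> function 0
      0x0a, 0x09, 0x01, 0x07, 0x00,       // code section: 1 body, no locals
      0x20, 0x00,                         //   local.get 0
      0x20, 0x01,                         //   local.get 1
      0x6a,                               //   i32.add
      0x0b                                //   end (closes the structured body)
    ]);

    WebAssembly.instantiate(bytes).then(function (result) {
      console.log(result.instance.exports.add(2, 3)); // 5
    });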
(I happen to agree with those who say that "WebAssembly" is a bit of a misnomer -- both the "Web" part and the "Assembly" part, actually. It was a play on JavaScript previously being called "the assembly language of the web" for its ubiquity.)
WASM is just like regular processor assembly, but... for the web :-) The same way 99.9% of people don't need to learn assembly if they don't want to, one doesn't need to learn WASM.
WASM will be very useful if you want to jump into low-level stuff, and especially if you will deal with compilers. WASM will be a convenient way for someone to port, for example, the backend of a C++ compiler to work in the browser. So the same GCC/LLVM you compile to x86/x64/ARM/etc. will be able to compile for the web as well.
Notice that despite the name Web "Assembly", this has not too much to do with a CPU architecture like x86 or x64. So "learning assembly with WASM" doesn't mean one will be learning the "bare metal". That can sometimes be a little confusing.

Why aren't non-compiled scripting languages good to use for intensive calculations?

I was reading an ebook about web technologies and I found this:
JavaScript is a language in its own right (theoretically it isn't tied to web development), it's supported by most web clients under any platform, and it has some object-oriented capabilities. JavaScript is not a compiled language, so it's not suited for intensive calculations or writing device drivers, and it must arrive in one piece at the client browser to be interpreted, so it is not secure either, but it does a good job when used in web pages.
My problem is: why can't we use JavaScript for processor-intensive calculations? The book doesn't explain it. However, I have used JavaScript for mobile applications too, and in some of them we have done very large calculations. How do non-compiled languages affect this?
Two parts to this.
In a non-compiled language, you have to take a hit to compile or interpret the code. Optimisation can reduce that cost, e.g. by caching the result of the compilation, though of course that introduces complexity and uses up memory.
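As a rough, engine-dependent illustration (the numbers will vary, and modern JavaScript engines compile in tiers), timing the same hot function repeatedly usually shows the first run paying the parse/compile/optimisation cost that later runs avoid:

    // Minimal JIT warm-up sketch: the first timed run typically includes
    // compilation and optimisation work; later runs reuse the cached,
    // optimised machine code, so they tend to be faster.
    function sumOfSquares(n) {
      var total = 0;
      for (var i = 0; i < n; i++) {
        total += i * i;
      }
      return total;
    }

    for (var run = 1; run <= 3; run++) {
      var start = Date.now();
      sumOfSquares(1e8);
      console.log('run ' + run + ': ' + (Date.now() - start) + ' ms');
    }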
The other side is that after a program is compiled, the result can be tweaked and specifically optimised for a particular purpose.
You have to consider the context, though: one calculation to isolate a particular Calabi-Yau space was estimated to need 4 years to complete on the best supercomputer available at the time. So your definition of big and that of the guy who wrote the article might not be comparable. Of course, they could also be one of those micro-optimisation types...
With modern compilers/interpreters and the most optimised code you can write, it has to be a real edge case for this to be significant, and pre-compiled code is pretty much a given in those scenarios.

Benchmark for Google Closure Library

When I search for the performance of JavaScript libraries, I get many sites showing comparisons of the performance of the following popular libraries:
jQuery (pretty slow)
Prototype (very slow in IE)
Dojo (fastest when it comes to the DOM)
ExtJS (average)
MicroJS (slow but OK)
But the Google Closure Library is not included in any of these benchmarks. Is it unlike the other standard libraries? It is said to be a procedural-style library.
I need some benchmarks on the performance of the Closure Library, and I would like advice on whether switching to the Closure Library is a good idea when using Dojo at the beginner stage and jQuery at an intermediate stage.
Google says that it uses the Closure Library in all its apps, like Gmail, and the performance is very good. Is this because of the library? Can an intermediate JavaScript coder who can write OO code in JS use the Closure Library at a very high level, or is it advisable to continue using Dojo?
On Closure Library
The Closure Library is pretty close to Dojo in style -- actually, when it was first developed, the authors took inspiration from Dojo.
However, the speed and power of the Closure Library come from the Closure Compiler, which heavily optimizes a JavaScript program in order to remove bottlenecks (such as navigating chains of namespaces).
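As a rough sketch of what that means (the namespace and function here are made up, and the "compiled" output is only indicative of what Advanced Mode emits):

    // Closure-style deep namespaces, built by hand so this snippet runs
    // without the library (goog.provide would normally create the chain).
    var myapp = { util: { format: {} } }; // hypothetical namespace

    myapp.util.format.padLeft = function (s, width) {
      while (s.length < width) {
        s = ' ' + s;
      }
      return s;
    };

    // Every call walks the chain myapp -> util -> format at runtime.
    console.log(myapp.util.format.padLeft('42', 5));

    // The Closure Compiler in Advanced Mode flattens and renames this to
    // something roughly like:
    //   function a(b, c) { for (; b.length < c;) b = ' ' + b; return b; }
    // which removes the namespace-navigation cost described above.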
I personally don't like it one bit, as it detracts from the beauty of Dojo's class-based constructs (simply to satisfy the compiler), and all those goog.kitchen.sink.getMeACupOfTeaSoICanRelax() long namespaces make writing (and reading) JavaScript programs a royal pain. The fact that long namespaces are all optimized away by the compiler does not make it pretty (to me) to overuse them just because you can.
In addition, its obsession with making JavaScript programs look as OOP as possible (perhaps because there are tons of Java programmers at Google) means over-reliance on OOP concepts like property getters and setters, and avoidance of many useful (and unique) JavaScript features like mixins. If you are a Java programmer learning to program in JavaScript, you'll be right at home using the Closure Library. That doesn't make it one bit elegant.
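For reference, a mixin in plain JavaScript is just copying behaviour onto a prototype, no class hierarchy required (the names here are invented for illustration):

    // A plain-JavaScript mixin: shared behaviour is copied onto a
    // constructor's prototype instead of being inherited from a class.
    var Serializable = {
      serialize: function () { return JSON.stringify(this); }
    };

    function User(name) {
      this.name = name;
    }

    // Mix the behaviour in by copying each property.
    for (var key in Serializable) {
      User.prototype[key] = Serializable[key];
    }

    console.log(new User('ada').serialize()); // {"name":"ada"}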
It does, however, offer an industrial-strength environment that is rock solid -- Google has built HUGE sites with it. It is something that (in my personal opinion) is solid and works well, but looks ugly.
However, Dojo is rock solid as well, but more volatile since it is an open-source development project. You decide whether you'd like to switch.
On the Closure Compiler and Dojo
Actually, you can use Dojo with the Closure Compiler in Advanced Mode as well. See this link for a description of how to do it. Based on my own tests, a program compiled by the Closure Compiler is typically around 25% smaller than the minified version (due to dead-code elimination) and runs about 20-30% faster for simple pages, and more for large pages.
On Speed of Libraries in General
Other libraries all have their own characteristics and quirks, and each balances usability, flexibility, and power against performance. For example, jQuery creates many, many jQuery objects along the way and pays a performance penalty for it, especially on older browsers. However, modern browsers, especially Google Chrome, perform optimizations so that the performance hit of using jQuery is minimal.
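As a sketch of where that penalty comes from and the standard way to mitigate it (this assumes jQuery is loaded and a list element with id "log" exists on the page):

    // Slower: constructs a fresh jQuery object and touches the DOM
    // on every iteration.
    for (var i = 0; i < 1000; i++) {
      $('#log').append('<li>' + i + '</li>');
    }

    // Faster: query once, build the HTML in memory, write to the DOM once.
    var $log = $('#log');
    var html = '';
    for (var j = 0; j < 1000; j++) {
      html += '<li>' + j + '</li>';
    }
    $log.append(html);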
You actually need to ask yourself why you need JavaScript to run fast. Most modern browsers are already quite fast, so raw speed is really not a very important consideration when choosing a library. Better to choose your library based on whether it suits you (and the task at hand) than on whether it runs 10 ms faster in a browser.
If you are writing a web site for mobile devices, or an HTML5 game, for instance, you may need to squeeze out the last drop of performance (for games) and/or save as many resources as possible (on mobile). In such cases, I find that using Dojo and then compiling with the Closure Compiler yields one of the best combinations.
It would be nice to back this up with some data. There are lies, statistics, and benchmarks. You may, for example, ask yourself why framework X is slower than framework Y. In general though, benchmarks do not reflect real-world scenarios, and they tend to be fun, but useless. I've worked with jQuery for a few years now and I've not encountered situations where it was slow, or where its speed had any effect on usability.
If your code is too slow and you need to optimize it, then you may benchmark, profile, and debug the sweet jesus out of that code. Things like Gmail are likely to have had that kind of treatment. So Gmail is probably not fast because of Closure, but fast because of intelligent programming.
If you're afraid of slow code, you might want to learn generic JavaScript optimization methods, and you'll be a happy camper with any framework.
Edit: on Closure
Choosing Closure might have several advantages when you're building something big and complex: there are some nice bandwidth-optimization tools and tools for managing more complex applications. There is a good guideline on how to write proper code, and there are tools to check that you followed these guidelines (the Closure Linter is quite interesting). There's a nice templating system (which probably won't improve your performance per se) and dependency-management stuff. If you wish to integrate these tools, or build upon them, I would say go for it; otherwise, you may rest easy and be happy with your choice of Dojo.
using dojo at beginner stage and jQuery at some intermediate stage
You've got that the wrong way around. Dojo is far more advanced than jQuery. jQuery itself is for beginners.
Can an intermediate JavaScript coder who can write OO code in JS use the Closure Library at a very high level, or is it advisable to continue using Dojo?
Either library will work. Spend a few hours playing with Google Closure and see how you find it. I doubt there's much difference, apart from the generally easy integration with Google libraries like Google Charts.

What are the use cases of Node.js vs Twisted?

Assuming a team of developers are equally comfortable with writing Javascript on the server side as they are with Python & Twisted, when is Node.js going to be more appropriate than Twisted (and vice versa)?
Twisted is more mature -- it's been around for a long, long time, and has so many bells and whistles as to make your head spin (implementations of the fanciest protocols, integration of the reactor with a large variety of other event loops, and so forth).
Node.js is said to be faster (I have not measured it myself) and might perhaps be simpler to use (if you need none of the extra bells and whistles) exactly because those extras aren't there (kind of like Tornado in the Python world -- again, I have never measured relative performance).
So, I'd absolutely use Twisted if I needed any of its extra features or wanted to feel on more solid ground by using a more mature package. If these considerations don't apply, but top performance is a key goal of the project, then I'd write a simple benchmark (but still one representative of at least one or two key performance-critical situations for my actual project) in Twisted, Node.js, and Tornado, and do a lot of careful measurement before deciding which way to go overall. "Extra features" (third-party extensions and the standard library) are also much more abundant for Python than for server-side JavaScript, and that might be a key factor if any such extras are needed for the project.
Finally, if none of these issues matter for a specific application scenario, have the development team vote on the three candidates (Twisted, Node.js, Tornado) in terms of simplicity and familiarity -- any of them will probably be just fine, so you might as well pick whatever most of the team is most comfortable with!
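For the benchmarking route, a minimal Node.js endpoint of the kind you would point a load generator (ab, wrk, etc.) at might look like this; the port and payload are arbitrary:

    // Minimal Node.js HTTP server to use as a benchmark target.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080, function () {
      console.log('benchmark target listening on :8080');
    });

The equivalent Twisted or Tornado handler would do the same trivial work, so the measurement mostly compares the event loops and HTTP stacks rather than your application logic.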
As of 2012, Node.js has proved to be a fast, scalable, mature, and widely used platform. Ryan Dahl, the creator of Node.js, writes:
These days, Node is being used by a large number of startups and established companies around the world, from Voxer and Uber to Walmart and Microsoft. It's safe to say that billions of requests are passing through Node every day. As more and more people come to the project, the available third-party modules and extensions grow and increase in quality. Although I was once reserved about recommending it for mission-critical applications, I now heartily recommend Node for even the most demanding server systems.
More formally, the advantages of Node can be classified as:
Great community: It can be said that no other platform has gained such community appeal in such a short period of time. It has hundreds of contributors and thousands of watchers on GitHub, and is being used by giants like Yahoo! (the Manhattan project), eBay, LinkedIn, Microsoft, and Voxer.
NPM: Although it has a relatively small core, Node has lots of packages available to extend its functionality to anything you can think of! It's all automated and actively developed and extended; think of PyPI (pip).
Scalability and speed: Node's architecture and single-threaded nature allow high scalability and speed. Especially after the 0.8 release, it got significantly faster (see the published benchmarks), which is confirmed by the many large businesses using Node. Its V8 core is also constantly getting better thanks to the current browser war.
JavaScript: The core language of Node (JS) fits this kind of server-side usage well; lambda functions, dynamic objects, and easy JSON serialization in particular are JS highlights that fit cases where speed and scalability count (Python has all of them, but they are more natural and more powerful in JS); see the sketch after this list.
Deployment: Because of its wide usage, a lot of really good sites provide tools for easy and powerful Node deployment, including Heroku, Joyent, Cloud9, and more.
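The JS highlights mentioned above, in one small sketch:

    // A lambda (function value), a dynamic object extended on the fly,
    // and one-line JSON serialization.
    var user = { name: 'ada' };            // dynamic object
    user.lastSeen = Date.now();            // new property added at runtime

    var greet = function (u) {             // lambda / first-class function
      return 'hello ' + u.name;
    };

    console.log(greet(user));
    console.log(JSON.stringify(user));     // easy JSON serialization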
Therefore, Node seems more powerful and to have a brighter future, so if there isn't any constraint against using it (like existing code, servers, or team capability), it is recommended for any new collaborative network project aiming at high speed and scalability.

Why has JavaScript been (mostly) only a browser-side technology for more than 10 years?

Recently there have been a lot of projects pushing JavaScript in other directions: as a general-purpose scripting language (GLUEScript, Rhino), as an extension language (QtScript, Adobe Reader, OpenOffice macros), for widgets (Yahoo! Widgets, MS Gadgets, Dashboard), and even for server-side JS and web frameworks (CommonJS, Helma, Phobos, V8cgi), which seems obvious since it is already a language widely used for web development.
But wait: everything is so new, and nothing is really mature, even though JS has been around for almost 15 years, is as powerful as any other scripting language, has been standardised by ECMA, and is a mandatory technology for web development.
Why did it take so much time for it to gain acceptance in domains other than web browsers?
Douglas Crockford, who has done much to help people use JavaScript productively, also has a very clear picture of the things that hold JavaScript back. You can pick up a few of the points at JavaScript: The World's Most Misunderstood Programming Language. See also his lecture series Crockford on JavaScript.
The proliferation of JavaScript as a viable language for heavy application development isn't as old as the language itself... at least, it's definitely more recent than 15 years.
Mainly, it's popular and heavily utilized right now due to the proliferation of AJAX and frameworks like jQuery/MooTools/Prototype/script.aculo.us, and largely because browser support is improving in compatibility, performance, and so on.
Take, for example, Node.js, which is built on V8 (which didn't exist until Google made Chrome) and which raised the JavaScript performance bar so high that you can build high-performance networked applications on top of it dead easy.
So, IMO, it's because people jumped on the AJAX bandwagon that JavaScript suddenly became so much more awesome and started spanning out to other areas.
There are some big flaws in the language surrounding code reuse -- in particular, all code is executed in a single namespace, and there's no language-level support for importing other code. Plenty of enterprising library authors have worked around this on the client side, out of necessity, but these problems are big marks against JavaScript when you are free to choose your implementation language.
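The standard client-side workaround is the module pattern: wrap code in an immediately-invoked function so that only the names you choose escape into the single shared namespace (MYLIB here is an invented example):

    // Module pattern: everything inside the IIFE stays private except
    // what is explicitly returned as the public surface.
    var MYLIB = (function () {
      var counter = 0;             // private state, invisible outside

      function next() {            // private helper
        counter += 1;
        return counter;
      }

      return {                     // the public API
        nextId: next
      };
    }());

    console.log(MYLIB.nextId()); // 1
    console.log(MYLIB.nextId()); // 2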
There is also no single standard implementation of the language -- Rhino is the most prominent, but it's not the most advanced in the days of SpiderMonkey, JavaScriptCore, and V8. This shouldn't be as much of an issue for a standardized language, but in practice non-browser JS code is unlikely to work with all JS engines, and is very likely to be targeted at a single engine (Node.js depending on V8 is the most prominent example of this).
These problems have kept JS libraries from being written outside the browser, and since nobody writes non-browser JS libraries, writing non-browser JS becomes that much more difficult.
Things are changing -- in particular, the CommonJS group has created a module spec that allows for better code reuse, which is already being used in Node, and the group is working on better specs for packaging JS code.
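A CommonJS module boils down to two conventions: assign to exports and load with require (the file names here are hypothetical):

    // math.js -- a CommonJS module: whatever is attached to `exports`
    // becomes the module's public interface.
    exports.square = function (x) {
      return x * x;
    };

    // main.js -- consumers load the module with require(); nothing leaks
    // into a shared global namespace.
    var math = require('./math');
    console.log(math.square(4)); // 16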
Language adoption is as much about the auxiliary libraries as it is about the language itself. In the case of JavaScript, there's a dearth of libraries for things like I/O and other standard requirements of fully-fledged use.
Eight years ago I had a brief look at JavaScript to see whether or not I should learn more about it. I didn't. Why? I thought it would die within 2-3 years.
But thanks to jQuery and other JS frameworks, it has gained a great deal of reputation over the last couple of years.
It's also a matter of association. Do cars drive on water? Do airplanes land on highways? JavaScript has always "belonged" to browsers, even though you could use it for non-browser-related stuff.
