The WebIDL spec is in last call and the buzz is that this is a big step for the web. However, I can't find much useful information on it and why it is important.
I understand it defines an interface, but an interface for what? How does WebIDL relate to ECMAScript, JavaScript, HTML, or the DOM? Will this affect the life of the humble web developer, or is it for a different crowd, browser developers perhaps?
Links to articles are welcome.
One area where WebIDL is important is typed-language compilers targeting JavaScript. I work on a tool called WebSharper that compiles F# to JavaScript; a much better-known example is GWT for Java. In these frameworks our users want typed access to the available functionality, with code completion and checking. We used to describe F#-to-JavaScript mappings by hand or in semi-formal, semi-automated ways. It sounds like compiling WebIDL will help a whole lot in staying up to date with the evolving HTML5 spec. Ideally we could even convince JavaScript library writers to provide or generate WebIDL for their code.
I would say that it relates to the DOM and ways to manipulate it through languages other than JavaScript. JavaScript already has ways to manipulate the DOM, but that's not a standard, and WebIDL is probably an interface for other languages.
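To make the "interface for what?" question concrete, here is a minimal sketch. The IDL fragment in the comment is illustrative only, not copied from any spec; the point is that WebIDL describes the shape of DOM objects, and its binding rules define precisely how that shape must surface in ECMAScript, so the same JavaScript works in every conforming browser:

// Illustrative WebIDL-style fragment (simplified, not quoted from a spec):
//
//   interface Node {
//     readonly attribute DOMString nodeName;
//     Node appendChild(Node newChild);
//   };
//
// The WebIDL binding rules say how this appears to JavaScript code:
var div = document.createElement("div");
console.log(div.nodeName);       // "DIV" (the readonly attribute)
document.body.appendChild(div);  // the operation, exposed as a method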
We all know the great benefits that JavaScript libraries such as jQuery, MooTools, etc. have contributed to web browsers and web development. These libraries are now included in many, if not most, websites.
So, I was wondering why none of the current JavaScript engines just includes this functionality inside the engine itself. No doubt this would have further benefits, such as performance, no need for external loading, standardisation (with its own benefits), etc.
I realize that this would probably mainly benefit web browsers and the like, though there must also be many uses beyond just web browsers. But for the sake of argument, such built-in functionality could be put into an optional engine / ECMAScript component (and I am guessing the word here), with emphasis on optional, that could then be enabled or added only in the engines inside web browsers.
Does anyone know more about this, or have more information on it?
My second question is: if we, the community, decided this would be great progress for the future, where can we propose or request such a thing, and what else can we do to make it happen?
(Some of you will know how much trouble it can be to get a feature included in some projects: feature requests that are years or even a decade old, voted on by zillions of users, and never getting through because of... well, let's not be ungrateful to the developers and leave those dots to your own imagination. So I'd rather have the community focus this wish on one place only, and maybe the answer to this second question is the start of it?)
ECMAScript only intends to standardize the minimum amount of language and support library necessary to build these higher level libraries you're describing. Also, things like jQuery work on the Document Object Model provided by the browser code, which isn't even part of the ECMAScript standard -- ECMAScript just knows about DOM nodes in the general category of "foreign objects". The SpiderMonkey engine implements just the JavaScript language and its small standard library, which is then embedded into the larger Firefox browser environment.
So, to answer the question more directly: is it possible to give the JavaScript engine intimate knowledge (and perhaps an implementation) of a user-level library like jQuery? Yes, although you'd be breaking a lot of componentization in the browser, as you mention. Will anybody actually do it? Most likely not, because JavaScript engines just implement the core of what's necessary to build higher level libraries, like jQuery. Everybody is happy with them living outside of the JS engine, and a nice property of JavaScript is that you can just load the library in as you need it -- the source is freely available.
In fact, as a further note, JS engines are doing more and more to push ECMAScript standard library code out of C++-implementation land and into something called "self-hosted builtins", which enables functions like Array.indexOf to be implemented in JavaScript itself (i.e., with a for loop and comparisons). This exposes more JavaScript code to the natural process of optimizing JIT compilers, instead of having to deal specially with calls into native C++ implementation code.
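To make "self-hosted builtins" concrete, here is a rough sketch (not the actual source of any engine) of an indexOf for arrays written in plain JavaScript:

// Simplified self-hosted indexOf; real engines also handle negative fromIndex,
// coercion and other spec details.
function arrayIndexOf(searchElement, fromIndex) {
    var arr = this;
    var len = arr.length >>> 0;
    var i = Math.max(fromIndex | 0, 0);
    for (; i < len; i++) {
        if (i in arr && arr[i] === searchElement) return i;
    }
    return -1;
}

arrayIndexOf.call([10, 20, 30], 20); // 1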
I'm trying to implement an application using Node.js and other related technologies. Coming from Java land, polymorphism is only natural, but for the classical programmer Node works differently.
The application will load new code at run time, provided by the user. In order for the main core to use this code, "we" need to agree on some kind of convention. Knowing how new Node is, I wasn't that surprised that I didn't find an answer. The problem is that this issue is rather vague in JavaScript generally, too.
Requirements:
Strong decoupling.
Loading new code at run time.
The solution should allow me to share as much code as possible with the browser.
Update:
I did fiddle around with duck typing, and I've also encountered ideas from Clojure regarding protocol-based implementations.
I would appreciate some code in the answer.
JavaScript, just like most other scripting languages (i.e. those without compile-time type checking), does polymorphism through duck typing.
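Since you asked for code, here is a minimal sketch of duck typing (the runTask/run names are made up for illustration): the core only checks that whatever the user hands it can do what the core intends to call.

// Duck typing: check for a capability, not for a class or declared interface.
function runTask(worker) {
    if (typeof worker.run !== "function") {
        throw new TypeError("worker must provide a run() method");
    }
    return worker.run();
}

// Any object with a run() method will do, no matter how it was constructed.
var userSupplied = { run: function () { return "user code ran"; } };
console.log(runTask(userSupplied)); // "user code ran"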
If you're from Java land you're probably looking for Dependency Injection, which generally provides uber decoupling. You can probably use Google to find a good dependency injection framework for Node, like this one.
Although truthfully you can probably just make a single JavaScript/CoffeeScript file that does all the wiring and config loading.
Because of the flexibility of JavaScript, just about every form of polymorphism has been implemented (traits, interfaces, inheritance, prototypes). Each has its advantages and disadvantages, but almost all checks (if any) happen at run time, not compile time.
Personally, I would probably just use CoffeeScript's inheritance, traits.js, or JavaScript's built-in prototype chain.
EDIT: However, since you're talking about allowing users to extend the system, callbacks and/or custom events are the preferred approach (i.e. higher-order functions and an event bus). If you're looking for something substantial, like a plugin system, then loader-js looks rather complete (tip of the hat to @Larry Battle).
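Here is a minimal sketch of the callbacks/custom-events approach in Node, using the built-in EventEmitter and require() to pull in user code at run time. The file name and the event name are hypothetical; the only convention ("protocol") assumed is that a plugin module exports a single function taking the event bus.

// core.js - a tiny event-bus style extension point
var EventEmitter = require("events").EventEmitter;
var bus = new EventEmitter();

// Load user-supplied code at run time (the path is illustrative).
var plugin = require("./user-plugin.js");
plugin(bus);

// The core emits events; plugins react without the core knowing about them.
bus.emit("item", { id: 1 });

// user-plugin.js
// module.exports = function (bus) {
//     bus.on("item", function (item) { console.log("got item", item.id); });
// };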
I've read the article about Google's upcoming DASH/DART language, which I found quite interesting.
One thing I stumbled upon is that they say they will remove the inherent performance problems of JavaScript. But what are these performance problems exactly? There aren't any examples in the text. This is all it says:
Performance -- Dash is designed with performance characteristics in mind, so that it is possible to create VMs that do not have the performance problems that all EcmaScript VMs must have.
Do you have any ideas about what those inherent performance problems are?
This thread is a must-read for anyone interested in dynamic-language just-in-time compilers:
http://lambda-the-ultimate.org/node/3851
The participants in this thread include the creator of LuaJIT, the PyPy folks, Mozilla's JavaScript developers, and many more.
Pay special attention to Mike Pall's comments (he is the creator of LuaJIT) and his opinions about JavaScript and Python in particular.
He says that language design affects performance. He stresses simplicity and orthogonality, while avoiding the crazy corner cases that plague JavaScript, for example.
Many different techniques and approaches are discussed there (tracing JITs, method JITs, interpreters, etc.).
Check it out!
Luis
The article is referring to the optimization difficulties that come from extremely dynamic languages such as JavaScript, plus prototypal inheritance.
In languages such as Ruby or JavaScript, the program structure can change at runtime. Classes can get a new method, functions can be eval()'ed into existence, and more. This makes it harder for runtimes to optimize their code, because the structure is never guaranteed to be set.
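A small sketch of what that looks like in practice; nothing stops a running program from reshaping its own objects and conjuring up new code:

function Account(balance) { this.balance = balance; }
var acct = new Account(100);

// A "class" can grow a new method at any point during execution...
Account.prototype.debit = function (n) { this.balance -= n; };
acct.debit(30);

// ...and new functions can be eval()'ed into existence from strings.
var credit = new Function("a", "n", "a.balance += n;");
credit(acct, 50);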
Prototypal inheritance is harder to optimize than more traditional class-based languages. I suspect this is because there are many years of research and implementation experience for class-based VMs.
Interestingly, V8 (Chrome's JavaScript engine) uses hidden classes as part of its optimization strategy. Of course, JS doesn't have classes, so object layout is more complicated in V8.
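The usual illustration of why hidden classes matter (a sketch of the idea, not of V8 internals): objects built with the same properties in the same order can share one hidden class, while inconsistent initialization forces the engine to juggle several shapes for the same code.

// p1 and p2 end up with the same hidden class: same properties, same order.
function Point(x, y) { this.x = x; this.y = y; }
var p1 = new Point(1, 2);
var p2 = new Point(3, 4);

// q1 and q2 get different hidden classes because the properties are added
// in different orders, so code touching both becomes polymorphic and slower.
var q1 = {}; q1.x = 1; q1.y = 2;
var q2 = {}; q2.y = 2; q2.x = 1;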
Object layout in V8 requires a minimum of 3 words in the header. In contrast, the Dart VM requires just 1 word in the header. The size and structure of a Dart object is known at compile time. This is very useful for VM designers.
Another example: in Dart, there are real lists (aka arrays). You can have a fixed-length list, which is easier to optimize than JavaScript's not-really-arrays with their always-variable lengths.
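A quick sketch of the "not-really-arrays" point: a JavaScript array's length and density can change under the engine's feet at any moment, which a fixed-length list rules out by construction.

var a = [1, 2, 3];
a[100] = 4;        // length jumps to 101 and the array becomes sparse
delete a[1];       // now there is a hole at index 1
a.length = 2;      // truncates the array in place
a.push("text");    // and elements need not even share a type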
Read more about compiling Dart (and JavaScript) to efficient code with this presentation: http://www.dartlang.org/slides/2013/04/compiling-dart-to-efficient-machine-code.pdf
Another performance dimension is start-up time. As web apps get more complex, the number of lines of code goes up. The design of JavaScript makes it harder to optimize startup, because parsing and loading the code also executes the code. In Dart, the language has been carefully designed to make it quick to parse. Dart does not execute code as it loads and parses the files.
This also means Dart VMs can cache a binary representation of the parsed files (known as a snapshot) for even quicker startup.
One example is tail call elimination (I'm sure some consider it required for high-performance functional programming). A feature request was put in for Google's V8 JavaScript VM, but this was the response:
Tail call elimination isn't compatible with JavaScript as it is used in the real world.
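To see what's at stake, here is a sketch of a tail-recursive function: an engine with tail call elimination could run it in constant stack space, while engines that keep every frame eventually overflow the stack.

// The recursive call is the very last thing the function does (a tail call).
function sum(n, acc) {
    if (n === 0) return acc;
    return sum(n - 1, acc + n);
}

sum(10, 0);      // 55
// sum(1e6, 0);  // RangeError: Maximum call stack size exceeded in most engines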
I'm curious what the view is on "things that compile into JavaScript", e.g. GWT, Script#, WebSharper, and the like. These seem to be fairly niche components aimed at allowing folks to write JavaScript without writing JavaScript.
Personally I'm comfortable writing JavaScript (using jQuery/Prototype/ExtJS or some other such library) and view things like GWT as needless abstractions that may end up limiting what a developer needs to accomplish, or at best providing a very long-winded workaround. In some cases you still end up writing JavaScript anyway, e.g. JSNI.
Worse still, if you don't know what's going on under the covers, you run the risk of unintended consequences. E.g., how do you know GWT is creating closures and managing namespaces correctly?
I'm curious to hear others' opinions. Is this where web programming is headed?
Should JavaScript be avoided in favor of X? By all means!
I will start with a disclaimer: my answer is very biased as I am on the WebSharper developer team. The reason I am on this team in the first place is that I found myself a complete failure in writing pure JavaScript, and then suggested to my company that we try and write a compiler from our favorite language, F#, to JavaScript.
For me, JavaScript is the portable assembly of the web, fulfilling the same role as C does in the rest of the world. It is portable, widely used, and it is here to stay. But I do not want to write JavaScript, any more than I want to write assembly. The reasons I do not want to use JavaScript as a language include:
1. There is no static analysis; it does not even check whether functions are called with the right number of arguments. Typos bite! (See the sketch after this list.)
2. There is no, or only a very limited, concept of libraries, namespaces, modules, or classes, so every framework invents its own (a situation similar to that of R5RS Scheme).
3. The tooling (code editors, debuggers, profilers) is rather poor, and most of that is because of (1) and (2): JavaScript is not amenable to static analysis.
4. There is no, or only a very limited, standard library.
5. There are many rough edges and a bias toward mutation. JavaScript is a poorly designed language even within the untyped family (I prefer Scheme).
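A small sketch of point (1): calling a function with the wrong number of arguments and misspelling a property name are both accepted silently, and the damage only shows up later, far from the mistake.

function transfer(from, to, amount) {
    from.balance -= amount;
    to.balance += amount;
}

var acct = { balance: 100 };

transfer(acct, acct);       // forgot 'amount': no error, undefined arithmetic
acct.balnce = 200;          // typo silently creates a brand new property
console.log(acct.balance);  // NaN, discovered only at run time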
We are trying to address all of these issues in WebSharper. For example, WebSharper in Visual Studio has code completion, even when it exposes third-party JavaScript APIs, like Ext JS. But whether we succeed or fail is not really the point. The point is that it is possible, and, I would hope, very desirable to address these issues.
Worse still, if you don't know what's going on under the covers, you run the risk of unintended consequences. E.g., how do you know GWT is creating closures and managing namespaces correctly?
This is just about writing the compiler the right way. WebSharper, for instance, maps F# lambdas to JavaScript lambdas in a 1-1 manner (in fact, it never introduces a lambda). I would perhaps accept your argument if you mentioned that, say, WebSharper is not yet mature and tested enough and therefore you are hesitant to trust it. But then GWT has been around for a while and should produce correct code.
The bottom line is that strongly typed languages are strictly better than untyped languages - you can easily write untyped code in them if you need to, but you have the option of using the type-checker, which is the programmer's spell-checker. Why wouldn't you? A refusal to do so sounds a bit luddite to me.
Although I don't personally favor one style over another, I don't think that abstraction away from JavaScript is the only benefit these frameworks bring to the table. Surely, in abstracting the entire language, there will be things that become impossible that were previously possible, and vice versa. The decision to go with a framework such as GWT over writing vanilla JavaScript depends on many factors.
Making this a discussion of JavaScript vs. language X is fruitless, as each language has its strengths and weaknesses. Instead, do an objective cost-benefit analysis of what is to be gained or lost by going with such a framework, and that can only be done by you, not the SO community, unfortunately.
The issue of not knowing what goes on under the hood applies to JavaScript just as much as it does to any translated source. How many people do you think would know exactly what is going on in jQuery when they try a comparison such as $("p") == $("p") and get back false as a result? This is not a hypothetical situation, and there are several questions on SO regarding it. It takes time to learn a language or framework, and given sufficient time, developers could just as well understand the compiled source of these frameworks.
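For the record, the explanation is itself a nice example of needing to know what the library does: each $() call builds a fresh wrapper object, so comparing the wrappers by reference fails even though they wrap the same element (sketch assumes a page with at least one <p>).

$("p") == $("p");           // false: two distinct jQuery wrapper objects
$("p")[0] === $("p")[0];    // true: both wrap the very same DOM element
$("p").is($("p"));          // true: .is() matches against the other selection's elements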
Another related aspect of the above question is trust. We continuously build higher-level abstractions upon lower-level abstractions, and rely on the fact that the lower-level stuff is supposed to work as expected. When was the last time you dug down into the compiled binary of a C++ or Java program just to ensure that it worked correctly? We don't, because we trust the compiler.
Moreover, when using such a framework, there is no shame in falling back to JavaScript using JSNI, for example. It's all about solving the problem in the best possible manner with the tools at hand. There is nothing sacred about JavaScript, or Java, or C#, or Ruby, etc. They are all tools for solving problems, and while it may be a barrier for you, it might be a real time-saver and advantageous to someone else.
As for where I think web programming is headed, there are many interesting trends that I think, or rather hope, will succeed, such as JavaScript on the server side. It solves very real problems, for me at least, in that we can easily avoid code duplication in a web application. The same validations, logic, etc. can be shared on the client and server sides. It also allows for writing a simple (de)serialization mechanism, so RPC or RMI communication becomes possible very easily. I mean, it would be really nice to be able to write:
account.debit(200);
on the client side, instead of:
$.ajax({
    url: "/account",
    data: { amount: 200 },
    success: function(data) {
        // ..
    },
    error: function() {
        // ..
    }
});
Finally, it's great that we have all this diversity in frameworks and solutions for building web applications as the next generation of solutions can learn from the failures of each and focus on their successes to build even better, faster, and more awesome tools.
I have three big practical issues with WebSharper and other compilers that claim to avoid the pain of JavaScript.
If you don't know JavaScript well you can't understand most of the examples on the web of using the DOM, Ext JS, etc., so you have to learn JavaScript anyway. (For the same reason, all F# programmers must be able to at least read C# or VB.NET, otherwise they cannot access most information about the .NET framework.)
On any web project you need a few web experts who know the DOM and CSS inside out; would such a person be willing to work with F# rather than JavaScript?
You are tied to the provider of the compiler; will they still be around in five years' time? I want the tools to be fully open source or supported by Microsoft.
The big positives I see with these frameworks are:
Sharing code between the server and client
Having fewer languages a programmer needs to know (JavaScript is a real pain, as it looks like Java/C# but is not anything like them)
The average F# programmer is a lot better than the average JavaScript programmer.
My opinion for what it's worth is that every framework has its pros/cons and a project team should evaluate their use cases before including one. To me any framework is just a tool to be used to solve a problem, and you should pick the best one for the job.
I prefer to stick to pure JavaScript solutions myself, but that being said I can think of a few cases where GWT would be helpful.
GWT would allow a team to share code between the server and client, reducing the need to write the same code twice (JS and Java). Or if someone were porting a Java client to a web UI, they might find it easier to stick to GWT (of course, then again, it may make it harder :-)).
I know this is a gross over-simplification, because there are many other things that frameworks like GWT offer, but here is how I view it: if you like JavaScript, write JavaScript; if you don't, use GWT or Cappuccino or whatever.
The reason people use frameworks like GWT is not necessarily the abstraction that they give--you can have that with JavaScript frameworks like ExtJS--but rather the fact that they allow you to write web applications in something other than JavaScript. If I were a Java programmer who wanted to write a web application, I would use GWT because I would not have to learn a new language.
It's all preference, really. I prefer to write JavaScript, but many people don't.
Inspired by this question.
I commonly see people referring to JavaScript as a low level language, especially among users of GWT and similar toolkits.
My question is: why? If you use one of those toolkits, you're cutting yourself from some of the features that make JavaScript so nice to program in: functions as objects, dynamic typing, etc. Especially when combined with one of the popular frameworks such as jQuery or Prototype.
It's like calling C++ low level because the standard library is smaller than the Java API. I'm not a C++ programmer, but I highly doubt every C++ programmer writes their own GUI and networking libraries.
It is a high-level language, given its flexibility (functions as objects, etc.)
But anything that is commonly compiled to can be considered a low-level language simply because it is a target for compilation, and many languages can now be compiled to JS because of its unique role as the browser's DOM-controlling language.
Among the languages (or subsets of them) that can be compiled to JS:
Java
C#
Haxe
Objective-J
Ruby
Python
Answering a question that says "why is something sometimes called X..." with "it's not X" is completely sidestepping the question, isn't it?
To many people, "low-level" and "high-level" are flexible, abstract ideas that apply differently when working with different systems. To people who are not hung up on the past (to some people there is no such thing as a modern low-level language), the high-ness or low-ness of a language commonly refers to how close to the target machine it is. This includes virtual machines, which is what a browser is nowadays. Sorry to all the guys who pine for asm on bare hardware.
When you look at the browser as a virtual machine, JavaScript is as close to the (fake) hardware as you get. That is the viewpoint of many who call JavaScript "low-level". I think it's a pointless distinction to make, and people shouldn't get hung up on what is low and what is high.
A lot of people say this because the objects and structures provided in JavaScript are about as simple as you can get. To develop any sort of real functionality, you have to use an external library. Low level is a bad way to put this, because it already has a meaning in computer science. A better way to say it may be that it's without built-in libraries.
Compare this with Java, where the actual language really doesn't do a whole lot. Try making a resizable array without an ArrayList, or accessing the file system without the I/O libraries. Most languages are more than just the basics; they come with this extra functionality.
With JavaScript, the only real power we get comes from the APIs that are introduced by the browsers and aren't part of the language. Things like DOM manipulation and Ajax are supplied by the browser.
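One way to see that these are host APIs rather than language features: run the same script outside a browser, for example under Node, where the language is identical but the browser-supplied objects simply aren't there.

// Pure ECMAScript: works in any engine, browser or not.
var doubled = [1, 2, 3].map(function (n) { return n * 2; });

// Host-provided objects: they only exist where the embedder supplies them.
typeof document;         // "object" in a browser, "undefined" in plain Node
typeof XMLHttpRequest;   // likewise supplied by the browser, not by the language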
It may be better to summarize all of this by saying that with a language like Java, you can get started doing some serious work without having to download third-party libraries, but with JavaScript, you either have to download a library or write a library of your own.
"Low" here has the same meaning as in the sentences "The number of casualties suffered in the first world war was low," and "Reduced-fat ice cream is low in calories." It makes sense when there's an obvious point of comparison, but out of context, it is simply absurd.
I don't view JavaScript as a low-level language. A lot of functionality and user-experience boosters are provided by it. Maybe others view it as such simply because users can turn it off in their browser options, but it's a hugely robust language that virtually runs the web on virtually all types of browsers.
It's not; it may be as low-level as you can get in normal browser programming, but it's on par with dynamic languages like Scheme or Python.
I think the great lacks of JavaScript are the absence of namespaces or packaging, and the lack of threads.
It's low-level compared to the GWT and similar toolkits, but it's not a low-level language in the larger scheme of things. The features it offers are very high-level: closures, dynamic typing, and prototypical inheritance are just a few examples of its high-level features.
It's considered to be low level by a number of people who prefer writing Java that generates JavaScript to writing JavaScript itself (i.e. they dislike it, fairly or unfairly). A lot of people complain about Java these days, but despite the lack of static type checking most people would probably consider Ruby and Python easier to write in most cases (Java being a fairly simple static language, and simple static languages are much harder to design well without large built-in feature sets than a simple dynamic language).
Few people would call Python or Ruby low level compared to Java, and if people were forced to target a Python or Ruby VM it's hard to imagine that a Java-to-Python/Ruby compiler would become as popular as GWT.
In closing, JavaScript has an image problem (people sometimes think of languages as getting more difficult as they become more low level, and vice versa).