Why transform to bytecode in SpiderMonkey & JSC? - javascript

A JavaScript engine usually transforms source code into bytecode; the bytecode is then transformed into native code.
1) Why transform to bytecode at all? Does transforming source code directly into native code give poor performance?
2) If the source code is very simple (e.g. an a+b function), is transforming it directly into native code better?

Complexity and portability.
Transforming from source code to any kind of object code, whether it's bytecode for a virtual machine or machine code for a real machine, is a complex process. Bytecode more closely mimics what most real machines do, and so it's easier to work with: better for optimizing the code to run faster, transforming it to machine code for an even bigger boost, or even turning it into other formats if the situation calls for it.
Because of this, it usually turns out to be easier to write a front end whose only job is to transform the source code to bytecode (or some other intermediate language), and then a back end that works on the intermediate language: optimizes it, outputs machine code, and all that jazz. More traditional compilers for languages like C have done this for a long time. Java could be considered an unusual application of this principle: its build process usually stops with the intermediate representation (i.e. Java bytecode), and then developers ship that out, so that the JVM can "finish the job" when the user runs it.
There are two big benefits to working this way, aside from making the code easier to work with. The first big advantage is that you can reuse the backend to work with other languages. This doesn't matter so much for JavaScript (which doesn't have a standardized backend), but it's how projects like LLVM and GCC eventually grow to cover so many different languages. Writing the frontend is hard work, but let's say I made, for example, a Lua frontend for Mozilla's JavaScript backend. Then I could tap into all of the optimization work that Mozilla had put into that backend. This saves me a lot of work.
The other big advantage is that you can reuse the frontend to work with more machines. This one does have practical implications for JavaScript. If I were to write a JavaScript engine, I'd probably write my first backend for x86 (the architecture most PCs use) because that's where I'd probably be doing the development work. But most cell phones don't use an x86-based architecture (ARM is more common these days), so if I wanted to run fast on cell phones, I'd need to add an ARM backend. But I could do that without having to rewrite the whole frontend, so once again, I've saved myself a lot of work. If I wanted to run on the Wii U (or the previous generation of game consoles, or older Macs) then I'd need a POWER backend, but again, I could do that without rewriting the frontend.
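To make the two-stage idea concrete, here is a toy sketch (not any real engine's design) of a front end that compiles a tiny "a + b" expression language into bytecode, and a back end (a little stack machine) that runs it:

// Front end: compile "1 + 2 + 3" style input into a flat instruction list.
function compile(expr) {
    var parts = expr.split("+").map(function (s) { return s.trim(); });
    var bytecode = [];
    parts.forEach(function (p) {
        bytecode.push(["PUSH", Number(p)]); // push a constant onto the stack
    });
    for (var i = 1; i < parts.length; i++) {
        bytecode.push(["ADD"]);             // pop two values, push their sum
    }
    return bytecode;
}

// Back end: a stack-machine interpreter for that bytecode. A real engine
// would optimize here and/or emit machine code instead of interpreting.
function run(bytecode) {
    var stack = [];
    bytecode.forEach(function (op) {
        if (op[0] === "PUSH") stack.push(op[1]);
        else if (op[0] === "ADD") stack.push(stack.pop() + stack.pop());
    });
    return stack.pop();
}

console.log(run(compile("1 + 2 + 3"))); // 6

Only the front end knows the source syntax, and only the back end knows the execution model, which is exactly the separation described above.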
The bottom line is that while it seems more complex to do two transformations, in the long run it actually turns out to be easier. This is one of those strange and unintuitive things that pops up sometimes in software design, but the benefits are real.

Related

Why aren't non-compiled scripting languages good to use for intensive calculations?

I was reading an ebook about web technologies and I found this:
JavaScript is a language in its own right (theoretically it isn't tied to web development), it's supported by most web clients under any platform, and it has some object-oriented capabilities. JavaScript is not a compiled language so it's not suited for intensive calculations or writing device drivers and it must arrive in one piece at the client browser to be interpreted so it is not secure either, but it does a good job when used in web pages.
My problem here is: why can't we use JavaScript for processing-intensive calculations? The book doesn't explain it. However, I have used JavaScript for mobile applications too, and in some of them we have done very large calculations. How do non-compiled languages affect this?
Two parts to this.
In a non-compiled language, you have to take a hit to compile or interpret the code at run time. Optimisation can reduce the cost of that, i.e. cache the result of the compilation, though of course that introduces complexity and uses up memory.
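As a toy illustration of caching the compilation result (a sketch that uses JS's own new Function as the "compiler"; the cache-keyed-by-source idea is the point, not the API):

// Compile a source string at most once; later calls reuse the result.
var compileCache = {};

function compileOnce(source) {
    if (!compileCache.hasOwnProperty(source)) {
        // new Function parses and compiles the body a single time.
        compileCache[source] = new Function("x", "return " + source + ";");
    }
    return compileCache[source];
}

var f = compileOnce("x * x + 1");
console.log(f(3)); // 10 -- a second compileOnce("x * x + 1") skips the parse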
The other side is that after a program is compiled, the result can be tweaked and specifically optimised for a particular purpose.
You have to consider the context, though: one calculation to isolate a particular Calabi-Yau space was estimated to need 4 years to complete on the best supercomputer available at the time. So your definition of big and that of the guy who wrote the article might not be comparable. Of course, they could be one of those micro-optimisation types...
With modern compilers/interpreters and the most optimised code you can write, it has to be a real edge case for this to be significant, and pre-compiled code is pretty much a given in those scenarios.

How can you leverage Haskell type safety through the whole web application stack?

I'm wondering how much a CRUD-centric web application can benefit from Haskell's type system, particularly when the front end is built with a JavaScript MVC framework like AngularJS which passes around typeless data objects.
It seems to me that as soon as you transform Haskell datatypes into JSON objects which you pass to a heavy JavaScript MVC framework layer, the benefits of having Haskell's type system as part of the web stack start eroding dramatically, as there is no way to let the type checker ensure the type-integrity of the data flow through the whole web application.
For example, you could change the database schema and the associated Haskell types, but the type checker won't be able to tell you what parts of the JavaScript MVC front-end need updating as well. I see this as a problem.
Am I stating the problem correctly, and if so, what advice could Haskell web application developers give on this point?
We've been wrestling with this exact same question quite a bit since we recently started a project with a substantial javascript front end. My anecdotal observation has been that we have a lot more bugs in the javascript application than we had with previous applications that just used Snap and generated HTML with Heist. We haven't decided on anything yet, but here are some of the possible solutions we've been considering:
Thin Javascript Wrapper
CoffeeScript
TypeScript
Dart
These are pretty unsatisfying solutions to me. They improve slightly on Javascript, but don't come close to giving me the things that I get with Haskell.
Much more functional and type safe front-end language
Fay is a proper subset of Haskell that compiles to Javascript. It definitely has some appeal, but doesn't give you access to all of Haskell. Last I heard it didn't support type classes, which I imagine would become an obstacle pretty quickly.
Elm
Roy
The problem with these solutions (as well as with the previous group) is that if your back end is written in Haskell, you still have the impedance mismatch because your front end language is not Haskell. This makes your code less DRY because you have to end up defining the same data structures in Haskell and the front end language. And when you change them in Haskell, you don't get errors indicating where your front end code needs changing. The app just breaks.
Compile Haskell to Javascript
The new game in town here is ghcjs. This is a very promising project, but I don't consider it to be viable for production at least until GHC 7.8 is released. That will hopefully happen within the next week. Once 7.8 is out the door, you still have to take into consideration that ghcjs is still very new. And even in the hypothetical scenario that it was 100% feature complete and the first release worked perfectly, you still have to remember that a fair amount of infrastructure has to be built before Haskell+ghcjs is as effective as high level javascript frameworks like Angular, Ember, etc.
UPDATE September 2016: Now, almost three years after I originally wrote this answer, GHCJS has improved greatly. There is still room for more improvements, but I have used it for production applications and it worked very well. It's especially powerful when combined with the Reflex FRP library that makes it much easier to build reactive UIs.
Generate Javascript from Haskell with an EDSL
If you have a relatively constrained problem, it might be possible to do all your application work on top of an EDSL that generates javascript. We already have the fantastic jmacro package to take care of the low level concerns of generating Javascript. You could leverage that and generate code that uses whatever other javascript libraries are appropriate for your application. That could be javascript + jquery, D3.js, or even code using a higher level javascript framework like Angular or Ember. I tend to think that Angular would be much easier to generate code for than Ember because of its simplicity and stronger encapsulation.
Greenfield a bytecode VM designed for functional languages in the browser
This is just a pie in the sky idea of mine. I don't think it's really practical because it would take a huge amount of work and be very difficult to gain adoption. But I like to at least mention the idea for completeness. Others have pointed out that asm.js is almost like this already. That may be the case, but it would be nice to have things like tailcall optimization designed into the VM level from the start.
In my opinion, an easy solution is to generate TypeScript interfaces from Haskell data declarations (or other API schema descriptions) and use TypeScript for the front-end part.
This gives people who do not know Haskell, but who do know Javascript, the opportunity to work on the project.
For example
data RpcResponse = RpcResponse { number :: Int, string :: Maybe String }
compile to
interface RpcResponse {
    number:  number,
    string?: string
}
and the function parseRpcResponseJson has the TypeScript type
parseRpcResponseJson(response: string): Option<RpcResponse>;

Javascript Performance: While vs For Loops

The other day during a tech interview, one of the question asked was "how can you optimize Javascript code"?
To my own surprise, he told me that while loops were usually faster than for loops.
Is that even true? And if yes, why is that?
You should have countered that a negative while loop would be even faster! See: JavaScript loop performance - Why is decrementing the iterator toward 0 faster than incrementing?
In while versus for, these two sources document the speed phenomenon pretty well by running various loops in different browsers and comparing the results in milliseconds:
https://blogs.oracle.com/greimer/entry/best_way_to_code_a and:
http://www.stoimen.com/blog/2012/01/24/javascript-performance-for-vs-while/.
Conceptually, a for loop is basically a packaged while loop that is specifically geared towards incrementing or decrementing (progressing over the logic according to some order or some length). For example,
for (let k = 0; k < 20; ++k) {…}
can be sped up by making it a negative while loop:
var k = 20;
while (--k) {…}  // note: this runs the body for k = 19 down to 1, one iteration fewer than the for loop above
and as you can see from the measurements in the links above, the time saved really does add up for very large numbers.
While this is a great answer in its minute dissection of speed and efficiency, I have to digress to @Pointy's original statement.
The right answer would have been that it's generally pointless to
worry about such minutia, since any effort you put into such
optimizations could be rendered a complete waste by the next checkin
to V8 or SpiderMonkey
Since Javascript is determined client-side, and originally had to be coded per browser for full cross-browser compatibility (back before ECMA was even involved, it was worse), the speed difference may not even have a logical answer at this point, due to the significant optimization and adoption of Javascript by browsers and their compiler engines.
We're not even talking about non-strict, script-only writing such as applications in GAS, so while the answers and questions are fun, they would most likely be more trivial than useful in real-world application.
To expound on this topic, you first need to understand where it originally comes from: compiling vs interpreting. Let's take a brief history of the evolution of languages and then jump back to compiling vs interpreting. While not required reading, you can just read Compiling vs Interpreting for the quick answer, but for in-depth understanding I'd recommend reading through both Compiling vs Interpreting and the Evolution of Programming (which shows how they are applied today).
COMPILING VS INTERPRETING
Compiled language coding is a method of programming in which you write your code in a compilable manner that a compiler understands; some of the more recognized languages today are Java, C++ and C#. These languages are written with the intent that a compiler program then translates the code into the machine code or bytecode used by your target machine.
Interpreted code
is code that is processed Just In Time (JIT), at the time of execution, without compiling first. It skips that step, which allows for quicker writing, debugging, additions/changes, etc. It also never stores the script's interpretation for future use: it re-interprets the script each time a method is called. The interpreted code is run within a defined and intended runtime environment (for javascript this is usually a browser), which, once it has interpreted the code, outputs the desired result. Interpreted scripts are never meant to be stand-alone software; they are always looking to plug into a valid runtime environment to be interpreted. This is why a script is not executable: it will never communicate directly with an operating system. If you look at the system processes occurring, you'll never see your script being processed; instead you see the program being processed, which is processing your script in its runtime environment.
So writing a hello script in Javascript means that the browser interprets the code, defines what hello is, and while this occurs the browser translates the code back down to machine-level code, saying: I have this script and my environment wants to display the word hello, so the machine then processes that into a visual representation of your script. It's a constant process, which is why you have processors in computers and a constant action of processing occurring on the system. Nothing is ever static; processes are constantly being performed no matter the situation.
Compilers
usually compile the code into a defined bytecode system, or machine code language, that is now a static version of your code. It will not be re-interpreted by the machine unless the source code is recompiled. This is why you will see a runtime error after compilation, which a programmer then has to debug in the source and recompile. Scripts intended for an interpreter (like Javascript or PHP) are simply instructions that are not compiled before being run, so the source code is easily edited and fixed without the need for additional compiling steps, as the compilation is done in real time.
Not All Compiled Code is Created Equal
An easy way to illustrate this is video game systems: the Playstation vs the Xbox. Xbox systems are built to support the .NET framework to optimize coding and development. C# utilizes this framework in conjunction with the Common Language Runtime in order to compile the code into bytecode. Bytecode is not a strict definition of compiled code; it's an intermediate step placed in the process that allows code to be written quicker and on a grander scale for programs, and it is then interpreted when the code is executed at runtime using, you guessed it, Just In Time (JIT) compilation. The difference is that this code is only interpreted once; once compiled, the program will not re-interpret that code again unless it is restarted.
Interpreted script languages will never compile the code, so a function in an interpreted script is constantly being re-processed while a compiled bytecode's function is interpreted once and the instructions are stored until the program's runtime is stopped. The benefit is that the bytecode can be ported to another machine's architecture provided you have the necessary resources in place. This is why you have to install .net and possibly updates and frameworks to your system in order for a program to work correctly.
The Playstation does not use a .NET framework for its machine. You will need to code in C++, which is meant to be compiled and assembled for a particular system architecture. The code will never be interpreted and will need to be exactly correct in order to run. You can never easily move this type of language like you could an intermediate language. It's made specifically for that machine's architecture and will never be interpreted otherwise.
So you see, even compiled languages are not inherently finalized versions of a compiled language. Compiled languages are meant, in their strict definition, to be compiled fully for use. Interpreted languages are meant to be interpreted by a program, but are also the most portable languages in programming, since they only need a program installed that understands the script; they also use the most resources, due to constantly being interpreted. Intermediate languages (such as Java and C#) are hybrids of the two, compiling in part but also requiring outside resources in order to be functional. Once run, they compile again, which is a one-time interpretation at runtime.
Evolution of Programming
Machine Code
The lowest form of coding, this code is strictly binary in its representation (I won't get into ternary computation, as it's theoretical and beyond the practical scope of this discussion). Computers understand the natural values: on/off, true/false. This is machine-level numerical code, which is different from the next level up, assembly code.
Assembly Code
The next level up is assembly language. This is the first point at which a language is translated for use by a machine. Assembly consists of mnemonics, symbols and operands that are then turned into machine-level code for the machine. This is important to understand, because when you first start programming, most people assume it's either/or: either I compile or I interpret. No coding language beyond low-level machine code is compile-only or interpret-only!
We went over this in "Not All Compiled Code is Created Equal". Assembly language is the first instance of it. Machine code is what the machine reads, but assembly language is what a human can read. As computers process faster, through better technological advancements, our lower-level languages have become more condensed in nature and no longer need to be manually implemented. Assembly language used to be the high-level coding language, as it was the quicker method of coding a machine. It was essentially a syntax language that, once assembled (the lowest form of compiling), converted directly to machine language. An assembler is a compiler, but not all compilers are assemblers.
High Level Coding
High-level coding languages are languages one step above assembly, but they may even contain an even higher level (this would be bytecode/intermediate languages). These languages are compiled from their defined syntax structure into either the machine code needed, bytecode to be interpreted, or a hybrid of the previous methods combined with a special compiler that allows assembly to be written inline. High-level coding, like its predecessor assembly, is meant to reduce the workload of the developer and remove any chance of critical errors in redundant tasks, like the building of executable programs. In today's world you will rarely see a developer work in assembly for the sake of crunching data for the benefit of size alone. More often, a developer may have a situation, as in video game console development, where they need a speed increase in the process. Because high-level compilers are tools that seek to ease the development process, they may not compile the code in the most efficient manner for that system architecture 100% of the time. In that case, assembly code would be written to maximize the system's resources. But you'll never see a person writing in machine code, unless you just met a weirdo.
THE SUMMARY
If you made it this far, congratulations! You just listened to more about this stuff in one sitting than my wife can stand in a lifetime. The OP's question was about the performance of while vs for loops. The reason this is a moot point by today's standards is two-fold.
Reason One
The days of interpreting Javascript are gone. All major browsers (yes, even Opera and Netscape) utilize a Javascript engine that is made to compile the script before executing it. The performance tweaks discussed by JS developers in terms of non-call-out methods are obsolete subjects of study when looking at native functions within the language. The code is already compiled and optimized before ever being a part of the DOM. It's not interpreted again while that page is up, because that page is the runtime environment. Javascript has really become an intermediate language more than an interpreted script. The reason it will never be called an intermediate scripting language is that Javascript is never compiled ahead of time. That's the only reason. Beyond that, its function in a browser environment is a miniature version of what happens with bytecode.
Reason Two
The chances of you writing a script, or library of scripts, that would ever require as much processing power as a desktop application on a website are almost nil. Why? Because Javascript was never created with the intent of being an all-encompassing language. Its creation was simply to provide a medium-level programming method allowing processes that weren't provided by HTML and CSS, while alleviating the development struggle of requiring dedicated high-level coding languages, specifically Java.
CSS and JS were not supported for most of the early ages of web development. Until around 1997, CSS was not a safe integration, and JS fought even longer. Everything besides HTML is a supplemental language in the web world.
HTML is specifically the building block of a site. You'd never write javascript to fully frame a website. At most you'd do DOM manipulation while building a site.
You'd never style your site in JS as it's just not practical. CSS handles that process.
You'd never store data, other than temporarily, using Javascript. You'd use a database.
So what are we left with, then? Increasingly, just functions and processes. CSS3 and its future iterations are going to take over all methods of styling from Javascript. You see that already with animations and pseudo-states (hover, active, etc.).
The only valid argument for optimizing code in Javascript at this point concerns badly written functions, methods and operations that can be helped by optimizing the user's formula/code pattern. As long as you learn proper and efficient coding patterns, Javascript, in today's age, suffers no loss of performance from its native functions.
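For example, one classic badly written pattern is recomputing a loop-invariant value inside the loop condition; hoisting it out once is the kind of optimization that still matters (a toy sketch, with expensiveLength standing in for any non-trivial call):

// "Badly written" pattern: the invariant is recomputed on every test.
function expensiveLength(arr) {            // stand-in for any costly call
    return arr.filter(Boolean).length;
}
var data = [1, 2, 0, 3, 0, 4];

var hits = 0;
for (var i = 0; i < expensiveLength(data); i++) { // called every iteration
    hits++;
}

// Hoisted version: the costly call happens exactly once.
var n = expensiveLength(data);
for (var j = 0; j < n; j++) {
    hits++;
}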
for (var k = 0; k < 20; ++k) { ... }
can be sped up by making it a negative while loop:
var k = 20; while (--k) { ... };
A more accurate test would be to use for to the same extent as while. The only difference is that using for loops offers more description. If we wanted to be super crazy we could just forgo the entire block:
var k = 0;
for(;;){doStuff till break}
//or we could do everything
for (var i=1, d=i*2, f=Math.pow(d, i); f < 1E9; i++, d=i*2, f=Math.pow(d,i)){console.log(f)}
Either way... in NodeJS v0.10.38 I'm handling a JavaScript loop of 1E9 (10^9) iterations in a quarter second, with for being on average about 13% faster. But that really has no effect on my future decisions about which loop to use or the amount I choose to describe in a loop.
> t=Date.now();i=1E9;
> while(i){--i;b=i+1}console.log(Date.now()-t);
292
> t=Date.now();i=1E9;
> while(--i){b=i+1}console.log(Date.now()-t);
285
> t=Date.now();i=1E9;
> for(;i>0;--i){b=i+1}console.log(Date.now()-t);
265
> t=Date.now();i=1E9;
> for(;i>0;){--i;b=i+1}console.log(Date.now()-t);
246
2016 Answer
In JavaScript the reverse for loop is the fastest. For loops are trivially faster than while loops. Be more focused on readability.
Here is some benchmarking. The following loops were tested:
var i,
    len = 100000,
    lenRev = len - 1;

i = 0;
while (i < len) {
    1 + 1;
    i += 1;
}

i = lenRev;
while (-1 < i) {
    1 + 1;
    i -= 1;
}

for (i = 0; i < len; i += 1) {
    1 + 1;
}

for (i = lenRev; -1 < i; i -= 1) {
    1 + 1;
}
2017 Answer
jsperf for vs foreach on Chrome 59
Here you can see Array.forEach has become fastest on the latest version of Chrome (59) as of the date written (7/31/17). You can find average times for other browser versions here: https://jsperf.com/for-vs-foreach/66.
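For reference, the two styles that benchmark compares look roughly like this (illustrative only; as the results above show, which one wins depends on the engine version):

var arr = new Array(100000).fill(1);

// Classic for loop
var sum1 = 0;
for (var i = 0; i < arr.length; i++) { sum1 += arr[i]; }

// Array.prototype.forEach
var sum2 = 0;
arr.forEach(function (v) { sum2 += v; });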
This goes to show that ES engine optimization changes which approach is more efficient at any given time.
My recommendation is that you use whichever is more expressive for your use case.
Performance differences within the same order of magnitude will mostly be irrelevant in the future, as computers become exponentially faster per Moore's Law.

Writing optimised JS for asm.js

There is quite a bit of excitement about asm.js and how it will be able to run some very heavy applications. However, it is compiled from C++ code. Is it still possible to get the benefit of current improvements without knowing C++ or other low-level languages?
Here is the thought that I had: is it at all possible to write the code in JS and have it recompiled to asm.js for optimisation?
If you have a small function that is very computation-heavy (crunching numbers rather than manipulating the DOM), you could rewrite it in asm.js style yourself, manually. It's possible (I've done it), but tedious.
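For a flavor of what "asm.js style" means, here is a minimal hand-written sketch (an integer add; the |0 coercions are what declare the types to the engine):

function MyAsmModule(stdlib, foreign, heap) {
    "use asm";
    function add(a, b) {
        a = a | 0;            // parameter a is an int
        b = b | 0;            // parameter b is an int
        return (a + b) | 0;   // result is an int
    }
    return { add: add };
}

var m = MyAsmModule(this);
console.log(m.add(2, 3)); // 5

Real asm.js code also manages its own memory inside the heap ArrayBuffer, which is where the tedium comes in.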
There are other asm.js compilers, e.g. LLJS that you may use instead of C++.
However, asm.js is not magic. You will only get performance benefit when you use language that is much better suited for ahead-of-time optimization than JS. You can't take fully-featured JS and make it faster by running JS VM on top of JS VM, just like you can't make ZIP files smaller by zipping them.
However, it is compiled from C++ code.
It is not. It's a language. Any program can emit text files which contain asm.js code. Emscripten compiles LLVM IR into asm.js, and there are compilers from C and C++ to LLVM IR, but this is only one possible way of getting asm.js code. Admittedly, it's currently the most mature, practical and popular way, but I would not be surprised at all if other asm.js compilers for other languages pop up some time in the future.
Is it still possible to get the benefit of current improvements without knowing C++ or other low level languages?
Well, in theory any language that can efficiently be compiled to machine code ahead-of-time can be implemented efficiently using asm.js, and that includes some rather high-level ones (e.g. Haskell).
But currently, nobody has a working implementation, and I don't expect this to ever become very popular. Right now, if you want asm.js performance, you'd probably write C or C++ code and compile it to asm.js, yes.
Note that the above excludes (among many others) Javascript. The fact that asm.js is a subset of Javascript is convenient in that asm.js code will run on unmodified browsers, but it's not of much use for anyone writing Javascript. asm.js is basically just a thin layer above machine code, with some amendments for security and JS interoperability. Compiling JS to asm.js is as hard as compiling it to machine code: easy if you don't give a damn about performance (just always use boxed dynamically-typed values like an interpreter, and emit calls to runtime library functions), very hard when you do.
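To see why the "don't give a damn about performance" route stays slow, here is a toy sketch (names are illustrative) of the boxed, dynamically-typed value representation such a naive compiler would emit:

// Every value carries a runtime tag; every operation must check tags.
function box(tag, value) { return { tag: tag, value: value }; }

function add(a, b) {
    if (a.tag === "number" && b.tag === "number") {
        return box("number", a.value + b.value);
    }
    // otherwise fall back to string concatenation, as JS semantics require
    return box("string", String(a.value) + String(b.value));
}

console.log(add(box("number", 2), box("number", 3)).value); // 5

Every addition pays for the tag checks and the wrapper objects, which is exactly the overhead a clever JIT removes by specializing on observed types.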
In fact, after decades of research into the subject, there's still no example of a highly dynamic language like Javascript, Ruby or Python being compiled into machine code ahead of time and running much faster than a clever interpreter. Just-in-time compilation, on the other hand, is very much practical -- but the major JS engines already do that, in a less roundabout way than compiling to asm.js, then parsing it again and compiling it to machine code.
Asm.js isn't a separate language, but a subset of Javascript. It's just Javascript with a lot stripped out for performance. This means that you don't need to learn another language, though in this case knowing C/C++ might be useful to understand it.
Asm.js is a very strict subset of JavaScript that can be generated relatively easily when compiling from C/C++ to JavaScript. Asm.js code is much closer to machine code than ordinary JavaScript code, which has allowed browsers to heavily optimise any code written in asm.js. In browsers that have implemented those optimizations, your code will typically run at about 50% of the speed of a C/C++ program that is compiled to machine code... which may seem slow, but is a hell of a lot faster than any ordinary JavaScript!
However, precisely because it's optimized for machines rather than humans, asm.js is practically impossible to handcode by any human developer... even though it's just JavaScript. While it is technically possible to convert - at least a subset of - ordinary JavaScript to an asm.js equivalent, such a conversion is not an easy task and I haven't encountered any project yet that made any attempt to achieve this.
Until someone achieves such a Herculean task, the best approach to producing asm.js code remains writing your code in C/C++ and converting it to JavaScript.
For more info on asm.js, see e.g. John Resig's article from 2013 or the official specs.

Tools for compiling Python / Boo / Ruby like syntax to C / C++ / LLVM / Javascript (using JS ArrayBuffer for speed)

I'm trying to automatically compile / convert code written with Pythonic semantics into native and fast Javascript code.
What tools can do this, with nice debugging support possible like with Java etc?
Has anyone done this?
Why?
I'm trying to write some visualisation code with a complex main loop, a timeline, some physics simulation, and some complex interaction. I.e., it IS an actual CPU-bound problem.
Writing in Javascript and testing in its browser environment is harder to debug than, say, Java, .NET or Python running in a decent IDE.
But for doing actual large scale web development with complex client side code, it's necessary to at least compile to Javascript, if not directly write in it.
Background: Recent advances
Emscripten allows compiling C/C++ to Javascript that can run with increasing efficiency in the browser thanks to ArrayBuffer's typed array support and new browser JS engines, as asm.js and LLJS take advantage of Mozilla's recent speed improvements (which other vendors will likely soon follow).
Altjs.org has a laundry list of Javascript alternatives, but doesn't yet focus specifically on the recent speed improvements or nice semantics; still, it is becoming commonplace for people to code for browsers with better tools. Emscripten in particular has loads of amazing demos.
Possible options already considered:
Shedskin - Currently I have tried getting Shedskin working, but I have limited C++/C skills (Emscripten only exposes a C API for the Boehm-inspired garbage collector it uses, and Shedskin needs a C++ garbage collection class for its objects, which doesn't exist yet).
Unladen Swallow / RPython to LLVM - have not been able to set these up correctly on Ubuntu yet
Boo to Java then to LLVM (not been able to set this up on my Ubuntu system yet)
Additional constraints:
I need to use this on my Ubuntu system.
The compiled Javascript should probably be less than 1 MB
Debugging in the native language, which is also cross-compiled, should still be possible, allowing me to take advantage of existing debug tools.
"This process of constructing instruction tables should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself." -- Alan M. Turing, 1946
You want a high level dynamic language that compiles down to efficient low level JavaScript? There is no such thing. If dynamic languages were fast we would not need asm.js in the first place.
If you want to write code that compiles to efficient JavaScript you will have to learn a lower-level language. The reason Emscripten is fast is that it compiles from a low-level language (C/C++) that allows for greater compiler optimization than regular JavaScript. That is also the reason asm.js and LLJS can be faster. They get their speed from not having dynamic types, garbage collection (this specifically is what makes it possible to use ArrayBuffer for memory) and other high-level features.
The bottom line is: no tools exist for compiling a language with Pythonic semantics into native and fast Javascript code. And depending on what you mean by semantics, it is unlikely that such a thing will appear, since Python is a slow language in itself.
The best option right now for generating fast JavaScript is Emscripten. You could also consider LLJS or writing fast JavaScript by hand (Chrome has debugging tools for this).
Also, judging by the title of your question, you are very concerned about syntax. You should not be. When choosing the right language for the job, syntax is one of the least important factors.
Since you mentioned Shedskin yourself, I would imagine that you can share some of your experience (and explain what exactly, in your opinion, Shedskin is missing, apart from the fact that its input is a restricted Python grammar). I could also assume that Cython/Pyrex are not acceptable (due to grammar restrictions).
If Shedskin is too much in the alpha stage for you, then you might be looking for something like the Numba project, which includes a compiler of dynamic Python to LLVM, as well as llvm-py, which allows linking LLVM-exposed bytecode much as ctypes allows linking shared libraries, and building LLVM IR compilers.
Here is an excerpt from the blog showing how one can use Numba as a JIT for numpy (including a performance comparison with equivalent Cython code):
import numpy as np
from numba import double
from numba.decorators import jit

@jit(arg_types=[double[:,:], double[:,:]])
def pairwise_numba(X, D):
    M = X.shape[0]
    N = X.shape[1]
    for i in range(M):
        for j in range(M):
            d = 0.0
            for k in range(N):
                tmp = X[i, k] - X[j, k]
                d += tmp * tmp
            D[i, j] = np.sqrt(d)
Emscripten should allow you to expose and call your python -> llvm -> JS code as described here: https://github.com/kripken/emscripten/wiki/Interacting-with-code
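For instance (a hedged sketch based on that documentation; my_func is a hypothetical exported C function taking and returning an int), the JS side of such a call might look like:

// cwrap builds a JS wrapper around a function compiled into the Module.
var my_func = Module.cwrap('my_func', 'number', ['number']);
console.log(my_func(42)); // calls the compiled code, returns its int result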
