Is CoffeeScript faster than JavaScript? - javascript

JavaScript is everywhere and, to my mind, is constantly gaining importance. Most programmers would agree that while JavaScript itself is ugly, its "territory" sure is impressive. With the capabilities of HTML5 and the speed of modern browsers, deploying an application via JavaScript is an interesting option: it's probably as cross-platform as you can get.
The natural result is cross-compilers. The predominant one is probably GWT, but there are several other options out there. My favourite is CoffeeScript, since it adds only a thin layer over JavaScript and is much more "lightweight" than, for example, GWT.
There's just one thing that has been bugging me: although my project is rather small, performance has always been an important topic. Here's a quote:
The GWT SDK provides a set of core Java APIs and Widgets. These allow
you to write AJAX applications in Java and then compile the source to
highly optimized JavaScript
Is CoffeeScript optimized, too? Since CoffeeScript seems to make heavy use of non-common JavaScript functionality, I'm worried about how their performance compares.
Do you have experience with CoffeeScript-related speed issues?
Do you know a good benchmark comparison?

Apologies for resurrecting an old topic, but it was concerning me too. I decided to perform a little test. One of the simplest performance tests I know is to write consecutive values to an array: memory is consumed in a familiar manner as the array grows, and 'for' loops are common enough in real life to be considered relevant.
After a couple of red herrings, I found CoffeeScript's simplest method is:
newway = -> [0..1000000]
# simpler and quicker than the example from http://coffeescript.org/#loops
# countdown = (num for num in [10..1])
This uses a closure and returns the array as the result. My equivalent is this:
function oldway()
{
    var a = [];
    for (var i = 0; i <= 1000000; i++)
        a[i] = i;
    return a;
}
As you can see, the result is the same and it grows an array in a similar way too. Next I profiled each 100 times in Chrome and averaged.
newway() | 78.5ms
oldway() | 49.9ms
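These measurements can be reproduced with a minimal harness along these lines (a sketch: the `benchmark` helper name is mine, and absolute numbers vary by machine and engine):

```javascript
// Run fn `n` times and return the mean duration in milliseconds.
function benchmark(fn, n) {
  var total = 0;
  for (var run = 0; run < n; run++) {
    var start = Date.now();
    fn();
    total += Date.now() - start;
  }
  return total / n;
}

// The hand-written version from this answer.
function oldway() {
  var a = [];
  for (var i = 0; i <= 1000000; i++) a[i] = i;
  return a;
}

var meanMs = benchmark(oldway, 10);
```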
CoffeeScript is roughly 57% slower (78.5 ms vs 49.9 ms). I dispute the claim that "the CoffeeScript you write ends up running as fast as (and often faster than) the JS you would have written" (Jeremy Ashkenas).
Addendum: I was also suspicious of the popular belief that "there is always a one-to-one equivalent in JS". I tried to recreate my own code with this:
badway = ->
    a = []
    for i in [1..1000000]
        a[i] = i
    return a
Despite the similarity, it still proved 7% slower, because the generated code adds extra checks for direction (increment or decrement), which means it is not a straight translation.
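To see where that overhead comes from: when the loop bounds are not known at compile time, the compiled output contains a guard that decides the loop direction on every iteration. A plain-JS sketch of that shape (the `range` name is mine, not compiler output):

```javascript
// Direction-checked range loop, roughly the shape CoffeeScript emits
// for a range when the bounds are variables: the from/to comparison
// runs on every iteration to pick increment vs decrement.
function range(from, to) {
  var results = [];
  for (var i = from; from <= to ? i <= to : i >= to; from <= to ? i++ : i--) {
    results.push(i);
  }
  return results;
}
```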

This is all quite interesting, and there is one truth: CoffeeScript cannot run faster than fully optimized JavaScript.
That said, since CoffeeScript generates JavaScript, there are ways to make it worth it. Sadly, that doesn't seem to be the case yet.
Let's take this example:
new_way = -> [0..1000000]
new_way()
It compiles to this with CoffeeScript 1.6.2:
// Generated by CoffeeScript 1.6.2
(function() {
  var new_way;
  new_way = function() {
    var _i, _results;
    return (function() {
      _results = [];
      for (_i = 0; _i <= 1000000; _i++) { _results.push(_i); }
      return _results;
    }).apply(this);
  };
  new_way();
}).call(this);
And the code provided by clockworkgeek is
function oldway()
{
    var a = [];
    for (var i = 0; i <= 1000000; i++)
        a[i] = i;
    return a;
}
oldway()
But since the CoffeeScript version hides the function inside a scope, we should do the same for the JavaScript. We don't want to pollute window, right?
(function() {
  function oldway()
  {
    var a = [];
    for (var i = 0; i <= 1000000; i++)
      a[i] = i;
    return a;
  }
  oldway();
}).call(this);
So here we have code that actually does the same thing. Now we'd like to test both versions a couple of times.
CoffeeScript:
for i in [0..100]
    new_way = -> [0..1000000]
    new_way()
Here is the generated JS, and you may ask yourself what is going on there: it creates both i and _i, when clearly only one of the two is needed.
// Generated by CoffeeScript 1.6.2
(function() {
  var i, new_way, _i;
  for (i = _i = 0; _i <= 100; i = ++_i) {
    new_way = function() {
      var _j, _results;
      return (function() {
        _results = [];
        for (_j = 0; _j <= 1000000; _j++) { _results.push(_j); }
        return _results;
      }).apply(this);
    };
    new_way();
  }
}).call(this);
So now we're going to update our JavaScript:
(function() {
  function oldway()
  {
    var a = [];
    for (var i = 0; i <= 1000000; i++)
      a[i] = i;
    return a;
  }
  var _i;
  for (_i = 0; _i <= 100; ++_i) {
    oldway();
  }
}).call(this);
So the results:
time coffee test.coffee
real 0m5.647s
user 0m0.016s
sys 0m0.076s
time node test.js
real 0m5.479s
user 0m0.000s
sys 0m0.000s
The hand-written JS (test2.js) takes:
time node test2.js
real 0m5.904s
user 0m0.000s
sys 0m0.000s
So you might ask yourself... what, CoffeeScript is faster?! Then you look at the code and think... so let's try to fix that!
(function() {
  function oldway()
  {
    var a = [];
    for (var i = 0; i <= 1000000; i++)
      a.push(i);
    return a;
  }
  var _i;
  for (_i = 0; _i <= 100; ++_i) {
    oldway();
  }
}).call(this);
We made a small fix to the JS script above, changing a[i] = i to a.push(i). And then let's try again... and BOOM:
time node test2.js
real 0m5.330s
user 0m0.000s
sys 0m0.000s
This small change made it faster than our CoffeeScript. Now let's look at the generated CoffeeScript and remove those duplicate variables, changing it to this:
// Generated by CoffeeScript 1.6.2
(function() {
  var i, new_way;
  for (i = 0; i <= 100; ++i) {
    new_way = function() {
      var _j, _results;
      return (function() {
        _results = [];
        for (_j = 0; _j <= 1000000; _j++) { _results.push(_j); }
        return _results;
      }).apply(this);
    };
    new_way();
  }
}).call(this);
and BOOM
time node test.js
real 0m5.373s
user 0m0.000s
sys 0m0.000s
Well, what I'm trying to say is that there are great benefits to using a higher-level language. The generated CoffeeScript wasn't optimized, but it wasn't far from the pure JS code. The optimization clockworkgeek tried, using the index directly instead of push, actually seemed to backfire and ran more slowly than the generated CoffeeScript.
The truth is that this kind of optimization can be hard to find and fix. On the other hand, from version to version CoffeeScript could generate JS optimized for current browsers and interpreters. The CoffeeScript source would remain unchanged but could be recompiled to speed things up.
If you write directly in JavaScript, there is no way to really optimize the code as much as one could with a real compiler.
The other interesting part is that one day, CoffeeScript or other JavaScript generators could be used to analyse code (like jslint does) and remove parts where some variables aren't needed, or compile functions differently for different arguments to speed things up. If you write pure JS, you have to expect that a JIT compiler will do the job right, and that is good for CoffeeScript too.
For example, I could optimize the CoffeeScript one last time by moving the new_way = (function... out of the for loop. A smart programmer would see that the only thing happening there is reassigning the same function on each iteration, which doesn't change the variable: the function is created in the enclosing scope and isn't recreated on each loop. That said, it shouldn't change much...
time node test.js
real 0m5.363s
user 0m0.015s
sys 0m0.000s
So this is pretty much it.
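For reference, here are the two fill strategies discussed in this answer side by side; which one wins depends on the engine and its version, so measure rather than assume:

```javascript
// Fill by direct index assignment.
function fillByIndex(n) {
  var a = [];
  for (var i = 0; i < n; i++) a[i] = i;
  return a;
}

// Fill by push; both functions produce identical arrays.
function fillByPush(n) {
  var a = [];
  for (var i = 0; i < n; i++) a.push(i);
  return a;
}
```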

Short answer: No.
CoffeeScript generates JavaScript, so its maximum possible speed equals the speed of JavaScript. But while you can optimize JS code at a low level (yes, that sounds ironic) and gain some performance boost, with CoffeeScript you cannot do that.
But the speed of the code should not be your concern when choosing CS over JS, as the difference is negligible for most tasks.

CoffeeScript compiles directly to JavaScript, meaning that there is always a one-to-one equivalent in JS for any CoffeeScript source. There is nothing non-common about it. A performance gain can come from optimized output, e.g. the fact that CoffeeScript stores the array length in a separate variable in a for loop instead of requesting it in every iteration. But that should be common practice in JavaScript, too; it is just not enforced by the language itself.
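The cached-length pattern mentioned here looks like this in plain JavaScript (a common idiom written by hand, not CoffeeScript output):

```javascript
var arr = [1, 2, 3];
var out = [];
// `len` is read once, instead of re-reading arr.length on every iteration.
for (var i = 0, len = arr.length; i < len; i++) {
  out.push(arr[i] * 2);
}
```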

I want to add something to the answer of Loïc Faure-Lacroix...
It seems that you only printed the times of one browser. And by the way, "x.push(i)" is not faster than "x[i] = i" according to jsperf: https://jsperf.com/array-direct-assignment-vs-push/130
Chrome: push => 79,491 ops/s; direct assignment => 3,815,588 ops/s;
IE Edge: push => 358,036 ops/s; direct assignment => 7,047,523 ops/s;
Firefox: push => 67,123 ops/s; direct assignment => 206,444 ops/s;
Another point: x.call(this) and x.apply(this)... I don't see any performance reason for them here. Even jsperf confirms that: http://jsperf.com/call-apply-segu/18
Chrome:
direct call => 47,579,486 ops/s; x.call => 45,239,029 ops/s; x.apply => 15,036,387 ops/s;
IE Edge:
direct call => 113,210,261 ops/s; x.call => 17,771,762 ops/s; x.apply => 6,550,769 ops/s;
Firefox:
direct call => 780,255,612 ops/s; x.call => 76,210,019 ops/s; x.apply => 2,559,295 ops/s;
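The three invocation forms being compared are semantically equivalent for a fixed argument list; only the dispatch overhead differs. A small illustration (the `sum` function is mine):

```javascript
function sum(a, b) { return a + b; }

var direct = sum(1, 2);                 // direct call
var viaCall = sum.call(null, 1, 2);     // arguments passed individually
var viaApply = sum.apply(null, [1, 2]); // arguments passed as an array
```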
First to mention: I used current browsers.
Secondly, I extended the test with a for loop, because with one call the test is too short...
Last but not least, the tests for all browsers now look like the following.
Here I used CoffeeScript 1.10.0 (compiled from the same code given in his answer):
console.time('coffee'); // added manually
(function() {
  var new_way;
  new_way = function() {
    var i, results;
    return (function() {
      results = [];
      for (i = 0; i <= 1000000; i++) { results.push(i); }
      return results;
    }).apply(this);
  };
  // manually added on both
  var i;
  for (i = 0; i != 10; i++) {
    new_way();
  }
}).call(this);
console.timeEnd('coffee'); // added manually
Now the JavaScript:
console.time('js');
(function() {
  function old_way()
  {
    var i = 0, results = [];
    return (function()
    {
      for (i = 0; i <= 1000000; i++)
      {
        results[i] = i;
      }
      return results;
    })(); // replaced apply
  }
  var i;
  for (i = 0; i != 10; i++) {
    old_way();
  }
})(); // replaced call
console.timeEnd('js');
The limit value of the for loop is low, because anything higher would make for pretty slow testing (10 * 1,000,000 calls)...
Results
Chrome: coffee: 305.000ms; js: 258.000ms;
IE Edge: coffee: 5,944.281ms; js: 3,517.72ms;
Firefox: coffee: 174.23ms; js: 159.55ms;
Here I have to mention that coffee was not always the slowest in this test. You can see that by running those tests in Firefox.
My final answer:
First, I am not really familiar with CoffeeScript, but I looked into it because I am using the Atom editor and wanted to try building my first package there, but went back to JavaScript...
So if there is anything wrong, you can correct me.
With CoffeeScript you can write less code, but when it comes to optimization, the code gets heavy. My own opinion: I don't see the much-touted "productiveness" in CoffeeScript...
To get back to performance: the most used browser is Chrome (src: w3schools.com/browsers/browsers_stats.asp) with 60%, and my tests have also shown that manually typed JavaScript runs a bit faster than CoffeeScript (in IE, much faster). I would recommend CoffeeScript only for smaller projects, but if no one minds, stay with the language you like.

Related

Javascript animation and optimization

I'm using Java to generate JavaScript code that does various animations on a canvas.
My resulting JavaScript code is already getting large enough that the animation smoothness is suffering. The "draw" method that gets continually called takes long enough to run that the pause between frames is noticeable. The generated HTML already has over 4,000 lines of code, so I'm looking for optimization tips.
Noting that the JavaScript code is mostly Java-generated, is it faster for the browser to execute this:
for (var i = 0; i < 4; i++) {
  drawNumber(i);
}
Or to have java generate this?:
drawNumber(0);
drawNumber(1);
drawNumber(2);
drawNumber(3);
I prefer the former because the latter makes the code more obscure.
Is it better to do:
var x = (2+1) * (2+1);
Or
var a = (2+1);
var x = a * a;
Of course the first has more math operations, but I don't know if declaring a new variable is more costly.
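The two forms compute the same value; the temporary simply avoids re-evaluating the sub-expression, which matters more as the expression gets more expensive. A sketch (function names are mine, for illustration):

```javascript
// Same result either way; only the number of evaluations differs.
function squareInline() { return (2 + 1) * (2 + 1); }

function squareViaTemp() {
  var a = 2 + 1; // sub-expression evaluated once
  return a * a;
}
```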

Why is a custom function slower than a builtin?

I'm messing around with the performance of JavaScript's push and pop functions.
I have an array called arr.
When I run this:
for (var i = 0; i < 100; i++) {
  for (var k = 0; k < 100000; k++) {
    arr.push(Math.ceil(Math.random() * 100));
    arr.pop();
  }
}
I get a time of 251.38515999977244 milliseconds (I'm using the performance.now() function).
But when I run a custom push and pop:
Array.prototype.pushy = function(value) {
  this[this.length] = value;
}
Array.prototype.poppy = function() {
  this.splice(-1, 1);
}
for (var i = 0; i < 100; i++) {
  for (var k = 0; k < 100000; k++) {
    arr.pushy(Math.ceil(Math.random() * 100));
    arr.poppy();
  }
}
The time is 1896.055750000014 milliseconds.
Can anyone explain why there's such a huge difference between these?
To those who worry about the timing variance: I ran this test 100 times and computed an average time. I did that 5 times to ensure there weren't any outlying times.
Because the built-in function is written in whatever language the browser was written in (probably C++) and is compiled. The custom function is written in JavaScript and is interpreted.
Generally, interpreted languages are much slower than compiled ones. One usually doesn't notice this with JavaScript because, for the most part, you only execute a couple of lines of JS between human interactions (which are always the slowest part).
Running JS in a tight loop, as you're doing here, highlights the difference.
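A portable way to see the gap yourself (a sketch using Date.now() instead of performance.now(), with a helper name of my own; absolute numbers will vary):

```javascript
// Time a single run of fn in milliseconds.
function timeIt(fn) {
  var start = Date.now();
  fn();
  return Date.now() - start;
}

var arr = [];
var builtinMs = timeIt(function () {
  // Matching pushes and pops leave the array empty at the end.
  for (var k = 0; k < 100000; k++) {
    arr.push(Math.ceil(Math.random() * 100));
    arr.pop();
  }
});
```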
The reason is that the built-in function was specifically designed and optimized to perform one specific task. The browser takes whatever shortcuts it can with the built-in function that it may not be as quick to recognize in the custom function. For example, with your implementation, the function needs to read the array length every single time it is called.
Array.prototype.pushy = function(value) {
  this[this.length] = value;
}
However, by simply using Array.prototype.push, the browser knows that the purpose is to append a value to the array. While browsers may implement the function differently, I highly doubt any of them needs to compute the length of the array on every single iteration.
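If splice is the expensive part of the custom poppy, a leaner custom pop is possible by truncating length directly (a sketch; the method name `poppy2` is made up for illustration, it assumes a non-empty array, and it is still unlikely to beat the built-in):

```javascript
// Custom pop that avoids splice: read the last element, then shrink length.
// Assumes the array is non-empty (length -= 1 on an empty array would throw).
Array.prototype.poppy2 = function() {
  var last = this[this.length - 1];
  this.length -= 1; // truncating length drops the last element
  return last;
};
```

Shrinking `length` is defined behaviour for arrays and removes the trailing elements.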

Is it possible to have quadratic time complexity without nested loops?

It was going so well. I thought I had my head around time complexity. I was having a play on Codility and used the following algorithm to solve one of their problems. I am aware there are better solutions to this problem (a permutation check), but I simply don't understand how something without nested loops could have a time complexity of O(N^2). I was under the impression that associative arrays in JavaScript are like hashes and are very quick, and wouldn't be implemented as time-consuming loops.
Here is the example code
function solution(A) {
  // write your code in JavaScript (Node.js)
  var dict = {};
  for (var i = 1; i < A.length + 1; i++) {
    dict[i] = 1;
  }
  for (var j = 0; j < A.length; j++) {
    delete dict[A[j]];
  }
  var keyslength = Object.keys(dict).length;
  return keyslength === 0 ? 1 : 0;
}
and here is the verdict
There must be a bug in their tool that you should report: this code has a complexity of O(n).
Believe me I am someone on the Internet.
On my machine:
console.time(1000);
solution(new Array(1000));
console.timeEnd(1000);
//about 0.4ms
console.time(10000);
solution(new Array(10000));
console.timeEnd(10000);
// about 4ms
Update: To be pedantic (sic), I still need a third data point to show it's linear
console.time(100000);
solution(new Array(100000));
console.timeEnd(100000);
// about 45ms, well let's say 40ms, that is not a proof anyway
Is it possible to have quadratic time complexity without nested loops? Yes. Consider this:
function getTheLengthOfAListSquared(list) {
  for (var i = 0; i < list.length * list.length; i++) { }
  return i;
}
As for that particular code sample, it does seem to be O(n), as @floribon says, given that JavaScript object lookup should be constant time.
Remember that making an algorithm that takes an arbitrary function and determines whether that function will complete at all is provably impossible (the halting problem), let alone determining its complexity. Writing a tool to statically determine the complexity of anything but the most simple programs would be extremely difficult, and this tool's result demonstrates that.
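Another way a single loop goes quadratic, for illustration: doing O(n) work (such as copying) inside each of the n iterations. The function name here is mine:

```javascript
// O(n^2) with one visible loop: each slice copies up to i elements,
// so the total work is 0 + 1 + ... + (n-1) = O(n^2).
function prefixes(list) {
  var snapshots = [];
  for (var i = 0; i < list.length; i++) {
    snapshots.push(list.slice(0, i)); // O(i) copy inside an O(n) loop
  }
  return snapshots;
}
```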

JavaScript: multiple array conditions in if statement

I'm pretty new here, but I'm posting this because I haven't found a single answer on the internet to this question.
How can I use multiple arrays as conditions in an if statement? The reason I need this is simply for creating a 2D game. I'm learning that even a simple 2D game has tons of variables because of all the objects involved. Here is a simple example of what I've started with:
var a = 27;
var test = 0;
if (a in {18:1, 27:1, 36:1}) {
  test = 1;
}
This tests one variable against a set of values. I've found that this returns true, but this is only half the battle.
The only place I've found any close reference to this is here.
How to shorten my conditional statements
Now the hard part is using two arrays as conditions instead of just a variable and an array. So basically I need this idea made shorter:
var a = 27;
var b = 27;
var c = 50;
var test = 0;
if (a in {18:1, 27:1, 36:1} || b in {18:1, 27:1, 36:1} || c in {18:1, 27:1, 36:1}) {
  test = 1;
}
Even though I'm a noob, my bible is the hacker's standard :P, which basically means I think that doing something over and over without a very good reason "IS THE DEVIL" (kudos to whoever got the reference). So let me explain this again, very specifically, so there's no confusion. Say I create a lot of NPCs (non-player characters) and I want a system that can detect whether an individual NPC has been hit by, let's say, a projectile. I want that individual to vanish and give a point to a scoreboard. Creating functions for such characters requires a LOT of if statements. So ideally I want an if statement that somehow uses 2 or more arrays for its conditions but looks almost as short as using two variables.
Maybe something that looks like this:
var test = 0;
var a = [5, 6, 8];
var b = [10, 30, 8];
if (a in b) {
  test = 1;
}
NOTE: I've actually already tried this, but it only checked against the indices of b, not the values inside. I believe this topic deserves attention unless there's already someone out there who posted a solution (in which case it NEEDS to be advertised).
EDIT: After a long while I've come to realize that the proper (more efficient and readable) solution is to use both OOP and game-engine design. I was just too young to understand how to work with data. Anyone who sees this wondering the same thing should simply study array and class logic more thoroughly. In honesty, JavaScript is NOT the place to learn this. I recommend taking a trip to processing.org and learning to use classes. If you're having trouble there, you can try openFrameworks and learn OOP in C++. But the biggest part is understanding proper array mechanics; the OOP just makes it easier.
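The reason the `a in b` attempt failed: `in` tests property names, and for an array the property names are its indices, not its values:

```javascript
var b = [10, 30, 8];

var zeroIn = 0 in b;  // true: indices 0, 1, 2 exist as keys of b
var tenIn = 10 in b;  // false: 10 is a value in b, not an index
```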
var test = false;
var a = [5, 6, 8];
var b = { 10:1, 30:1, 8:1 };
for (var i = 0; i < a.length; i++) {
  if (a[i] in b) {
    test = true;
    break;
  }
}
If you're using a library like Underscore.js, it has convenience functions such as _.some() (aliased as _.any()) that can be used to replace the loop. You can also use the built-in Array#some method, but it's not compatible with IE8. Ex:
return a.some(function(x) {
  return x in b;
});

CS5 Hiding layers is painfully slow

Is it only me who thinks CS5 scripts run painfully slow?
These few lines take over 1 minute to execute:
for (n = 0; n < app.activeDocument.layerSets.length; n++) {
  app.activeDocument.layerSets[n].visible = false;
}
The number of layerSets are 20.
I'm running the CS5.1 64bit version on a Vista Home Premium system, AMD Athlon 64 X2 Dual Core 5200+ with 8GB RAM.
I tried to export the script as a .JSXBIN but it still takes over 1 minute. CPU usage for CS5.1 goes from 3% to 57% while it is running the .JSXBIN script.
There must be something wrong here, how can I speed up the scripts?
// Thanks
* EDIT *
It seems like CS5's own DOM implementation is the problem here. The script sped up by more than a factor of two after reading DOM-related values into local variables:
var LayerCount = app.activeDocument.layerSets.length;
var LayerRoot = app.activeDocument.layerSets;
for (n = 0; n < LayerCount; n++) {
  LayerRoot[n].visible = false;
}
...but it still takes far too much time just to change a property on 20 objects. Any help with optimizing would be appreciated :)
The only thing I can think of is to try looping through the individual layers in app.activeDocument.layers, which contains all the layers and groups. When you do this you'll notice that grouped layers still retain their original visible property but are hidden because their parent group is hidden.
#target photoshop
var myLayers = app.activeDocument.layers;
var myLayersLength = myLayers.length;
for (var i = 0; i < myLayersLength; i++) {
  myLayers[i].visible = false;
}
EDIT: So I tested this solution on a 400mb file with 50 layers and it worked in seriously less than a second. Are you sure the problem is with Photoshop?
If you have to iterate through every single layer and child-layer individually to perform an action you can do it recursively:
#target photoshop
var doc = app.activeDocument;
findLayers(doc);

function findLayers(set) {
  for (var i = 0; i < set.layerSets.length; i++) {
    // recursive call into nested groups
    findLayers(set.layerSets[i]);
    // iterate sub-layers and hide
    for (var j = 0; j < set.layerSets[i].layers.length; j++) {
      set.layerSets[i].layers[j].visible = false;
    }
  }
  // hide top-level layers
  for (var l = 0; l < set.layers.length; l++) {
    set.layers[l].visible = false;
  }
}
This takes a bit longer, ~20 seconds on my machine, but it will hit every single layer in a document.
NOTE: I also tested your original scripts from the question, and they don't work on un-grouped layers because you're iterating through document.layerSets instead of document.layers.
Have a look at this ps-scripts thread: iteration over layers is slow - explanation. It may help you as well.
