I'm using Java to generate JavaScript code that does various animations on a canvas.
My resulting JavaScript code is already large enough that animation smoothness is suffering: the "draw" method that gets called continually takes long enough to run that the pause between frames is noticeable. The generated HTML already contains over 4,000 lines of code, so I'm looking for optimization tips.
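For what it's worth, one way to confirm where the time goes is to wrap the generated draw method and log slow frames. A minimal sketch, assuming the method is named draw (the name and the 16ms threshold are placeholders for illustration):
var originalDraw = draw;
draw = function () {
    var start = Date.now();
    originalDraw();
    var elapsed = Date.now() - start;
    // roughly 16ms per frame is the budget for 60fps
    if (elapsed > 16) {
        console.log('slow frame: ' + elapsed + 'ms');
    }
};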
Noting that the JavaScript code is mostly Java-generated, is it faster for the browser to execute this:
for( var i = 0; i < 4; i++ ) {
drawNumber(i);
}
Or to have java generate this?:
drawNumber(0);
drawNumber(1);
drawNumber(2);
drawNumber(3);
I prefer the latter, because it makes the code more obscure (a plus, since this is generated code that no one needs to read).
Is it better to do:
var x = (2+1) * (2+1);
Or
var a = (2+1);
var x = a * a;
Of course the first has more math operations, but I don't know if declaring a new variable is more costly.
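The simplest way to settle micro-questions like these is to time both variants over many iterations. A rough sketch using the drawNumber example from above (the iteration count is arbitrary):
console.time('loop');
for (var n = 0; n < 100000; n++) {
    for (var i = 0; i < 4; i++) {
        drawNumber(i);
    }
}
console.timeEnd('loop');

console.time('unrolled');
for (var m = 0; m < 100000; m++) {
    drawNumber(0);
    drawNumber(1);
    drawNumber(2);
    drawNumber(3);
}
console.timeEnd('unrolled');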
Related
My web page creates a lot of DOM elements at once in a tight (batched) loop, depending on data fed by my Comet web server.
I tried several methods to create those elements. Basically it boils down to either (1):
var container = $('#selector');
for (...) container.append('<html code of the element>');
or (2):
var html = '';
for (...) html += '<html code of the element>';
$('#selector').append(html);
or (3):
var html = [];
for (...) html.push('<html code of the element>');
$('#selector').append(html.join(''));
Performance-wise, (1) is absolutely awful (3s per batch on a desktop computer, up to 5min on a Galaxy Note fondleslab), and (2) and (3) are roughly equivalent (300ms on desktop, 1.5s on fondleslab). Those timings are for about 4,000 elements, which is about 1/4 of what I expect in production, and this is not acceptable: I should be able to handle this amount of data (15k elements) in under 1s, even on fondleslab.
The very fact that (2) and (3) have the same performance makes me think that I'm hitting the infamous "naively concatenating strings uselessly reallocates and copies lots of memory" problem (even though I'd expect join() to be smarter than that). [edit: after looking more closely into it, it turns out I was misled about that; the problem is more on the rendering side -- thanks DanC]
In C++ I'd just go with std::string::reserve() and operator += to avoid the useless reallocations, but I have no idea how to do that in Javascript.
Any ideas how to improve the performance further? Or at least pointers to ways of identifying the bottleneck (even though I'm pretty sure it's the string concatenation)? I'm certainly no JavaScript guru...
Thanks for reading.
For what it's worth, that huge number of elements is because I'm drawing a (mostly real-time) graph using DIV's. I'm well aware of Canvas but my app has to be compatible with old browsers so unfortunately it's not an option. :(
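One way to tell whether the cost is in building the HTML or in rendering it is to time the two phases separately. A minimal sketch along the lines of option (3), with placeholder markup standing in for the real element HTML:
console.time('build-html');
var html = [];
for (var i = 0; i < 4000; i++) {
    html.push('<div>element</div>');
}
var joined = html.join('');
console.timeEnd('build-html');

console.time('append');
$('#selector').append(joined);
console.timeEnd('append');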
Using DOM methods, building and appending 12000 elements clocks in around 55ms on my dual-core MacBook.
document.getElementById('foo').addEventListener('click', function () {
build();
}, false);
function build() {
    console.time('build');

    // Build all 12,000 divs off-DOM in a document fragment
    var fragment = document.createDocumentFragment();
    for (var e = 0; e < 12000; e++) {
        var el = document.createElement('div');
        el.appendChild(document.createTextNode(e));
        fragment.appendChild(el);
    }

    // A single append means a single reflow
    document.querySelectorAll('body')[0].appendChild(fragment);
    console.timeEnd('build');
}
Fiddle
Resig on document.createDocumentFragment
This is not a solution to the performance problem, but only a way to ensure the UI loop is free to handle other requests.
You could try something like this:
var container = $('#selector');
for (...) setTimeout(function () { container.append('<html code of the element>'); });
To be slightly more performant, I would actually call setTimeout only after every x iterations, after building up a larger string. And, not having tried this myself, I am not sure whether the ordering of setTimeout calls is preserved. If not, you can do something more like this:
var arrayOfStrings = [/* each element is the HTML for a batch of 100 or so elements */];
function processNext(arr, i) {
    container.append(arr[i]);
    if (i + 1 < arr.length) {
        setTimeout(function() { processNext(arr, i + 1); });
    }
}
processNext(arrayOfStrings, 0);
Not pretty, but would ensure the UI is not locked up while the DOM is manipulated.
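For completeness, here is a sketch of the batching idea mentioned above: pre-joining every 100 element strings into one chunk so each timeout appends a single larger string. It assumes an elements array holding the per-element HTML; the name and batch size are illustrative:
var batchSize = 100;
var batches = [];
for (var i = 0; i < elements.length; i += batchSize) {
    // each batch is one big HTML string covering batchSize elements
    batches.push(elements.slice(i, i + batchSize).join(''));
}
processNext(batches, 0); // reuses processNext from above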
I am trying to create a loading bar for a very intensive stretch of JavaScript in which some pretty heavy 3D arrays are built and filled. This loading bar needs to remain empty until the user clicks a button.
The freezing occurs whether or not I'm using -webkit-transition (This app can be chrome exclusive, cross browser is not necessary in my case).
Seeking simplicity I've built my bar like this...
<div id="loader">
<div id="thumb">
</div>
</div>
... and then sought to increment that bar at various stages of my main for loop:
for (var i = 0; i < 5; i++) {
document.getElementById('thumb').style.width = i*25 + '%';
//More Code
}
Problem is that everything freezes until the JavaScript finishes.
I found a similar question on Stack Overflow, Using CSS animation while javascript computes, and in the comments found and considered and/or tried the following:
Web Workers
Don't think it'll work, since my script is filling an array with objects and constructors containing functions, which according to this site isn't going to work
jQuery
Not an option, I can't use external libraries in my app - in any case, importing a whole library just for a loading bar seems kind of like overkill...
Keyframes
This was promising and I tried it, but in the end it freezes also, so no joy
setTimeout()s
Thought about this, but since the point of the loading bar is to reduce frustration, increasing the waiting time seems counter-productive
I'd be happy to have any incrementation of the bar at this stage, even if it's not smooth! I'm pretty sure this is a problem that has struck more than just me - maybe someone has an interesting solution?
P.S.: I'm posting this as a new question rather than adding to the referenced question since I'm specifically seeking help with JavaScript (not jQuery) and would prefer if I could get it using a transition (!=animation) on the width.
Some people have already mentioned that you should use timeouts. That's the appropriate approach, because it gives the browser time to "breathe" and render your progress bar mid-task.
You have to split your code up to work asynchronously. Say you currently have something like this:
function doAllTheWork() {
for(var i = 0; i < reallyBigNumberOfIterations; i++) {
processorIntensiveTask(i);
}
}
Then you need to turn it into something like this:
var i = 0;
function doSomeWork() {
var startTime = Date.now();
while(i < reallyBigNumberOfIterations && (Date.now() - startTime) < 30) {
processorIntensiveTask(i);
i++;
}
if(i < reallyBigNumberOfIterations) {
// Here you update the progress bar
incrementBar(i / reallyBigNumberOfIterations);
// Schedule a timeout to continue working on the heavy task
setTimeout(doSomeWork, 50);
}
else {
taskFinished();
}
}
function incrementBar(fraction) {
console.log(Math.round(fraction * 100) + ' percent done');
}
function taskFinished() { console.log('Done!'); }
doSomeWork();
Note the expression (Date.now() - startTime) < 30. That means the loop will get as much done as it can in the span of 30 milliseconds. You can make this number bigger, but anything over 100ms (essentially 10 frames-per-second) is going to start feeling sluggish from the user's point of view.
It may be true that the overall task will take somewhat longer with this approach than with the synchronous version. From the user's point of view, though, an indication that something is happening beats waiting indefinitely while nothing seems to happen, even if that indefinite wait is actually shorter.
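To tie this back to the question's markup, incrementBar could set the #thumb width instead of logging. A sketch using the div from the question:
function incrementBar(fraction) {
    // convert the 0..1 fraction into a percentage width
    document.getElementById('thumb').style.width = Math.round(fraction * 100) + '%';
}
Because the update happens between timeouts, the browser gets a chance to repaint, so the bar (and any CSS transition on its width) is actually visible while the work runs.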
Have you tried going even simpler and making a function, let's say:
Pseudo:
function incrementBar(percent)
{
    document.getElementById('thumb').style.width = percent + '%';
}
Then, from wherever you are doing your processing work, just call that function every so often, or whenever you hit a certain point in the processing (say, every 10-20% of completion).
Pseudo:
{
    Doing_work_here;
    incrementBar(25);
    LOOP
}
JavaScript is everywhere and, to my mind, is constantly gaining importance. Most programmers would agree that while JavaScript itself is ugly, its "territory" sure is impressive. With the capabilities of HTML5 and the speed of modern browsers, deploying an application via JavaScript is an interesting option: it's probably as cross-platform as you can get.
The natural result is cross-compilers. The predominant one is probably GWT, but there are several other options out there. My favourite is CoffeeScript, since it adds only a thin layer over JavaScript and is much more "lightweight" than, for example, GWT.
There's just one thing that has been bugging me: although my project is rather small, performance has always been an important topic. Here's a quote:
The GWT SDK provides a set of core Java APIs and Widgets. These allow you to write AJAX applications in Java and then compile the source to highly optimized JavaScript.
Is CoffeeScript optimized, too? Since CoffeeScript seems to make heavy use of uncommon JavaScript functionality, I'm worried how its performance compares.
Do you have experience with CoffeeScript-related speed issues?
Do you know a good benchmark comparison?
Apologies for resurrecting an old topic, but it was concerning me too. I decided to perform a little test. One of the simplest performance tests I know is to write consecutive values to an array: memory is consumed in a familiar manner as the array grows, and 'for' loops are common enough in real life to be considered relevant.
After a couple of red herrings I found CoffeeScript's simplest method to be:
newway = -> [0..1000000]
# simpler and quicker than the example from http://coffeescript.org/#loops
# countdown = (num for num in [10..1])
This uses a closure and returns the array as the result. My equivalent is this:
function oldway()
{
var a = [];
for (var i = 0; i <= 1000000; i++)
a[i] = i;
return a;
}
As you can see the result is the same, and it grows an array in a similar way too. Next I profiled each in Chrome, 100 runs apiece, and averaged:
newway() | 78.5ms
oldway() | 49.9ms
CoffeeScript is about 57% slower. I refute the claim that "the CoffeeScript you write ends up running as fast as (and often faster than) the JS you would have written" (Jeremy Ashkenas).
Addendum: I was also suspicious of the popular belief that "there is always a one-to-one equivalent in JS". I tried to recreate my own code with this:
badway = ->
a = []
for i in [1..1000000]
a[i] = i
return a
Despite the similarity, it still proved 7% slower, because the generated JavaScript adds extra checks for the range direction (increment or decrement), which means it is not a straight translation.
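For reference, the guarded loop CoffeeScript 1.x emits when it cannot determine the range direction at compile time looks roughly like this (a sketch with a variable bound n; exact output varies by compiler version):
var a = [], i, _i, n = 1000000;
for (i = _i = 1; 1 <= n ? _i <= n : _i >= n; i = 1 <= n ? ++_i : --_i) {
    a[i] = i;
}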
This is all quite interesting, and there is one truth: CoffeeScript cannot run faster than fully optimized JavaScript.
That said, since CoffeeScript generates JavaScript, there are ways to make it worth it. Sadly, that doesn't seem to be the case yet.
Let's take this example:
new_way = -> [0..1000000]
new_way()
It compiles to this with CoffeeScript 1.6.2:
// Generated by CoffeeScript 1.6.2
(function() {
var new_way;
new_way = function() {
var _i, _results;
return (function() {
_results = [];
for (_i = 0; _i <= 1000000; _i++){ _results.push(_i); }
return _results;
}).apply(this);
};
new_way();
}).call(this);
And the code provided by clockworkgeek is:
function oldway()
{
var a = [];
for (var i = 0; i <= 1000000; i++)
a[i] = i;
return a;
}
oldway()
But since CoffeeScript hides the function inside a scope, we should do the same for the JavaScript. We don't want to pollute window, right?
(function() {
function oldway()
{
var a = [];
for (var i = 0; i <= 1000000; i++)
a[i] = i;
return a;
}
oldway()
}).call(this);
So here we have code that actually does the same thing. Now we'd like to run both versions a number of times.
CoffeeScript:
for i in [0..100]
new_way = -> [0..1000000]
new_way()
Here is the generated JS, and you may ask yourself what is going on there: it creates both i and _i for some reason, when clearly only one of the two is needed.
// Generated by CoffeeScript 1.6.2
(function() {
var i, new_way, _i;
for (i = _i = 0; _i <= 100; i = ++_i) {
new_way = function() {
var _j, _results;
return (function() {
_results = [];
for (_j = 0; _j <= 1000000; _j++){ _results.push(_j); }
return _results;
}).apply(this);
};
new_way();
}
}).call(this);
So now we're going to update our JavaScript:
(function() {
function oldway()
{
var a = [];
for (var i = 0; i <= 1000000; i++)
a[i] = i;
return a;
}
var _i;
for(_i=0; _i <= 100; ++_i) {
oldway()
}
}).call(this);
So the results:
time coffee test.coffee
real 0m5.647s
user 0m0.016s
sys 0m0.076s
time node test.js
real 0m5.479s
user 0m0.000s
sys 0m0.000s
The handwritten JS (test2.js) takes:
time node test2.js
real 0m5.904s
user 0m0.000s
sys 0m0.000s
So you might ask yourself... what the hell, the CoffeeScript is faster??? But then you look at the code and think: let's try to fix that!
(function() {
function oldway()
{
var a = [];
for (var i = 0; i <= 1000000; i++)
a.push(i);
return a;
}
var _i;
for(_i=0; _i <= 100; ++_i) {
oldway()
}
}).call(this);
So we make a small fix to the JS script, changing a[i] = i to a.push(i) (shown above), and try again... and then BOOM:
time node test2.js
real 0m5.330s
user 0m0.000s
sys 0m0.000s
This small change made it faster than our CoffeeScript. Now let's look at the generated CoffeeScript and remove those duplicated variables, changing it to this:
// Generated by CoffeeScript 1.6.2
(function() {
var i, new_way;
for (i = 0; i <= 100; ++i) {
new_way = function() {
var _j, _results;
return (function() {
_results = [];
for (_j = 0; _j <= 1000000; _j++){ _results.push(_j); }
return _results;
}).apply(this);
};
new_way();
}
}).call(this);
and BOOM
time node test.js
real 0m5.373s
user 0m0.000s
sys 0m0.000s
Well, what I'm trying to say is that there are great benefits to using a higher-level language. The generated CoffeeScript wasn't optimized, but it wasn't that far from the pure JS code. The optimization clockworkgeek tried, using the index directly instead of push, actually seemed to backfire and ran more slowly than the generated CoffeeScript.
The truth is that this kind of optimization can be hard to find and fix. On the other hand, from version to version, CoffeeScript could generate JS code optimized for current browsers and interpreters; the CoffeeScript source would remain unchanged but could be regenerated to speed things up.
If you write directly in JavaScript, there is no way to really optimize the code as much as a real compiler could.
The other interesting part is that one day, CoffeeScript or other JavaScript generators could be used to analyse code (the way jslint does) and strip out parts where some variables aren't needed, or compile functions differently for different arguments to speed things up. With pure JS, you have to trust the JIT compiler to do the job right, and that holds for CoffeeScript too.
For example, I could optimize the CoffeeScript one last time by removing the new_way = (function... assignment from inside the for loop. A smart programmer would see that the only thing happening there is reassigning the same function on every iteration, which changes nothing; the assignment can be made once, in the enclosing scope, instead of being repeated on each pass. That said, it shouldn't change much...
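For illustration, that last change amounts to hoisting the function assignment out of the loop in the generated code above, roughly like this:
(function() {
    var i, new_way;
    // assigned once, before the loop, instead of on every iteration
    new_way = function() {
        var _j, _results;
        return (function() {
            _results = [];
            for (_j = 0; _j <= 1000000; _j++){ _results.push(_j); }
            return _results;
        }).apply(this);
    };
    for (i = 0; i <= 100; ++i) {
        new_way();
    }
}).call(this);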
time node test.js
real 0m5.363s
user 0m0.015s
sys 0m0.000s
So this is pretty much it.
Short answer: No.
CoffeeScript generates JavaScript, so its maximum possible speed equals the speed of JavaScript. But while you can optimize JS code at a low level (yeah, it sounds ironic) and gain some performance boost, with CoffeeScript you cannot do that.
But the speed of your code should not be your concern when choosing CS over JS, as the difference is negligible for most tasks.
CoffeeScript compiles directly to JavaScript, meaning that there is always a one-to-one equivalent in JS for any CoffeeScript source. There is nothing uncommon about it. A performance gain can come from optimized output, e.g. the fact that CoffeeScript stores the array length in a separate variable in a for loop instead of requesting it on every iteration. But that should be common practice in JavaScript too; it is just not enforced by the language itself.
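For illustration, the length-caching pattern in question looks like this; the helper names are hypothetical:
var list = buildSomeLargeArray(); // hypothetical source array

// length cached in a separate variable, as CoffeeScript emits
for (var i = 0, len = list.length; i < len; i++) {
    doSomething(list[i]);
}

// length looked up again on every iteration
for (var j = 0; j < list.length; j++) {
    doSomething(list[j]);
}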
I want to add something to Loïc Faure-Lacroix's answer...
It seems that you only printed the times from one browser. And by the way, "x.push(i)" is not faster than "x[i] = i", according to jsperf: https://jsperf.com/array-direct-assignment-vs-push/130
Chrome: push => 79,491 ops/s; direct assignment => 3,815,588 ops/s;
IE Edge: push => 358,036 ops/s; direct assignment => 7,047,523 ops/s;
Firefox: push => 67,123 ops/s; direct assignment => 206,444 ops/s;
Another point: x.call(this) and x.apply(this)... I don't see any performance reason for them, and jsperf confirms that: http://jsperf.com/call-apply-segu/18
Chrome:
direct call => 47,579,486 ops/s; x.call => 45,239,029 ops/s; x.apply => 15,036,387 ops/s;
IE Edge:
direct call => 113,210,261 ops/s; x.call => 17,771,762 ops/s; x.apply => 6,550,769 ops/s;
Firefox:
direct call => 780,255,612 ops/s; x.call => 76,210,019 ops/s; x.apply => 2,559,295 ops/s;
First, I used current browsers.
Second, I extended the test with a for loop, because with a single call the test is too short...
Last but not least, the tests for all browsers now look like the following.
Here I used CoffeeScript 1.10.0 (compiled from the same code given in his answer):
console.time('coffee');// added manually
(function() {
var new_way;
new_way = function() {
var i, results;
return (function() {
results = [];
for (i = 0; i <= 1000000; i++){ results.push(i); }
return results;
}).apply(this);
};
// manually added on both
var i;
for(i = 0; i != 10; i++)
{
new_way();
}
}).call(this);
console.timeEnd('coffee');// added manually
Now the JavaScript:
console.time('js');
(function() {
function old_way()
{
var i = 0, results = [];
return (function()
{
for (i = 0; i <= 1000000; i++)
{
results[i] = i;
}
return results;
})();// replaced apply
}
var i;
for(i = 0; i != 10; i++)
{
old_way();
}
})();// replaced call
console.timeEnd('js');
The limit of the for loop is kept low because anything higher would make for a pretty slow test (10 * 1,000,000 calls)...
Results
Chrome: coffee: 305.000ms; js: 258.000ms;
IE Edge: coffee: 5944.281ms; js: 3517.72ms;
Firefox: coffee: 174.23ms; js: 159.55ms;
I should mention here that coffee was not always the slowest in this test; you can see that by running these tests in Firefox.
My final answer:
First of all, I am not really familiar with CoffeeScript, but I looked into it because I am using the Atom editor and wanted to try building my first package there, and ended up going back to JavaScript...
So if anything here is wrong, feel free to correct me.
With CoffeeScript you can write less code, but when it comes to optimization, the code gets heavy. My own opinion: I don't see the claimed "productiveness" in this CoffeeScript language...
To get back to performance: the most used browser is Chrome, with a 60% share (src: w3schools.com/browsers/browsers_stats.asp), and my tests have also shown that hand-written JavaScript runs a bit faster than CoffeeScript (in IE Edge, much faster). I would recommend CoffeeScript for smaller projects, but if no one minds, stick with the language you like.
I am using this bit of code to reformat some large Ajax responseText into good binary data. It works, albeit slowly.
The data I am working with can be as large as 8-10 MB.
I need this code to be absolutely efficient. How would loop unrolling or Duff's device be applied to this code while still keeping my binary data intact? Or does anyone see anything that could be changed to help increase its speed?
// Mask each character code down to its low byte, then rebuild the string
var ff = [];
var mx = text.length;
var scc = String.fromCharCode;
for (var z = 0; z < mx; z++) {
    ff[z] = scc(text.charCodeAt(z) & 255);
}
var b = ff.join("");
this.fp = b;
return b;
Thanks
Pat
Your time hog isn't the loop. It's this: ff[z] = scc(text.charCodeAt(z) & 255); Are you incrementally growing ff? That will be a pig, guaranteed.
If you just run it under the debugger and pause it, I bet you will see it in the process of growing ff. Pre-allocate.
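A sketch of that suggestion applied to the code from the question: size the array up front instead of growing it element by element (whether this actually helps is engine-dependent, so measure):
var ff = new Array(mx); // pre-sized instead of grown incrementally
for (var z = 0; z < mx; z++) {
    ff[z] = scc(text.charCodeAt(z) & 255);
}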
Convert the data to a JSON array on the server. 8-10 megabytes will take a long time even with a native JSON engine. I'm not sure why a JS application needs 8-10 MB of data in it. If you are downloading to the client's device, convert it to a format they expect and just link to it; they can download and process it themselves.
We are using Bing and/or Google javascript map controls, sometimes with large numbers of dynamically alterable overlays.
I have read http://support.microsoft.com/kb/175500/en-us and know how to set the MaxScriptStatments registry key.
The problem is that we do not want to programmatically set this or any other registry key on users' computers; we would rather achieve the same effect some other way.
Is there another way?
There's hardly anything you can do besides making your script "lighter". Try to profile it and figure out where the heaviest crunching takes place, then try to optimize those parts, break them down into smaller components, and call the next component with a timeout after the previous one has finished, and so on. Basically, give control back to the browser every once in a while; don't crunch everything in one function call.
Generally, a long-running script is encountered in code that is looping.
If you have to loop over a large collection of data and the work can be done asynchronously, akin to another thread, then move the processing to a web worker (http://www.w3schools.com/HTML/html5_webworkers.asp).
If you cannot or do not want to use a web worker, then find the main loop that is causing the long-running script, give it a maximum number of iterations, and have it yield back to the browser using setTimeout.
Bad: (thingToProcess may be too large, resulting in a long-running script)
function Process(thingToProcess){
var i;
for(i=0; i < thingToProcess.length; i++){
//process here
}
}
Good: (only allows 100 iterations before yielding back)
function Process(thingToProcess, start){
var i;
if(!start) start = 0;
for(i=start; i < thingToProcess.length && i - start < 100; i++){
//process here
}
if(i < thingToProcess.length) //still more to process
setTimeout(function(){Process(thingToProcess, i);}, 0);
}
Both can be called in the same way:
Process(myCollectionToProcess);