I have JavaScript which performs a whole lot of calculations as well as reading/writing values from/to the DOM. The page is huge so this often ends up locking the browser for up to a minute (sometimes longer with IE) with 100% CPU usage.
Are there any resources on optimising JavaScript to prevent this from happening (all I can find is how to turn off Firefox's long running script warning)?
If you can turn your calculation algorithm into something which can be called iteratively, you could release control back to the browser at frequent intervals by using setTimeout with a short timeout value.
For example, something like this...
function doCalculation()
{
    // do your thing for a short time

    // figure out how complete you are
    var percent_complete = ....

    return percent_complete;
}

function pump()
{
    var percent_complete = doCalculation();

    // maybe update a progress meter here!

    // carry on pumping?
    if (percent_complete < 100)
    {
        setTimeout(pump, 50);
    }
}

// start the calculation
pump();
Use timeouts.
By putting the content of your loop(s) into separate functions, and calling them from setTimeout() with a timeout of 50 or so, the JavaScript will yield control of the thread and come back some time later, allowing the UI to get a look-in.
There's a good walkthrough here.
I blogged about in-browser performance some time ago, but let me summarize the points related to the DOM for you here.
Update the DOM as infrequently as possible. Make your changes to in-memory DOM objects and append them to the DOM only once (see the sketch after this list).
Use innerHTML. It's faster than DOM methods in most browsers.
Use event delegation instead of regular event handling.
Know which calls are expensive, and avoid them. For example, in jQuery, $("div.className") will be more expensive than $("#someId").
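As an illustration of the first point, here's a minimal sketch that batches node creation in a DocumentFragment so the live DOM is touched only once (the list id and item count are invented for the example):
// build everything off-DOM first
var fragment = document.createDocumentFragment();
for (var i = 0; i < 1000; i++) {
    var li = document.createElement("li");
    li.appendChild(document.createTextNode("Item " + i));
    fragment.appendChild(li);
}
// a single append means a single reflow instead of a thousand
document.getElementById("myList").appendChild(fragment); // "myList" is hypothetical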
Then there are some related to JavaScript itself:
Loop as little as possible. If you have one function that collects DOM nodes, and another that processes them, you are looping twice. Instead, pass an anonymous function to the function that collects the nodes, and process the nodes as you are collecting them.
Use native functionality when possible. For example, forEach iterators.
Use setTimeout to let the browser breathe once in a while.
For expensive functions that have idempotent outputs, cache the results so that you don't have to recompute them.
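As a sketch of that caching point, assuming an expensive function keyed by a single numeric argument (the function body here is just a stand-in for the real work):
var cache = {};
function expensiveCalc(n) {
    if (n in cache) {
        return cache[n]; // reuse the previously computed result
    }
    var result = 0;
    for (var i = 0; i < n; i++) { // stand-in for the real heavy work
        result += Math.sqrt(i);
    }
    cache[n] = result;
    return result;
}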
There's some more on my blog (link above).
This is still a little bit bleeding edge, but Firefox 3.5 has these things called Web Workers. I'm not sure about their support in other browsers, though.
Mr. Resig has an article on them here: http://ejohn.org/blog/web-workers/
The simulated annealing demo is probably the simplest example of them: if you watch it, you'll notice that the spinning Firefox logo does not freeze up while the worker threads are doing their requests (thus not freezing the browser).
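For a flavour of the API, here is a minimal sketch (the file name and the work inside it are made up); the heavy loop runs off the main thread and posts its result back:
// main.js
var worker = new Worker("calc-worker.js"); // hypothetical file name
worker.onmessage = function(e) {
    console.log("result from worker: " + e.data);
};
worker.postMessage(1000000); // kick off the calculation

// calc-worker.js
onmessage = function(e) {
    var sum = 0;
    for (var i = 0; i < e.data; i++) {
        sum += Math.sqrt(i); // heavy work happens here, off the UI thread
    }
    postMessage(sum);
};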
You can try performing long running calculations in threads (see JavaScript and Threads), although they aren't very portable.
You may also try using a JavaScript profiler to find performance bottlenecks. Firebug supports profiling JavaScript.
My experience is that DOM manipulation, especially in IE, is much more of an issue for performance than "core" JavaScript (looping, etc.).
If you are building nodes, it is much faster in IE to do so by building an HTML string and then setting innerHTML on a container than by using DOM methods like createElement/appendChild.
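A rough sketch of the difference (the container id is invented for the example):
// DOM methods: one call per node, slow in IE
var container = document.getElementById("container");
for (var i = 0; i < 1000; i++) {
    var div = document.createElement("div");
    div.appendChild(document.createTextNode("Row " + i));
    container.appendChild(div);
}

// innerHTML: build a string, assign once, much faster in IE
var html = [];
for (var i = 0; i < 1000; i++) {
    html.push("<div>Row " + i + "</div>");
}
container.innerHTML = html.join("");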
You could try deferring each iteration's work with setTimeout, like this:
$(xmlDoc).find("Object").each(function(index, element) {
    (function(arg1_received) {
        setTimeout(function() {
            // your stuff with arg1_received goes here
        }, 0);
    })(element);
});
or for "for" loops try
for (var i = 0 ; i < 10000 ; i = i + 1) {
(function(arg1_received) {
setTimeout(function(arg1_received_reached) {
//your stuff with the arg1_received_reached goes here
}(arg1_received), 0)
})(arg1_to_send)
}
I had the same problem, and my customers were reporting it as a "Kill page" error. But now I've found a good solution for that. :)
In a browser, I am trying to make a well-behaved background job like this:
function run() {
    var system = new System();
    setInterval(function() { system.step(); }, 0);
}
It doesn't matter what that System object is or what the step function does [except that it needs to interact with the UI; in my case, it updates a canvas to run Conway's Game of Life in the background]; the activity is performed slowly and I want it to run faster. But I already specified no wait time in the setInterval, and yet, when I check the profiling tool in Chrome, it tells me the whole thing is 80% idle.
Is there a way to make it spend less time idle and perform my job more quickly on a best-effort basis? Or do I have to write my own infinite loop and then somehow yield time back to the event loop on a regular basis?
UPDATE: It was proposed to use requestIdleCallback, and doing that actually makes it worse. The activity is noticeably slower; even if the profiling data isn't very clear about it, the idle time has indeed increased.
UPDATE: It was then proposed to use requestAnimationFrame, and I find that once again the slowness and idleness are the same as with the requestIdleCallback method, and both run at about half the speed I get from the plain setInterval.
PS: I have updated all the timings to be comparable; all three now time about 10 seconds of the same code running. I suspected that the recursive re-scheduling might be the cause of the greater slowness, but I ruled that out: the recursive setTimeout call is about the same speed as the setInterval method, and both are about twice as fast as the new request*Callback methods.
I did find a viable solution for what I'm doing in practice, and I will provide my own answer later, but will wait for a moment longer.
OK, unless somebody comes up with another answer, this here would be my FINAL UPDATE: I have once again measured all 4 options and recorded the elapsed time to complete a reasonable chunk of work. The results are here:
setTimeout - 31.056 s
setInterval - 23.424 s
requestIdleCallback - 68.149 s
requestAnimationFrame - 68.177 s
This provides objective data for my impression above that the two new request* methods perform worse.
I also have my own practical solution which allows me to complete the same amount of work in 55 ms (0.055 s), i.e., more than 500 times faster, and still be relatively well behaved. I will report on that in a while, but I wonder what anybody else can figure out here.
I think this really depends on what exactly you are trying to achieve, though.
For example, you could initialize your web worker on page load and make it run the background job, then, if need be, communicate the progress or status of the job to the main thread of your browser. If you don't like using postMessage for communication between the threads, consider using Comlink.
Web worker
Comlink
However, if the background job you intend to do isn't something worth a web worker, you could use the requestIdleCallback API. I think it fits perfectly with what you mentioned here, since you can already make it recursive. You would not need a timer anymore, and the browser can help you schedule the task in such a way that it doesn't affect the rendering of your page (by keeping everything at 60 fps).
Something like this:
function run() {
    // whatever you want to keep doing
    requestIdleCallback(run);
}
You can read more about requestIdleCallback on MDN.
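One refinement worth noting: the callback receives an IdleDeadline object, so instead of doing a fixed amount of work per call, you can keep working until the browser wants the thread back. A sketch, where hasMoreWork and doSomeWork are placeholders for your job:
function run(deadline) {
    // keep working while the browser says there is idle time left
    while (deadline.timeRemaining() > 0 && hasMoreWork()) {
        doSomeWork();
    }
    if (hasMoreWork()) {
        requestIdleCallback(run);
    }
}
requestIdleCallback(run);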
OK, I really am not trying to prevent others from getting the bounty, but as you can see from the details I added to my question, none of these methods allows high-rate execution of the callback.
In principle, setInterval is the most efficient way to do it, as we do not need to re-schedule the next callback every time, but that makes only a small difference. Notably, requestIdleCallback and requestAnimationFrame are the worst when you want to be called back rapidly.
So, what needs to be done is, instead of executing only a tiny amount of work and then expecting to be called back quickly, we need to batch up more work per callback. The problem is that we don't know exactly how much work to batch before it is too much; in most cases that can probably be figured out with trial and error.
Dynamically, one might take timing probes to find out how quickly we are being called back and pre-emptively exit the work loop when the time budget between callbacks has expired, along the lines of the sketch below.
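A sketch of that idea applied to my case, batching as many system.step() calls as fit into a time budget per tick (the 8 ms budget is an arbitrary starting point to tune by trial and error):
function pump() {
    var deadline = performance.now() + 8; // per-tick time budget, tune as needed
    // batch up many steps per tick instead of one step per tick
    while (performance.now() < deadline) {
        system.step();
    }
    setTimeout(pump, 0); // yield to the event loop, then continue
}
pump();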
So I have this single-page application that does a lot of computation every time the user does an action like clicking a button. As JavaScript is not threaded, the lengthy calculation blocks UI updates and creates a bad user experience:
$('#update-btn').click(function() {
    updateDomWithAnimation();
    doLengthyCalc();
});
After reading perhaps one too many articles on how to handle this, I found that wrapping some of the function calls in window.setTimeout does help. So, armed with this knowledge, I started wrapping them up, and it seems to bring some responsiveness back to the browser.
However, my question is: are there any adverse side effects of having too many timeout statements, even if the delay is only 0 or 1? From a business-logic perspective, I am making sure only independent, standalone functions are wrapped in setTimeout. I wanted to check from a technical viewpoint. Can any JS gurus share some insight?
P.S.: I had taken a look at Web Workers, but my code is built with jQuery and depends heavily on DOM state etc., so implementing Web Workers at the moment would not be possible, which is why I am using timeouts.
Much appreciated!
While technically it's OK to have several timeouts running, it's generally advisable not to have too many.
One thing we did was to have a single timeout/setInterval that, when fired, runs a set of functions which can be added to or removed at any time.
/// Somewhere
var runnableFunctions = [];
var runningIntervalID = window.setInterval(function() {
    runnableFunctions.forEach(function(func) {
        if (typeof func === 'function') {
            func.call(null);
        }
    });
}, 1);

/// Elsewhere
$(domElem).on(event, function() {
    runnableFunctions.push(function() {
        // Do something on interval
        // splice out this function if it only needs to run once.
    });
});
This is just a dirty example, but you get the idea: you can shove functions into an array and have them run from a single timeout/interval, versus setting up many timeouts/intervals and then remembering to stop them later.
So, I'm loading a 3D model using WebGL, and have some code to perform some operations on it before displaying it.
The problem is that this takes on the order of seconds and completely blocks the user from doing anything in the meantime. It's bad enough that you can't even scroll during this period. I've heard there's no such thing as multithreading in JavaScript, but I need some way for it not to block the main thread.
I even tried a setup where I load it in an iframe, using window.postMessage and a message event listener, but it seems the frame's JavaScript runs on the same thread as well, so that didn't work. Does anyone else have a solution for dealing with CPU-intensive code so that the user isn't blocked from doing anything?
There really isn't an easy answer, at least at the moment.
WebWorkers
Web Workers run JavaScript in another thread. Unfortunately, they are extremely limited in what they are allowed to do. Generally, all you can do is pass messages back and forth. A Web Worker cannot touch the DOM, and it cannot do WebGL or use a canvas (yet). All you can really do currently is networking and passing strings and/or typed arrays to and from a Web Worker.
If the thing you are doing that takes lots of time can be passed back to the main thread as a JSON string and/or typed arrays, this might work for you.
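For example, a sketch of shipping vertex data to a worker and getting the processed buffer back (the file name and helper names are hypothetical, and the worker side is omitted):
var worker = new Worker("process-model.js"); // hypothetical worker script
var vertices = new Float32Array(rawModelData); // assume the raw data is already loaded
worker.onmessage = function(e) {
    // back on the main thread, which is the only place WebGL can run
    uploadToWebGL(new Float32Array(e.data)); // uploadToWebGL is a placeholder
};
worker.postMessage(vertices.buffer, [vertices.buffer]); // transfer the buffer instead of copying it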
A state machine
A common way to handle this in JavaScript is to make your loader do things over several states, so that you can call it something like this:
function doALittleMoreWork() {
    ...
    if (theresStillMoreWorkToDo) {
        setTimeout(doALittleMoreWork, 16);
    }
}
Or something along those lines. doALittleMoreWork does a portion of the work and then remembers enough state that, when called again, it can continue where it left off. This is how the O3D loader worked; a sketch of the shape follows.
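Here's what that shape might look like for a loader, with invented states and helpers (parseNextChunk, buildMesh, and meshCount are placeholders):
var state = 0;
var meshIndex = 0;
function doALittleMoreWork() {
    var stopAt = Date.now() + 16; // do roughly one frame's worth of work per call
    while (Date.now() < stopAt) {
        if (state === 0) {
            // state 0: parse the file a chunk at a time
            if (parseNextChunk()) { state = 1; } // returns true when parsing is done
        } else if (state === 1) {
            // state 1: build meshes one at a time
            buildMesh(meshIndex++);
            if (meshIndex >= meshCount) { state = 2; }
        } else {
            return; // state 2: all done, stop rescheduling
        }
    }
    setTimeout(doALittleMoreWork, 16);
}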
You could try to use ES6 generators.
Generators effectively let you create a state machine super easily. Browsers don't yet support ES6, but there are libraries that let you use this feature now, for example Google Traceur.
In fact if you write a simple generator like
function *foo() {
    console.log("do first thing");
    yield 1;
    console.log("do 2nd thing");
    yield 2;
    console.log("do 3rd thing");
    yield 3;
    console.log("do 4th thing");
    yield 4;
    console.log("do 5th thing");
    yield 5;
}
and run it through Traceur, you'll see how it turns it into a state machine for you, like #2 above.
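To actually drive such a generator without blocking, you can pump it from setTimeout; a minimal sketch:
function runGenerator(makeGen) {
    var it = makeGen();
    (function step() {
        var result = it.next(); // run until the next yield
        if (!result.done) {
            setTimeout(step, 0); // give the browser a breather, then resume
        }
    })();
}
runGenerator(foo); // logs the five messages without freezing the UI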
The Web Workers API is definitely what you need.
Is it possible to do things asynchronously in JavaScript (AJAX aside)? For example, to iterate over multiple arrays at the same time. How is it done? A brief example would be nice. Searching for this was hard, due to all the AJAX pollution, which is not what I am looking for.
Thanks in advance.
Use Web Workers. But remember that it is a very new feature and not all browsers support it fully.
You could use setTimeout.
setTimeout(function () { iterateArray(array1); reportDone(1); }, 0);
setTimeout(function () { iterateArray(array2); reportDone(2); }, 0);
I'm not sure how concurrent it will be, but it is an asynchronous programming model.
As stated by Grumdrig, you can write code like this:
setTimeout(function () { iterateArray(array1); reportDone(1); }, 0);
setTimeout(function () { iterateArray(array2); reportDone(2); }, 0);
But it will still not run concurrently. Here's a general idea of what happens after such timeouts are called:
Any code after the setTimeout calls will be run immediately, including returns to calling functions.
If there are other timers in queue that are at or past their delay or interval time, they will be executed one at a time.
While any timer is running, another might hit its interval/delay time, but it will not be run until the last one is finished.
Some browsers give priority to events fired from user interaction such as onclick and onmousemove, in which case the functions attached to those events will execute at the expense of timer accuracy.
This will continue until there is an opening (no previously called timers or event handlers requesting execution). Only then will the functions in the example code be run. Again one at a time, with the first one likely but not certainly executing first. Also, I'm venturing a guess that some browsers might impose a minimum delay time, which would make any timers set with a delay of 0 milliseconds be run even later than expected.
Obviously there is no performance advantage to running code like this. In every case it will make things take longer to complete. However in cases where a single task is taking so long it freezes the browser (and possibly trips "Script is taking too long" browser warnings), it can be helpful to break it up into smaller faster executing pieces that run sequentially after some delay time, thus giving the browser some time to breathe.
Web Workers have been mentioned, and if you are not concerned about IE compatibility then you can use them for true concurrency. However there are some severe limitations on their use imposed for security reasons. For one they cannot interact with the DOM in any way, meaning any changes to the page still must be done synchronously. Also all data passed to and from workers is serialized in transit, meaning true Javascript objects cannot be used. That being said, for intensive data processing, Web Workers are probably a better solution than breaking a function up into multiple timer delayed tasks.
One new development in this field is HTML5 Web Workers.
JavaScript is normally single threaded; you cannot do several things at once. If your JavaScript code is too slow, you will need to offload the work. The new way is to use web workers, as others have noted. The old way is often to use AJAX and do the work on the server instead. (Either with web workers or with AJAX, the arrays would have to be serialized and the result deserialized)
I have to agree with MooGoo; I also wonder why you would run through such a big array in one go.
There's an extension to JavaScript called StratifiedJS; it allows you to do multiple things at once as long as they're asynchronous. Web Workers, by contrast, are an awkward "solution" that just makes things more complicated, and they don't work in IE.
In StratifiedJS you could just write:
waitfor {
    // do something long lasting here...
}
and {
    // do something else at the same time...
}
// and when you get here, both are done
I'm repeatedly running into trouble with Internet Explorer's "This script is taking too long to run, would you like to continue?" messages, and I am wondering if anyone is aware of a clever way to keep the JS engine quiet.
Based on some searching, I found that the engine monitors states it thinks could potentially be looping infinitely, so I thought maybe I could add some logic to vary the execution every once in a while to fool it into leaving the script alone, but no luck. I also tried breaking up a longer loop into several shorter ones, but that hasn't helped.
Specifically, the code that is currently causing issues is the expansion of nodes in a tree structure: it loops over the current nodes and expands each one. It's a trivial thing to write in JavaScript, but I can't allow these timeout errors, so I think my only option might be to request pre-expanded view data via AJAX. I'm currently working in a DEV environment with a small(ish) data set, and I know this will not fly in other environments. Has anyone managed to suppress these warnings?
Using setTimeout
A good way is to simulate threaded execution using setTimeout() calls. This requires splitting your whole processing into smaller parts and queueing them one after another. The timeouts can be set quite close to each other, but the parts will still run one by one, each starting only when the previous one finishes execution.
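A sketch of that queueing pattern, where the chunk functions are placeholders for slices of your real work:
var chunks = [chunkOne, chunkTwo, chunkThree]; // placeholder work functions
function runNextChunk() {
    var chunk = chunks.shift();
    if (chunk) {
        chunk();                      // do one slice of the processing
        setTimeout(runNextChunk, 10); // yield so IE resets its long-running-script counter
    }
}
runNextChunk();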
How about spacing it out using a series of events? So a loop runs, fires an event, a listener for that event triggers and runs the next loop, and so on?
Why not break your function into a series of steps and queue them up using jQuery?
http://api.jquery.com/queue/
Have you tried making it output something every once in a while? It might be that it just checks for output and if there hasn't been any in x seconds, it assumes you're in an infinite loop.
If outputting works, you could try something like adding and then immediately deleting something really small (like an empty <span>).
A very common solution for this problem is to use the setTimeout function.
The way you do it is to separate the process into smaller pieces and then execute those pieces one after another using setTimeout.
I think this http://www.julienlecomte.net/blog/2007/10/28/ should help you.
There is also another option, introduced by HTML5: Web Workers.
This new standard should allow you to execute long-running tasks in a separate thread and then report any results in a callback.
You can read about it here: robertnyman.com/2010/03/25/using-html5-web-workers-to-have-background-computational-power/
Unfortunately, it is not supported by IE, according to html5demos.com.
I think the timeout is based more on the number of statements than on timing or heuristics. You could go a long way toward increasing the amount of work your code can handle before triggering the warning by optimizing simple things -- especially if you are using helper APIs from another library like jQuery. For example, change this:
$.each(arr, function(i, value) {
    // do stuff
});
to this:
for (var i = 0, l = arr.length; i < l; i++) {
    var value = arr[i];
    // do stuff
}
Another easy one -- cache access to fields. If you read "foo.bar" twice, store the result in a variable and use that, wherever it makes sense.
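For instance, a sketch of the field-caching idea (process is a placeholder):
// before: repeated property lookups on every iteration
for (var i = 0; i < foo.bar.length; i++) {
    process(foo.bar[i]);
}

// after: cache the lookups in locals
var items = foo.bar;
for (var j = 0, l = items.length; j < l; j++) {
    process(items[j]);
}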
Obviously I have no idea what your code looks like, but I bet you could do a lot to improve it, as these little things really add up when you're talking about this timeout problem.
I managed to do this by using Prototype's Function#defer method, which is essentially the same as using setTimeout. Thanks everyone!