So I have this single page application that does a lot of computation every time the user does an action like click a button. As JavaScript is not threaded, the lengthy calculation blocks UI updates and creates a bad user experience:
$('#update-btn').click(function() {
    updateDomWithAnimation();
    doLengthyCalc();
});
After reading perhaps one too many articles on how to handle this, I find that wrapping some of the function calls with window.setTimeout does help. So armed with this knowledge I have started wrapping them up and it seems to bring some responsiveness back to the browser.
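For illustration, here is roughly what that wrapping looks like, reusing the function names from the example above:

$('#update-btn').click(function() {
    updateDomWithAnimation();
    // defer the heavy work so the handler returns and the UI can repaint
    window.setTimeout(function() {
        doLengthyCalc();
    }, 0);
});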
However, my question is: are there any adverse side effects of having too many setTimeout calls, even if the delay is only 0 or 1? From a business logic perspective I am making sure only independent, standalone functions are wrapped in setTimeout; I wanted to check from a technical viewpoint. Can any JS gurus share some insight?
P.S.: I had taken a look at Web Workers, but my code is built on jQuery and depends heavily on DOM state, so implementing Web Workers at the moment would not be possible, which is why I am using timeouts.
Much appreciated!
While technically it's OK to have several timeouts running, it's generally advisable not to have too many.
One thing we did was to have a single timeout/setInterval that, when fired, runs a set of functions which can be added or removed at any time.
// Somewhere
var runnableFunctions = [];
var runningIntervalID = window.setInterval(function() {
    runnableFunctions.forEach(function(func) {
        if (typeof func === 'function') {
            func.call(null);
        }
    });
}, 1);
// Elsewhere
$(domElem).on(event, function() {
    runnableFunctions.push(function() {
        // Do something on interval
        // splice out this function if it only needs to run once.
    });
});
This is just a quick-and-dirty example, but you get the idea: you can shove functions into an array and have them all run from a single timeout/interval, versus setting up many timeouts/intervals and then having to remember to stop them later.
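For the run-once case mentioned in the comment, one possible sketch (runOnce is a hypothetical helper; the wrapper splices itself out of runnableFunctions after its first execution):

function runOnce(fn) {
    function wrapper() {
        fn();
        var idx = runnableFunctions.indexOf(wrapper);
        if (idx !== -1) {
            runnableFunctions.splice(idx, 1);
        }
    }
    runnableFunctions.push(wrapper);
}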
I have been working on writing a library of code for my future projects. One of the functions I've been working on is a pause function. So far I have no problems with errors reporting that the script is running too long, even on pauses as long as 10 seconds. The function is primarily meant to keep malicious users busy, and it works well when you set a very long time. I was wondering if there are any errors I should look out for?
Here's the code...
pause = function(a) {
    var b = new Date().getTime();
    e = false;
    function wait() {
        d = 10;
        for (i = 0; i < d; i++) {
            d++;
            var c = new Date().getTime();
            if (c - b >= a) {
                e = true;
                break;
            }
            if (d > 1000000) {
                break;
            }
        }
    }
    wait();
    if (e == false) {
        pause(a - (new Date().getTime() - b));
    }
};
You never ever want to do this sort of thing in Javascript. As you have noticed, the browser will complain about your script taking too long. Furthermore, busy-waiting consumes much more energy than necessary, which matters for mobile devices with limited battery capacity.
Instead, use the standard setTimeout() function to run code at a later time.
Javascript already has a way to do that. Use setTimeout() and pass it an anonymous function with the continuation of your operation. The second argument is the delay in milliseconds.
Example:
setTimeout(function(){ alert('hello'); }, 2000);
As I've said in my comments so far, trying to run semi-infinite loops in javascript is never a good idea.
Here are some of the issues to watch out for:
If you don't return to the browser event loop by finishing your javascript thread of execution after some period of time, the browser will stop and prompt the user to kill the javascript in that browser window.
Your code uses a bunch of undeclared variables which makes them implicit global variables. Since they have common names, they could easily interfere with other code.
Because javascript is single threaded, no other browser events for that window can run while your loop is running, leaving the browser window essentially frozen. Users will think it is hung. Patient users might wait longer to see if it completes. Impatient users will kill the browser window.
Essentially what you're trying to do is a denial-of-service attack on a user's browser. You may not like what they're doing to your web page, but attacking them back does not seem appropriate. Much better to just block access to your page in a browser-appropriate way rather than a browser-attacking way.
Suggestions:
If what you're really trying to do is just render your page unuseful in some circumstances (you still haven't explained why you're trying to do this or what condition triggers your desire to do this), then it might be better to just make the page unuseful or informative without trying to kill the browser. You could do that by:
Just hiding everything in the DOM so the window goes blank.
Remove all DOM elements so the window goes blank.
Put a transparent layer over the whole window so that no input events can get to the page elements, but the visuals stay there. You could add a message to that layer (see the sketch after this list).
Replace the contents of your page with an appropriate message.
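For the transparent-layer option, a minimal sketch (the styling and message text are placeholders):

var overlay = document.createElement('div');
overlay.style.cssText = 'position:fixed; top:0; left:0; width:100%; height:100%; ' +
    'background:rgba(255,255,255,0.7); z-index:9999; text-align:center;';
overlay.appendChild(document.createTextNode('This page is currently unavailable.'));
document.body.appendChild(overlay);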
Is it possible to do things asynchronously in javascript (AJAX aside)? For example, to iterate over multiple arrays at the same time. How is it done? A brief example would be nice. Searching for this was hard due to all the AJAX pollution, which is not what I am looking for.
Thanks in advance.
Use Web Workers. But remember that it is a very new feature and not all browsers support it fully.
You could use setTimeout.
setTimeout(function () { iterateArray(array1); reportDone(1); }, 0);
setTimeout(function () { iterateArray(array2); reportDone(2); }, 0);
I'm not sure how concurrent it will be, but it is an asynchronous programming model.
As stated by Grumdrig you can write code like this:
setTimeout(function () { iterateArray(array1); reportDone(1); }, 0);
setTimeout(function () { iterateArray(array2); reportDone(2); }, 0);
But it will still not run concurrently. Here's a general idea of what happens after such timeouts are called:
Any code after the setTimeout calls will be run immediately, including returns to calling functions.
If there are other timers in queue that are at or past their delay or interval time, they will be executed one at a time.
While any timer is running, another might hit its interval/delay time, but it will not be run until the last one is finished.
Some browsers give priority to events fired from user interaction such as onclick and onmousemove, in which case the functions attached to those events will execute at the expense of timer accuracy.
This will continue until there is an opening (no previously called timers or event handlers requesting execution). Only then will the functions in the example code be run, again one at a time, with the first one likely but not certainly executing first. Also, most browsers impose a minimum delay (historically around 10 ms; the HTML5 spec settled on 4 ms for nested timeouts), which makes timers set with a delay of 0 milliseconds run even later than expected.
Obviously there is no performance advantage to running code like this. In every case it will make things take longer to complete. However in cases where a single task is taking so long it freezes the browser (and possibly trips "Script is taking too long" browser warnings), it can be helpful to break it up into smaller faster executing pieces that run sequentially after some delay time, thus giving the browser some time to breathe.
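As a sketch of that chunking approach (iterateInChunks, chunkSize, processItem and onDone are all made-up names):

function iterateInChunks(array, chunkSize, processItem, onDone) {
    var i = 0;
    function nextChunk() {
        var end = Math.min(i + chunkSize, array.length);
        for (; i < end; i++) {
            processItem(array[i]);
        }
        if (i < array.length) {
            setTimeout(nextChunk, 0); // yield so the browser can breathe
        } else if (onDone) {
            onDone();
        }
    }
    nextChunk();
}

// e.g. iterateInChunks(array1, 500, doStuff, function() { reportDone(1); });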
Web Workers have been mentioned, and if you are not concerned about IE compatibility then you can use them for true concurrency. However, there are some severe limitations on their use, imposed for security reasons. For one, they cannot interact with the DOM in any way, meaning any changes to the page must still be made from the main script. Also, all data passed to and from workers is serialized in transit, meaning true Javascript objects cannot be used. That being said, for intensive data processing, Web Workers are probably a better solution than breaking a function up into multiple timer-delayed tasks.
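For completeness, a minimal Web Worker sketch under those constraints (the worker.js file and the #result element are assumptions; the data crosses the boundary as a serialized string):

// main page: spawn the worker; DOM updates stay here
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
    document.getElementById('result').textContent = e.data;
};
worker.postMessage(JSON.stringify({ numbers: [1, 2, 3] }));

// worker.js would contain something like:
// onmessage = function(e) {
//     var input = JSON.parse(e.data);
//     var sum = 0;
//     for (var i = 0; i < input.numbers.length; i++) sum += input.numbers[i];
//     postMessage(sum);
// };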
One new development in this field is HTML5 Web Workers.
JavaScript is normally single threaded; you cannot do several things at once. If your JavaScript code is too slow, you will need to offload the work. The new way is to use web workers, as others have noted. The old way is often to use AJAX and do the work on the server instead. (Either with web workers or with AJAX, the arrays would have to be serialized and the result deserialized)
I have to agree with MooGoo; I also wonder why you would run through such a big array in one go.
There's an extension to JavaScript called StratifiedJS that allows you to do multiple things at once as long as they're asynchronous. Also, Web Workers are an awkward "solution" that just makes things more complicated, and they don't work in IE.
In StratifiedJS you could just write:
waitfor {
// do something long lasting here...
}
and {
// do something else at the same time...
}
// and when you get here, both are done
I'm repeatedly running into Internet Explorer's "This script is taking too long to run, would you like to continue?" messages. I am wondering if anyone is aware of a clever way to keep the JS engine quiet.

Based on some searching, I found that the engine monitors states it thinks could potentially be looping infinitely, so I thought maybe I could change up the execution every once in a while to fool it into leaving the script alone, but no luck. I also tried breaking a longer loop into several shorter ones, but that hasn't helped.

Specifically, the code that is currently causing issues expands nodes in a tree structure: it loops over the current nodes and expands each one. It's a trivial thing to write in Javascript, but I can't allow these timeout errors, so I think my only option might be to request pre-expanded view data via AJAX. I'm currently working in a DEV environment with a small(ish) data set, and I know this will not fly in other environments. Has anyone managed to suppress these warnings?
Using setTimeout
A good way is to simulate threaded execution using setTimeout() calls. This requires splitting your whole processing into smaller parts and queueing them one after another. The timeouts can be set quite close to each other, but the parts will run one by one, each starting when the previous one finishes.
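A sketch of that queueing, assuming the work has already been split into hypothetical functions stepOne, stepTwo and stepThree:

var steps = [stepOne, stepTwo, stepThree];
function runNext() {
    var step = steps.shift();
    if (step) {
        step();
        setTimeout(runNext, 0); // yield to the browser between parts
    }
}
runNext();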
How about spacing it out using a series of events? One loop iteration runs and fires an event, a listener catches it and runs the next iteration, and so on.
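A sketch of that idea with a hypothetical jQuery custom event (nodes and expandNode are made-up; note that trigger() runs listeners synchronously, so a timeout is still needed to actually return control to the browser between iterations):

var i = 0;
$(document).on('next-iteration', function() {
    if (i < nodes.length) {
        expandNode(nodes[i++]);
        setTimeout(function() {
            $(document).trigger('next-iteration');
        }, 0);
    }
});
$(document).trigger('next-iteration');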
Why not break your function into a series of steps and queue them up using jQuery?
http://api.jquery.com/queue/
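Roughly like this, assuming the expansion can be split into batches ($('#tree'), expandSomeNodes and expandMoreNodes are placeholders; each step calls next() so the custom queue advances):

var $tree = $('#tree');
$tree.queue('expand', function(next) {
    expandSomeNodes();
    setTimeout(next, 0);
});
$tree.queue('expand', function(next) {
    expandMoreNodes();
    setTimeout(next, 0);
});
$tree.dequeue('expand');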
Have you tried making it output something every once in a while? It might be that it just checks for output and if there hasn't been any in x seconds, it assumes you're in an infinite loop.
If outputting works, you could try something like adding and then immediately deleting something really small (like an empty <span>).
A very common solution for this problem is to use the setTimeout function.
The way you do it is to separate the process into smaller pieces and then execute those pieces one after another using setTimeout.
I think this http://www.julienlecomte.net/blog/2007/10/28/ should help you.
There is also another option, introduced by HTML5: Web Workers.
This new standard should allow you to execute long-running tasks in a separate thread and then report any results in a callback.
You can read about it here: robertnyman.com/2010/03/25/using-html5-web-workers-to-have-background-computational-power/
Unfortunately, it is not supported by IE, according to html5demos.com/
I think the timeout is based more on the number of statements than on timing or heuristics. You could go a long way toward increasing the amount of work your code can handle before triggering the warning by optimizing simple things, especially if you are using helper APIs from another library like jQuery. For example, change this:
$.each(arr, function(i, value) {
    // do stuff
});
to this:
for (var i = 0, l = arr.length; i < l; i++) {
    var value = arr[i];
    // do stuff
}
Another easy one -- cache access to fields. If you read "foo.bar" twice, store the result in a variable and reuse it wherever that makes sense.
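For instance (rows, container and positionRow are made-up names):

// before: offsetWidth is re-read on every iteration
for (var i = 0, l = rows.length; i < l; i++) {
    positionRow(rows[i], container.offsetWidth);
}

// after: read it once and reuse the result
var width = container.offsetWidth;
for (var j = 0, m = rows.length; j < m; j++) {
    positionRow(rows[j], width);
}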
Obviously I have no idea what your code looks like, but I bet you could do a lot to improve it as these little things really add up when you're talking about this timeout problem.
I managed to do this by using Prototype's Function#defer method, which is essentially the same as using setTimeout. Thanks everyone!
I have JavaScript which performs a whole lot of calculations as well as reading/writing values from/to the DOM. The page is huge so this often ends up locking the browser for up to a minute (sometimes longer with IE) with 100% CPU usage.
Are there any resources on optimising JavaScript to prevent this from happening (all I can find is how to turn off Firefox's long running script warning)?
if you can turn your calculation algorithm into something which can be called iteratively, you could release control back to the browser at frequent intervals by using setTimeout with a short timeout value.
For example, something like this...
function doCalculation()
{
    // do your thing for a short time

    // figure out how complete you are
    var percent_complete = ....
    return percent_complete;
}

function pump()
{
    var percent_complete = doCalculation();

    // maybe update a progress meter here!

    // carry on pumping?
    if (percent_complete < 100)
    {
        setTimeout(pump, 50);
    }
}

// start the calculation
pump();
Use timeouts.
By putting the content of your loop(s) into separate functions, and calling them from setTimeout() with a timeout of 50 or so, the javascript will yield control of the thread and come back some time later, allowing the UI to get a look-in.
There's a good walkthrough here.
I had blogged about in-browser performance some time ago; let me summarize the DOM-related points for you here.
Update the DOM as infrequently as possible. Make your changes to in-memory DOM objects and append them to the DOM only once (see the sketch after this list).
Use innerHTML. It's faster than DOM methods in most browsers.
Use event delegation instead of regular event handling.
Know which calls are expensive, and avoid them. For example, in jQuery, $("div.className") will be more expensive than $("#someId").
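As a sketch of the first point, build nodes in a detached fragment and touch the live DOM once (the #list element is an assumption):

var frag = document.createDocumentFragment();
for (var i = 0; i < 100; i++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode('Item ' + i));
    frag.appendChild(li);
}
document.getElementById('list').appendChild(frag); // one append, one reflow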
Then there are some related to JavaScript itself:
Loop as little as possible. If you have one function that collects DOM nodes, and another that processes them, you are looping twice. Instead, pass an anonymous function to the function that collects the nodes, and process the nodes as you are collecting them.
Use native functionality when possible. For example, forEach iterators.
Use setTimeout to let the browser breathe once in a while.
For expensive functions that have idempotent outputs, cache the results so that you don't have to recompute them.
There's some more on my blog (link above).
This is still a little bit bleeding edge, but Firefox 3.5 has these things called Web Workers; I'm not sure about their support in other browsers though.
Mr. Resig has an article on them here: http://ejohn.org/blog/web-workers/
And the Simulated Annealing demo is probably the simplest example of it: notice that the spinning Firefox logo does not freeze up while the worker threads are doing their requests (thus not freezing the browser).
You can try performing long running calculations in threads (see JavaScript and Threads), although they aren't very portable.
You may also try using some Javascript profiler to find performance bottlenecks. Firebug supports profiling javascript.
My experience is that DOM manipulation, especially in IE, is much more of an issue for performance than "core" JavaScript (looping, etc.).
If you are building nodes, it is much faster in IE to do so by building an HTML string and then setting innerHTML on a container than by using DOM methods like createElement/appendChild.
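A sketch of the string-building approach (items and the #container element are assumptions; the array join avoids repeated string concatenation, which was itself slow in old IE):

var html = [];
for (var i = 0; i < items.length; i++) {
    html.push('<li>' + items[i] + '</li>');
}
document.getElementById('container').innerHTML = '<ul>' + html.join('') + '</ul>';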
You could try spreading the work out with setTimeout, capturing each loop variable in a closure:

$(xmlDoc).find("Object").each(function(i, elem) {
    (function(arg1_received) {
        setTimeout(function() {
            // your stuff with arg1_received goes here
        }, 0);
    })(elem);
});
or for "for" loops try
for (var i = 0 ; i < 10000 ; i = i + 1) {
(function(arg1_received) {
setTimeout(function(arg1_received_reached) {
//your stuff with the arg1_received_reached goes here
}(arg1_received), 0)
})(arg1_to_send)
}
I had the same problem, and my customers were reporting it as a "Kill page" error. But now I've got a good solution for that. :)
I have seen this link: Implementing Mutual Exclusion in JavaScript.
On the other hand, I have read that there are no threads in javascript, but what exactly does that mean?
When events occur, where in the code can they interrupt?
And if there are no threads in JS, do I need to use mutexes in JS or not?
Specifically, I am wondering about the effects of using functions called by setTimeout() and XmlHttpRequest's onreadystatechange on globally accessible variables.
Javascript is defined as a reentrant language, which means there is no threading exposed to the user, though there may be threads in the implementation. Functions like setTimeout() and asynchronous callbacks need to wait for the script engine to sleep before they're able to run.
That means that everything that happens in an event must be finished before the next event will be processed.
That being said, you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called.
For example, suppose clicking one button sends an XmlHttpRequest whose callback changes a data structure in a destructive way, and another button changes the same data structure directly. Between when the request was fired and when the callback was executed, the user could have clicked the second button and updated the data structure, and the callback could then overwrite that change.
While you could create a race condition like that, it's very easy to prevent in your code, since each function will run atomically. It would be a lot of work, and take some odd coding patterns, to actually create a race condition.
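To make that concrete, a hypothetical sketch of such a race (#load_button, #add_button and /items are made-up):

var sharedList = [];

// button 1: replaces the structure when the response arrives
$('#load_button').click(function() {
    $.ajax({
        url: '/items',
        success: function(data) {
            sharedList = data; // destructive change
        }
    });
});

// button 2: mutates the same structure directly; a click landing
// between the request and its callback will be wiped out above
$('#add_button').click(function() {
    sharedList.push('new item');
});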
The answers to this question are a bit outdated though correct at the time they were given. And still correct if looking at a client-side javascript application that does NOT use webworkers.
Articles on web-workers:
multithreading in javascript using webworkers
Mozilla on webworkers
This clearly shows that JavaScript via Web Workers has multithreading capabilities. As to the question "are mutexes needed in JavaScript?" I am unsure. But this Stack Overflow post seems relevant:
Mutual Exclusion for N Asynchronous Threads
Yes, mutexes can be required in Javascript when accessing resources that are shared between tabs/windows, like localStorage.
For example, if a user has two tabs open, simple code like the following is unsafe:
function appendToList(item) {
    var list = localStorage["myKey"];
    if (list) {
        list += "," + item;
    }
    else {
        list = item;
    }
    localStorage["myKey"] = list;
}
Between the time that the localStorage item is 'got' and 'set', another tab could have modified the value. It's generally unlikely, but possible - you'd need to judge for yourself the likelihood and risk associated with any contention in your particular circumstances.
See the following articles for more detail:
Wait, Don't Touch That: Mutual Exclusion Locks & JavaScript - Medium Engineering
JavaScript concurrency and locking the HTML5 localStorage - Benjamin Dumke-von der Eh, Stackoverflow
As #william points out,
you may need a mutex if your code does something where it expects a
value not to change between when the asynchronous event was fired and
when the callback was called.
This can be generalised further - if your code does something where it expects exclusive control of a resource until an asynchronous request resolves, you may need a mutex.
A simple example is where you have a button that fires an AJAX call to create a record in the back end. You might need a bit of code to protect yourself from trigger-happy users clicking away and thereby creating multiple records. There are a number of approaches to this problem (e.g. disable the button, enable on AJAX success). You could also use a simple lock:
var save_lock = false;
$('#save_button').click(function() {
    if (!save_lock) {
        // lock
        save_lock = true;
        $.ajax({
            success: function() {
                // unlock
                save_lock = false;
            }
        });
    }
});
I'm not sure if that's the best approach, and I would be interested to see how others handle mutual exclusion in javascript, but as far as I'm aware that's a simple mutex and it is handy.
JavaScript is single threaded... though Chrome may be a new beast (I think it is also single threaded, but each tab may have its own JavaScript thread... I haven't looked into it in detail, so don't quote me on that).
However, one thing you DO need to worry about is how your JavaScript will handle multiple AJAX requests coming back in a different order than you sent them. All you really need to do is make sure your AJAX calls are handled in a way that they won't step on each other's feet if the results come back in a different order than they were sent.
This goes for timeouts too...
When JavaScript gains multithreading, then maybe worry about mutexes and the like....
JavaScript, the language, can be as multithreaded as you want, but browser embeddings of the javascript engine only run one callback (onload, onfocus, <script>, etc...) at a time (per tab, presumably). William's suggestion of using a mutex for changes between registering and receiving a callback should not be taken too literally because of this: you wouldn't want to block in the intervening callback, since the callback that would unlock the mutex is itself blocked behind the current callback! (Wow, English sucks for talking about threading.) In this case, you probably want to do something along the lines of re-dispatching the current event if a flag is set, either literally or with the likes of setTimeout().
If you are using a different embedding of JS, one that executes multiple threads at once, things can get a bit more dicey, but because JS makes callbacks so easy and locks objects on property access, explicit locking is not nearly as necessary. However, I would be surprised if an embedding designed for general code (e.g. game scripting) that used multithreading didn't also provide some explicit locking primitives.
Sorry for the wall of text!
Events are signaled, but JavaScript execution is still single-threaded.
My understanding is that when an event is signaled, the engine stops what it is executing at the moment to run the event handler. After the handler is finished, script execution is resumed. If the event handler changed some shared variables, the resumed code will see these changes appear "out of the blue".
If you want to "protect" shared data, a simple boolean flag should be sufficient.
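A sketch of such a flag (all names hypothetical): chunked work checks between chunks whether a handler changed the shared data underneath it.

var dataChanged = false; // set by the handler, checked by the resumed work
var sharedData = [];

// e.g. an XHR or timeout handler that replaces the shared data
function onDataArrived(newData) {
    sharedData = newData;
    dataChanged = true;
}

// long-running work, chunked with setTimeout; restarts if the data
// changed between chunks
function processFrom(i) {
    if (dataChanged) {
        dataChanged = false;
        i = 0; // start over with the fresh data
    }
    // ... process sharedData[i] here ...
    if (i + 1 < sharedData.length) {
        setTimeout(function() { processFrom(i + 1); }, 0);
    }
}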