Force synchronous function execution with callbacks - JavaScript

I have one opportunity to manipulate the DOM of my page before it's styled, rendered and shown to the user.
Obviously, I'd love to do all the dynamic fun stuff within this window, as DOM manipulations are very expensive after the page has rendered. Since this is primarily targeted at mobile devices, this optimization is valuable to me.
Here's a basic overview / timeline:
function CalledBeforePageRendered() {
    DoAsyncDataBaseWorkWithCallBack(AsyncDataBaseCallBack);
    DoLocalizationsAndOtherSYNCRONOUSActions();
}
function AsyncDataBaseCallBack(results) {
    // code that processes the results element
    // code that manipulates the DOM *before* it's styled.
}
The problem is that DoAsyncDataBaseWorkWithCallBack and DoLocalizationsAndOtherSYNCRONOUSActions
finish quickly and then CalledBeforePageRendered returns and subsequent styling is applied.
After the styling is applied, the page is shown to the user... and then AsyncDataBaseCallBack gets called
which then applies div tags and other DOM modifications. Only, I needed these modifications to take place before stylization
Is there any way I can make 'CalledBeforePageRendered' wait for 'AsyncDataBaseCallBack' to finish before returning? I know that a closure would usually work here, but I do not know how to make a closure work with a callback that is defined outside of the CalledBeforePageRendered function.

If you are trying to perform a synchronous JavaScript XmlHttpRequest (XHR) call, that's very possible. But assuming you're trying to manipulate the DOM on the client side before the page is rendered - why not do that on the server side, since it's generating the HTML in the first place? If you can't, I highly recommend you don't perform a synchronous XHR to update the page before it's rendered. Doing so locks up the browser window while the XHR is running, which not only adds significant latency to page loading, it also frustrates end users (since they experience what appears to be a hard 'lock'). Manipulating the DOM before it's rendered isn't that costly. Still, it's better to return the HTML you want in the first place, instead of loading more after the page has loaded.
Again - I'd just like to emphasize - try to do this work on the server side (not the client). If you can't and need to perform this as a secondary call, add a loading image and let XHRs run asynchronously, as most JavaScript libraries pretty much enforce anyway. If I am misunderstanding your goals, please let me know.
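A minimal sketch of the asynchronous pattern recommended above. The element ids ("loading", "content") and the URL "/page-data" are hypothetical placeholders, and the helper name onData is mine:

```javascript
// Patch the DOM once the data arrives, then hide the loading indicator.
// `doc` is expected to behave like the browser's `document`.
function onData(doc, html) {
  doc.getElementById("content").innerHTML = html;
  doc.getElementById("loading").style.display = "none";
  return html;
}

// Browser-only part: fire the request asynchronously so the window never locks
if (typeof XMLHttpRequest !== "undefined" && typeof document !== "undefined") {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/page-data", true);   // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onData(document, xhr.responseText);
    }
  };
  xhr.send();
}
```

The user sees the loading image while the request is in flight, instead of a frozen browser.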

Related

Determining time to screen for a web app

A web application has certain timeliness constraints. How can I check the time from invocation of a JS function to having the information visible in the browser?
Clearly I can start a stopwatch, but on what event should I stop it?
Modern browsers offer the Navigation Timing API, which you can use to get this kind of information. Which information from it you use is up to you, probably domComplete or loadEventStart (or, of course, loadEventEnd if you want to know when everything is fully loaded, but you could do that with window.onload). This tutorial may be useful.
If you're talking about requesting something via ajax after page load (you've said "...from invocation of a JS function to having the information visible in the browser..."), adding that to the page, and seeing how long that took, you'd stop the timer after you were done appending the elements to the DOM, immediately before returning from your ajax onreadystatechange handler callback. Or, if you want to be really sure the information has been rendered, end the timer in a setTimeout(function() { /*...end the timer...*/ }, 0); called from that callback instead, which yields back to the browser for the minimum possible time, giving it a chance to render (browsers don't render while JS is running).
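A sketch of the Navigation Timing approach described above. The helper name timingSummary is mine; it just subtracts navigationStart so the figures read as "ms since navigation began":

```javascript
// Compute page-load milestones relative to navigation start.
// `t` is expected to look like window.performance.timing.
function timingSummary(t) {
  return {
    domComplete: t.domComplete - t.navigationStart,
    loadEventStart: t.loadEventStart - t.navigationStart,
    loadEventEnd: t.loadEventEnd - t.navigationStart
  };
}

// Browser-only part: log the summary once everything is fully loaded
if (typeof window !== "undefined" && window.performance && window.performance.timing) {
  window.addEventListener("load", function () {
    // yield once so loadEventEnd has actually been recorded
    setTimeout(function () {
      console.log(timingSummary(window.performance.timing));
    }, 0);
  });
}
```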

What methods are blocking in Javascript?

I'm trying to override the standard confirm() method in JavaScript (make a nice UI and stuff). I've read 100 posts saying it "can't be done", but I don't want to give up until I have given it a fair shot. :)
So, the real problem is of course that the confirm() method must block all JavaScript execution until the user selects an option. So what are the methods in JavaScript that have blocking behavior? I've been able to come up with five:
alert() - does not suit me, because it displays an unwanted UI of its own;
confirm() - the same problem as alert();
infinite loop - even modern browsers will eat away CPU like crazy and display a "stop javascript?" prompt after a few seconds;
XmlHttpRequest in synchronous mode - sort of, but it involves server...
showModalDialog() - nice, but I need a specific design, plus there are some browser compatibility requirements...
The best approach I have so far is to create an <iframe> with the prompt (which then gets its own JavaScript execution thread) and block with XmlHttpRequest until the user has selected an option in the <iframe>. Unfortunately this involves passing the result back and forth between the server, and I'd like to make this 100% client-side. Also, it ties up a server thread while the dialog is open, and there might be some browser-specific ajax timeouts that apply.
Can anyone think of any other Javascript methods that block execution which might be (ab)used to achieve the desired effect?
No, it can't be done for a good reason. The arbitrary, custom-styled user interaction in the page is always asynchronous (event-based), and therefore does not work with any type of blocking behaviour (the event that would stop the infinite loop would only occur after the infinite loop has finished).
All those blocking methods that you mentioned do their user interaction in a different environment than the page - the alert/confirm/prompt popups controlled by the browser, the different page loaded by showModalDialog - and that environment needs to be able to gain focus while the first is frozen.
Creating a setup like this reliably is difficult enough. That said, you could try almost any JavaScript functionality (that does not involve async callbacks), as all JS operations are synchronous by default. If you want to experiment further, I would suggest looking at the methods that deal with different DOM environments (window.open, document.write, iframe.contentWindow cross-frame access) to see whether you can get any of them to spawn a second environment that reliably runs in parallel.

In PerformanceTiming, which part of the process model do AJAX request times contribute towards?

I'm using the PerformanceTiming interface to measure page load time.
Several of my pages have a long "Browser Time" (i.e. loadEventEnd - responseEnd), and I think this could be because of the Ajax requests from the pages.
My question is: where do Ajax requests fit in the PerformanceTiming process model? Is it in the "Processing" block?
If so, what's the best way to measure the Ajax execution time?
We're currently fighting this issue.
For a couple of years, we've had ajax requests that are fired from jQuery's "document ready" handler. Technically speaking, this shouldn't have extended loadEventEnd. However, with absolute certainty, they did push out the firing of loadEventEnd.
Now, we're digging to see what happened with our last release that moved those values outside of loadEventEnd.
To really measure those ajax requests, you'll want to use IE10 or - preferably - a Chrome version greater than 28. Both include the Resource Timing interface.
You can access metrics similar to the Navigation Timing interface above for each resource loaded on a page.
From the JavaScript console in Chrome (Ctrl+Shift+J on Windows), enter:
window.performance.getEntries()
this will return all of the resource entries associated with your page. The ajax requests are those whose initiatorType is "xmlhttprequest". You can find all of those entries with the following:
var entries = window.performance.getEntries();
for (var i = 0; i < entries.length; i++) {
    if (entries[i].initiatorType === "xmlhttprequest") {
        console.log(entries[i]);
    }
}
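To answer the "how do I measure the ajax execution time" part, a small helper can turn those entries into durations. This is a sketch; the helper name xhrDurations is mine, and the arithmetic (responseEnd - startTime) follows the Resource Timing fields:

```javascript
// Given a list of ResourceTiming entries (from performance.getEntries()),
// return the name and elapsed time of each XHR-initiated request.
function xhrDurations(entries) {
  return entries
    .filter(function (e) { return e.initiatorType === "xmlhttprequest"; })
    .map(function (e) {
      return { name: e.name, duration: e.responseEnd - e.startTime };
    });
}

// Browser-only part: report all ajax timings on the current page
if (typeof window !== "undefined" && window.performance && window.performance.getEntries) {
  console.log(xhrDurations(window.performance.getEntries()));
}
```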
The Ajax requests fall after the end of the process model.
See https://stackoverflow.com/a/16289733/1168884 for an example where an Ajax request runs and does not affect the properties on the performance object (which only reflects the loading of the page).
I guess the underlying issue is that, whereas the page-loading events are quite well defined (for example, the DOM is now complete and available) and reflected in the model, the Ajax events are not (for example, there isn't really an event at which you can say "all Ajax on the page has now completed running").
I haven't tried it, but there is a project, Boomerang, which promises to allow measuring dynamically loaded content - http://lognormal.github.io/boomerang/doc/use-cases.html

How can I have a JS script update a page for everyone viewing it?

I'm creating a web application that allows users to make changes through Javascript. There is not yet any AJAX involved, so those changes to the DOM are being made purely in the user's local browser.
But how can I make those DOM changes occur in the browser of anyone else who is viewing that page at the time? I assume AJAX would be involved here. Perhaps the page could just send the entire, JS-modified source code back to the server and then the other people viewing would receive very frequent AJAX updates?
Screen sharing would obviously be an easy work-around, but I'm interested to know if there's a better way, such as described above.
Thanks!
You are talking about Comet; for an easy implementation I'd suggest:
http://www.ape-project.org/
and also check these:
http://meteorserver.org/
http://activemq.apache.org/ajax.html
http://cometdaily.com/maturity.html
and the new HTML5 way of doing it:
http://dev.w3.org/html5/websockets/
Hope these help.
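A hedged sketch of the HTML5 WebSocket approach the last link describes. The server URL and the message format are hypothetical: here the server is assumed to broadcast JSON patches like {"id": "title", "html": "..."} to every connected client:

```javascript
// Apply one broadcast patch to the page. `doc` is expected to behave
// like the browser's `document`; returns true if the element was found.
function applyPatch(doc, patch) {
  var el = doc.getElementById(patch.id);
  if (el) {
    el.innerHTML = patch.html;
  }
  return !!el;
}

// Browser-only part: every viewer keeps a socket open and applies
// whatever patches the server pushes.
if (typeof window !== "undefined" && "WebSocket" in window) {
  var socket = new WebSocket("ws://example.com/updates"); // hypothetical server
  socket.onmessage = function (event) {
    applyPatch(document, JSON.parse(event.data));
  };
}
```

Unlike polling, nothing is requested on a timer: the server pushes a patch once, and every open socket receives it.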
Max,
Ajax will have to be involved. If I may, I'd like to suggest jQuery as a starting point for this (I know you didn't tag the question as such, but I feel it'd be appropriate, even if only to prototype with). The basic semantics would involve running the ajax request in combination with a setInterval() timer to fire off the request. This could be done in jQuery along the lines of:
$(document).ready(function() {
    // run the initial request
    GetFreshInfo();
    // set the query to run every 15 seconds (interval is in milliseconds)
    setInterval(GetFreshInfo, 15000);
});

function GetFreshInfo(){
    // do the ajax get call here (could be a .net or php page etc)
    $.get('mypageinfostuff.php', null, function(data){
        $('#myDivToUpdate').html(data);
    });
}
that's the basic premise... i.e. the webpage is loaded via GetFreshInfo() straight away, then it's re-queried every 15 seconds. You can add logic to only refresh the div if there is new data, rather than always updating the page. As it's ajax, the page won't freeze and the process will be almost invisible to the user (unless you want to flag the changes in some way).
Hope this helps
jim

Are Mutexes needed in javascript?

I have seen this link: Implementing Mutual Exclusion in JavaScript.
On the other hand, I have read that there are no threads in javascript, but what exactly does that mean?
When events occur, where in the code can they interrupt?
And if there are no threads in JS, do I need to use mutexes in JS or not?
Specifically, I am wondering about the effects of using functions called by setTimeout() and XmlHttpRequest's onreadystatechange on globally accessible variables.
JavaScript does not expose threading to the user (there may be threads in the implementation). Functions like setTimeout() and asynchronous callbacks have to wait for the script engine to finish the current run before they can execute.
That means that everything that happens in an event must be finished before the next event will be processed.
That being said, you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called.
For example, suppose clicking one button sends an XmlHttpRequest whose callback changes a data structure in a destructive way, and another button changes the same data structure directly. Between when the request was fired and when the callback executed, the user could have clicked the second button and updated the data structure, and the callback could then lose that update.
While you could create a race condition like that, it's very easy to prevent in your code, since each function runs atomically. In fact, it would take a lot of work and some odd coding patterns to create such a race condition.
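The race described above can be made concrete with a few lines. All names here are hypothetical, and the "timeline" is simulated by calling the handlers in the order the events would fire:

```javascript
// Shared state touched by both a user action and an XHR callback
var data = { items: [] };

// User clicks "add" locally
function onAddButtonClick(item) {
  data.items.push(item);
}

// XHR callback replaces the whole structure with the server's copy
// (this is the "destructive" update from the answer above)
function onServerResponse(items) {
  data.items = items;
}

// Timeline: request fired, then the user clicks, then the response lands.
onAddButtonClick("local edit");
onServerResponse(["server copy"]);
// data.items is now ["server copy"] - the local edit was silently lost.
```

A non-destructive callback (one that merges instead of replacing) avoids the problem without any locking.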
The answers to this question are a bit outdated, though correct at the time they were given - and still correct if you're looking at a client-side JavaScript application that does NOT use web workers.
Articles on web-workers:
multithreading in javascript using webworkers
Mozilla on webworkers
This clearly shows that JavaScript gains multithreading capabilities via web workers. As to the question "are mutexes needed in JavaScript?" - I am unsure. But this stackoverflow post seems relevant:
Mutual Exclusion for N Asynchronous Threads
Yes, mutexes can be required in Javascript when accessing resources that are shared between tabs/windows, like localStorage.
For example, if a user has two tabs open, simple code like the following is unsafe:
function appendToList(item) {
    var list = localStorage["myKey"];
    if (list) {
        list += "," + item;
    }
    else {
        list = item;
    }
    localStorage["myKey"] = list;
}
Between the time that the localStorage item is 'got' and 'set', another tab could have modified the value. It's generally unlikely, but possible - you'd need to judge for yourself the likelihood and risk associated with any contention in your particular circumstances.
See the following articles for more detail:
Wait, Don't Touch That: Mutual Exclusion Locks & JavaScript - Medium Engineering
JavaScript concurrency and locking the HTML5 localStorage - Benjamin Dumke-von der Eh, Stackoverflow
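A rough lock sketch in the spirit of those articles. The function name and lock key are hypothetical, and `storage` is expected to behave like window.localStorage. Note that the check-then-set below is itself not atomic across tabs - the linked articles describe timestamp-based schemes that close that remaining window:

```javascript
// Run `fn` only if the lock is free; return false if another tab holds it.
function withLock(storage, lockKey, fn) {
  if (storage[lockKey]) {
    return false;                        // another tab holds the lock
  }
  storage[lockKey] = String(Date.now()); // take the lock
  try {
    fn();                                // do the guarded read-modify-write
  } finally {
    delete storage[lockKey];             // always release, even on error
  }
  return true;
}
```

In a browser you'd wrap the appendToList body above, e.g. withLock(localStorage, "myKey_lock", function () { appendToList("x"); }), and retry later if it returns false.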
As #william points out,
you may need a mutex if your code does something where it expects a
value not to change between when the asynchronous event was fired and
when the callback was called.
This can be generalised further - if your code does something where it expects exclusive control of a resource until an asynchronous request resolves, you may need a mutex.
A simple example is where you have a button that fires an ajax call to create a record in the back end. You might need a bit of code to protect you from trigger-happy users clicking away and thereby creating multiple records. There are a number of approaches to this problem (e.g. disable the button, re-enable on ajax success). You could also use a simple lock:
var save_lock = false;
$('#save_button').click(function(){
    if (!save_lock) {
        // lock
        save_lock = true;
        $.ajax({
            // ...ajax options for the save request go here...
            success: function(){
                // unlock
                save_lock = false;
            }
        });
    }
});
I'm not sure if that's the best approach, and I would be interested to see how others handle mutual exclusion in JavaScript, but as far as I'm aware that's a simple mutex and it is handy.
JavaScript is single threaded... though Chrome may be a new beast (I think it is also single threaded, but each tab has its own JavaScript thread... I haven't looked into it in detail, so don't quote me there).
However, one thing you DO need to worry about is how your JavaScript will handle multiple ajax responses coming back in a different order than the requests were sent. So all you really need to ensure is that your ajax callbacks won't step on each other's feet if the results arrive out of order.
This goes for timeouts too...
When JavaScript grows multithreading, then maybe worry about mutexes and the like....
JavaScript, the language, can be as multithreaded as you want, but browser embeddings of the JavaScript engine only run one callback (onload, onfocus, <script>, etc.) at a time (per tab, presumably). William's suggestion of using a mutex for changes between registering and receiving a callback should not be taken too literally because of this: you wouldn't want to block in the intervening callback, since the callback that would unlock the mutex is itself blocked behind the current callback! (Wow, English sucks for talking about threading.) In this case, you probably want to redispatch the current event if a flag is set, either literally or with the likes of setTimeout().
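The redispatch idea above can be sketched as follows. The function names are mine; the point is that instead of blocking while another callback "holds" the resource, you reschedule the work with setTimeout and return immediately:

```javascript
// Flag standing in for "some other callback currently owns the resource"
var busy = false;

// Returns true if the work ran now, false if it was deferred.
function handleEvent(doWork) {
  if (busy) {
    // can't block here - requeue ourselves and yield back to the browser
    setTimeout(function () { handleEvent(doWork); }, 10);
    return false;
  }
  busy = true;
  try {
    doWork();
  } finally {
    busy = false;
  }
  return true;
}
```

The deferred call retries until the flag clears, without ever freezing the event loop.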
If you are using a different embedding of JS, one that executes multiple threads at once, it can get a bit more dicey. But thanks to how easily JS uses callbacks, and since objects are locked on property access, explicit locking is not nearly as necessary. That said, I would be surprised if an embedding designed for general code (e.g. game scripting) that used multithreading didn't also provide some explicit locking primitives.
Sorry for the wall of text!
Events are signaled, but JavaScript execution is still single-threaded.
My understanding is that when an event is signaled, the engine stops what it is executing at the moment to run the event handler. After the handler finishes, script execution resumes. If the event handler changed some shared variables, the resumed code will see those changes appear "out of the blue".
If you want to "protect" shared data, a simple boolean flag should be sufficient.
