I need a method to monitor user edit sessions, and one of the solutions I'm reviewing will have me using an unload event to send an ajax request to inform the server of the end of the edit session. (See: Monitoring User Sessions to Prevent Editing Conflict)
My (rather limited) reading on the unload event indicates that the code attached to this handler has to run quickly, and as such, is usually used for clearing objects to prevent memory leaks.
My question is, can this work reliably enough for this purpose?
PS. I know about the async: false option.
This method is fairly reliable, provided your server responds quickly. There is one thing to really watch out for, though: if you close the browser and send an AJAX request on the unload event, there is a very good chance the response won't come back from the server before the window object is destroyed. In that case (at least with IE) the browser orphans your connection object and doesn't terminate it correctly until the connection timeout is hit. If your server doesn't have connection keep-alive turned on, then after you close two windows (while still having another window open) you will run out of open connections to the server (IE6-7 allow only two; for IE8 it takes six windows), and you will not be able to open your website until the connection timeout is hit.
I ran into a situation like that before, where I was opening a popup window that sent an AJAX request on unload. It was very reliable, but it was plagued by the issue described above, and it took me a really long time to track it down and understand what was going on. After that, I made sure the opening window had the same server-call code, and on every unload I checked for the opener and ran the code there if it was present.
It seems that if you close the very last browser window, IE will destroy the connection properly, but if another window is still open, it will not.
P.S. And just to comment on the answer above: AJAX is not truly fire-and-forget, at least not in the JS implementation. After you send a request, the browser is still waiting for a response from the server. It won't block your code execution, but since the server might take a while to respond (or long enough for Windows to destroy the IE window object), you might, and probably will, run into the problem described above.
Have you tried to use
var i = new Image(1,1);
i.src='http://...'
and just returning an empty image from the server? I think it should be reliable; the script will block. BTW: it's a good idea to add a timestamp to prevent caching.
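For illustration, a minimal sketch of that idea with a cache-busting timestamp (the /unload-ping path is a placeholder, not anything from the question):

window.addEventListener('unload', function () {
    var i = new Image(1, 1);
    // the timestamp makes each URL unique, so the browser cannot serve a cached copy
    i.src = '/unload-ping?_=' + new Date().getTime();
});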
We have a case where we needed that. It's a report page that needs serious memory on the server, so we wanted to free it as soon as the user left the page. We created a frameset and added the unload handler there. The most reliable way was to set the src of an image to the freeing script. We actually used both unload and onbeforeunload for cross-browser compatibility. It didn't work in the WebKit nightlies, but management was OK with that.
However, that was not my proposed solution. I would use a heartbeat approach which involves more work but is much more robust.
Your page should send out periodic heartbeat requests. Each request records the last heartbeat time for that page. You then need a job that runs on the server and clears the memory if the last heartbeat was too long ago.
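A minimal sketch of the client side of the heartbeat, assuming a hypothetical /heartbeat endpoint that records the timestamp for the page:

// send a heartbeat every 30 seconds; the endpoint and interval are assumptions
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/heartbeat?page=report', true);
    xhr.send();
}, 30000);

On the server, a periodic job would then free the memory of any page whose last recorded heartbeat is older than, say, two intervals.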
This doesn't solve the problem of the page being left open for a long time. For that, you need to monitor user activity and leave the page after a period of inactivity (make sure you confirm with the user first).
You'll have to do your own testing to see whether your particular scenario fits within the time you have in unload, but making the AJAX request itself is pretty fast, since AJAX is asynchronous: you just send the request and then you're done! (You may have to clean up the request object you just created, though.)
If you want to verify that the AJAX request actually made it, you'll have to worry more and use the async: false option (as this discussion indicates). But just sending is a quick boom-and-you're-done operation.
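A minimal sketch of that fire-and-forget call, assuming a hypothetical /end-session endpoint and a sessionId variable you maintain yourself:

window.addEventListener('unload', function () {
    var xhr = new XMLHttpRequest();
    // asynchronous: fire the request and let the page tear down without waiting
    xhr.open('GET', '/end-session?id=' + encodeURIComponent(sessionId), true);
    xhr.send();
});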
I had a case in which I only needed to notify the server about the unload and didn't care about the response.
If that's your case, you can use ignore_user_abort (PHP) on the server side, and then you know it'll happen "reliably".
From my (poor) understanding of Web Storage, the sessionStorage object is maintained per tab; it survives page reloads and page navigation, and it is destroyed on tab close or browser process termination.
Is there a way to listen to the sessionStorage destroy event?
I need to perform an HTTP call when the tab or window is being closed, and sessionStorage seems to be the only object that follows a similar lifecycle.
Is there a way to listen to the sessionStorage destroy event?
No, there is no "destroy" event for session storage.
I need to perform an HTTP call when the tab or window is being closed, and sessionStorage seems to be the only object that follows a similar lifecycle.
You can't differentiate between page reload and navigating away from the page.
The only thing I can think of to get close to what you want to do is to do this:
In beforeunload or unload, use sendBeacon to make your HTTP call (it has to be a POST). You can't just use standard ajax (XMLHttpRequest/fetch); browsers are actively disabling standard ajax in unload events. So use sendBeacon if it's there, falling back to a standard (and, ugh, synchronous) ajax request if it isn't, since its absence suggests an older browser where synchronous ajax may still work. (A sketch follows after the list below.)
On page load, check sessionStorage for a marker and:
If it's there, make an ajax call basically saying "never mind!", telling the server that if it just received an "I'm leaving the page" call, it should disregard it.
If it's not there, set the marker.
You'll need to be sure that the server handles the possibility that, because of the vagaries of network requests (particularly as beacons are always asynchronous), the two requests may be received by the server out of order. So include some serialization information in them (for instance, a value from performance.now(), falling back to Date.now() if necessary).
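A minimal sketch of the approach, assuming a hypothetical /leaving endpoint and a marker key named tab-alive:

// on page load: if the marker survived, this was a reload/navigation, so cancel
if (sessionStorage.getItem('tab-alive')) {
    navigator.sendBeacon('/leaving', JSON.stringify({ cancel: true, t: Date.now() }));
} else {
    sessionStorage.setItem('tab-alive', '1');
}

// on unload: tell the server we may be leaving for good
window.addEventListener('unload', function () {
    var payload = JSON.stringify({ cancel: false, t: Date.now() });
    if (navigator.sendBeacon) {
        navigator.sendBeacon('/leaving', payload);
    } else {
        // fallback for older browsers: synchronous XHR (ugh)
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/leaving', false);
        xhr.send(payload);
    }
});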
Or, of course, use polling when the page is open and a timeout to indicate the user has left the page. The tradeoffs between the approaches will be fun to weigh. :-)
The user window.document (interesting username!) points out that you may be able to use web sockets for this. I don't have much experience with web sockets (must fix that!), but I think the general idea is that you'll see a socket disconnect when the user leaves the page or refreshes. However (like the above), if it's a refresh, you'll see a new socket connection very soon thereafter, which is like the "never mind!" call above.
My Service Worker seems to stop automatically at some point. This behaviour unintentionally closes the WebSocket connection I establish on activate.
When and why does it stop? How can I programmatically disable this unexpected behavior and keep the Service Worker running?
What you're seeing is the expected behavior, and it's not likely to change.
Service workers intentionally have very short lifespans. They are "born" in response to a specific event (install, activate, message, fetch, push, etc.), perform their task, and then "die" shortly thereafter. The lifespan is normally long enough that multiple events might be handled (i.e. an install might be followed by an activate followed by a fetch) before the worker dies, but it will die eventually. This is why it's very important not to rely on any global state in your scripts, and to bootstrap any state information you need via IndexedDB or the Cache Storage API when your service worker starts up.
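For instance, a sketch of bootstrapping state from the Cache Storage API on each event rather than trusting a global (the 'app-state' cache and '/config' key are hypothetical names):

self.addEventListener('fetch', function (event) {
    event.respondWith(
        caches.open('app-state').then(function (cache) {
            // re-read persisted state every time; any global set by a previous
            // event may be gone after the worker was killed and restarted
            return cache.match('/config');
        }).then(function (config) {
            // ...use config to decide how to respond; here we just fall through
            return fetch(event.request);
        })
    );
});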
Service workers are effectively background processes that get installed whenever you visit certain web pages. If those background processes were allowed to run indefinitely, there's an increased risk of negative impact on battery and performance of your device/computer. To mitigate this risk, your browser will only run those processes when it knows it's necessary, i.e. in response to an event.
A use case for WebSockets is having your client listen for some data from the server. For that use case, the service worker-friendly alternative to using WebSockets is to use the Push Messaging API and have your service worker respond to push events. Note that in the current Chrome implementation, you must show a user-visible notification when handling a push event. The "silent" push use case is not supported right now.
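A minimal sketch of a push handler with the required user-visible notification (the title and body here are placeholders):

self.addEventListener('push', function (event) {
    event.waitUntil(
        // Chrome requires showing a notification when handling a push event
        self.registration.showNotification('New message', {
            body: event.data ? event.data.text() : 'You have an update.'
        })
    );
});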
If, instead of listening for data from the server, you were using WebSockets as a way of sending data from your client to your server, there's unfortunately no great service worker-friendly way of doing that. At some point in the future there may be a way of registering your service worker to be woken up via a periodic/time-based event, at which point you could use fetch() to send data to the server, but that's currently not supported in any browsers.
P.S.: Chrome (normally) won't kill a service worker while you have its DevTools interface open, but this is only to ease debugging and is not behavior you should rely on for a real application.
The Theory
Jeff's answer explains the theory part - why and how, in detail.
It also includes many good points on why you might not want to pursue this.
However, in my case the downsides are nonexistent, since my app runs on desktop machines that are reserved solely for it. But I needed to keep the SW alive even when the browser window is minimized. So if you are working on a web app that will run on a variety of devices, keeping the SW alive might not be a good idea, for the reasons discussed in the answer above.
With that said, let's move onto the actual, practical answer.
My "Practical" Solution
There should be many ways to keep the SW alive, since SWs stay alive for a short while after responding to many different events. In my case, I put a dummy file on the server, cached it in the SW, and requested that file periodically from the document.
Therefore, the steps are:
create a dummy file on the server, say ping.txt
cache the file on your SW
request that file from your html periodically to keep the SW alive
Example
// in index.html
setInterval(function () {
    fetch('/ping.txt'); // served from the SW cache; this keeps the SW alive
}, 20000);
The request will not actually hit the server, since it is cached in the SW. Nonetheless, it will keep the SW alive, since the SW has to respond to the fetch event invoked by the request.
PS: I've found 20 seconds to be a good interval to keep the SW alive, but it might be different for you; you should experiment and see.
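For completeness, a sketch of the service worker side of this trick (the 'keepalive' cache name is an assumption):

// in the service worker
self.addEventListener('install', function (event) {
    event.waitUntil(
        caches.open('keepalive').then(function (cache) {
            return cache.add('/ping.txt'); // cache the dummy file up front
        })
    );
});

self.addEventListener('fetch', function (event) {
    if (event.request.url.indexOf('/ping.txt') !== -1) {
        // answer from the cache; handling the event is what keeps the SW alive
        event.respondWith(caches.match('/ping.txt'));
    }
});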
I'm gonna develop a framework for comet programming, and I can't use WebSockets or Server-Sent Events (because browser support really sucks). So I need to keep the HTTP connection alive and send chunked data back to the client.
However, problems show themselves as you get into the work:
Using XMLHttpRequest is not possible, because IE doesn't give you xhr.responseText while xhr.readyState is 3.
A hidden iframe isn't useful, because the browser shows its loading indicator while I'm sending data back to the client.
I tried to send a JavaScript file back to the client, sending function execution commands each time, but browsers won't execute JavaScript till it's completely loaded.
However, when I look at the Lightstreamer demo page, I see that it sends a JavaScript file back to the client little by little, and at each step it sends a call to a function, and that function simply gets executed (this is the part I can't reproduce). It seems that Lightstreamer uses AJAX, since the request shows up in Firebug's console tab, but it works like a charm in IE too.
I tried using every HTTP header field they set on their request, with no result. I also tried HTTP POST instead of HTTP GET, but still got no result.
I've read over 20 articles on how to implement comet, but none of them appears to solve the problems I have:
How to make it cross-browser?
How to get notified when new data is arrived from server (what event should I hook into)?
How to make my page appear as completely loaded to the user (how to implement it, so that browser doesn't show loading activity)?
Can anyone please help? I think there should be a very little tip or trick that I don't know here to glue all the concepts together. Does anyone know what lightstreamer do to overcome these problems?
SockJS author here.
How to make it cross-browser?
This is hard; expect to spend a few months getting streaming transports working on Opera and IE.
How to get notified when new data is arrived from server (what event should I hook into)?
There are various techniques, depending on the particular browser. For a good intro, take a look at the different fallback protocols supported by Socket.IO and SockJS.
How to make my page appear as completely loaded to the user (how to implement it, so that browser doesn't show loading activity)?
Again, there are browser-specific tricks. One is to delay starting the AJAX request until after the onload event. Another is to bind and unbind an iframe from the DOM. Etc. If you're still interested, read the SockJS or Socket.IO code.
Can anyone please help? I think there should be a very little tip or trick that I don't know here to glue all the concepts together. Does anyone know what lightstreamer do to overcome these problems?
Basically, unless you have a very strong reason to, don't reinvent the wheel. Use SockJS, Socket.IO, Faye, or any of the dozens of other projects that already solve this problem.
The technique you want is streaming.
How to make it cross-browser?
Considering most browsers, there is no single consistent way. You have to choose a proper transport according to the browser. Even worse, you have to rely on browser sniffing to recognize which browser is being used; feature detection is of no help here. You can use XDomainRequest for IE 8+, XMLHttpRequest for non-IE browsers, and an iframe for IE 6+. Avoid the iframe transport if possible.
How to get notified when new data is arrived from server (what event should I hook into)?
This varies according to the transport being used. For example, XDomainRequest fires a progress event, and XMLHttpRequest fires a readystatechange event when a chunk arrives, except in Opera and IE.
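As a sketch, this is how you read progressive chunks off an XMLHttpRequest in browsers that expose responseText at readyState 3 (the /stream URL and handleChunk callback are placeholders):

var xhr = new XMLHttpRequest();
var seen = 0; // how much of responseText has already been consumed
xhr.onreadystatechange = function () {
    // readyState 3 = LOADING: partial responseText is available (not in old IE/Opera)
    if (xhr.readyState === 3 || xhr.readyState === 4) {
        var chunk = xhr.responseText.substring(seen);
        seen = xhr.responseText.length;
        if (chunk) handleChunk(chunk);
    }
};
xhr.open('GET', '/stream', true);
xhr.send();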
How to make my page appear as completely loaded to the user (how to implement it, so that browser doesn't show loading activity)?
I'm not sure whether this issue occurs with the iframe transport, but it still occurs in WebKit-based browsers such as Chrome and Safari with XMLHttpRequest. The only way to avoid it is to connect after the window's onload event, but in the case of Safari even that does not work.
There are some issues you have to consider besides the above questions.
Event-driven server - The server should be able to process requests asynchronously.
Transport requirements - The server has to behave differently depending on the transport in use.
Stream format - If the server sends a big message, or multiple messages, in a single chunk, then a single chunk does not correspond to a single message: it could be a fragment of one message or a concatenation of several. To let the client recognize message boundaries, the response has to be formatted (see the sketch after this list).
Error handling - The iframe transport does not provide any evidence of disconnection.
...
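To illustrate the stream-format point, a minimal sketch that frames messages with newlines (the framing choice and the dispatchMessage callback are just one option):

var buffer = '';
function handleChunk(chunk) {
    buffer += chunk;
    var lines = buffer.split('\n');
    buffer = lines.pop(); // keep any trailing partial message for the next chunk
    lines.forEach(function (line) {
        if (line) dispatchMessage(JSON.parse(line));
    });
}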
Last but not least, implementing streaming is much more tiresome than it looks, unlike long polling. I recommend you use a solid framework for it, such as socket.io, sockjs, or jquery socket, which I've created and maintain.
Good luck.
but browsers won't execute JavaScript till it's completely loaded.
Have you tried sending back code wrapped in <script> tags? For example, instead of:
<script type="text/javascript">
f(...data1...);
f(...data2...);
try sending each call as its own complete script element:
<script type="text/javascript">f(...data1...);</script>
<script type="text/javascript">f(...data2...);</script>
The best option in your case would be to use JSONP plus long polling on the server side. You just have to remember to reconnect whenever the connection drops (times out) or you receive a response.
Example code in jQuery:
function myJSONP() {
    // jQuery 1.5+: $.getScript returns a promise; use .done/.fail for callbacks
    // (the original {success: ..., failure: ...} options object is not a valid
    // $.getScript signature)
    $.getScript(url)
        .done(function () {
            myJSONP(); // re-connect after a successful response
        })
        .fail(function () {
            myJSONP(); // re-connect after a timeout or error
        });
}
Obviously, the response from your server has to be JavaScript code that calls one of your global functions.
Alternatively you can use some jquery JSONP plugin.
Or take a look at this project: http://www.meteor.com/ (really cool, but I didn't try it).
I've recently taken over a project that uses COMET to perform some collaborative work and handle a simple chat room. The guys who originally wrote this thing made up some classes on top of STOMP and Oribited to handle all the actual chatting and messaging and logging.
The problem is that if a user closes the window, navigates to a different page, or terminates the connection for whatever other reason, it takes a while for the other users to see that he has logged off. The other users have to wait until the timestamp of the exited user's last ping exceeds a certain age before the system registers that the user is no longer connected.
The solution I can think of is to send out a notification in the onunload event that the user has left, which would notify all the other users without waiting for a timeout. The problem is that onunload will terminate the connection before the request completes. From what I understand, this is a problem with plain AJAX as well.
Now, I have also read that a synchronous request in unload will delay the window close or navigation until the request has finished.
So my question is this: does anyone know of a way to temporarily make the comet request synchronous in selected instances, so it has time to finish before the connection is terminated? Or is there another way to solve this problem that I'm not thinking of? Thanks for your help.
Oh, also, onbeforeunload alone won't work, because if it sends the request and the user then selects "No, I want to stay on this page", it will already have notified the other users that he has exited the chat.
tl;dr: I need a way to successfully fire a COMET request in the unload event. We're using STOMP and Orbited for the COMET stuff.
The onbeforeunload function produces a yes/no dialog only if some value is returned from it. So what you have to do is make a SYNCHRONOUS XMLHttpRequest (AJAX) request inside the onbeforeunload function without returning anything. You have to set the asynchronous flag of the request to false, as in the AJAX GET request shown below:
AJAXObject.open("GET", 'http://yourdomain/logout?somevar=something', false);
AJAXObject.send(null);
It will prevent the browser from closing until the request completes. As I recall, Opera doesn't support onbeforeunload, so it won't work in Opera, but it works fine in IE, Firefox, and Chrome.
If you are using comet, then you should control the server. The idea with comet is that it is not constant polling of the server. Every client should have a constant open connection to the server. As such, when the connection closes, the server should be able to send out a notification to the other clients.
I use jQuery for AJAX. My question is simple - why cache AJAX? At work and in every tutorial I read, they always say to set caching to false. What happens if you don't, will the server "store" such requests and get "clogged up"? I can find no good answer anywhere - just links telling you how to set caching to false!
It's not that the server stores requests (though it may do some caching, especially on higher-volume sites, as SO does for anonymous users).
The issue is that the browser will store the response it gets if instructed to (or, in IE's case, even when it's not instructed to). Basically, you set cache: false if you don't want the user's browser to show stale data it fetched, say, five minutes ago.
If it helps, look at what cache: false actually does: it appends something like _=190237921749817243 as a query-string pair. (The value is the current time in milliseconds, so it's always... current.) This forces the browser to make the request to the server again: it doesn't know what that query string means, so the URL may be a different page, and since it can't be sure, it has to fetch again. An illustration follows.
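For illustration, the two equivalent forms side by side (the /api/data URL is a placeholder):

// jQuery's built-in cache busting
$.ajax({ url: '/api/data', cache: false }); // requests /api/data?_=<timestamp>

// roughly the same thing done by hand
$.ajax({ url: '/api/data?_=' + new Date().getTime() });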
The server won't cache the requests, the browser will. Remember that browsers are built to display pages quickly, so they have a cache that maps URLs to the results last returned by those URLs. Ajax requests are URLs returning results, so they could also be cached.
But usually, Ajax requests are meant to do something; you never want to skip them, even if they look like the same URL as a previous request.
If the browser cached Ajax requests, you'd have stale responses, and server actions being skipped.
If you don't turn caching off, you'll have issues trying to figure out why your AJAX request works but your functions aren't responding the way you'd like them to. Forcing revalidation at the HTTP header level is probably the cleanest way to keep the data being AJAX'd in uncached.
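As a sketch, forcing revalidation at the header level might look like this on the server (shown with a hypothetical Node/Express handler; the exact mechanism depends on your stack):

app.get('/api/data', function (req, res) {
    // tell the browser it must revalidate before reusing a cached copy
    res.set('Cache-Control', 'no-cache, must-revalidate');
    res.json({ value: Date.now() });
});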
Here's a hypothetical scenario. Say you want the user to be able to click any word on your page and see a tooltip with the definition for that word. The definition is not going to change, so it's fine to cache it.
The main problem with caching requests in any kind of dynamic environment is that you'll get stale data back some of the time. And it can be unpredictable when you'll get a 'fresh' pull vs. a cached pull.
If you're pulling static content via AJAX, you could maybe leave caching on, but how sure are you that you'll never want to change that fetched content?
The problem is, as always, Internet Explorer. IE will usually cache the whole request. So, if you are repeatedly firing the same AJAX request then IE will only do it once and always show the first result (even though subsequent requests could return different results).
The browser caches the information, not the server. The point of using Ajax is usually that you're going to be getting information that changes. If there's a part of a website you know isn't going to change, you don't fetch it more than once (in which case caching is OK); that's the beauty of Ajax. Since you should only be dealing with information that may be changing, you want to get the new information, and therefore you don't want the browser to cache.
For example, Gmail uses Ajax. If caching were simply left on, you wouldn't see your new e-mail for quite a while, which would be bad.