What is the best method to detect offline mode in the browser? - javascript

I have a web application with a number of Ajax components which refresh themselves every so often inside a page (it's a dashboard of sorts).
Now, I want to add functionality so that when there is no Internet connectivity, the current content of the page doesn't change and a message appears saying that the page is offline. (Currently, as these various gadgets try to refresh themselves and find that there is no connectivity, their old data vanishes.)
So, what is the best way to go about this?

navigator.onLine
That should do what you're asking.
You probably want to check that in whatever code you have that updates the page. E.g.:
if (navigator.onLine) {
    updatePage();
} else {
    displayOfflineWarning();
}

It seems like you've answered your own question. If the gadgets send an async request and it times out, don't update them. If enough of them time out, display the "page is offline" message.
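For illustration, a minimal sketch of that idea using fetch with a timeout (the helper names refreshGadget and showOfflineNotice are hypothetical, not from the original post):

function fetchWithTimeout(url, ms) {
    // Abort the request if it takes longer than ms milliseconds.
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), ms);
    return fetch(url, { signal: controller.signal })
        .finally(() => clearTimeout(timer));
}

fetchWithTimeout('/gadget-data', 10000)
    .then(response => response.json())
    .then(data => refreshGadget(data)) // got fresh data: update normally
    .catch(() => showOfflineNotice()); // timed out or offline: keep the old data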

See the HTML 5 draft specification. You want navigator.onLine. Not all browsers support it yet. Firefox 3 and Opera 9.5 do.
It sounds as though you are trying to cover up the problem rather than solve it. If a failed request causes your widgets to clear their data, then you should fix your code so that it doesn't attempt to update your widgets unless it receives a response, rather than attempting to figure out whether the request will succeed ahead of time.

One way to handle this might be to extend the XMLHttpRequest object with an explicit timeout, then use that to determine whether you're working in offline mode (that is, for browsers that don't support navigator.onLine). Here's how I implemented Ajax timeouts on one site (a site that uses the Prototype library). After 10 seconds (10,000 milliseconds), it aborts the call and calls the onFailure method.
/**
 * Monitor AJAX requests for timeouts
 * Based on the script here: http://codejanitor.com/wp/2006/03/23/ajax-timeouts-with-prototype/
 *
 * Usage: If an AJAX call takes more than the designated amount of time to return, we call the onFailure
 * method (if it exists), passing an error code to the function.
 */
var xhr = {
    errorCode: 'timeout',
    callInProgress: function (xmlhttp) {
        switch (xmlhttp.readyState) {
            case 1: case 2: case 3:
                return true;
            // readyState 4 (done) and 0 (unsent) mean no call is in progress
            default:
                return false;
        }
    }
};
// Register global responders that will run on all AJAX requests
Ajax.Responders.register({
    onCreate: function (request) {
        request.timeoutId = window.setTimeout(function () {
            // If we have hit the timeout and the AJAX request is still active, abort it and let the user know
            if (xhr.callInProgress(request.transport)) {
                var parameters = request.options.parameters;
                request.transport.abort();
                // Run the onFailure method if we set one up when creating the AJAX object
                if (request.options.onFailure) {
                    request.options.onFailure(request.transport, xhr.errorCode, parameters);
                }
            }
        }, 10000); // 10 seconds
    },
    onComplete: function (request) {
        // The request completed fine; clear the timeout
        window.clearTimeout(request.timeoutId);
    }
});
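With those responders registered, a request that honors the timeout might look like this (a sketch; the URL and the updateWidget/displayOfflineWarning helpers are placeholders):

new Ajax.Request('/dashboard/widget-data', {
    method: 'get',
    onSuccess: function (transport) {
        updateWidget(transport.responseText); // refresh the gadget with new data
    },
    onFailure: function (transport, errorCode) {
        // Called by the responders above after 10 seconds with errorCode 'timeout'
        if (errorCode === 'timeout') {
            displayOfflineWarning();
        }
    }
});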

Hmm, actually, now I look into it a bit, it's more complicated than that. Have a read of these links on John Resig's blog and the Mozilla site. The above poster may also have a good point: you're making requests anyway, so you should be able to work out when they fail. That might be a much more reliable way to go.

Make a call to a reliable destination, or perhaps a series of calls, ones that should go through and return if the user has an active net connection (even something as simple as a token ping to Google, Yahoo, and MSN). If at least one comes back green, you know you're connected.
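A rough sketch of that idea (the '/ping' endpoint is a placeholder; in practice you would probe a small resource on your own server to avoid cross-origin issues):

// Probe a known-good endpoint to decide whether we are online.
function checkConnectivity() {
    // The cache-busting query param and no-store ensure a real network hit.
    return fetch('/ping?v=' + Date.now(), { cache: 'no-store' })
        .then(response => response.ok)
        .catch(() => false); // network error: treat as offline
}

checkConnectivity().then(online => {
    if (!online) displayOfflineWarning(); // hypothetical helper
});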

I think Google Gears has such functionality; maybe you could check how they did that.

Use the relevant HTML5 API: online/offline status/events.
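For instance, a minimal sketch wiring up those events (the handler names are placeholders):

// React to the browser's connectivity events.
window.addEventListener('online', function () {
    hideOfflineWarning(); // hypothetical helper
});
window.addEventListener('offline', function () {
    displayOfflineWarning(); // hypothetical helper
});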

One possible solution, if the live page and the cached page have different URLs, is to just look at which URL you are on: if you are on the cached page's URL, then you are in offline mode. This blog makes a good point about why navigator.onLine is broken.

Related

ClearInterval js stop current execution

I have a Vue 2.0 app in which I use this line in order to call this.refreshState() every minute:
this.scheduler = setInterval(() => this.refreshState(), 60 * 1000)
Later in the code I need to make sure that the execution loop is stopped, and also that if there's an instance of this.refreshState() currently running (from the setInterval scheduler), it's stopped as well (even if it's in the middle of doing stuff).
So far I'm using :
clearInterval(this.scheduler)
as per (https://developer.mozilla.org/en-US/docs/Web/API/clearInterval)
The question I have is: does clearInterval block the current execution, if any? I can't find the answer in the docs, unfortunately.
FYI the code of refreshState:
refreshState: function () {
    // API call to backend
    axios.get("/api/refreshState")
        .then(response => {
            this.states = response.data.states
        })
        .catch((err) => console.log(err))
}
Here's my use case :
alterState: function (incremental_state) {
    clearInterval(this.scheduler) // ???
    axios.post("/api/alterState", incremental_state)
        .then(() => {
            this.refreshState()
            this.scheduler = setInterval(() => this.refreshState(), 60 * 1000)
        })
        .catch((err) => { console.log(err) })
}
I want to make sure that when I exit alterState, the variable this.states takes into account the addition of the incremental state.
From...
I want to make sure that when i exit alterState, the variable this.states takes into account the addition of incremental state.
...I understand you're performing a change on the backend and you want it reflected on the frontend, and that currently that doesn't happen, although you're calling this.refreshState() right after getting a successful response from /api/alterState. [1]
To achieve this, it's not enough to call this.refreshState(), because your browser caches the result by default (it remembers recent calls and their results, and serves the previous result from cache instead of calling the server again), unless the endpoint is specifically configured to disable caching.
To disable caching for a particular endpoint, you could either:
configure the endpoint (server side) to tell browsers: "Hey, my stuff is time sensitive, don't cache it!". Roughly, this means setting appropriate response headers; I won't go into detail, as I have no idea what technology you're using on the backend and it varies, but a sketch for one possible stack follows below.
or call the endpoint with a unique param each time. This makes the URL "change" from the browser's POV, so it's always going to request from the server:
axios
.get(`/api/refreshState?v=${Date.now()}`)
.then...
I recommend the second option: it's reliable, predictable, and does not depend on server configuration.
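For the first option, a minimal sketch assuming an Express backend (purely an assumption; adapt to whatever framework you actually run):

const express = require('express'); // assumes an Express backend
const app = express();

app.get("/api/refreshState", (req, res) => {
    res.set("Cache-Control", "no-store"); // tell browsers not to cache this response
    res.json({ states: loadStates() });   // loadStates() is a hypothetical helper
});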
And unless something other than the current app instance (some other user, other server scripts, etc.) makes changes to the data, you don't actually need a setInterval. I suggest removing it.
But if you do have other sources changing the server-side data (and you do want to refresh it regardless of user interactions with the app), what you have works perfectly fine; there's no need to even cancel the existing interval when you make a change + refreshState(). [2]
[1] If I misunderstood your question and that is not your problem, please clarify your question; right now it's a bit unclear.
[2] As a side note and personal preference, I suggest renaming refreshState() to getState().

NodeJS and Electron - request-promise in back-end freezes CSS animation in front-end

Note: Additional information is appended to the end of the original question as Edit #1, detailing how request-promise in the back-end is causing the UI freeze. Keep in mind that a pure CSS animation is hanging temporarily, and you can probably just skip to the edit (or read it all for completeness).
The setup
I'm working on a desktop webapp, using Electron.
At one point, the user is required to enter and submit some data. When they click "submit", I use JS to show this css loading animation (bottom-right loader), and send data asynchronously to the back-end...
- HTML -
<button id="submitBtn" type="submit" disabled="true">Go!</button>
<div class="submit-loader">
    <div class="loader _hide"></div>
</div>
- JS -
form.addEventListener('submit', function(e) {
    e.preventDefault();
    loader.classList.remove('_hide');
    setTimeout(function() {
        ipcRenderer.send('credentials:submit', credentials);
    }, 0);
});
where ._hide is simply
._hide {
    visibility: hidden;
}
and where ipcRenderer.send() is an async method, without an option to set it otherwise.
The problem
Normally, the 0ms delay is sufficient to allow the DOM to be changed before the blocking event takes place. But not here. Whether using the setTimeout() or not, there is still a delay.
So, add a tiny delay...
loader.classList.remove('_hide');
setTimeout(function() {
    ipcRenderer.send('credentials:submit', credentials);
}, 100);
Great! The loader displays immediately upon submitting! But... after 100ms, the animation stops dead in its tracks, for about 500ms or so, and then gets back to chooching.
This working -> not working -> working pattern happens regardless of the delay length. As soon as the ipcRenderer starts doing stuff, everything is halted.
So... Why!?
This is the first time I've seen this kind of behavior. I'm pretty well-versed in HTML/CSS/JS, but am admittedly new to NodeJS and Electron. Why is my pure CSS animation being halted by the ipcRenderer, and what can I do to remedy this?
Edit #1 - Additional Info
In the back-end (NodeJS), I am using request-promise to make a call to an external API. This happens when the back-end receives the ipcRenderer message.
var rp = require('request-promise');

ipcMain.on('credentials:submit', function(e, credentials) {
    var options = {
        headers: {
            // ... api-key ...
        },
        json: true,
        url: url,
        method: 'GET'
    };
    return rp(options).then(function(data) {
        // ... send response to callback ...
    }).catch(function(err) {
        // ... send error to callback ...
    });
});
The buggy freezing behavior only happens on the first API call. Successive API calls (i.e. refreshing the desktop app without restarting the NodeJS backend) do not cause the hang-up. Even if I call a different API method, there are no issues.
For now, I've implemented the following hacky workaround:
First, initialize the first BrowserWindow with show:false...
window = new BrowserWindow({
    show: false
});
When the window is ready, send a ping to the external API, and only display the window after a successful response...
window.on('ready-to-show', function() {
    apiWrapper.ping(function(response) {
        if (response.error) {
            app.quit();
        } else {
            window.show(true);
        }
    });
});
This extra step means that there is about a 500ms delay before the window appears, but then all successive API calls (whether .ping() or otherwise) no longer block the UI. We're getting to the verge of callback hell, but this isn't too bad.
So... this is a request-promise issue (which is asynchronous, as far as I can tell from the docs). Not sure why this behavior is only showing-up on the first call, so please feel free to let me know if you know! Otherwise, the little hacky bit will have to do for now.
(Note: I'm the only person who will ever use this desktop app, so I'm not too worried about displaying a "ping failed" message. For a commercial release, I would alert the user to a failed API call.)
It's worth checking how request-promise internally sets up module loading. Reading it, there seems to be a kind of lazy loading (https://github.com/request/request-promise/blob/master/lib/rp.js#L10-L12) when request is called. A quick tryout:
const convertHrtime = require('convert-hrtime');
const a = require('request-promise');
const start = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end = process.hrtime(start);
console.log(convertHrtime(end));
const start2 = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end2 = process.hrtime(start2);
console.log(convertHrtime(end2));
returns value like below:
{ seconds: 0.00421092,
milliseconds: 4.21092,
nanoseconds: 4210920 }
{ seconds: 0.000511664,
milliseconds: 0.511664,
nanoseconds: 511664 }
The first call obviously takes longer than subsequent ones. (The numbers may of course vary; I ran this on bare Node.js on a relatively fast CPU.) If module loading is the major cost of the first call, then it'll block the main process until the module is loaded (because Node.js's require resolution is synchronous).
I'm not able to say this is the concrete reason, but it's worth checking. As suggested in the comments, try another lib or a bare internal module (like Electron's net) to rule it out.
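If lazy loading is indeed the culprit, a lighter variant of the questioner's workaround might be to fire one throwaway request at startup purely to warm the module up, without gating window.show() on the response (a sketch; the URL is a placeholder):

const rp = require('request-promise');

// Warm up request-promise's lazily loaded internals with one throwaway call,
// so later calls don't pay the one-time cost while the UI is animating.
function warmUpRequestPromise() {
    rp({ uri: 'https://example.com/ping', json: true })
        .catch(function () { /* ignore failures; we only want the side effect */ });
}

warmUpRequestPromise();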

simultaneous runs of two JS webworkers: one gets stuck

I'm working on a closed-system web application to aid companies in their everyday online commerce chores. That means, on the one hand, that it won't be open to the public; on the other, it will have to deal with large amounts of data while maintaining a fluent work experience.
This is why I turned to web workers in JS to run all sorts of database access and data loading in the background.
My understanding is that not only does the main UI/main JS remain uninterrupted, but the different web workers also run without hindering each other.
I now have the following setup:
mainJS: function statusCheck which runs on pageload:
function statusCheck() {
    if (typeof(w__statusCheck) == "undefined") {
        var w__statusCheck = new Worker("...statusCheck.js");
        w__statusCheck.postMessage("go");
        w__statusCheck.onmessage = function(e) {
            var message = JSON.parse(e.data);
            if (message.text != undefined) displayMessage(message.text);
        };
    }
}
statusCheck.js, which is the worker, simply goes like this:
function checkStatus() {
    console.log("statusCheck started");
    // I will leave standard parts out:
    // creating and testing the ajax variable against different browsers
    ajaxRequest.onreadystatechange = function() {
        if (ajaxRequest.readyState == 4) {
            self.postMessage(ajaxRequest.responseText);
            var timer;
            timer = self.setTimeout(function() {
                checkStatus();
            }, 1000);
        }
    };
    ajaxRequest.open("GET", "...worker_statusCheck.php", true);
    ajaxRequest.send(null);
}

this.onmessage = function(e) {
    checkStatus();
};
As you can see, this restarts itself every second (for now). The interval might be longer in production.
worker_statusCheck.php simply gets different things from the database and knits them into a JSON object which gives me the system status.
This works beautifully.
Now I have another worker which is supposed to get initiated by a click on a link to effectively call some php to perform actions:
mainJS loadWorker
function loadWorker(url = "") {
    console.log("loadWorker started");
    if (url != "") {
        var uniqueID = "XXX"; // creating a random ID based on timestamp and Math.random()
        if (typeof(window[uniqueID]) == "undefined") {
            var variables = { ajaxURL: url };
            window[uniqueID] = new Worker("....loadWorker.js");
            window[uniqueID].postMessage(JSON.stringify(variables));
            window[uniqueID].onmessage = function(e) {
                var message = JSON.parse(e.data);
                if (message["success"] != undefined) {
                    variables["close"] = "yes";
                    window[uniqueID].postMessage(JSON.stringify(variables));
                }
            };
        }
    }
}
With every click on a certain link this gets called, creates a uniquely named worker, runs it, receives the data and tells the worker to close().
The PHP again does its thing and writes a progress update to the DB after each step of the lengthy procedure. These progress updates I fetch from the DB with the above repeating statusCheck.
Now, I can see the entries in the DB with timestamps, so I know they get written, each at its time.
So, both workers do their job and run reliably. But I have noticed that whenever I initiate the manual (randomly named) worker, the statusCheck actually stops performing. It just gets stuck... I was able to confirm this with console output from both workers. So it's not the main JS that seems stuck; the statusCheck itself actually pauses... and resumes when loadWorker is done.
Am I missing something fundamental here? Any insight would be appreciated since I'm new to this concept of web workers.
Thanx :)
Your question lacks resources to truly figure out what exactly goes wrong. I can confirm that two web workers can operate at the same time, even with synchronous operations. I tested this both for for-loops and sync XHR requests.
There are multiple things I would recommend though.
First: unless you're processing the data with some CPU-heavy algorithm, web workers are a waste of time. XHR requests do not block the main thread (unless you explicitly ask them to).
In statusCheck() you declare var w__statusCheck, which makes it a local variable. Therefore the check will always see it as undefined from the outer scope, and the worker might get garbage-collected once no code is running in it.
Do not use XMLHttpRequest.onreadystatechange. Use onload and onerror.
Random unique IDs for variables are almost always wrong. If you need to store the worker reference at all, either give it a reasonable name (e.g. the URL it's supposed to load) or use an incremental id, as in the sketch below.
Do NOT stringify data that you post to a web worker. It's already done for you by the browser, possibly in a more optimal manner. Converting the data first is the single most common stupid thing people do with web workers.
Also, when posting a question, at least make sure the code makes some sense; as originally posted, the curly braces did not match.
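To illustrate the naming recommendation, a minimal sketch of a registry with incremental ids (these names are hypothetical, not from the original code):

// Keep worker references in a registry keyed by an incremental id,
// instead of inventing random global variable names.
var workerRegistry = {
    nextId: 0,
    workers: {},
    spawn: function (scriptUrl) {
        var id = ++this.nextId;
        this.workers[id] = new Worker(scriptUrl);
        return id;
    },
    terminate: function (id) {
        if (this.workers[id]) {
            this.workers[id].terminate();
            delete this.workers[id];
        }
    }
};

var id = workerRegistry.spawn("loadWorker.js");
// ... later, when the worker reports success: workerRegistry.terminate(id);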
Alright, I figured it out:
I was looking in all the wrong places. Turns out I had initialized my PHP session in all the PHP scripts that are called by the workers, and my two parallel workers both called one. So the session file was locked by the first PHP script, and the second had to wait until it was released. It was not the workers or the JS being hindered; it was the PHP.
I have now taken the session initialization out of my statusCheck.php and it works like a charm. I will keep it in the others that handle the user input responses, because there it actually makes sense: the user clicks the button "compile data XY", which is run by the worker and takes a while. Impatient as he is, he already clicks the next button "show this data"... and thanks to the locked session file I get sort of a neat queue for those actions. :)
I will still take the above recommendations to heart and see to it that I improve my code. :)

Trigger an event when user navigates away

I need to call a JavaScript/jQuery function, which has a few lines of code in it, on a PHP page when the user closes the window/tab or navigates away by clicking a link. I've tried the onbeforeunload function, but only the return "blah blah blah"; part executes and everything else is ignored. I've also tried the .unload method from jQuery, but for some reason this code doesn't run.
$(window).unload(function() {
    alert('blah blah blah');
});
Please suggest alternatives. Thanks..
Here is a simple working example. Whatever you return from the unload callback will be displayed in a browser popup confirmation.
Working example sending Ajax request before unload
http://jsfiddle.net/alexflav23/hujQs/7/
The easiest way to do this:
window.onbeforeunload = function(event) {
    // do stuff here
    return "you have unsaved changes. Are you sure you want to navigate away?";
};
in jQuery:
$(window).on("beforeunload", function() {
    $.ajax("someURL", {
        async: false,
        data: "test",
        success: function(event) {
            console.log("Ajax request executed");
        }
    });
    return "This is a jQuery version";
});
Look into the Network tab of the browser. You will see the request being sent, just as you wanted; just send the appropriate data.
Bear in mind that all operations triggered here must be synchronous, so you can only make synchronous Ajax requests, for instance. However, the above is not entirely reliable for any purpose.
Opt for periodic back-up of user data to localStorage and sync with the server automatically. Keep window.onbeforeunload just as an extra precaution, but not as the main mechanism; it's well known to cause problems.
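On that note, modern browsers offer navigator.sendBeacon, which queues a small POST that survives page unload without blocking it. A minimal sketch (the "/log-exit" endpoint is a placeholder):

window.addEventListener("beforeunload", function () {
    var payload = JSON.stringify({ lastSeen: Date.now() });
    // sendBeacon queues the request and lets the page unload immediately.
    navigator.sendBeacon("/log-exit", payload);
});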
This is an old question, but I wanted to share an alternative approach that has the benefit of working with high consistency:
Establish a WebSocket connection to the server, and when the client navigates away the WebSocket connection will be closed. Server-side, you can detect the closed connection in a callback and run whatever code you need on the server.
Executing Javascript on page unload is often unreliable (as discussed in the other answer) because it's inherently at odds with the user's intention. This method will always work, although it is admittedly quite a bit more cumbersome to implement.
This does change the context of your "run before leaving" code from client-side to server-side, but I imagine for most cases the difference is inconsequential. Anything you want to run client-side before the client leaves your page is probably not going to change anything the client sees, so it's probably fine to run it server side. If there is specific data you need from the client you can send it through the WebSocket to the server.
The only situation I can think of off the top of my head where this might cause unexpected behavior is if the user loses the WS connection without actually navigating away, e.g. they lose internet or put their computer to sleep. Whether or not that's a big deal is probably highly dependent on what kind of code you're trying to execute.
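For illustration, a minimal sketch of the server side using Node's ws package (an assumption; any WebSocket server would work the same way):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (socket) {
    socket.on('message', function (data) {
        // Stash any per-client state you may need at disconnect time.
    });
    socket.on('close', function () {
        // The client navigated away (or lost the connection):
        // run your "user left the page" logic here, server side.
    });
});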
In many projects of mine, the methods mentioned here have been unstable. The only thing that works for me is to bind the event as an original attribute on the body element.
<body onunload="my_function_unload()">
jQuery method:
$('body').attr('onunload', 'my_function_unload()');
From an iframe:
<body onunload="window.parent.my_function_unload()">
jQuery method:
$('<iframe />').load(function() {
    var $body = $(this).contents().find('body');
    $body.attr('onunload', 'window.parent.my_function_unload()');
});
Also important: no arguments in the attribute, and the function must be in the global window scope, otherwise nothing happens.
A common mistake, for example: if your my_function_unload() is wrapped inside a ;( function( $ ) {... or $(document).ready(function(){... closure, it must be moved outside that private scope. And don't forget to use jQuery instead of the $ prefix then (when working with WordPress, for example).
This is kind of a pain, as Chrome, at least in Version 92.0.4515.131, seems to be clamping the security screws on what you can get away with in beforeunload. I'm unable to make a synchronous ajax request, for example.
If there's any chance the user will be back to your site and you can wait until then to deal with their having closed a window (which does work in my use case), know that setting cookies does currently seem to be fair game during the beforeunload event. Even works when I close the browser. Covers most anything but power cycling the computer, it appears.
Here's a reference example (with getCookie stolen from this SO question):
function setCookie(name, value) {
    document.cookie =
        '{0}={1};expires=Fri, 31 Dec 9999 23:59:59 GMT;path=/;SameSite=Lax'
            .replace("{0}", name)
            .replace("{1}", value);
}

// https://stackoverflow.com/a/25490531/1028230
function getCookie(cookieName) {
    var b = document.cookie.match('(^|;)\\s*' + cookieName + '\\s*=\\s*([^;]+)');
    return b ? b.pop() : '';
}
window.addEventListener('beforeunload', function (e) {
    console.log('cookie value before reset: ' + getCookie('whenItHappened'));
    var now = +new Date();
    console.log("value to be set: " + now);
    setCookie('whenItHappened', now);
    return "some string if you want the 'are you sure you want to leave' dialog to appear";
});

How to detect when several AJAX requests have completed or failed?

I have an app that loads several resources when it's first run, which are stored in localStorage. I have a function that checks whether all the local storage variables are set, so that part is working okay.
My method of working is like this:
Display a loading message.
Initialize the AJAX requests.
Start a timer interval to check if everything has loaded.
When the data has loaded, initialize the application etc.
If the data did not load, display an error message.
The problem is with #5: how do I detect whether there was an error, for example a connection problem, or the server sending back invalid data for whatever reason? Here is my current code; downloadData just performs a basic AJAX request:
// check local storage and download if any missing
if ( !checkLocalStorage() )
{
    $('#content').before( '<div class="notice" id="downloading">Downloading data, please wait...</div>' );
    for ( var i in db_tables )
    {
        if ( localStorage[db_tables[i]] == null )
            downloadData( db_tables[i] );
    }
}

// check progress
var timer = setInterval( function() {
    if ( checkLocalStorage() )
    {
        // everything is downloaded
        $('#downloading').hide();
        clearInterval(timer);
        initApp();
    }
}, 500 );
Could you turn it around a bit? Something like this (with sensible variable names and a "real" API) would simplify things:
Display a loading message.
Instantiate an application initializer, ai.
Crank up the AJAX requests:
Success handlers call ai.finished(task).
Error handlers call ai.error(task).
Register with the initializer, ai.register(task), in case a "you're taking too long" check is desired.
Once all the AJAX requests have called ai.finished, initialize the application etc.
If any of the AJAX tasks called ai.error, then display an error message and start cleaning things up.
This way you wouldn't need setInterval(), and the individual AJAX tasks would tell you when they have finished or fallen over. You might still want the interval to deal with tasks that take too long, but most of the logic would be notification-based rather than polling-based.
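A minimal sketch of such an initializer (following the outline above; the names are illustrative, not a real API):

// Notification-based initializer instead of polling.
function createInitializer(onReady, onError) {
    var pending = 0;
    var failed = false;
    return {
        register: function (task) { pending++; },
        finished: function (task) {
            if (--pending === 0 && !failed) onReady();
        },
        error: function (task) {
            failed = true;
            onError(task);
        }
    };
}

var ai = createInitializer(initApp, function (task) {
    $('#downloading').text('Failed to download ' + task + '.');
});

for (var i in db_tables) {
    if (localStorage[db_tables[i]] == null) {
        ai.register(db_tables[i]);
        downloadData(db_tables[i]); // its success handler calls ai.finished, its error handler ai.error
    }
}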
Seeing your actual Ajax calls in downloadData would help, but I suggest you look over the jQuery AJAX API again. Ajax calls have callbacks not just for overall completion, but specifically for success and for failure, including errors. Try something like retrying when there is an error, and if it continues to fail you can warn the user. You can also use these callbacks to notify your application when the loading is done, instead of using an interval timer.
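For instance, if downloadData were changed to return the jqXHR object from its $.ajax call (an assumption about its internals), jQuery's $.when could aggregate the requests (a sketch):

var requests = [];
for (var i in db_tables) {
    if (localStorage[db_tables[i]] == null) {
        requests.push(downloadData(db_tables[i])); // assumes downloadData returns the jqXHR
    }
}

// Resolves when every request succeeds, rejects as soon as one fails.
$.when.apply($, requests)
    .done(function () {
        $('#downloading').hide();
        initApp();
    })
    .fail(function (jqXHR, textStatus) {
        $('#downloading').text('Download failed: ' + textStatus);
    });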
