What is the best way to check internet connection - javascript

I made a CMS which pulls large amounts of data during operation.
The CMS is built with PHP, MySQL, jQuery and Bootstrap, and uses AJAX.
The problem is that losing the internet connection causes problems with displaying and scrolling.
I would love a good way to show an error and block all functions on the site when there is no internet connection, and re-enable everything once the connection is established again.
Thanks!
(Sorry for my bad English.)

If you are using jQuery, you can just hook into the global error handler and lock up your application when an error occurs. The lock-up screen could simply ask the user to try again.
$( document ).ajaxError(function() {
  // lock your UI here
});
Also, once the UI is locked, you could run a function that pings your server with exponential backoff and automatically unlocks the application when the network is restored.
Locking your app can easily be done with jQuery's blockUI plugin.
Example
(function ($) {
  var locked = false;
  var errorRetryCount = 0;
  var blockUiOptions = { message: "Oops! Could not reach the server!" };

  // change this function to adjust the exponential backoff delay
  function backoff(n) {
    return Math.pow(2, n) * 100;
  }

  $(function () {
    $( document ).ajaxError(function (event, jqXHR, settings) {
      errorRetryCount += 1;
      if (!locked) {
        locked = true;
        $.blockUI(blockUiOptions);
      }
      // retry the failed request with its original settings...
      setTimeout(function () { $.ajax(settings); }, backoff(errorRetryCount));
    }).ajaxSuccess(function () {
      locked && $.unblockUI();
      locked = false;
      errorRetryCount = 0;
    });
  });
})(jQuery);
Note: you may not want to retry your request indefinitely upon network failure, and may want to stop retrying at some point. Since that is out of the scope of this question, I'll leave it as it is. However, you may take a look at this related question, which may help you sort that part out.

If you're using jQuery already, you could make a simple ajax call to your server, and if it fails within a couple of seconds, either your server or the client's internet connection is down.
Something like this:
setInterval(function() {
  $.ajax({
    url: "https://cms.example.com/ping"
  })
  .fail(function( data ) {
    alert('Connection lost?');
    // remember to do something smart which shows the error just once
    // instead of every five seconds. Increasing the interval every
    // time it fails seems a good start.
  });
}, 5*1000);
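To avoid alerting on every tick, a minimal sketch of that "smarter" version (the /ping URL, the 5-second base delay and the 5-minute cap are assumptions) could self-reschedule with a growing delay and only warn once:
var pingDelay = 5 * 1000;
var offline = false;

function ping() {
  $.ajax({ url: "https://cms.example.com/ping" })
    .done(function () {
      offline = false;
      pingDelay = 5 * 1000;                   // back to normal once the server answers
    })
    .fail(function () {
      if (!offline) {
        offline = true;
        alert('Connection lost?');            // shown only on the first failure
      }
      pingDelay = Math.min(pingDelay * 2, 5 * 60 * 1000); // back off, cap at 5 minutes
    })
    .always(function () {
      setTimeout(ping, pingDelay);            // reschedule instead of a fixed setInterval
    });
}

setTimeout(ping, pingDelay);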

Using plain JavaScript and simple code:
window.navigator.onLine ? 'on' : 'off'
It is supported by almost every browser; please check Can I use.
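If you also want to react when connectivity changes rather than polling that flag, a minimal sketch using the standard online/offline events (the handler body is just a placeholder):
function updateStatus() {
  var status = window.navigator.onLine ? 'on' : 'off';
  console.log('connection is ' + status); // swap in your own UI lock/unlock here
}

window.addEventListener('online', updateStatus);
window.addEventListener('offline', updateStatus);
updateStatus(); // report the initial state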

Edit: I re-read your question and realized I misunderstood it on my first pass, so this isn't valid for continuous monitoring... but I'll leave it here anyway as it may be useful for someone else.
I would suggest loading a small JS file that adds a class to an element of your page and then checking whether that class was applied after the fact... assuming you are using jQuery.
File on the remote server, loaded into your page after jQuery via a script tag:
$('html').addClass('connected');
Local code:
if ($('html').hasClass('connected')) {
  // connected
} else {
  // not connected
}

Related

NodeJS and Electron - request-promise in back-end freezes CSS animation in front-end

Note: additional information is appended to the end of the original question as Edit #1, detailing how request-promise in the back-end is causing the UI freeze. Keep in mind that a pure CSS animation is hanging temporarily; you can probably just skip to the edit (or read it all for completeness).
The setup
I'm working on a desktop webapp, using Electron.
At one point, the user is required to enter and submit some data. When they click "submit", I use JS to show this css loading animation (bottom-right loader), and send data asynchronously to the back-end...
- HTML -
<button id="submitBtn" type="submit" disabled="true">Go!</button>
<div class="submit-loader">
  <div class="loader _hide"></div>
</div>
- JS -
form.addEventListener('submit', function(e) {
  e.preventDefault();
  loader.classList.remove('_hide');
  setTimeout(function() {
    ipcRenderer.send('credentials:submit', credentials);
  }, 0);
});
where ._hide is simply
._hide {
  visibility: hidden;
}
and where ipcRenderer.send() is an async method, without option to set otherwise.
The problem
Normally, the 0ms delay is sufficient to allow the DOM to be changed before the blocking event takes place. But not here. Whether using the setTimeout() or not, there is still a delay.
So, add a tiny delay...
loader.classList.remove('_hide');
setTimeout(function() {
  ipcRenderer.send('credentials:submit', credentials);
}, 100);
Great! The loader displays immediately upon submitting! But... after 100ms, the animation stops dead in its tracks, for about 500ms or so, and then gets back to chooching.
This working -> not working -> working pattern happens regardless of the delay length. As soon as the ipcRenderer starts doing stuff, everything is halted.
So... Why!?
This is the first time I've seen this kind of behavior. I'm pretty well-versed in HTML/CSS/JS, but am admittedly new to NodeJS and Electron. Why is my pure CSS animation being halted by the ipcRenderer, and what can I do to remedy this?
Edit #1 - Additional Info
In the back-end (NodeJS), I am using request-promise to make a call to an external API. This happens when the back-end receives the ipcRenderer message.
var rp = require('request-promise');

ipcMain.on('credentials:submit', function(e, credentials) {
  var options = {
    headers : {
      ... api-key...
    },
    json: true,
    url : url,
    method : 'GET'
  };
  return rp(options).then(function(data) {
    ... send response to callback...
  }).catch(function(err) {
    ... send error to callback...
  });
});
The buggy freezing behavior only happens on the first API call. Successive API calls (i.e. refreshing the desktop app without restarting the NodeJS backend), do not cause the hang-up. Even if I call a different API method, there are no issues.
For now, I've implemented the following hacky workaround:
First, initialize the first BrowserWindow with show:false...
window = new BrowserWindow({
  show: false
});
When the window is ready, send a ping to the external API, and only display the window after a successful response...
window.on('ready-to-show', function() {
  apiWrapper.ping(function(response) {
    if (response.error) {
      app.quit();
    } else {
      window.show(true);
    }
  });
});
This extra step means that there is about 500ms delay before the window appears, but then all successive API calls (whether .ping() or otherwise) no longer block the UI. We're getting to the verge of callback hell, but this isn't too bad.
So... this is a request-promise issue (which is asynchronous, as far as I can tell from the docs). Not sure why this behavior is only showing-up on the first call, so please feel free to let me know if you know! Otherwise, the little hacky bit will have to do for now.
(Note: I'm the only person who will ever use this desktop app, so I'm not too worried about displaying a "ping failed" message. For a commercial release, I would alert the user to a failed API call.)
It's worth checking how request-promise sets up its module loading internally. Reading the source, it looks like there is some lazy loading (https://github.com/request/request-promise/blob/master/lib/rp.js#L10-L12) when request is called. A quick try-out:
const convertHrtime = require('convert-hrtime');
const a = require('request-promise');
const start = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end = process.hrtime(start);
console.log(convertHrtime(end));
const start2 = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end2 = process.hrtime(start2);
console.log(convertHrtime(end2));
This returns values like the following:
{ seconds: 0.00421092,
milliseconds: 4.21092,
nanoseconds: 4210920 }
{ seconds: 0.000511664,
milliseconds: 0.511664,
nanoseconds: 511664 }
The first call obviously takes longer than subsequent ones. (The numbers may of course vary; I ran this on bare node.js on a relatively fast CPU.) If module loading is the major cost of the first call, then it will block the main process until the module is loaded (because node.js require resolution is synchronous).
I'm not able to say this is the concrete reason, but it's worth checking. As suggested in the comments, try another lib or a bare internal module (like Electron's net) to rule it out.
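If lazy module loading really is the culprit, one workaround sketch (hypothetical, untested against your setup; example.com and the blank window are placeholders) is to pay that cost up front, before the window is shown, instead of on the first real API call:
const { app, BrowserWindow } = require('electron');
const rp = require('request-promise');

function warmUpRequestPromise() {
  // Throwaway request whose only job is to force request-promise (and the
  // modules it lazily requires) to finish loading before the UI needs them.
  return rp({ uri: 'https://example.com', simple: false })
    .catch(function () { /* ignore the result; we only wanted the require cost */ });
}

app.on('ready', function () {
  warmUpRequestPromise().then(function () {
    // create and show your BrowserWindow here, as usual
    new BrowserWindow({ show: true });
  });
});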

Batching requests to minimize cell drain

This article hit the top of HackerNews recently: http://highscalability.com/blog/2013/9/18/if-youre-programming-a-cell-phone-like-a-server-youre-doing.html#
In which it states:
The cell radio is one of the biggest battery drains on a phone. Every time you send data, no matter how small, the radio is powered on for up to 20-30 seconds. Every decision you make should be based on minimizing the number of times the radio powers up. Battery life can be dramatically improved by changing the way your apps handle data transfers. Users want their data now, the trick is balancing user experience with transferring data and minimizing power usage. A balance is achieved by apps carefully bundling all repeating and intermittent transfers together and then aggressively prefetching the intermittent transfers.
I would like to modify $.ajax to add an option like "doesn't need to be done right now, just do this request when another request is launched". What would be a good way to go about this?
I started with this:
(function($) {
  var batches = [];
  var oldAjax = $.ajax;
  var lastAjax = 0;
  var interval = 5*60*1000; // Should be between 2-5 minutes

  $.extend({batchedAjax: function() {
    batches.push(arguments);
  }});

  var runBatches = function() {
    var now = new Date().getTime();
    var batched;
    if (lastAjax + interval < now) {
      while (batched = batches.pop()) {
        oldAjax.apply(null, batched);
      }
    }
  };

  setInterval(runBatches, interval);

  $.ajax = function() {
    runBatches();
    oldAjax.apply(null, arguments);
    lastAjax = new Date().getTime();
  };
})(jQuery);
I can't tell from the wording of the article; I guess a good batch "interval" is 2-5 minutes, so I just used 5.
Is this a good implementation?
How can I make this a true modification of just the ajax method, by adding a {batchable:true} option to the method? I haven't quite figured that out either.
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
Are there other things here that would cause a battery to drain faster?
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
I made some progress on adding the option to $.ajax, started to edit the question, and realized it's better as an answer:
(function($) {
  var batches = [];
  var oldAjax = $.ajax;
  var lastAjax = 0;
  var interval = 5*60*1000; // Should be between 2-5 minutes

  var runBatches = function() {
    var now = new Date().getTime();
    var batched;
    if (lastAjax + interval < now) {
      while (batched = batches.pop()) {
        oldAjax.apply(null, batched);
      }
    }
  };

  setInterval(runBatches, interval);

  $.ajax = function(url, options) {
    if (options && options.batchable) {
      batches.push(arguments);
      return;
    }
    runBatches();
    oldAjax.apply(null, arguments);
    lastAjax = new Date().getTime();
  };
})(jQuery);
That was actually fairly straightforward. I'd love to see a better answer though.
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
From an iPhone 4, iOS 6.1.0 Safari environment:
I wrote an app with a countdown timer that updated an element's text at one-second intervals. The DOM tree was of about medium complexity. The app was a relatively simple calculator that didn't do any AJAX. However, I always had a sneaking suspicion that those once-per-second reflows were killing me. My battery sure seemed to deplete rather quickly whenever I left it turned on on a table, with Safari on the app's webpage.
And there were only two timeouts in that app. Now, I don't have any quantifiable proof that the timeouts were draining my battery, but losing about 10% every 45 minutes from this dopey calculator was a little unnerving. (Who knows though, maybe it was the backlight.)
On that note: You may want to build a test app that does AJAX on intervals, other things on intervals, etc, and compare how each function drains your battery under similar conditions. Getting a controlled environment might be tricky, but if there is a big enough difference in drain, then even "imperfect" testing conditions will yield noticeable-enough results for you to draw a conclusion.
However, I found out an interesting thing about how iOS 6.1.0 Safari handles timeouts:
The timeouts don't run their callbacks if you turn off the screen.
Consequently, long-term timeouts will "miss their mark."
If my app's timer was to display the correct time (even after I closed and reopened the screen), then I couldn't go the easy route and do secondsLeft -= 1. If I turned off the screen, then the secondsLeft (relative to my starting time) would have been "behind," and thus incorrect. (The setTimeout callback did not run while the screen was turned off.)
The solution was that I had to recalculate timeLeft = fortyMinutes - (new Date().getTime() - startTime) on each interval.
Also, the timer in my app was supposed to change from green, to lime, to yellow, to red, as it got closer to expiry. Since, at this point, I was worried about the efficiency of my interval-code, I suspected that it would be better to "schedule" my color changes for their appropriate time (lime: 20 minutes after starting time, yellow: 30 mins, red: 35) (this seemed preferable to a quadruple-inequality-check on every interval, which would be futile 99% of the time).
However, if I scheduled such a color change, and my phone's screen was turned off at the target time, then that color change would never happen.
The solution was to check, on each interval, if the time elapsed since the last 1-second timer update had been ">= 2 seconds". (This way, the app could know if my phone had had its screen turned off; it was able to realize when it had "fallen behind.") At that point, if necessary, I would "forcibly" apply a color change and schedule the next one.
(Needless to say, I later removed the color-changer...)
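Put together, a minimal sketch of that wall-clock approach (the element id and the catch-up logic are illustrative):
var fortyMinutes = 40 * 60 * 1000;
var startTime = new Date().getTime();
var lastTick = startTime;

var display = document.getElementById('timer'); // whatever element shows the time

setInterval(function () {
  var now = new Date().getTime();
  var timeLeft = Math.max(0, fortyMinutes - (now - startTime));

  // A gap of 2+ seconds between ticks means the timers were suspended
  // (screen off); catch up any "scheduled" changes (colours etc.) here.
  if (now - lastTick >= 2000) {
    // forcibly apply whatever should have happened while asleep
  }
  lastTick = now;

  display.textContent = Math.ceil(timeLeft / 1000) + 's';
}, 1000);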
So, I believe this confirms my claim that
iOS 6.1.0 Safari does not execute setTimeout callback functions if the screen is turned off.
So keep this in mind when "scheduling" your AJAX calls, because you will probably be affected by this behavior as well.
And, using my proposition, I can answer your question:
At least for iOS, we know that setTimeout sleeps while the screen is off.
Thus setTimeout won't give your phone "nightmares" ("keep it awake").
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
If you can get this implementation to work correctly then it seems like it would be worthwhile.
You will incur latency for every AJAX request you make, which will slow down your app to some degree. (Latency is the bane of page loading time, after all.) So you will definitely achieve some gain by "bundling" requests. Extending $.ajax such that you can "batch" requests will definitely have some merit.
The article you've linked clearly focuses on optimizing power consumption for apps (yes, the weather widget example is horrifying). Actively using a browser is, by definition, a foreground task; plus something like ApplicationCache is already available to reduce the need for network requests. You can then programmatically update the cache as required and avoid DIY.
Sceptical side note: if you are using jQuery as part of your HTML5 app (perhaps wrapped in Sencha or similar), perhaps the mobile app framework has more to do with request optimization than the code itself. I have no proof whatsoever, but goddammit this sounds about right :)
How can I make this a true modification of just the ajax method, by adding a {batchable:true} option to the method? I haven't quite figured that out either.
A perfectly valid approach, but to me this sounds like duck punching gone wrong. I wouldn't do it. Even if you correctly default batchable to false, I would personally rather use a facade (perhaps even in its own namespace?):
var gQuery = {}; // gQuery = green jQuery, patent pending :)

gQuery.ajax = function(options, callback) {
  // your own .ajax with blackjack and hooked timeouts, ultimately just calling
  $.ajax(options);
};
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
Native implementations of setInterval and setTimeout are very similar afaik; think of the latter not firing while the website is in the background for online banking inactivity prompts; when a page is not in the foreground its execution is basically halted. If an API is available for such "deferrals" (the article mentions some relevant iOS7 capabilities) then it's likely a preferable approach; otherwise I see no reason to avoid setInterval.
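As a related sketch (not from the article), the Page Visibility API lets you stop polling entirely while the tab is hidden; poll() here stands in for whatever refresh routine you use:
function poll() {
  // your own refresh logic, e.g. a $.ajax call that updates the dashboard
}

var pollTimer = null;

function startPolling() {
  if (pollTimer === null) {
    pollTimer = setInterval(poll, 60 * 1000);
  }
}

function stopPolling() {
  clearInterval(pollTimer);
  pollTimer = null;
}

document.addEventListener('visibilitychange', function () {
  if (document.hidden) {
    stopPolling();   // no radio/CPU use while the tab is in the background
  } else {
    startPolling();
  }
});

startPolling();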
Are there other things here that would cause a battery to drain faster?
I'd speculate that any heavy load would (from calculating pi to pretty 3d transitions perhaps). But this sounds like premature optimization to me and reminds me of an e-reader with battery-saving mode that turned the LCD screen completely off :)
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
The article pointed out a weather app being unreasonably greedy, and that would concern me. It seems to be a development oversight though more than anything else, as in fetching data more often than it's really needed. In an ideal world, this should be nicely handled on OS level, otherwise you'd end up with an array of competing workarounds. IMO: don't bother until highscalability posts another article telling you to :)
Here is my version:
(function($) {
  var batches = [],
      ajax = $.ajax,
      interval = 5*60*1000, // Should be between 2-5 minutes
      timeout = setTimeout(function() { $.ajax(); }, interval);

  $.ajax = function(url, options) {
    var batched, returns;
    if (typeof url === "string") {
      batches.push(arguments);
      if (options && options.batchable) {
        return;
      }
    }
    while (batched = batches.shift()) {
      returns = ajax.apply(null, batched);
    }
    clearTimeout(timeout);
    timeout = setTimeout(function() { $.ajax(); }, interval);
    return returns;
  };
})(jQuery);
I think this version has the following main advantages:
If there is a non-batchable ajax call, the connection is used to send all batches. This resets the timer.
Returns the expected return value on direct ajax calls
A direct processing of the batches can be triggered by calling $.ajax() without parameters
As far as hacking the $.ajax method goes, I would:
try to also preserve the Promise mechanism provided by $.ajax,
take advantage of one of the global ajax events to trigger ajax calls,
maybe add a timer, to have the batch be called anyway in case no "immediate" $.ajax call is made,
give a new name to this function (in my code: $.batchAjax) and keep the original $.ajax.
Here is my go:
(function ($) {
  var queue = [],
      timerID = 0;

  function ajaxQueue(url, settings) {
    // custom deferred used to forward the $.ajax promise
    var dfd = new $.Deferred();

    // when called, this function executes the $.ajax call
    function call() {
      $.ajax(url, settings)
        .done(function () {
          dfd.resolveWith(this, arguments);
        })
        .fail(function () {
          dfd.rejectWith(this, arguments);
        });
    }

    // set a global timer, which will trigger the dequeuing in case no ajax call is ever made ...
    if (timerID === 0) {
      timerID = window.setTimeout(ajaxCallOne, 5000);
    }

    // enqueue this function, for later use
    queue.push(call);

    // return the promise
    return dfd.promise();
  }

  function ajaxCallOne() {
    window.clearTimeout(timerID);
    timerID = 0;
    if (queue.length > 0) {
      var f = queue.pop();
      // async call : wait for the current ajax events
      // to be processed before triggering a new one ...
      setTimeout(f, 0);
    }
  }

  // use the two functions :
  $(document).bind('ajaxSend', ajaxCallOne);
  // or :
  // $(document).bind('ajaxComplete', ajaxCallOne);

  $.batchAjax = ajaxQueue;
}(jQuery));
In this example, the hard-coded delay of 5 seconds defeats the purpose of "if less than 20 seconds between calls, it drains the battery". You can put in a bigger one (5 minutes?), or remove it altogether - it all depends on your app really.
fiddle
Regarding the general question "How do I write a web app which doesn't burn a phone's battery in 5 minutes ?" : it will take more than one magic arrow to deal with that one. It is a whole set of design decisions you will have to take, which really depends on your app.
You will have to arbitrate between loading as much data as possible in one go (and possibly send data which won't be used) vs fetching what you need (and possibly send many small individual requests).
Some parameters to take into account are :
volume of data (you don't want to drain your clients' data plans either ...),
server load,
how much can be cached,
importance of being "up to date" (5 minutes delay for a chat app won't work),
frequency of client updates (a network game will probably require lots of updates from the client, a news app probably less ...).
One rather general suggestion : you can add a "live update" checkbox, and store its state client side. When unchecked, the client should hit a "refresh" button to download new data.
Here is my go; it somewhat grew out of what @Joe Frambach posted, but I wanted the following additions:
Retain the jqXHR and error/success callbacks if they were provided
Debounce identical requests (by url and options match) while still triggering the callbacks or jqXHRs provided for EACH call
Use AjaxSettings to make configuration easier
Don't have each non-batched ajax call flush the batch; those should be separate processes IMO, but do supply an option to force a batch flush as well
Either way, this sucker would most likely be better done as a separate plugin rather than overriding and affecting the default .ajax function... enjoy:
(function($) {
  $.ajaxSetup({
    batchInterval: 5*60*1000,
    flushBatch: false,
    batchable: false,
    batchDebounce: true
  });

  var batchRun = 0;
  var batches = {};
  var oldAjax = $.ajax;

  var queueBatch = function(url, options) {
    var match = false;
    var dfd = new $.Deferred();
    batches[url] = batches[url] || [];
    if (options.batchDebounce || $.ajaxSettings.batchDebounce) {
      if (!options.success && !options.error) {
        $.each(batches[url], function(index, batchedAjax) {
          if ($.param(batchedAjax.options) == $.param(options)) {
            match = index;
            return false;
          }
        });
      }
      if (match === false) {
        batches[url].push({options: options, dfds: [dfd]});
      } else {
        batches[url][match].dfds.push(dfd);
      }
    } else {
      batches[url].push({options: options, dfds: [dfd]});
    }
    return dfd.promise();
  };

  var runBatches = function() {
    $.each(batches, function(url, batchedOptions) {
      $.each(batchedOptions, function(index, batchedAjax) {
        oldAjax.call(null, url, batchedAjax.options).then(
          function(data, textStatus, jqXHR) {
            var args = arguments;
            $.each(batchedAjax.dfds, function(index, dfd) {
              dfd.resolve.apply(dfd, args);
            });
          },
          function(jqXHR, textStatus, errorThrown) {
            var args = arguments;
            $.each(batchedAjax.dfds, function(index, dfd) {
              dfd.reject.apply(dfd, args);
            });
          }
        );
      });
    });
    batches = {};
    batchRun = new Date().getTime();
  };

  setInterval(runBatches, $.ajaxSettings.batchInterval);

  $.ajax = function(url, options) {
    options = options || {};
    if (options.batchable) {
      var xhr = queueBatch(url, options);
      if ((new Date().getTime()) - batchRun >= (options.batchInterval || $.ajaxSettings.batchInterval)) {
        runBatches();
      }
      return xhr;
    }
    if (options.flushBatch) {
      runBatches();
    }
    return oldAjax.call(null, url, options);
  };
})(jQuery);
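For illustration, this is roughly how the override above would be used (the URLs and data are made up):
// Queued: held until the next flush (interval tick, an explicit flush, or
// another batchable call arriving after batchInterval has elapsed).
$.ajax("/api/stats", { batchable: true }).done(function (data) {
  console.log("stats arrived with the batch", data);
});

// Sent immediately, and asks for any queued batchable requests to go out too.
$.ajax("/api/save", {
  method: "POST",
  data: { id: 1 },
  flushBatch: true
});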

Trigger an event when user navigates away

I need to call a JavaScript/jQuery function which has a few lines of code in it, on a PHP page, when the user closes their window/tab or navigates away by clicking a link. I've tried the onbeforeunload function, but only the return "blah blah blah"; part executes and everything else is ignored. I've also tried the .unload method from jQuery, but for some reason this code doesn't run.
$(window).unload(function() {
  alert('blah blah blah');
});
Please suggest alternatives. Thanks.
Here is a simple working example. Whatever you return from the beforeunload callback will be displayed in a browser confirmation popup.
Working example sending Ajax request before unload
http://jsfiddle.net/alexflav23/hujQs/7/
The easiest way to do this:
window.onbeforeunload = function(event) {
  // do stuff here
  return "you have unsaved changes. Are you sure you want to navigate away?";
};
in jQuery:
$(window).on("beforeunload", function() {
$.ajax("someURL", {
async: false,
data: "test",
success: function(event) {
console.log("Ajax request executed");
}
});
return "This is a jQuery version";
});
Look in the Network tab of the browser; you will see the request being sent, just as you wanted. Just send the appropriate data.
Bear in mind that all operations triggered must be synchronous, so you can only make synchronous ajax requests, for instance. However, the above is not entirely reliable for any purpose.
Opt for periodic back-up of user data to localStorage and sync with the server automatically. Keep window.onbeforeunload just as an extra precaution, not as the main mechanism; it's well known to cause problems.
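A rough sketch of that back-up idea (the storage key, the /api/save-draft endpoint and the #editor-form selector are illustrative assumptions):
function backupDraft() {
  // snapshot whatever the user is working on; here, a form serialized by jQuery
  localStorage.setItem('draft', $('#editor-form').serialize());
}

setInterval(backupDraft, 30 * 1000);          // periodic local snapshot

window.addEventListener('beforeunload', function () {
  backupDraft();                              // last-chance snapshot; cheap and synchronous
});

// On the next page load, push any leftover draft to the server.
var draft = localStorage.getItem('draft');
if (draft) {
  $.post('/api/save-draft', draft).done(function () {
    localStorage.removeItem('draft');
  });
}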
This is an old question, but I wanted to share an alternative approach that has the benefit of working with high consistency:
Establish a WebSocket connection to the server, and when the client navigates away the WebSocket connection will be closed. Server-side, you can detect the closed connection in a callback and run whatever code you need on the server.
Executing Javascript on page unload is often unreliable (as discussed in the other answer) because it's inherently at odds with the user's intention. This method will always work, although it is admittedly quite a bit more cumbersome to implement.
This does change the context of your "run before leaving" code from client-side to server-side, but I imagine for most cases the difference is inconsequential. Anything you want to run client-side before the client leaves your page is probably not going to change anything the client sees, so it's probably fine to run it server side. If there is specific data you need from the client you can send it through the WebSocket to the server.
The only situation I can think of off the top of my head where this might cause unexpected behavior is if the user loses the WS connection without actually navigating away, e.g. they lose internet or put their computer to sleep. Whether or not that's a big deal is probably highly dependent on what kind of code you're trying to execute.
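Server-side, the detection could look roughly like this (a sketch assuming Node with the "ws" package; the port and the cleanup logic are placeholders):
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (socket, req) {
  // Identify the user here if needed, e.g. from a session cookie on req.

  socket.on('close', function () {
    // Fires when the tab is closed, the user navigates away, or the
    // connection drops: run your "user left the page" code here.
    console.log('client disconnected');
  });
});

// In the browser, the client simply opens: new WebSocket('ws://example.com:8080');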
In many of my projects, the methods mentioned here are unstable. The only thing that works for me is to bind the event as an attribute directly on the body element.
<body onunload="my_function_unload()">
jQuery method:
$('body').attr('onunload', 'my_function_unload()');
From an iframe:
<body onunload="window.parent.my_function_unload()">
jQuery method:
$('<iframe />').load(function() {
  var $body = $(this).contents().find('body');
  $body.attr('onunload', 'window.parent.my_function_unload()');
});
Also important: no arguments in the attribute, and the function must be in the global window scope; otherwise nothing happens.
A common mistake, for example, is wrapping my_function_unload() inside ;( function( $ ) {... or $(document).ready(function(){...; my_function_unload() must be outside that private scope. And don't forget to use the jQuery prefix instead of $ then (when working with WordPress, for example).
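A minimal illustration of that scope pitfall:
// Works: the function is defined in the global (window) scope,
// so onunload="my_function_unload()" can find it.
function my_function_unload() {
  // cleanup here
}

// Does NOT work: the same function trapped inside a private scope
// is invisible to the inline attribute.
;( function( $ ) {
  function my_function_unload() { /* never reachable from the attribute */ }
})( jQuery );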
This is kind of a pain, as Chrome, at least in Version 92.0.4515.131, seems to be clamping the security screws on what you can get away with in beforeunload. I'm unable to make a synchronous ajax request, for example.
If there's any chance the user will be back to your site and you can wait until then to deal with their having closed a window (which does work in my use case), know that setting cookies does currently seem to be fair game during the beforeunload event. Even works when I close the browser. Covers most anything but power cycling the computer, it appears.
Here's a reference example (with getCookie stolen from this SO question):
function setCookie(name, value) {
  document.cookie =
    '{0}={1};expires=Fri, 31 Dec 9999 23:59:59 GMT;path=/;SameSite=Lax'
      .replace("{0}", name)
      .replace("{1}", value);
}

// https://stackoverflow.com/a/25490531/1028230
function getCookie(cookieName) {
  var b = document.cookie.match('(^|;)\\s*' + cookieName + '\\s*=\\s*([^;]+)');
  return b ? b.pop() : '';
}

window.addEventListener('beforeunload', function (e) {
  console.log('cookie value before reset: ' + getCookie('whenItHappened'));
  var now = +new Date();
  console.log("value to be set: " + now);
  setCookie('whenItHappened', now);
  return "some string if you want the 'are you sure you want to leave' dialog to appear";
});

javascript Web worker - pass data to page thread

If I have a var on my main page, and have a worker thread trying to set this var, is there a way the page can access it? Assuming everything is synchronized?
var routeWorker = new Worker('getroute.js');
var checkPatrolRouteFoundTimer;
var rw_resultRoute;
var routeFound = false;

routeWorker.onmessage = function(e) {
  rw_resultRoute = e.data.route;
  routeFound = true;
}

function checkPatrolReady() {
  if(!routeFound)
    checkPatrolRouteFoundTimer = setTimeout("checkPatrolReady()", 1000);
}

function ForcePatrol(index) {
  routeWorker.postMessage(index);
  checkPatrolReady();
  ...
  //do work on route
  ...
}
in this case, the var I'm talking about is rw_resultRoute, and I can see it get set correctly when debugging. But the only thing is that it's set in the worker thread, not in the page thread.
I flow through the ForcePatrol() method the way I'm expecting to, and it looks like rw_resultRoute is being set, since routeFound evaluates to true after the worker finishes.
Technically, it doesn't make sense, since routeFound can be set by the worker and read by the page thread, but rw_resultRoute can only be accessed by the worker.
I truly hope this is possible, otherwise I don't see a purpose for worker threads other than showing alert() messages and updating page HTML.
I truly hope this is possible, otherwise I don't see a purpose for worker threads other than showing alert() messages and updating page HTML.
It is meant to handle processing that would normally lock up the browser. Great for crunching numbers for canvas and running hashing.
in this case, the var I'm talking about is rw_resultRoute, and I can see it get set correctly when debugging. But the only thing is that it's set in the worker thread, not in the page thread.
The worker is separate from the page that spawns it. The only way to pass data is through messaging: you send the data with postMessage and have the onmessage handler process the result. If you are handling different things, set up a switch statement to handle the different message types.
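A minimal sketch of that pattern (the worker file name and the message shapes are illustrative):
// Page thread
var worker = new Worker('getroute.js');

worker.onmessage = function (e) {
  switch (e.data.type) {
    case 'route':
      console.log('route ready', e.data.route);   // do work on the route here
      break;
    case 'progress':
      console.log('progress', e.data.value);      // e.g. update a progress bar
      break;
  }
};

worker.postMessage({ type: 'findRoute', index: 3 });

// getroute.js (worker thread)
onmessage = function (e) {
  if (e.data.type === 'findRoute') {
    var route = { index: e.data.index, steps: [] }; // stand-in for the heavy route computation
    postMessage({ type: 'route', route: route });
  }
};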
I solved the problem. There was some synchronization I wasn't doing correctly. I was using the setTimeout in the wrong way.
var routeWorker = new Worker('getroute.js');
var checkPatrolRouteFoundTimer;
var rw_resultRoute;
var routeFound = false;

routeWorker.onmessage = function(e) {
  rw_resultRoute = e.data.route;
  routeFound = true;
}

function checkPatrolReady() {
  if(routeFound) {
    ...
    //do work on route
    ...
    clearInterval(checkPatrolRouteFoundTimer);
  } else {
    // do any maint here?
  }
}

function ForcePatrol(index) {
  routeWorker.postMessage(index);
  checkPatrolRouteFoundTimer = setInterval(checkPatrolReady, 1000);
}
Any call to setTimeout/setInterval will flow through, and in the first example I was using setTimeout instead of setInterval.
In the new way, calling ForcePatrol will setup the timer, and checkPatrolReady() will evaluate the flag, doing the work and clearing the timer if it is true.
So there is indeed nothing fancy in getting the results from web workers, but I was essentially creating a race condition with the worker results.

What is the best method to detect offline mode in the browser?

I have a web application where there are number of Ajax components which refresh themselves every so often inside a page (it's a dashboard of sorts).
Now, I want to add functionality to the page so that when there is no Internet connectivity, the current content of the page doesn't change and a message appears on the page saying that the page is offline (currently, as these various gadgets on the page try to refresh themselves and find that there is no connectivity, their old data vanishes).
So, what is the best way to go about this?
navigator.onLine
That should do what you're asking.
You probably want to check that in whatever code you have that updates the page. Eg:
if (navigator.onLine) {
  updatePage();
} else {
  displayOfflineWarning();
}
It seems like you've answered your own question. If the gadgets send an async request and it times out, don't update them. If enough of them do so, display the "page is offline" message.
See the HTML 5 draft specification. You want navigator.onLine. Not all browsers support it yet. Firefox 3 and Opera 9.5 do.
It sounds as though you are trying to cover up the problem rather than solve it. If a failed request causes your widgets to clear their data, then you should fix your code so that it doesn't attempt to update your widgets unless it receives a response, rather than attempting to figure out whether the request will succeed ahead of time.
One way to handle this might be to extend the XmlHTTPRequest object with an explicit timeout method, then use that to determine if you're working in offline mode (that is, for browsers that don't support navigator.onLine). Here's how I implemented Ajax timeouts on one site (a site that uses the Prototype library). After 10 seconds (10,000 milliseconds), it aborts the call and calls the onFailure method.
/**
 * Monitor AJAX requests for timeouts
 * Based on the script here: http://codejanitor.com/wp/2006/03/23/ajax-timeouts-with-prototype/
 *
 * Usage: If an AJAX call takes more than the designated amount of time to return, we call the onFailure
 * method (if it exists), passing an error code to the function.
 *
 */
var xhr = {
  errorCode: 'timeout',
  callInProgress: function (xmlhttp) {
    switch (xmlhttp.readyState) {
      case 1: case 2: case 3:
        return true;
      // Case 4 and 0
      default:
        return false;
    }
  }
};

// Register global responders that will occur on all AJAX requests
Ajax.Responders.register({
  onCreate: function (request) {
    request.timeoutId = window.setTimeout(function () {
      // If we have hit the timeout and the AJAX request is active, abort it and let the user know
      if (xhr.callInProgress(request.transport)) {
        var parameters = request.options.parameters;
        request.transport.abort();
        // Run the onFailure method if we set one up when creating the AJAX object
        if (request.options.onFailure) {
          request.options.onFailure(request.transport, xhr.errorCode, parameters);
        }
      }
    },
    // 10 seconds
    10000);
  },
  onComplete: function (request) {
    // Clear the timeout, the request completed ok
    window.clearTimeout(request.timeoutId);
  }
});
Hmm, actually, now that I look into it a bit, it's more complicated than that. Have a read of these links on John Resig's blog and the Mozilla site. The above poster may also have a good point - you're making requests anyway, so you should be able to work out when they fail. That might be a much more reliable way to go.
Make a call to a reliable destination, or perhaps a series of calls, ones that should go through and return if the user has an active net connection - even something as simple as a token ping to google, yahoo, and msn, or something like that. If at least one comes back green, you know you're connected.
I think Google Gears has such functionality; maybe you could check how they did it.
Use the relevant HTML5 API: online/offline status/events.
One possible solution, if the page and the cached page have different URLs, is to just look at which URL you are on. If you are on the URL of the cached page then you are in offline mode. This blog makes a good point about why navigator.onLine is broken.
