This article hit the top of HackerNews recently: http://highscalability.com/blog/2013/9/18/if-youre-programming-a-cell-phone-like-a-server-youre-doing.html#
In it, the author states:
The cell radio is one of the biggest battery drains on a phone. Every time you send data, no matter how small, the radio is powered on for up to 20-30 seconds. Every decision you make should be based on minimizing the number of times the radio powers up. Battery life can be dramatically improved by changing the way your apps handle data transfers. Users want their data now, the trick is balancing user experience with transferring data and minimizing power usage. A balance is achieved by apps carefully bundling all repeating and intermittent transfers together and then aggressively prefetching the intermittent transfers.
I would like to modify $.ajax to add an option like "doesn't need to be done right now, just do this request when another request is launched". What would be a good way to go about this?
I started with this:
(function($) {
    var batches = [];
    var oldAjax = $.fn.ajax;
    var lastAjax = 0;
    var interval = 5*60*1000; // Should be between 2-5 minutes

    $.fn.extend({batchedAjax: function() {
        batches.push(arguments);
    }});

    var runBatches = function() {
        var now = new Date().getTime();
        var batched;
        if (lastAjax + interval < now) {
            while (batched = batches.pop()) {
                oldAjax.apply(null, batched);
            }
        }
    };

    setInterval(runBatches, interval);

    $.fn.ajax = function() {
        runBatches();
        oldAjax.apply(null, arguments);
        lastAjax = new Date().getTime(); // record when the radio was last used
    };
})(jQuery);
I can't tell from the wording of the article, but I guess a good batch interval is 2-5 minutes, so I just used 5.
Is this a good implementation?
How can I make this a true modification of just the ajax method, by adding a {batchable:true} option to the method? I haven't quite figured that out either.
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
Are there other things here that would cause a battery to drain faster?
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
I made some progress on adding the option to $.ajax, started to edit the question, and realized it's better as an answer:
(function($) {
    var batches = [];
    var oldAjax = $.fn.ajax;
    var lastAjax = 0;
    var interval = 5*60*1000; // Should be between 2-5 minutes

    var runBatches = function() {
        var now = new Date().getTime();
        var batched;
        if (lastAjax + interval < now) {
            while (batched = batches.pop()) {
                oldAjax.apply(null, batched);
            }
        }
    };

    setInterval(runBatches, interval);

    $.fn.ajax = function(url, options) {
        if (options && options.batchable) {
            batches.push(arguments);
            return;
        }
        runBatches();
        oldAjax.apply(null, arguments);
        lastAjax = new Date().getTime();
    };
})(jQuery);
That was actually fairly straightforward. I'd love to see a better answer though.
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
From an iPhone 4, iOS 6.1.0 Safari environment:
I wrote an app with a countdown timer that updated an element's text on one-second intervals. The DOM tree was of medium complexity. The app was a relatively simple calculator that didn't do any AJAX. However, I always had a sneaking suspicion that those once-per-second reflows were killing me. My battery sure seemed to deplete rather quickly whenever I left the phone turned on on a table, with Safari on the app's webpage.
And there were only two timeouts in that app. Now, I don't have any quantifiable proof that the timeouts were draining my battery, but losing about 10% every 45 minutes from this dopey calculator was a little unnerving. (Who knows though, maybe it was the backlight.)
On that note: You may want to build a test app that does AJAX on intervals, other things on intervals, etc, and compare how each function drains your battery under similar conditions. Getting a controlled environment might be tricky, but if there is a big enough difference in drain, then even "imperfect" testing conditions will yield noticeable-enough results for you to draw a conclusion.
However, I found out an interesting thing about how iOS 6.1.0 Safari handles timeouts:
The timeouts don't run their callbacks if you turn off the screen.
Consequently, long-term timeouts will "miss their mark."
If my app's timer was to display the correct time (even after I closed and reopened the screen), then I couldn't go the easy route and do secondsLeft -= 1. If I turned off the screen, then the secondsLeft (relative to my starting time) would have been "behind," and thus incorrect. (The setTimeout callback did not run while the screen was turned off.)
The solution was that I had to recalculate timeLeft = fortyMinutes - (new Date().getTime() - startTime) on each interval.
Also, the timer in my app was supposed to change from green, to lime, to yellow, to red, as it got closer to expiry. Since, at this point, I was worried about the efficiency of my interval-code, I suspected that it would be better to "schedule" my color changes for their appropriate time (lime: 20 minutes after starting time, yellow: 30 mins, red: 35) (this seemed preferable to a quadruple-inequality-check on every interval, which would be futile 99% of the time).
However, if I scheduled such a color change, and my phone's screen was turned off at the target time, then that color change would never happen.
The solution was to check, on each interval, if the time elapsed since the last 1-second timer update had been ">= 2 seconds". (This way, the app could know if my phone had had its screen turned off; it was able to realize when it had "fallen behind.") At that point, if necessary, I would "forcibly" apply a color change and schedule the next one.
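For illustration, here is a minimal sketch of that pattern (fortyMinutes and startTime come from the description above; everything else is made up):
var fortyMinutes = 40 * 60 * 1000;
var startTime = new Date().getTime();
var lastTick = startTime;
setInterval(function () {
    var now = new Date().getTime();
    // Always derive the remaining time from the wall clock, so it is still
    // correct even if callbacks were skipped while the screen was off.
    var timeLeft = fortyMinutes - (now - startTime);
    if (now - lastTick >= 2000) {
        // The 1-second interval "fell behind": the screen was probably off.
        // Re-apply any state (e.g. a color change) that was scheduled meanwhile.
    }
    lastTick = now;
    // ... update the displayed timer from timeLeft here ...
}, 1000);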
(Needless to say, I later removed the color-changer...)
So, I believe this confirms my claim that
iOS 6.1.0 Safari does not execute setTimeout callback functions if the screen is turned off.
So keep this in mind when "scheduling" your AJAX calls, because you will probably be affected by this behavior as well.
And, using my proposition, I can answer your question:
At least for iOS, we know that setTimeout sleeps while the screen is off.
Thus setTimeout won't give your phone "nightmares" ("keep it awake").
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
If you can get this implementation to work correctly then it seems like it would be worthwhile.
You will incur latency for every AJAX request you make, which will slow down your app to some degree. (Latency is the bane of page loading time, after all.) So you will definitely achieve some gain by "bundling" requests. Extending $.ajax such that you can "batch" requests will definitely have some merit.
The article you've linked clearly focuses on optimizing power consumption for apps (yes, the weather widget example is horrifying). Actively using a browser is, by definition, a foreground task; plus something like ApplicationCache is already available to reduce the need for network requests. You can then programmatically update the cache as required and avoid DIY.
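For instance, a rough sketch of the ApplicationCache route (the manifest name is made up; it assumes the page declares <html manifest="app.appcache">):
var cache = window.applicationCache;
// When a new cache has been downloaded, start using it without re-requesting assets.
cache.addEventListener('updateready', function () {
    if (cache.status === cache.UPDATEREADY) {
        cache.swapCache();
    }
});
// Check for a newer manifest programmatically, e.g. bundled with other network activity.
if (cache.status === cache.IDLE) {
    cache.update();
}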
Sceptical side note: if you are using jQuery as part of your HTML5 app (perhaps wrapped in Sencha or similar), perhaps the mobile app framework has more to do with request optimization than the code itself. I have no proof whatsoever, but goddammit this sounds about right :)
How can I make this a true modification of just the ajax method, by adding a {batchable:true} option to the method? I haven't quite figured that out either.
A perfectly valid approach, but to me this sounds like duck punching gone wrong. I wouldn't. Even if you correctly default batchable to false, personally I would rather use a facade (perhaps even in its own namespace?):
var gQuery = {}; //gQuery = green jQuery, patent pending :)
gQuery.ajax = function(options,callback){
//your own .ajax with blackjack and hooking timeouts, ultimately just calling
$.ajax(options);
}
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
Native implementations of setInterval and setTimeout are very similar AFAIK; think of the latter not firing while the website is in the background (e.g. the inactivity prompts on online banking sites); when a page is not in the foreground, its execution is basically halted. If an API is available for such "deferrals" (the article mentions some relevant iOS 7 capabilities) then it's likely a preferable approach; otherwise I see no reason to avoid setInterval.
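As one illustration of such a deferral (not something from the article), the Page Visibility API lets you stop polling while the page is in the background (where supported; older browsers used prefixed names):
var pollTimer = null;
function poll() {
    // ... do the periodic work here ...
    pollTimer = setTimeout(poll, 5 * 60 * 1000);
}
document.addEventListener('visibilitychange', function () {
    if (document.hidden) {
        clearTimeout(pollTimer);  // nothing is scheduled while backgrounded
    } else {
        poll();                   // resume as soon as the page is visible again
    }
});
poll();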
Are there other things here that would cause a battery to drain faster?
I'd speculate that any heavy load would (from calculating pi to pretty 3d transitions perhaps). But this sounds like premature optimization to me and reminds me of an e-reader with battery-saving mode that turned the LCD screen completely off :)
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
The article pointed out a weather app being unreasonably greedy, and that would concern me. It seems to be a development oversight though more than anything else, as in fetching data more often than it's really needed. In an ideal world, this should be nicely handled on OS level, otherwise you'd end up with an array of competing workarounds. IMO: don't bother until highscalability posts another article telling you to :)
Here is my version:
(function($) {
    var batches = [],
        ajax = $.fn.ajax,
        interval = 5*60*1000, // Should be between 2-5 minutes
        timeout = setTimeout(function() { $.fn.ajax(); }, interval);

    $.fn.ajax = function(url, options) {
        var batched, returns;
        if (typeof url === "string") {
            batches.push(arguments);
            if (options && options.batchable) {
                return;
            }
        }
        while (batched = batches.shift()) {
            returns = ajax.apply(null, batched);
        }
        clearTimeout(timeout);
        timeout = setTimeout(function() { $.fn.ajax(); }, interval);
        return returns;
    };
})(jQuery);
I think this version has the following main advantages:
If there is a non-batchable ajax call, the connection is used to send all batches. This resets the timer.
Returns the expected return value on direct ajax calls
A direct processing of the batches can be triggered by calling $.fn.ajax() without parameters
As far as hacking the $.ajax method, I would :
try to also preserve the Promise mechanism provided by $.ajax,
take advantage of one of the global ajax events to trigger ajax calls,
maybe add a timer, to have the batch being called anyways in case no "immediate" $.ajax call is made,
give a new name to this function (in my code: $.batchAjax) and keep the original $.ajax.
Here is my go:
(function ($) {
var queue = [],
timerID = 0;
function ajaxQueue(url, settings) {
// custom deferred used to forward the $.ajax promise
var dfd = new $.Deferred();
// when called, this function executes the $.ajax call
function call() {
$.ajax(url, settings)
.done(function () {
dfd.resolveWith(this, arguments);
})
.fail(function () {
dfd.rejectWith(this, arguments);
});
}
// set a global timer, which will trigger the dequeuing in case no ajax call is ever made ...
if (timerID === 0) {
timerID = window.setTimeout(ajaxCallOne, 5000);
}
// enqueue this function, for later use
queue.push(call);
// return the promise
return dfd.promise();
}
function ajaxCallOne() {
window.clearTimeout(timerID);
timerID = 0;
if (queue.length > 0) {
var f = queue.shift();
// async call : wait for the current ajax events
//to be processed before triggering a new one ...
setTimeout(f, 0);
}
}
// use the two functions :
$(document).bind('ajaxSend', ajaxCallOne);
// or :
//$(document).bind('ajaxComplete', ajaxCallOne);
$.batchAjax = ajaxQueue;
}(jQuery));
In this example, the hard-coded delay of 5 seconds defeats the purpose of "if less than 20 seconds between calls, it drains the battery". You can put a bigger one (5 minutes?), or remove it altogether - it all depends on your app really.
Regarding the general question "How do I write a web app which doesn't burn a phone's battery in 5 minutes ?" : it will take more than one magic arrow to deal with that one. It is a whole set of design decisions you will have to take, which really depends on your app.
You will have to arbitrate between loading as much data as possible in one go (and possibly sending data which won't be used) vs fetching only what you need (and possibly sending many small individual requests).
Some parameters to take into account are :
volume of data (you don't want to drain your clients data plan either ...),
server load,
how much can be cached,
importance of being "up to date" (5 minutes delay for a chat app won't work),
frequency of client updates (a network game will probably require lots of updates from the client, a news app probably less ...).
One rather general suggestion : you can add a "live update" checkbox, and store its state client side. When unchecked, the client should hit a "refresh" button to download new data.
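A minimal sketch of that suggestion (the element IDs and the refresh function are made up):
function fetchNewData() {
    // hypothetical: download and render the latest data
}
// Restore the stored preference, defaulting to "live updates on".
var liveUpdate = localStorage.getItem('liveUpdate') !== 'off';
$('#live-update').prop('checked', liveUpdate).on('change', function () {
    liveUpdate = this.checked;
    localStorage.setItem('liveUpdate', liveUpdate ? 'on' : 'off');
});
// Poll only while live updates are enabled.
setInterval(function () {
    if (liveUpdate) {
        fetchNewData();
    }
}, 5 * 60 * 1000);
// Manual refresh for when live updates are off.
$('#refresh').on('click', fetchNewData);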
Here is my go; it somewhat grew out of what @Joe Frambach posted, but I wanted the following additions:
retain the jXHR and error/success callbacks if they were provided
Debounce identical requests (by url and options match) while still triggering the callbacks or jqXHRs provided for EACH call
Use AjaxSettings to make configuration easier
Don't have each non-batched ajax call flush the batch; those should be separate processes IMO, so supply an option to force a batch flush instead.
Either way, this sucker would most likely be better done as a separate plugin rather than overriding and affecting the default .ajax function... enjoy:
(function($) {
$.ajaxSetup({
batchInterval: 5*60*1000,
flushBatch: false,
batchable: false,
batchDebounce: true
});
var batchRun = 0;
var batches = {};
var oldAjax = $.fn.ajax;
var queueBatch = function(url, options) {
var match = false;
var dfd = new $.Deferred();
batches[url] = batches[url] || [];
if(options.batchDebounce || $.ajaxSettings.batchDebounce) {
if(!options.success && !options.error) {
$.each(batches[url], function(index, batchedAjax) {
if($.param(batchedAjax.options) == $.param(options)) {
match = index;
return false;
}
});
}
if(match === false) {
batches[url].push({options:options, dfds:[dfd]});
} else {
batches[url][match].dfds.push(dfd);
}
} else {
batches[url].push({options:options, dfds:[dfd]});
}
return dfd.promise();
}
var runBatches = function() {
$.each(batches, function(url, batchedOptions) {
$.each(batchedOptions, function(index, batchedAjax) {
oldAjax.call(null, url, batchedAjax.options).then(
function(data, textStatus, jqXHR) {
var args = arguments;
$.each(batchedAjax.dfds, function(index, dfd) {
dfd.resolve.apply(dfd, args);
});
}, function(jqXHR, textStatus, errorThrown) {
var args = arguments;
$.each(batchedAjax.dfds, function(index, dfd) {
dfd.reject.apply(dfd, args);
});
}
)
});
});
batches = {};
batchRun = new Date().getTime();
}
setInterval(runBatches, $.ajaxSettings.batchInterval);
$.fn.ajax = function(url, options) {
options = options || {};
if (options.batchable) {
var xhr = queueBatch(url, options);
if((new Date().getTime()) - batchRun >= (options.batchInterval || $.ajaxSettings.batchInterval)) {
runBatches();
}
return xhr;
}
if (options.flushBatch) {
runBatches();
}
return oldAjax.call(null, url, options);
};
})(jQuery);
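Hypothetical usage of the version above (URLs and data are placeholders; note that, like the rest of this thread, it patches $.fn.ajax rather than $.ajax):
// Queued and debounced: identical batchable requests share one real request,
// but every caller still gets a promise that resolves with the response.
$.fn.ajax('/api/stats', { batchable: true }).done(function (data) {
    console.log('stats arrived with the next batch', data);
});
// Sent immediately; flushBatch also pushes any queued batchable requests out with it.
$.fn.ajax('/api/save', { type: 'POST', data: { id: 1 }, flushBatch: true });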
Related
I am testing latency of a call through javascript and developer console.
In JS the measurement is done simply by adding start-time variables, e.g.:
var start_execution=Math.floor( new Date().getTime() );
// Call a URL asynchronously by injecting a script tag
var element = document.createElement("script");
element.src = request_url;
document.getElementsByTagName("script")[0].parentNode.appendChild(element);
// In the response to the call, record the end time and call a function to compute the latency
var end_execution=Math.floor( new Date().getTime() );
// function call to generate latency
calculateLatency();
function calculateLatency(){
var latency= end_execution-start_execution;
}
The method works fine if run in isolation, where the latency figure is in line with the browser's developer console / network panel. But on an actual website with lots of asynchronous content, the numbers measured by JS are inflated up to 5X.
A latency of 1000ms computed through JS shows as 200ms in the network panel.
This behavior is very frequent and the difference varies.
I suspect there is some sort of browser queue which handles asynchronous processing, and under peak load the request/response gets stuck in that queue.
The option I am exploring is the Resource Timing API (http://www.w3.org/TR/resource-timing), but browser support is limited here.
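For what it's worth, where Resource Timing is supported, reading the browser's own measurement looks roughly like this (request_url is the same variable as above):
if (window.performance && performance.getEntriesByName) {
    var entries = performance.getEntriesByName(request_url);
    if (entries.length) {
        // duration is the fetch time as the browser measured it, so it is not
        // inflated by script-queueing overhead the way a Date-based diff can be.
        var latency = entries[entries.length - 1].duration;
    }
}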
I am looking for some explanation of this behavior and a way to compute actual latency in JavaScript (the same as shown in the network panel). Also, recommendations on how to effectively use a JS cutoff time for network calls, since in such cases the inflated values might lead to unexpected behavior.
Why I want to do this: to set a timeout for non-performing network calls. But it is not fair to use setTimeout and reject calls when the actual cause of the latency is browser processing overhead.
You are absolutely right in your suggestion.
Almost everything in JS is driven by events (except some cases like the page-parsing process).
Every browser has a single thread per window for JavaScript events; every event handler executes sequentially, and every event (including propagation/bubbling and defaults) is processed completely before the next event is processed.
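A tiny illustration of that single queue: a timer that is already due still cannot fire until the currently running code has finished.
var scheduled = new Date().getTime();
setTimeout(function () {
    // Logs roughly 500, not 0: the callback had to wait for the busy loop.
    console.log('fired after', new Date().getTime() - scheduled, 'ms');
}, 0);
var blockUntil = new Date().getTime() + 500;
while (new Date().getTime() < blockUntil) {
    // deliberately block the only thread for ~500ms
}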
For more information refer to this tutorial
As for recommendations on effective usage of events queue, there are some advice:
Avoid long-running scripts
Avoid synchronous XMLHttpRequest
Do not allow scripts from different frames to control the same global state
Do not use alert dialog boxes for debugging since they may completely change your program logic.
Personally I would use http://momentjs.com/ for just about anything that is time-related. In addition to that I would use the duration plugin https://github.com/jsmreese/moment-duration-format.
jQuery
To use it in jQuery manual style
var start_execution = moment();
var end_execution = moment();
var jqxhr = $.get( "google.com",
function(data) {
end_execution = moment();
})
.done(function() {
end_execution = moment();
})
.fail(function() {
end_execution = moment();
})
.always(function() {
var ms = end_execution.diff(start_execution);
var duration = moment.duration(ms);
console.log(duration);
});
This is correctly written and will work even if the request fails or times out.
Just for clarification, it would be wrong to write:
var start_execution = moment();
var jqxhr = $.get( "google.com",
function(data) {
//do something with the data;
});
var end_execution = moment();
var ms = end_execution.diff(start_execution);
var duration = moment.duration(ms);
console.log(duration);
It measures nothing other than how much time it takes jQuery to initialize the request; most likely end_execution happens before the actual request for that asset/url is even sent out.
Angular
With Angular you can write an httpInterceptor service that logs the times when events happened.
var HttpCallsApp = angular.module("HttpCallsApp", []);
HttpCallsApp.config(function ($provide, $httpProvider) {
$provide.factory("MyHttpInterceptor", function ($q) {
var log = ApplicationNamespace.Util.logger;
return {
request: function (config) {
log.info("Ajax %s request [%s] initialized", config.method, config.url);
return config || $q.when(config);
},
response: function (response) {
log.info("Ajax %s response [%s] compleated : %s ( %s )",
response.config.method, response.config.url, response.status, response.statusText);
return response || $q.when(response);
},
requestError: function (rejection) {
log.info(rejection);
// Return the promise rejection.
return $q.reject(rejection);
},
responseError: function (rejection) {
log.info(rejection);
// Return the promise rejection.
return $q.reject(rejection);
}
};
});
$httpProvider.interceptors.push("MyHttpInterceptor");
});
In the Angular case, the application namespace contains an application-scope logger instance with timestamps that I set in the app config with logEnhancerProvider.datetimePattern = "HH:mm:ss.SSS". From a code-quality perspective the Angular approach is an order of magnitude better, but I prefer not to go into details - it is not that you cannot write the same thing in jQuery, but it is not your default option.
Chrome adhoc test (or about any other modern browser)
ctrl + shift + n (opens new incognito window, ensures assets are not cached client side)
F12 (opens developer tools)
network (shows assets requests)
set to record network log
enter your url you want to test
Click XHR filter
Open the item and "Timing"
You should see a timing breakdown for the request (DNS lookup, connecting, waiting, receiving).
Fiddler
If you don't trust your web browser, or the JavaScript runs outside a browser (in a Flash, .NET, Java, etc. program), you can still get the request timings with Fiddler. In that case you monitor the packets sent.
You can see just about anything you would want to know. As a personal preference, I have changed the completed-time display to a time-stamp format.
Instead of using datetime, where milliseconds can vary depending on system factors, you could use console.time() and console.timeEnd() (they do not exist in old IE). Even better if you can use performance.now, but it has its own problems. That is why I prefer to use momentjs.
If you want to do this accurately and in legacy "browsers", then at least at Google they have used the following approach: you add a Flash component that can do this accurately. This brings other problems, like data-pipeline limits if you log a lot, but they are easier problems to solve than creating support for legacy IE.
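For reference, minimal sketches of those two alternatives (the URL is a placeholder):
// console.time / console.timeEnd: the browser prints the elapsed time for the label.
console.time('request');
$.get('/some/url').always(function () {
    console.timeEnd('request');
});

// performance.now(): a monotonic, sub-millisecond clock where supported,
// unaffected by system clock adjustments that can skew Date-based measurements.
var t0 = performance.now();
$.get('/some/url').always(function () {
    console.log('took', performance.now() - t0, 'ms');
});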
I made a CMS which during operation pulls large amounts of data.
CMS is made in PHP, MySQL, jQuery, Bootstrap and use AJAX.
The problem is that losing your internet connection can cause problems with displaying and scrolling.
I would love a good way to show an error and block all functions on the site when there is no internet connection. When the connection is re-established, all functions should be allowed again.
Thanks!
(Sorry for my bad English.)
If you are using jQuery, you can just hook on the global error handler and lock up your application when an error occurs. The lock up screen could simply ask to try again.
$( document ).ajaxError(function() {
// lock your UI here
});
Also, once the UI is locked, you could execute a function that would ping your server in an Exponential Backoff fashion and automatically unlock the application on network restore.
Locking your app can easily be done with jQuery's blockUI plugin.
Example
(function ($) {
var locked = false;
var errorRetryCount = 0;
var blockUiOptions = { message: "Oops! Could not reach the server!" };
// change this function to adjust the exponential backoff delay
function backoff(n) {
return Math.pow(2, n) * 100;
}
$(function () {
$( document ).ajaxError(function (event, jqxhr, settings) {
errorRetryCount += 1;
if (!locked) {
locked = true;
$.blockUI(blockUiOptions);
}
// retry to send the request...
setTimeout(function () { $.ajax(settings); }, backoff(errorRetryCount));
}).ajaxSuccess(function () {
locked && $.unblockUI();
locked = false;
errorRetryCount = 0;
});
});
})(jQuery);
Note: You may not want to retry indefinitely your request upon network failure, and would want to quit retrying at some point. Since this is out of the scope of this question, I'll leave it as it is. However, you may take a look at this related question, which may help you sort this part out.
If you're using jQuery already, you could create a simple ajax call to your server, and if it fails within a couple of seconds, either your server or the client's internet connection is down.
Something like this:
setInterval(function() {
$.ajax({
url: "https://cms.example.com/ping",
})
.fail(function( data ) {
alert('Connection lost?');
// remember do to something smart which shows the error just once
// instead of every five seconds. Increasing the interval every
// time it fails seems a good start.
});
}, 5*1000);
Using plain JavaScript and simple code:
window.navigator.onLine ? 'on' : 'off'
It is supported by almost every browser; please check Can I Use.
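If you go this route, the matching online/offline events give you continuous monitoring rather than a one-off check (a sketch; note these only reflect the state of the network interface, not whether your server is actually reachable):
window.addEventListener('offline', function () {
    // lock the UI / show the "no connection" message here
});
window.addEventListener('online', function () {
    // unlock the UI and refresh any stale data here
});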
Edit: I re-read your question and misunderstood it on my first pass, so this wouldn't be valid for continuous monitoring... but I'll leave it here anyway as it may be useful for someone else.
I would suggest loading a small JS file that adds a class to an element of your page and then checking if that class is applied after the fact... assuming you are using jQuery.
file on the remote server loaded into your page after jQuery via script tag
$('html').addClass('connected');
local code
if($('html').hasClass('connected')) {
// connected
} else {
// not connected
}
$(document).ready(function () {
var searchValue = "";
setInterval(checkTextboxChanged, 0.5);
function checkTextboxChanged() {
var currentValue = $('#dept').val();
if (currentValue != searchValue) {
searchValue = currentValue;
TextboxChanged();
}
}
function TextboxChanged() {
$.ajax({
url: "<?php echo base_url();?>check_price.html",
data: "dept="+$("#dept").val()+"&arrive="+$("#arrive").val()+"&parking="+$("#parking").val(),
success: function(result){
$("#check_price").html(result);
}
});
}
});
This is working fine in Chrome and Firefox but not in IE. Is there any problem with the setInterval method? Does IE support it?
setInterval with a small timeout is a really really bad idea, whichever browser you're using.
The thing with setInterval is that the event is triggered on the specified interval regardless of whether the page is ready for it.
With a short interval time, this can very easily lead to a pile-up of events that are fired faster than the site can deal with them.
Your usage here, with ajax involved, is a classic example of this: if the ajax call takes longer than half a second to complete (which is easily possible), then you'll end up with multiple events being fired virtually simultaneously. This will lead to your ajax service being swamped with simultaneous calls, which will make its response time slow down, and in turn make the problem in the browser even worse.
With this kind of thing, it is almost always better to use a self re-firing setTimeout call, which will ensure that the event is never triggered until the previous one has completed.
However, either way, your interval of 0.5ms is a crazy short time span for any kind of interval handling. You will very likely have performance issues with a timeout that short, whatever it is you're doing.
I suspect that you actually intended it to be half a second rather than half a millisecond. If that's the case, you should change it to 500 rather than 0.5.
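To make that concrete, here is a sketch of the question's code rewritten with a self-re-firing setTimeout and a half-second delay (the URL building is simplified to a placeholder):
var searchValue = "";
function checkTextboxChanged() {
    var currentValue = $('#dept').val();
    if (currentValue != searchValue) {
        searchValue = currentValue;
        $.ajax({
            url: "check_price.html", // placeholder for the PHP-generated URL
            data: "dept=" + $("#dept").val() + "&arrive=" + $("#arrive").val() + "&parking=" + $("#parking").val(),
            success: function (result) {
                $("#check_price").html(result);
            }
        }).always(function () {
            // schedule the next check only after this request has finished
            setTimeout(checkTextboxChanged, 500);
        });
    } else {
        setTimeout(checkTextboxChanged, 500);
    }
}
setTimeout(checkTextboxChanged, 500);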
There has to be an easy way to do this, but I'm new to JS.
I have a javascript program that (1) takes user input, (2) updates the webpage based on that input, then (3) performs a lengthy calculation. The trouble is that the webpage doesn't register the update till after the lengthy calculation. Isn't there a way to pause execution so that the page can update before the long calculation?
I've tried setTimeout and window.setTimeout, but they made no difference.
The program is for playing a game: the user inputs a move, the script updates the position, then calculates its next move. postMessage prints text messages using div.innerHTML; buttonFn takes the input from the user, updates the position, prints a message, then starts the computer calculating.
function buttonFn(arg){
var hst = histButt;
hst.push(arg);
var nwmv = hst.clone();
postMessage("New move: " + nwmv.join());
if(status == opposite(comp) && !pauseQ){
var mvsposs = movesFromPos(posCur,status);
if(mvsposs.has(nwmv)){
updatePosCur(nwmv);
//waitasec();
if(comp == status && !pauseQ){
compTurn();
};
}
else{
histButt = nwmv;
};
};
};
Yes there is: call your function like this. Using setTimeout will allow a page reflow prior to your JS executing.
function buttonFn(arg){
var hst = histButt;
hst.push(arg);
var nwmv = hst.clone();
postMessage("New move: " + nwmv.join());
if(status == opposite(comp) && !pauseQ){
var mvsposs = movesFromPos(posCur,status);
if(mvsposs.has(nwmv)){
updatePosCur(nwmv);
//waitasec();
if(comp == status && !pauseQ){
setTimeout(function(){
compTurn();
},0);
};
}
else{
histButt = nwmv;
};
};
};
Remember, JS is very event-driven friendly. If you can move things off and call them later, do it. That's the only way we can support multi-threaded-like behavior.
If you only need to support modern browsers (or if you use a transpiler), you can now use ES6 features to make this much easier and more in the style the original questioner was trying to do. (I realize the question is 8 years old - no harm in a new, more current answer!)
For example you can do something like this:
// helper function to use a setTimeout as a promise.
function allowUpdate() {
return new Promise((f) => {
setTimeout(f, 0);
});
}
// An infinitely looping operation that doesn't crash the browser.
async function neverStopUpdating(someElement) {
let i = 0;
while (true) {
someElement.innerText = i;
i++;
await allowUpdate();
}
}
If you're trying to do a hard computation you'll want to make sure not to do this await too frequently - in this example, in Chrome at the time of writing, i only increments by about 150 per second because the context switch of a setTimeout is not fast (whereas you'd get hundreds of thousands per second if you didn't yield for updates). You'd likely want to find a balance: either always perform some number of iterations before allowing an update, or, e.g., call Date.now() in your loop and yield for an update whenever 100ms have passed since the last time you allowed an update, as in the sketch below.
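For example, a sketch of that second option, reusing the allowUpdate helper above (the per-item work is a placeholder):
async function heavyComputation(items, processItem) {
    let lastYield = Date.now();
    for (const item of items) {
        processItem(item); // placeholder for the real per-item work
        if (Date.now() - lastYield >= 100) {
            await allowUpdate();   // let the browser repaint
            lastYield = Date.now();
        }
    }
}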
You can do the update, wait for a bit of time, then do the calculation.
OR
You can use webworkers on browsers that support them.
Without having actual code, that is the best answer that I can give you.
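For reference, the general shape of the Web Worker route (every name here is a made-up placeholder, since the actual code wasn't posted):
// main page: hand the lengthy calculation to a worker so the UI can update freely
var worker = new Worker('calc-worker.js');
worker.onmessage = function (e) {
    showComputerMove(e.data);        // placeholder: apply the result to the page
};
worker.postMessage(currentPosition); // placeholder: the game state to compute from

// calc-worker.js: runs off the main thread
onmessage = function (e) {
    postMessage(computeNextMove(e.data)); // placeholder: the lengthy calculation
};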
JavaScript is single-threaded. If you do your calculation server-side, you could get the results via ajax, which is called asynchronously and doesn't block your UI.
So I made some timers for a quiz. The thing is, I just realized when I put
javascript: alert("blah");
in the address, the popup alert box pauses my timer. Which is very unwanted in a quiz.
I don't think there is any way to stop this behaviour... but I'll ask anyway.
If there is not, mind suggesting what should I do?
Never, ever rely on javascript (or any other client-side time) to calculate elapsed times for operations done between postbacks, or different pages.
If you always compare server dates, it will be hard for people to cheat:
first page request, store the server time
ping with javascript calls each N seconds, compare the 2 server times, and return the elapsed (just for show)
when the user submits the form, compare the 2 server times, calculate the elapsed time, and discard the ones which took too long (ie: possible cheaters)
// Preserve native alert() if you need it for something special
window.nativeAlert = window.alert;
window.alert = function(msg) {
// Do something with msg here. I always write mine to console.log,
// but then I have rarely found a use for a real modal dialog,
// and most can be handled by the browser (like window.onbeforeunload).
};
No, there is no way to prevent alert from stopping the single thread in JavaScript. Probably you can use some other way of user notification, for example a floating layer.
It's modal and stops execution. Consider an alternative which does not pause execution like a Lightbox technique.
I think the question asker is trying to prevent cheating. Since a user can type javascript: alert("paused"); into the address bar, or make a bookmarklet to do that, it's easy to pause the quiz and cheat.
The only thing I can think of is to use Date() to get the current time, and check it again when the timer fires. Then if the time difference is not reasonably close to the intended timer duration, show an admonishment and disqualify the answer to that question or let them flunk the quiz. There is no way to prevent the user from pausing your quiz, but it should be possible to catch them.
Of course with any cheat-proofing, you motivate people to become better cheaters. A person could change the system time on their PC, and fool the javascript Date() constructor which gets the time from the operating system.
You can use an interval to do a repeated clock comparison against a one second interval length. The interval handler can also update a time-remaining field on the user's display. Then the users can feel the pressure build as time runs out on their quiz. Fun times!
The feedback loop on SyaZ's question has clarified the issues at stake.
Here's an attempt to summarize the good answers so far:
Client scripts are by nature easy to manipulate to cheat an online quiz. See @Filini's server-side approach.
window.alert = function(msg) {} will override alert() and perhaps defeat the low-hanging fruit of putting javascript:alert('Pausing page so I can google the answer') or "I'll use my Phone-A-Friend now" in the address bar. Courtesy of @eyelidlessness.
If you must use a client-side approach, instead of using setTimeout(), you could use a custom date-compare-based pause function like this (concept by @Mnebuerquo, code example by me (@micahwittman)):
Example:
var beginDate = new Date();
function myTimeout(milsecs){
var curDate;
do { curDate = new Date(); }
while((curDate-beginDate) < milsecs);
}
function putDownYourPencils(milsecs){
myTimeout(milsecs);
var seconds = milsecs / 1000;
alert('Your ' + seconds + ' seconds are up. Quiz is over.');
}
putDownYourPencils(3000);
Ultimately, you cannot trust user input. Without keeping track of the time elapsed on the server, there's just no guarantee the data hasn't been manipulated.
However, if you're confident your quiz-takers aren't JavaScript-savvy, and are merely relying on a "trick" they found somewhere, you could test for cheating (pausing) with the following code, which doesn't require modifying window.alert:
var timer = {
startDatetime: null,
startSec: 0,
variance: 1,
exitOnPause: true,
count: function (config) {
var that = this;
if (typeof config == "object" && typeof parseInt(config.seconds) == "number" && !isNaN(parseInt(config.seconds)))
{
if (typeof parseFloat(config.variance) == "number" && !isNaN(parseFloat(config.variance))) this.variance = config.variance;
if (typeof config.exitOnPause == "boolean") this.exitOnPause = config.exitOnPause;
if (config.seconds > 0)
{
if (!this.startSec) this.startSec = config.seconds;
if (!this.startDatetime) this.startDatetime = new Date();
var currentDatetime = new Date();
if (currentDatetime.getTime() - this.startDatetime.getTime() > (this.startSec - config.seconds) * this.variance * 1000)
{
if (typeof config.onPause == "function") config.onPause();
if (!this.exitOnPause)
{
this.startDatetime = new Date();
this.startSec = config.seconds--;
window.setTimeout(function () { that.count(config); }, 1000);
}
}
else
{
config.seconds--;
window.setTimeout(function () { that.count(config); }, 1000);
}
}
else
{
if (typeof config.onFinish == "function") config.onFinish();
}
}
}
};
This timer object has a single method, count(), which accepts an object as input. It expects a seconds property in the input object at minimum.
For some reason, window.setTimeout doesn't always work as expected. Sometimes, on my machine, window.setTimeout(x, 1000), which should execute the code after 1 second, took more than 2 seconds. So, in a case like this, you should allow a variance, so people who aren't cheating don't get flagged as cheaters. The variance defaults to 1, but it can be overridden in the input object. Here's an example of how to use this code, which allows a 2.5x margin of "wiggle room" for slow-pokes:
timer.count({
seconds: 10,
onPause: function () { alert("You cheated!"); window.location.replace("cheatersAreBad.html"); },
onFinish: function () { alert("Time's up!"); },
variance: 2.5
});
With a solution like this, you could use Ajax to tell a server-side script that the user has paused the timer or redirect the user to a page explaining they were caught cheating, for example. If, for some reason, you wanted to allow the user to continue taking the quiz after they've been caught cheating, you could set exitOnPause to false:
timer.count({
seconds: 10,
exitOnPause: false,
onPause: function () { recordCheaterViaAjax(); },
onFinish: function () { alert("Time's up!"); },
variance: 2.5
});
The server session could be set to expire at, say, 1 hour. The javascript could be used as only a display tool for the user to know how much time is left. If he decides to cheat by pausing the timer, then he might be surprised when posting his test that his session has timed out.