Get Gmail thread subjects (in a sane way) - javascript

I'm trying to use the Gmail API to retrieve all the thread subjects in a gmail account.
That's easy with threads.list, but that mainly gets the ID of the thread, not the subject.
The only way I've found is by using threads.list then, for each thread, calling threads.get and fetching the subject from the headers in the payload metadata.
Obviously this makes a lot of API calls: 101 calls if there are 100 threads, for example.
Is there a better way?
Here's the code I'm currently using:
var getIndivThread = function(threads) {
  threads.threads.forEach(function(e) {
    indivThreadRequest.id = e.id;
    gmail.api.users.threads.get(indivThreadRequest).execute(showThread);
  });
};
var indivThreadRequest = {
  format: 'metadata',
  metadataHeaders: ['subject'],
  userId: myUserId,
  maxResults: 1
};
var showThread = function(thread) {
  console.log(thread.messages[0].payload.headers[0].value);
};
gmail.api.users.threads.list({userId: myUserId}).execute(getIndivThread);

Unfortunately, there isn't a way to get more than one thread subject at a time through the current API. However, there are a few things you might do to improve the performance:
Use the API's paging feature to fetch a limited number of threads at a time.
Fetch batches of messages in parallel rather than attempting to fetch all at once or one at a time. Experiment for yourself, but 5 would be a good number to start with.
If this is a browser implementation, consider making a call to your back-end instead of making the calls from the browser. This way the client only makes 1 call per page, and it allows you to potentially add caching and pre-loading mechanisms to your server that will improve the customer experience. The potential downside here is scalability; as you get more clients, you'll need considerably more processing power than with a fat-client approach.
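As a minimal sketch of suggestion #1 (paging), assuming the JS client passes maxResults and pageToken through to the REST API as documented for threads.list:
// fetch threads one page at a time instead of all at once
function listPage(pageToken) {
  var params = {userId: myUserId, maxResults: 25};
  if (pageToken) params.pageToken = pageToken;
  gmail.api.users.threads.list(params).execute(function(response) {
    getIndivThread(response);
    if (response.nextPageToken) listPage(response.nextPageToken);
  });
}
listPage();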
As an example of #2, you could fetch 5 initially, and then have the callback for each function fire the next call so there are always 5 fetching concurrently:
var CONCURRENCY_LIMIT = 5;

function getThread(threadId, done) {
  indivThreadRequest.id = threadId;
  gmail.api.users.threads.get(indivThreadRequest).execute(function(thread) {
    showThread(thread);
    done();
  });
}

gmail.api.users.threads.list({userId: myUserId}).execute(function(response) {
  var threads = response.threads; // the page of thread stubs
  function fetchNextThread() {
    var nextThread = threads.shift();
    nextThread && getThread(nextThread.id, fetchNextThread);
  }
  for (var i = 0; i < CONCURRENCY_LIMIT; i++) {
    fetchNextThread();
  }
});

Related

How to run an infinite blocking process in NodeJS?

I have a set of API endpoints in Express. One of them receives a request and starts a long-running process that blocks other incoming Express requests.
My goal is to make this process non-blocking. To better understand the inner logic of the Node event loop and how I can do this properly, I want to replace this long-running function with a dummy long-running blocking function that starts when I send a request to its endpoint.
I suppose that different ways of making the dummy function block could cause Node to manage the blocking differently.
So, my question is: how can I make a basic blocking process as a function that runs infinitely?
You can use node-webworker-threads.
var Worker, i$, x$, spin;
Worker = require('webworker-threads').Worker;
for (i$ = 0; i$ < 5; ++i$) {
  x$ = new Worker(fn$);
  x$.onmessage = fn1$;
  x$.postMessage(Math.ceil(Math.random() * 30));
}
(spin = function(){
  return setImmediate(spin);
})();
function fn$(){
  var fibo;
  fibo = function(n){
    if (n > 1) {
      return fibo(n - 1) + fibo(n - 2);
    } else {
      return 1;
    }
  };
  return this.onmessage = function(arg$){
    var data;
    data = arg$.data;
    return postMessage(fibo(data));
  };
}
function fn1$(arg$){
  var data;
  data = arg$.data;
  console.log("[" + this.thread.id + "] " + data);
  return this.postMessage(Math.ceil(Math.random() * 30));
}
https://github.com/audreyt/node-webworker-threads
So, my question is: how can I make a basic blocking process as a function that runs infinitely?
function block() {
  // not sure why you need that though
  while(true);
}
I suppose that different ways of making the dummy function block could cause Node to manage the blocking differently.
Not really. I can't think of a "special way" to block the engine differently.
My goal is to make this process non-blocking.
If it is really that long-running, you should offload it to another thread.
There are shortcut ways to do a quick fix if it's a one-time thing: you can use an npm module that does the job.
But the right way to do it is to set up a common design pattern called 'work queues'. You will need a queuing mechanism, like RabbitMQ, ZeroMQ, etc. How it works is: whenever you get a computation-heavy task, instead of doing it in the same thread, you send it to the queue with the relevant id values. A separate Node process, commonly called a 'worker' process, listens for new actions on the queue and processes them as they arrive. This is a worker queue pattern, and you can read up on it here:
https://www.rabbitmq.com/tutorials/tutorial-one-javascript.html
I would strongly advise you to learn this pattern, as you will come across many tasks that require this kind of mechanism. Also, with this in place, you can scale both your Node servers and your workers independently.
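As a rough sketch of the pattern, here is what the two sides might look like with the amqplib package; the queue name 'tasks', the payload shape, and doHeavyComputation are illustrative assumptions, not part of the tutorial:
const amqp = require('amqplib');

// producer side (your Express handler): push the heavy work onto the queue
async function enqueue(task) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  ch.sendToQueue('tasks', Buffer.from(JSON.stringify(task)), { persistent: true });
  await ch.close();
  await conn.close();
}

// worker side (a separate Node process): consume and process tasks as they arrive
async function startWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  ch.consume('tasks', (msg) => {
    const task = JSON.parse(msg.content.toString());
    doHeavyComputation(task); // hypothetical long-running function
    ch.ack(msg);
  });
}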
I am not sure what exactly your 'long processing' is, but in general you can approach this kind of problem in two different ways.
Option 1:
Use the webworker-threads module, as @serkan pointed out. The usual 'thread' limitations apply in this scenario. You will need to communicate with the Worker in messages.
This method should be preferable only when the logic is too complicated to be broken down into smaller independent problems (explained in option 2). Depending on complexity you should also consider if native code would better serve the purpose.
Option 2:
Break down the problem into smaller problems. Solve a part of the problem, schedule the next part to be executed later, and yield to let NodeJS process other events.
For example, consider calculating the factorial of a number.
Sync way:
function factorial(inputNum) {
  let result = 1;
  while(inputNum) {
    result = result * inputNum;
    inputNum--;
  }
  return result;
}
Async way:
function factorial(inputNum) { // assumes inputNum >= 1
  return new Promise(resolve => {
    let result = 1;
    const calcFactOneLevel = () => {
      result = result * inputNum;
      inputNum--;
      if(inputNum) {
        return process.nextTick(calcFactOneLevel);
      }
      resolve(result);
    };
    calcFactOneLevel();
  });
}
The code in the second example will not block the Node process. You can send the response when the returned promise resolves.
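For instance, a minimal usage sketch of the async version:
factorial(10).then(result => console.log(result)); // logs 3628800 without blocking the event loop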

Interrupt `request` In a `forEach` Loop to Improve Efficiency

I'm building a simple web crawler to automate a newsletter, which means I only need to scrape a set number of pages. In this example, it is not a big deal, because the script will only crawl 3 extra pages. But for a different case, this would be hugely inefficient.
So my question is, would there be a way to stop executing request() in this forEach loop?
Or would I need to change my approach to crawl pages one-by-one, as outlined in this guide.
Script
'use strict';

var request = require('request');
var cheerio = require('cheerio');
var BASEURL = 'https://jobsite.procore.com';

scrape(BASEURL, getMeta);

function scrape(url, callback) {
  var pages = [];
  request(url, function(error, response, body) {
    if(!error && response.statusCode == 200) {
      var $ = cheerio.load(body);
      $('.left-sidebar .article-title').each(function(index) {
        var link = $(this).find('a').attr('href');
        pages[index] = BASEURL + link;
      });
      callback(pages, log);
    }
  });
}

function getMeta(pages, callback) {
  var meta = [];
  // using forEach's index does not work: it will loop through the array
  // before the first request can execute
  var i = 0;
  // using a for loop does not work here
  pages.forEach(function(url) {
    request(url, function(error, response, body) {
      if(error) {
        console.log('Error: ' + error);
      }
      var $ = cheerio.load(body);
      var desc = $('meta[name="description"]').attr('content');
      meta[i] = desc.trim();
      i++;
      // Limit
      if (i == 6) callback(meta);
      console.log(i);
    });
  });
}

function log(arr) {
  console.log(arr);
}
Output
$ node crawl.js
1
2
3
4
5
6
[ 'Find out why fall protection (or lack thereof) lands on the Occupational Safety and Health Administration (OSHA) list of top violations year after year.',
'noneChances are you won’t be seeing any scented candles on the jobsite anytime soon, but what if it came in a different form? The allure of smell has conjured up some interesting scent technology in recent years. Take for example the Cyrano, a brushed-aluminum cylinder that fits in a cup holder. It’s Bluetooth-enabled and emits up to 12 scents or smelltracks that can be controlled using a smartphone app. Among the smelltracks: “Thai Beach Vacation.”',
'The premise behind the hazard communication standard is that employees have a right to know the toxic substances and chemical hazards they could encounter while working. They also need to know the protective things they can do to prevent adverse effects of working with those substances. Here are the steps to comply with the standard.',
'The Weitz Company has been using Procore on its projects for just under two years. Within that time frame, the national general contractor partnered with Procore to implement one of the largest technological advancements in its 163-year history. Click here to learn more about their story and their journey with Procore.',
'MGM Resorts International is now targeting Aug. 24 as the new opening date for the $960 million hotel and casino complex it has been building in downtown Springfield, Massachusetts.',
'So, what trends are taking center stage this year? Below are six of the most prominent. Some of them are new, and some of them are continuations of current trends, but they are all having a substantial impact on construction and the structures people live and work in.' ]
7
8
9
Aside from using slice to limit your selection, you can also refactor the code to reuse some functionality.
Sorry, I couldn't help myself after thinking about this for a second.
We can begin with the refactor:
const rp = require('request-promise-native');
const {load} = require('cheerio');

function scrape(uri, transform) {
  const options = {
    uri,
    transform: load
  };
  return rp(options).then(transform);
}

scrape(
  'https://jobsite.procore.com',
  ($) => $('.left-sidebar .article-title a').toArray().slice(0, 6).map((linkEl) => linkEl.attribs.href)
).then((links) => Promise.all(
  links.map(
    (link) => scrape(
      `https://jobsite.procore.com${link}`, // the hrefs already start with '/'
      ($) => $('meta[name="description"]').attr('content').trim()
    )
  )
)).then(console.log).catch(console.error);
While this does make the code a bit more DRY and concise, it points out one part that might need to be improved upon: the requesting of the links.
Currently it will fire off requests for all (or up to) 6 links found on the original page nearly at once. This may or may not be what you want, depending on how many links will be requested at that other point you alluded to.
Another potential concern is error management. As the refactor stands, if any one of the requests fails, then all of the requests are discarded.
Just a couple of points to consider if you like this approach. Both can be resolved in a variety of ways.
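For the error-management concern, one hedged option is a per-request catch, so a single failed page yields null instead of rejecting the whole Promise.all:
links.map(
  (link) => scrape(
    `https://jobsite.procore.com${link}`,
    ($) => $('meta[name="description"]').attr('content').trim()
  ).catch((err) => null) // swallow individual failures, keep the rest of the batch
)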
There's no way of stopping a forEach. You can simulate a stop by checking a flag inside the forEach, but that will still loop through all the elements. By the way, using a loop for an I/O operation is not optimal.
As you have stated, the best way to process a growing set of data is to do it one by one, but I'll add a twist: threaded one-by-one.
NOTE: By "thread" I don't mean actual threads; take it more as a definition of "multiple lines of work". As I/O operations don't lock the main thread, while one or more requests are waiting for data, another "line of work" can run the JavaScript that processes the data received, since JavaScript is single-threaded (not talking about WebWorkers).
It's as easy as having an array of pages, which receives pages to be crawled on the fly, and one function that reads one page of that array, processes the result, and then returns to the starting point (loading the next page of the array and processing the result).
Now you just call that function as many times as the number of threads you want to run, and done. Pseudo-code:
var pages = [];

function loadNextPage() {
  if (pages.length == 0) {
    console.log("Thread ended");
    return;
  }
  var page = pages.shift(); // get the first element
  loadAndProcessPage(page, loadNextPage);
}

function loadAndProcessPage(page, callback) {
  requestOrWhatever(page, (error, data) => {
    if (error) {
      // retry or whatever
    } else {
      processData(data);
      callback();
    }
  });
}

function processData(data) {
  // Process the data and push new links to the pages array
  pages.push(data.link1);
  pages.push(data.link2);
  pages.push(data.link3);
}

console.log("Start new thread");
loadNextPage();
console.log("And another one");
loadNextPage();
console.log("And another one");
loadNextPage();
console.log("And another thread");
loadNextPage();
This code will stop when no more pages are in the array, and if at some point there happen to be fewer pages than the number of threads, those threads will close. It needs some tweaks here and there, but you get the point.
I'm assuming you're trying to stop executing after some number of pages (it looks like six in your example). As some other replies have stated, you can't prevent the callback from executing in an Array.prototype.forEach(), but on each execution you can skip the request call.
function getMeta(pages, callback) {
  var meta = [];
  var i = 0;
  pages.forEach(url => {
    // maxPages is the limit you were looking for (assumed defined elsewhere)
    if (i <= maxPages) {
      request(url, (err, res, body) => {
        // ... Request logic
      });
    }
    i++; // count scheduled requests so later iterations skip the call
  });
}
You could also wrap the iteration in a while loop; once i hits the value you want, the loop exits and the additional pages are never requested.
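A rough sketch of that while-loop variant (maxPages is assumed to be defined elsewhere):
var i = 0;
while (i < maxPages && i < pages.length) {
  request(pages[i], (err, res, body) => {
    // ... Request logic
  });
  i++;
}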

Calculating when multiple writes to a file will cause inaccuracies?

In my Node server I have a variable:
var clicks = 0;
Each time a user clicks in the webapp, a websocket event sends a message. On the server:
clicks++;
if (clicks % 10 == 0) {
  saveClicks();
}
function saveClicks() {
  var placementData = JSON.stringify({'clicks': clicks});
  fs.writeFile(__dirname + '/clicks.json', placementData, function(err) {
  });
}
At what rate do I have to start worrying about overwrites? How would I calculate this math?
(I'm looking at creating a MongoDB json object for each click but I'm curious what a native solution can offer).
From the node.js doc for fs.writeFile():
Note that it is unsafe to use fs.writeFile() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is strongly recommended.
This isn't a math problem to figure out when this might cause a problem - it's just bad code that gives you the chance of a conflict in circumstances that cannot be predicted. The node.js doc clearly states that this can cause a conflict.
To make sure you don't have a conflict, write the code in a different way so a conflict cannot happen.
If you want to make sure that all writes happen in the proper order of incoming requests, so the last request to arrive is always the one that ends up in the file, then you may need to queue your data as it arrives (so order is preserved) and write to the file in a way that opens it for exclusive access, so that no other request can write while a prior request is still writing, handling contention errors appropriately.
This is something databases mostly do for you automatically, so it may be one reason to use a database.
Assuming you aren't using clustering (and thus don't have multiple processes trying to write to this file), and that you just want to make sure the last value sent is the one written to the file by this process, you could do something like this:
var saveClicks = (function() {
  var isWriting = false;
  var lastData;

  return function() {
    // always save most recent data here
    lastData = JSON.stringify({'clicks': clicks});
    if (!isWriting) {
      writeData(lastData);
    }

    function writeData(data) {
      isWriting = true;
      lastData = null;
      fs.writeFile(__dirname + '/clicks.json', data, function(err) {
        isWriting = false;
        if (err) {
          // decide what to do if an error occurs
        }
        // if more data arrived while we were writing this, then write it now
        if (lastData) {
          writeData(lastData);
        }
      });
    }
  };
})();
@jfriend00 is definitely right about createWriteStream and has already made the point about the database; everything's pretty much been said, but I would like to emphasize the point about databases, because the file-saving approach fundamentally seems weird to me.
So, use databases.
Not only would this save you the headache of tracking such things, it would also speed things up significantly (remember that, the way things are done in Node, the numerous file read/write operations are multiplexed onto a single thread, so if one of them takes ages, it might affect the overall performance).
Redis is a perfect solution for storing key-value data, so you can store data like clicks-per-user in a Redis database, which you'll have to get running alongside anyway once you get enough traffic :)
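As a hedged sketch (the 'clicks' key and the event name are assumptions about your websocket setup), Redis even gives you an atomic INCR, which sidesteps the read-modify-write problem entirely:
var redis = require('redis');
var client = redis.createClient();

// each click becomes one atomic increment; there is no file write to coordinate
socket.on('click', function() {
  client.incr('clicks', function(err, total) {
    if (err) console.error(err);
  });
});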
If you're not convinced yet, take a look at this simple benchmark:
Redis:
var async = require('async');
var redis = require("redis"),
    client = redis.createClient();

console.time("To Redis");
async.mapLimit(new Array(100000).fill(0), 1, (el, cb) => client.set("./test", 777, cb), () => {
  console.timeEnd("To Redis");
});
To Redis: 5410.383ms
fs:
var async = require('async');
var fs = require('fs');

console.time("To file");
async.mapLimit(new Array(100000).fill(0), 1, (el, cb) => fs.writeFile("./test", 777, cb), () => {
  console.timeEnd("To file");
});
To file: 20344.749ms
And, by the way, you can significantly increase the number of clicks after which progress is stored (it's 10 now) by simply also saving inside the socket's disconnect handler: socket.on('disconnect', ...).

Batching requests to minimize cell drain

This article hit the top of HackerNews recently: http://highscalability.com/blog/2013/9/18/if-youre-programming-a-cell-phone-like-a-server-youre-doing.html#
In which it states:
The cell radio is one of the biggest battery drains on a phone. Every time you send data, no matter how small, the radio is powered on for up to 20-30 seconds. Every decision you make should be based on minimizing the number of times the radio powers up. Battery life can be dramatically improved by changing the way your apps handle data transfers. Users want their data now, the trick is balancing user experience with transferring data and minimizing power usage. A balance is achieved by apps carefully bundling all repeating and intermittent transfers together and then aggressively prefetching the intermittent transfers.
I would like to modify $.ajax to add an option like "doesn't need to be done right now, just do this request when another request is launched". What would be a good way to go about this?
I started with this:
(function($) {
  var batches = [];
  var oldAjax = $.fn.ajax;
  var lastAjax = 0;
  var interval = 5*60*1000; // Should be between 2-5 minutes

  $.fn.extend({batchedAjax: function() {
    batches.push(arguments);
  }});

  var runBatches = function() {
    var now = new Date().getTime();
    var batched;
    if (lastAjax + interval < now) {
      while (batched = batches.pop()) {
        oldAjax.apply(null, batched);
      }
    }
  };

  setInterval(runBatches, interval);

  $.fn.ajax = function() {
    runBatches();
    oldAjax.apply(null, arguments);
    lastAjax = new Date().getTime(); // 'now' was not in scope here
  };
})(jQuery);
I can't tell from the wording of the article, but I guess a good batch interval is 2-5 minutes, so I just used 5.
Is this a good implementation?
How can I make this a true modification of just the ajax method, by adding a {batchable:true} option to the method? I haven't quite figured that out either.
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
Are there other things here that would cause a battery to drain faster?
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
I made some progress on adding the option to $.ajax, started to edit the question, and realized it's better as an answer:
(function($) {
  var batches = [];
  var oldAjax = $.fn.ajax;
  var lastAjax = 0;
  var interval = 5*60*1000; // Should be between 2-5 minutes

  var runBatches = function() {
    var now = new Date().getTime();
    var batched;
    if (lastAjax + interval < now) {
      while (batched = batches.pop()) {
        oldAjax.apply(null, batched);
      }
    }
  };

  setInterval(runBatches, interval);

  $.fn.ajax = function(url, options) {
    if (options.batchable) {
      batches.push(arguments);
      return;
    }
    runBatches();
    oldAjax.apply(null, arguments);
    lastAjax = new Date().getTime(); // 'now' was not in scope here
  };
})(jQuery);
That was actually fairly straightforward. I'd love to see a better answer, though.
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
From an iPhone 4, iOS 6.1.0 Safari environment:
I wrote an app with a countdown timer that updated an element's text at one-second intervals. The DOM tree had about medium complexity. The app was a relatively simple calculator that didn't do any AJAX. However, I always had a sneaking suspicion that those once-per-second reflows were killing me. My battery sure seemed to deplete rather quickly whenever I left it turned on on a table, with Safari on the app's webpage.
And there were only two timeouts in that app. Now, I don't have any quantifiable proof that the timeouts were draining my battery, but losing about 10% every 45 minutes from this dopey calculator was a little unnerving. (Who knows though, maybe it was the backlight.)
On that note: You may want to build a test app that does AJAX on intervals, other things on intervals, etc, and compare how each function drains your battery under similar conditions. Getting a controlled environment might be tricky, but if there is a big enough difference in drain, then even "imperfect" testing conditions will yield noticeable-enough results for you to draw a conclusion.
However, I found out an interesting thing about how iOS 6.1.0 Safari handles timeouts:
The timeouts don't run their callbacks if you turn off the screen.
Consequentially, long-term timeouts will "miss their mark."
If my app's timer was to display the correct time (even after I closed and reopened the screen), then I couldn't go the easy route and do secondsLeft -= 1. If I turned off the screen, then secondsLeft (relative to my starting time) would have fallen "behind," and thus been incorrect. (The setTimeout callback did not run while the screen was turned off.)
The solution was that I had to recalculate timeLeft = fortyMinutes - (new Date().getTime() - startTime) on each interval.
Also, the timer in my app was supposed to change from green, to lime, to yellow, to red, as it got closer to expiry. Since, at this point, I was worried about the efficiency of my interval-code, I suspected that it would be better to "schedule" my color changes for their appropriate time (lime: 20 minutes after starting time, yellow: 30 mins, red: 35) (this seemed preferable to a quadruple-inequality-check on every interval, which would be futile 99% of the time).
However, if I scheduled such a color change, and my phone's screen was turned off at the target time, then that color change would never happen.
The solution was to check, on each interval, if the time elapsed since the last 1-second timer update had been ">= 2 seconds". (This way, the app could know if my phone had had its screen turned off; it was able to realize when it had "fallen behind.") At that point, if necessary, I would "forcibly" apply a color change and schedule the next one.
(Needless to say, I later removed the color-changer...)
So, I believe this confirms my claim that
iOS 6.1.0 Safari does not execute setTimeout callback functions if the screen is turned off.
So keep this in mind when "scheduling" your AJAX calls, because you will probably be affected by this behavior as well.
And, using my proposition, I can answer your question:
At least for iOS, we know that setTimeout sleeps while the screen is off.
Thus setTimeout won't give your phone "nightmares" ("keep it awake").
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
If you can get this implementation to work correctly then it seems like it would be worthwhile.
You will incur latency for every AJAX request you make, which will slow down your app to some degree. (Latency is the bane of page loading time, after all.) So you will definitely achieve some gain by "bundling" requests. Extending $.ajax such that you can "batch" requests will definitely have some merit.
The article you've linked clearly focuses on optimizing power consumption for apps (yes, the weather widget example is horrifying). Actively using a browser is, by definition, a foreground task; plus something like ApplicationCache is already available to reduce the need for network requests. You can then programmatically update the cache as required and avoid DIY.
Sceptical side note: if you are using jQuery as part of your HTML5 app (perhaps wrapped in Sencha or similar), perhaps the mobile app framework has more to do with request optimization than the code itself. I have no proof whatsoever, but goddammit this sounds about right :)
How can I make this a true modification of just the ajax method, by adding a {batchable:true} option to the method? I haven't quite figured that out either.
A perfectly valid approach, but to me this sounds like duck punching gone wrong. I wouldn't. Even if you correctly default batchable to false, I would personally rather use a facade (perhaps even in its own namespace?):
var gQuery = {}; // gQuery = green jQuery, patent pending :)

gQuery.ajax = function(options, callback) {
  // your own .ajax with blackjack and hooking timeouts, ultimately just calling
  $.ajax(options);
};
Does setInterval also keep the phone awake all the time? Is that a bad thing to do? Is there a better way to not do that?
Native implementations of setInterval and setTimeout are very similar AFAIK; think of the latter not firing while a website is in the background, as with online banking inactivity prompts; when a page is not in the foreground, its execution is basically halted. If an API is available for such "deferrals" (the article mentions some relevant iOS 7 capabilities), then it's likely a preferable approach; otherwise I see no reason to avoid setInterval.
Are there other things here that would cause a battery to drain faster?
I'd speculate that any heavy load would (from calculating pi to pretty 3D transitions, perhaps). But this sounds like premature optimization to me, and it reminds me of an e-reader with a battery-saving mode that turned the LCD screen completely off :)
Is this kind of approach even worthwhile? There are so many things going on at once in a modern smartphone, that if my app isn't using the cell, surely some other app is. Javascript can't detect if the cell is on or not, so why bother? Is it worth bothering?
The article pointed out a weather app being unreasonably greedy, and that would concern me. It seems to be a development oversight more than anything else though, as in fetching data more often than is really needed. In an ideal world, this should be handled nicely at the OS level; otherwise you'd end up with an array of competing workarounds. IMO: don't bother until highscalability posts another article telling you to :)
Here is my version:
(function($) {
  var batches = [],
      ajax = $.fn.ajax,
      interval = 5*60*1000, // Should be between 2-5 minutes
      // wrap the call so the timer always invokes the patched version defined below
      timeout = setTimeout(function() { $.fn.ajax(); }, interval);

  $.fn.ajax = function(url, options) {
    var batched, returns;
    if (typeof url === "string") {
      batches.push(arguments);
      if (options && options.batchable) {
        return;
      }
    }
    while (batched = batches.shift()) {
      returns = ajax.apply(null, batched);
    }
    clearTimeout(timeout);
    timeout = setTimeout(function() { $.fn.ajax(); }, interval);
    return returns;
  };
})(jQuery);
I think this version has the following main advantages:
If there is a non-batchable ajax call, the connection is used to send all batches. This resets the timer.
Returns the expected return value on direct ajax calls
A direct processing of the batches can be triggered by calling $.fn.ajax() without parameters
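For example, with a hypothetical endpoint, the calling convention would be:
// queue a low-priority call; it rides along with the next direct call or timer tick
$.fn.ajax('/api/stats', {batchable: true});
// or flush everything queued so far:
$.fn.ajax();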
As far as hacking the $.ajax method goes, I would:
try to also preserve the Promise mechanism provided by $.ajax,
take advantage of one of the global ajax events to trigger ajax calls,
maybe add a timer, to have the batch called anyway in case no "immediate" $.ajax call is ever made,
give a new name to this function (in my code: $.batchAjax) and keep the original $.ajax.
Here is my go :
(function ($) {
  var queue = [],
      timerID = 0;

  function ajaxQueue(url, settings) {
    // custom deferred used to forward the $.ajax promise
    var dfd = new $.Deferred();

    // when called, this function executes the $.ajax call
    function call() {
      $.ajax(url, settings)
        .done(function () {
          dfd.resolveWith(this, arguments);
        })
        .fail(function () {
          dfd.rejectWith(this, arguments);
        });
    }

    // set a global timer, which will trigger the dequeuing in case no ajax call is ever made ...
    if (timerID === 0) {
      timerID = window.setTimeout(ajaxCallOne, 5000);
    }
    // enqueue this function, for later use
    queue.push(call);
    // return the promise
    return dfd.promise();
  }

  function ajaxCallOne() {
    window.clearTimeout(timerID);
    timerID = 0;
    if (queue.length > 0) {
      var f = queue.pop();
      // async call: wait for the current ajax events
      // to be processed before triggering a new one ...
      setTimeout(f, 0);
    }
  }

  // use the two functions:
  $(document).bind('ajaxSend', ajaxCallOne);
  // or:
  //$(document).bind('ajaxComplete', ajaxCallOne);
  $.batchAjax = ajaxQueue;
}(jQuery));
In this example, the hard-coded delay of 5 seconds defeats the purpose of "if less than 20 seconds between calls, it drains the battery". You can use a bigger one (5 minutes?), or remove it altogether; it all depends on your app, really.
Regarding the general question "How do I write a web app which doesn't burn a phone's battery in 5 minutes?": it will take more than one magic arrow to deal with that one. It is a whole set of design decisions you will have to make, and it really depends on your app.
You will have to arbitrate between loading as much data as possible in one go (and possibly send data which won't be used) vs fetching what you need (and possibly send many small individual requests).
Some parameters to take into account are :
volume of data (you don't want to drain your clients data plan either ...),
server load,
how much can be cached,
importance of being "up to date" (5 minutes delay for a chat app won't work),
frequency of client updates (a network game will probably require lots of updates from the client, a news app probably less ...).
One rather general suggestion: you can add a "live update" checkbox and store its state client-side. When it is unchecked, the client must hit a "refresh" button to download new data.
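A rough sketch of that toggle (element ids, the storage key, and fetchLatest are arbitrary):
// remember the user's choice client-side; only poll while live updates are on
var liveUpdate = localStorage.getItem('liveUpdate') !== 'false';
$('#live-update').prop('checked', liveUpdate).on('change', function() {
  liveUpdate = this.checked;
  localStorage.setItem('liveUpdate', liveUpdate);
});
$('#refresh').on('click', fetchLatest); // manual fallback when live updates are off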
Here is my go. It somewhat grew out of what @Joe Frambach posted, but I wanted the following additions:
retain the jqXHR and error/success callbacks if they were provided
debounce identical requests (matched by url and options) while still triggering the callbacks or jqXHRs provided for EACH call
use ajaxSettings to make configuration easier
don't have each non-batched ajax call flush the batch (those should be separate processes IMO), but do supply an option to force a batch flush
Either way, this would most likely be better done as a separate plugin rather than by overriding and affecting the default .ajax function... enjoy:
(function($) {
  $.ajaxSetup({
    batchInterval: 5*60*1000,
    flushBatch: false,
    batchable: false,
    batchDebounce: true
  });

  var batchRun = 0;
  var batches = {};
  var oldAjax = $.fn.ajax;

  var queueBatch = function(url, options) {
    var match = false;
    var dfd = new $.Deferred();
    batches[url] = batches[url] || [];
    if (options.batchDebounce || $.ajaxSettings.batchDebounce) {
      if (!options.success && !options.error) {
        $.each(batches[url], function(index, batchedAjax) {
          if ($.param(batchedAjax.options) == $.param(options)) {
            match = index;
            return false;
          }
        });
      }
      if (match === false) {
        batches[url].push({options: options, dfds: [dfd]});
      } else {
        batches[url][match].dfds.push(dfd);
      }
    } else {
      batches[url].push({options: options, dfds: [dfd]}); // was missing a closing brace
    }
    return dfd.promise();
  };

  var runBatches = function() {
    $.each(batches, function(url, batchedOptions) {
      $.each(batchedOptions, function(index, batchedAjax) {
        oldAjax.call(null, url, batchedAjax.options).then( // .call, not .apply
          function(data, textStatus, jqXHR) {
            var args = arguments;
            $.each(batchedAjax.dfds, function(index, dfd) {
              dfd.resolve(args);
            });
          },
          function(jqXHR, textStatus, errorThrown) {
            var args = arguments;
            $.each(batchedAjax.dfds, function(index, dfd) {
              dfd.reject(args);
            });
          }
        );
      });
    });
    batches = {};
    batchRun = new Date().getTime(); // was: new Date.getTime()
  };

  setInterval(runBatches, $.ajaxSettings.batchInterval);

  $.fn.ajax = function(url, options) {
    if (options.batchable) {
      var xhr = queueBatch(url, options);
      if ((new Date().getTime()) - batchRun >= (options.batchInterval || $.ajaxSettings.batchInterval)) {
        runBatches();
      }
      return xhr;
    }
    if (options.flushBatch) {
      runBatches();
    }
    return oldAjax.call(null, url, options);
  };
})(jQuery);

How can I work with typed arrays without using for?

var sendBuffer = new ArrayBuffer(4096);
var dv = new DataView(sendBuffer);
dv.setInt32(0, 1234);

var service = svcName;
for (var i = 0; i < service.length; i++) {
  dv.setUint8(i + 4, service.charCodeAt(i));
}
ws.send(sendBuffer);
How can I do this without using a for loop? The for loop hurts performance when working with huge amounts of data.
Based on the comments, your real problem is that the loop blocks your UI.
The splitting answer below does not give you a proper way to prevent blocking: everything done on the JavaScript main thread will block the UI.
You need to use Web Workers (separate threads) for processing your data, so the processing does not block the UI thread:
http://updates.html5rocks.com/2011/09/Workers-ArrayBuffer
You post the data to the separate worker for processing using postMessage() and then post the resulting data back to the main thread using another postMessage().
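A minimal sketch of that hand-off (the file name encode-worker.js and the message shapes are assumptions):
// main thread: hand the string to the worker, send the buffer when it comes back
var worker = new Worker('encode-worker.js');
worker.onmessage = function(e) {
  ws.send(e.data); // e.data is the filled ArrayBuffer, transferred back from the worker
};
worker.postMessage(svcName);

// encode-worker.js: run the encoding loop off the UI thread
self.onmessage = function(e) {
  var service = e.data;
  var buf = new ArrayBuffer(4096);
  var dv = new DataView(buf);
  dv.setInt32(0, 1234);
  for (var i = 0; i < service.length; i++) {
    dv.setUint8(i + 4, service.charCodeAt(i));
  }
  self.postMessage(buf, [buf]); // transfer the buffer instead of copying it
};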
In JavaScript, for() loops are very tight and efficient relative to other operations. Processing every element sequentially without the for() loop would be inelegant and would not save you many cycles either.
If an operation is likely to grind a client to a halt, you need to split the problem into smaller components and warn the user that the operation will take some time.
I would recommend splitting this operation into smaller chunks instead of trying to find another algorithm that doesn't use for().
Perhaps like this, using callbacks to keep the code from blocking:
var split = Math.ceil(service.length / 4); // chunk size, rounded up so no characters are dropped

function alpha(split, position, callback) {
  for (var i = split * (position - 1); i < Math.min(split * position, service.length); i++) {
    dv.setUint8(i + 4, service.charCodeAt(i));
  }
  if (callback && typeof callback === 'function') {
    // defer the next chunk so the UI thread gets a chance to handle other events
    setTimeout(callback, 0);
  }
}

alpha(split, 1, function() {
  // poll here for other information or to confirm user wishes to proceed
  alpha(split, 2, function() {
    // poll here for other information or to confirm user wishes to proceed
    alpha(split, 3, function() {
      // poll here for other information or to confirm user wishes to proceed
      alpha(split, 4);
    });
  });
});
This is enormously simplified and not the best way to implement this solution, but it gives you a chance to optimize the processing going on and to prioritize the operations relative to other work.
