NodeJS and asynchronous hell

I just came to this awful situation where I have an array of strings each representing a possibly existing file (e.g. var files = ['file1', 'file2', 'file3']). I need to loop through these file names and try to see if each exists in the current directory, and if it does, stop looping and forget the rest of the remaining files. So basically I want to find the first existing file of those, and fall back to a hard-coded message if nothing was found.
This is what I currently have:
var found = false;
files.forEach(function(file) {
    if (found) return;
    try {
        fs.readFileSync(path + file);
        found = true;
        continueWithStuff();
    } catch (err) {
        // file doesn't exist, try the next one
    }
});
if (found === false) {
    // Handle this scenario.
}
This is bad. It's blocking (readFileSync), and thus slow.
I can't just supply callback methods for fs.readFile, it's not that simple because I need to take the first found item... and the callbacks may be called in any random order. I think one way would be to have a callback that increases a counter and keeps a list of found/not found information, and when it reaches the files.length count, it checks through the found/not found info and decides what to do next.
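For reference, the counter approach I'm describing would look roughly like this (an untested sketch, using fs.stat as the existence check):
var results = [];
var pending = files.length;
files.forEach(function(file, i) {
    fs.stat(path + file, function(err) {
        results[i] = !err; // true if the file exists
        if (--pending === 0) {
            // all callbacks are in: pick the first existing file in array order
            var firstIndex = results.indexOf(true);
            if (firstIndex !== -1) continueWithStuff();
            else { /* nothing found, fall back to the hard-coded message */ }
        }
    });
});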
This is painful. I do see the performance benefits of evented IO, but this is unacceptable. What choices do I have?

Don't use sync stuff in a normal server environment -- things are single threaded and this will completely lock things up while it waits for the results of this IO-bound loop. CLI utility = probably fine, server = only okay on startup.
A common library for asynchronous flow control is
https://github.com/caolan/async
async.filter(['file1', 'file2', 'file3'], path.exists, function(results) {
    // results now equals an array of the existing files
});
And if you want to, say, avoid the extra calls to path.exists, then you could pretty easily write a function 'first' that performed the operations until some test succeeded. Similar to https://github.com/caolan/async#until - but you're interested in the output.

The async library is absolutely what you are looking for. It provides pretty much all the types of iteration that you'd want in a nice asynchronous way. You don't have to write your own 'first' function though. Async already provides a 'some' function that does exactly that.
https://github.com/caolan/async#some
async.some(files, path.exists, function(result) {
    if (result) {
        continueWithStuff();
    } else {
        // Handle this scenario
    }
});
If you or someone reading this in the future doesn't want to use Async, you can also do your own basic version of 'some.'
function some(arr, func, cb) {
    var i = 0;
    (function loop() {
        if (i === arr.length) {
            return cb(false); // exhausted the array without a match
        }
        func(arr[i++], function(result) {
            if (result) cb(true); // found one, stop here
            else loop(); // try the next item
        });
    })();
}
some(files, path.exists, function(found) {
    if (found) {
        continueWithStuff();
    } else {
        // Handle this scenario
    }
});

You can do this without third-party libraries by using a recursive function. Pass it the array of filenames and a pointer, initially set to zero. The function should check for the existence of the indicated (by the pointer) file name in the array, and in its callback it should either do the other stuff (if the file exists) or increment the pointer and call itself (if the file doesn't exist).
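A minimal sketch of that recursive approach, reusing the files array, path prefix and continueWithStuff() from the question (fs.exists stands in for whatever existence check you prefer):
var fs = require('fs');

function checkFile(files, i) {
    if (i >= files.length) {
        // ran out of candidates: fall back to the hard-coded message
        return;
    }
    fs.exists(path + files[i], function(exists) {
        if (exists) continueWithStuff();
        else checkFile(files, i + 1); // increment the pointer and recurse
    });
}

checkFile(files, 0);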

Use async.waterfall to control the flow of async calls in node.js. For example, include the async library and use its waterfall call:
var async = require('async');

async.waterfall([
    function(callback) {
        // taskFirst is assumed to report its result through callback
        taskFirst(rootRequest, rootRequestFrom, rootRequestTo, callback, res);
    },
    function(arg1, callback) {
        if (arg1 !== undefined) {
            taskSecond(arg1, rootRequest, rootRequestFrom, rootRequestTo, callback, res);
        } else {
            // nothing to pass along; end the waterfall
            callback(new Error('first task produced no result'));
        }
    }
]);

(Edit: removed sync suggestion because it's not a good idea, and we wouldn't want anyone to copy/paste it and use it in production code, would we?)
If you insist on using async stuff, I think a simpler way to implement this than what you described is to do the following:
var path = require('path'),
    fileCounter = 0;

function existCB(fileExists) {
    if (fileExists) {
        global.fileExists = fileCounter;
        continueWithStuff();
        return;
    }
    fileCounter++;
    if (fileCounter >= files.length) {
        // none of the files exist, handle stuff
        return;
    }
    path.exists(files[fileCounter], existCB);
}

path.exists(files[0], existCB);

Related

switching from Jquery post requests to modern Promises

I'm working on a web application project with Flask+Python on the back-end, and Javascript on the front-end. I'd like to take advantage of some of the more modern (ES6/7) styles of things, such as Promises.
I've currently been writing all my javascript using jQuery 3+. Most of the time I'm making single Ajax requests to the server at a time. I've been specifically writing my Ajax requests using $.post and .done() and .fail(), which I know is already promise-based, or promise-like. Most of my code is in the style of
do function setup stuff and checks
make single ajax request
  on success
    good status, run several success code bits
    bad status, run failure code
  on failure - run failure code
I always seem to have to account for cases of server failure, plus cases where the server succeeds but returns the wrong thing, which I usually control with a status argument. I've been looking into the straight Promise syntax with then, catch, resolve, reject, and I have some questions.
Is there any advantage to me switching to this format, from what I currently have, given my simple Ajax requests?
Can it be used to simplify the way I currently write my requests and handle my failure cases?
Here is a simple login example that I have, with a function that is called when a login button is clicked.
$('#loginsubmit').on('click', this, this.login);

// Login function
login() {
    const form = $('#loginform').serialize();
    $.post(Flask.url_for('index_page.login'), form, 'json')
        .done((data) => {
            if (data.result.status < 0) {
                // bad submit
                this.resetLogin();
            } else {
                // good submit
                if (data.result.message !== '') {
                    const stat = (data.result.status === 0) ? 'danger' : 'success';
                    const htmlstr = `<div class='alert alert-${stat}' role='alert'><h4>${data.result.message}</h4></div>`;
                    $('#loginmessage').html(htmlstr);
                }
                if (data.result.status === 1) {
                    location.reload(true);
                }
            }
        })
        .fail((data) => { alert('Bad login attempt'); });
}
And a typical more complex example that I have. In this case, some interactive elements are initialized when a button is toggled on and off.
this.togglediv.on('change', this, this.initDynamic);

// Initialize the Dynamic Interaction upon toggle - makes loading an AJAX request
initDynamic(event) {
    let _this = event.data;
    if (!_this.togglediv.prop('checked')) {
        // Turning Off
        _this.toggleOff();
    } else {
        // Turning On
        _this.toggleOn();
        // check for empty divs
        let specempty = _this.graphdiv.is(':empty');
        let imageempty = _this.imagediv.is(':empty');
        let mapempty = _this.mapdiv.is(':empty');
        // send the request if the dynamic divs are empty
        if (imageempty) {
            // make the form
            let keys = ['plateifu', 'toggleon'];
            let form = m.utils.buildForm(keys, _this.plateifu, _this.toggleon);
            _this.toggleload.show();
            $.post(Flask.url_for('galaxy_page.initdynamic'), form, 'json')
                .done(function(data) {
                    let image = data.result.image;
                    let spaxel = data.result.spectra;
                    let spectitle = data.result.specmsg;
                    let maps = data.result.maps;
                    let mapmsg = data.result.mapmsg;
                    // Load the Image
                    _this.initOpenLayers(image);
                    _this.toggleload.hide();
                    // Try to load the spaxel
                    if (data.result.specstatus !== -1) {
                        _this.loadSpaxel(spaxel, spectitle);
                    } else {
                        _this.updateSpecMsg(`Error: ${spectitle}`, data.result.specstatus);
                    }
                    // Try to load the Maps
                    if (data.result.mapstatus !== -1) {
                        _this.initHeatmap(maps);
                    } else {
                        _this.updateMapMsg(`Error: ${mapmsg}`, data.result.mapstatus);
                    }
                })
                .fail(function(data) {
                    _this.updateSpecMsg(`Error: ${data.result.specmsg}`, data.result.specstatus);
                    _this.updateMapMsg(`Error: ${data.result.mapmsg}`, data.result.mapstatus);
                    _this.toggleload.hide();
                });
        }
    }
}
I know this is already roughly using promises, but can I make improvements to my code flow by switching to the Promise then catch syntax? As you can see, I end up repeating a lot of the failure case code for real failures and successful failures. Most of my code looks like this, but I've been having a bit of trouble trying to convert these into something that's like
promise_ajax_call
    .then(do real success)
    .catch(all failure cases)
I always use Bluebird Promises. They have a Promise.resolve function that you can use with ajax. One thing to know about Promises: if you throw an error in a then, it will be caught by a chained catch. One way to clean this up a bit might be something like this (keep in mind, this is pseudo):
Promise.resolve($.ajax(...some properties...))
    .then((data) => {
        if (data.result.status < 0) {
            // throw some error
        }
        // process the data how you need it
    })
    .catch((error) => {
        // either the ajax failed, or you threw an error in your then;
        // either way, it will end up in this catch
    });
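Applied to the login handler from the question, that might look roughly like this (a sketch only; the route, resetLogin and the status conventions are taken from the question's code):
login() {
    const form = $('#loginform').serialize();
    return Promise.resolve($.post(Flask.url_for('index_page.login'), form, 'json'))
        .then((data) => {
            if (data.result.status < 0) {
                // turn a "successful failure" into a real rejection
                throw new Error(data.result.message || 'bad submit');
            }
            // real success path only
            if (data.result.status === 1) {
                location.reload(true);
            }
        })
        .catch((error) => {
            // one place for both transport failures and bad statuses
            this.resetLogin();
        });
}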

Calculating when multiple writes to a file will cause inaccuracies?

In my node server I have a variable,
var clicks = 0;
Each time a user clicks in the webapp, a websocket event sends a message. On the server:
clicks++;
if (clicks % 10 == 0) {
    saveClicks();
}

function saveClicks() {
    var placementData = JSON.stringify({'clicks': clicks});
    fs.writeFile(__dirname + '/clicks.json', placementData, function(err) {
    });
}
At what rate do I have to start worrying about overwrites? How would I calculate this math?
(I'm looking at creating a MongoDB json object for each click but I'm curious what a native solution can offer).
From the node.js doc for fs.writeFile():
Note that it is unsafe to use fs.writeFile() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is strongly recommended.
This isn't a math problem to figure out when this might cause a problem - it's just bad code that gives you the chance of a conflict in circumstances that cannot be predicted. The node.js doc clearly states that this can cause a conflict.
To make sure you don't have a conflict, write the code in a different way so a conflict cannot happen.
If you want to make sure that all writes happen in the proper order of incoming requests, so the last request to arrive is always the one that ends up in the file, then you may need to queue your data as it arrives (so order is preserved) and write to the file in a way that opens it for exclusive access, so no other request can write while a prior request is still writing, handling contention errors appropriately.
This is an issue that databases mostly do for you automatically so it may be one reason to use a database.
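A sketch of that strictly ordered queue might look like this (illustrative only; error handling and cross-process locking are left out):
var fs = require('fs');

var writeQueue = [];
var writing = false;

function queueWrite(data) {
    writeQueue.push(data); // arrival order is preserved here
    drain();
}

function drain() {
    if (writing || writeQueue.length === 0) return;
    writing = true;
    fs.writeFile(__dirname + '/clicks.json', writeQueue.shift(), function(err) {
        writing = false;
        if (err) { /* decide what to do if an error occurs */ }
        drain(); // keep writing, one at a time, until the queue is empty
    });
}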
Assuming you weren't using clustering and thus do not have multiple processes trying to write to this file and that you just want to make sure the last value sent is the one written to the file by this process, you could do something like this:
var saveClicks = (function() {
    var isWriting = false;
    var lastData;
    return function() {
        // always save most recent data here
        lastData = JSON.stringify({'clicks': clicks});
        if (!isWriting) {
            writeData(lastData);
        }

        function writeData(data) {
            isWriting = true;
            lastData = null;
            fs.writeFile(__dirname + '/clicks.json', data, function(err) {
                isWriting = false;
                if (err) {
                    // decide what to do if an error occurs
                }
                // if more data arrived while we were writing this, then write it now
                if (lastData) {
                    writeData(lastData);
                }
            });
        }
    };
})();
#jfriend00 is definitely right about createWriteStream and already made a point about the database, and everything's pretty much said, but I would like to emphasize the point about databases, because basically the file-saving approach seems weird to me.
So, use databases.
Not only would this save you from the headache of tracking such things, but it would significantly speed things up (remember that in node, the numerous file read/write operations are multiplexed on a single thread, so if one of them lasts for ages, it might affect the overall performance).
Redis is a perfect solution for storing key-value data, so you can store data like clicks-per-user in a Redis database, which you'll have to get running alongside anyway once you get enough traffic :)
If you're not convinced yet, take a look at this simple benchmark:
Redis:
var async = require('async');
var redis = require("redis"),
    client = redis.createClient();

console.time("To Redis");
async.mapLimit(new Array(100000).fill(0), 1, (el, cb) => client.set("./test", 777, cb), () => {
    console.timeEnd("To Redis");
});
To Redis: 5410.383ms
fs:
var async = require('async');
var fs = require('fs');

console.time("To file");
async.mapLimit(new Array(100000).fill(0), 1, (el, cb) => fs.writeFile("./test", 777, cb), () => {
    console.timeEnd("To file");
});
To file: 20344.749ms
And, by the way, you can significantly increase the number of clicks after which the progress would be stored (now it's 10) by simply adding this "click-saver" to the socket socket.on('disconnect', ....
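That "click-saver" might look like this (a sketch assuming a socket.io-style server; the io and connection names are illustrative):
// Flush the in-memory counter whenever a client disconnects, so the
// periodic threshold (the "% 10") can safely be raised much higher.
io.on('connection', function(socket) {
    socket.on('disconnect', function() {
        saveClicks();
    });
});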

Nested queries in Node JS / MongoDB

My userlist table in mongo is set up like this:
email: email@email.com, uniqueUrl: ABC, referredUrl: ...
I have the following code where I query all of the users in my database, and for each of those users, find out how many other's users' referredUrl's equal the current user's unique url:
exports.users = function(db) {
    return function(req, res) {
        db.collection('userlist').find().toArray(function(err, items) {
            for (var i = 0; i < items.length; i++) {
                var user = items[i];
                console.log(user.email);
                db.collection('userlist').find({referredUrl: user.uniqueUrl}).count(function(err, count) {
                    console.log(count);
                });
            }
        });
    };
};
Right now I'm first logging the user's email, then the count associated with the user. So the console should look like this:
bob@bob.com
1
chris@chris.com
3
grant@grant.com
2
Instead, it looks like this:
bob@bob.com
chris@chris.com
grant@grant.com
1
3
2
What's going on? Why is the nested query only returning after the first query completes?
Welcome to asynchronous programming and callbacks.
What you are expecting is for everything to work in a linear order, but that is not how node works. The whole subject is a little too broad for here, but it's worth reading up on.
Luckily the methods invoked by the driver all key off process.nextTick, which gives you something to look up and search on. But there is a simple way to remedy the code, due to the natural way that things are queued.
db.collection('userlist').find().toArray(function(err, items) {
    var processing = function(user) {
        db.collection('userlist').find({ referredUrl: user.uniqueUrl })
            .count(function(err, count) {
                console.log(user.email);
                console.log(count);
            });
    };
    for (var i = 0; i < items.length; i++) {
        var user = items[i];
        processing(user);
    }
});
Now of course that is really an oversimplified way of explaining this, but understand here that you are passing parameters through to your repeating .find() and then doing all the output there.
As said, fortunately some of the work is done for you in the API functions and the event stack is maintained as you add the calls. But mostly, now the output calls are made together and do not occur within different sets of events.
For a detailed explanation of event loops and callbacks, I'm sure there are much better ones out there than I could write here.
Callbacks are asynchronous in node.js, so your count callback (function(err, count) { console.log(count); }) is not executed immediately after console.log(user.email). The output is therefore normal; nothing is wrong with it. What is wrong is the coding style: you can't call callbacks consecutively and expect the same result as calling functions the same way in single-threaded Python. To get the desired result, you should do all the work in a single callback. But before doing that, I recommend you understand how callbacks work in nodejs. It will significantly help your coding in nodejs.

Handling interdependent and/or layered asynchronous calls

As an example, suppose I want to fetch a list of files from somewhere, then load the contents of these files and finally display them to the user. In a synchronous model, it would be something like this (pseudocode):
var file_list = fetchFiles(source);
if (!file_list) {
    display('failed to fetch list');
} else {
    for (file in file_list) { // iteration, not enumeration
        var data = loadFile(file);
        if (!data) {
            display('failed to load: ' + file);
        } else {
            display(data);
        }
    }
}
This provides decent feedback to the user and I can move pieces of code into functions if I so deem necessary. Life is simple.
Now, to crush my dreams: fetchFiles() and loadFile() are actually asynchronous. The easy way out is to transform them into synchronous functions, but that is no good, since the browser locks up waiting for the calls to complete.
How can I handle multiple interdependent and/or layered asynchronous calls without delving deeper and deeper into an endless chain of callbacks, in classic reductio ad spaghettum fashion? Is there a proven paradigm to cleanly handle these while keeping code loosely coupled?
Deferreds are really the way to go here. They capture exactly what you (and a whole lot of async code) want: "go away and do this potentially expensive thing, don't bother me in the meantime, and then do this when you get back."
And you don't need jQuery to use them. An enterprising individual has ported Deferred to underscore, and claims you don't even need underscore to use it.
So your code can look like this:
function fetchFiles(source) {
    var dfd = _.Deferred();
    // do some kind of thing that takes a long time
    doExpensiveThingOne({
        source: source,
        complete: function(files) {
            // this informs the Deferred that it succeeded, and passes
            // `files` to all its success ("done") handlers
            dfd.resolve(files);
            // if you know how to capture an error condition, you can also
            // indicate that with dfd.reject(...)
        }
    });
    return dfd;
}

function loadFile(file) {
    // same thing!
    var dfd = _.Deferred();
    doExpensiveThingTwo({
        file: file,
        complete: function(data) {
            dfd.resolve(data);
        }
    });
    return dfd;
}

// and now glue it together
_.when(fetchFiles(source))
    .done(function(files) {
        for (var file in files) {
            _.when(loadFile(file))
                .done(function(data) {
                    display(data);
                })
                .fail(function() {
                    display('failed to load: ' + file);
                });
        }
    })
    .fail(function() {
        display('failed to fetch list');
    });
The setup is a little wordier, but once you've written the code to handle the Deferred's state and stuffed it off in a function somewhere you won't have to worry about it again, you can play around with the actual flow of events very easily. For example:
var file_dfds = [];
for (var file in files) {
    file_dfds.push(loadFile(file));
}

_.when(file_dfds)
    .done(function(datas) {
        // this will only run if and when ALL the files have successfully
        // loaded!
    });
Events
Maybe using events is a good idea. It keeps you from creating code-trees and decouples your code.
I've used bean as the framework for events.
Example pseudo code:
// async request for files
function fetchFiles(source) {
    IO.get(..., function(data, status) {
        if (data) {
            bean.fire(window, 'fetched_files', data);
        } else {
            bean.fire(window, 'fetched_files_fail', data, status);
        }
    });
}

// handler for when we get data
function onFetchedFiles(event, files) {
    for (file in files) {
        var data = loadFile(file);
        if (!data) {
            display('failed to load: ' + file);
        } else {
            display(data);
        }
    }
}

// handler for failures
function onFetchedFilesFail(event, status) {
    display('Failed to fetch list. Reason: ' + status);
}

// subscribe the window to these events
bean.on(window, 'fetched_files', onFetchedFiles);
bean.on(window, 'fetched_files_fail', onFetchedFilesFail);

fetchFiles();
Custom events and this kind of event handling is implemented in virtually all popular JS frameworks.
Sounds like you need jQuery Deferred. Here is some untested code that might help point you in the right direction:
$.when(fetchFiles(source)).then(function(file_list) {
    if (!file_list) {
        display('failed to fetch list');
    } else {
        for (file in file_list) {
            $.when(loadFile(file)).then(function(data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
            });
        }
    }
});
I also found another decent post which gives a few use cases for the Deferred object.
If you do not want to use jQuery, what you could use instead are web workers in combination with synchronous requests. Web workers are supported across every major browser with the exception of any Internet Explorer version before 10.
Web Worker browser compatibility
Basically, if you're not entirely certain what a web worker is, think of it as a way for browsers to execute specialized JavaScript on a separate thread without impacting the main thread (Caveat: On a single-core CPU, both threads will run in an alternating fashion. Luckily, most computers nowadays come equipped with dual-core CPUs). Usually, web workers are reserved for complex computations or some intense processing task. Just keep in mind that any code within the web worker CANNOT reference the DOM nor can it reference any global data structures that have not been passed to it. Essentially, web workers run independent of the main thread. Any code that the worker executes should be kept separate from the rest of your JavaScript code base, within its own JS file. Furthermore, if the web workers need specific data in order to properly work, you need to pass that data into them upon starting them up.
Yet another important thing worth noting is that any JS libraries you need in order to load the files will have to be copied directly into the JavaScript file that the worker will execute. That means these libraries should first be minified (if they haven't been already), then copied and pasted into the top of the file.
Anyway, I decided to write up a basic template to show you how to approach this. Check it out below. Feel free to ask questions/criticize/etc.
On the JS file that you want to keep executing on the main thread, you want something like the following code below in order to invoke the worker.
function startWorker(dataObj)
{
    var message = {},
        worker;
    try
    {
        worker = new Worker('workers/getFileData.js');
    }
    catch(error)
    {
        // Throw error
    }
    message.data = dataObj;
    // all data is communicated to the worker in JSON format
    message = JSON.stringify(message);
    // This is the function that will handle all data returned by the worker
    worker.onmessage = function(e)
    {
        display(JSON.parse(e.data));
    };
    worker.postMessage(message);
}
Then, in a separate file meant for the worker (as you can see in the code above, I named my file getFileData.js), write something like the following...
function fetchFiles(source)
{
    // Put your code here
    // Keep in mind that any requests made should be synchronous as this should not
    // impact the main thread
}

function loadFile(file)
{
    // Put your code here
    // Keep in mind that any requests made should be synchronous as this should not
    // impact the main thread
}

onmessage = function(e)
{
    var response = [],
        data = JSON.parse(e.data),
        file_list = fetchFiles(data.source),
        file, fileData;

    if (!file_list)
    {
        response.push('failed to fetch list');
    }
    else
    {
        for (file in file_list)
        { // iteration, not enumeration
            fileData = loadFile(file);
            if (!fileData)
            {
                response.push('failed to load: ' + file);
            }
            else
            {
                response.push(fileData);
            }
        }
    }
    response = JSON.stringify(response);
    postMessage(response);
    close();
};
PS: Also, I dug up another thread which would better help you understand the pros and cons of using synchronous requests in combination with web workers.
Stack Overflow - Web Workers and Synchronous Requests
async is a popular asynchronous flow control library often used with node.js. I've never personally used it in the browser, but apparently it works there as well.
This example would (theoretically) run your two functions, returning an object of all the filenames and their load status. async.map runs in parallel, while waterfall is a series, passing the results of each step on to the next.
I am assuming here that your two async functions accept callbacks. If they do not, I'd require more info as to how they're intended to be used (do they fire off events on completion? etc).
async.waterfall([
    function(done) {
        fetchFiles(source, function(list) {
            if (!list) done('failed to fetch file list');
            else done(null, list);
        });
        // alternatively you could simply fetchFiles(source, done) here, and handle
        // the null result in the next function.
    },
    function(file_list, done) {
        var loadHandler = function(file, cb) {
            loadFile(file, function(data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
                // if any of the callbacks to `map` returned an error, it would halt
                // execution and pass that error to the final callback. So we don't pass
                // an error here, but rather a tuple of the file and load result.
                cb(null, [file, !!data]);
            });
        };
        async.map(file_list, loadHandler, done);
    }
], function(err, result) {
    if (err) return display(err);
    // All files loaded! (or failed to load)
    // result would be an array of tuples like [[file, bool file loaded?], ...]
});
waterfall accepts an array of functions and executes them in order, passing the result of each along as the arguments to the next, along with a callback function as the last argument, which you call with either an error, or the resulting data from the function.
You could of course add any number of different async callbacks between or around those two, without having to change the structure of the code at all. waterfall is actually only 1 of 10 different flow control structures, so you have a lot of options (although I almost invariably end up using auto, which allows you to mix parallel and series execution in the same function via a Makefile-like requirements syntax).
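For flavor, the same two steps written with auto might look like this (a sketch against the older async 1.x API, where a dependent task's function receives (callback, results); it reuses loadHandler from above):
async.auto({
    // runs immediately
    file_list: function(done) {
        fetchFiles(source, function(list) {
            done(list ? null : 'failed to fetch file list', list);
        });
    },
    // runs only after file_list has completed successfully
    loaded: ['file_list', function(done, results) {
        async.map(results.file_list, loadHandler, done);
    }]
}, function(err, results) {
    if (err) return display(err);
    // results.loaded holds the [file, loaded?] tuples from loadHandler
});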
I had this issue with a webapp I'm working on and here's how I solved it (with no libraries).
Step 1: Wrote a very lightweight pubsub implementation. Nothing fancy. Subscribe, Unsubscribe, Publish and Log. Everything (with comments) adds up to 93 lines of Javascript, 2.7kb before gzip. (A minimal sketch of its shape appears after the technical details below.)
Step 2: Decoupled the process you were trying to accomplish by letting the pubsub implementation do the heavy lifting. Here's an example:
// listen for when files have been fetched and set up what to do when it comes in
pubsub.notification.subscribe(
    "processFetchedResults", // notification to subscribe to
    "fetchedFilesProcesser", // subscriber
    /* what to do when files have been fetched */
    function(params) {
        var file_list = params.notificationParams.file_list;
        for (file in file_list) { // iteration, not enumeration
            var data = loadFile(file);
            if (!data) {
                display('failed to load: ' + file);
            } else {
                display(data);
            }
        }
    }
);

// trigger fetch files
function fetchFiles(source) {
    // ajax call to source
    // on response code 200 publish "processFetchedResults"
    // set publish parameters as ajax call response
    pubsub.notification.publish("processFetchedResults", ajaxResponse, "fetchFilesFunction");
}
Of course this is very verbose in the setup and scarce on the magic behind the scenes.
Here's some technical details:
I'm using setTimeout to handle triggering subscriptions. This way they run in a non-blocking fashion.
The call is effectively decoupled from the processing. You can write a different subscription to the notification "processFetchedResults" and do multiple things once the response comes through (for example logging and processing) while keeping them in very separate, tiny and easily-managed code blocks.
The above code sample doesn't address fallbacks or run proper checks. I'm sure it will require a bit of tooling to get to production standards. Just wanted to show you how possible it is and how library-independent your solution can be.
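For the curious, a minimal pubsub of the shape described in Step 1 might look like this (an illustrative sketch, not the original 93-line version, which also has Unsubscribe and Log):
var pubsub = {};
pubsub.notification = (function() {
    var subscribers = {}; // notification name -> { subscriberName: handler }
    return {
        subscribe: function(notification, subscriber, fn) {
            (subscribers[notification] = subscribers[notification] || {})[subscriber] = fn;
        },
        publish: function(notification, params, publisher) {
            var subs = subscribers[notification] || {};
            Object.keys(subs).forEach(function(name) {
                // setTimeout makes delivery non-blocking, as noted above
                setTimeout(function() {
                    subs[name]({ notificationParams: params, publisher: publisher });
                }, 0);
            });
        }
    };
})();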
Cheers!

Loop calling an asynchronous function

Introduction to the problem
I need to call an asynchronous function within a loop until a condition is satisfied. This particular function sends a POST request to a website form.php and performs some operations with the response, which is a JSON string representing an object with an id field. So, when that id is null, the outer loop must conclude. The function does something like the following:
function asyncFunction(session) {
    (new Request({
        url: "form.php",
        content: "sess=" + session,
        onComplete: function(response) {
            var response = response.json;
            if (response.id) {
                doStaff(response.msg);
            } else {
                // Break loop
            }
        }
    })).get();
}
Note: although I ran into this problem while implementing a Firefox add-on, I think this is a general javascript question.
Implementing the loop recursively
I've tried implementing the loop recursively, but it didn't work, and I'm not sure this is the right way:
...
if (response.id) {
    doStaff(response.msg);
    asyncFunction(session);
} else {
    // Break loop
}
...
Using jsdeferred
I've also tried the jsdeferred library:
Deferred.define(this);

// Instantiate a new deferred object
var deferred = new Deferred();

// Main loop: stops when we receive the exception
Deferred.loop(1000, function() {
    asyncFunction(session, deferred);
    return deferred;
}).error(function() {
    console.log("Loop finished!");
});
And then calling:
...
if (response.id) {
    doStaff(response.msg);
    d.call();
} else {
    d.fail();
}
...
This achieves serialization, but it started repeating previous calls on every iteration. For example, on the third call to asyncFunction it would also repeat the calls from iterations 1 and 2, with their corresponding parameters.
Your question is not exactly clear, but the basic architecture must be that the completion event handlers for the asynchronous operation must decide whether to try again or to simply return. If the results of the operation warrant another attempt, then the handler should call the parent function. If not, then by simply exiting the cycle will come to an end.
You can't code something like this in JavaScript with anything that looks like a simple "loop" structure, for the very reason that the operations are asynchronous. The results of the operation don't happen in such a way as to allow the looping mechanism to perform a test on the results; the loop may run thousands of iterations before the result is available. To put it another way, you don't "wait" for an asynchronous operation with code. You wait by doing nothing, and allowing the registered event handler to take over when the results are ready.
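In code, that architecture is roughly the following (a sketch reusing the question's Request and doStaff; note that the else branch simply returns, which is how the cycle ends):
function asyncFunction(session) {
    (new Request({
        url: "form.php",
        content: "sess=" + session,
        onComplete: function(response) {
            if (response.json.id) {
                doStaff(response.json.msg);
                asyncFunction(session); // results warrant another attempt
            }
            // else: do nothing; no new request means the "loop" is over
        }
    })).get();
}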
Thank you guys for your help. This is what I ended up doing:
var sess = ...;

Deferred.define(this);

function asyncFunction(session) {
    Deferred.next(function() {
        var d = new Deferred();
        (new Request({
            url: "form.php",
            content: "sess=" + session,
            onComplete: function(response) {
                d.call(response.json);
            }
        })).get();
        return d;
    }).next(function(resp) {
        if (resp.id) {
            asyncFunction(session);
            console.log(resp.msg);
        }
    });
}

asyncFunction(sess);
Why wouldn't you just use a setInterval loop? In the case of an SDK-based extension, this would look like:
https://builder.addons.mozilla.org/addon/1065247/latest/
The big benefit of promises-like patterns over using timers is that you can do things in parallel, and use much more complicated dependencies for various tasks. A simple loop like this is done just as easily / neatly using setInterval.
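A sketch of that setInterval variant, reusing the question's Request (note that a response slower than the interval could overlap the next tick, which the recursive version avoids):
var timer = setInterval(function() {
    (new Request({
        url: "form.php",
        content: "sess=" + session,
        onComplete: function(response) {
            if (response.json.id) {
                doStaff(response.json.msg);
            } else {
                clearInterval(timer); // condition met: stop polling
            }
        }
    })).get();
}, 1000);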
If I correctly understand what you want to do, Deferred is a good approach. Here's an example using jQuery which has Deferred functionality built in (jQuery.Deferred)
A timeout is used to simulate an http request. When each timeout is complete (or http request is complete) a random number is returned which is equivalent to the result of your http request.
Based on the result of the request you can decide if you need another http request or want to stop.
Try out the below snippet. Include the jQuery file and then the snippet. It keeps printing values in the console and stops after a zero is reached.
This could take a while to understand, but it's useful.
$(function() {
    var MAXNUM = 9;

    function newAsyncRequest() {
        var def = $.Deferred(function(defObject) {
            setTimeout(function() {
                defObject.resolve(Math.floor(Math.random() * (MAXNUM + 1)));
            }, 1000);
        });
        def.done(function(val) {
            if (val !== 0)
                newAsyncRequest();
            console.log(val);
        });
    }

    newAsyncRequest();
});
Update after suggestion from #canuckistani
#canuckistani is correct in his answer. For this problem the solution is simpler. Without using Deferred the above code snippet becomes the following. Sorry I led you to a tougher solution.
$(function() {
    var MAXNUM = 9;

    function newAsyncRequest() {
        setTimeout(function() {
            var val = Math.floor(Math.random() * (MAXNUM + 1));
            if (val !== 0)
                newAsyncRequest();
            console.log(val);
        }, 1000);
    }

    newAsyncRequest();
});
