I'm working on a web application with Flask + Python on the back end and JavaScript on the front end. I'd like to take advantage of some of the more modern (ES6/ES7) features, such as Promises.
I currently write all my JavaScript with jQuery 3+. Most of the time I'm making a single AJAX request to the server at a time. I've been writing those requests with $.post plus .done() and .fail(), which I know is already promise-based, or at least promise-like. Most of my code follows this pattern:
do function setup stuff and checks
make single ajax request
on success
good status, run several success code bits
bad status, run failure code
on failure - run failure code
I always seem to have to account for both server failures and cases where the server succeeds but returns the wrong thing, which I usually signal with a status argument. I've been looking into the plain Promise syntax with then, catch, resolve, and reject, and I have some questions.
Is there any advantage to me switching to this format, from what I currently have, given my simple Ajax requests?
Can it be used to simplify the way I currently write my requests and handle my failure cases?
Here is a simple login example that I have, with a function that is called when a login button is clicked.
$('#loginsubmit').on('click', this, this.login);
// Login function
login() {
const form = $('#loginform').serialize();
$.post(Flask.url_for('index_page.login'), form, 'json')
.done((data)=>{
if (data.result.status < 0) {
// bad submit
this.resetLogin();
} else {
// good submit
if (data.result.message !== ''){
const stat = (data.result.status === 0) ? 'danger' : 'success';
const htmlstr = `<div class='alert alert-${stat}' role='alert'><h4>${data.result.message}</h4></div>`;
$('#loginmessage').html(htmlstr);
}
if (data.result.status === 1){
location.reload(true);
}
}
})
.fail((data)=>{ alert('Bad login attempt'); });
}
And here is a typical, more complex example. In this case, some interactive elements are initialized when a button is toggled on and off.
this.togglediv.on('change', this, this.initDynamic);
// Initialize the Dynamic Interaction upon toggle - makes loading an AJAX request
initDynamic(event) {
let _this = event.data;
if (!_this.togglediv.prop('checked')){
// Turning Off
_this.toggleOff();
} else {
// Turning On
_this.toggleOn();
// check for empty divs
let specempty = _this.graphdiv.is(':empty');
let imageempty = _this.imagediv.is(':empty');
let mapempty = _this.mapdiv.is(':empty');
// send the request if the dynamic divs are empty
if (imageempty) {
// make the form
let keys = ['plateifu', 'toggleon'];
let form = m.utils.buildForm(keys, _this.plateifu, _this.toggleon);
_this.toggleload.show();
$.post(Flask.url_for('galaxy_page.initdynamic'), form, 'json')
.done(function(data) {
let image = data.result.image;
let spaxel = data.result.spectra;
let spectitle = data.result.specmsg;
let maps = data.result.maps;
let mapmsg = data.result.mapmsg;
// Load the Image
_this.initOpenLayers(image);
_this.toggleload.hide();
// Try to load the spaxel
if (data.result.specstatus !== -1) {
_this.loadSpaxel(spaxel, spectitle);
} else {
_this.updateSpecMsg(`Error: ${spectitle}`, data.result.specstatus);
}
// Try to load the Maps
if (data.result.mapstatus !== -1) {
_this.initHeatmap(maps);
} else {
_this.updateMapMsg(`Error: ${mapmsg}`, data.result.mapstatus);
}
})
.fail(function(data) {
_this.updateSpecMsg(`Error: ${data.result.specmsg}`, data.result.specstatus);
_this.updateMapMsg(`Error: ${data.result.mapmsg}`, data.result.mapstatus);
_this.toggleload.hide();
});
}
}
}
I know this is already roughly using promises, but can I improve my code flow by switching to the Promise then/catch syntax? As you can see, I end up repeating a lot of failure-case code between real failures and "successful failures" (requests that succeed but return a bad status). Most of my code looks like this, but I've been having trouble converting it into something like
promise_ajax_call
.then(do real success)
.catch(all failure cases)
I always use Bluebird promises. They have a Promise.resolve function that you can use to wrap an AJAX call. One thing to know about promises: if you throw an error inside a then, it will be caught by a chained catch. One way to clean this up a bit might be something like this (keep in mind, this is pseudocode):
Promise.resolve($.ajax(...some properties..))
.then((data)=>{
if(data.result.status < 0){
//throw some error
}
// process the data how you need it
})
.catch((error)=>{
// either the ajax failed, or you threw an error in your then; either way, it ends up in this catch
});
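To make that concrete, here is a runnable sketch of the same pattern with native promises. fakeAjax is a stand-in for the real $.ajax call, so the names and payload shape are illustrative only:

```javascript
// fakeAjax stands in for $.ajax(...): it resolves with a status payload.
function fakeAjax(status) {
  return Promise.resolve({ result: { status: status } });
}

function request(status) {
  return Promise.resolve(fakeAjax(status))   // wraps any thenable
    .then((data) => {
      // an application-level bad status becomes a rejection, just like
      // a transport error, because throwing in a then rejects the chain
      if (data.result.status < 0) {
        throw new Error('bad status: ' + data.result.status);
      }
      return 'ok';
    })
    .catch((error) => 'caught: ' + error.message);
}

request(1).then(console.log);    // "ok"
request(-1).then(console.log);   // "caught: bad status: -1"
```

The payoff is exactly what the question asks for: one catch handles both the failed request and the "successful failure".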
Related
I would like to get the proper value of okLogin when the credentials are incorrect and stop the form from being submitted, but I've noticed that when I call the function, the return executes before the AJAX call completes, which seems backwards.
I want the AJAX function to set my okLogin variable to false whenever a mismatch happens.
<form action="test.html" onsubmit="return loginProcess();">
<label for="usernameLogin" class="col-sm-4 col-form-label">Username</label>
<input class="form-control" type="text" id="usernameLogin" required>
<label for="passwordLogin" class="col-sm-4 col-form-label">Password</label>
<input class="form-control" type="password" id="passwordLogin" required>
<input type="submit" class="btn btn-primary btn-block" id="loginSubmitBtn" value="Login">
</form>
<script>
function loginProcess() {
let users, usernameLogin, passwordLogin;
let ajaxCall = new XMLHttpRequest();
let okLogin = true;
usernameLogin = document.getElementById("usernameLogin").value;
passwordLogin = document.getElementById("passwordLogin").value;
ajaxCall.onreadystatechange = function() {
if (ajaxCall.readyState == 4 && ajaxCall.status == 200) {
users = JSON.parse(ajaxCall.responseText);
for (let item in users) {
if (item === "username") {
if (usernameLogin !== users["username"]) {
okLogin = false;
}
console.log("Username: " + okLogin);
}
if (item === "password") {
if (passwordLogin !== users["password"]) {
okLogin = false;
}
}
}
}
}
ajaxCall.open("GET", "docs/users.json", true);
ajaxCall.send();
return okLogin;
}
</script>
As written, your code will always return true: okLogin, which you initialize to true, never has a chance to be anything else. This is because ajaxCall.send() is asynchronous: it kicks off the AJAX call and returns immediately, without waiting for the call to succeed or fail. So the next thing that happens, every single time, is that you return the current value of okLogin, which is still true, because the code that might change it (your ajaxCall.onreadystatechange handler) only runs later, after the server responds. This is a common error with asynchronous methods like send(): they start something (here, an AJAX request) but don't wait for it to finish before returning.
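A minimal sketch of that ordering, with setTimeout standing in for the AJAX response arriving later (the log callback is only there so the sequence can be observed):

```javascript
// The function returns its flag before the asynchronous callback
// has any chance to flip it - the same trap as the XHR version.
function loginProcess(log) {
  let okLogin = true;
  setTimeout(() => {            // like onreadystatechange firing later
    okLogin = false;
    log('callback ran, okLogin = ' + okLogin);
  }, 0);
  log('returning okLogin = ' + okLogin);
  return okLogin;               // always true: the callback has not run yet
}

const order = [];
const result = loginProcess((msg) => order.push(msg));
// result is true, and at this point order only holds the "returning" line;
// the "callback ran" line is appended on a later tick of the event loop.
```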
One solution would be to make the AJAX call synchronous, by passing false instead of true as the last argument to open(). This will work, but it's a seriously bad idea; doing synchronous AJAX is deeply frowned upon, because it causes the user's browser to hang until a response is received. That could take a few milliseconds (not a problem) but if the response is slow in arriving, it could take a second (very bad), or several seconds (totally unacceptable).
I'd suggest this approach: change this...
<form action="test.html" onsubmit="return loginProcess();">
...to this:
<form class="main-form" action="test.html" onsubmit="loginProcess(); return false;">
In other words, when the user submits the form, call loginProcess(), which kicks off your AJAX call, then return false (which will, for now, keep the form from being submitted). We will now also ignore the return value of loginProcess(); it's not relevant. I've also added a class to your form, which we'll use in a minute.
Now, our strategy will be to have your callback (your ajaxCall.onreadystatechange code), which will be executed after a successful response, submit your form, only if okLogin is true. To do that, change your code as follows:
ajaxCall.onreadystatechange = function() {
if (ajaxCall.readyState == 4 && ajaxCall.status == 200) {
users = JSON.parse(ajaxCall.responseText);
for (let item in users) {
if (item === "username") {
if (usernameLogin !== users["username"]) {
okLogin = false;
}
console.log("Username: " + okLogin);
}
if (item === "password") {
if (passwordLogin !== users["password"]) {
okLogin = false;
}
}
}
// ***** Add the following three lines:
if (okLogin) {
document.querySelector('form.main-form').submit()
}
}
}
You could also use Promises (then(), etc.) but it's probably a good idea to understand callbacks before you tackle promises, if you're trying to understand asynchronous concepts.
Final note: you could also use async/await, but before doing that you'd want to understand Promises (and before that, callbacks). More importantly, async/await may not be available in all of your target browsers: not in IE, and not in earlier versions of Edge either.
Hope this helps!
This will not work because ajax requests are asynchronous. In other words, the code within onreadystatechange is only being defined to be executed later, once the ajax call has returned. That is why your method is returning before the ajax call completes.
I would also note that authentication is an important thing to get right, and I think you would do well to look at some examples of proper authentication. From the looks of it, you are returning a list of all usernames and passwords in plaintext in the call. That's a very insecure design. If you have any intentions of using this for anything other than an experiment, you should not do things this way. In fact, even if you are only using this as an experiment, I would suggest trying to do it a different way just to get the experience of doing things how they ought to be done.
At any rate, the way to make this work as it stands would be, instead of returning, take some action within the callback. It's not clear in the code you submitted exactly what will happen when the loginProcess method returns (from the looks of it, nothing happens). You could, perhaps, call some other method with your onreadystatechange function with a true or false, and let that function handle whatever ought to happen based on a successful or unsuccessful login.
Example:
ajaxCall.onreadystatechange = function() {
if (ajaxCall.readyState == 4 && ajaxCall.status == 200) {
[your code to check the login here]
someOtherFunction(okLogin)
}
}
Also, I would suggest you take a look at a more modern way of doing ajax requests, such as using fetch and promises.
Welcome to SO. Your problem is due to the nature of asynchronous JavaScript. Your XMLHTTPRequest takes some time to get a response. Because of that you need to wait before returning a value until the request has been finished. Right now you instantly return okLogin without the request having a chance to change it.
So you need to delay that. You can use async / await to wait for the request to finish and continue with your process. This is all based on Promises, an object that can chain callbacks together with the then method.
Instead of the XMLHTTPRequest constructor, try the Fetch API. It's a more modern version of the same principle, but with a simpler, more powerful workflow and it works with promises.
Now with async / await you can make your function asynchronous, meaning it will take some time to complete, but do something else when it is done. The await keyword can be put in front of any promise returning function, like fetch, to await the function to complete and get the value from it.
In your code it would look like this.
async function loginProcess() {
let usernameLogin, passwordLogin, response, users;
let okLogin = true;
usernameLogin = document.getElementById("usernameLogin").value;
passwordLogin = document.getElementById("passwordLogin").value;
response = await fetch("docs/users.json");
users = await response.json();
for (let item in users) {
if (item === "username" && usernameLogin !== users[item]) {
okLogin = false;
} else if (item === "password" && passwordLogin !== users[item]) {
okLogin = false;
}
}
return okLogin;
}
loginProcess().then((loginStatus) => {
console.log(loginStatus)
});
The function fetches your data with fetch, waits for it to finish, then turns the JSON result into a workable value and continues from there.
And because the loginProcess itself is asynchronous, it returns a Promise. Read up on those how you can use them. They are a very powerful feature of JavaScript.
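One caveat the sketch above glosses over: fetch() only rejects on network failure, so it is worth checking response.ok and wrapping the awaits in try/catch. Here is a hedged sketch of that, with fakeFetch as a stand-in for the real fetch call (the URL and payload are illustrative):

```javascript
// Stand-in for fetch(): resolves with a minimal Response-like object.
function fakeFetch(url) {
  return Promise.resolve({
    ok: true,
    json: () => Promise.resolve({ username: 'alice', password: 'secret' }),
  });
}

async function loadUsers(fetchFn) {
  try {
    const response = await fetchFn('docs/users.json');
    if (!response.ok) {            // HTTP-level error, e.g. 404
      throw new Error('HTTP error');
    }
    return await response.json();
  } catch (err) {
    // network failure, bad HTTP status, or invalid JSON all land here
    return null;
  }
}

loadUsers(fakeFetch).then((users) => {
  console.log(users);              // the parsed user object
});
```

Swapping fakeFetch for the real fetch gives you one place where every failure mode is handled.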
Also pay attention to #bodacious, who makes some very good points regarding the security concerns in your code. Do your validation server-side, where the code is invisible to the user.
I need to do a main AJAX form submit. However, I want to perform a series of other preliminary form submits and AJAX requests along the way, before continuing with the main form submit.
Below is the idea, but with a lot of pseudocode. I want to call the ajaxFunction as shown, complete all its tasks, then proceed with the main form submission:
$('#mainform').submit(function(e){
e.preventDefault();
var main = this;
var data = $('#section :input', main).serialize();
//preliminary nested ajax requests
var mainresult = ajaxFunction('arg1', 'arg2');
alert("All preliminary AJAX done, proceeding...");
if(mainresult){
//final ajax
$.post('mainurl', data, function(result){
console.log(result);
});
}else{
//do nothing
}
});
function ajaxFunction(param1, param2){
//ajax1
ajaxFetchingFunction1('url1', function(){
//ajax2
ajaxFetchingFunction2('url2', function(){
//submit handler
$('#anotherform').submit(function(){
if(someparam === 1){
return true;
}else{
return false;
}
});
});
});
}
As it is now, I know it won't work as expected because of all the asynchronous nested AJAX calls. What I get is that alert("All preliminary AJAX done, proceeding..."); executes even before any of the AJAX calls in ajaxFunction.
I believe that this is just the kind of scenario ("callback hell") for which the Deferred/Promise concept was introduced, but I've been struggling to wrap my head around this. How can I structure these different AJAX requests, such that code execution would wait until ajaxFunction completes and returns mainresult for subsequent use?
How can I structure these different AJAX requests, such that code
execution would wait until ajaxFunction completes and returns
mainresult for subsequent use?
You can't and you don't. Javascript will not "wait" for an asynchronous operation to complete. Instead, you move the code that wants to run after the async operation is done into a callback that is then called when the async operation is done. This is true whether using plain async callbacks or structured callbacks that are part of promises.
Asynchronous programming in Javascript requires a rethinking and restructing of the flow of control so that things that you want to run after an async operation is done are put into a callback function rather than just sequentially on the next line of code. Async operations are chained in sequence through a series of callbacks. Promises are a means of simplifying the management of those callbacks and particularly simplifying the propagation of errors and/or the synchronization of multiple async operations.
If you stick with callbacks, then you can communicate completion of ajaxFunction() with a completion callback:
function ajaxFunction(param1, param2, doneCallback){
//ajax1
ajaxFetchingFunction1('url1', function(){
//ajax2
ajaxFetchingFunction2('url2', function(){
doneCallback(someResult);
});
});
}
And, then use it here:
$('#mainform').submit(function(e){
e.preventDefault();
var main = this;
var data = $('#section :input', main).serialize();
//preliminary nested ajax requests
ajaxFunction('arg1', 'arg2', function(result) {
// process result here
alert("All preliminary AJAX done, proceeding...");
if(result){
//final ajax
$.post('mainurl', data, function(result){
console.log(result);
});
}else{
//do nothing
}
});
});
Note: I removed your $('#anotherform').submit() from the code because inserting an event handler in a function that will be called repeatedly is probably the wrong design here (since it ends up creating multiple identical event handlers). You can insert it back if you're sure it's the right thing to do, but it looked wrong to me.
This would generally be a great place to use promises, but your code is a bit abstract to show you exactly how to use promises. We would need to see the real code for ajaxFetchingFunction1() and ajaxFetchingFunction2() to illustrate how to make this work with promises since those async functions would need to create and return promises. If you're using jQuery ajax inside of them, then that will be easy because jQuery already creates a promise for an ajax call.
If both ajaxFetchingFunction1() and ajaxFetchingFunction2() are modified to return a promise, then you can do something like this:
function ajaxFunction(param1, param2){
return ajaxFetchingFunction1('url1').then(function() {
return ajaxFetchingFunction2('url2');
});
}
And, then use it here:
$('#mainform').submit(function(e){
e.preventDefault();
var main = this;
var data = $('#section :input', main).serialize();
//preliminary nested ajax requests
ajaxFunction('arg1', 'arg2').then(function(result) {
// process result here
alert("All preliminary AJAX done, proceeding...");
if(result){
//final ajax
$.post('mainurl', data, function(result){
console.log(result);
});
}else{
//do nothing
}
});
});
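The promise-based version can be exercised end to end with stubbed fetching functions. This is only a sketch; the stubs stand in for the real jQuery AJAX calls, and the string payloads are illustrative:

```javascript
// Promise-returning stand-ins for the two jQuery fetching functions.
function ajaxFetchingFunction1(url) {
  return Promise.resolve(url + ' done');
}

function ajaxFetchingFunction2(url) {
  return Promise.resolve(url + ' done');
}

function ajaxFunction(param1, param2) {
  // returning the inner promise from then() sequences the two requests:
  // the second does not start until the first has resolved
  return ajaxFetchingFunction1('url1').then(function (first) {
    return ajaxFetchingFunction2('url2').then(function (second) {
      return [first, second];   // the final resolved value of ajaxFunction()
    });
  });
}

ajaxFunction('arg1', 'arg2').then(function (results) {
  console.log(results);         // [ 'url1 done', 'url2 done' ]
});
```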
Promises make the handling of multiple ajax requests really trivial, however the implications of "partial forms" on GUI design are maybe more of a challenge. You have to consider things like :
One form divided into sections, or one form per partial?
Show all partials at the outset, or reveal them progressively?
Lock previously validated partials to prevent meddling after validation?
Revalidate all partials at each stage, or just the current partial?
One overall submit button or one per per partial?
How should the submit button(s) be labelled (to help the user understand the process he is involved in)?
Let's assume (as is the case for me but maybe not the OP) that we don't know the answers to all those questions yet, but that they can be embodied in two functions - validateAsync() and setState(), both of which accept a stage parameter.
That allows us to write a generalised master routine that will cater for as yet unknown validation calls and a variety of GUI design decisions.
The only real assumption needed at this stage is the selector for the form/partials. Let's assume it/they all have class="partialForm" :
$('.partialForm').on('submit', function(e) {
e.preventDefault();
$.when(setState(1)) // set the initial state, before any validation has occurred.
.then(validateAsync.bind(null, 1)).then(setState.bind(null, 2))
.then(validateAsync.bind(null, 2)).then(setState.bind(null, 3))
.then(validateAsync.bind(null, 3)).then(setState.bind(null, 4))
.then(function aggregateAndSubmit() {
var allData = ....; // here aggregate all three forms' data into one serialization.
$.post('mainurl', allData, function(result) {
console.log(result);
});
}, function(error) {
console.log('validation failed at stage: ' + error.message);
// on screen message for user ...
return $.when(); //inhibit .fail() handler below.
})
.fail(function(error) {
console.log(error);
// on screen message for user ...
});
});
It's syntactically convenient here to call setState() as a then callback even though it is (probably) synchronous.
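That convenience works because a plain return value from a then callback is wrapped in a resolved promise, so synchronous and asynchronous steps mix freely in one chain. A tiny runnable illustration, with trivial stand-ins for setState() and validateAsync():

```javascript
// setState() is synchronous: it just returns the stage it was given.
function setState(stage) {
  return stage;
}

// validateAsync() is asynchronous: a promise stands in for the AJAX call.
function validateAsync(stage) {
  return Promise.resolve(stage);
}

// Sync and async callbacks interleave in one chain; each then() sees
// the previous step's value, wrapped in a promise when necessary.
Promise.resolve(setState(1))
  .then(() => validateAsync(1))
  .then((s) => setState(s + 1))
  .then((s) => console.log('reached stage ' + s));   // "reached stage 2"
```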
Sample validateAsync() :
function validateAsync(stage) {
var data, jqXHR;
switch(stage) {
case 1:
data = $("#form1").serialize();
jqXHR = $.ajax(...);
break;
case 2:
data = $("#form2").serialize();
jqXHR = $.ajax(...);
break;
case 3:
data = $("#form3").serialize();
jqXHR = $.ajax(...);
}
return jqXHR.then(null, function() {
// throwing keeps the promise rejected; note that in jQuery 3+,
// returning a value here would convert the failure into a success
throw new Error(stage);
});
}
Sample setState() :
function setState(stage) {
switch(stage) {
case 1: //initial state, ready for input into form1
$("#form1").disableForm(false);
$("#form2").disableForm(true);
$("#form3").disableForm(true);
break;
case 2: //form1 validated, ready for input into form2
$("#form1").disableForm(true);
$("#form2").disableForm(false);
$("#form3").disableForm(true);
break;
case 3: //form1 and form2 validated, ready for input into form3
$("#form1").disableForm(true);
$("#form2").disableForm(true);
$("#form3").disableForm(false);
break;
case 4: //form1, form2 and form3 validated, ready for final submission
$("#form1").disableForm(true);
$("#form2").disableForm(true);
$("#form3").disableForm(true);
}
return stage;
}
As written setState(), will need the jQuery plugin .disableForm() :
jQuery.fn.disableForm = function(bool) {
return this.each(function(i, form) {
if(!$(form).is("form")) return true; // continue
$(form.elements).each(function(i, el) {
el.readOnly = bool;
});
});
}
As I say, validateAsync() and setState() above are just rudimentary samples. As a minimum, you will need to :
flesh out validateAsync()
modify setState() to reflect the User Experience of your choice.
I'm using Google App Engine with Java and Google Cloud Endpoints. In my JavaScript front end, I'm using this code to handle initialization, as recommended:
var apisToLoad = 2;
var url = '//' + $window.location.host + '/_ah/api';
gapi.client.load('sd', 'v1', handleLoad, url);
gapi.client.load('oauth2', 'v2', handleLoad);
function handleLoad() {
// this only executes once,
if (--apisToLoad === 0) {
// so this is not executed
}
}
How can I detect and handle when gapi.client.load fails? Currently I am getting an error printed to the JavaScript console that says: Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html). Maybe that's my fault, or maybe it's a temporary problem on Google's end - right now that is not my concern. I'm trying to take advantage of this opportunity to handle such errors well on the client side.
So - how can I handle it? handleLoad is not executed for the call that errs, gapi.client.load does not seem to have a separate error callback (see the documentation), it does not actually throw the error (only prints it to the console), and it does not return anything. What am I missing? My only idea so far is to set a timeout and assume there was an error if initialization doesn't complete after X seconds, but that is obviously less than ideal.
Edit:
This problem came up again, this time with the message ERR_CONNECTION_TIMED_OUT when trying to load the oauth stuff (which is definitely out of my control). Again, I am not trying to fix the error, it just confirms that it is worth detecting and handling gracefully.
I know this is old but I came across this randomly. You can easily test for a fail (at least now).
Here is the code:
gapi.client.init({}).then(() => {
gapi.client.load('some-api', "v1", (err) => { callback(err) }, "https://someapi.appspot.com/_ah/api");
}, err, err);
function callback(loadErr) {
if (loadErr) { err(loadErr); return; }
// success code here
}
function err(err){
console.log('Error: ', err);
// fail code here
}
Example
Unfortunately, the documentation is pretty useless here, and it's not exactly easy to debug the code in question. What gapi.client.load() apparently does is insert an <iframe> element for each API. That frame then provides the necessary functionality and allows access to it via postMessage(). From the look of it, the API doesn't attach a load event listener to that frame and instead relies on the frame itself to signal that it is ready (which is what triggers the callback). So the missing error callback is an inherent issue: the API cannot see a failure, because no frame will be there to signal it.
From what I can tell, the best thing you can do is attaching your own load event listener to the document (the event will bubble up from the frames) and checking yourself when they load. Warning: While this might work with the current version of the API, it is not guaranteed to continue working in future as the implementation of that API changes. Currently something like this should work:
var framesToLoad = apisToLoad;
document.addEventListener("load", function(event)
{
if (event.target.localName == "iframe")
{
framesToLoad--;
if (framesToLoad == 0)
{
// Allow any outstanding synchronous actions to execute, just in case
window.setTimeout(function()
{
if (apisToLoad > 0)
alert("All frames are done but not all APIs loaded - error?");
}, 0);
}
}
}, true);
Just to repeat the warning from above: this code makes lots of assumptions. While these assumptions might stay true for a while with this API, it might also be that Google will change something and this code will stop working. It might even be that Google uses a different approach depending on the browser, I only tested in Firefox.
This is an extremely hacky way of doing it, but you could intercept all console messages, check what is being logged, and if it is the error message you care about it, call another function.
function interceptConsole(){
var errorMessage = 'Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html';
var console = window.console
if (!console) return;
function intercept(method){
var original = console[method];
console[method] = function() {
if (arguments[0] == errorMessage) {
alert("Error Occured");
}
if (original.apply){
original.apply(console, arguments)
}
else {
//IE
var message = Array.prototype.slice.apply(arguments).join(' ');
original(message)
}
}
}
var methods = ['log', 'warn', 'error'];
for (var i = 0; i < methods.length; i++) {
intercept(methods[i]);
}
}
interceptConsole();
console.log('Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html');
//alerts "Error Occured", then logs the message
console.log('Found it');
//just logs "Found It"
An example is here - I log two things, one is the error message, the other is something else. You'll see the first one cause an alert, the second one does not.
http://jsfiddle.net/keG7X/
You probably would have to run the interceptConsole function before including the gapi script, as it may make its own copy of console.
Edit - I use a version of this code myself, but just remembered it's from here, so giving credit where it's due.
I use a setTimeout to manually trigger error if the api hasn't loaded yet:
console.log(TAG + 'api loading...');
// resolve/reject come from the enclosing Promise executor:
const apiLoaded = new Promise((resolve, reject) => {
let timer = setTimeout(() => {
// Handle error
reject('timeout');
console.error(TAG + 'api loading error: timeout');
}, 1000); // time till timeout
let callback = () => {
clearTimeout(timer);
// api has loaded, continue your work
console.log(TAG + 'api loaded');
resolve(gapi.client.apiName);
};
gapi.client.load('apiName', 'v1', callback, apiRootUrl);
});
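The same timeout idea can be packaged as a reusable wrapper with Promise.race, which settles with whichever finishes first, the load or the timer. This is a sketch: slowLoad and fastLoad merely simulate a promisified gapi.client.load.

```javascript
// Reject a promise if it has not settled within ms milliseconds.
// (A production version might also clearTimeout on success.)
function withTimeout(promise, ms) {
  const timer = new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error('timeout')), ms);
  });
  return Promise.race([promise, timer]);
}

// Demo: a load that takes too long is rejected by the timer...
const slowLoad = new Promise((resolve) => setTimeout(resolve, 50, 'api'));
withTimeout(slowLoad, 10).catch((err) => console.log(err.message)); // "timeout"

// ...while a fast load wins the race and resolves normally.
const fastLoad = Promise.resolve('api');
withTimeout(fastLoad, 10).then((name) => console.log(name));        // "api"
```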
As an example, suppose I want to fetch a list of files from somewhere, then load the contents of these files and finally display them to the user. In a synchronous model, it would be something like this (pseudocode):
var file_list = fetchFiles(source);
if (!file_list) {
display('failed to fetch list');
} else {
for (file in file_list) { // iteration, not enumeration
var data = loadFile(file);
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
}
}
This provides decent feedback to the user and I can move pieces of code into functions if I so deem necessary. Life is simple.
Now, to crush my dreams: fetchFiles() and loadFile() are actually asynchronous. The easy way out is to make them synchronous, but that is no good, because it locks up the browser while it waits for calls to complete.
How can I handle multiple interdependent and/or layered asynchronous calls without delving deeper and deeper into an endless chain of callbacks, in classic reductio ad spaghettum fashion? Is there a proven paradigm to cleanly handle these while keeping code loosely coupled?
Deferreds are really the way to go here. They capture exactly what you (and a whole lot of async code) want: "go away and do this potentially expensive thing, don't bother me in the meantime, and then do this when you get back."
And you don't need jQuery to use them. An enterprising individual has ported Deferred to underscore, and claims you don't even need underscore to use it.
So your code can look like this:
function fetchFiles(source) {
var dfd = _.Deferred();
// do some kind of thing that takes a long time
doExpensiveThingOne({
source: source,
complete: function(files) {
// this informs the Deferred that it succeeded, and passes
// `files` to all its success ("done") handlers
dfd.resolve(files);
// if you know how to capture an error condition, you can also
// indicate that with dfd.reject(...)
}
});
return dfd;
}
function loadFile(file) {
// same thing!
var dfd = _.Deferred();
doExpensiveThingTwo({
file: file,
complete: function(data) {
dfd.resolve(data);
}
});
return dfd;
}
// and now glue it together
_.when(fetchFiles(source))
.done(function(files) {
for (let file in files) { // let, so each iteration's callbacks see their own file
_.when(loadFile(file))
.done(function(data) {
display(data);
})
.fail(function() {
display('failed to load: ' + file);
});
}
})
.fail(function() {
display('failed to fetch list');
});
The setup is a little wordier, but once you've written the code to handle the Deferred's state and stuffed it off in a function somewhere you won't have to worry about it again, you can play around with the actual flow of events very easily. For example:
var file_dfds = [];
for (var file in files) {
file_dfds.push(loadFile(file));
}
_.when(file_dfds)
.done(function(datas) {
// this will only run if and when ALL the files have successfully
// loaded!
});
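With native promises, that "wait for all the files" step maps onto Promise.all, which resolves with all the results once every load succeeds and rejects as soon as any one fails. A sketch with a stubbed loadFile (the file names and contents are illustrative):

```javascript
// Stand-in for an asynchronous file load.
function loadFile(file) {
  return Promise.resolve('contents of ' + file);
}

const files = ['a.txt', 'b.txt', 'c.txt'];

Promise.all(files.map(loadFile))
  .then((datas) => {
    // this will only run if and when ALL the files have loaded,
    // and datas preserves the order of the input array
    console.log(datas.length + ' files loaded');    // "3 files loaded"
  })
  .catch(() => {
    console.log('at least one file failed');
  });
```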
Events
Maybe using events is a good idea. It keeps you from building deeply nested code trees and decouples your code.
I've used bean as the framework for events.
Example pseudo code:
// async request for files
function fetchFiles(source) {
IO.get(..., function (data, status) {
if(data) {
bean.fire(window, 'fetched_files', data);
} else {
bean.fire(window, 'fetched_files_fail', data, status);
}
});
}
// handler for when we get data
function onFetchedFiles (event, files) {
for (file in files) {
var data = loadFile(file);
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
}
}
// handler for failures
function onFetchedFilesFail (event, status) {
display('Failed to fetch list. Reason: ' + status);
}
// subscribe the window to these events
bean.on(window, 'fetched_files', onFetchedFiles);
bean.on(window, 'fetched_files_fail', onFetchedFilesFail);
fetchFiles(source);
Custom events and this kind of event handling is implemented in virtually all popular JS frameworks.
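For illustration, the same decoupling works without any library at all. The hand-rolled emitter below is only a sketch of what bean provides (the event names match the pseudocode above; the shown array stands in for display()):

```javascript
// Minimal publish/subscribe emitter: on() registers handlers,
// fire() invokes every handler registered under that event name.
const emitter = {
  handlers: {},
  on(name, fn) {
    (this.handlers[name] = this.handlers[name] || []).push(fn);
  },
  fire(name, ...args) {
    (this.handlers[name] || []).forEach((fn) => fn(...args));
  },
};

const shown = [];
emitter.on('fetched_files', (files) => {
  files.forEach((f) => shown.push(f));       // stand-in for display(data)
});
emitter.on('fetched_files_fail', (status) => {
  shown.push('failed: ' + status);           // stand-in for the error path
});

emitter.fire('fetched_files', ['a.txt', 'b.txt']);
console.log(shown);   // [ 'a.txt', 'b.txt' ]
```

The fetching code only ever fires events; the display code only ever listens, which is exactly the decoupling being described.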
Sounds like you need jQuery Deferred. Here is some untested code that might help point you in the right direction:
$.when(fetchFiles(source)).then(function(file_list) {
if (!file_list) {
display('failed to fetch list');
} else {
for (file in file_list) {
$.when(loadFile(file)).then(function(data){
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
});
}
}
});
I also found another decent post which gives a few uses cases for the Deferred object
If you do not want to use jQuery, what you could use instead are web workers in combination with synchronous requests. Web workers are supported across every major browser with the exception of any Internet Explorer version before 10.
Web Worker browser compatibility
Basically, if you're not entirely certain what a web worker is, think of it as a way for browsers to execute specialized JavaScript on a separate thread without impacting the main thread (Caveat: On a single-core CPU, both threads will run in an alternating fashion. Luckily, most computers nowadays come equipped with dual-core CPUs). Usually, web workers are reserved for complex computations or some intense processing task. Just keep in mind that any code within the web worker CANNOT reference the DOM nor can it reference any global data structures that have not been passed to it. Essentially, web workers run independent of the main thread. Any code that the worker executes should be kept separate from the rest of your JavaScript code base, within its own JS file. Furthermore, if the web workers need specific data in order to properly work, you need to pass that data into them upon starting them up.
Yet another important thing worth noting is that any JS libraries you need in order to load the files must be made available inside the worker itself, either by calling importScripts('path/to/lib.js') at the top of the worker file (the standard mechanism workers provide for this), or by copying the minified library code directly into the top of that file.
Anyway, I decided to write up a basic template to show you how to approach this. Check it out below. Feel free to ask questions/criticize/etc.
On the JS file that you want to keep executing on the main thread, you want something like the following code below in order to invoke the worker.
function startWorker(dataObj)
{
    var message = {},
        worker;
    try
    {
        worker = new Worker('workers/getFileData.js');
    }
    catch(error)
    {
        // handle the error here (e.g. workers unsupported)
    }
    message.data = dataObj;
    // all data is communicated to the worker in JSON format
    message = JSON.stringify(message);
    // This is the function that will handle all data returned by the worker
    // (note: the property name is all-lowercase "onmessage")
    worker.onmessage = function(e)
    {
        display(JSON.parse(e.data));
    };
    worker.postMessage(message);
}
Then, in a separate file meant for the worker (as you can see in the code above, I named my file getFileData.js), write something like the following...
function fetchFiles(source)
{
    // Put your code here
    // Synchronous requests are fine inside a worker, since they block
    // only the worker's thread, not the main thread
}
function loadFile(file)
{
    // Put your code here
    // Synchronous requests are fine inside a worker, since they block
    // only the worker's thread, not the main thread
}
onmessage = function(e)
{
    var response = [],
        data = JSON.parse(e.data),
        file_list = fetchFiles(data.source),
        file, fileData;
    if (!file_list)
    {
        response.push('failed to fetch list');
    }
    else
    {
        for (file in file_list)
        { // iteration, not enumeration
            fileData = loadFile(file);
            if (!fileData)
            {
                response.push('failed to load: ' + file);
            }
            else
            {
                response.push(fileData);
            }
        }
    }
    response = JSON.stringify(response);
    postMessage(response);
    close();
};
PS: Also, I dug up another thread which would better help you understand the pros and cons of using synchronous requests in combination with web workers.
Stack Overflow - Web Workers and Synchronous Requests
async is a popular asynchronous flow control library often used with node.js. I've never personally used it in the browser, but apparently it works there as well.
This example would (theoretically) run your two functions, returning an array of all the filenames and their load status. async.map runs in parallel, while waterfall is a series, passing the results of each step on to the next.
I am assuming here that your two async functions accept callbacks. If they do not, I'd require more info as to how they're intended to be used (do they fire off events on completion? etc).
async.waterfall([
    function (done) {
        fetchFiles(source, function(list) {
            if (!list) done('failed to fetch file list');
            else done(null, list);
        });
        // alternatively you could simply fetchFiles(source, done) here, and handle
        // the null result in the next function.
    },
    function (file_list, done) {
        // async.map's iterator receives (item, callback), so no memo argument
        var loadHandler = function (file, cb) {
            loadFile(file, function(data) {
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
                // if any of the callbacks to `map` returned an error, it would halt
                // execution and pass that error to the final callback. So we don't pass
                // an error here, but rather a tuple of the file and load result.
                cb(null, [file, !!data]);
            });
        };
        async.map(file_list, loadHandler, done);
    }
], function(err, result) {
    if (err) return display(err);
    // All files loaded! (or failed to load)
    // result would be an array of tuples like [[file, bool file loaded?], ...]
});
waterfall accepts an array of functions and executes them in order, passing the result of each along as the arguments to the next, along with a callback function as the last argument, which you call with either an error, or the resulting data from the function.
You could of course add any number of different async callbacks between or around those two, without having to change the structure of the code at all. waterfall is actually only 1 of 10 different flow control structures, so you have a lot of options (although I almost invariably end up using auto, which allows you to mix parallel and series execution in the same function via a Makefile-like requirements syntax).
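If the waterfall idea is new, the core mechanic is small enough to sketch by hand. This is my own illustrative version, not the async library's actual implementation: each task receives the previous step's results plus a done callback, and the first error short-circuits to the final callback.

```javascript
// Minimal waterfall: tasks run in series, each passing its results forward.
function waterfall(tasks, finalCallback) {
  function runStep(index, args) {
    if (index === tasks.length) {
      // all steps succeeded; hand the last results to the final callback
      return finalCallback(null, ...args);
    }
    tasks[index](...args, function done(err, ...results) {
      if (err) return finalCallback(err); // short-circuit on first error
      runStep(index + 1, results);
    });
  }
  runStep(0, []);
}

let outcome;
waterfall([
  (done) => done(null, 2),          // produce a value
  (n, done) => done(null, n * 3),   // transform it
  (n, done) => done(null, n + 1),   // transform it again
], (err, result) => {
  outcome = err ? err : result;     // 2 * 3 + 1 = 7
});
```

The library version adds deferral, argument handling edge cases, and the nine other control structures on top of this same skeleton.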
I had this issue with a webapp I'm working on and here's how I solved it (with no libraries).
Step 1: Wrote a very lightweight pubsub implementation. Nothing fancy. Subscribe, Unsubscribe, Publish and Log. Everything (with comments) adds up to 93 lines of JavaScript. 2.7kb before gzip.
Step 2: Decoupled the process you were trying to accomplish by letting the pubsub implementation do the heavy lifting. Here's an example:
// listen for when files have been fetched and set up what to do when it comes in
pubsub.notification.subscribe(
    "processFetchedResults", // notification to subscribe to
    "fetchedFilesProcesser", // subscriber
    /* what to do when files have been fetched */
    function(params) {
        var file_list = params.notificationParams.file_list;
        for (file in file_list) { // iteration, not enumeration
            var data = loadFile(file);
            if (!data) {
                display('failed to load: ' + file);
            } else {
                display(data);
            }
        }
    }
);
// trigger fetch files
function fetchFiles(source) {
    // ajax call to source
    // on response code 200 publish "processFetchedResults"
    // set publish parameters as ajax call response
    pubsub.notification.publish("processFetchedResults", ajaxResponse, "fetchFilesFunction");
}
Of course this is very verbose in the setup and scarce on the magic behind the scenes.
Here's some technical details:
I'm using setTimeout to handle triggering subscriptions. This way they run in a non-blocking fashion.
The call is effectively decoupled from the processing. You can write a different subscription to the notification "processFetchedResults" and do multiple things once the response comes through (for example logging and processing) while keeping them in very separate, tiny and easily-managed code blocks.
The above code sample doesn't address fallbacks or run proper checks. I'm sure it will require a bit of tooling to get to production standards. Just wanted to show you how possible it is and how library-independent your solution can be.
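To make the shape of such an implementation concrete, here is a stripped-down pubsub sketch. It is my own, not the 93-line version described above, and the method and parameter names are assumptions; it does show the setTimeout trick for non-blocking delivery:

```javascript
// Minimal pubsub: a list of subscribers per notification name,
// with handlers delivered asynchronously so publish() never blocks.
const pubsub = {
  channels: {},
  subscribe(notification, subscriber, handler) {
    (this.channels[notification] = this.channels[notification] || [])
      .push({ subscriber, handler });
  },
  publish(notification, params, publisher) {
    (this.channels[notification] || []).forEach((sub) => {
      // defer each handler to the next tick of the event loop
      setTimeout(() => sub.handler({ notificationParams: params, publisher }), 0);
    });
  },
};

const seen = [];
pubsub.subscribe('processFetchedResults', 'logger', (p) => {
  seen.push(p.notificationParams.file_list.length);
});
pubsub.publish('processFetchedResults', { file_list: ['a', 'b'] }, 'fetchFilesFunction');
```

Because delivery is deferred, the publisher returns immediately and every subscriber runs on its own timer callback.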
Cheers!
I have a simple Javascript function:
makeRequest();
It does a bunch of stuff and places a bunch of content into the DOM.
I make a few calls like so:
makeRequest('food');
makeRequest('shopping');
However, they both fire so quickly that they are stepping on each other's toes. Ultimately I need the following behavior:
makeRequest('food');
wait....
makeRequest('shopping'); only if makeRequest('food') has finished
Thoughts on getting these to execute only one at a time?
Thanks!
If these functions actually do an AJAX request, you are better off keeping them asynchronous. You can make a synchronous AJAX request, but it will stop the browser from responding and lead to a bad user experience.
If what you require is that these AJAX requests are made one after the other because they depend on each other, you should investigate whether your function provides a callback mechanism.
makeRequest('food', function()
{
    // called when food request is done
    makeRequest('shopping');
});
Using jQuery, it looks something like this:
$.get("/food", function(food)
{
    // do something with food
    $.get("/shopping", function(shopping)
    {
        // do something with shopping
    });
});
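Since jQuery 3's AJAX methods return Promises/A+ compliant promises, that nesting can also be flattened into a chain. A sketch with hypothetical promise-returning helpers (getFood and getShopping stand in for the real $.get calls):

```javascript
// Hypothetical promise-returning request helpers standing in for $.get.
function getFood() { return Promise.resolve('pizza'); }
function getShopping() { return Promise.resolve('groceries'); }

const log = [];
const chain = getFood()
  .then((food) => {
    log.push('got ' + food);
    return getShopping();  // returning a promise sequences the next .then
  })
  .then((shopping) => {
    log.push('got ' + shopping);
  })
  .catch((err) => {
    // one catch handles a failure at either step
    log.push('request failed: ' + err);
  });
```

The second request still waits for the first, but the code reads top to bottom instead of nesting deeper with each dependent call.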
I would recommend that you simply write them asynchronously: for example, call makeRequest('shopping'); from the AJAX completion handler of the first call.
If you do not want to write your code asynchronously, see Javascript Strands
I suppose that you have a callback method that takes care of the response for the request? Once it has done that, let it make the next request.
Declare an array for the queue, and a flag to keep track of the status:
var queue = [], requestRunning = false;
In the makeRequest method:
if (requestRunning) {
    queue.push(requestParameter);
} else {
    requestRunning = true;
    // do the request
}
In the callback method, after taking care of the response:
if (queue.length > 0) {
    var requestParameter = queue.shift();
    // do the request
} else {
    requestRunning = false;
}
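Putting those pieces together, here is a self-contained sketch of the queue-and-flag pattern. The doRequest function below is a stand-in that simulates an AJAX call with setTimeout:

```javascript
var queue = [], requestRunning = false, completed = [];

// stand-in for the real AJAX call; invokes its callback asynchronously
function doRequest(param, callback) {
  setTimeout(function () { callback(param); }, 10);
}

function makeRequest(param) {
  if (requestRunning) {
    queue.push(param);       // a request is in flight; wait our turn
  } else {
    requestRunning = true;
    doRequest(param, onResponse);
  }
}

// the callback method: take care of the response, then drain the queue
function onResponse(param) {
  completed.push(param);
  if (queue.length > 0) {
    doRequest(queue.shift(), onResponse);
  } else {
    requestRunning = false;
  }
}

makeRequest('food');      // starts immediately
makeRequest('shopping');  // queued until 'food' completes
```

Callers never need to know about the queue; makeRequest keeps its original one-argument shape and the serialization happens internally.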