Returning the data from a deferred? - javascript

I have a class that uses Google's places service. A user can enter an address and Google will return information about it.
Later on I wish to find out the lat and lng coordinates of this place, so I have this method, which utilizes Google's places service to get the coords.
I return a deferred as this may take some time.
p.getLatLng = function() {
    var dfd = $.Deferred();
    this.placesService.getDetails({
        reference: this.pacReference
    }, function(details, status) {
        if (details) {
            dfd.resolve({
                'lat': details.geometry.location.lat(),
                'lng': details.geometry.location.lng()
            });
        } else {
            dfd.reject();
        }
    });
    return dfd;
};
I want to be able to call the above method and get back just the coords, or null (if the dfd is rejected), but the method returns a deferred.
How can I just return the result of the dfd rather than the dfd itself?
I do not wish to have to call:
this.geo.getLatLng().done(function(data){ console.log(data); });
But something like this:
console.log(this.geo.getLatLng());

I get your point, but promises exist for a reason: the asynchronous nature of asking for data.
There is a way; I used to think it was good before I understood the goal of promises. You could return a reference to the 'to be populated' data, but then, when will you be able to use it? Are you planning on polling the state of an object...? I hope not. Seriously, stick to promises; you will avoid a lot of problems for the small cost of a few extra keystrokes.
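For contrast, here is a minimal sketch of consuming the method through its deferred as intended (assuming getLatLng returns the deferred built in the question):
this.geo.getLatLng()
    .done(function (coords) {
        console.log(coords.lat, coords.lng); // resolved: use the coordinates here
    })
    .fail(function () {
        console.log(null); // rejected: treat as "no coordinates"
    });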

Deferred objects are meant to allow the thread to continue while long running operations proceed in the background. They serve a specific purpose, and shouldn't work the way you describe by design.
Remember that JavaScript is single-threaded. That means that if you pause the thread waiting for a long operation to complete, the entire page/UI will be frozen as well.
That warning stated, you could potentially accomplish what you want by wrapping all this into your own closure with a loop that checks to see if the process completes.
Please note this is dangerous, will freeze the page, and should be avoided. It is here for academic reasons only.
var getGetLatLng = (function () {
    var running = false;
    return function () {
        var latlng;
        //Flag the loop will use to know when to stop.
        var breakLoop = false;
        //While we haven't instructed the loop to break.
        while (!breakLoop) {
            //If we haven't instructed the API call to execute in this iteration of the loop.
            if (!running) {
                //On next iteration, tell it we are already running, to prevent multiple requests being fired.
                running = true;
                //Your logic here for getLatLng
                this.geo.getLatLng()
                    //When it completes successfully, set latlng
                    .done(function (data) {
                        latlng = data;
                    })
                    //always break the loop when HTTP completes.
                    .always(function () {
                        breakLoop = true;
                    });
            }
        }
        //Return latlng - it could be undefined if there was an error.
        return latlng;
    };
})();
You could wrap this same structure around your original p.getLatLng function body too. Again, I don't recommend it.
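If the goal is simply call sites that read almost synchronously, a safer modern route (used by later answers on this page) is async/await. A sketch, assuming a jQuery version whose deferreds are thenable, and the getLatLng method from the question:
async function logCoords(geo) {
    try {
        // await suspends only this function, not the page
        var coords = await geo.getLatLng();
        console.log(coords);
    } catch (e) {
        console.log(null); // the deferred was rejected
    }
}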

Related

Returning Chrome storage API value without function

For the past two days I have been working with chrome asynchronous storage. It works "fine" if you have a callback function, like below:
chrome.storage.sync.get({"disableautoplay": true}, function(e){
    console.log(e.disableautoplay);
});
My problem is that I can't use a function with what I'm doing. I want to just return it, like LocalStorage can. Something like:
var a = chrome.storage.sync.get({"disableautoplay": true});
or
var a = chrome.storage.sync.get({"disableautoplay": true}, function(e){
    return e.disableautoplay;
});
I've tried a million combinations, even setting a public variable and setting that:
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        a = e.disableautoplay;
    });
}
Nothing works. It all returns undefined unless the code referencing it is inside the get's callback function, and that's useless to me. I just want to be able to return a value as a variable.
Is this even possible?
EDIT: This question is not a duplicate, please allow me to explain why:
1: There are no other posts asking this specifically (I spent two days looking first, just in case).
2: My question is still not answered. Yes, Chrome Storage is asynchronous, and yes, it does not return a value. That's the problem. I'll elaborate below...
I need to be able to get a stored value outside of the chrome.storage.sync.get function. I -cannot- use localStorage, as it is url specific, and the same values cannot be accessed from both the browser_action page of the chrome extension, and the background.js. I cannot store a value with one script and access it with another. They're treated separately.
So my only solution is to use Chrome Storage. There must be some way to get the value of a stored item and reference it outside the get function. I need to check it in an if statement.
Just like how localStorage can do
if(localStorage.getItem("disableautoplay") == true)
There has to be some way to do something along the lines of
if(chrome.storage.sync.get("disableautoplay") == true)
I realize it's not going to be THAT simple, but that's the best way I can explain it.
Every post I see says to do it this way:
chrome.storage.sync.get({"disableautoplay": true}, function(i){
    console.log(i.disableautoplay);
    //But the info is worthless to me inside this function.
});
//I need it outside this function.
Here's a tailored answer to your question. It will still be 90% explanation of why you can't get around async, but bear with me — it will help you in general. I promise there is something pertinent to chrome.storage at the end.
Before we even begin, I will reiterate canonical links for this:
After calling chrome.tabs.query, the results are not available
(Chrome specific, excellent answer by RobW, probably easiest to understand)
Why is my variable unaltered after I modify it inside of a function? - Asynchronous code reference (General canonical reference on what you're asking for)
How do I return the response from an asynchronous call?
(an older but no less respected canonical question on asynchronous JS)
You Don't Know JS: Async & Performance (ebook on JS asynchronicity)
So, let's discuss JS asynchronicity.
Section 1: What is it?
First concept to cover is runtime environment. JavaScript is, in a way, embedded in another program that controls its execution flow - in this case, Chrome. All events that happen (timers, clicks, etc.) come from the runtime environment. JavaScript code registers handlers for events, which are remembered by the runtime and are called as appropriate.
Second, it's important to understand that JavaScript is single-threaded. There is a single event loop maintained by the runtime environment; if there is some other code executing when an event happens, that event is put into a queue to be processed when the current code terminates.
Take a look at this code:
var clicks = 0;
someCode();
element.addEventListener("click", function(e) {
    console.log("Oh hey, I'm clicked!");
    clicks += 1;
});
someMoreCode();
So, what is happening here? As this code executes, when the execution reaches .addEventListener, the following happens: the runtime environment is notified that when the event happens (element is clicked), it should call the handler function.
It's important to understand (though in this particular case it's fairly obvious) that the function is not run at this point. It will only run later, when that event happens. The execution continues as soon as the runtime acknowledges 'I will run (or "call back", hence the name "callback") this when that happens.' If someMoreCode() tries to access clicks, it will be 0, not 1.
This is what is called asynchronicity, as this is something that will happen outside the current execution flow.
Section 2: Why is it needed, or why synchronous APIs are dying out?
Now, an important consideration. Suppose that someMoreCode() is actually a very long-running piece of code. What will happen if a click event happened while it's still running?
JavaScript has no concept of interrupts. Runtime will see that there is code executing, and will put the event handler call into the queue. The handler will not execute before someMoreCode() finishes completely.
While a click event handler is extreme in the sense that the click is not guaranteed to occur, this explains why you cannot wait for the result of an asynchronous operation. Here's an example that won't work:
var clicks = 0;
element.addEventListener("click", function(e) {
    console.log("Oh hey, I'm clicked!");
    clicks += 1;
});
while(1) {
    if(clicks > 0) {
        console.log("Oh, hey, we clicked indeed!");
        break;
    }
}
You can click to your heart's content, but the code that would increment clicks is patiently waiting for the (non-terminating) loop to terminate. Oops.
Note that this doesn't just freeze this piece of code: no events at all are handled while we wait, because there is only one event queue / thread. There is only one way in JavaScript to let other handlers do their job: terminate the current code, and let the runtime know what to call when something we want occurs.
This is why asynchronous treatment is applied to another class of calls that:
require the runtime, and not JS, to do something (disk/network access for example)
are guaranteed to terminate (whether in success or failure)
Let's go with a classic example: AJAX calls. Suppose we want to load a file from a URL.
Let's say that on our current connection, the runtime can request, download, and process the file into a form that can be used in JS in 100ms.
On another connection that's kinda worse, it would take 500ms.
And sometimes the connection is really bad, so the runtime will wait for 1000ms and give up with a timeout.
If we were to wait until this completes, we would have a variable, unpredictable, and relatively long delay. Because of how JS waiting works, all other handlers (e.g. UI) would not do their job for this delay, leading to a frozen page.
Sounds familiar? Yes, that's exactly how synchronous XMLHttpRequest works. Instead of a while(1) loop in JS code, it essentially happens in the runtime code - since JavaScript cannot let other code execute while it's waiting.
Yes, this allows for a familiar form of code:
var file = get("http://example.com/cat_video.mp4");
But at a terrible, terrible cost of everything freezing. A cost so terrible that, in fact, the modern browsers consider this deprecated. Here's a discussion on the topic on MDN.
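For reference, the deprecated synchronous form with a raw XMLHttpRequest looks roughly like this (the false third argument to open() is what makes it synchronous):
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://example.com/cat_video.mp4", false); // false = synchronous
xhr.send();
// The whole page was frozen until send() returned.
var file = xhr.response;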
Now let's look at localStorage. It matches the description of "terminating call to the runtime", and yet it is synchronous. Why?
To put it simply: historical reasons (it's a very old specification).
While it's certainly more predictable than a network request, localStorage still needs the following chain:
JS code <-> Runtime <-> Storage DB <-> Cache <-> File storage on disk
It's a complex chain of events, and the whole JS engine needs to be paused for it. This leads to what is considered unacceptable performance.
Now, Chrome APIs are, from the ground up, designed for performance. You can still see some synchronous calls in older APIs like chrome.extension, and there are calls that are handled in JS (and therefore make sense as synchronous), but chrome.storage is (relatively) new.
As such, it embraces the paradigm "I acknowledge your call and will be back with results, now do something useful meanwhile" whenever there's a delay involved in doing something with the runtime. There are no synchronous versions of those calls, unlike XMLHttpRequest.
Quoting the docs:
It's [chrome.storage] asynchronous with bulk read and write operations, and therefore faster than the blocking and serial localStorage API.
Section 3: How to embrace asynchronicity?
The classic way to deal with asynchronicity is callback chains.
Suppose you have the following synchronous code:
var result = doSomething();
doSomethingElse(result);
Suppose that, now, doSomething is asynchronous. Then this becomes:
doSomething(function(result) {
    doSomethingElse(result);
});
But what if it's even more complex? Say it was:
function doABunchOfThings() {
    var intermediate = doSomething();
    return doSomethingElse(intermediate);
}
if (doABunchOfThings() == 42) {
    andNowForSomethingCompletelyDifferent();
}
Well... In this case you need to move all of this into the callback; the return must become a call instead.
function doABunchOfThings(callback) {
    doSomething(function(intermediate) {
        callback(doSomethingElse(intermediate));
    });
}
doABunchOfThings(function(result) {
    if (result == 42) {
        andNowForSomethingCompletelyDifferent();
    }
});
Here you have a chain of callbacks: doABunchOfThings calls doSomething immediately, which terminates, but sometime later calls doSomethingElse, the result of which is fed to if through another callback.
Obviously, the layering of this can get messy. Well, nobody said that JavaScript is a good language... Welcome to Callback Hell.
There are tools to make it more manageable, for example Promises and async/await. I will not discuss them here (running out of space), but they do not change the fundamental "this code will only run later" part.
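To give a taste (a sketch only, assuming doSomething and doSomethingElse from above have been rewritten to return Promises, which is not shown here):
function doABunchOfThings() {
    // .then() transforms the eventual result and returns a new Promise.
    return doSomething().then(function (intermediate) {
        return doSomethingElse(intermediate);
    });
}

// Or, equivalently, with async/await:
async function main() {
    var result = await doABunchOfThings();
    if (result == 42) {
        andNowForSomethingCompletelyDifferent();
    }
}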
Section TL;DR: I absolutely must have the storage synchronous, halp!
Sometimes there are legitimate reasons to have a synchronous storage. For instance, webRequest API blocking calls can't wait. Or Callback Hell is going to cost you dearly.
What you can do is have a synchronous cache of the asynchronous chrome.storage. It comes with some costs, but it's not impossible.
Consider:
var storageCache = {};
chrome.storage.sync.get(null, function(data) {
    storageCache = data;
    // Now you have a synchronous snapshot!
});
// Not HERE, though, not until the "inner" code runs
If you can put ALL your initialization code in one function init(), then you have this:
var storageCache = {};
chrome.storage.sync.get(null, function(data) {
    storageCache = data;
    init(); // All your code is contained here, or executes later than this
});
By the time code in init() executes, and afterwards when any event that was assigned handlers in init() happens, storageCache will be populated. You have reduced the asynchronicity to ONE callback.
Of course, this is only a snapshot of what storage looks like at the time of executing get(). If you want to maintain coherency with storage, you need to set up updates to storageCache via chrome.storage.onChanged events. Because of the single-event-loop nature of JS, this means the cache will only be updated while your code doesn't run, but in many cases that's acceptable.
Similarly, if you want to propagate changes to storageCache to the real storage, just setting storageCache['key'] is not enough. You would need to write a set(key, value) shim that BOTH writes to storageCache and schedules an (asynchronous) chrome.storage.sync.set.
Implementing those is left as an exercise.
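Still, a minimal sketch of both pieces, building on the storageCache/init() setup above (chrome.storage.onChanged and chrome.storage.sync.set are the real extension APIs; the set() shim name is my own):
// Keep the cache coherent with storage; this fires between your code's turns.
chrome.storage.onChanged.addListener(function (changes, areaName) {
    if (areaName !== "sync") return;
    for (var key in changes) {
        storageCache[key] = changes[key].newValue;
    }
});

// Write-through shim: update the cache synchronously, persist asynchronously.
function set(key, value) {
    storageCache[key] = value;
    var items = {};
    items[key] = value;
    chrome.storage.sync.set(items);
}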
Make the main function "async" and make a "Promise" in it :)
async function mainFunction() {
    var p = new Promise(function(resolve, reject) {
        chrome.storage.sync.get({"disableautoplay": true}, function(options) {
            resolve(options.disableautoplay);
        });
    });
    const configOut = await p;
    console.log(configOut);
}
Yes, you can achieve that using a promise:
let getFromStorage = keys => new Promise((resolve, reject) =>
    chrome.storage.sync.get(keys, result => resolve(result)));
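Usage would then look something like this (a sketch):
getFromStorage(["disableautoplay"]).then(result => {
    if (result.disableautoplay) {
        // react to the setting here
    }
});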
chrome.storage.sync.get has no return value, which explains why you would get undefined when calling something like
var a = chrome.storage.sync.get({"disableautoplay": true});
chrome.storage.sync.get is also an asynchronous method, which explains why in the following code a would be undefined unless you access it inside the callback function.
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        // #2
        a = e.disableautoplay; // true or false
    });
    // #1
    a; // undefined
}
If you could manage to work this out, you would have created a source of strange bugs. The callback is executed asynchronously, which means the rest of your code can execute before the asynchronous function returns. There is no guarantee about when it completes, since Chrome is multi-threaded and the get operation may be delayed, e.g. if the disk is busy.
Using your code as an example:
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        a = e.disableautoplay;
    });
}
if(a)
    console.log("true!");
else
    console.log("false! Maybe undefined as well. Strange if you know that a is true, right?");
So it will be better if you use something like this:
chrome.storage.sync.get({"disableautoplay": true}, function(e){
    a = e.disableautoplay;
    if(a)
        console.log("true!");
    else
        console.log("false! But maybe undefined as well");
});
If you really want to return this value, then use the localStorage API. It stores only string values, so you have to serialize the value before storing it and parse it after getting it.
//Setting the value
localStorage.setItem('disableautoplay', JSON.stringify(true));
//Getting the value
var a = JSON.parse(localStorage.getItem('disableautoplay'));
var a = await chrome.storage.sync.get({"disableautoplay": true});
Note that awaiting chrome.storage directly like this relies on the API returning a promise when no callback is passed (recent Chrome / Manifest V3). It must also run inside an async function; e.g. if you need it at top level, wrap it:
(async () => {
    var a = await chrome.storage.sync.get({"disableautoplay": true});
})();

How to structure these nested asynchronous requests to complete a batch before proceeding?

I have the need to do a main AJAX form submit. However, I want to perform a series of other preliminary form submits and AJAX requests partway through, before continuing with the main form submit.
Below is the idea, but with a lot of pseudocode. I want to call the ajaxFunction as shown, complete all its tasks, then proceed with the main form submission:
$('#mainform').submit(function(e){
    e.preventDefault();
    var main = this;
    var data = $('#section :input', main).serialize();
    //preliminary nested ajax requests
    var mainresult = ajaxFunction('arg1', 'arg2');
    alert("All preliminary AJAX done, proceeding...");
    if(mainresult){
        //final ajax
        $.post('mainurl', data, function(result){
            console.log(result);
        });
    }else{
        //do nothing
    }
});
function ajaxFunction(param1, param2){
    //ajax1
    ajaxFetchingFunction1('url1', function(){
        //ajax2
        ajaxFetchingFunction2('url2', function(){
            //submit handler
            $('#anotherform').submit(function(){
                if(someparam === 1){
                    return true;
                }else{
                    return false;
                }
            });
        });
    });
}
As it is now, I know it won't work as expected because of all the asynchronous nested AJAX calls. What I get is that alert("All preliminary AJAX done, proceeding..."); executes even before any of the AJAX calls in ajaxFunction.
I believe that this is just the kind of scenario ("callback hell") for which the Deferred/Promise concept was introduced, but I've been struggling to wrap my head around this. How can I structure these different AJAX requests, such that code execution would wait until ajaxFunction completes and returns mainresult for subsequent use?
How can I structure these different AJAX requests, such that code execution would wait until ajaxFunction completes and returns mainresult for subsequent use?
You can't and you don't. Javascript will not "wait" for an asynchronous operation to complete. Instead, you move the code that wants to run after the async operation is done into a callback that is then called when the async operation is done. This is true whether using plain async callbacks or structured callbacks that are part of promises.
Asynchronous programming in Javascript requires a rethinking and restructuring of the flow of control, so that things you want to run after an async operation is done are put into a callback function rather than just sequentially on the next line of code. Async operations are chained in sequence through a series of callbacks. Promises are a means of simplifying the management of those callbacks, and particularly simplifying the propagation of errors and/or the synchronization of multiple async operations.
If you stick with callbacks, then you can communicate completion of ajaxFunction() with a completion callback:
function ajaxFunction(param1, param2, doneCallback){
    //ajax1
    ajaxFetchingFunction1('url1', function(){
        //ajax2
        ajaxFetchingFunction2('url2', function(){
            doneCallback(someResult);
        });
    });
}
And, then use it here:
$('#mainform').submit(function(e){
    e.preventDefault();
    var main = this;
    var data = $('#section :input', main).serialize();
    //preliminary nested ajax requests
    ajaxFunction('arg1', 'arg2', function(result) {
        // process result here
        alert("All preliminary AJAX done, proceeding...");
        if(result){
            //final ajax
            $.post('mainurl', data, function(result){
                console.log(result);
            });
        }else{
            //do nothing
        }
    });
});
Note: I removed your $('#anotherform').submit() from the code because inserting an event handler in a function that will be called repeatedly is probably the wrong design here (since it ends up creating multiple identical event handlers). You can insert it back if you're sure it's the right thing to do, but it looked wrong to me.
This would generally be a great place to use promises, but your code is a bit abstract to show you exactly how to use promises. We would need to see the real code for ajaxFetchingFunction1() and ajaxFetchingFunction2() to illustrate how to make this work with promises since those async functions would need to create and return promises. If you're using jQuery ajax inside of them, then that will be easy because jQuery already creates a promise for an ajax call.
If both ajaxFetchingFunction1() and ajaxFetchingFunction2() are modified to return a promise, then you can do something like this:
function ajaxFunction(param1, param2){
    return ajaxFetchingFunction1('url1').then(function() {
        return ajaxFetchingFunction2('url2');
    });
}
And, then use it here:
$('#mainform').submit(function(e){
    e.preventDefault();
    var main = this;
    var data = $('#section :input', main).serialize();
    //preliminary nested ajax requests
    ajaxFunction('arg1', 'arg2').then(function(result) {
        // process result here
        alert("All preliminary AJAX done, proceeding...");
        if(result){
            //final ajax
            $.post('mainurl', data, function(result){
                console.log(result);
            });
        }else{
            //do nothing
        }
    });
});
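If, as noted above, ajaxFetchingFunction1() and ajaxFetchingFunction2() use jQuery ajax internally, making them return a promise is as simple as returning the jqXHR that $.ajax() produces (a sketch; the url parameter is illustrative):
function ajaxFetchingFunction1(url) {
    // $.ajax() already returns a promise-compatible jqXHR object
    return $.ajax({ url: url });
}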
Promises make the handling of multiple ajax requests really trivial; however, the implications of "partial forms" on GUI design are maybe more of a challenge. You have to consider things like:
One form divided into sections, or one form per partial?
Show all partials at the outset, or reveal them progressively?
Lock previously validated partials to prevent meddling after validation?
Revalidate all partials at each stage, or just the current partial?
One overall submit button or one per partial?
How should the submit button(s) be labelled (to help the user understand the process he is involved in)?
Let's assume (as is the case for me but maybe not the OP) that we don't know the answers to all those questions yet, but that they can be embodied in two functions - validateAsync() and setState(), both of which accept a stage parameter.
That allows us to write a generalised master routine that will cater for as-yet-unknown validation calls and a variety of GUI design decisions.
The only real assumption needed at this stage is the selector for the form/partials. Let's assume it/they all have class="partialForm":
$('.partialForm').on('submit', function(e) {
    e.preventDefault();
    $.when(setState(1)) // set the initial state, before any validation has occurred.
        .then(validateAsync.bind(null, 1)).then(setState.bind(null, 2))
        .then(validateAsync.bind(null, 2)).then(setState.bind(null, 3))
        .then(validateAsync.bind(null, 3)).then(setState.bind(null, 4))
        .then(function aggregateAndSubmit() {
            var allData = ....; // here aggregate all three forms' data into one serialization.
            $.post('mainurl', allData, function(result) {
                console.log(result);
            });
        }, function(error) {
            console.log('validation failed at stage: ' + error.message);
            // on screen message for user ...
            return $.when(); //inhibit .fail() handler below.
        })
        .fail(function(error) {
            console.log(error);
            // on screen message for user ...
        });
});
It's syntactically convenient here to call setState() as a then callback, although it's (probably) synchronous.
Sample validateAsync():
function validateAsync(stage) {
    var data, jqXHR;
    switch(stage) {
        case 1:
            data = $("#form1").serialize();
            jqXHR = $.ajax(...);
            break;
        case 2:
            data = $("#form2").serialize();
            jqXHR = $.ajax(...);
            break;
        case 3:
            data = $("#form3").serialize();
            jqXHR = $.ajax(...);
    }
    return jqXHR.then(null, function() {
        return new Error(stage);
    });
}
Sample setState():
function setState(stage) {
    switch(stage) {
        case 1: //initial state, ready for input into form1
            $("#form1").disableForm(false);
            $("#form2").disableForm(true);
            $("#form3").disableForm(true);
            break;
        case 2: //form1 validated, ready for input into form2
            $("#form1").disableForm(true);
            $("#form2").disableForm(false);
            $("#form3").disableForm(true);
            break;
        case 3: //form1 and form2 validated, ready for input into form3
            $("#form1").disableForm(true);
            $("#form2").disableForm(true);
            $("#form3").disableForm(false);
            break;
        case 4: //form1, form2 and form3 validated, ready for final submission
            $("#form1").disableForm(true);
            $("#form2").disableForm(true);
            $("#form3").disableForm(true);
    }
    return stage;
}
As written, setState() will need the jQuery plugin .disableForm():
jQuery.fn.disableForm = function(bool) {
    return this.each(function(i, form) {
        if(!$(form).is("form")) return true; // continue
        $(form.elements).each(function(i, el) {
            el.readOnly = bool;
        });
    });
}
As I say, validateAsync() and setState() above are just rudimentary samples. As a minimum, you will need to:
flesh out validateAsync()
modify setState() to reflect the User Experience of your choice.

Reuse Deferred more than once

I'm using deferred as I need to execute several processes asynchronously.
To be clearer, here is what my treatments mean:
Treatment1: call of an ajax service providing user rights.
Treatment2: call of an ajax service providing links and labels.
I need to call these 2 services at the same time and then get the unified response of both services in order to display links depending on rights (my real problem involves a 3rd ajax service, but let's stick with only 2 to simplify).
First, I declare the deferreds as global vars:
var treatment1 = $.Deferred();
var treatment2 = $.Deferred();
Then, when I need to do the job, I call the resolve method with the data needed for the single global treatment:
when my 1st ajax service responds: treatment1.resolve(responseData1)
when my 2nd ajax service responds: treatment2.resolve(responseData2)
When treatments 1 & 2 are finished, the done event is fired:
$.when(treatment1, treatment2).done(function(responseData1, responseData2) {
    DoGlobalTreatmentWithAllResponseData(responseData1, responseData2);
});
My problem is that deferred works only once.
As my website is mostly ajax-based, I need to fire the event multiple times.
The user can click a button to search for users. Then a list of users is displayed and the ajax services are all called asynchronously. This operation can be repeated infinitely.
I just need a way to reuse the principle of deferred but multiple times. I know that this problem has already been discussed and everyone says deferred can't work this way.
But, is it really not possible to reset the deferred state or reset the promises (even by implementing a custom solution, using AOP or something else)?
If it's impossible, what solution could I use? I don't want to fire treatments one after another; I really want to do a global treatment after all the treatments are finished (that is to say, after the last active treatment finishes), and I want to use the responseData of each service.
Here is my sample code that I would like to customize : http://jsfiddle.net/PLce6/14/
I hope to be clear as English is not my native language.
Thank you in advance for your help.
Deferreds can be resolved/rejected only once... However, I think the issue is how you're structuring your code...
As long as you're initializing your deferred each time, there isn't any problem in doing this...
I think the issue is this:
First, I declare the deferred as global var:
var treatment1 = $.Deferred();
var treatment2 = $.Deferred();
Instead, can you try doing this in a function that's invoked on the button click?
The user can click a button to search for users
So have a function like so:
function onClick() {
    var treatment1 = $.ajax({url: '/call1'});
    var treatment2 = $.ajax({url: '/call2'});
    $.when(treatment1, treatment2).done(function(obj1, obj2) {
        // do whatever else you need
    });
}
Now, from the rest of your post, it looks like you're trying to reuse the deferreds - but in that case, your original solution should not have a problem with keeping deferreds as global, since your done will be called with whatever data they were resolved with.
Can you post some more of your code to help explain what you're trying to do?
Updated from my own comment below for elaboration
Based on the OP's fiddle: he wants to be able to trigger the dependent action multiple times. The solution is to have the dependent action create new deferreds and hook up a $.when to itself. See the updated fiddle at http://jsfiddle.net/PLce6/15/
// global
var d1 = $.Deferred();
var d2 = $.Deferred();
var d3 = $.Deferred();

// here's the reset
function resetDeferreds() {
    d1 = $.Deferred();
    d2 = $.Deferred();
    d3 = $.Deferred();
    $.when(d1, d2, d3).done(
        function (responseData1, responseData2, responseData3) {
            DoGlobalTreatmentWithAllResponseData(responseData1, responseData2, responseData3);
            resetDeferreds();
        });
}

// the onclick handlers
function do3() {
    d3.resolve('do3 ');
    return d3;
}

// the top level $.when
$.when(d1, d2, d3).done(function (responseData1, responseData2, responseData3) {
    DoGlobalTreatmentWithAllResponseData(responseData1, responseData2, responseData3);
    resetDeferreds();
});
Perhaps your code is not well designed?
I do not see how that would be an issue. The asynchronous process should be responsible for creating a new Deferred object every time.
function doSomething() {
    var d = $.Deferred();
    setTimeout(function () {
        d.resolve();
    }, 1000);
    return d;
}

function doSomethingElse() {
    var d = $.Deferred();
    setTimeout(function () {
        d.resolve();
    }, 1000);
    return d;
}
Then you can always do the following:
$.when(doSomething(), doSomethingElse()).done(function () {
    console.log('done');
});
There's always a solution:
If you absolutely need to be able to call resolve multiple times on the same Deferred, then you should wrap the Deferred in another object, let's say DeferredWrapper, which would expose the same API as a Deferred but would delegate all method calls to its encapsulated Deferred.
In addition to delegating the function calls, the DeferredWrapper would have to keep track of all listening operations (e.g. done, always, fail...) that were made on the object. The DeferredWrapper could store all actions as [functionName, arguments] tuples in an internal this._actions property.
Finally, you would need to provide a special implementation for state-changing operations (e.g. reject, resolve, resolveWith, etc.) that would look like:
1. Let d be the internal Deferred referenced by this._deferred.
2. Let fn be the function name of the function being called.
3. If d.state() is not pending:
   3.1. Do d = this._deferred = [[native jQuery Deferred]]
   3.2. Apply all actions on d.
4. Return the result of d[fn].apply(d, arguments)
Note: You would also need to implement a custom promise implementation and make sure it behaves correctly. You can probably use a similar approach to the one described.
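A minimal sketch of the idea (an assumed shape, not a drop-in implementation; only done/fail/always and resolve/reject are covered):
function DeferredWrapper() {
    this._deferred = $.Deferred();
    this._actions = []; // [functionName, arguments] tuples
}

// Listener registrations are recorded so they can be replayed later.
["done", "fail", "always"].forEach(function (name) {
    DeferredWrapper.prototype[name] = function () {
        this._actions.push([name, arguments]);
        this._deferred[name].apply(this._deferred, arguments);
        return this;
    };
});

// State changers swap in a fresh Deferred if the old one has settled.
["resolve", "reject"].forEach(function (name) {
    DeferredWrapper.prototype[name] = function () {
        var d = this._deferred;
        if (d.state() !== "pending") {
            d = this._deferred = $.Deferred();
            this._actions.forEach(function (action) {
                d[action[0]].apply(d, action[1]);
            });
        }
        return d[name].apply(d, arguments);
    };
});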
I'm going to suggest a small change. One element you weren't clear on is whether or not the treatment1 and treatment2 results are different each time. If they are, then do what #raghu and #juan-garcia suggested:
function onClick() {
    var treatment1 = $.ajax({url: '/call1'});
    var treatment2 = $.ajax({url: '/call2'});
    $.when(treatment1, treatment2).done(function(obj1, obj2) {
        // do whatever else you need
    });
}
If they don't change then do this :
var treatment1 = $.ajax({url: '/call1'});
var treatment2 = $.ajax({url: '/call2'});

function onClick() {
    $.when(treatment1, treatment2).done(function(obj1, obj2) {
        // do whatever else you need
    });
}
Or some variation of that. Because once they are complete, your callback function will always execute right away. It's still asynchronous, but it doesn't need to wait since everything is ready to go. This serves both use cases. This is a very common pattern for data that may take a few seconds to load before it's functionally useful when drawing a new component in the page. It's a lazy-load mechanism that's very useful. Once it's in though everything looks as if it's responding instantaneously.
I reworked the javascript in your example on JSFiddle to show just the basics I think you needed to see. That is here. Given your example, I think the mistake is in believing that resolve must be called multiple times to trigger a behavior. Invoking done queues a one-time behavior, and each invocation of done loads a new behavior into the queue. Resolve is called one time. $.when().done() you call as many times as you have behaviors dependent on the specific when() condition.

Loop calling an asynchronous function

Introduction to the problem
I need to call an asynchronous function within a loop until a condition is satisfied. This particular function sends a POST request to a website's form.php and performs some operations with the response, which is a JSON string representing an object with an id field. When that id is null, the outer loop must conclude. The function does something like the following:
function asyncFunction(session) {
    (new Request({
        url: "form.php",
        content: "sess=" + session,
        onComplete: function (response) {
            var response = response.json;
            if (response.id) {
                doStaff(response.msg);
            } else {
                // Break loop
            }
        }
    })).get();
}
Note: Although I found this problem while implementing an add-on for Firefox, I think this is a general javascript question.
Implementing the loop recursively
I've tried implementing the loop recursively, but it didn't work, and I'm not sure this is the right approach.
...
if (response.id) {
    doStaff(response.msg);
    asyncFunction(session);
} else {
    // Break loop
}
...
Using jsdeferred
I have also tried the jsdeferred library:
Deferred.define(this);

//Instantiate a new deferred object
var deferred = new Deferred();

// Main loop: stops when we receive the exception
Deferred.loop(1000, function() {
    asyncFunction(session, deferred);
    return deferred;
}).error(function() {
    console.log("Loop finished!");
});
And then calling:
...
if (response.id) {
    doStaff(response.msg);
    d.call();
} else {
    d.fail();
}
...
And I achieved serialization, but it started repeating previous calls on every iteration. For example, if it was the third time asyncFunction was called, it would also repeat the calls with the corresponding parameters from iterations 1 and 2.
Your question is not exactly clear, but the basic architecture must be that the completion event handler for the asynchronous operation decides whether to try again or to simply return. If the results of the operation warrant another attempt, then the handler should call the parent function. If not, then by simply exiting, the cycle will come to an end.
You can't code something like this in JavaScript with anything that looks like a simple "loop" structure, for the very reason that the operations are asynchronous. The results of the operation don't happen in such a way as to allow the looping mechanism to perform a test on the results; the loop may run thousands of iterations before the result is available. To put it another way, you don't "wait" for an asynchronous operation with code. You wait by doing nothing, and allowing the registered event handler to take over when the results are ready.
Thank you guys for your help. This is what I ended up doing:
var sess = ...;

Deferred.define(this);

function asyncFunction(session) {
    Deferred.next(function() {
        var d = new Deferred();
        (new Request({
            url: "form.php",
            content: "sess=" + session,
            onComplete: function (response) {
                d.call(response.json);
            }
        })).get();
        return d;
    }).next(function(resp) {
        if (resp.id) {
            asyncFunction(session);
            console.log(resp.msg);
        }
    });
}

asyncFunction(sess);
Why wouldn't you just use a setInterval loop? In the case of an SDK-based extension, this would look like:
https://builder.addons.mozilla.org/addon/1065247/latest/
The big benefit of promises-like patterns over using timers is that you can do things in parallel, and use much more complicated dependencies for various tasks. A simple loop like this is done just as easily / neatly using setInterval.
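For illustration, a timer-based version of the question's loop might look like this (a rough sketch reusing the Request helper and session variable from the question; the 1000ms interval is arbitrary, and with a slow server, polls could overlap):
var poller = setInterval(function () {
    (new Request({
        url: "form.php",
        content: "sess=" + session,
        onComplete: function (response) {
            var data = response.json;
            if (data.id) {
                doStaff(data.msg);
            } else {
                clearInterval(poller); // condition met: stop polling
            }
        }
    })).get();
}, 1000);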
If I correctly understand what you want to do, Deferred is a good approach. Here's an example using jQuery, which has Deferred functionality built in (jQuery.Deferred).
A timeout is used to simulate an http request. When each timeout is complete (or the http request is complete), a random number is returned, which stands in for the result of your http request.
Based on the result of the request you can decide if you need another http request or want to stop.
Try out the below snippet. Include the jQuery file and then the snippet. It keeps printing values in the console and stops after a zero is reached.
This could take a while to understand, but it's useful.
$(function() {
    var MAXNUM = 9;

    function newAsyncRequest() {
        var def = $.Deferred(function(defObject) {
            setTimeout(function() {
                defObject.resolve(Math.floor(Math.random() * (MAXNUM+1)));
            }, 1000);
        });
        def.done(function(val) {
            if (val !== 0)
                newAsyncRequest();
            console.log(val);
        });
    }

    newAsyncRequest();
});
Update after suggestion from #canuckistani
#canuckistani is correct in his answer. For this problem the solution is simpler. Without using Deferred, the above code snippet becomes the following. Sorry I led you to a tougher solution.
$(function() {
    var MAXNUM = 9;

    function newAsyncRequest() {
        setTimeout(function() {
            var val = Math.floor(Math.random() * (MAXNUM+1));
            if (val !== 0)
                newAsyncRequest();
            console.log(val);
        }, 1000);
    }

    newAsyncRequest();
});

Force Javascript function call to wait until previous one is finished

I have a simple Javascript function:
makeRequest();
It does a bunch of stuff and places a bunch of content into the DOM.
I make a few calls like so:
makeRequest('food');
makeRequest('shopping');
However, they both fire so quickly that they are stepping on each other's toes. Ultimately I need it to have the functionality of:
makeRequest('food');
wait....
makeRequest('shopping'); // only if makeRequest('food') has finished
Thoughts on getting these to execute only one at a time?
Thanks!
If these functions actually do an AJAX request, you are better off keeping them asynchronous. You can make a synchronous AJAX request, but it will stop the browser from responding and lead to a bad user experience.
If what you require is that these AJAX requests are made one after the other because they depend on each other, you should investigate your function to see if it provides a callback mechanism:
makeRequest('food', function() {
    // called when food request is done
    makeRequest('shopping');
});
Using jQuery, it looks something like this:
$.get("/food", function(food)
{
// do something with food
$.get("/shopping", function(shopping)
{
// do something with shopping
});
});
I would recommend that you simply write them asynchronously--for example, call makeRequest('shopping'); from the AJAX completion handler of the first call.
If you do not want to write your code asynchronously, see Javascript Strands
I suppose that you have a callback method that takes care of the response for the request? Once it has done that, let it make the next request.
Declare an array for the queue, and a flag to keep track of the status:
var queue = [], requestRunning = false;
In the makeRequest method:
if (requestRunning) {
    queue.push(requestParameter);
} else {
    requestRunning = true;
    // do the request
}
In the callback method, after taking care of the response:
if (queue.length > 0) {
    var requestParameter = queue.splice(0,1)[0];
    // do the request
} else {
    requestRunning = false;
}
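Putting the three fragments together, a minimal sketch of the whole pattern (doRequest and handleResponse are placeholder names for your actual AJAX call and its completion callback, not part of the original):
var queue = [], requestRunning = false;

function makeRequest(requestParameter) {
    if (requestRunning) {
        queue.push(requestParameter); // a request is in flight: queue this one
    } else {
        requestRunning = true;
        doRequest(requestParameter, handleResponse);
    }
}

function handleResponse(response) {
    // ... take care of the response, place content into the DOM, etc. ...
    if (queue.length > 0) {
        doRequest(queue.splice(0,1)[0], handleResponse); // start the next queued request
    } else {
        requestRunning = false; // queue drained: the next call may run directly
    }
}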
