Extend ES6 Promise to convert callback to Deferred pattern - javascript

I would like to know what you think about this kind of extension for the ES6 Promise (https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise):
Promise.create = function() {
    var deferred;
    var promise = new Promise(function (resolve, reject) {
        deferred = {
            resolve: resolve,
            reject: reject
        };
    });
    promise.deferred = deferred;
    return promise;
}
This avoids the executor callback and gives cleaner code:
var requestsdeferred = Promise.create();
obj.myFunctionWithCallback(function() {
    obj.mySecondFunctionWithCallback(function() {
        requestsdeferred.deferred.resolve('all done!');
    });
});
requestsdeferred.then(function(result) {
});
instead of:
var p = new Promise(function(resolve, reject) {
    obj.myFunctionWithCallback(function() {
        obj.mySecondFunctionWithCallback(function() {
            resolve('all done!');
        });
    });
});
p.then(function() {
});
which needs the executor callback.
What do you think?

Neither is the correct/regular usage of a promise. The usual way to use a promise would be:
ajax.get('/get').then(function() {
    return ajax.get('/cart');
}).then(function() {
    alert('all done!');
});
Promises chain: you can return a promise from a then handler, and the promise returned by that then will wait for it to settle and assume its state.
Of course, unless the promises depend on one another, you can (and likely should) execute them concurrently:
Promise.all([ajax.get("/get"), ajax.get("/cart")]).then(function(results) {
    // both done here, concurrently
});
Avoid the explicit construction anti-pattern; there is no need to create a deferred. The reason the API does not look the way you describe is that if you throw synchronously, the throw gets converted to a rejection, so you don't have to add both a .catch and a catch (e) { handler to your code, which is error prone.
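For example, a minimal sketch (parseAsync is a hypothetical wrapper; the synchronous throw comes from JSON.parse on invalid input):
function parseAsync(text) {
    return new Promise(function(resolve) {
        resolve(JSON.parse(text)); // throws synchronously for invalid JSON...
    });
}
parseAsync('not json').catch(function(e) {
    console.log('rejected:', e.message); // ...but surfaces here as a rejection
});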

It's an obfuscation of the ES6 standard, and I don't see much benefit to your addition; it's a layer of abstraction that isn't really needed.
I'd suggest a different approach that plays nice with ES6 promises...
The following is correct, but irrelevant (see comments):
Any "thenable" can be turned into a standard Promise with Promise.resolve, so if your ajax is (for instance) created with jQuery's $.ajax, you might:
var prom1 = Promise.resolve(ajax.get('/get'));
var prom2 = Promise.resolve(ajax.get('/cart'));
We can create these together (so the requests run concurrently), then wait for all of the promises to complete with Promise.all:
Promise.all([prom1, prom2])
    .then(function(vals) {
        // and get the results in the callback
        var getData = vals[0];
        var cartData = vals[1];
    });

Related

How do I get data from pending resolved promise without async/await?

I have this abstraction:
function fetchDataFromAPI() {
    const url = `https://api...`
    return fetch(url).then(response => response.json())
}
I want to use it in my other piece of code like:
if (something) {
    const data = fetchDataFromAPI()
    return data
}
If I console.log the data, what I get is a pending promise:
Promise {<pending>}
__proto__: Promise
[[PromiseStatus]]: "resolved"
[[PromiseValue]]: Object
How do I get that Object in data instead of Promise?
You cannot. Here is why:
A promise is a language construct that lets the JavaScript engine continue executing code without waiting for the inner function, also known as the executor function, to return. A promise's callbacks always run through the event loop.
var p = new Promise(function(resolve, reject) {
    setTimeout(function() {
        resolve('foo');
    }, 300);
});
console.log(p);
Basically, a promise is glorified syntactic sugar for a callback. We will see how, but first let's look at a more realistic piece of code:
function someApiCall() {
    return new Promise(function(resolve, reject) {
        setTimeout(() => {
            resolve('Hello');
        })
    })
}

let data = someApiCall();
console.log(data);
This is so-called asynchronous code; when the JavaScript engine executes it, someApiCall immediately returns a result, in this case a pending promise:
> Promise {<pending>}
If you pay attention to the executor, you will see that we needed to accept resolve and reject arguments, aka callbacks. Yes, they are callbacks required by the language construct. When either of them is called, the promise changes its state and is thereby settled. We don't call it resolved, because resolving implies successful execution, but a function can also error out.
How do we get the data? Well, we need more callbacks, which will be called once the promise is settled:
var p = new Promise(function(resolve, reject) {
    setTimeout(function() {
        resolve('foo');
    }, 300);
});

p.then((result) => {
    console.log(result); // foo
}).catch((err) => {
    console.log(err);
});
Why do we need to pass separate callbacks? Because one is tied to resolve and the other to reject: the then callback runs when the promise is resolved, the catch callback when it is rejected.
The JavaScript engine executes these callbacks later, at its leisure: for a regular promise that means once the current call stack is clear, for a timeout once the time is up.
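A minimal sketch of that ordering (independent of the question's code):
console.log('first');
Promise.resolve('data').then(function(value) {
    console.log('third', value); // runs only after the current call stack is clear
});
console.log('second');
// logs: first, second, third data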
Now, to answer your question: how do we get data out of a promise? Well, we can't.
If you look closely, you will see that we never really get the data out; we keep feeding callbacks in. There is no getting data out, only passing callbacks in.
p.then((result) => {
    console.log(result);
}).catch((err) => {
    console.log(err);
});
Some say use await:
async function getResult() {
    let result = await p;
}
But there is a catch: we have to wrap it in an async function. Always. Why? Because async/await is another level of abstraction, or syntactic sugar, whichever you prefer, on top of the promise API. That is why we cannot use await directly but always wrap it in an async function.
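For example, a sketch of wrapping the question's call in an async function (run and the surrounding something are illustrative names):
async function run() {
    if (something) {
        const data = await fetchDataFromAPI(); // data is the parsed object here
        console.log(data);
    }
}
run(); // note that run() itself still returns a promise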
To sum up, when we use promises or async/await we need to follow certain conventions and write terse code with closely knitted callbacks. Either the JavaScript engine or transpilers like Babel or TypeScript convert this code to regular JavaScript to be run.
I can understand your confusion, because people keep saying "getting data out" when talking about promises, but we don't get any data out; we pass in a callback to be executed when the data is ready.
Hope everything is clear now.
No, you cannot get the value without using promises or async/await etc., because calling a REST API is an asynchronous, non-blocking operation.
When you make a call to a REST API, the code shouldn't wait until the API returns a value, because that may take a long time and make the program unresponsive; by design, making a network request is an asynchronous operation.
To avoid async/await, you'll need to use another .then:
if (something) {
    return fetchDataFromAPI()
        .then((data) => data /* you can console.log(data) here */)
}
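Note that the caller still receives a promise, so it also has to consume the result asynchronously; a sketch, assuming the snippet above lives in a function called getData:
getData().then((data) => {
    console.log(data); // the actual object, once the fetch has settled
});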

Why does my Promise seem to be blocking execution

Below is a simplification of my code. I'm basically running a function that creates a Promise inside the function and returns it. For some reason though, testing with console.time(), it would appear that the code is actually blocking. The x.root function takes roughly 200ms to run, and both console.time() tests give pretty much 200ms. Now, if I did the age old trick of wrapping the function in setTimeout, the problem disappears, but I'd like to know what I'm doing wrong here?
I'd really prefer being able to create, resolve and reject the promises inside my helper function and then just call my function followed by then and catch without having to create Promises on an earlier level. What's the malfunction here?
parseroot = function(params) {
    return new Promise(function(resolve, reject) {
        try {
            var parsed_html = x.root(params);
            x.replacecontents(params.target, parsed_html);
            resolve(true);
        }
        catch(e) {
            reject(e);
        }
    });
}
console.time("Test 1");
console.time("Test 2");
var el = document.querySelector("#custom-element");
el.parseroot({
section:"list",
target: "#list-container",
}).then(function(response) {
console.info("searchresult > list() success");
console.timeEnd("Test 2");
});
console.timeEnd("Test 1");
Promises don't turn synchronous code into asynchronous code.
If the function you pass to Promise blocks, then the promise will block too.
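If the goal is simply to let the rest of the current code (and your timers) run before the heavy parsing starts, the setTimeout trick you mention is the usual workaround; a sketch using the question's own x.root and x.replacecontents helpers:
parseroot = function(params) {
    return new Promise(function(resolve, reject) {
        setTimeout(function() { // defer the heavy work to a later tick
            try {
                var parsed_html = x.root(params); // still blocks while it runs,
                x.replacecontents(params.target, parsed_html); // but only after the current code has finished
                resolve(true);
            } catch (e) {
                reject(e);
            }
        }, 0);
    });
}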

Promise.defer standard?

I was working with Promises and prefer to use them like this:
function Deferred() {
    this.resolve = null;
    this.reject = null;
    this.promise = new Promise(function(resolve, reject) {
        this.resolve = resolve;
        this.reject = reject;
    }.bind(this));
    Object.freeze(this);
}
function somethingAsync() {
    var deferred = new Deferred();
    // do stuff then deferred.resolve();
    return deferred.promise;
}
I just came across Promise.defer() in Firefox, though, which gives me the same thing. Is this standard, or just specific to Firefox? I can't even find it in Firefox's Promise docs - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise
Promise.defer was a suggestion at one point, but it was decided not to include it in the spec and to instead include the promise constructor, which uses the revealing constructor pattern.
It was implemented in Firefox and Chrome and later removed from Chrome. It is not a standard but was a proposal at one point.
Your usage of the promise constructor was explicitly supported as a use case when it was designed.
The reason the committee decided to go with the promise constructor was because it guards against synchronous throws by default:
new Promise((resolve, reject) => {
    thisThrowsSynchronously();
});
Had the promise constructor not done this, you would potentially have to add both a .catch and a } catch (e) { on every promise-returning function invocation, which can be frustrating. The promise constructor establishes an invariant where .catch is sufficient.
I'd also like to point out that outside of converting callback APIs - I can count the number of times I've used the promise constructor on one hand. Typically your code should have close to zero deferreds or usages of the promise constructor.
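A typical legitimate use is wrapping a callback API once at the edge of the codebase; a sketch using Node's fs.readFile as an illustration:
// Convert a Node-style callback API to a promise once, then use promises everywhere else.
var fs = require('fs');
function readFileAsync(path) {
    return new Promise(function(resolve, reject) {
        fs.readFile(path, 'utf8', function(err, data) {
            if (err) reject(err);
            else resolve(data);
        });
    });
}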

Why does the Promise constructor need an executor?

When using Promises, why can't triggers for resolve and reject be defined elsewhere in the codebase?
I don't understand why resolve and reject logic should be localized where the promise is declared. Is this an oversight, or is there a benefit to mandating the executor parameter?
I believe the executor function should be optional, and that its existence should determine whether the promise encapsulates resolution or not. The promise would be much more extensible without such mandates, since you don't have to initiate the async work right away. The promise should also be resettable. As it stands, it's a one-shot switch, 1 or 0, resolve() or reject(). A multitude of parallel and sequential outcomes can be attached, promise.then(parallel1) and promise.then(parallel2), and also promise.then(seq1).then(seq2), but reference-privileged players cannot resolve/reject INTO the switch.
You can construct a tree of outcomes at a later time, but you can't alter them, nor can you alter the roots (input triggers).
Honestly, the tree of sequential outcomes should be editable as well. Say you want to splice out one step and do something else instead, after you've declared many promise chains. It doesn't make sense to reconstruct the promise and every sequential function, especially since you can't even reject or destroy the promise either...
This is called the revealing constructor pattern, a term coined by Domenic.
Basically, the idea is to give you access to parts of an object while that object is not fully constructed yet. Quoting Domenic:
I call this the revealing constructor pattern because the Promise constructor is revealing its internal capabilities, but only to the code that constructs the promise in question. The ability to resolve or reject the promise is only revealed to the constructing code, and is crucially not revealed to anyone using the promise. So if we hand off p to another consumer, say...
The past
Initially, promises worked with deferred objects. This is true of the Twisted deferreds that JavaScript promises originated from, and it is still true (but often deprecated) in older implementations like Angular's $q, Q, jQuery and old versions of Bluebird.
The API went something like:
var d = Deferred();
d.resolve();
d.reject();
d.promise; // the actual promise
It worked, but it had a problem. Deferreds and the promise constructor are typically used for converting non-promise APIs to promises. There is a "famous" problem in JavaScript called Zalgo - basically, it means that an API must be synchronous or asynchronous but never both at once.
The thing is - with deferreds it's possible to do something like:
function request(param) {
    var d = Deferred();
    var options = JSON.parse(param);
    ajax(options, function(err, value) {
        if (err) d.reject(err);
        else d.resolve(value);
    });
    return d.promise;
}
There is a hidden, subtle bug here: if param is not valid JSON, this function throws synchronously, meaning that I have to wrap every promise-returning function call in both a } catch (e) { and a .catch(e => ...) to catch all errors.
The promise constructor catches such exceptions and converts them to rejections which means you never have to worry about synchronous exceptions vs asynchronous ones with promises. (It guards you on the other side by always executing then callbacks "in the next tick").
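For comparison, a sketch of the same request written with the promise constructor; a throw from JSON.parse simply becomes a rejection (ajax is the same hypothetical callback API as above):
function request(param) {
    return new Promise(function(resolve, reject) {
        var options = JSON.parse(param); // a synchronous throw here rejects the promise
        ajax(options, function(err, value) {
            if (err) reject(err);
            else resolve(value);
        });
    });
}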
In addition, deferreds require an extra type that every developer has to learn about, whereas the promise constructor does not, which is pretty nice.
FYI, if you're dying to use the deferred interface rather than the Promise executor interface despite all the good reasons against the deferred interface, you can code one trivially once and then use it everywhere (personally I think it's a bad idea to code this way, but your volume of questions on this topic suggests you think differently, so here it is):
function Deferred() {
    var self = this;
    var p = this.promise = new Promise(function(resolve, reject) {
        self.resolve = resolve;
        self.reject = reject;
    });
    this.then = p.then.bind(p);
    this.catch = p.catch.bind(p);
    if (p.finally) {
        this.finally = p.finally.bind(p);
    }
}
Now, you can use the interface you seem to be asking for:
var d = new Deferred();
d.resolve();
d.reject();
d.promise; // the actual promise
d.then(...) // can use .then() on either the Deferred or the Promise
d.promise.then(...)
Here is a slightly more compact ES6 version:
function Deferred() {
    const p = this.promise = new Promise((resolve, reject) => {
        this.resolve = resolve;
        this.reject = reject;
    });
    this.then = p.then.bind(p);
    this.catch = p.catch.bind(p);
    if (p.finally) {
        this.finally = p.finally.bind(p);
    }
}
Or, you can do what you asked for in your question using this Deferred() constructor:
var request = new Deferred();
request.resolve();
request.then(handleSuccess, handleError);
But, it has the downsides pointed out by Benjamin and is not considered the best way to code promises.

angularjs check if 2 promises have returned

I have a situation where I want to do things as each individual promise returns, and do something else when both have returned.
promise1.then(function() { /* do stuff */ })
promise2.then(function() { /* do stuff */ })

$q.all([promise1, promise2]).then(function () {
    var dependsOnBoth = promise1.x && promise2.x;
})
I'm wondering whether repeating the promises repeats the calls, and if there is a better way to just make sure that both have finished. Thanks!
Because you are calling then on each promise and again in $q.all on both promises. Each promise gets called twice.
Option one:
You would know when each promise is resolved, so don't call then on each promise; only call it in $q.all.
var promise1 = function() {
    var promise1defered = $q.defer();
    // promise1 is resolved here.
    promise1defered.resolve();
    return promise1defered.promise;
}

var promise2 = function() {
    var promise2defered = $q.defer();
    // promise2 is resolved here.
    promise2defered.resolve();
    return promise2defered.promise;
}

$q.all([promise1(), promise2()]).then(function () {
    // Both promises are resolved here
})
Option two:
Instead of using $q.all, go for chaining promises.
var promise2 = promise1.then(function() {
    // promise1 completed
    // Resolve promise2 here
});
promise2.then(function() {
    // promise 2 completed
    // Both promise 1 and promise 2 are completed.
});
then callbacks run in the order that they're added to the promise, so the individual thens will be called before the all's then is called. Each then will only be called once, so that's probably the best way to handle it.
Of course, you could always put all that post-processing just in the $q.all handler, since it'll happen after both.
Also: don't forget your error callbacks.
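For example, a sketch of attaching a rejection handler to the $q.all result (using the question's promise1 and promise2):
$q.all([promise1, promise2]).then(function(results) {
    // both resolved
}, function(err) {
    // either promise rejected
    console.error(err);
});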
