I am trying to figure out how to use Promises with generators. For that I am creating a helper function to print the result of an async task, but it seems like I am doing it the wrong way.
const asyncTask = () => new Promise((resolve, reject) => setTimeout(() => resolve('async task'), 1000))

function helper(func) {
    return func().next()
}

helper(function* main() {
    try {
        const a = yield asyncTask()
        console.log(a)
    } catch (e) {
        console.error('error happened', e)
    }
})
Getting no output.
I'm not 100% sure what you are trying to achieve. However, if you want to see the output of your promise, you have to await it and log it to the console. Your generator is producing your promise properly but you aren't doing anything with it. This alternative helper() function outputs the result:
async function helper(func) {
    let p = func().next().value;
    let output = await p;
    console.log(output);
}
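Note that this still only logs the value inside the helper; if the goal is for the console.log(a) inside main to run as well, the helper also has to resume the generator with the resolved value. A rough sketch of such a runner, assuming every yielded value is a promise:

async function helper(func) {
    const it = func();
    let result = it.next();                   // run up to the first yield
    while (!result.done) {
        try {
            const value = await result.value; // wait for the yielded promise
            result = it.next(value);          // resume the generator with the resolved value
        } catch (e) {
            result = it.throw(e);             // forward rejections into the generator's try/catch
        }
    }
    return result.value;
}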
It's generally easier to use asynchronous generators than to use synchronous generators mixed with other async apparatus.
An asynchronous generator works just like a normal generator except:
It's prefixed with async, like all async functions
next() returns a promise
Here's a simplified example:
let asyncGen = async function*() {
    yield Promise.resolve('foo');
};

let iterator = asyncGen();
iterator.next().then(result => alert(result.value));
(Note, there's no difference between my Promise.resolve('foo') and your more convoluted promise with a timeout; both are resolved asynchronously - I just kept my example simple.)
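For completeness, a for await...of loop consumes the same async generator without calling next() by hand; a minimal sketch:

(async () => {
    for await (const value of asyncGen()) {
        console.log(value); // "foo"
    }
})();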
You can read more about asynchronous generators in my three-part guide to generators.
You can look at the source code of the co.js to learn more about resolving generators as promises.
The c-promise2 package also supports resolving generators:
CPromise.from(function* () {
    const value1 = yield CPromise.delay(3000, 3);
    // Run promises in parallel using CPromise.all (shortcut syntax)
    const [value2, value3] = yield [CPromise.delay(3000, 4), CPromise.delay(3000, 5)];
    return value1 + value2 + value3;
}).then(function* (value) {
    console.log(`Done: ${value}`); // Done: 12
}, err => {
    console.log(`Failed: ${err}`);
});
Or just use ECMA async generators...
There are quite a few topics posted about how async/await behaves in JavaScript's map function, but a detailed explanation of the two examples below would still be nice:
const resultsPromises = myArray.map(async number => {
    return await getResult(number);
});

const resultsPromises = myArray.map(number => {
    return getResult(number);
});
edited: this is of course a fictional case, so it is just open for debate: why, how, and when should the map function wait for the await keyword? Solutions that modify this example, e.g. by calling Promise.all(), are not really the aim of this question.
getResult is an async function
The other answers have pretty well covered the details of how your examples behave, but I wanted to try to state it more succinctly.
const resultsPromises = myArray.map(async number => {
    return await getResult(number);
});

const resultsPromises = myArray.map(number => {
    return getResult(number);
});
Array.prototype.map synchronously loops through an array and transforms each element to the return value of its callback.
Both examples return a Promise.
async functions always return a Promise.
getResult returns a Promise.
Therefore, if there are no errors you can think of them both in pseudocode as:
const resultsPromises = myArray.map(/* map each element to a Promise */);
As zero298 stated and alnitak demonstrated, this very quickly (synchronously) starts off each promise in order; however, since they're run in parallel each promise will resolve/reject as they see fit and will likely not settle (fulfill or reject) in order.
Either run the promises in parallel and collect the results with Promise.all or run them sequentially using a for * loop or Array.prototype.reduce.
Alternatively, you could use AsyncAF, a third-party module for chainable asynchronous JavaScript methods that I maintain, to clean things up and perhaps make the code match your intuition of how an async map operation might work:
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

const getResult = async n => {
    await delay(Math.random() * 1000);
    console.log(n);
    return n;
};

(async () => {
    console.log('parallel:');
    await AsyncAF([1, 2, 3]).map(getResult).then(console.log);

    console.log('sequential:');
    await AsyncAF([1, 2, 3]).series.map(getResult).then(console.log)
})();

<script src="https://unpkg.com/async-af@7.0.12/index.js"></script>
async/await is useful when you want to flatten your code by removing the .then() callbacks or if you want to implicitly return a Promise:
const delay = n => new Promise(res => setTimeout(res, n));

async function test1() {
    await delay(200);
    // do something useful here
    console.log('hello 1');
}

async function test2() {
    return 'hello 2'; // this returned value will be wrapped in a Promise
}

test1();
test2().then(console.log);
However, in your case, you are not using await to replace a .then(), nor are you using it to return an implicit Promise since your function already returns a Promise. So they are not necessary.
Parallel execution of all the Promises
If you want to run all Promises in parallel, I would suggest simply returning the result of getResult from map() to generate an array of Promises. The Promises will be started sequentially but will eventually run in parallel.
const resultsPromises = indicators.map(getResult);
Then you can await all promises and get the resolved results using Promise.all():
const data = [1, 2, 3];
const getResult = x => new Promise(res => {
    return setTimeout(() => {
        console.log(x);
        res(x);
    }, Math.random() * 1000)
});
Promise.all(data.map(getResult)).then(console.log);
Sequential execution of the Promises
However, if you want to run each Promise sequentially and wait for the previous Promise to resolve before running the next one, then you can use reduce() and async/await like this:
const data = [1, 2, 3];
const getResult = x => new Promise(res => {
    return setTimeout(() => {
        console.log(x);
        res(x);
    }, Math.random() * 1000)
});

data.reduce(async (previous, x) => {
    const result = await previous;
    return [...result, await getResult(x)];
}, Promise.resolve([])).then(console.log);
Array.prototype.map() is a function that transforms Arrays. It maps one Array to another Array. The most important part of its function signature is the callback. The callback is called on each item in the Array and what that callback returns is what is put into the new Array returned by map.
It does not do anything special with what gets returned. It does not call .then() on the items, it does not await anything. It synchronously transforms data.
That means that if the callback returns a Promise (which all async functions do), all the promises will be "hot" and running in parallel.
In your example, if getResult() returns a Promise or is itself async, there isn't really a difference between your implementations. resultsPromises will be populated by Promises that may or may not be resolved yet.
If you want to wait for everything to finish before moving on, you need to use Promise.all().
Additionally, if you only want one getResult() call running at a time, use a regular for loop and await within the loop.
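For example, a plain loop that awaits each call keeps only one request in flight at a time (a rough sketch, assuming getResult returns a Promise):

async function processOneAtATime(myArray) {
    const results = [];
    for (const number of myArray) {
        // the next call does not start until the previous one has settled
        results.push(await getResult(number));
    }
    return results;
}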
If the intent of the first code snippet was to have a .map call that waits for all of the Promises to be resolved before returning (and to have those callbacks run sequentially) I'm afraid it doesn't work like that. The .map function doesn't know how to do that with async functions.
This can be demonstrated with the following code:
const array = [ 1, 2, 3, 4, 5 ];

function getResult(n) {
    console.log('starting ' + n);
    return new Promise(resolve => {
        setTimeout(() => {
            console.log('finished ' + n);
            resolve(n);
        }, 1000 * (Math.random(5) + 1));
    });
}

let promises = array.map(async (n) => {
    return await getResult(n);
});

console.log('map finished');

Promise.all(promises).then(console.log);
Where you'll see that the .map call finishes immediately before any of the asynchronous operations are completed.
If getResult always returns a promise and never throws an error then both will behave the same.
Some promise returning functions can throw errors before the promise is returned, in this case wrapping the call to getResult in an async function will turn that thrown error into a rejected promise, which can be useful.
As has been stated in many comments, you never need return await - it is equivalent to adding .then(result => result) on the end of a promise chain - it is (mostly) harmless but unnecessary. Just use return.
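To illustrate the one case where the async wrapper does matter, here is a sketch with a made-up function that throws before it ever returns a promise:

// hypothetical: throws synchronously for bad input, otherwise returns a promise
function getResultThatThrows(n) {
    if (n < 0) throw new Error('negative input');
    return Promise.resolve(n);
}

// without the async wrapper, the error is thrown synchronously out of .map:
// const promises = [-1, 2].map(getResultThatThrows); // throws immediately

// with the async wrapper, the same error becomes a rejected promise instead:
const promises = [-1, 2].map(async n => getResultThatThrows(n));
promises[0].catch(err => console.log('rejected:', err.message)); // rejected: negative input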
I have multiple time-consuming functions, and I want to run a function after they have all completed, for example:
data.x = thisTakes2Seconds();
data.y = thisTakes5Seconds();
http.post(data);
I'm familiar with the concept of callbacks in Javascript, but if I have several functions, am I really supposed to have nested callbacks several functions deep?
For handling asynchronous functions easily, the best way is to use promises and async/await:
function thisTakes2Seconds() {
    return new Promise(resolve => setTimeout(() => resolve(3), 200)); // 0.2 to avoid waiting :P
}

function thisTakes5Seconds() {
    return new Promise(resolve => setTimeout(() => resolve(5), 500));
}

async function foo() {
    const data = {};
    data.x = await thisTakes2Seconds();
    data.y = await thisTakes5Seconds();
    // This will run once both promises have been resolved
    console.log(data);
}

foo()
    .then(() => console.log('done!'))
    .catch(err => console.error(err));
If you wish to perform both functions in parallel, you can do so, and wait for both to finish using Promise.all
async function foo() {
    const data = {};

    // Promise.all returns an array where each item is the resolved
    // value of the promises passed to it, maintaining the order
    // So we use destructuring to assign those values
    [data.x, data.y] = await Promise.all([
        thisTakes2Seconds(),
        thisTakes5Seconds()
    ]);

    console.log(data);
}
If you already have an async function using callbacks, you can easily convert it to promises.
function myAsyncFunction(callback) {
    setTimeout(() => {
        callback(Math.random());
    }, 200);
}

function myAsyncFunctionPromise() {
    return new Promise((resolve, reject) => {
        myAsyncFunction(resolve);
        // If there is an error callback, just pass reject too.
    });
}
There are libraries like bluebird that already have a utility method to promisify a callback API:
http://bluebirdjs.com/docs/api/promise.promisify.html
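For instance, bluebird's promisify turns a Node-style callback API into a promise-returning one; a short sketch (the file path is just an example, and Node's built-in util.promisify works much the same way):

const Promise = require('bluebird');
const fs = require('fs');

// readFileAsync returns a promise instead of taking a callback
const readFileAsync = Promise.promisify(fs.readFile);

readFileAsync('config.json', 'utf8')
    .then(contents => console.log(contents))
    .catch(err => console.error(err));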
If you're running it in the browser and need to support outdated ones, you can use Babel to transpile async/await to ES5.
Your thisTakesXSeconds functions are immediately returning their results. That tells us that they're synchronous. No need for callbacks, that code will just take ~7 seconds to run.
If thisTakesXSeconds started an asynchronous process that took X seconds (although the fact that you're returning a result suggests otherwise), we'd look at ways of managing the completion process.
am I really supposed to have nested callbacks several functions deep?
That question, and the general dissatisfaction with the answer "yes," is why we now have promises and even async functions. :-)
You'd make your thisTakesXSeconds functions return a promise, and then do something along these lines if the functions can run in parallel:
Promise.all([
    thisTakes2Seconds(),
    thisTakes5Seconds()
])
.then(([x, y]) => {
    data.x = x;
    data.y = y;
    // use or return `data` here
})
// return the promise or add a `catch` handler
If they need to run in series (one after another), then
thisTakes2Seconds()
    .then(x => {
        data.x = x;
        return thisTakes5Seconds();
    })
    .then(y => {
        data.y = y;
        // use or return `data` here
    })
    // return the promise or add a `catch` handler
...which looks a bit clearer in an async function:
data.x = await thisTakes2Seconds();
data.y = await thisTakes5Seconds();
// use or return `data` here
// add appropriate error handling (at this level or when calling the function)
One technique I have used to handle executing some code after several async calls have executed is to use a "has completed" counter or object.
Each function executes a callback that increments the counter and includes a check along the lines of:

if (counter == numberOfFunctionsIWantedToComplete) doTheAfterWeHaveAllDataThing();
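A rough sketch of that counter pattern, assuming callback-taking versions of the functions from the question:

var completed = 0;
var expected = 2;
var data = {};

function whenDone() {
    completed += 1;
    if (completed === expected) {
        // every asynchronous task has reported back
        http.post(data);
    }
}

// each task invokes whenDone from its own callback
thisTakes2Seconds(function (x) { data.x = x; whenDone(); });
thisTakes5Seconds(function (y) { data.y = y; whenDone(); });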
I'm just going through this tutorial, and it baffles me why await only works in an async function.
From the tutorial:
As said, await only works inside async function.
From my understanding, async wraps the function return object into a Promise, so the caller can use .then()
async function f() {
    return 1;
}

f().then(alert); // 1
And await just waits for the promise to settle within the async function.
async function f() {
    let promise = new Promise((resolve, reject) => {
        setTimeout(() => resolve("done!"), 1000)
    });

    let result = await promise; // wait till the promise resolves (*)

    alert(result); // "done!"
}

f();
It seems to me their usage are not related, could someone please explain?
Code becomes asynchronous on await - we wouldn't know what to return
What await does in addition to waiting for the promise to resolve is that it immediately returns the code execution to the caller. All code inside the function after await is asynchronous.
async is syntactic sugar for returning a promise.
If you don't want to return a promise at await, what would be the sane alternative in an asynchronous code?
Let's look at the following erroneous code to see the problem of the return value:
function f() {
    // Execution becomes asynchronous after the next line, what do we want to return to the caller?
    let result = await myPromise;
    // No point returning string in async code since the caller has already moved forward.
    return "function finished";
}
We could instead ask another question: why don't we have a synchronous version of await that wouldn't change the code to asynchronous?
My take on that is that for many good reasons, making asynchronous code synchronous has been made difficult by design. For example, it would make it too easy for people to accidentally make their whole application freeze while waiting for an asynchronous function to return.
To further illustrate the runtime order with async and await:
async function f() {
    for (var i = 0; i < 1000000; i++); // create some synchronous delay

    let promise = new Promise((resolve, reject) => {
        setTimeout(() => resolve("done!"), 1000)
    });

    console.log("message inside f before returning, still synchronous, i = " + i);

    // let's await and at the same time return the promise to the caller
    let result = await promise;

    console.log("message inside f after await, asynchronous now");
    console.log(result); // "done!"

    return "function finished";
}

let myresult = f();
console.log("message outside f, immediately after calling f");
The console log output is:
message inside f before returning, still synchronous, i = 1000000
message outside f, immediately after calling f
message inside f after await, asynchronous now
done!
async and await are both meta keywords that allow asynchronous code to be written in a way that looks synchronous. An async function tells the compiler ahead of time that the function will be returning a Promise and will not have a value resolved right away. To use await and not block the thread async must be used.
async function f() {
    return await fetch('/api/endpoint');
}
is equivalent to
function f() {
    return new Promise((resolve, reject) => {
        return fetch('/api/endpoint')
            .then(resolve);
    });
}
All four functions that are called below in update return promises.
async function update() {
    var urls = await getCdnUrls();
    var metadata = await fetchMetaData(urls);
    var content = await fetchContent(metadata);
    await render(content);
    return;
}
What if we want to abort the sequence from outside, at any given time?
For example, while fetchMetaData is being executed, we realize we no longer need to render the component and we want to cancel the remaining operations (fetchContent and render). Is there a way to abort/cancel these operations from outside the update function?
We could check against a condition after each await, but that seems like an inelegant solution, and even then we will have to wait for the current operation to finish.
The standard way to do this now is through AbortSignals
async function update({ signal } = {}) {
    // pass these to methods to cancel them internally in turn
    // this is implemented throughout Node.js and most of the web platform
    try {
        var urls = await getCdnUrls({ signal });
        var metadata = await fetchMetaData(urls);
        var content = await fetchContent(metadata);
        await render(content);
    } catch (e) {
        if (e.name !== 'AbortError') throw e;
    }
    return;
}

// usage
const ac = new AbortController();
update({ signal: ac.signal });
ac.abort(); // cancel the update
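Inside a callee such as getCdnUrls, the signal can usually just be forwarded to an API that understands it, for example fetch; a sketch (the URL is made up):

async function getCdnUrls({ signal } = {}) {
    // fetch rejects with an AbortError as soon as the signal is aborted
    const response = await fetch('/api/cdn-urls', { signal });
    return response.json();
}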
OLD 2016 content below, beware dragons
I just gave a talk about this - this is a lovely topic but sadly you're not really going to like the solutions I'm going to propose as they're gateway-solutions.
What the spec does for you
Getting cancellation "just right" is actually very hard. People have been working on just that for a while and it was decided not to block async functions on it.
There are two proposals attempting to solve this in ECMAScript core:
Cancellation tokens - which adds cancellation tokens that aim to solve this issue.
Cancelable promise - which adds catch cancel (e) { syntax and throw.cancel syntax which aims to address this issue.
Both proposals changed substantially over the last week so I wouldn't count on either to arrive in the next year or so. The proposals are somewhat complementary and are not at odds.
What you can do to solve this from your side
Cancellation tokens are easy to implement. Sadly the sort of cancellation you'd really want (aka "third state" cancellation, where cancellation is not an exception) is impossible with async functions at the moment since you don't control how they're run. You can do two things:
Use coroutines instead - bluebird ships with sound cancellation using generators and promises which you can use.
Implement tokens with abortive semantics - this is actually pretty easy so let's do it here
CancellationTokens
Well, a token signals cancellation:
class Token {
    constructor(fn) {
        this.isCancellationRequested = false;
        this.onCancelled = []; // actions to execute when cancelled
        this.onCancelled.push(() => this.isCancellationRequested = true);
        // expose a promise to the outside
        this.promise = new Promise(resolve => this.onCancelled.push(resolve));
        // let the user add handlers (optional)
        if (fn) fn(f => this.onCancelled.push(f));
    }
    cancel() { this.onCancelled.forEach(x => x()); } // invoke each registered handler
}
This would let you do something like:
async function update(token) {
    if (token.isCancellationRequested) return;
    var urls = await getCdnUrls();
    if (token.isCancellationRequested) return;
    var metadata = await fetchMetaData(urls);
    if (token.isCancellationRequested) return;
    var content = await fetchContent(metadata);
    if (token.isCancellationRequested) return;
    await render(content);
    return;
}
var token = new Token(); // don't need any special handling here
update(token);
// ...
if(updateNotNeeded) token.cancel(); // will abort asynchronous actions
Which is a really ugly way that would work, optimally you'd want async functions to be aware of this but they're not (yet).
Optimally, all your interim functions would be aware and would throw on cancellation (again, only because we can't have third-state) which would look like:
async function update(token) {
    var urls = await getCdnUrls(token);
    var metadata = await fetchMetaData(urls, token);
    var content = await fetchContent(metadata, token);
    await render(content, token);
    return;
}
Since each of our functions are cancellation aware, they can perform actual logical cancellation - getCdnUrls can abort the request and throw, fetchMetaData can abort the underlying request and throw and so on.
Here is how one might write getCdnUrl (note the singular) using the XMLHttpRequest API in browsers:
function getCdnUrl(url, token) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    var p = new Promise((resolve, reject) => {
        xhr.onload = () => resolve(xhr);
        xhr.onerror = e => reject(new Error(e));
        token.promise.then(x => {
            try { xhr.abort(); } catch (e) {} // ignore abort errors
            reject(new Error("cancelled"));
        });
    });
    xhr.send();
    return p;
}
This is as close as we can get with async functions without coroutines. It's not very pretty but it's certainly usable.
Note that you'd want to avoid cancellations being treated as exceptions. This means that if your functions throw on cancellation you need to filter those errors on the global error handlers process.on("unhandledRejection", e => ... and such.
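In Node, that filter might look roughly like this (a sketch; the "cancelled" message matches the getCdnUrl example above):

process.on("unhandledRejection", err => {
    if (err && err.message === "cancelled") return; // ignore our own cancellations
    console.error("Unhandled rejection:", err);
});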
You can get what you want using Typescript + Bluebird + cancelable-awaiter.
Now that all evidence points to cancellation tokens not making it into ECMAScript, I think the best solution for cancellation is the bluebird implementation mentioned by @BenjaminGruenbaum; however, I find the usage of co-routines and generators a bit clumsy and uneasy on the eyes.
Since I'm using Typescript, which now support async/await syntax for es5 and es3 targets, I've created a simple module which replaces the default __awaiter helper with one that supports bluebird cancellations: https://www.npmjs.com/package/cancelable-awaiter
Unfortunately, there is no built-in support for cancellable promises so far. There are some custom implementations, e.g.:
Extends/wraps a promise to be cancellable and resolvable
function promisify(promise) {
    let _resolve, _reject

    let wrap = new Promise(async (resolve, reject) => {
        _resolve = resolve
        _reject = reject
        let result = await promise
        resolve(result)
    })

    wrap.resolve = _resolve
    wrap.reject = _reject
    return wrap
}
Usage: Cancel promise and stop further execution immediately after it
async function test() {
    // Create promise that should be resolved in 3 seconds
    let promise = new Promise(resolve => setTimeout(() => resolve('our resolved value'), 3000))

    // extend our promise to be cancellable
    let cancellablePromise = promisify(promise)

    // Cancel promise in 2 seconds.
    // if you comment this line out, then promise will be resolved.
    setTimeout(() => cancellablePromise.reject('error code'), 2000)

    // wait promise to be resolved
    let result = await cancellablePromise

    // this line will never be executed!
    console.log(result)
}
In this approach, a promise itself is executed till the end, but the caller code that awaits promise result can be 'cancelled'.
Unfortunately, no, you can't control the execution flow of default async/await behaviour – that does not mean the problem itself is impossible to solve, it means that you need to change your approach a bit.
First of all, your proposal about wrapping every async line in a check is a working solution, and if you have just a couple of places with such functionality, there is nothing wrong with it.
If you want to use this pattern pretty often, the best solution, probably, is to switch to generators: while not so widespread, they allow you to define each step's behaviour, and adding cancellation there is the easiest. Generators are pretty powerful, but, as I've mentioned, they require a runner function and are not as straightforward as async/await.
Another approach is the cancellable token pattern – you create an object which will be filled in by the function that wants to implement this functionality:
async function updateUser(token) {
    let cancelled = false;

    // we don't reject, since we don't have access to
    // the returned promise
    // so we just don't call other functions, and reject
    // in the end
    token.cancel = () => {
        cancelled = true;
    };

    const data = await wrapWithCancel(fetchData)();
    const userData = await wrapWithCancel(updateUserData)(data);
    const userAddress = await wrapWithCancel(updateUserAddress)(userData);
    const marketingData = await wrapWithCancel(updateMarketingData)(userAddress);

    // because we've wrapped all functions, in case of cancellations
    // we'll just fall through to this point, without calling any of
    // actual functions. We also can't reject by ourselves, since
    // we don't have control over returned promise
    if (cancelled) {
        throw { reason: 'cancelled' };
    }

    return marketingData;

    function wrapWithCancel(fn) {
        return data => {
            if (!cancelled) {
                return fn(data);
            }
        };
    }
}
const token = {};
const promise = updateUser(token);
// wait some time...
token.cancel(); // user will be updated anyway
I've written articles, both on cancellation and generators:
promise cancellation
generators usage
To summarize – you have to do some additional work in order to support cancellation, and if you want to have it as a first-class citizen in your application, you have to use generators.
Here is a simple example with a promise:
let resp = await new Promise(function(resolve, reject) {
    // simulating time consuming process
    setTimeout(() => resolve('Promise RESOLVED !'), 3000);

    // hit a button to cancel the promise
    $('#btn').click(() => resolve('Promise CANCELED !'));
});
Please see this codepen for a demo
Using CPromise (c-promise2 package) this can be easily done in the following way
(Demo):
import CPromise from "c-promise2";
async function getCdnUrls() {
    console.log(`task1:start`);
    await CPromise.delay(1000);
    console.log(`task1:end`);
}

async function fetchMetaData() {
    console.log(`task2:start`);
    await CPromise.delay(1000);
    console.log(`task2:end`);
}

function* fetchContent() {
    // using generators is the recommended way to write asynchronous code with CPromise
    console.log(`task3:start`);
    yield CPromise.delay(1000);
    console.log(`task3:end`);
}

function* render() {
    console.log(`task4:start`);
    yield CPromise.delay(1000);
    console.log(`task4:end`);
}

const update = CPromise.promisify(function* () {
    var urls = yield getCdnUrls();
    var metadata = yield fetchMetaData(urls);
    var content = yield* fetchContent(metadata);
    yield* render(content);
    return 123;
});

const promise = update().then(
    (v) => console.log(`Done: ${v}`),
    (e) => console.warn(`Fail: ${e}`)
);

setTimeout(() => promise.cancel(), 2500);
Console output:
task1:start
task1:end
task2:start
task2:end
task3:start
Fail: CanceledError: canceled
Just like in regular code you should throw an exception from the first function (or each of the next functions) and have a try block around the whole set of calls. No need to have extra if-elses. That's one of the nice bits about async/await, that you get to keep error handling the way we're used to from regular code.
With regard to cancelling the other operations, there is no need to. They will not actually start until their expressions are encountered by the interpreter, so the second async call will only start after the first one finishes without errors. Other tasks might get the chance to execute in the meantime, but for all intents and purposes, this section of code is serial and will execute in the desired order.
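In other words, the pattern being described is roughly this (a sketch reusing the functions from the question):

async function update() {
    try {
        var urls = await getCdnUrls();       // if this throws or rejects...
        var metadata = await fetchMetaData(urls);
        var content = await fetchContent(metadata);
        await render(content);
    } catch (e) {
        // ...none of the later calls ever start, and the error lands here
        console.error(e);
    }
}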
This answer I posted may help you to rewrite your function as:
async function update() {
    var get_urls = comPromise.race([getCdnUrls()]);
    var get_metadata = get_urls.then(urls => fetchMetaData(urls));
    var get_content = get_metadata.then(metadata => fetchContent(metadata));
    var rendered = get_content.then(content => render(content));
    await rendered;
    return;
}

// this is the cancel command so that later steps will never proceed:
get_urls.abort();
But I am yet to implement the "class-preserving" then function so currently you have to wrap every part you want to be able to cancel with comPromise.race.
I created a library called @kaisukez/cancellation-token
The idea is to pass a CancellationToken to every async function, then wrap every promise in AsyncCheckpoint. So that when the token is cancelled, your async function will be cancelled in the next checkpoint.
This idea came from tc39/proposal-cancelable-promises
and conradreuter/cancellationtoken.
How to use my library
Refactor your code
// from this
async function yourFunction(param1, param2) {
    const result1 = await someAsyncFunction1(param1)
    const result2 = await someAsyncFunction2(param2)
    return [result1, result2]
}

// to this
import { AsyncCheckpoint } from '@kaisukez/cancellation-token'

async function yourFunction(token, param1, param2) {
    const result1 = await AsyncCheckpoint.after(token, () => someAsyncFunction1(param1))
    const result2 = await AsyncCheckpoint.after(token, () => someAsyncFunction2(param2))
    return [result1, result2]
}
Create a token then call your function with that token
import { CancellationToken, CancellationError } from '@kaisukez/cancellation-token'
const [token, cancel] = CancellationToken.source()
// spawn background task (run async function without using `await`)
CancellationError.ignoreAsync(() => yourAsyncFunction(token, param1, param2))
// ... do something ...
// then cancel the background task
await cancel()
So this is the solution to the OP's question:
import { CancellationToken, CancellationError, AsyncCheckpoint } from '@kaisukez/cancellation-token'

async function update(token) {
    var urls = await AsyncCheckpoint.after(token, () => getCdnUrls());
    var metadata = await AsyncCheckpoint.after(token, () => fetchMetaData(urls));
    var content = await AsyncCheckpoint.after(token, () => fetchContent(metadata));
    await AsyncCheckpoint.after(token, () => render(content));
    return;
}
const [token, cancel] = CancellationToken.source();
// spawn background task (run async function without using `await`)
CancellationError.ignoreAsync(() => update(token))
// ... do something ...
// then cancel the background task
await cancel()
Example written in Node with TypeScript of a call which can be aborted from outside:
import { EventEmitter } from 'events';

function cancelable(asyncFunc: Promise<void>): [Promise<void>, () => boolean] {
    class CancelEmitter extends EventEmitter { }
    const cancelEmitter = new CancelEmitter();

    const promise = new Promise<void>(async (resolve, reject) => {
        cancelEmitter.on('cancel', () => {
            resolve();
        });
        try {
            await asyncFunc;
            resolve();
        } catch (err) {
            reject(err);
        }
    });

    return [promise, () => cancelEmitter.emit('cancel')];
}
Usage:
const asyncFunction = async () => {
    // doSomething
}

const [promise, cancel] = cancelable(asyncFunction());

setTimeout(() => {
    cancel();
}, 2000);

(async () => await promise)();
As I understand it, ES6 generators are supposed to be able to yield to a function that returns a promise, eventually returning the resolved/rejected value, making the code read more like synchronous code and avoiding callback hell.
I am using node.js v0.12.2 with --harmony and the following code.
var someAsyncThing = function() {
    return new Promise(function(resolve, reject) {
        resolve("I'm Resolved!");
    });
};

someAsyncThing().then(function(res) { console.log(res); });
// Works as expected: logs I'm Resolved!

function* getPromise() {
    var x = yield someAsyncThing();
    console.log("x: " + x); // Fails x undefined
}

var y = getPromise();
console.log(y); // returns {}
console.log(y.next());
// Fails: logs { value: {}, done: false }
I've based the code off of the few examples I have been able to find online. What am I doing wrong?
As I understand it, ES6 generators are supposed to be able to yield to a function that returns a promise, eventually returning the resolved/rejected value.
No, that's not their purpose. ES6 generators are supposed to provide a simple way for writing iterators - each call to a generator function produces an iterator. An iterator is just a sequence of values - like an array, but consumed dynamically and produced lazily.
Now, generators can be abused for asynchronous control flow, by producing a sequence of promises that is consumed asynchronously and advancing the iterator with the results of each awaited promise. See here for an explanation without promises.
So what your code is missing is the consumer that actually waits for the promises and advances your generator. Usually you'd use a dedicated library (like co or task.js), or one of the helper functions that many promise libraries provide (Q, Bluebird, when, …), but for the purposes of this answer I'll show a simplified one:
function run(gf) {
    let g = gf();
    return Promise.resolve(function step(v) {
        var res = g.next(v);
        if (res.done) return res.value;
        return res.value.then(step);
    }());
}
Now with this function you can actually "execute" your getPromise generator:
run(getPromise).then(console.log, console.error);
tl;dr: The promise yielded by the generator has to move the generator forward.
If you look at first examples in http://davidwalsh.name/async-generators, you will notice that the async function actually moves the iterator forward:
function request(url) {
    // this is where we're hiding the asynchronicity,
    // away from the main code of our generator
    // `it.next(..)` is the generator's iterator-resume
    // call
    makeAjaxCall(url, function(response) {
        it.next(response); // <--- this is where the magic happens
    });
    // Note: nothing returned here!
}
But since you are working with promises, we can improve on that, a little bit. y.next().value returns a promise, and you'd have to listen to that promise. So instead of writing console.log(y.next()), you'd write:
var promise = y.next().value;
promise.then(y.next.bind(y)); // or promise.then(function(v) { y.next(v); });
Babel demo
Of course this is not very practical, because now you cannot access the next yielded value and you don't know when the generator will be done. However, you could write a recursive function which takes care of that. That's what the runGenerator helper introduced later in that article does.
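Such a runner might look roughly like this (a sketch along the lines of the runGenerator helper that article describes):

function runGenerator(genFn) {
    var it = genFn();

    function step(value) {
        var result = it.next(value);
        if (result.done) return;     // the generator has finished
        result.value.then(step);     // wait for the yielded promise, then resume
    }

    step();
}

runGenerator(getPromise); // logs "x: I'm Resolved!"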
Modern JavaScript has async/await syntax. For example:
function sleep(delay) {
    return new Promise(function (resolve, reject) {
        setTimeout(resolve, delay);
    });
}

(async function () {
    console.log('one');
    await sleep(1000);
    console.log('two');
    await sleep(1000);
    console.log('three');
})();
Prior to async/await syntax, you had the option of using Promises with function generators that yield promises. To do this, if you put the above code into https://babeljs.io and enable the babel-plugin-transform-async-to-generator it will produce the following code snippet.
function sleep(delay) {
    return new Promise(function (resolve, reject) {
        setTimeout(resolve, delay);
    });
}

_asyncToGenerator(function* () {
    console.log('one');
    yield sleep(1000);
    console.log('two');
    yield sleep(1000);
    console.log('three');
})();

function _asyncToGenerator(fn) {
    return function () {
        var self = this,
            args = arguments
        return new Promise(function (resolve, reject) {
            var gen = fn.apply(self, args)
            function _next(value) {
                asyncGeneratorStep(gen, resolve, reject, _next, _throw, "next", value)
            }
            function _throw(err) {
                asyncGeneratorStep(gen, resolve, reject, _next, _throw, "throw", err)
            }
            _next(undefined)
        })
    }
}

function asyncGeneratorStep(gen, resolve, reject, _next, _throw, key, arg) {
    try {
        var info = gen[key](arg)
        var value = info.value
    } catch (error) {
        reject(error)
        return
    }
    if (info.done) {
        resolve(value)
    } else {
        Promise.resolve(value).then(_next, _throw)
    }
}
N.B. this is a summary of a larger post here: ES6 asynchronous generator result