Iterator and Generator in JavaScript?

On Mozilla's Iterators and generators page there is a statement:
While custom iterators are a useful tool, their creation requires
careful programming due to the need to explicitly maintain their
internal state. Generators provide a powerful alternative: they allow
you to define an iterative algorithm by writing a single function
which can maintain its own state.
Regarding the above explanation, isn't it possible to write an iterative algorithm without generators, such as:
Array[Symbol.iterator] = function () {
    return {
        next: function () {
            // logic
            return {
                value: "",
                done: false
            };
        }
    };
};
I can't get my head around it. Could someone explain the main reason they created an alternative? It seems not much different to me.

They might look pretty similar on the surface, but they can be used in very different ways.
Iterators and Iterables
Iterators are rather strictly defined: they are objects (the iterators) which contain a next (and possibly a few other) function. Every time the next function is called, it is expected to return an object with two properties:
value: the current value of the iterator
done: is the iterator finished?
An iterable on the other hand is an object which has a property with a Symbol.iterator key (which represents the well-known symbol @@iterator). That key contains a function, which, when called, returns a new iterator.
An example of an iterable:
const list = {
    entries: { 0: 'a', 1: 'b' },
    [Symbol.iterator]: function () {
        let counter = 0;
        const entries = this.entries;
        return {
            next: function () {
                return {
                    value: entries[counter],
                    done: !entries.hasOwnProperty(counter++)
                };
            }
        };
    }
};
Their main purpose, as their name suggests, is to provide an interface which can be iterated:
for (let item of list) { console.log(item); }
// 'a'
// 'b'
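The same iterable also works with any other iteration-consuming construct, for example:
console.log([...list]); // ['a', 'b'] (spread syntax)
const [first] = list;   // array destructuring; first === 'a'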
Generators
Generators on the other hand are much more versatile. It helps to think of them as functions which can be paused and resumed.
While they can be iterated (the generator objects they return provide a next method), they can implement much more sophisticated procedures and provide input/output communication through that next method.
A simple generator:
function* mygen() {
    var myVal = yield 12; // pauses here; next(v) resumes with myVal = v
    return myVal * 2;
}
const myIt = mygen();
const firstGenValue = myIt.next().value;
// Generator is paused and yields the first value (12)
const result = myIt.next(firstGenValue * 2).value; // myVal becomes 24
console.log(result); // 48
Generator delegation
Generators can delegate to another generator:
function* mydelgen(val) {
    yield val * 2;
}
function* mygen() {
    var myVal = yield 12;
    yield* mydelgen(myVal); // delegate to another generator
}
const myIt = mygen();
const val = myIt.next().value;
console.log(val);                  // 12
console.log(myIt.next(val).value); // 24 (yielded by the delegate)
console.log(myIt.next().value);    // undefined (generator is done)
Generators & Promises
Generators and Promises together can create a sort of automatic asynchronous iterator with the help of utilities such as co.
co(function* () {
    // resolve multiple promises in parallel
    var a = Promise.resolve(1);
    var b = Promise.resolve(2);
    var c = Promise.resolve(3);
    var res = yield [a, b, c];
    console.log(res);
    // => [1, 2, 3]
}).catch(onerror);
In Conclusion
So in conclusion one could say that the main purpose of iterators is to create an interface for custom objects to be iterated, whereas generators provide a plethora of possibilities for synchronous and asynchronous workflows:
stateful functions
generator delegation
generators & promises
CSP
etc.

Isn't it possible to write an iterative algorithm without Generators?
No, it's not. Yes, it is possible to write every generator algorithm as a custom iterator, but the // logic in your code will be much more complicated. The emphasis of the statement is that it won't be iterative any more; it will be recursive.
As an exercise, here's a quite simple iterative generator function:
function* traverseTree(node) {
    if (node == null) return;
    yield* traverseTree(node.left);
    yield node.value;
    yield* traverseTree(node.right);
}
Try to rewrite it as a custom iterator. Whether you get stuck or get it done, it will show you what the difference is.
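For comparison, here is one possible hand-written equivalent (a sketch, assuming the same node/left/right/value tree shape as above). Note how an explicit stack has to replace the call stack that the generator version gets for free:
function traverseTreeIterator(root) {
    const stack = [];
    let node = root;
    return {
        [Symbol.iterator]() { return this; }, // make the iterator itself iterable
        next() {
            // walk down the left spine, remembering ancestors on the stack
            while (node != null) {
                stack.push(node);
                node = node.left;
            }
            if (stack.length === 0) return { value: undefined, done: true };
            const current = stack.pop();
            node = current.right; // the next call resumes with the right subtree
            return { value: current.value, done: false };
        }
    };
}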

Related

Verify Iterator versus AsyncIterator type

Is there any known trick in JavaScript to tell the difference between Iterator and AsyncIterator, without triggering iteration?
I'm trying to implement the following type checker:
function isAsyncIterator<T>(i: Iterator<T> | AsyncIterator<T>): boolean {
    // returns:
    // - true, if i is an asynchronous iterator
    // - false, if i is a synchronous iterator
}
I know that calling next would tell us so, but I need it at the point when I cannot trigger iteration.
Also, even though I gave the example in TypeScript, I need to check it strictly at run-time.
This is not possible. Iterators are defined by a protocol, not by a tangible property, so due to the halting problem one cannot determine whether an object is an iterator without actually iterating it. See also the similar question What is the technical definition of a Javascript iterable and how do you test for it?.
However, there are good heuristics that will detect most well-behaved iterator implementations and all builtin iterators:
Iterators typically are iterable (they couldn't be used in for … of loops otherwise, limiting their usefulness) and will delegate to themselves for that. So
function isIterator(obj) {
    if (Object(obj) !== obj) return false;
    const method = obj[Symbol.iterator];
    if (typeof method != 'function') return false;
    const iter = method.call(obj);
    return iter === obj;
}
function isAsyncIterator(obj) {
    if (Object(obj) !== obj) return false;
    const method = obj[Symbol.asyncIterator];
    if (typeof method != 'function') return false;
    const aIter = method.call(obj);
    return aIter === obj;
}
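A quick sanity check of these heuristics (the expected results assume a standards-compliant engine):
console.log(isIterator([1, 2, 3]));                      // false: an array is iterable, not an iterator
console.log(isIterator([1, 2, 3][Symbol.iterator]()));   // true: array iterators delegate to themselves
console.log(isAsyncIterator((async function* () {})())); // true: async generator objects do the same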
Iterators inherit from the builtin %IteratorPrototype%. Until the iterator helpers proposal introduces the global Iterator constructor, these objects are a bit cumbersome to access and test for, but it's possible nonetheless.
const GeneratorPrototype = Object.getPrototypeOf(function* () {}).prototype;
const IteratorPrototype = Object.getPrototypeOf(GeneratorPrototype);
const AsyncGeneratorPrototype = Object.getPrototypeOf(async function* () {}).prototype;
const AsyncIteratorPrototype = Object.getPrototypeOf(AsyncGeneratorPrototype);
function isIterator(obj) {
    return IteratorPrototype.isPrototypeOf(obj);
}
function isAsyncIterator(obj) {
    return AsyncIteratorPrototype.isPrototypeOf(obj);
}
(Of course, like any instanceof check, this doesn't work with objects from other realms, which is why I would recommend the first approach)

Why use while when it is always true?

I see most of the examples in redux-saga using while(true){}:
function* watcherSaga() {
    while (true) {
        yield something()
    }
}
Can we not simply write this instead?
function* watcherSaga() {
    yield something()
}
Or is there any difference?
Take a look at the following example. One generator is never "done" and the other generator is done after the first (and only) yield.
function something() {
    return Math.random();
}
function* watcherSaga1() {
    while (true) {
        yield something();
    }
}
function* watcherSaga2() {
    yield something();
}
const watcher1 = watcherSaga1();
const watcher2 = watcherSaga2();
console.log('watcher1: ', watcher1.next());
console.log('watcher1: ', watcher1.next());
console.log('watcher1: ', watcher1.next());
console.log('watcher2: ', watcher2.next());
console.log('watcher2: ', watcher2.next());
console.log('watcher2: ', watcher2.next());
With the while loop, the generator will continue to yield values forever. Without, it's just once.
When you have a need for some sequence (say, multiples of 7), a function like that can easily supply such a sequence to a consumer that imposes its own limits on how many values it needs.
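A minimal sketch of that idea, an endless sequence of multiples of 7 whose consumer decides when to stop:
function* multiplesOfSeven() {
    let n = 7;
    while (true) {
        yield n;
        n += 7;
    }
}
for (const m of multiplesOfSeven()) {
    if (m > 35) break; // the consumer imposes the limit
    console.log(m);    // 7, 14, 21, 28, 35
}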
Generators provide an extremely powerful way of structuring code, and have a way of permeating a design in some cases. They're particularly useful in the context of media generation, like p5.js, where there's lots of interacting iteration. Generators provide a particularly nice way to encapsulate that.
Generators return an iterator. When you call next() on that iterator, it produces the yielded value; once the function runs to completion, next() returns an undefined value and the iterator is done.
Check the snippet below. Hope this one helps you.
function* watcherSaga() {
    var i = 0;
    while (true) {
        yield i++;
    }
}
const sagaIterator = watcherSaga();
console.log("**** with while() loop *****");
console.log(sagaIterator.next());
console.log(sagaIterator.next());
console.log(sagaIterator.next());
// You can keep calling sagaIterator.next() and it never gets "done" because of "while(true)"
function* watcherSagaWithoutWhile() {
    var i = 0;
    yield i++;
}
const sagaIteratorWithoutWhile = watcherSagaWithoutWhile();
console.log("**** withOUT while() loop *****");
console.log(sagaIteratorWithoutWhile.next());
console.log(sagaIteratorWithoutWhile.next());
console.log(sagaIteratorWithoutWhile.next());
// The second call to "next()" returns an "undefined" value and the generator gets "done" because it has run to completion
For further reading: https://basarat.gitbooks.io/typescript/docs/generators.html

How to execute promises in series?

var promiseReturningFuncs = [];
for (var i = 0; i < 5; i++) {
    promiseReturningFuncs.push(askQuestion);
}
var programmers = [];
Promise.reduce(promiseReturningFuncs, function (resp, x) {
    console.log(typeof resp);
    if (typeof resp != "function") {
        programmers.push(resp);
    }
    return x();
})
.then(function (resp) {
    programmers.push(resp);
    console.log(programmers);
});
My goal: execute the askQuestion function in series and resolve an array of objects created by that function. (This function must execute in series so that it can respond to user input.)
So imagine that the askQuestion function returns a promise that resolves an object I want to add to an array.
This is my messy way of doing it.
I am looking for a cleaner way of doing it; ideally, I wouldn't even need to push to an array, I would just have a final .then where the response is an array.
Since you appear to be using the Bluebird promise library, you have a number of built-in options for sequencing your promise returning functions. You can use Promise.reduce(), Promise.map() with a concurrency value of 1, Promise.mapSeries or Promise.each(). If the iterator function returns a promise, all of these will wait for the next iteration until that promise resolves. Which to use depends more upon the mechanics of how your data is structured and what result you want (neither of which you actually show or describe).
Let's suppose you have an array of promise returning functions and you want to call them one at a time, waiting for the one to resolve before calling the next one. If you want all the results, then I'd suggest Promise.mapSeries():
let arrayOfPromiseReturningFunctions = [...];
// call all the promise returning functions in the array, one at a time,
// waiting for one to resolve before calling the next
Promise.mapSeries(arrayOfPromiseReturningFunctions, function (fn) {
    return fn();
}).then(function (results) {
    // results is an array of resolved results from all the promises
}).catch(function (err) {
    // process error here
});
Promise.reduce() could also be used, but it would accumulate a single result, passing it from one to the next and end with one final result (like Array.prototype.reduce() does).
Promise.map() is a more general version of Promise.mapSeries() that lets you control the concurrency number (the number of async operations in flight at the same time).
Promise.each() will also sequence your functions, but does not accumulate a result. It assumes you either don't have a result or you are accumulating the result out-of-band or via side effects. I tend to not like to use Promise.each() because I don't like side effect programming.
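For illustration, the Promise.reduce() variant could look roughly like this (a sketch against Bluebird's API, accumulating every result into one array instead of reducing to a single value):
Promise.reduce(arrayOfPromiseReturningFunctions, function (acc, fn) {
    // wait for this function's promise, then append its result
    return fn().then(function (result) {
        acc.push(result);
        return acc;
    });
}, []).then(function (results) {
    // results holds the resolved values, in call order
});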
You could solve this in pure JS using ES6 (ES2015) features:
function processArray(arr, fn) {
    return arr.reduce(
        (p, v) => p.then((a) => fn(v).then(r => a.concat([r]))),
        Promise.resolve([])
    );
}
It applies the function given to the array in series and resolves to an array of the results.
Usage:
const numbers = [0, 4, 20, 100];
const multiplyBy3 = (x) => new Promise(res => res(x * 3));
// Prints [ 0, 12, 60, 300 ]
processArray(numbers, multiplyBy3).then(console.log);
You'll want to double check browser compatibility but this works on reasonably current Chrome (v59), NodeJS (v8.1.2) and probably most others.
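On newer runtimes with async/await support, the same in-series behavior can be written even more directly (a sketch going beyond the ES2015 constraint of the answer above):
async function processArraySequentially(arr, fn) {
    const results = [];
    for (const item of arr) {
        results.push(await fn(item)); // wait for each promise before starting the next
    }
    return results;
}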
You can use recursion so that you can move to the next iteration in a then block.
function promiseToExecuteAllInOrder(promiseReturningFunctions /* array of functions */) {
    var resolvedValues = [];
    return new Promise(function (resolve, reject) {
        function executeNextFunction() {
            var nextFunction = promiseReturningFunctions.pop();
            if (nextFunction) {
                nextFunction().then(function (result) {
                    resolvedValues.push(result);
                    executeNextFunction();
                });
            } else {
                resolve(resolvedValues);
            }
        }
        executeNextFunction();
    });
}
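A usage sketch (askQuestion is assumed, as in the question, to return a promise). Note that pop() consumes the array from the end, so results arrive in reverse array order:
promiseToExecuteAllInOrder([askQuestion, askQuestion, askQuestion])
    .then(function (answers) {
        console.log(answers); // one resolved value per function
    });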
Executing one after another using a recursive function (in a non-promise way):
(function iterate(i, result, callback) {
    if (i > 5) return callback(result);
    askQuestion().then(res => iterate(i + 1, result.concat([res]), callback));
})(0, [], console.log);
For sure, this can be wrapped in a promise:
function askFive() {
    return new Promise(function (callback) {
        (function iterate(i, result) {
            if (i > 5) return callback(result);
            askQuestion().then(res => iterate(i + 1, result.concat([res])));
        })(0, []);
    });
}
askFive().then(console.log);
Or:
function afteranother(i,promise){
return new Promise(function(resolve){
if(!i) return resolve([]);
afteranother(i-1,promise).then(val=>promise().then(val2=>resolve(val.concat([val2])));
});
}
afteranother(5,askQuestion).then(console.log);

Using forEach to sequentially execute functions in Q

I'm attempting to run a series of functions based upon the Q API using their first strategy for sequences. This suggests the pattern:
var funcs = [foo, bar, baz, qux];
var result = Q(initialVal);
funcs.forEach(function (f) {
    result = result.then(f);
});
return result;
What structure is each of the functions within the array meant to take? I am quite confused about when to use return def.promise;. Is that simply always the last line? Will it frequently or always immediately follow def.resolve(someVar)? Is something like this the intended structure?
function foo(f) {
    var def = Q.defer();
    f++;
    def.resolve(f);
    return def.promise;
}
So that each subsequent function within the array will receive the newly calculated value of f: in this case, if var initialVal = 1; and four functions each increment f++, the returned result will be 4? How do I access that returned value? console.log(result) prints { state: 'pending' }.
What structure are each of the functions within the array meant to take?
Q.js allows promises to be created in several ways. For example :
function foo(value) {
    var def = Q.defer();
    def.resolve(value + 1);
    return def.promise;
}
function foo(value) {
    return Q(value + 1);
}
function foo(value) {
    return Q.Promise(function (resolve, reject) {
        resolve(value + 1);
    });
}
Other Promise libs are similar, but not necessarily so flexible. Native js Promises must be constructed with the third of these approaches.
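For comparison, the native equivalent of that third form would be (a sketch):
function foo(value) {
    return new Promise(function (resolve, reject) {
        resolve(value + 1);
    });
}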
However, in the real world you will only rarely need to create your own Promise. You will typically be dealing with promise-returning lib methods that someone else has written. For example :
function foo(value) {
    return lib.doSomethingAsync(value, and, other, params);
}
How do I access that returned value?
The code is easier to understand if the variable name "result" is replaced with "promise", and result.then(f) is rewritten with an anonymous function that calls f().
function performAsyncSequence() {
    var promise = Q(initialVal);
    funcs.forEach(function (f) {
        promise = promise.then(function (previousResult) {
            return f(previousResult);
        });
    });
    return promise;
}
This is 100% equivalent to the code in the question, but now it should be clearer how the previous result is passed down the promise chain.
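To actually observe the final value, unwrap the returned promise with .then() rather than logging the promise object itself, which is why console.log(result) printed { state: 'pending' }:
performAsyncSequence().then(function (finalResult) {
    console.log(finalResult); // the value produced by the last function in the chain
});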
Accessing all previous promise results in the sequence is more complicated. The answers here discuss the subject comprehensively.

Multiple Sequential Async JavaScript Functions

Let's say I have a function that looks like this:
var foo = function (callback) {
    var final = {};
    asyncFuncOne(function (x) {
        final.x = x;
    });
    asyncFuncTwo(function (y) {
        final.y = y;
    });
    callback(final);
};
Obviously, this doesn't do what I want it to do (call callback on final when it has both x and y). I have several questions:
Is there a way to do what I want it to do without nesting everything?
Does the current form introduce a race condition? Are both async functions accessing the same final?
Approach #0. Painful life without promises. Yet life
Actually, your code practically cries out to be rewritten with promises. Trust me, this refactoring is something you 100% need. But OK, let's try to solve this particular problem without invoking promises at all, just as an exercise. Before the promise era, the pattern was to introduce a special function that checks whether we can consider ourselves done or not.
In your particular case such function is:
function weAreDone() {
    return final.hasOwnProperty('x') && final.hasOwnProperty('y');
}
Then we can introduce asyncFuncDecorator:
var asyncFuncDecorator = function (asyncFunc, asyncFuncHandler) {
    return function (doneFunc, doneHandler) {
        asyncFunc(function () {
            // run the original handler first, then check whether we are done
            asyncFuncHandler.apply(null, arguments);
            if (doneFunc()) {
                doneHandler();
            }
        });
    };
};
With these two functions introduced, you can write something like:
var foo = function (callback) {
    var final = {};
    // here go the abovementioned declarations
    ...
    asyncFuncDecorator(asyncFuncOne, function (x) {
        final.x = x;
    })(weAreDone, function () { callback(final); }); // pass final on once both values are in
    asyncFuncDecorator(asyncFuncTwo, function (y) {
        final.y = y;
    })(weAreDone, function () { callback(final); });
};
You can keep working on making this approach more flexible and universal but, once again, trust me, you'll end up with something very similar to promises, so better to just use promises ;)
Approach #1. Promisifying existing functions
If, for some reason, you are not ready to rewrite all your functions from callback style to promises, you can promisify the existing functions by using, once again, a decorator. Here's how it can be done for native Promises, which are present in all modern browsers already (for alternatives, check this question):
function promisify(asyncCall) {
    return new Promise(function (resolve, reject) {
        asyncCall(resolve, reject);
    });
}
In that case you can rewrite your code in this fashion:
var foo = function (callback) {
    // here go the abovementioned declarations
    ...
    Promise.all([promisify(asyncFuncOne), promisify(asyncFuncTwo)]).then(function (data) {
        // by the way, I'd rather not call any variable "final" ))
        final.x = data[0];
        final.y = data[1];
        return final; // pass the object on so the callback receives it
    }).then(callback);
};
Needless to say, foo itself would be better off being promisified as well ;)
Approach #2. Promises everywhere. From the very beginning
It's worth reiterating this thought: as soon as you need to trigger some function after N other async functions have completed, promises are unbeatable in 99% of cases. It is almost always worth trying to rewrite existing code in a promise-based style. Here's how such code can look:
Promise.all([asyncFuncOne(), asyncFuncTwo()]).then(function (data) {
    return Promise.resolve({
        x: data[0],
        y: data[1]
    });
}).then(callback);
See how much better it becomes. Also, a common mistake when using promises is to have a sequential waterfall of thens: retrieving the first chunk of data, only after that the second one, and only after that the third one. You should never actually do this unless the data you request depends on what you received in one of your previous requests; otherwise, just use the all method.
This is very crucial to understand, and it is one of the main reasons why promises are quite often misunderstood as something excessively complicated.
Sidenote: as of December '14, native Promises are supported by all major modern browsers except IE, and Node.js has had native promise support since version 0.11.13, so in real life you still most probably will need to use a promise library. There are a lot of Promise spec implementations; you can check this page for a list of standalone promise libraries. It's quite a big list; the most popular solutions are, I guess, Q and bluebird.
Approach #3. Generators. Our bright future. Well, maybe
This is something worth mentioning: generators are de facto supported in Firefox, Chromium-based browsers and Node.js (launched with the --harmony_generators option). So there are cases where generators can be used, and actually already are used, in production code. It's just that if you are writing a general-purpose web app, you should be aware of this approach, but you probably won't use it for a while. The key point is that generators in JS allow two-way communication through yield/iterator.next():
function async(gen) {
    var it = gen();
    var state = it.next();
    var next = function () {
        if (state.done) {
            return state.value;
        }
        // state.value is the yielded async function; call it with a callback
        state.value(function (res) {
            state = it.next(res);
            next();
        });
    };
    next();
}
async(function* () {
    var res = {
        x: yield asyncFuncOne,
        y: yield asyncFuncTwo
    };
    callback(res);
});
Actually, there are already dozens of libraries which do this generator wrapping job for you.
You can read more about this approach and related libraries here.
Another solution is to create a setter:
var foo = function (callback) {
    var final = {
        setter: function (attr, value) {
            this[attr] = value;
            if (this.hasOwnProperty("x") && this.hasOwnProperty("y"))
                callback(this);
        }
    };
    asyncFuncOne(function (x) {
        final.setter("x", x);
    });
    asyncFuncTwo(function (y) {
        final.setter("y", y);
    });
};
In your original code, final.x and final.y are indeed set on final, but only after it has already been passed to callback, so unless the callback waits, x and y are undefined when callback receives it.
You could check, in each response, whether the other value has already come back, and then call the callback:
var foo = function (callback) {
    var final = {};
    asyncFuncOne(function (x) {
        final.x = x;
        if (typeof final.y !== 'undefined') {
            callback(final);
        }
    });
    asyncFuncTwo(function (y) {
        final.y = y;
        if (typeof final.x !== 'undefined') {
            callback(final);
        }
    });
};
You could nest your callbacks, though this will cause asyncFuncTwo to not be called until asyncFuncOne has finished:
var foo = function (callback) {
    var final = {};
    asyncFuncOne(function (x) {
        final.x = x;
        asyncFuncTwo(function (y) {
            final.y = y;
            callback(final);
        });
    });
};
Then there are Promises. These are the future of async; however, they are not fully supported across all browsers (namely, all of IE, 11 and below at this time). In fact, 40% of all browser users are not using a browser that natively supports Promises. This means you will have to use a polyfill library for support, adding substantial filesize to your page. For this simple problem, and at this given time, I wouldn't recommend using Promises. However, you should definitely read up on how they are used.
If you want to see what that could look like, it'd be this:
var asyncFuncOne = function () {
    return new Promise(function (resolve, reject) {
        // A 500ms async op that resolves x as 5
        setTimeout(function () { resolve(5); }, 500);
    });
};
var asyncFuncTwo = function () {
    return new Promise(function (resolve, reject) {
        // A 750ms async op that resolves y as 10
        setTimeout(function () { resolve(10); }, 750);
    });
};
var foo = function () {
    var final = {};
    return new Promise(function (resolve, reject) {
        Promise.all([
            asyncFuncOne(),
            asyncFuncTwo()
        ]).then(function (values) {
            final.x = values[0];
            final.y = values[1];
            resolve(final);
        });
    });
};
foo().then(function (final) {
    // After foo()'s Promise has resolved (750ms)
    console.log(final.x + ', ' + final.y);
});
Note no callbacks, just use of then. In a real scenario you would also use catch and reject. Read more about Promises here https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise but, again, I personally don't see a strong need to use them for this single, specific issue (but, to each their own).
One pretty bad idea, but one I've had to use before (because I wasn't about to import a 50k promise library for a single function), would be to set a looping timeout that checks whether all the required variables are set, and then calls the callback.
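A minimal sketch of that last-resort polling approach (assuming the question's callback-style asyncFuncOne/asyncFuncTwo; not recommended for real code):
var foo = function (callback) {
    var final = {};
    asyncFuncOne(function (x) { final.x = x; });
    asyncFuncTwo(function (y) { final.y = y; });
    (function poll() {
        if ('x' in final && 'y' in final) {
            callback(final);      // both values have arrived
        } else {
            setTimeout(poll, 50); // check again shortly
        }
    })();
};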
