sagas - tracking and processing parallel tasks - javascript

The docs indicate that in order to run parallel tasks, we do something like this:
const [ a, b, c ] = yield [ forkA, forkB, forkC ]
What strategies would you suggest for handling the status of each of these tasks?
For example:
a/c are successful, do X action
b is not successful, do Y action
I thought about utilizing actionChannels, along the lines of:
const tasksStarted = yield actionChannel(start_task);
const tasksCompleted = yield actionChannel(end_task);
Each completed task can then be placed in its corresponding bucket (X/Y).
However, I'm not sure if this is over-engineered and wanted a sanity check from anyone else who has solved similar issues.
To give a sense of scale, the upper bound of this array of tasks is around 20.
I also posted this in the redux-saga issues, but since the community here is great, I thought it'd be worth trying both, ahem, channels.

Well, sagas are meant for running asynchronous operations that need to exchange data about their status. The simplest option in this situation is to use the fork() effect directly instead of the automatic helpers that capture all Redux actions. See https://redux-saga.js.org/docs/advanced/ForkModel.html for general info about the forking model.
If you need not only to organize a set of waiting flows, but also to branch inside them depending on each other's status, you can use a trick with a closure and a "frozen" (deferred) promise:
function generateTrigger() {
  let raiseSuccess = null, raiseFail = null;
  let promise = new Promise((resolve, reject) => {
    raiseSuccess = resolve;
    raiseFail = reject;
  });
  return {
    raiseSuccess,
    raiseFail,
    promise
  };
}

let trigger = generateTrigger();

function* task1() {
  while (true) {
    // Some code
    if (isSuccess) { // isSuccess: whatever result the code above produced
      trigger.raiseSuccess();
      trigger = generateTrigger();
    } else {
      trigger.raiseFail();
      trigger = generateTrigger();
    }
    // Some code
  }
}

function* task2() {
  while (true) {
    // Some code
    try {
      yield call(() => trigger.promise);
      // If task1 finished with success
    } catch (e) {
      // If task1 finished with failure
    }
    // Some code
  }
}

function* main() {
  try {
    yield fork(task1);
    yield fork(task2);
  } catch (e) {
    // errors from the forked tasks would surface here
  }
}
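For the original question (up to ~20 tasks, dispatch X for the successes and Y for the failures), a lighter-weight pattern may also be enough: wrap each task so it never throws, run the wrappers in parallel, then sort the collected results into buckets. A minimal sketch, where the taskA/taskB/taskC sagas and the actionX/actionY creators are hypothetical names standing in for your own:

import { all, call, put } from 'redux-saga/effects';

// Wrap any task so it resolves to a status object instead of throwing.
function* safely(task, ...args) {
  try {
    const result = yield call(task, ...args);
    return { ok: true, result };
  } catch (error) {
    return { ok: false, error };
  }
}

function* runAll() {
  // all([...]) is the modern equivalent of yielding a plain array of effects
  const results = yield all([taskA, taskB, taskC].map(t => call(safely, t)));
  const succeeded = results.filter(r => r.ok);
  const failed = results.filter(r => !r.ok);
  if (succeeded.length) yield put(actionX(succeeded.map(r => r.result)));
  if (failed.length) yield put(actionY(failed.map(r => r.error)));
}

Since safely never throws, all() waits for every task to settle instead of aborting on the first failure.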

Related

How can I execute some async tasks in parallel with limit in generator function?

I'm trying to execute some async tasks in parallel with a limitation on the maximum number of simultaneously running tasks.
Currently these tasks run one after another. It's implemented this way:
export function signData(dataItem) {
  cadesplugin.async_spawn(async function* (args) {
    //... nestedArgs assignment logic ...
    for (const id of dataItem.identifiers) {
      yield* idHandler(dataItem, id, args, nestedArgs);
    }
    // some extra logic after all tasks were finished
  }, firstArg, secondArg);
}

async function* idHandler(edsItem, researchId, args, nestedArgs) {
  // ...
  let oDocumentNameAttr = yield cadesplugin.CreateObjectAsync("CADESCOM.CPAttribute");
  yield oDocumentNameAttr.propset_Value("Document Name");
  // ...
  // this function mutates some external data, making API calls, and returns void
}
Unfortunately, I can't make any changes in cadesplugin.* functions, but I can use any external libraries (or built-in Promise) in my code.
I found some methods (eachLimit and parallelLimit) in the async library that might work for me, and an answer that shows how to deal with it.
But there are still two problems I can't solve:
How can I pass the main params into the nested function?
The main function is a generator function, so I still need to work with yield expressions in the main and nested functions.
There's a link to cadesplugin.* source code, where you can find async_spawn (and another cadesplugin.*) function that used in my code.
That's the code I tried with no luck:
await forEachLimit(dataItem.identifiers, 5, yield* async function* (researchId, callback) {
  //... nested function code
});
It leads to an "Object is not async iterable" error.
Another attempt:
let functionArray = [];
dataItem.identifiers.forEach(researchId => {
  functionArray.push(researchIdHandler(dataItem, id, args, nestedArgs));
});
await parallelLimit(functionArray, 5);
It just does nothing.
Can I somehow solve this problem, or will generator functions not allow me to do this?
square peg, round hole
You cannot use async iterables for this problem. It is the nature of for await .. of to run in series: await blocks, and the loop will not continue until the awaited promise has resolved. You need a more precise level of control where you can enforce these specific requirements.
To start, we have a mock myJob that simulates a long computation. More than likely this will be a network request to some API in your app -
// any asynchronous task
const myJob = x =>
  sleep(rand(5000)).then(_ => x * 10)
Using Pool defined in this Q&A, we instantiate Pool(size=4) where size is the number of concurrent threads to run -
const pool = new Pool(4)
For ergonomics, I added a run method to the Pool class, making it easier to wrap and run jobs -
class Pool {
  constructor (size) ...
  open () ...
  deferNow () ...
  deferStacked () ...

  // added method
  async run (t) {
    const close = await this.open()
    return t().then(close)
  }
}
Now we need to write an effect that uses our pool to run myJob. Here you will also decide what to do with the result. Note the promise must be wrapped in a thunk, otherwise the pool cannot control when it begins -
async function myEffect(x) {
  // run the job with the pool
  const r = await pool.run(_ => myJob(x))

  // do something with the result
  const s = document.createTextNode(`${r}\n`)
  document.body.appendChild(s)

  // return a value, if you want
  return r
}
Now run everything by mapping myEffect over your list of inputs. In our example myEffect we return r, which means the result is also available after all results are fetched. This is optional but demonstrates how the program knows when everything is done -
Promise.all([1,2,3,4,5,6,7,8,9,10,11,12].map(myEffect))
  .then(JSON.stringify)
  .then(console.log, console.error)
full program demo
In the functioning demo below, I condensed the definitions so we can see them all at once. Run the program to verify the result in your own browser -
class Pool {
  constructor (size = 4) { Object.assign(this, { pool: new Set, stack: [], size }) }
  open () { return this.pool.size < this.size ? this.deferNow() : this.deferStacked() }
  async run (t) { const close = await this.open(); return t().then(close) }
  deferNow () { const [t, close] = thread(); const p = t.then(_ => this.pool.delete(p)).then(_ => this.stack.length && this.stack.pop().close()); this.pool.add(p); return close }
  deferStacked () { const [t, close] = thread(); this.stack.push({ close }); return t.then(_ => this.deferNow()) }
}

const rand = x => Math.random() * x
const effect = f => x => (f(x), x)
const thread = close => [new Promise(r => { close = effect(r) }), close]
const sleep = ms => new Promise(r => setTimeout(r, ms))

const myJob = x =>
  sleep(rand(5000)).then(_ => x * 10)

async function myEffect(x) {
  const r = await pool.run(_ => myJob(x))
  const s = document.createTextNode(`${r}\n`)
  document.body.appendChild(s)
  return r
}

const pool = new Pool(4)

Promise.all([1,2,3,4,5,6,7,8,9,10,11,12].map(myEffect))
  .then(JSON.stringify)
  .then(console.log, console.error)
slow it down
Pool above runs concurrent jobs as quickly as possible. You may also be interested in throttle which is also introduced in the original post. Instead of making Pool more complex, we can wrap our jobs using throttle to give the caller control over the minimum time a job should take -
const throttle = (p, ms) =>
  Promise.all([ p, sleep(ms) ]).then(([ value, _ ]) => value)
We can add a throttle in myEffect. Now if myJob runs very quickly, at least 5 seconds will pass before the next job is run -
async function myEffect(x) {
  const r = await pool.run(_ => throttle(myJob(x), 5000))
  const s = document.createTextNode(`${r}\n`)
  document.body.appendChild(s)
  return r
}
In general, it's better to apply @Mulan's answer.
But if you are also stuck with cadesplugin.* generator functions and don't really care about heavyweight external libraries, this answer may also be helpful.
(If you are worried about heavyweight external libraries, you can still mix this answer with @Mulan's.)
The parallel task running can be solved using the Promise.map function from the bluebird library and a double usage of the cadesplugin.async_spawn function.
The code will look like the following:
export function signData(dataItem) {
  cadesplugin.async_spawn(async function* (args) {
    // some extra logic before all of the tasks
    await Promise.map(dataItem.identifiers,
      (id) => cadesplugin.async_spawn(async function* (args) {
        // ...
        let oDocumentNameAttr = yield cadesplugin.CreateObjectAsync("CADESCOM.CPAttribute");
        yield oDocumentNameAttr.propset_Value("Document Name");
        // ...
        // this function mutates some external data and makes API calls
      }),
      {
        concurrency: 5 // parallel tasks count
      });
    // some extra logic after all tasks were finished
  }, firstArg, secondArg);
}
The magic comes from the async_spawn function, which is defined as:
function async_spawn(generatorFunction) {
  async function continuer(verb, arg) {
    let result;
    try {
      result = await generator[verb](arg);
    } catch (err) {
      return Promise.reject(err);
    }
    if (result.done) {
      return result.value;
    } else {
      return Promise.resolve(result.value).then(onFulfilled, onRejected);
    }
  }
  let generator = generatorFunction(Array.prototype.slice.call(arguments, 1));
  let onFulfilled = continuer.bind(continuer, "next");
  let onRejected = continuer.bind(continuer, "throw");
  return onFulfilled();
}
It can suspend the execution of the nested generator functions at their yield expressions without suspending the whole outer generator function.
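To make the trampoline concrete, here is a minimal, hypothetical usage sketch: continuer awaits each yielded promise and feeds the resolved value back into the generator via next():

// Hypothetical illustration of the trampoline above.
async_spawn(function* () {
  const value = yield Promise.resolve(42); // suspends here, resumes with 42
  console.log(value);                      // logs 42
});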

How to execute an action after the first one is resolved in saga?

I am working on a project that uses React and Redux-Saga. In my code, I have to call the same action twice, and I want the calls to execute one after the other. But what happens is that both of them are dispatched at once; because they are essentially non-blocking, the saga only resolves the first action that reaches it. How can I change this behavior from my React component, where these actions are dispatched? I can't make changes to the saga because I don't know it well enough; I am only dealing with React.
Please guide me!
I tried to use setTimeout(), but silly me, it won't work because the functions are non-blocking in nature.
let body = {};
body = {
  uuid: dragRow.uuid,
  orderId: hoverRowId
};
console.log("update for: ", dragRow);
this.props.updateSection(body);

body = {
  uuid: hoverRow.uuid,
  orderId: dragRowId
};
console.log("update for: ", hoverRow);
this.props.updateSection(body);
This is the code for the saga:
// UPDATE SECTION
function* updateSection(sectionAction) {
  try {
    const payload = yield call(Api.updateSection, sectionAction.payload);
    yield put({ type: sectionAction.type + "_success", payload: payload });
  } catch (error) {
    yield put({ type: sectionAction.type + "_failed", error });
  }
}

function* watchUpdateSection() {
  yield takeLeading([sectionConstants.UPDATE_SECTION, sectionConstants.HIDE_SECTION, sectionConstants.PUBLISH_SECTION], updateSection);
}
The expectation is that the first call to updateSection() resolves, and only after that is my second call processed. Right now both arrive at once and only the leading one is executed.
For synchronous (one-at-a-time) execution you can use an actionChannel, which queues matching actions instead of ignoring them the way takeLeading does while its current task is running: https://redux-saga.js.org/docs/advanced/Channels.html
import { take, actionChannel, call, ... } from 'redux-saga/effects'

function* watchUpdateSection() {
  // create a channel that is buffered with actions of these types
  const chan = yield actionChannel([
    sectionConstants.UPDATE_SECTION,
    sectionConstants.HIDE_SECTION,
    sectionConstants.PUBLISH_SECTION,
  ]);
  while (true) {
    // take from the channel and call the `updateSection` saga, waiting
    // for execution to complete before taking from the channel again
    const action = yield take(chan);
    yield call(updateSection, action);
  }
}
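The channel above queues every matching action. If only the most recent pending update mattered (an assumption about the requirements, not something the question states), the channel could be given an explicit buffer; buffers ships with redux-saga:

import { buffers } from 'redux-saga';
import { actionChannel, take, call } from 'redux-saga/effects';

function* watchLatestUpdateOnly() {
  // sliding(1) keeps only the newest queued action, dropping older ones
  const chan = yield actionChannel(sectionConstants.UPDATE_SECTION, buffers.sliding(1));
  while (true) {
    const action = yield take(chan);
    yield call(updateSection, action);
  }
}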

Handling subscription/unsubscription to a ton of events with redux-saga

In my current project, I'm dealing with Firebase websocket subscriptions. Different components can subscribe to different data; e.g., in a list of items, every ListItem component subscribes to a websocket "event" for that specific item by dispatching a SUBSCRIBE action in componentDidMount and unsubscribes by dispatching an UNSUBSCRIBE action in componentWillUnmount.
My sagas look like this:
const subscriptions = {}

export function * subscribeLoop () {
  while (true) {
    const { path } = yield take(SUBSCRIBE)
    subscriptions[path] = yield fork(subscription, path)
  }
}

export function * unsubscribeLoop () {
  while (true) {
    const { path } = yield take(UNSUBSCRIBE)
    yield cancel(subscriptions[path])
  }
}

export function * subscription (path) {
  let ref
  try {
    const updateChannel = channel()
    ref = api.child(path)
    ref.on('value', snapshot => {
      updateChannel.put(snapshot.val())
    })
    while (true) {
      const data = yield take(updateChannel)
      yield put(handleUpdate(path, data))
    }
  } finally {
    if (yield cancelled()) {
      ref.off()
      ref = null
    }
  }
}
I assume this is not the right way to deal with this - it is indeed rather slow on a list of 500 items.
How can I optimize the performance?
Do I even need to fork?
Should I introduce some kind of delay to give the thread some space to handle other things?
Any hints are appreciated.
Should I introduce some kind of delay to give the thread some space to handle other things?
First of all, remember that redux-saga and effects like fork don't actually create any threads spinning in infinite loops. They are only syntactic sugar for organizing a chain of callbacks, since the yield operator passes objects in both directions. From this point of view the question of forced delays makes no sense: there is no thread to give space to.
Do I even need to fork?
With some care you can get by without a set of fork calls and do everything in one root saga. The idea is to register a subscription callback on the websocket in the enclosing lexical scope, and to wait for incoming messages in a pseudo-infinite loop driven by a deferred promise.
Conceptually the code can look roughly like this:
const subscribers = new Map()

function * webSocketLoop () {
  let resolver = null
  let promise = new Promise(resolve => (resolver = resolve))
  let message = null

  websocket.on('message', (payload) => {
    message = Object.assign({}, payload)
    resolver()
    promise = promise.then(() => new Promise(resolve => (resolver = resolve)))
  })

  while (true) {
    yield call(() => promise)
    const type = message.type
    const handlers = subscribers.get(type) || []
    handlers.forEach(func => func(message))
  }
}

export function * mainSaga () {
  yield takeEvery(SUBSCRIBE, subscribe)     // subscribe/unsubscribe sagas
  yield takeEvery(UNSUBSCRIBE, unsubscribe) // maintain the `subscribers` map
  yield fork(webSocketLoop)
}
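The subscribe and unsubscribe handlers are left out above; a minimal sketch of what they might look like, assuming (hypothetically) that the actions carry a message type and a handler function:

function* subscribe (action) {
  const { messageType, handler } = action // assumed action shape
  const handlers = subscribers.get(messageType) || []
  subscribers.set(messageType, [...handlers, handler])
}

function* unsubscribe (action) {
  const { messageType, handler } = action
  const current = subscribers.get(messageType) || []
  subscribers.set(messageType, current.filter(h => h !== handler))
}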

Can you call a saga and continue execution only when a different saga is finished?

Our app uses an ATTEMPT - SUCCESS - FAILURE approach to handling our responses from the server.
I have a generator function that needs to behave like this:
function * getSingleSectorAttempt(action) {
  const sectors = yield select(getSectors);
  if (!sectors) {
    //If there are no sectors, I need to call the GET_SECTORS_ATTEMPT action
    //and only restart/continue this saga when the GET_SECTORS_SUCCESS action is fired
  }
  const id = sectors[action.name].id;
  try {
    const response = yield call(api.getSector, id);
    //...
  } catch (err) {
    //...
  }
}
From what I've read of the Redux Saga documentation, this does not seem immediately possible. However, I would like to see if I'm missing something. I've already tried this:
yield fork(takeLatest, Type.GET_SECTORS_SUCCESS, getSingleSectorAttempt);
yield put(Actions.getSectorsAttempt());
in the if (!sectors) conditional block, but while this works, it does not persist the initial GET_SINGLE_SECTOR_ATTEMPT action parameters, and I am not sure how to make it do so without getting into callback-and-argument spaghetti.
The effect allowing you to wait for an action to be dispatched is take. In your case:
function* getSingleSectorAttempt(action) {
  let sectors = yield select(getSectors);
  if (!sectors) {
    yield put(getSectorsAttempt());
    yield take(GET_SECTORS_SUCCESS);
    sectors = yield select(getSectors);
  }
  // resume here as normal
}
Your own answer could have unexpected side effects. For example, if getSectors can return a falsy value several times in the lifetime of the app, you would have several forked watchers waiting for GET_SECTORS_SUCCESS to be dispatched, each executing your side effect and each keeping a reference to the action that triggered it.
Oops, figured it out:
function* getSingleSectorAttempt(action) {
  const sectors = yield select(getSectors);
  if (!sectors) {
    // Pass the initial action in a callback function like so:
    yield fork(takeLatest, Type.GET_SECTORS_SUCCESS, () => getSingleSectorAttempt(action));
    yield put(Actions.getSectorsAttempt());
  } else {
    const id = sectors[action.name].id;
    try {
      const response = yield call(api.getSector, id);
      //...
    } catch (err) {
      //..
    }
  }
}

How to tie emitted events into redux-saga?

I'm trying to use redux-saga to connect events from PouchDB to my React.js application, but I'm struggling to figure out how to connect events emitted from PouchDB to my saga. Since the event uses a callback function (and I can't pass it a generator), I can't use yield put() inside the callback; it gives weird errors after ES2015 compilation (using Webpack).
So here's what I'm trying to accomplish; the part that doesn't work is inside replication.on('change', (info) => {}).
function * startReplication (wrapper) {
  while (yield take(DATABASE_SET_CONFIGURATION)) {
    yield call(wrapper.connect.bind(wrapper))
    // Returns a promise, or false.
    let replication = wrapper.replicate()
    if (replication) {
      replication.on('change', (info) => {
        yield put(replicationChange(info)) // <-- broken: `yield` inside a plain callback
      })
    }
  }
}
export default [ startReplication ]
As Nirrek explained, when you need to connect to push data sources, you'll have to build an event iterator for that source.
I'd like to add that the above mechanism can be made reusable, so we don't have to recreate an event iterator for each different source.
The solution is to create a generic channel with put and take methods. You call the take method from inside the generator and connect the put method to the listener interface of your data source.
Here is a possible implementation. Note that the channel buffers messages if no one is waiting for them (e.g. the generator is busy doing some remote call):
function createChannel () {
  const messageQueue = []
  const resolveQueue = []

  function put (msg) {
    // anyone waiting for a message ?
    if (resolveQueue.length) {
      // deliver the message to the oldest one waiting (First In First Out)
      const nextResolve = resolveQueue.shift()
      nextResolve(msg)
    } else {
      // no one is waiting ? queue the event
      messageQueue.push(msg)
    }
  }

  // returns a Promise resolved with the next message
  function take () {
    // do we have queued messages ?
    if (messageQueue.length) {
      // deliver the oldest queued message
      return Promise.resolve(messageQueue.shift())
    } else {
      // no queued messages ? queue the taker until a message arrives
      return new Promise((resolve) => resolveQueue.push(resolve))
    }
  }

  return {
    take,
    put
  }
}
Then the above channel can be used any time you want to listen to an external push data source. For your example:
function createChangeChannel (replication) {
  const channel = createChannel()
  // every change event will call put on the channel
  replication.on('change', channel.put)
  return channel
}

function * startReplication (getState) {
  // Wait for the configuration to be set. This can happen multiple
  // times during the life cycle, for example when the user wants to
  // switch database/workspace.
  while (yield take(DATABASE_SET_CONFIGURATION)) {
    let state = getState()
    let wrapper = state.database.wrapper

    // Wait for a connection to work.
    yield apply(wrapper, wrapper.connect)

    // Trigger replication, and keep the promise.
    let replication = wrapper.replicate()
    if (replication) {
      yield call(monitorChangeEvents, createChangeChannel(replication))
    }
  }
}

function * monitorChangeEvents (channel) {
  while (true) {
    const info = yield call(channel.take) // Blocks until the promise resolves
    yield put(databaseActions.replicationChange(info))
  }
}
We can use the eventChannel of redux-saga.
Here is my example:
// fetch history messages
function* watchMessageEventChannel(client) {
  const chan = eventChannel(emitter => {
    client.on('message', (message) => emitter(message));
    return () => {
      client.close().then(() => console.log('logout'));
    };
  });
  while (true) {
    const message = yield take(chan);
    yield put(receiveMessage(message));
  }
}

function* fetchMessageHistory(action) {
  const client = yield realtime.createIMClient('demo_uuid');
  // listen for message events
  yield fork(watchMessageEventChannel, client);
}
Please note:
Messages on an eventChannel are not buffered by default, so if you want to process message events strictly one by one, you cannot make a blocking call after const message = yield take(chan);
Otherwise, you have to provide a buffer to the eventChannel factory to specify a buffering strategy for the channel (e.g. eventChannel(subscriber, buffer)). See the redux-saga API docs for more info.
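For example, a sketch of the buffered variant, reusing the client from the example above (buffers is part of the redux-saga API; the sliding window size here is an arbitrary choice):

import { eventChannel, buffers } from 'redux-saga';

const chan = eventChannel(emitter => {
  client.on('message', (message) => emitter(message));
  return () => {
    client.close().then(() => console.log('logout'));
  };
}, buffers.sliding(5)); // keep the 5 most recent messages while the saga is busy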
The fundamental problem we have to solve is that event emitters are 'push-based', whereas sagas are 'pull-based'.
If you subscribe to an event like so: replication.on('change', (info) => {}), then the callback is executed whenever the replication event emitter decides to push a new value.
With sagas, we need to flip the control around. It is the saga that must be in control of when it decides to respond to new change info being available. Put another way, a saga needs to pull the new info.
Below is an example of one way to achieve this:
function* startReplication(wrapper) {
  while (yield take(DATABASE_SET_CONFIGURATION)) {
    yield apply(wrapper, wrapper.connect);
    let replication = wrapper.replicate()
    if (replication)
      yield call(monitorChangeEvents, replication);
  }
}

function* monitorChangeEvents(replication) {
  const stream = createReadableStreamOfChanges(replication);
  while (true) {
    const info = yield stream.read(); // Blocks until the promise resolves
    yield put(replicationChange(info));
  }
}

// Returns a stream object that has a read() method we can use to read new info.
// The read() method returns a Promise that will be resolved when info from a
// change event becomes available. This is what allows us to shift from working
// with a 'push-based' model to a 'pull-based' model.
function createReadableStreamOfChanges(replication) {
  let deferred;

  replication.on('change', info => {
    if (!deferred) return;
    deferred.resolve(info);
    deferred = null;
  });

  return {
    read() {
      if (deferred)
        return deferred.promise;
      deferred = {};
      deferred.promise = new Promise(resolve => deferred.resolve = resolve);
      return deferred.promise;
    }
  };
}
There is a JSbin of the above example here: http://jsbin.com/cujudes/edit?js,console
You should also take a look at Yassine Elouafi's answer to a similar question:
Can I use redux-saga's es6 generators as onmessage listener for websockets or eventsource?
Thanks to @Yassine Elouafi!
I created a short, MIT-licensed, generic channels implementation as a redux-saga extension for the TypeScript language, based on @Yassine Elouafi's solution.
// redux-saga/channels.ts
import { Saga } from 'redux-saga';
import { call, fork } from 'redux-saga/effects';

export interface IChannel<TMessage> {
  take(): Promise<TMessage>;
  put(message: TMessage): void;
}

export function* takeEvery<TMessage>(channel: IChannel<TMessage>, saga: Saga) {
  while (true) {
    const message: TMessage = yield call(channel.take);
    yield fork(saga, message);
  }
}

export function createChannel<TMessage>(): IChannel<TMessage> {
  const messageQueue: TMessage[] = [];
  const resolveQueue: ((message: TMessage) => void)[] = [];

  function put(message: TMessage): void {
    if (resolveQueue.length) {
      const nextResolve = resolveQueue.shift();
      nextResolve(message);
    } else {
      messageQueue.push(message);
    }
  }

  function take(): Promise<TMessage> {
    if (messageQueue.length) {
      return Promise.resolve(messageQueue.shift());
    } else {
      return new Promise((resolve: (message: TMessage) => void) => resolveQueue.push(resolve));
    }
  }

  return {
    take,
    put
  };
}
And here is example usage, similar to the redux-saga *takeEvery construction:
// example-socket-action-binding.ts
import { put } from 'redux-saga/effects';
import {
  createChannel,
  takeEvery as takeEveryChannelMessage
} from './redux-saga/channels';

export function* socketBindActions(
  socket: SocketIOClient.Socket
) {
  const socketChannel = createSocketChannel(socket);
  yield* takeEveryChannelMessage(socketChannel, function* (action: IAction) {
    yield put(action);
  });
}

function createSocketChannel(socket: SocketIOClient.Socket) {
  const socketChannel = createChannel<IAction>();
  socket.on('action', (action: IAction) => socketChannel.put(action));
  return socketChannel;
}
I had the same problem, also using PouchDB, and found the answers provided extremely useful and interesting. However, there are many ways to do the same thing in PouchDB; I dug around a little and found a different approach which may be easier to reason about.
If you don't attach listeners to the db.changes request, it returns any change data directly to the caller, and adding continuous: true to the options makes it issue a longpoll that doesn't return until some change has happened. So the same result can be achieved with the following:
export function * monitorDbChanges() {
  var info = yield call([db, db.info]); // get reference to the last change
  let lastSeq = info.update_seq;

  while (true) {
    try {
      var changes = yield call([db, db.changes], { since: lastSeq, continuous: true, include_docs: true, heartbeat: 20000 });
      if (changes) {
        for (let i = 0; i < changes.results.length; i++) {
          yield put({ type: 'CHANGED_DOC', doc: changes.results[i].doc });
        }
        lastSeq = changes.last_seq;
      }
    } catch (error) {
      yield put({ type: 'monitor-changes-error', err: error });
    }
  }
}
There is one thing that I hadn't got to the bottom of: if I replace the for loop with changes.results.forEach((change) => {...}), I get an invalid-syntax error on the yield. I assumed it was some clash in the use of iterators; the actual reason is that yield is only valid directly inside a generator function body, and the forEach callback is an ordinary function, so a yield inside it is a syntax error.
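A minimal illustration of why the forEach version fails:

function* emitDocs(changes) {
  for (const change of changes.results) {
    yield put({ type: 'CHANGED_DOC', doc: change.doc }); // OK: directly inside the generator
  }
  // changes.results.forEach(change => {
  //   yield put({ type: 'CHANGED_DOC', doc: change.doc }); // SyntaxError: the arrow
  // });                                                    // callback is not a generator
}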
