async.parallel for app.use with second parameter - javascript

I use Express v4.17.1 and would like to make my app.use() middlewares run in parallel.
So I have found some examples on the internet.
Example:
function runInParallel() {
  async.parallel([
    getUserProfile,
    getRecentActivity,
    getSubscriptions,
    getNotifications
  ], function(err, results) {
    // This callback runs when all the functions complete
  });
}
But what I have in my application is:
const app = express();
const APP_FOLDER = "bdt";
app.use(httpContext.middleware);
app.use(metricsMiddleware);
app.use(rtEndMiddleware);
app.use(trackingContextMiddleware);
app.use(healthRoutes());
app.use("/" + APP_FOLDER + "/api/products", productsRoutes);
app.use("/tcalc/" + APP_FOLDER + "/api/products", productsRoutes);
productsRoutes is this:
const jsonParser = bodyParser.json({
  limit: "1mb",
});
const accessFilter = accessFilterMiddleware(Registry.list());
const localDevFilter = localDevFilterMiddleware(Registry.list());
const apiRoutes: Router = Router();
apiRoutes.get("/", listProducts);
apiRoutes.get("/healthz", cProductHealth);
apiRoutes.get("/:id", accessFilter, localDevFilter, fetchProductData);
apiRoutes.post(
  "/:id",
  accessFilter,
  localDevFilter,
  jsonParser,
  fetchProductData,
);
apiRoutes.get(
  "/:id/fields/:fieldId/options",
  accessFilter,
  localDevFilter,
  fetchProductOptions,
);
apiRoutes.post(
  "/:id/loadmoreoptions",
  accessFilter,
  localDevFilter,
  jsonParser,
  loadMoreOptions,
);
apiRoutes.post("/:id/ploy", accessFilter, jsonParser, fetchMultipleProductData);
apiRoutes.post(
  "/:id/gxx",
  accessFilter,
  localDevFilter,
  jsonParser,
  fetchGssData,
);
apiRoutes.get("/:id/healthz", collectProductHealth);
I think for the first ones it should be easy:
async.parallel([
  httpContext.middleware,
  metricsMiddleware,
  rtEndMiddleware,
  trackingContextMiddleware,
  healthRoutes()
], function(err, results) {
  // This callback runs when all the functions complete
});
But my question: how can I do it with a second parameter (productsRoutes) in this case?
app.use("/" + APP_FOLDER + "/api/products", productsRoutes);
app.use("/tcalc/" + APP_FOLDER + "/api/products", productsRoutes);

You've been going down a rabbit hole and should change your approach quite a bit. You're completely overcomplicating a simple problem, and the solution you're looking for is not possible in JavaScript.
If you want to create routes in Express, then don't use app.use for everything; it should only be used for middleware or to register a router, on which you can define routes.
You should be using:
app.get('/', () => ...
to define your routes. Alternatively you can use a router for that as well by doing:
app.use(router)
...
router.get('/', () => ...
More than that, if you want to define async or "parallel" routes in JavaScript, just define async callbacks as normal and remove most of the stuff you've done.
app.get('/', async () => ...
That is now a route that will execute asynchronously.
You should also be careful not to just mess around with Express's middleware chain, because you're going to break the existing middleware (like error routes).
What's more, the library you're referring to is just a helper library with neat functionality; it won't fundamentally change how JavaScript works. When you call an async function, its callback is added to the event queue and callbacks are still executed one after the other, in serial. True multithreading isn't possible, except for workers and browser API calls executing temporarily on a thread of their own, before their callbacks are added back to the event queue anyway.
What you're looking for is just a simple router.get('/', async () => ...). That's the best you can do, and it will appear that all your routes are executing in parallel.
After you've declared multiple of those, you can invoke all of them with something like Promise.all. My best guess is that this is roughly what something like async.parallel is doing as well.
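As a minimal sketch of that idea (assuming getUserProfile and getRecentActivity are plain Promise-returning functions rather than middleware), a single async route handler can run independent lookups concurrently:
router.get('/', async (req, res, next) => {
  try {
    // both lookups start immediately and settle concurrently
    const [profile, activity] = await Promise.all([
      getUserProfile(req),
      getRecentActivity(req),
    ]);
    res.json({ profile, activity });
  } catch (err) {
    next(err); // forward failures to Express's error handling
  }
});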

Middleware in Concept
As far as I understand it, middleware is akin to a chain, where each middleware item:
performs some initial logic
calls the next link in the chain, then
performs some final logic
where both of the chunks of logic are optional, but calling the next link is not. For example, with your first set of middleware, through to healthRoutes(), it could be visualised like so:
> httpContext.middleware
> metricsMiddleware
> rtEndMiddleware
> trackingContextMiddleware
/healthRoutes
< trackingContextMiddleware
< rtEndMiddleware
< metricsMiddleware
< httpContext.middleware
This chain structure is generally used because each middleware may enhance the common state of the request, sometimes conditionally based on the "output" of previously executed middlewares, and for many middlewares the initial logic needs to be performed before the body of the request, and the final logic after.
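As a sketch of that shape (this timing middleware is hypothetical; the "final logic" half is hooked onto the response's finish event, a common pattern since next() itself doesn't wait for downstream handlers to complete):
function timingMiddleware(req, res, next) {
  const start = Date.now(); // initial logic
  res.on('finish', () => {  // final logic, once the response has been sent
    console.log(req.method + ' ' + req.originalUrl + ' took ' + (Date.now() - start) + 'ms');
  });
  next(); // call the next link in the chain
}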
Middleware - Parallelisable or Not?
Based on the parallelisation in the blog post, the previously separate middlewares (getUser, getSiteList, getCurrentSite and getSubscription) very likely operate independently, which, in my opinion, makes them quite poor candidates for use as middleware. It's no wonder that they observed a significant performance improvement, as these items should never have been run in series in the first place. If we employ the same parallel() function from the post (note, specifically that function, not async.parallel), for the same middlewares as above, the execution looks more like:
> httpContext.middleware
< httpContext.middleware
> metricsMiddleware
< metricsMiddleware
> rtEndMiddleware
< rtEndMiddleware
> trackingContextMiddleware
< trackingContextMiddleware
/healthRoutes
So, parallelising the middlewares changes the order of execution significantly. Only you can determine whether this would be acceptable for your application, but I can imagine that both metricsMiddleware and trackingContextMiddleware may need to perform some logic both before and after the request being executed.
Express Route Syntax
If you decide that you do want to parallelise some or all of the middleware, I'd suggest taking advantage of native Promises directly, not a separate library, and do something like:
const parallel = (...middlewares) => (req, res, next) => {
  Promise.all(middlewares.map(mw => new Promise((resolve, reject) => {
    // hand the middleware a next-style callback: an argument means failure
    mw(req, res, err => (err ? reject(err) : resolve()));
  })))
    .then(() => next())
    .catch(next); // forward the first error to Express's error handling
};
The important parts of this function are:
returning a function that takes the req, res, and next arguments, satisfying the requirement of Express middleware, that
maps each middleware to a Promise that executes the middleware with a next-style callback, resolving on success and rejecting if the middleware reports an error
gates all these Promises behind Promise.all, and finally
calls the externally provided next argument when that gated Promise resolves, or passes the first error along to Express's error handling if any middleware failed
Then, if you decided that you wanted to have httpContext.middleware and metricsMiddleware run in parallel, but the others in series, you'd use it like:
app.use(parallel(
  httpContext.middleware,
  metricsMiddleware
));
app.use(rtEndMiddleware);
app.use(trackingContextMiddleware);
app.use(healthRoutes());
the execution of which could be visualised as:
> httpContext.middleware
< httpContext.middleware
> metricsMiddleware
< metricsMiddleware
> rtEndMiddleware
> trackingContextMiddleware
/healthRoutes
< trackingContextMiddleware
< rtEndMiddleware
As for the remaining two app.use() statements: given that each subsequent app.use() adds an item onto the middleware stack, you basically don't have to do anything else; they will run after all the middleware registered before them, parallelised or not. If I were going down this path, I would keep the middleware explicitly separate (whether used in parallel or in series) from the route implementations themselves, to make it clear where the specific application logic begins.
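To make that concrete, a sketch of the final arrangement, with the two route registrations from the question left exactly as they are after the (partly parallelised) middleware stack:
app.use(parallel(
  httpContext.middleware,
  metricsMiddleware
));
app.use(rtEndMiddleware);
app.use(trackingContextMiddleware);
app.use(healthRoutes());
// the routed handlers need no changes; they simply run after
// whatever middleware stack precedes them
app.use("/" + APP_FOLDER + "/api/products", productsRoutes);
app.use("/tcalc/" + APP_FOLDER + "/api/products", productsRoutes);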

Related

Node ExpressJS: middleware URL route response vs route destination

router.use((req, res, next) => {
  if (req.originalUrl == '/api/test') {
    // does stuff
    res.send(result);
  }
  next();
})
vs
router.get('/api/test', (req, res) => {
  // does stuff
  res.send(result)
})
I'm rather unfamiliar with the whole HTTP web application safety conduct so I need to ask, is there any vulnerability or downside if I use the first approach to resolve some of the route destination?
There are two main differences between router.use() and router.get() and one is somewhat relevant here:
router.use() matches ANY HTTP verb such as GET, POST, PUT, OPTIONS, PATCH, etc., whereas router.get() only matches GET requests, router.post() only matches POST requests, etc.
router.use() uses a "loose" match algorithm where the requested route only has to start with the path designation on the route, not match it entirely.
For the first point, your middleware handler is doing res.send(response) for all http verbs that have a request path of /api/test. That is probably not what you want and is not how you should write the code. You should have your code respond only to the http verbs for that path that you actually intend to support and do something useful with. Other verbs should probably respond with a 4xx status code (which would be the default in Express if you don't have a handler for them).
For the second point, your middleware handler is generic (no path set) and you are already checking for the entire path so that point is not relevant.
With one small addition, I'd say that your middleware approach is fine. I'd add a check for the req.method:
router.use((req, res, next) => {
  if (req.method === "GET" && req.originalUrl == '/api/test') {
    // does stuff
    res.send(result);
    // return so the rest of the route handler is not executed
    // and we don't call next()
    return;
  }
  next();
});
All that said, you can also probably solve your problem in a bit more generic way. For example, if you put a specific route definition in front of this middleware, then it will automatically be exempted from the middleware as it will get processed before the middleware gets to run. Routes in Express are matched and run in the order they are declared.
router.get('/api/test', (req, res) => {
  // does stuff
  res.send(result)
});
router.use((req, res, next) => {
  // do some middleware prep work on all requests that get here
  // ...
  next();
});
// other route definitions here that can use the results of the prior middleware
For example, this is very common if you have some routes that need authentication and others that do not. You put the non-authenticated routes first, then place the authentication middleware, then define the routes that want to be behind the authentication check.
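A minimal sketch of that ordering (handleLogin, requireAuth and handleProfile are hypothetical names):
router.post('/login', handleLogin);     // public: declared before the middleware
router.use(requireAuth);                // everything declared below requires auth
router.get('/profile', handleProfile);  // protected: only reached after requireAuth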
There is no "exact" vulnerability, but many, many drawbacks. The major difference here is that the first piece of code is what we call "a global handler". It is executed on each and every request. The second piece is just a specific route.
When you make a request, Express starts to evaluate the pipeline of things it needs to do in order to return a response. First it executes all global handlers (like the first example). Then it starts matching the route against its list of handlers, and if it finds a match, it executes that function.
What you risk with the first approach is breaking the chain and not executing all global/local handlers properly. Some of those might do specific things (like protecting you from certain types of attacks). With the second approach, you define far more than just a global handler: you define the endpoint as well as the method (in your case, a GET request).
When the matcher finds your route, it breaks the chain automatically for you (in the simplest scenario).
Also note that in the first example, you'd end up with a single point containing tons of if-else statements to figure out what to do with each request. That basically defeats the purpose of Express altogether...
Express is made in a way that supports multiple "middlewares". If you want to run specific logic based on some condition, here's what you can do:
router.get('/users',
  function handler1(req, res, next) {
    if (req.something) {
      next(); // we skip the logic here, but we go to handler2
    } else {
      // do your magic
      res.send(response);
    }
  },
  function handler2(req, res, next) {
    // you skipped the action from handler1
    res.send(anotherResponse); // you MUST send a response in exactly one handler
  }
);

Any reason not to pass request / response as a parameter?

In express I have a handler for a route ex:
router.get(`${api}/path/:params/entrypoint`, routeHandler);
In this example 'routeHandler' function has a lot of logic doing various things. I'd like to break 'routeHandler' into smaller methods to ease readability and testability. So instead of:
routeHandler(req, res) {
  // many lines of code
}
We could have:
routeHandler(req, res) {
  helperOne(req, res);
  helperTwo(req, res);
}
helperOne(req, res) {
  // do stuff
}
helperTwo(req, res) {
  // do stuff
}
I am being told not to do this by a coworker who is pretty senior, but I do not understand why. Does anyone know of any issues that can arise by passing the response or request objects into helpers? I can not think of any and google isn't revealing any clear answer.
Thanks!
Does anyone know of any issues that can arise by passing the response or request objects into helpers?
Yes, you may run into some problems when passing those parameters around, especially res. For example, you may call res.send multiple times (once in each function), which will raise an exception.
Scenario
A more concrete example is this
const routeHandler = (req, res) => {
  helperOne(req, res);
  helperTwo(req, res);
};
Based on some conditions, I want to stop and return an error from helperOne and not execute any code from helperTwo. My definitions of these functions are like this:
const helperOne = (req, res) => {
  const dataPoint = req.body.dataPoint; // a number, for example
  if (dataPoint > 10) {
    return res.send("This is not valid. Stopping here...");
  } else {
    console.log("All good! Continue..");
  }
};
const helperTwo = (req, res) => {
  res.send("Response from helperTwo");
};
Then let's say I have req.body.dataPoint = 20, and I'm now expecting my routeHandler to stop after the return res.send in the first block of my if statement in helperOne.
This will not work as expected though, because the return only concerns helperOne, the function doing the returning. In other words, it won't propagate to routeHandler.
In the end an exception will be raised, because routeHandler will call helperTwo and try to send a response again.
Solution
Don't pass req or res along. Just pass the data you need and handle the response in your main handler.
An even better alternative is to use Express middlewares. Since you have multiple "sequential" handlers, you can chain multiple middlewares, which is closer to the standard Express.js way.
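For example, a sketch of the same scenario rewritten as chained middleware (the route path is a placeholder), where each step either ends the response or calls next():
const validateDataPoint = (req, res, next) => {
  if (req.body.dataPoint > 10) {
    // end the chain here; respond never runs
    return res.send("This is not valid. Stopping here...");
  }
  next(); // all good, continue to the next handler
};
const respond = (req, res) => {
  res.send("Response from helperTwo");
};
router.post('/some-route', validateDataPoint, respond);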
One reason to avoid doing this is that you're tightly coupling your helper functions to routeHandler, and encouraging complexity in the helpers. If you break up your helper functions so they only have a single responsibility, it's likely you'll only need to pass in a subset of the request.
Why are you passing in res? Are you sending a response from inside the helpers? Without knowing the details of your routeHandler implementation, I would see if you could handle the logic in the helpers, but have them each return a value and keep the response-sending in the main routeHandler function. Here's a simple example:
handleRoute('/users/:userID', (req, res) => {
  const { userID } = req.params;
  const idIsValid = validateUserID(userID);
  if (!idIsValid) {
    return res.status(400).send('Invalid user ID!');
  }
  ...
});

How to be sure two clients are not requesting at the same time corrupting state

I am new to Node JS and I am building a small app that relies on the filesystem.
Let's say that the goal of my app is to fill a file like this:
1
2
3
4
..
I want a new line to be written to the file at each request, in the right order.
Can I achieve that?
I know I can't leave my question here without any code, so here it is. I am using an Express JS server:
(We imagine that the file contains only 1 at the first code launch)
import express from 'express'
import fs from 'fs'

let app = express();

app.all('*', function (req, res, next) {
  // At every request, I want to write my file
  writeFile()
  next()
})

app.get('/', function (req, res) {
  res.send('Hello World')
})

app.listen(3000, function () {
  console.log('listening on port 3000')
})

function writeFile() {
  // I get the file
  let content = fs.readFileSync('myfile.txt', 'utf-8')
  // I get an array of the numbers
  let numbers = content.split('\n').map(item => {
    return parseInt(item)
  })
  // I compute the new number and push it to the list
  let new_number = numbers[numbers.length - 1] + 1
  numbers.push(new_number)
  // I write back the file
  fs.writeFileSync('myfile.txt', numbers.join('\n'))
}
My guess was that because the file operations are synchronous, nothing else could happen at the same moment, but I was really not sure...
If I am unclear, please tell me in the comments
If I understood you correctly, what you are scared of is a race condition: if two clients reach the HTTP server at the same time, the file could be saved with the same contents, with the number incremented only once instead of twice.
The simple fix is to make sure the shared resource is only accessed or modified by one request at a time. In this case, using synchronous methods fixes your problem: while they are executing, the whole Node process is blocked and will not do anything else.
If you replace the synchronous methods with their asynchronous counterparts without any other concurrency control measures, then your code is definitely vulnerable to race conditions or corrupted state.
Now, if this is the only thing your application is doing, it's probably best to keep it this way, as it's very simple. But let's say you want to add other functionality; in that case you probably want to avoid any synchronous methods, as they block the process and won't let you have any concurrency.
A simple way to add concurrency control is to keep a counter of pending writes. Each request bumps the counter; if it is the only pending write (counter === 1), the write starts immediately, otherwise it just stays queued. Once a write finishes, we decrement the counter and, if more writes are pending, run the next one:
app.all('*', function (req, res, next) {
  // At every request, I want to write my file
  writeFile();
  next();
});

let counter = 0;

function writeFile() {
  counter++;
  // only kick off a write if none is already in flight
  if (counter === 1) {
    work(function onWriteFileDone() {
      counter--;
      if (counter > 0) {
        // drain the queued requests one at a time
        work(onWriteFileDone);
      }
    });
  }

  function work(callback) {
    // I get the file
    fs.readFile('myfile.txt', 'utf-8', function (err, content) {
      // ignore the error because life is too short on stackoverflow questions...
      // I get an array of the numbers
      let numbers = content.split('\n').map(item => parseInt(item, 10));
      // I compute the new number and push it to the list
      let new_number = numbers[numbers.length - 1] + 1;
      numbers.push(new_number);
      // I write back the file
      fs.writeFile('myfile.txt', numbers.join('\n'), callback);
    });
  }
}
Of course this function doesn't take any arguments; if you want to add some, you have to use a queue instead of the counter, storing the arguments in the queue.
That said, don't write your own concurrency mechanisms. There are plenty in the Node ecosystem. For example you can use the async module, which provides a queue.
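A minimal sketch of that approach, assuming the async module (npm install async): a queue with a concurrency of 1 serialises all writes, and every request just pushes a task onto it.
const async = require('async');

// worker: performs one read-increment-write cycle, then calls done
const writeQueue = async.queue(function (task, done) {
  fs.readFile('myfile.txt', 'utf-8', function (err, content) {
    if (err) return done(err);
    let numbers = content.split('\n').map(item => parseInt(item, 10));
    numbers.push(numbers[numbers.length - 1] + 1);
    fs.writeFile('myfile.txt', numbers.join('\n'), done);
  });
}, 1); // concurrency 1: one write at a time, in arrival order

app.all('*', function (req, res, next) {
  writeQueue.push({}); // each request queues exactly one write
  next();
});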
Note that as long as you only have one process, you don't have to worry about multiple threads: in Node.js, one process has only one thread of execution at a time. If multiple processes write to the file, things get more complicated, but let's keep that for another question if it's not already covered. Operating systems provide a few different ways to handle this; you could also use your own lock files, a dedicated process that owns the file, or a message queue process.

Best way to pass route params between callbacks in expressjs

I've tried to search for a similar problem on here but surprisingly couldn't find one posted already.
I use expressjs v4 framework and I'm constructing my routes like this:
'use strict';
let express = require('express');
let router = express.Router();
let users = require('./modules/users');
router.post('/',users.add);
router.put('/edit/:id',users.edit);
As you can see above, I'm requiring let users = require('./modules/users')
Now the users module looks (let's say) like this:
'use strict';
let usersDbModule = require('...');
let users = {
  'add': (req, res, next) => {
    let callback = (err, record) => {
      // ...do something
      users.function1(record)
    }
    usersDbModule.save(req, callback);
  },
  'function1': (record) => {
    users.function2()
  },
  'function2': () => {
    // ...do something with the next() function
  }
}
You can see that the router from the first code block uses the module's add function. The add function is a standard Express middleware function, but now things are getting more complicated.
As you can see, add has next as one of its params. I'm doing some complex callback calls from different functions, and let's say that in the end I want to call next in function2.
My question is: what is the best way of passing the req, res and next params between different callback functions within the same module?
I come up with 3 different methods of doing it:
Method 1:
Pass req, res or next around as necessary to all the functions in the chain; in this case I would have to pass next to callback, then to function1, and then from function1 to function2.
Not the best way in my opinion: difficult to maintain, read, and probably test as well.
Method 2:
Wrap function1 and function2 with closures in add, passing all the necessary params. In this particular case I would have to wrap only function2 with a closure passing next, so it would look something like this:
'add': (req, res, next) => {
  users.function2(next);
  // ....rest of the code of the function add
}
And then function2 itself:
'function2': (next) => {
  return () => {
    // ...now I have access to next here
    // without a need to pass it to each and every
    // function in the chain
  }
}
Method 3:
Append all the necessary functions/variables to res.locals and pass only the res object around.
It has exactly the same problem as Method 1, so I would personally be in favour of Method 2, but I'm not sure whether it makes the code less readable; maybe there are other problems with it, as I haven't tested it in production nor in a development environment with the team.
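A rough sketch of what I mean by Method 3 (untested; callback as in the add example above):
'add': (req, res, next) => {
  res.locals.next = next; // stash what later functions will need on res.locals
  usersDbModule.save(req, callback);
},
'function2': (res) => {
  res.locals.next(); // ...and pull it back out later
}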
I would really like to hear what you guys are using and how it plays out in your projects/teams. Any preferences, best practices, or patterns? Please share; I really want to know the best way.
Maybe there is an even better way of doing it?
All feedback greatly appreciated!
Real life example:
Example usage for function1 & function2 and possibly more...
Let's say we have an adapter that fetches data from an external API, then needs to save the data into a database and return a response. Let's also assume that the data returned from the API expires after 5s. If the client hits the route within that 5s span, it gets the data from the database; if the time between calls was longer, the app repeats the operation, calling the API again.
This would of course be more complicated than just function1 and function2. It would require a lot of callback functions, both from the adapter and from the database: separate functions for fetching data from the database and the adapter, for saving data into the database, and eventually for deleting data from it; that gives at least 4 callback functions already.
I think that mixing Express and app logic is not a good idea.
Here is the approach I use in my projects:
// middlewares/auth.js
// Example middleware
exports.isAdmin = function (req, res, next) {
  if (smth-admin-check)
    next();
  else
    next(new Error(403));
}

// routes/index.js
// Includes only modules from /routes
let user = require('./user');
let auth = require('../middlewares/auth');
...
app.get('/user/:id(\\d+)', user.get);
app.post('/user', auth.isAdmin, user.post); // only an admin can add a user

// routes/user.js
// Calls model methods and renders/sends data to the browser
// Doesn't know about the db
let User = require('/models/user');
...
exports.get = function (req, res, next) {
  let id = req.params.id;
  // I cache most data in memory to avoid callback hell
  // But in the common case the code looks like this:
  User.get(id, function (err, u) {
    if (!u)
      return next(new Error('Bad id'));
    ... render page or send json ...
  });
}
...
exports.post = function (req, res, next) { ... }

// models/user.js
// Encapsulates user logic
// Doesn't use any express features
let db = require('my-db');
...
class User {
  get(id, callback) { ... }
  add(data, callback) { ... } // returns an Error or the new user
  ...
}

Is continuation passing style any different to pipes?

I've been learning about continuation passing style, particularly the asynchronous version as implemented in javascript, where a function takes another function as a final argument and creates an asychronous call to it, passing the return value to this second function.
However, I can't quite see how continuation-passing does anything more than recreate pipes (as in unix commandline pipes) or streams:
replace('somestring','somepattern', filter(str, console.log));
vs
echo 'somestring' | replace 'somepattern' | filter | console.log
Except that the piping is much, much cleaner. With piping, it seems obvious that the data is passed on, and simultaneously execution is passed to the receiving program. In fact with piping, I expect the stream of data to be able to continue to pass down the pipe, whereas in CPS I expect a serial process.
It is imaginable, perhaps, that CPS could be extended to continuous piping if a comms object and update method was passed along with the data, rather than a complete handover and return.
Am I missing something? Is CPS different (better?) in some important way?
To be clear, I mean continuation-passing, where one function passes execution to another, not just plain callbacks. CPS appears to imply passing the return value of a function to another function, and then quitting.
UNIX pipes vs async javascript
There is a big fundamental difference between the way unix pipes behave vs the async CPS code you link to.
Mainly, the pipe blocks execution until the entire chain has completed, whereas your async CPS example returns right after the first async call is made and only executes your callback when it has completed (when the timeout wait is over, in your example).
Take a look at this example. I will use the Fetch API and Promises to demonstrate async behavior instead of setTimeout to make it more realistic. Imagine that the first function f1() is responsible for calling some webservice and parsing the result as a json. This is "piped" into f2() that processes the result.
CPS style:
function f2(json) {
  // do some parsing
}
function f1(param, next) {
  return fetch(param).then(response => response.json()).then(json => next(json));
}
// you call it like this:
f1("https://service.url", f2);
You can write something that syntactically looks like a pipe if you move the call to f2 out of f1, but that will do exactly the same as above:
function f1(param) {
  return fetch(param).then(response => response.json());
}
// you call it like this:
f1("https://service.url").then(f2);
But this still will not block. You cannot do this task using blocking mechanisms in JavaScript; there is simply no mechanism to block on a Promise. (Well, in this case you could use a synchronous XMLHttpRequest, but that's not the point here.)
CPS vs piping
The difference between the two approaches above is who has the control to decide whether to call the next step, and with exactly what parameters: the caller (the latter example) or the called function (CPS).
A good example where CPS comes in very handy is middleware. Think about a caching middleware in a processing pipeline, for example. Simplified example:
function cachingMiddleware(request, next) {
  if (someCache.containsKey(request.url)) {
    return someCache[request.url];
  }
  return next(request);
}
The middleware executes some logic, checks if the cache is still valid:
If it is not, then next is called, which will then proceed with the processing pipeline.
If it is valid then the cached value is returned, skipping the next execution.
Continuation Passing Style at application level
Instead of comparing at an expression/function-block level, factoring Continuation Passing Style at an application level can provide flow control advantages through its "continuation" function (a.k.a. callback function). Let's take Express.js for example:
Each express middleware takes a rather similar CPS function signature:
const middleware = (req, res, next) => {
  /* middleware's logic */
  next();
}
const customErrorHandler = (error, req, res, next) => {
  /* custom error handling logic */
};
next is express's native callback function.
Correction: The next() function is not a part of the Node.js or Express API, but is the third argument that is passed to the middleware function. The next() function could be named anything, but by convention it is always named “next”
req and res are naming conventions for HTTP request and HTTP response respectively.
A route handler in Express.js is made up of one or more middleware functions. Express passes each of them the req and res objects, with any changes made by preceding middleware, plus an identical next callback.
app.get('/get', middleware1, middleware2, /*...*/ , middlewareN, customErrorHandler)
The next callback function serves:
As a middleware's continuation:
Calling next() passes the execution flow to the next middleware function. In this case it fulfils its role as a continuation.
Also as a route interceptor:
Calling next('Custom error message') bypasses all subsequent middlewares and passes the execution control to customErrorHandler for error handling. This makes 'cancellation' possible in the middle of the route!
Calling next('route') bypasses the remaining middlewares in the current route and passes control to the next matching route, e.g. /get/part (see the sketch below).
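A minimal sketch of that interception behaviour (the handlers are hypothetical; note that next('route') only works in handlers attached via app.METHOD() or router.METHOD()):
app.get('/user/:id', (req, res, next) => {
  if (req.params.id === '0') return next('route'); // bypass the rest of this route
  next(); // otherwise continue within this route's chain
}, (req, res) => {
  res.send('regular user');
});
// control lands here when next('route') was called above
app.get('/user/:id', (req, res) => {
  res.send('special user');
});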
Imitating Pipe in JS
There is a TC39 proposal for a pipe operator, but until it is accepted we'll have to imitate pipe's behaviour manually. Nesting CPS functions can potentially lead to callback hell, so here is my attempt at cleaner code:
Assume we want to compute the sentence 'The fox jumps over the moon' by replacing parts of a starter string (e.g. props):
const props = " The [ANIMAL] [ACTION] over the [OBJECT] "
Every function to replace different parts of the string are sequenced with an array
const insertFox = s => s.replace(/\[ANIMAL\]/g, 'fox')
const insertJump = s => s.replace(/\[ACTION\]/g, 'jumps')
const insertMoon = s => s.replace(/\[OBJECT\]/g, 'moon')
const trim = s => s.trim()
const modifiers = [insertFox, insertJump, insertMoon, trim]
We can achieve a synchronous, non-streaming, pipe behaviour with reduce.
const pipeJS = (chain, callBack) => seed =>
  callBack(chain.reduce((acc, next) => next(acc), seed))

const callback = o => console.log(o)
pipeJS(modifiers, callback)(props) //-> 'The fox jumps over the moon'
And here is an asynchronous version of pipeJS; note that each accumulated value has to be awaited before the next function is applied, otherwise a step would receive a pending Promise instead of a value:
const pipeJSAsync = chain => async seed =>
  chain.reduce(async (acc, next) => next(await acc), seed)

const callbackAsync = o => console.log(o)
pipeJSAsync(modifiers)(props).then(callbackAsync) //-> 'The fox jumps over the moon'
Hope this helps!
