Is continuation passing style any different to pipes? - javascript

I've been learning about continuation passing style, particularly the asynchronous version as implemented in JavaScript, where a function takes another function as a final argument, creates an asynchronous call to it, and passes the return value to this second function.
However, I can't quite see how continuation-passing does anything more than recreate pipes (as in unix commandline pipes) or streams:
replace('somestring','somepattern', filter(str, console.log));
vs
echo 'somestring' | replace 'somepattern' | filter | console.log
Except that the piping is much, much cleaner. With piping, it seems obvious that the data is passed on, and simultaneously execution is passed to the receiving program. In fact with piping, I expect the stream of data to be able to continue to pass down the pipe, whereas in CPS I expect a serial process.
It is imaginable, perhaps, that CPS could be extended to continuous piping if a comms object and update method were passed along with the data, rather than a complete handover and return.
Am I missing something? Is CPS different (better?) in some important way?
To be clear, I mean continuation-passing, where one function passes execution to another, not just plain callbacks. CPS appears to imply passing the return value of a function to another function, and then quitting.

UNIX pipes vs async javascript
There is a big fundamental difference between the way unix pipes behave and the async CPS code you link to.
Mainly, the pipe blocks execution until the entire chain has completed, whereas your async CPS example returns right after the first async call is made, and only executes your callback when it has completed. (When the timeout wait is completed, in your example.)
Take a look at this example. I will use the Fetch API and Promises to demonstrate async behavior instead of setTimeout, to make it more realistic. Imagine that the first function, f1(), is responsible for calling some webservice and parsing the result as JSON. This is "piped" into f2(), which processes the result.
CPS style:
function f2(json) {
  // do some parsing
}

function f1(param, next) {
  return fetch(param)
    .then(response => response.json())
    .then(json => next(json));
}
// you call it like this:
f1("https://service.url", f2);
You can write something that syntactically looks like a pipe if you move the call to f2 out of f1, but that will do exactly the same as above:
function f1(param) {
  return fetch(param).then(response => response.json());
}
// you call it like this:
f1("https://service.url").then(f2);
But this still will not block. You cannot do this task using blocking mechanisms in JavaScript; there is simply no way to block on a Promise. (Well, in this case you could use a synchronous XMLHttpRequest, but that's not the point here.)
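To make the non-blocking behaviour concrete, here is a small sketch (the log line is my own addition):

f1("https://service.url").then(f2); // schedules f2 to run later
console.log("logged immediately, before f2 ever runs");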
CPS vs piping
The difference between the two examples above is who has control over whether to call the next step, and with exactly which parameters: the caller (the later example) or the called function (CPS).
A good example of where CPS comes in very handy is middleware. Think of a caching middleware in a processing pipeline, for example. Simplified example:
// simplified caching middleware; assume someCache is a Map
function cachingMiddleware(request, next) {
  if (someCache.has(request.url)) {
    return someCache.get(request.url);
  }
  return next(request);
}
The middleware executes some logic and checks whether the cache is still valid:
If it is not, then next is called, which proceeds with the processing pipeline.
If it is valid, the cached value is returned, skipping the next execution.
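To make the handover of control concrete, here is a minimal sketch of how such middleware could be chained; the chain helper and the handlers are my own illustration, not part of any framework:

// compose middleware right-to-left into one CPS pipeline (illustrative sketch)
function chain(middlewares, terminal) {
  return middlewares.reduceRight(
    (next, mw) => request => mw(request, next),
    terminal
  );
}

const someCache = new Map();
const handler = chain(
  [cachingMiddleware],
  request => `fetched ${request.url}` // final step, runs only if nothing short-circuits
);

someCache.set("/cached", "cached value");
console.log(handler({ url: "/cached" })); // "cached value" (next was skipped)
console.log(handler({ url: "/fresh" }));  // "fetched /fresh" (next was called)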

Continuation Passing Style at application level
Instead of comparing at an expression/function-block level, factoring continuation passing style at an application level can provide flow-control advantages through its "continuation" function (a.k.a. callback function). Let's take Express.js for example:
Each express middleware takes a rather similar CPS function signature:
const middleware = (req, res, next) => {
  /* middleware's logic */
  next();
}

const customErrorHandler = (error, req, res, next) => {
  /* custom error handling logic */
};
next is Express's callback function.
Correction: the next() function is not a part of the Node.js or Express API, but is the third argument that is passed to the middleware function. The next() function could be named anything, but by convention it is always named "next".
req and res are naming conventions for HTTP request and HTTP response respectively.
A route handler in Express.js is made up of one or more middleware functions. Express passes each of them the req and res objects, with any changes made by the preceding middleware, along with the same next callback.
app.get('/get', middleware1, middleware2, /*...*/ , middlewareN, customErrorHandler)
The next callback function serves:
As a middleware's continuation:
Calling next() passes the execution flow to the next middleware function. In this case it fulfils its role as a continuation.
Also as a route interceptor:
Calling next('Custom error message') bypasses all subsequent middlewares and passes the execution control to customErrorHandler for error handling. This makes 'cancellation' possible in the middle of the route!
Calling next('route') bypasses subsequent middlewares and passes control to the next matching route, e.g. /get/part.
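Putting these pieces together, here is a minimal runnable sketch of such a route (the middleware bodies are hypothetical, invented for illustration):

const express = require('express');
const app = express();

const checkAuth = (req, res, next) => {
  if (!req.headers.authorization) {
    return next('Missing credentials'); // jumps straight to the error handler
  }
  next(); // continues to the next middleware
};

const sendGreeting = (req, res) => res.send('hello');

// Express recognises a 4-argument function as an error handler
const customErrorHandler = (err, req, res, next) => {
  res.status(400).send(err);
};

app.get('/get', checkAuth, sendGreeting, customErrorHandler);
app.listen(3000);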
Imitating Pipe in JS
There is a TC39 proposal for a pipe operator, but until it is accepted we'll have to imitate pipe's behaviour manually. Nesting CPS functions can potentially lead to callback hell, so here is my attempt at cleaner code:
Assume that we want to compute the sentence 'The fox jumps over the moon' by replacing parts of a starter string (props below):
const props = " The [ANIMAL] [ACTION] over the [OBJECT] "
The functions that each replace a different part of the string are sequenced in an array:
const insertFox = s => s.replace(/\[ANIMAL\]/g, 'fox')
const insertJump = s => s.replace(/\[ACTION\]/g, 'jumps')
const insertMoon = s => s.replace(/\[OBJECT\]/g, 'moon')
const trim = s => s.trim()
const modifiers = [insertFox, insertJump, insertMoon, trim]
We can achieve a synchronous, non-streaming, pipe behaviour with reduce.
const pipeJS = (chain, callBack) => seed =>
  callBack(chain.reduce((acc, next) => next(acc), seed))
const callback = o => console.log(o)
pipeJS(modifiers, callback)(props) //-> 'The fox jumps over the moon'
And here is an asynchronous version of pipeJS. Note that each step's result has to be awaited before it is fed to the next function, so the accumulator is threaded through as a Promise:

const pipeJSAsync = chain => seed =>
  chain.reduce((acc, next) => acc.then(next), Promise.resolve(seed))
const callbackAsync = o => console.log(o)
pipeJSAsync(modifiers)(props).then(callbackAsync) //-> 'The fox jumps over the moon'
Hope this helps!

Related

subscribe().add() method called with parameter which is not of type Subscription

I am new to Angular (just learning) and I am trying to understand the following code. In this code (which I simplified from another app I saw on GitHub), the result of the subscribe method has add called on it with the parameter resolve (from the Promise), so I have a few questions:
Doesn't the parameter passed to the add method have to be of type Subscription?
What does the framework do with the passed resolve parameter? I thought the parameter had to be of type Subscription, and that the framework would call unsubscribe() on it.
const numbers: Observable<number> = interval(5000);
const takeFourNumbers = numbers.pipe(take(4));

const promise$ = new Promise<void>(resolve => {
  // attempt to refresh token on app start up to auto authenticate
  takeFourNumbers.subscribe()
    .add(resolve);
}).then((result) => {
  console.log("My result is ", result);
});
That's a bit of an odd piece of code.
Every Subscription has an .add() method, which is used to trigger something else on teardown, that is, when the subscription ends (either because the stream errors, completes, or you call .unsubscribe()).
The parameter that .add() takes can be a function (in which case it just gets called on teardown) or another subscription (in which case .unsubscribe() is called on it).
It's not really used much unless you're building something low-level (such as a library).
In this case, takeFourNumbers is a stream that will emit 4 numbers in succession, 5 seconds between each emission, and then complete. promise$ is a Promise that resolves with void when takeFourNumbers ends after 20 seconds, because the teardown will call the function passed to .add(), which is resolve, and the teardown passes it no argument as far as I know.
Then .then() is called on that promise to log the result when that happens, but I expect the result to be undefined.
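For illustration, the teardown behaviour of .add() can be seen in isolation with a shorter sketch (the timings here are my own):

import { interval } from 'rxjs';
import { take } from 'rxjs/operators';

// .add() registers teardown logic that runs when the subscription ends
const sub = interval(1000).pipe(take(2)).subscribe(n => console.log(n));
sub.add(() => console.log('torn down')); // logs 0, then 1, then 'torn down'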

How to control C# Task (async/await in same way as javascript Promise)?

I'm a JS developer who jumped to C#. I'm learning async/await/Task in C# and can't understand how to control a C# Task.
In JavaScript I can save a Promise's resolve handler to call it later (and resolve the promise), for example to organize communication between some "badly connected" app parts (the request doesn't return a result; instead, the result is sent separately and fires some "replyEvent"), and do something like this:
// javascript
const handlers = {}

// event firing on reply
const replyEventListener = (requestId, result) => {
  if (handlers[requestId]) {
    handlers[requestId](result)
    delete handlers[requestId]
  }
}

const requestFunction = (id, params) => new Promise(resolve => {
  // handler is saved to use later from "outside"
  handlers[id] = resolve;
  // do required request
  makeRequest(params)
})
// .. somewhere in code request becomes simple awaitable
const resultOfRequest = await requestFunction(someUniqueId)
How can I do same with C# Task?
I think the philosophy behind Promise (in JS) and Task (in C#) is different; we don't implement them exactly the same way. Read the documentation on MSDN:
Task Class - Remarks
Task<TResult> Class
You must empty your cup to be able to fill it again.
A good MSDN article is the following, which shows examples of how Tasks can be used like Promises, in the sense that you can start them and then await them later:
Asynchronous programming with async and await
If you want to await a Task in C# like you would a Promise in JS, you would do the following:
Create a method or function that returns a Task. Usually this will be a Task of some class object. In the article above, it references creating breakfast as an analogy, so you can define FryEggsAsync as a method that takes in an int number of how many eggs to fry and returns a Task<Egg>. The C# convention is to end any function or method name with Async to indicate that a Task is being returned.
Create your Task, which is similar to a Promise in JS, so you could do var eggsTask = FryEggsAsync(2), which would store a Task<Egg> in eggsTask, that you can then await or pass to other functions as needed.
To await your Task to resolve it and get the result, you would simply do await eggsTask.
You can also use Task.WaitAll to wait on multiple Tasks at once, similar to Promise.all in JS, by passing in an array of Tasks as an argument to that method call (note that Task.WaitAll blocks synchronously; the awaitable counterpart is Task.WhenAll). See the Task.WaitAll documentation for some examples of this. The difference here, though, is that the results are stored separately in each task passed to Task.WaitAll, instead of being aggregated into a results array like in JS with Promise.all, but if you need to wait on multiple Tasks then this will be an easier way to do so.
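For comparison, this is the JS side of that difference; Promise.all aggregates the results into one array (fryEggsAsync and fryBaconAsync are hypothetical helpers, following the article's breakfast analogy):

async function makeBreakfast() {
  // Promise.all resolves to a single array of results, in input order
  const [eggs, bacon] = await Promise.all([
    fryEggsAsync(2),
    fryBaconAsync(3),
  ]);
  return { eggs, bacon };
}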

Should I use a React Component if it always renders null, or just use a function?

I'm making a UI and came across something that made me wonder. I made a general re-usable function that fetches data and returns it in a callback, which is given to the function by whatever is calling that function. But that's all it does, it fetches data and passes it onward. At the moment the function can take up to ~15 different parameters/props.
I made it a React Component at first, due to the feasibility of calling the function like so:
<SomeFunction
  param1={some_param_1}
  param2={some_param_2}
  ...
/>
This way I can easily add and omit parameters at will. However, the SomeFunction always returns null, as its main point is returning fetched data in a callback. Should this Component be reverted to a simple function without any React in it? If so, what is the best way to approach the parameters?
My mind can quickly come up with two alternatives, the first one being positional arguments:
function someFunction(param1, param2, ... param15)
But this seems like a stretch, as I need to give many nulls or such if I want to pass something as the 15th parameter.
Another way that came to mind is to use an object:
function someFunction(options)
and then access parameters like options.param1 and options.param2.
Is the Component approach or the function approach better in this type of case? And what is the best way to handle a gazillion optional parameters to a function in JS? I'm not a total noob, but it feels like there are so many ways to approach things and so many best practices in the JS world, not to mention the ever-changing nature of the language and its derivatives.
Two suggestions:
Make it a normal function that accepts its parameters as an object, probably using destructuring. (A component receives its props as an object, so that's basically the same thing.)
Return a promise rather than passing in a callback. Promises provide standard semantics that can be consumed with await in an async function and/or combined with the various promise combinators (Promise.all, Promise.race, etc.).
So for instance, if your function currently uses something that provides a promise (like fetch):
async function fetchTheInformation({param1, param2, param3 = "default for param3"}) {
  const response = await fetch(/*...*/);
  if (!response.ok) {
    throw new Error(`HTTP error ${response.status}`);
  }
  return response.appropriateMethodHere(); // .text(), .json(), .arrayBuffer(), etc.
}
if it doesn't use something that provides a promise:
function fetchTheInformation({param1, param2, param3 = "default for param3"}) {
  return new Promise((resolve, reject) => {
    // ...start the operation and call `resolve` if it works, `reject` with an `Error` if it doesn't...
  });
}
In either case, the call to it can look like this in an async function:
const information = await fetchTheInformation({
  param1: "parameter 1",
  param2, // <=== If you happen to have `param2` in a variable/constant with the same name, no need to repeat it
});
(errors [rejections] will automatically propagate to the caller of the async function to be handled there)
or in a non-async function:
fetchTheInformation({
  param1: "parameter 1",
  param2, // <=== If you happen to have `param2` in a variable/constant with the same name, no need to repeat it, this is just like `param2: param2,`
})
.then(information => {
  // ...use the information...
})
.catch(error => { // Or return the promise chain to something else that will handle errors
  // ...handle/report error...
});
About the parameter list: I assume at least one parameter is required, but if they're all optional (have reasonable defaults), you can do the list like this:
function example({a = "default for a", b = "default for b", c = "default for c"} = {}) {
}
The expressions after = within the destructuring provide defaults for those destructured parameters. The = {} at the end makes the entire parameters object optional, so with the above you can do example() or example({}) or example({a: "something"}), etc.
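For instance, giving the function a body that logs its parameters shows those defaults in action:

function example({a = "default for a", b = "default for b", c = "default for c"} = {}) {
  console.log(a, b, c);
}
example();              // default for a default for b default for c
example({a: "custom"}); // custom default for b default for c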

How do I know when asynchronous JavaScript execution completes in Rhino

I have JavaScript code that calls some asynchronous API, and it works great. But I also need to call another API to report when script execution has completed. The issue is that Context.evaluateString(...) returns immediately, while the script code continues to execute because of its asynchronous nature. JS example:
f1(function (err, res) {
  f2(function (err, res) {
    f3(function (err, res) {
      handleResult(err, res);
      // ideally I need to know when handleResult(...) has completed execution,
      // but Rhino's Context.evaluateString(...) returns immediately
      // after f1() is called, while the script continues execution
    });
  });
});
Yes, I could add some method to the script to be called when all operations are done, and handle it on the Java side, but that would force me to call it every time; it's just a workaround. I need a more generic way that doesn't impose any rules on the script code.
Also, what if a customer forgets to call, say, sendResult() from the script? The app on the other side would wait for the result forever. So I need a bulletproof solution.
In iOS, using JavaScriptCore, I simply reacted when the top-level object I had added to the script engine was destroyed, but in Java this trick doesn't work: unlike Objective-C/Swift, Java is not reference-counted but garbage-collected, and you never know when an object will be deallocated.
I have no experience using Rhino, so take this answer with a grain of salt. However this answer might steer you in the right direction.
The documentation states:
evaluateString
...
Returns:
the result of evaluating the string
So I would create a Future that is returned by the JavaScript. Resolve the future after handleResult is executed. Then on the Java side, simply cast the result into the correct object, then wait for the value to be resolved.
// create an empty task
const future = new java.util.concurrent.FutureTask(function () {});

f1(function (err, res) {
  f2(function (err, res) {
    f3(function (err, res) {
      handleResult(err, res);
      // run the empty task, doing nothing more than resolving the future
      future.run();
    });
  });
});

// return future to evaluateString
future;
You can find more info about Java objects in JavaScript here.

How can I invoke a callback given to a yield in javascript v8 generator code?

I'm attempting to understand javascript generators in node.js v8, without using any third party libraries.
I want to try having a generator that invokes an asynchronous callback (something like a web request) and returns the value that is called back.
My code (using Node v0.11.12) looks like:
var callbackWrapper = function () {
  return function (callback) {
    callback(null, 5);
  };
};

var result = null;

var generator = (function* () {
  result = yield callbackWrapper();
})();

var nextIteration = generator.next();
At the end of this code, nextIteration.value is a function, and result is still null.
It was my understanding that if yield is given a method with a callback, it invokes that callback.
My goal is to somehow get the value, 5, that is called back.
What is the way to do this? Do I have to pass some function to nextIteration.value and invoke it?
Let's clarify some things.
It was my understanding that if yield is given a method with a callback, it invokes that callback.
yield does not invoke anything. The purpose of yield is, well, to yield... When you define a generator you define an entity which yields values. It has nothing to do with async code. We can, however, leverage an important property of generators to handle async code.
Generators, by their definition, yield values. Here's the important part: between one yield and the next, the generator is suspended. In other words, it waits until it is asked to yield the next value. The internal state of the generator is kept in memory (on the stack) until the generator is exhausted (no more values to yield).
Another important property of generators is that we can send values back, thus changing their internal state.
So if we can suspend generators (make them wait) and we can send values back, we can essentially make them wait for some async operation to complete and then send the result back.
What is the way to do this? Do I have to pass some function to nextIteration.value and invoke it?
Basically you need to wrap your code in a generator. Every time you start an async operation, you make the generator wait by using yield. When the async operation completes, you resume your generator by sending the result back with next(result).
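As a minimal sketch of that idea (assuming every yielded value is a thunk that takes a node-style callback):

function run(genFunc) {
  const gen = genFunc();
  (function step(value) {
    const ret = gen.next(value); // resume the generator, sending the previous result back in
    if (ret.done) return;
    // assume ret.value is a thunk expecting a node-style callback
    ret.value(function (err, result) {
      step(result);
    });
  })();
}

run(function* () {
  const five = yield function (cb) { cb(null, 5); };
  console.log(five); // 5
});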
I wrote an extensive post to explain this issue, which I'm sure will help you understand. Take a look: http://eyalarubas.com/javascript-generators-and-callbacks.html
Generators don't handle node-style callbacks on their own. Instead, the generator just yields the function that you wrapped inside the callbackWrapper thunk, as yield only passes a value out and then pauses execution at that point in time.
Generators weren't really designed for control flow, but you can build on top of them to create control flow libraries like co, suspend, etc.
Basically what these libraries do (I'm oversimplifying here), is take your generator function and recursively call it until it tells them that it has finished.
Each of these libraries handles the internally yielded values in different ways; for example, co turns everything it can handle into thunks internally, while suspend uses node-style callbacks for everything internally.
At each yield they check to see what was yielded to them a thunk, promise, generator, or whatever constructs that library handles, and abstracts the control out based on when they are completed.
You can build a structure around generators to handle asynchronous thunked functions, but it's not for the faint of heart. Basically it would go something like this (note: don't use this other than for playing around, as it's missing all the normal checks, error handling, etc.):
function controlFlow(genFunc) {
  // check to make sure we have a generator function, otherwise explode
  // return a function so that execution can be postponed
  return function () {
    runGen(genFunc());
  };

  function runGen(generator) {
    var ret = generator.next();
    // the generator is already finished, so we'll return the final value
    if (ret.done) {
      return ret.value;
    }
    // here we'll handle the yielded value no matter what type it is;
    // I'm being naive here for example's sake, don't do this
    if (typeof ret.value === 'function') {
      // oh look, we have a regular function (very faulty logic)
      // we wouldn't do this either, but yeah
      ret.value(function (err, result) {
        console.log(result);
      });
    }
    // if we got another item, like an array or object, that would mean parallelism or
    // serialization depending on what came back: turn the array/object/whatever into
    // callback mechanisms and track their completion.
    // we're just going to fake it here and handle the next call directly
    runGen(generator);
  }
}
function thunked(callback) {
  return function () {
    callback(null, 5);
  };
}

function regular(callback) {
  console.log('regular function call');
  callback(null, 'value');
}
controlFlow(function* () {
  yield thunked(function (err, result) {
    console.log(err);
    console.log(result);
  });
  yield regular;
  yield thunked(function (err, result) {
    console.log('Another Thunked');
  });
  yield regular(function (err, result) {
    console.log(err);
    console.log(result);
  });
})();
result won’t get assigned until you send a value back to the generator by calling next again with the value you want to assign to result. In your example that would look like this:
nextIteration.value(function (error, value) {
  generator.next(value);
});

When the callback fires, generator.next(value) resumes the generator and assigns 5 to result.
