How do I know when asynchronous JavaScript execution completes in Rhino - javascript

I have JavaScript code that calls some asynchronous API, and it works great. But I also need to call another API to report when script execution has completed. The issue is that Context.evaluateString(...) returns immediately, while the script code continues to execute because of its asynchronous nature. JS example:
f1(function (err, res) {
    f2(function (err, res) {
        f3(function (err, res) {
            handleResult(err, res);
            // ideally I need to know when handleResult(...) has completed execution,
            // but Rhino's Context.evaluateString(...) returns immediately
            // after f1() is called, while the script continues executing
        });
    });
});
Yes, I could add a method to the script that it calls when all operations are done, and handle that on the Java side, but this would force me to call it every time. That is just a workaround.
I need a more generic way that doesn't impose any rules on the script code.
Also, what if a customer forgets to call, say, sendResult() from the script? The app on the other side would wait for the result forever. So I need a bulletproof solution.
In iOS, using JavaScriptCore, I simply reacted when a top-level object I had added to the script engine was destroyed, but in Java this trick doesn't work: unlike Objective-C/Swift, Java uses garbage collection rather than reference counting, so you never know when an object will be deallocated.

I have no experience using Rhino, so take this answer with a grain of salt. However this answer might steer you in the right direction.
The documentation states:
evaluateString
...
Returns:
the result of evaluating the string
So I would create a Future that is returned by the JavaScript code. Resolve the future after handleResult has executed. Then, on the Java side, simply cast the result to the correct type and wait for the value to be resolved.
// create an empty task
const future = new java.util.concurrent.FutureTask(function () {});
f1(function (err, res) {
    f2(function (err, res) {
        f3(function (err, res) {
            handleResult(err, res);
            // run the empty task, doing nothing more than resolving the future
            future.run();
        });
    });
});
// return the future to evaluateString
future;
You can find more info about Java objects in JavaScript here.
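To guard against a script that never calls its completion hook (the "customer forgot to call sendResult()" case from the question), the waiting side can race the completion signal against a timeout. In Rhino the waiting side would be Java (e.g. FutureTask.get with a timeout); the sketch below uses plain JavaScript Promises to illustrate the same idea, and `withTimeout` and the simulated script are invented for illustration:

```javascript
// Race a completion signal against a timeout so the host never waits forever.
function withTimeout(promise, ms) {
  const timeout = new Promise((resolve, reject) =>
    setTimeout(() => reject(new Error('script did not signal completion')), ms));
  return Promise.race([promise, timeout]);
}

// Simulate a script that signals completion after 10 ms
const scriptDone = new Promise(resolve =>
  setTimeout(() => resolve('result'), 10));

withTimeout(scriptDone, 1000)
  .then(res => console.log('script finished:', res))
  .catch(err => console.error(err.message));
```

If the script misbehaves and never signals, the timeout branch wins and the host can clean up instead of hanging.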

Related

Make js wait for a promise before going to next line in Nodejs ideally using ES5

I know it's asked many times, but I cannot make it work in my case.
I was writing code for an IBM tool that runs the JS file using Rhino engine version 1.7R4 (JavaScript 1.7 by default, ES5 compliance, JavaScript 1.8 generator expressions). See compatibility here.
The JS file works fine on the server, as the Rhino implementation there executes it in a synchronous way.
It calls some built-in functions that work synchronously; e.g., one of them fetches data from a database. If I try to reproduce the same functionality on my local machine, the JS executes asynchronously.
A sample which works synchronously on the server is below. (Changing this code to async/await might make it work on my machine, but it would not work on the server; hence I won't be running the same code on my machine and the server.)
var events = new Object();

function getEvents() {
    var query = "select * from alerts.status";
    events = DirectSQL("AGG_BSS_Objectserver", query, false);
}

function processEvents() {
    // Do something with events
}

getEvents();
processEvents();
DirectSQL is a built-in server function that executes synchronously and puts data into events before the following lines of code run; i.e., processEvents() is called after the results are obtained from DirectSQL. The above is a simplified example; we call DirectSQL numerous times with complex logic.
For testing, I tried running this file on my local computer using Node.js. Since my local machine does not have a DirectSQL function, I created one in the same JS file, as below. It works well, but it is asynchronous; i.e., processEvents() is executed before the results are returned from DirectSQL(). Below is my test implementation of DirectSQL(); since it is only for testing, I can change it as much as I like.
function DirectSQL(dataSource, query, countOnly) {
    var Sybase = require('sybase'),
        db = new Sybase(server, port, dbname, user, pass);
    db.connect(function (err) {
        if (err) return console.log(err);
        db.query(query, function (err, data) {
            if (err) console.log(err);
            console.log(data);
            db.disconnect();
            return data; // note: this returns from the inner callback, not from DirectSQL
        });
    });
}
Is there any way to make DirectSQL return its results into events before processEvents() executes?
I guess this is what you are looking for: How is async/await transpiled to ES5. At the end of the article there is a reference to further explanations. Good luck!
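Another option, without transpiling, is to restructure the local test harness around a callback, since the Node Sybase driver is asynchronous. The sketch below is a hedged illustration: the simulated data, timings, and the callback-taking DirectSQL shim are invented for testing; the original synchronous file on the server would stay untouched.

```javascript
// Local test shim: DirectSQL takes a callback instead of returning a value.
function DirectSQL(dataSource, query, countOnly, callback) {
  // Simulated async query; a real version would invoke callback(err, data)
  // from inside db.query's callback instead of using `return data`.
  setTimeout(() => callback(null, [{ severity: 5 }, { severity: 3 }]), 10);
}

function getEvents(callback) {
  DirectSQL("AGG_BSS_Objectserver", "select * from alerts.status", false, callback);
}

function processEvents(events) {
  return events.length; // stand-in for the real processing
}

// processEvents only runs once the data has actually arrived:
getEvents(function (err, events) {
  if (err) return console.error(err);
  console.log('processed', processEvents(events), 'events');
});
```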

Impact of sync function inside async functions

Let's imagine an asynchronous function that loads a file first and then does something asynchronous with it afterwards. The function can't continue without the file, so my assumption is that loading this file could be done synchronously (*):
function asyncFnWithSyncCode(filePath, next) {
    // Load file
    const file = fs.readFileSync(filePath)
    // Continue to process file with async functions
    // ...
    next(null, processedFile)
}
asyncFnWithSyncCode could be called several times for different files:
async.parallel([
    (done) => { asyncFnWithSyncCode('a.json', done) },
    (done) => { asyncFnWithSyncCode('b.json', done) },
    (done) => { asyncFnWithSyncCode('c.json', done) }
], next)
My question is: how does this impact performance? Will the sync function cause the other readFileSync calls to be delayed? Will it have an impact at all?
Best practices, resources and opinions are welcome. Thanks!
(*) I know that I could simply use the async readFile version, but I would really like to know how it works in this particular construction.
Will the sync function cause the other readFileSyncs to be delayed?
Yes. NodeJS runs all of your JavaScript code on a single thread, using an event loop (job queue), which is one of the reasons that using asynchronous system calls is strongly encouraged over synchronous ones.
readFile schedules the read operation and then lets other things happen on the JavaScript thread while the I/O layer is waiting for the data to come in; Node's I/O layer queues a task for the JavaScript thread when data is available, which is what ultimately makes your readFile callback get called.
In contrast, readFileSync holds up that one single JavaScript thread, waiting for the file data to become available. Since there's only one thread, that holds up everything else your code might otherwise be doing, including other readFileSync calls.
Your code doesn't need to use readFileSync (you almost never do); just use readFile's callback:
function asyncFnWithSyncCode(filePath, next) {
    // Load file
    fs.readFile(filePath, function (err, file) {
        if (err) {
            // ...handle error...
            // ...continue if appropriate:
            next(err, null);
        } else {
            // ...use `file`...
            // Continue to process file with async functions
            // ...
            next(null, processedFile);
        }
    });
}

When working with NodeJS FS mkdir, what is the importance of including callbacks?

I'm playing with the NodeJS REPL console and following this tutorial.
http://www.tutorialspoint.com/nodejs/nodejs_file_system.htm
I'm focusing on the File System (FS) module. Let's look at the mkdir function, used for creating directories.
According to TutorialsPoint, this is how you create a directory with FS
var fs = require("fs");
console.log("Going to create directory /tmp/test");
fs.mkdir('/tmp/test', function (err) {
    if (err) {
        return console.error(err);
    }
    console.log("Directory created successfully!");
});
They specifically say you need this syntax
fs.mkdir(path[, mode], callback)
Well, I just tried using less code, without the callback, and it worked.
var fs = require('fs');
fs.mkdir('new-directory');
And the directory was created. The syntax should just be
fs.mkdir(path);
I have to ask: what is the purpose of the callback, and do you really need it? For removing a directory I could understand why you'd need one, in case the directory didn't exist. But I can't see what could possibly go wrong with the mkdir command. It seems like a lot of unnecessary code.
As of node v10.0, the callback to fs.mkdir() is required. You must pass it, even if you just pass a dummy function that does nothing.
The point of the callback is to let you know if and when the call succeeded and if it didn't succeed, what the specific error was.
Remember, this type of function is asynchronous. It completes at some unknown time in the future, so the only way to know when it is done, or whether it succeeded, is to pass a callback function; when the callback is called, you can check the error argument and see that the operation has completed.
As it turns out, there are certainly things that can go wrong with mkdir(), such as a bad path or a permissions error, so errors can certainly happen. And if you want to use the new directory immediately, you have to wait until the callback is called before using it.
In response to one of your other comments: fs.mkdir() is always asynchronous, whether you pass the callback or not.
Here's an example:
var path = '/tmp/test';
fs.mkdir(path, function (err) {
    if (err) {
        console.log('failed to create directory', err);
    } else {
        // myData is assumed to be defined elsewhere
        fs.writeFile(path + "/mytemp", myData, function (err) {
            if (err) {
                console.log('error writing file', err);
            } else {
                console.log('writing file succeeded');
            }
        });
    }
});
Note: modern versions of nodejs include fs.promises.mkdir(), which returns a promise that resolves/rejects instead of using plain callbacks. This allows you to use await with try/catch, or .then() and .catch(), instead of a plain callback to know when it's done; promises typically make it easier to sequence with other asynchronous operations and to centralize error handling.
Because mkdir is async.
Example:
If you do:
fs.mkdir('test');
fs.statSync('test').isDirectory(); // might return false, because the directory may not have been created yet
But if you do:
fs.mkdir('test', function () {
    fs.statSync('test').isDirectory(); // the directory exists at this point
});
You can still use mkdirSync if you need a sync version.
Many things can go wrong when using mkdir, and you should probably handle exceptions and errors and return them to the user when possible.
E.g., mkdir /foo/bar could fail because you might need root (sudo) permissions to create a top-level folder.
However, the general idea behind callbacks is that the method you're using is asynchronous, and given the way JavaScript works, you may want to be notified and continue your program's execution once the directory has been created.
Update: bear in mind that if you need, let's say, to save a file in the directory, you'll need to use that callback:
fs.mkdir('/tmp/test', function (err) {
    if (err) {
        return console.log('failed to write directory', err);
    }
    // now, write a file in the directory
});
// at this point, the directory has not been created yet
I also recommend having a look at promises, which are now used more often than callbacks.
Because it's an async call, further execution of the program may depend on the outcome of the operation (directory created successfully). The moment the callback executes is the first point at which this can be checked.
However, this operation is really fast and may seem to happen instantly. But because it's async, the line following fs.mkdir(path); will be executed without waiting for any feedback from it, and thus without any guarantee that the directory creation has already finished, or whether it failed.

Blocking javascript functions (node.js)

I have this code:
var resources = myFunc();
myFunc2(resources);
The problem is that JavaScript runs myFunc() asynchronously and then moves on to myFunc2(), so I don't have the results of myFunc() yet.
Is there a way to block on the first call, or another way to make this work?
The reason why this code doesn't work represents the beauty and pitfalls of async javascript. It doesn't work because it is not supposed to.
When the first line of code is executed, you have basically told node to go do something and let you know when it is done. It then moves on to execute the next line of code - which is why you don't have the response yet when you get here. For more on this, I would study the event-loop in greater detail. It's a bit abstract, but it might help you wrap your head around control flow in node.
This is where callbacks come in. A callback is basically a function you pass to another function that will execute when that second function is complete. The usual signature for a callback is (err, response). This enables you to check for errors and handle them accordingly.
//define first
var first = function (callback) {
    //This function would do something, then
    //when it is done, you call back;
    //if no error, hand in null
    callback(null, res);
};
//Then this is how we call it
first(function (err, res) {
    if (err) { return handleError(err); }
    //Otherwise do your thing
    second(res);
});
As you might imagine, this can get complicated really quickly. It is not uncommon to end up with many nested callbacks which make your code hard to read and debug.
Extra:
If you find yourself in this situation, I would check out the async library. Here is a great tutorial on how to use it.
myFunc(), if asynchronous, needs to accept a callback or return a promise. Typically, you would see something like:
myFunc(function myFuncCallback (resources) {
    myFunc2(resources);
});
Without knowing more about your environment and modules, I can't give you specific code. However, most asynchronous functions in Node.js allow you to specify a callback that will be called once the function is complete.
Assuming that myFunc calls some async function, you could do something like this:
function myFunc(callback) {
    // do stuff
    callSomeAsyncFunction(callback);
}
myFunc(myFunc2);
myFunc(myFunc2);
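A runnable version of the pattern, with setTimeout standing in for real async work (the payload myFunc hands back is invented for illustration):

```javascript
function myFunc(callback) {
  // Simulate async work, then hand back (err, resources)
  setTimeout(() => callback(null, ['res1', 'res2']), 10);
}

function myFunc2(resources) {
  console.log('got', resources.length, 'resources');
}

// myFunc2 only runs after myFunc's async work completes:
myFunc((err, resources) => {
  if (err) return console.error(err);
  myFunc2(resources);
});
```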

Is continuation passing style any different to pipes?

I've been learning about continuation-passing style, particularly the asynchronous version as implemented in JavaScript, where a function takes another function as a final argument and makes an asynchronous call to it, passing the return value to this second function.
However, I can't quite see how continuation passing does anything more than recreate pipes (as in Unix command-line pipes) or streams:
replace('somestring','somepattern', filter(str, console.log));
vs
echo 'somestring' | replace 'somepattern' | filter | console.log
Except that the piping is much, much cleaner. With piping, it seems obvious that the data is passed on, and simultaneously execution is passed to the receiving program. In fact with piping, I expect the stream of data to be able to continue to pass down the pipe, whereas in CPS I expect a serial process.
It is imaginable, perhaps, that CPS could be extended to continuous piping if a comms object and update method was passed along with the data, rather than a complete handover and return.
Am I missing something? Is CPS different (better?) in some important way?
To be clear, I mean continuation-passing, where one function passes execution to another, not just plain callbacks. CPS appears to imply passing the return value of a function to another function, and then quitting.
UNIX pipes vs async javascript
There is a big fundamental difference between the way unix pipes behave vs the async CPS code you link to.
Mainly, that the pipe blocks execution until the entire chain is completed, whereas your async CPS example returns right after the first async call is made and only executes your callback when it is completed (when the timeout wait finishes, in your example).
Take a look at this example. I will use the Fetch API and Promises to demonstrate async behavior instead of setTimeout to make it more realistic. Imagine that the first function f1() is responsible for calling some webservice and parsing the result as a json. This is "piped" into f2() that processes the result.
CPS style:
function f2(json) {
    //do some parsing
}

function f1(param, next) {
    return fetch(param).then(response => response.json()).then(json => next(json));
}

// you call it like this:
f1("https://service.url", f2);
You can write something that syntactically looks like a pipe if you move the call to f2 out of f1, but it will do exactly the same as the above:
function f1(param) {
return fetch(param).then(response => response.json());
}
// you call it like this:
f1("https://service.url").then(f2);
But this still will not block. You cannot do this using blocking mechanisms in JavaScript; there is simply no mechanism to block on a Promise. (Well, in this case you could use a synchronous XMLHttpRequest, but that's not the point here.)
CPS vs piping
The difference between the above two methods is who has control over deciding whether to call the next step, and with exactly which parameters: the caller (the later example) or the called function (CPS).
A good example where CPS comes very handy is middleware. Think about a caching middleware for example in a processing pipeline. Simplified example:
function cachingMiddleware(request, next) {
    if (request.url in someCache) {
        return someCache[request.url];
    }
    return next(request);
}
The middleware executes some logic and checks whether the cache is still valid:
If it is not, then next is called, which proceeds with the processing pipeline.
If it is valid, the cached value is returned and the rest of the pipeline (next) is skipped.
Continuation Passing Style at application level
Instead of comparing at an expression/function-block level, looking at continuation-passing style at the application level shows the flow-control advantages of its "continuation" function (a.k.a. callback function). Let's take Express.js as an example:
Each Express middleware takes a rather similar CPS function signature:
const middleware = (req, res, next) => {
    /* middleware's logic */
    next();
}

const customErrorHandler = (error, req, res, next) => {
    /* custom error handling logic */
};
next is express's native callback function.
Correction: The next() function is not a part of the Node.js or Express API, but is the third argument that is passed to the middleware function. The next() function could be named anything, but by convention it is always named “next”
req and res are naming conventions for the HTTP request and HTTP response respectively.
A route handler in Express.js is made up of one or more middleware functions. Express.js passes each of them the req and res objects, with the changes made by the preceding middleware, along with the same next callback.
app.get('/get', middleware1, middleware2, /*...*/, middlewareN, customErrorHandler)
The next callback function serves:
As a middleware's continuation:
Calling next() passes the execution flow to the next middleware function. In this case it fulfils its role as a continuation.
Also as a route interceptor:
Calling next('Custom error message') bypasses all subsequent middlewares and passes the execution control to customErrorHandler for error handling. This makes 'cancellation' possible in the middle of the route!
Calling next('route') bypasses subsequent middlewares and passes control to the next matching route, e.g. /get/part.
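The next()-driven hand-off above can be sketched with a toy dispatcher. This is an illustration of the control flow only, not Express's actual implementation; all names here are invented:

```javascript
// Run middlewares in order; calling next(err) jumps straight to the error handler.
function runChain(middlewares, errorHandler, req) {
  const res = [];
  let i = 0;
  function next(err) {
    if (err) return errorHandler(err, req, res); // skip remaining middlewares
    const mw = middlewares[i++];
    if (mw) mw(req, res, next); // each middleware continues the chain via next()
  }
  next();
  return res;
}

const chain = [
  (req, res, next) => { res.push('logged ' + req.url); next(); },
  (req, res, next) => { res.push('handled'); next(); },
];
const errorHandler = (err, req, res) => res.push('error: ' + err);

const ok = runChain(chain, errorHandler, { url: '/get' });
console.log(ok); // [ 'logged /get', 'handled' ]

const failing = [(req, res, next) => next('boom')];
const bad = runChain(failing, errorHandler, { url: '/get' });
console.log(bad); // [ 'error: boom' ]
```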
Imitating Pipe in JS
There is a TC39 proposal for pipe, but until it is accepted we'll have to imitate pipe's behaviour manually. Nesting CPS functions can potentially lead to callback hell, so here is my attempt at cleaner code:
Assume we want to produce the sentence 'The fox jumps over the moon' by replacing parts of a template string (props):
const props = " The [ANIMAL] [ACTION] over the [OBJECT] "
Each function that replaces a different part of the string is sequenced in an array:
const insertFox = s => s.replace(/\[ANIMAL\]/g, 'fox')
const insertJump = s => s.replace(/\[ACTION\]/g, 'jumps')
const insertMoon = s => s.replace(/\[OBJECT\]/g, 'moon')
const trim = s => s.trim()
const modifiers = [insertFox, insertJump, insertMoon, trim]
We can achieve synchronous, non-streaming pipe behaviour with reduce:
const pipeJS = (chain, callBack) => seed =>
    callBack(chain.reduce((acc, next) => next(acc), seed))

const callback = o => console.log(o)

pipeJS(modifiers, callback)(props) //-> 'The fox jumps over the moon'
And here is an asynchronous version of pipeJS. Note that an async reduce needs each step chained with .then; otherwise a genuinely asynchronous modifier would hand the next modifier a pending Promise instead of a string:
const pipeJSAsync = chain => seed =>
    chain.reduce((acc, next) => acc.then(next), Promise.resolve(seed))

const callbackAsync = o => console.log(o)

pipeJSAsync(modifiers)(props).then(callbackAsync) //-> 'The fox jumps over the moon'
Hope this helps!
