Avoid allocation errors with destroy() and async functions - javascript

The following is a simple scenario for some GNOME extension:
Enable the extension. The extension is a class which extends
Clutter.Actor.
It creates an actor called myActor and adds it with
this.add_child(myActor).
Then it calls an asynchronous,
time-consuming function this._tcFunction() which in the end does something
with myActor.
Here's where I run into a problem:
We disable the extension (which runs this.destroy()) immediately after enabling it.
On disabling, this.destroy() runs GObject's this.run_dispose() to
collect garbage. However, if this._tcFunction() has not finished running,
it will later try to access myActor, which may already have been
deallocated by this.run_dispose().
One way to go about this is to define a boolean flag in this.destroy():
destroy() {
    this._destroying = true;
    // ...
    this.run_dispose();
}
and then add a check in this._tcFunction(), e.g.
async _tcFunction() {
    await this._timeConsumingStuff();
    if (this._destroying === true) { return; }
    myActor.show();
}
My question: is there a nicer way to deal with these situations? Maybe with Gio.Cancellable()? AFAIK, there's no easy way to stop an async function in JavaScript...

Firstly, two things to be aware of:
Avoid calling low-level memory management functions like GObject.run_dispose(), as there are cases in the C libraries where these objects are cached for reuse and aren't actually disposed when you think they are. There is also no dispose signal, and other objects may need notification.
Avoid overriding functions that trigger disposal, like Clutter.Actor.destroy(); instead, use the destroy signal. GObject signal callbacks always get the emitting object as their first argument, and it is safe to use that in a destroy callback.
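For instance, a minimal sketch of that last point, assuming a GJS environment where Clutter has been imported:
const { Clutter } = imports.gi;

let myActor = new Clutter.Actor();

// The emitting actor is passed as the first argument, so the callback
// can use it directly instead of closing over an outer reference that
// may already be stale by the time the signal fires.
myActor.connect('destroy', actor => {
    log(`${actor} is being destroyed`);
});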
There are a couple of ways I can think of to solve this, but it depends on the situation. If the async function is a GNOME library async function, it probably has a cancellable argument:
let cancellable = new Gio.Cancellable();
let actor = new Clutter.Actor();
actor.connect('destroy', () => cancellable.cancel());

Gio.File.new_for_path('foo.txt').load_contents_async(cancellable, (file, res) => {
    try {
        let result = file.load_contents_finish(res);

        // This shouldn't be necessary if the operation succeeds (I think)
        if (!cancellable.is_cancelled())
            log(actor.width);
    } catch (e) {
        // We know it's not safe
        if (e.matches(Gio.IOErrorEnum, Gio.IOErrorEnum.CANCELLED))
            log('it was cancelled');
        // Probably safe, but let's check
        else if (!cancellable.is_cancelled())
            log(actor.width);
    }
});

// The above function will begin but not finish before the
// cancellable is triggered
actor.destroy();
Of course, you can always use a cancellable with a Promise, or just a regular function/callback pattern:
new Promise((resolve, reject) => {
    // Some operation
    resolve();
}).then(result => {
    // Check the cancellable
    if (!cancellable.is_cancelled())
        log(actor.width);
});
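If you prefer, you can also turn the cancellable itself into a promise that rejects on cancellation and race it against the real work. A sketch, assuming a freshly created cancellable (withCancellable and someAsyncWork are made-up names):
function withCancellable(cancellable, promise) {
    // Reject when the cancellable fires; the GObject 'cancelled'
    // signal is only emitted for cancellations after this point.
    const cancelled = new Promise((resolve, reject) => {
        cancellable.connect('cancelled', () => {
            reject(new Error('Operation was cancelled'));
        });
    });
    return Promise.race([promise, cancelled]);
}

withCancellable(cancellable, someAsyncWork())
    .then(() => log(actor.width))
    .catch(e => log(e.message));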
Another option is to null out your reference, since you can safely check for that:
let actor = new Clutter.Actor();

actor.connect('destroy', () => {
    actor = null;
});

if (actor !== null)
    log(actor.width);

Related

Implementing event synchronization primitives in JavaScript/TypeScript using async/await/Promises

I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions that, when it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished:
processComplete(); // Let the program know we're done processing
Right now, that function looks something like this:
let complete = false;

function processComplete() {
    complete = true;
}
In order to determine whether the process is complete, other code either uses timeouts or process.nextTick and checks the complete variable over and over again in loops. This is complicated and inefficient.
I'd instead like to let various async functions simply use await to wait and be awoken when the process is complete:
// This code will appear in many different places
await /* something regarding completion */;
console.log("We're done!");
If I were programming in Windows in C, I'd use an event synchronization primitive and the code would look something like this:
Event complete;

void processComplete() {
    SetEvent(complete);
}

// Elsewhere, repeated in many different places
WaitForSingleObject(complete, INFINITE);
console.log("We're done!");
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await? In other words, how can I implement an event synchronization primitive using await and async or Promises?
This pattern is quite close to your code:
const processComplete = args => new Promise(resolve => {
    // ...
    // In the middle of a callback for an async function, etc.:
    resolve(); // instead of `complete = true;`
    // ...
});
// elsewhere
await processComplete(args);
console.log("We're done!");
More info: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise.
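If you want something that reads like the Windows event primitive, you can wrap that resolve function in a small class; a sketch (the AsyncEvent name is invented here):
class AsyncEvent {
    constructor() {
        // The executor runs synchronously, so _resolve is set before
        // the constructor returns.
        this._promise = new Promise(resolve => {
            this._resolve = resolve;
        });
    }
    // Signal the event: wakes every current and future waiter.
    set() {
        this._resolve();
    }
    // Await this from any number of places.
    wait() {
        return this._promise;
    }
}

const complete = new AsyncEvent();

function processComplete() {
    complete.set(); // instead of `complete = true;`
}

// Elsewhere, repeated in many different places:
async function waiter() {
    await complete.wait();
    console.log("We're done!");
}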
It really depends on what you mean by "other code" in this scenario. It sounds like you want to use some variation of the delegation pattern or the observer pattern.
A simple approach is to take advantage of the fact that JavaScript allows you to store an array of functions. Your processComplete() method could do something like this:
const arrayOfFunctions = [];

function processComplete() {
    arrayOfFunctions.forEach(fn => fn());
}
Elsewhere, in your other code, you could create functions for what needs to be done when the process is complete, and add those functions to the arrayOfFunctions.
If you don't want these different parts of code to be so closely connected, you could set up a completely separate part of your code that functions as a notification center. Then, you would have your other code tell the notification center that it wants to be notified when the process is complete, and your processComplete() method would simply tell the notification center that the process is complete.
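A minimal sketch of that notification-center idea (names are illustrative):
const notificationCenter = {
    _observers: {},
    // Other code registers interest in a named notification.
    addObserver(name, fn) {
        (this._observers[name] = this._observers[name] || []).push(fn);
    },
    // The completing code posts the notification to all observers.
    post(name, payload) {
        (this._observers[name] || []).forEach(fn => fn(payload));
    }
};

notificationCenter.addObserver('processComplete', () => console.log("We're done!"));

function processComplete() {
    notificationCenter.post('processComplete');
}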
Another approach is to use promises.
I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions
Then make sure that every bit of the process that is asynchronous returns a promise for its partial result, so that you can chain onto them and compose them together or await them.
When it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished
It shouldn't. The function that starts the process should return a promise, and when the process is finished it should fulfill that promise.
If you don't want to properly promisify every bit of the whole process because it's too cumbersome, you can just do
function startProcess(…) {
    … // do whatever you need to do
    return new Promise(resolve => {
        processComplete = resolve;
        // don't forget to reject when the process failed!
    });
}
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await?
If they are already awaiting the result of the promise, there is nothing else that needs to be done. (The awaited promise internally has such a flag already). It's really just doing
// somewhere:
var resultPromise = startProcess(…);
// elsewhere:
await resultPromise;
… // the process is completed here
You don't even need to fulfill the promise with a useful result if all you need is to synchronise your tasks, but you really should. (If there's no data they are waiting for, what are they waiting for at all?)

await function inside third-party non-async function

async function myAsyncFunc(): Promise<void> {
    // do async stuff
    // e.g. delay with a promise of setTimeout
}

libraryObject.onBeforeDrawOnScreen(function() {
    myAsyncFunc(); // can't "await" or block.
});
How can I wait before exiting from 'onBeforeDrawOnScreen'?
The library doesn't care about the callback's return type or value. There is no special check for a Promise return value, so it doesn't await my callback, and adding the async keyword to the callback doesn't help.
I'm aware of JavaScript's event loop, but... is there a workaround?
Thanks
I'd see if you can find a better library, or, if the library you're using is open-source, try contributing async support yourself. Otherwise, there really isn't any way to do this.
It might occur to you to do what's called a busy-wait, where you have a while loop that repeatedly checks some state that will be changed by a resolving promise. Even this won't work, though, because of the single-threaded nature of JS. The very handlers responsible for getting you out of the loop are blocked until you get out of the loop. Note: the following will cause an infinite loop, so don't actually do it:
libraryObject.onBeforeDrawOnScreen(function() {
    let done = false;
    let error = null;
    myAsyncFunc()
        .then(() => {
            done = true;
        }, (err) => {
            error = err;
            done = true;
        });
    while (!done);
    if (error) throw error;
});
This is why async support is so crucial in many JS libraries.
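For illustration, if you did contribute async support, the library's hook runner only needs to await whatever each callback returns; Promise.resolve() also covers plain synchronous callbacks. A sketch with hypothetical names:
// Hypothetical library internals: run the registered hooks, awaiting
// any that return promises, before actually drawing.
async function runBeforeDrawHooks(hooks) {
    for (const hook of hooks) {
        await Promise.resolve(hook()); // works for sync and async callbacks
    }
    // ...proceed to draw on screen
}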

What is the equivalent of not calling a callback when inside an async function?

Before async/await, when my code used callbacks, I was able to do three things: (1) call the callback with a result, (2) call the callback with an Error, or (3) not call the callback at all.
Case (3) was used in situations like this: say that you have a zoom button and a user can click it to render an image at a higher resolution, and this is an async process. If the user clicks the zoom button again, then the first render is no longer relevant, and can be canceled to let the new zoom level render run. I handled this by returning from inside the function without calling the callback, e.g.
if (process.wasCanceled()) {
    return;
}
// ...
callback(someResult);
With async/await, there are only two things that you can do: return or throw. Currently, I've been using throw to indicate that the operation was canceled, since returning can falsely indicate that upstream processes should keep running. But the problem with throwing is that all the upstream callers need to know that it's not really an error, per se, and so they may need to check the type of the error.
Another crazy idea I had was to create a promise that never returns. E.g. await never(), where the function is defined like this:
async function never () {
    return new Promise(function () {});
}
That is sort of the equivalent of not calling a callback.
But I don't know if that would just leak memory over and over.
Is there a better equivalent without the drawbacks I mentioned above?
If absolutely necessary, you can await a promise that never returns. This is the equivalent of not calling a callback.
async function never () {
    return new Promise(function () {});
}

async function op (process) {
    // ...
    if (process.wasCanceled()) await never();
    // ...
}
According to these answers, this will be garbage collected, because the returned promise is never used and there are no connections to the heap inside the promise's function argument.
Do never resolved promises cause memory leak?
Are JavaScript forever-pending promises bad?
However, this is most likely not what you want to do, since upstream callers may like to know that their operation has been canceled. If the operation was initiated by a user through the UI, then yes, canceling without telling the caller is probably OK, but if the operation was initiated programmatically and cancelled some other way, e.g. by the user, then the calling code might need to know that, so that it can try again, or clean up resources.
For this reason, the solution is to throw an error, of a specific class so that the caller can detect that the process was cancelled. E.g.
class ProcessCanceledError extends Error {}

// `delay` is assumed to be a promise-based sleep, e.g.:
// const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function render (process) {
    while (...) {
        // do some rendering
        await delay(20);
        if (process.wasCanceled()) throw new ProcessCanceledError();
    }
}

var zoomProcess;

async function zoom () {
    let process = new Process();
    if (zoomProcess != null && !zoomProcess.isDone()) {
        zoomProcess.cancel();
    }
    zoomProcess = process; // track the new render so a later zoom can cancel it
    try {
        await render(process);
    } catch (e) {
        // or you could do e.process === process
        if (e instanceof ProcessCanceledError &&
            process.wasCanceled() // make sure it was actually ours
        ) {
            // this assumes we are a top level function
            // otherwise, you would want to propagate the error to caller's caller
            return;
        }
        throw e;
    }
}
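As an aside, the same shape can be built on AbortController, the standard cancellation primitive in browsers and recent Node, instead of a hand-rolled Process class. A hedged sketch (DOMException with the 'AbortError' name is the conventional cancellation error in those runtimes):
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

let controller = null;

async function render(signal) {
    for (let step = 0; step < 100; step++) {
        // do some rendering
        await delay(20);
        if (signal.aborted) throw new DOMException('render cancelled', 'AbortError');
    }
}

async function zoom() {
    if (controller) controller.abort(); // cancel any in-flight render
    controller = new AbortController();
    try {
        await render(controller.signal);
    } catch (e) {
        if (e.name === 'AbortError') return; // our own cancellation, not an error
        throw e;
    }
}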

Waiting for an Asynchronous method in NodeJS

I've looked high and low, and can only find how to write async functions, which I already understand.
What I am trying to do is run an async method in a triggered event [EventEmitter], but such a simple thing seems to be just simply not possible as I can find.
Consider the following...
// Your basic async method..
function doSomething(callback) {
    var obj = { title: 'hello' };
    // Fire an event for event handlers to alter the object.
    // EventEmitters are called synchronously
    eventobj.emit('alter_object', obj);
    callback(null, obj);
}
// when this event is fired, I want to manipulate the data
eventobj.on('alter_object', function(obj) {
    obj.title += " world!";
    // Calling this async function here means that our
    // event handler will return before our data is retrieved.
    somemodule.asyncFunction(function(err, data) {
        obj.data = data;
    });
});
As you can see in the last few lines, the event handler will finish before the object's data property is added.
What I need is something where I can turn the async function into a sync function and get the results there and then, so for example...
obj.data = somemodule.asyncFunction();
I've looked at the wait.for module and the async module, and none of these will work. I've even looked into yield, but it seems it's not yet fully implemented in the V8 engine.
I've also tried using a while loop to wait for the data to populate, but this just brings with it the CPU overload issue.
Has anyone experienced this and found a design pattern to get around this?
You cannot turn an async function into a synchronous one in node.js. It just cannot be done.
If you have an asynchronous result, you cannot return it synchronously or wait for it. You will have to redesign the interface to use an asynchronous interface (which nearly always involves passing in a callback that will be called when the result is ready).
If you're wanting to do something after you .emit() an event that is itself going to do something asynchronously, and you want to wait until the async thing has finished, then the event emitter is probably not the right interface. You'd rather have a function call that returns a promise or takes a callback as an argument. You could manipulate an eventEmitter to do this, but you'd have to post back a second event when the async operation finished and have the original caller not do its second part until it receives the second event (which is really not a good way to go).
Bottom line - you need a different design that works with async responses (e.g. callbacks or promises).
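To make that redesign concrete, here is a sketch that replaces the emitter with a list of (possibly async) hook functions that doSomething awaits in order; asyncFunctionP is a stand-in stub for a promisified version of the question's somemodule.asyncFunction:
// Stub for demonstration only: resolves with some data after 50 ms.
const asyncFunctionP = () => new Promise(r => setTimeout(() => r('some data'), 50));

const alterObjectHooks = [];

function onAlterObject(fn) {
    alterObjectHooks.push(fn); // fn may return a promise
}

async function doSomething() {
    const obj = { title: 'hello' };
    // Unlike emit(), this waits for each handler, async or not.
    for (const hook of alterObjectHooks) {
        await hook(obj);
    }
    return obj;
}

onAlterObject(async obj => {
    obj.title += ' world!';
    obj.data = await asyncFunctionP();
});

doSomething().then(obj => console.log(obj.title, obj.data));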
Seems what I wanted to do is just not possible and the two models conflict. To achieve what I wanted in the end, I encapsulated my modules into an array-like object with some event methods that will pass it on to the module's object, which inherited the async-eventemitter class.
So, think of it like so...
My custom app modules may inherit the async-eventemitter module so they have the .on() and .emit(), etc. methods.
I create a customised array item, that will allow me to pass an event on to the module in question that will work asynchronously.
The code I created (and this is by no means complete or perfect)...
// My private indexer for accessing the modules array (below) by name.
var module_dict = {};

// An array of my modules on my (express) app object
app.modules = [];

// Here I extended the array with easier ways to add and find modules.
// I've removed some code here to trim this down. Let me know if you want the full code.
Object.defineProperty(app.modules, 'contains', { enumerable: false, ... });
Object.defineProperty(app.modules, 'find', { enumerable: false, ... });

// Allows us to add a hook/(async)event to a module, if it exists
Object.defineProperty(app.modules, 'on', { enumerable: false, configurable: false, value: function(modulename, action, func) {
    if (app.modules.contains(modulename)) {
        var modu = app.modules.find(modulename);
        if (modu.module && modu.module['on']) {
            // This will pass on the event to the module's object that
            // will have the async-eventemitter inherited
            modu.module.on(action, func);
        }
    }
} });

Object.defineProperty(app.modules, 'once', { enumerable: false, configurable: false, value: function(modulename, action, func) {
    if (app.modules.contains(modulename)) {
        var modu = app.modules.find(modulename);
        if (modu.on) {
            modu.on(action, func);
        }
    }
} });
This then allows me to bind an event handler to a module by simply calling something like the following... .on(module_name, event_name, callback)
app.modules.on('my_special_module_name', 'loaded', function(err, data, next) {
    // ...async stuff, that then calls next to continue to the next event...
    if (data.filename.endsWith('.jpg'))
        data.dimensions = { width: 100, height: 100 };
    next(err, data);
});
And then to execute it I would do something like (express)...
app.get('/foo', function(req, res, next) {
    var data = {
        filename: 'bar.jpg'
    };
    // Now have event handlers alter/update our data
    // (eg, extend an object about a file with image data if that file is an image file).
    my_special_module.emit('loaded', data, function(err, data) {
        if (err) return next(err);
        res.send(data);
        next();
    });
});
Again, this is just an example of what I did, so I've probably missed something in my copy above, but effectively it's the design I ended up using and it worked like a treat. I was able to extend data on an object before it was pushed out to my HTTP response, without having to replace the main [expressjs] object's standard EventEmitter model.
(e.g., I added image data for files that were loaded that were image files. If anyone wants the code, let me know, I am more than happy to share what I did.)

Angular/Promises - Multiple Controllers waiting for the same $promise? [duplicate]

I want to implement dynamic loading of a static resource in AngularJS using Promises. The problem: I have a couple of components on the page which might (or might not, depending on which are displayed, thus dynamic) need to get a static resource from the server. Once loaded, it can be cached for the whole application's life.
I have implemented this mechanism, but I'm new to Angular and Promises, and I want to make sure this is the right solution/approach.
var data = null;
var deferredLoadData = null;

function loadDataPromise() {
    if (deferredLoadData !== null)
        return deferredLoadData.promise;

    deferredLoadData = $q.defer();

    $http.get("data.json").then(function (res) {
        data = res.data;
        return deferredLoadData.resolve();
    }, function (res) {
        return deferredLoadData.reject();
    });

    return deferredLoadData.promise;
}
So, only one request is made, and all subsequent calls to loadDataPromise() get back the first promise. It seems to work both for a request that is in progress and for one that finished some time ago.
But is it a good solution to cache Promises?
Is this the right approach?
Yes. The use of memoisation on functions that return promises is a common technique to avoid the repeated execution of asynchronous (and usually expensive) tasks. The promise makes the caching easy because one does not need to distinguish between ongoing and finished operations; they're both represented as (the same) promise for the result value.
Is this the right solution?
No. That global data variable and the resolution with undefined is not how promises are intended to work. Instead, fulfill the promise with the result data! It also makes coding a lot easier:
var dataPromise = null;

function getData() {
    if (dataPromise == null)
        dataPromise = $http.get("data.json").then(function (res) {
            return res.data;
        });
    return dataPromise;
}
Then, instead of loadDataPromise().then(function() { /* use global */ data }) it is simply getData().then(function(data) { … }).
To further improve the pattern, you might want to hide dataPromise in a closure scope, and notice that you will need a lookup for different promises when getData takes a parameter (like the url).
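For example, a sketch of that parameterised lookup, caching one promise per URL (illustrative only):
var promiseCache = {};

function getData(url) {
    // One cached promise per URL; concurrent and later callers
    // for the same URL all share the same request.
    if (!promiseCache[url]) {
        promiseCache[url] = $http.get(url).then(function (res) {
            return res.data;
        });
    }
    return promiseCache[url];
}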
For this task I created a service called defer-cache-service which removes all this boilerplate code. It's written in TypeScript, but you can grab the compiled JS file. GitHub source code.
Example:
function loadCached() {
    return deferCacheService.getDeferred('cache.key1', function () {
        return $http.get("data.json");
    });
}
and consume
loadCached().then(function(data) {
    //...
});
One important thing to notice: if, say, two or more parts call the same loadDataPromise at the same time, you must add this check
if (defer && defer.promise.$$state.status === 0) {
    return defer.promise;
}
otherwise you will be making duplicate calls to the backend.
This design pattern will cache whatever is returned the first time it runs, and return the cached result every time it's called again.
const asyncTask = (cache => {
    return function() {
        // when called the first time, put the promise in the "cache" variable
        if (!cache) {
            cache = new Promise(function(resolve, reject) {
                setTimeout(() => {
                    resolve('foo');
                }, 2000);
            });
        }
        return cache;
    };
})();

asyncTask().then(console.log);
asyncTask().then(console.log);
Explanation:
Simply wrap your function in a self-invoking function which returns a function (your original async function). The purpose of the wrapper is to provide an encapsulating scope for the local variable cache, so that the variable is only accessible within the returned function and holds the exact same value every time asyncTask is called (other than the very first time).
