I have got this event listener:
manager.on('newOffer', (offer) => {
  console.log(`We have a new offer :#${offer.id}`.bgGreen);
  getInventory(function () {
    processTradeOffer(offer);
  });
});
My problem arises when many 'newOffer' events appear. They cannot be processed correctly because the program keeps firing 'newOffer' without processing the previous one first.
What I think I need: when a 'newOffer' appears, turn off the event listener, run the functions to completion, and then turn it back on. Is that possible?
Sounds like you need a queue with a concurrency of 1, which is something that async.queue() can provide:
const handler = (offer, done) => {
  console.log(`We have a new offer :#${offer.id}`.bgGreen);
  getInventory(() => {
    processTradeOffer(offer); // NB: if this is async, pass a callback that calls `done`
    done();
  });
};
const queue = require('async').queue(handler, 1);
...
// Add a new offer to the queue:
let offer = new Offer(); // or however you create `offer` objects
queue.push(offer);
(however, this could become a bottleneck if you can't handle offers fast enough, in which case you may need to rethink how offers get handled)
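If processTradeOffer is itself asynchronous, the important part is to call done only once it has actually finished, otherwise the queue will start the next offer too early. A minimal sketch of the handler under that assumption (the completion-callback signature for processTradeOffer is hypothetical; adapt it to however your code signals completion):
const handler = (offer, done) => {
  console.log(`We have a new offer :#${offer.id}`.bgGreen);
  getInventory(() => {
    // Hypothetical callback-taking version of processTradeOffer
    processTradeOffer(offer, (err) => {
      done(err); // tell the queue this offer is finished (or failed)
    });
  });
};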
Short backstory: I am trying to create a Readable stream based on data chunks that are emitted back to my server from the client side with WebSockets. Here's a class I've created to "simulate" that behavior:
class DataEmitter extends EventEmitter {
  constructor() {
    super();

    const data = ['foo', 'bar', 'baz', 'hello', 'world', 'abc', '123'];
    // Every second, emit an event with a chunk of data
    const interval = setInterval(() => {
      this.emit('chunk', data.splice(0, 1)[0]);
      // Once there are no more items, emit an event
      // notifying that that is the case
      if (!data.length) {
        this.emit('done');
        clearInterval(interval);
      }
    }, 1e3);
  }
}
In this post, the dataEmitter in question will have been created like this.
// Our data is being emitted through events in chunks from some place.
// This is just to simulate that. We cannot change the flow - only listen
// for the events and do something with the chunks.
const dataEmitter = new DataEmitter();
Right, so I initially tried this:
const readable = new Readable();

dataEmitter.on('chunk', (data) => {
  readable.push(data);
});

dataEmitter.once('done', () => {
  readable.push(null);
});
But that results in this error:
Error [ERR_METHOD_NOT_IMPLEMENTED]: The _read() method is not implemented
So I did this, implementing read() as an empty function:
const readable = new Readable({
  read() {},
});

dataEmitter.on('chunk', (data) => {
  readable.push(data);
});

dataEmitter.once('done', () => {
  readable.push(null);
});
And it works when piping into a write stream, or sending the stream to my test API server. The resulting .txt file looks exactly as it should:
foobarbazhelloworldabc123
However, I feel like there's something quite wrong and hacky with my solution. I attempted to put the listener registration logic (.on('chunk', ...) and .once('done', ...)) within the read() implementation; however, read() seems to get called multiple times, and that results in the listeners being registered multiple times.
The Node.js documentation says this about the _read() method:
When readable._read() is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the this.push(dataChunk) method. _read() will be called again after each call to this.push(dataChunk) once the stream is ready to accept more data. _read() may continue reading from the resource and pushing data until readable.push() returns false. Only when _read() is called again after it has stopped should it resume pushing additional data into the queue.
After dissecting this, it seems that the consumer of the stream calls upon .read() when it's ready to read more data. And when it is called, data should be pushed into the stream. But, if it is not called, the stream should not have data pushed into it until the method is called again (???). So wait, does the consumer call .read() when it is ready for more data, or does it call it after each time .push() is called? Or both?? The docs seem to contradict themselves.
Implementing .read() on Readable is straightforward when you've got a basic resource to stream, but what would be the proper way of implementing it in this case?
And also, would someone be able to explain in better terms what the .read() method is on a deeper level, and how it should be implemented?
Thanks!
Response to the answer:
I did try registering the listeners within the read() implementation, but because it is called multiple times by the consumer, it registers the listeners multiple times.
Observing this code:
const readable = new Readable({
  read() {
    console.log('called');
    dataEmitter.on('chunk', (data) => {
      readable.push(data);
    });
    dataEmitter.once('done', () => {
      readable.push(null);
    });
  },
});
readable.pipe(createWriteStream('./data.txt'));
The resulting file looks like this:
foobarbarbazbazbazhellohellohellohelloworldworldworldworldworldabcabcabcabcabcabc123123123123123123123
Which makes sense, because the listeners are being registered multiple times.
It seems like the only real purpose of implementing the read() method here is to start receiving the chunks and pushing them into the stream once the consumer is ready for them.
Based on these conclusions, I've come up with this solution.
class MyReadable extends Readable {
  // Keep track of whether or not the listeners have already
  // been added to the data emitter.
  #registered = false;

  _read() {
    // If the listeners have already been registered, do
    // absolutely nothing.
    if (this.#registered) return;

    // "Notify" the client via websockets that we're ready
    // to start streaming the data chunks.
    const emitter = new DataEmitter();

    const handler = (chunk: string) => {
      this.push(chunk);
    };

    emitter.on('chunk', handler);

    emitter.once('done', () => {
      this.push(null);
      // Clean up the listener once it's done (this is
      // assuming the #emitter object will still be used
      // in the future).
      emitter.off('chunk', handler);
    });

    // Mark the listeners as registered.
    this.#registered = true;
  }
}
const readable = new MyReadable();
readable.pipe(createWriteStream('./data.txt'));
But this implementation doesn't allow for the consumer to control when things are pushed. I guess, however, in order to achieve that sort of control, you'd need to communicate with the resource emitting the chunks to tell it to stop until the read() method is called again.
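For illustration, if the source could be told to stop and resume, a sketch of that idea might look like this (pause() and resume() are hypothetical methods that the DataEmitter above does not actually have):
const { Readable } = require('stream');

class BackpressuredReadable extends Readable {
  constructor(emitter, options) {
    super(options);
    this._emitter = emitter;

    emitter.on('chunk', (chunk) => {
      // push() returning false means the internal buffer is full,
      // so ask the source to stop sending until _read() is called again.
      if (!this.push(chunk)) emitter.pause();
    });

    emitter.once('done', () => this.push(null));
  }

  _read() {
    // The consumer is ready for more data: tell the source to resume.
    this._emitter.resume();
  }
}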
If you are using the native Events API and not rolling your own, how can you make sure that event handlers that attach to an event still get that event even if it has already fired? Assume this event only fires once.
Example: you load data from some API. Your UI attaches a listener to 'dataLoaded'. Usually fetching data takes time, so event handlers get attached before fetching completes. However, sometimes the data is so small that the fetch takes almost no time, and the UI ends up attaching the event listener after the fetching was done, and thus misses it.
This is just an example, not an actual use case.
I want to be able to provide events in my code where, if someone attaches a listener, they get that event.
Another way to think about this: you ask someone to tell you when the train arrives, he goes to check, and the train is already there, but he doesn't tell you. When you ask him why he didn't say anything, he replies that, well, the train was already here; technically no train arrived after him.
=============
Another way to phrase this question:
You have a pub/sub system. You want certain messages to be delivered even if someone subscribed after the message was published on a certain channel.
Promises make for handy containers of async data. You can call .then() as many times as you want on them and will always be guaranteed to receive the resolved data.
For example:
// Example async function
function getData() {
  const data = { items: [1, 2, 3] }
  // Takes 200ms to resolve data
  return new Promise(res => setTimeout(res, 200, data))
}

// Store the promise
const dataLoaded = getData()

// Here's an early listener, attached before resolving
dataLoaded.then(data => {
  console.log("Early listener got:", data.items)
})

// Wait and add another listener well after the data has resolved
setTimeout(() => {
  dataLoaded.then(data => {
    console.log("Late listener got:", data.items)
  })
}, 1000)
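If the data still arrives as an EventEmitter-style 'dataLoaded' event, one way to bridge the two in Node is events.once(), which turns the next emission into a promise you can hand out. A sketch (here `emitter` stands in for whatever object fires 'dataLoaded'):
const { once } = require('events');

// Wrap the one-time event in a promise as soon as the emitter exists,
// before anyone subscribes. Late .then() calls still receive the payload.
const dataLoaded = once(emitter, 'dataLoaded').then(([data]) => data);

// Whether attached early or late, every listener gets the same resolved data:
dataLoaded.then(data => console.log('got', data));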
The following is a simple scenario for some GNOME extension:
Enable the extension. The extension is a class which extends Clutter.Actor. It creates an actor called myActor and adds it: this.add_child(myActor). Then it calls an asynchronous, time-consuming function this._tcFunction() which in the end does something with myActor.
Here's where I run into a problem:
We disable (run this.destroy()) the extension immediately after enabling it.
On disabling, this.destroy() runs GObject's this.run_dispose() to collect garbage. However, if this._tcFunction() has not finished running, it'll later try to access myActor, which might have already been deallocated by this.run_dispose().
One way to go about this is to define a boolean variable in this.destroy():
destroy() {
    this._destroying = true;
    // ...
    this.run_dispose();
}
and then add a check in this._tcFunction(), e.g.
async _tcFunction() {
    await this._timeConsumingStuff();
    if (this._destroying === true) { return; }
    myActor.show();
}
My question: is there a nicer way to deal with these situations? Maybe with Gio.Cancellable()? AFAIK, there's no easy way to stop an async function in javascript...
Firstly, two things to be aware of:
Avoid calling low-level memory management functions like GObject.run_dispose(), as there are cases in the C libraries where these objects are being cached for reuse and aren't actually being disposed when you think they are. There is also no dispose signal, and other objects may need notification.
Avoid overriding functions that trigger disposal, like Clutter.Actor.destroy(), and instead use the destroy signal. GObject signal callbacks always get the emitting object as the first argument, and it is safe to use that in a destroy callback.
There are a couple of ways I can think of to solve this, but it depends on the situation. If the async function is a GNOME library async function, it probably has a cancellable argument:
let cancellable = new Gio.Cancellable();
let actor = new Clutter.Actor();
actor.connect('destroy', () => cancellable.cancel());

Gio.File.new_for_path('foo.txt').load_contents_async(cancellable, (file, res) => {
    try {
        let result = file.load_contents_finish(res);

        // This shouldn't be necessary if the operation succeeds (I think)
        if (!cancellable.is_cancelled())
            log(actor.width);
    } catch (e) {
        // We know it's not safe
        if (e.matches(Gio.IOErrorEnum, Gio.IOErrorEnum.CANCELLED))
            log('it was cancelled');
        // Probably safe, but let's check
        else if (!cancellable.is_cancelled())
            log(actor.width);
    }
});
// The above function will begin but not finish before the
// cancellable is triggered
actor.destroy();
Of course, you can always use a cancellable with a Promise, or just a regular function/callback pattern:
new Promise((resolve, reject) => {
    // Some operation
    resolve();
}).then(result => {
    // Check the cancellable
    if (!cancellable.is_cancelled())
        log(actor.width);
});
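Applied to the _tcFunction() from the question, that might look something like this (just a sketch; it assumes _timeConsumingStuff() returns a Promise and that the cancellable is created and wired to the actor's destroy signal as shown above):
async _tcFunction(cancellable) {
    await this._timeConsumingStuff();

    // The actor's destroy handler cancelled this, so don't touch the actor
    if (cancellable.is_cancelled())
        return;

    myActor.show();
}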
Another option is to null out your reference, since you can safely check for that:
let actor = new Clutter.Actor();

actor.connect('destroy', () => {
    actor = null;
});

if (actor !== null)
    log(actor.width);
I am using the npm ws module (or actually the wrapper called isomorphic-ws) for a WebSocket connection.
NPM Module: isomorphic-ws
I use it to receive some array data from a websocket++ server running on the same PC. This data is then processed and displayed as a series of charts.
Now the problem is that the handling itself takes a very long time. I use one message to calculate 16 charts, and for each of them I need to compute a lot of logarithms and other slow operations, all of that in JS. The whole refresh operation takes about 20 seconds.
Now I could actually live with that, but the problem is that when I get a new request, it is processed only after the whole message handler has finished. And if I get several requests in the meantime, all of them will be processed in the order they came in. So the requests queue up and the current state gets more and more outdated as time goes on...
I would like to have a way of detecting that there is another message waiting to be processed. If that is the case, I could just stop the current handler at any time and start over... So when using npm ws, is there a way of telling that there is another message waiting to be processed?
Thanks
You need to create some sort of cancelable job wrapper. It's hard to give a concrete suggestion without seeing your code. But it could be something like this.
const processArray = array => {
  let canceled = false;
  const promise = new Promise((resolve, reject) => {
    // do something with the array
    for (let i = 0; i < array.length; i++) {
      // check on each iteration if the job has been canceled
      if (canceled) return reject({ reason: 'canceled' });
      doSomething(array[i])
    }
    resolve(result)
  })
  return {
    cancel: () => {
      canceled = true
    },
    promise
  }
}
const job = processArray([1, 2, 3, ...1000000]) // huge array
// handle the success
job.promise.then(result => console.log(result))
// Cancel the job
job.cancel()
I'm sure there are libraries to serve this exact purpose. But I just wanted to give a basic example of how it could be done.
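For instance, the WebSocket message handler could cancel the previous job whenever a new message arrives. A rough sketch (renderCharts and the message format are placeholders, and it assumes the per-item work yields to the event loop so the canceled flag actually gets a chance to be checked):
let currentJob = null;

ws.onmessage = (event) => {
  // A newer message supersedes whatever is still being processed
  if (currentJob) currentJob.cancel();

  const data = JSON.parse(event.data);
  currentJob = processArray(data);
  currentJob.promise
    .then(result => renderCharts(result)) // placeholder for the chart update
    .catch(err => {
      if (err && err.reason === 'canceled') return; // superseded, ignore
      console.error(err);
    });
};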
I have a Service Worker that receives push messages from Firebase FCM. They cause notifications to show or to cancel. The user can have multiple devices (that's what the cancel is for: when the user already acted on notification A I try to dismiss it on all devices).
The problem I have is when one of the user's devices is offline or turned off altogether. Once the device goes online, firebase delivers all the messages it couldn't deliver before. So, for example, you'd get:
Show notif A with content X
Show notif A with content Y (replaces notif A)
Show notif B with content Z
Cancel notif A
The SW receives these messages in rapid succession. The problem is that cancelling a notification is a lot faster than showing one (~2ms vs 16ms). So the 4th message is handled before the first (or second) message actually created the notification, the result being that the notification is not being cancelled.
// EDIT: Heavily edited question below. Added example code and broke down my questions. Also edited the title to better reflect my actual underlying question.
I tried pushing the messages in a queue and handling them one by one. Turns out this can become a bit complicated because everything in SW is async and, to make matters worse, it can be killed at any time when the browser thinks the SW finished its work. I tried to store the queue in a persistent manner but since LocalStorage is unavailable in SW I need to use the async IndexedDB API. More async calls that could cause problems (like losing items).
It's also possible that event.waitUntil thinks my worker is done before it's actually done because I'm not correctly 'passing the torch' from promise to promise ..
Here's a (heavily) simplified version of the code I tried:
// Use localforage, simplified API for IndexedDB
importScripts("localforage.min.js");
// In memory..
var mQueue = []; // only accessed through get-/setQueue()
var mQueueBusy = false;
// Receive push messages..
self.addEventListener('push', function(event) {
    var data = event.data.json().data;
    event.waitUntil(addToQueue(data));
});
// Add to queue
function addToQueue(data) {
    return new Promise(function(resolve, reject) {
        // Get queue..
        getQueue()
            .then(function(queue) {
                // Push + store..
                queue.push(data);
                setQueue(queue)
                    .then(function(queue) {
                        handleQueue()
                            .then(function() {
                                resolve();
                            });
                    });
            });
    });
}
// Handle queue
function handleQueue(force) {
    return new Promise(function(resolve, reject) {
        // Check if busy
        if (mQueueBusy && !force) {
            resolve();
        } else {
            // Set busy..
            mQueueBusy = true;
            // Get queue..
            getQueue()
                .then(function(queue) {
                    // Check if we're done..
                    if (queue && queue.length <= 0) {
                        resolve();
                    } else {
                        // Shift first item
                        var queuedData = queue.shift();
                        // Store before continuing..
                        setQueue(queue)
                            .then(function(queue) {
                                // Now do work here..
                                doSomething(queuedData)
                                    .then(function() {
                                        // Call handleQueue with 'force=true' to go past (mQueueBusy)
                                        resolve(handleQueue(true));
                                    });
                            });
                    }
                });
        }
    });
}
// Get queue
function getQueue() {
    return new Promise(function(resolve, reject) {
        // Get from memory if it's there..
        if (mQueue && mQueue.length > 0) {
            resolve(mQueue);
        }
        // Read from indexed db..
        else {
            localforage.getItem("queue")
                .then(function(val) {
                    var queue = (val) ? JSON.parse(val) : [];
                    mQueue = queue;
                    resolve(mQueue);
                });
        }
    });
}
// Set queue
function setQueue(queue) {
    return new Promise(function(resolve, reject) {
        // Store queue to memory..
        mQueue = queue;
        // Write to indexed db..
        localforage.setItem("queue", mQueue)
            .then(function() {
                resolve(mQueue);
            });
    });
}
// Do something..
function doSomething(queuedData) {
    return new Promise(function(resolve, reject) {
        // just print something and resolve
        console.log(queuedData);
        resolve();
    });
}
The short version of my question - with my particular use-case in mind - is: how do I handle push messages synchronously without having to use more async APIs?
And if I were to split that question into multiple parts:
Am I right to assume I would need to queue those messages?
If so, how would one handle queues in SW?
I can't (completely) rely on global variables because the SW may be killed, and I can't use LocalStorage or similar synchronous APIs, so I need to use yet another async API like IndexedDB to do this. Is this assumption correct?
Is my code above the right approach?
Somewhat related: Since I need to pass the event.waitUntil from promise to promise until the queue is processed, am I right to call resolve(handleQueue()) inside handleQueue() to keep it going? Or should I do return handleQueue()? Or..?
Just to pre-empt the "why not use collapse_key" question: it's a chat app and every chat room has its own tag. A user can participate in more than 4 chat rooms, and since Firebase limits the number of collapse_keys to 4, I can't use that.
So I'm going to go out on a limb and say that serializing things to IDB could be overkill. As long as you wait until all your pending work is done before you resolve the promise passed to event.waitUntil(), the service worker should be kept alive. (If it takes minutes to finish that work, there's the chance that the service worker would be killed anyway, but for what you describe I'd say the risk of that is low.)
Here's a rough sketch of how I'd structure your code, taking advantage of native async/await support in all browsers that currently support service workers.
(I haven't actually tested any of this, but conceptually I think it's sound.)
// In your service-worker.js:
let isPushMessageHandlerRunning = false;
const queue = [];

self.addEventListener('push', event => {
  var data = event.data.json().data;
  event.waitUntil(queueData(data));
});

async function queueData(data) {
  queue.push(data);

  if (!isPushMessageHandlerRunning) {
    await handlePushDataQueue();
  }
}

async function handlePushDataQueue() {
  isPushMessageHandlerRunning = true;

  let data;
  while (data = queue.shift()) {
    // Await on something asynchronous, based on data.
    // e.g. showNotification(), getNotifications() + notification.close(), etc.
    await ...;
  }

  isPushMessageHandlerRunning = false;
}
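For what goes inside that loop, the awaited work could look roughly like this (an illustrative sketch only; the type/tag/title/body fields on each queued item are assumptions about your push payload):
async function processPushData(data) {
  if (data.type === 'cancel') {
    // Dismiss a previously shown notification with the same tag
    const notifications = await self.registration.getNotifications({ tag: data.tag });
    notifications.forEach(notification => notification.close());
  } else {
    await self.registration.showNotification(data.title, {
      tag: data.tag,
      body: data.body,
    });
  }
}
The `await ...;` placeholder in the loop would then become something like `await processPushData(data);`.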