Node.js stream with multiple 'data' event handlers and race conditions - javascript

I'd like to consume a Node.js request stream multiple (two) times in two separate Koa middlewares, so I added listeners to its 'data' event. It works now, but I fear there is a race condition behind it: according to the documentation, once the first event handler is added, the stream starts calling it as soon as any data is available. Can the second event listener miss some of the chunks if other code runs between the two subscriptions? Or is that somehow avoided (and how)?
Thank you!

Node.js streams use EventEmitter to dispatch events. Every listener on an event receives it, no matter how many listeners there are, so each of your subscribers will get the 'data' event. And as long as both listeners are attached in the same synchronous block of code, neither can miss a chunk: data events are delivered from the event loop, never in the middle of your running code, so no chunk can slip in between the two subscriptions.
The documentation is there to warn you that listening to the 'data' event switches the stream into flowing mode (called 'legacy' mode, or streams v1, in older docs). In flowing mode, chunks are emitted as soon as they arrive, so any chunk emitted before a listener is attached is gone forever.
However, Node.js also supports stream piping. You can pipe one readable stream into multiple writable streams, and the data will be delivered to each of them. Each destination gets its own buffer, so none of them misses chunks if it temporarily stops consuming (for whatever reason), and when the readable stream ends, all of the piped writable streams are ended as well.
streamA.pipe(streamB)
streamA.pipe(streamC)


When does the RTCPeerConnection onicecandidate event get invoked?

when the SDP is set as the local description?
when the answer is set as the remote description?
when any data or streams are added to the RTCPeerConnection?
They should start firing as soon as you've set a local description, whether it be an offer or an answer.
Think of it as an optimization: partial updates to the localDescription. If you wait a couple of seconds to inspect localDescription then the SDP will contain all ICE candidates already, and you won't need to listen to any events. It works to just send the SDP and ignore these events. But this is slow.
To speed up connection establishment, the initial localDescription provided is incomplete: it is missing ICE candidates, since they take time to generate. This lets you signal the SDP early and unblock the other end, provided you promise to follow up and send the missing candidates as they are generated (which is when the event fires).
Yes, the onicecandidate event fires as soon as an offer or answer is set as the localDescription, but it will not fire until a data channel or media stream has been added to the connection first.

Node.js - does the event loop only handle I/O requests?

In general, is an event loop only for IO? And what exactly is an IO job?
For example, let's say a request comes into Node.js, which then makes an outbound HTTP request to an API to get some data while not blocking the user in the meantime.
Is that an IO job, and how would Node.js handle it? What if, instead of the HTTP request, I wanted to asynchronously make a lengthy calculation and then return a value to the user? Is that handled by the event loop too, despite being CPU bound?
In general, is an event loop only for IO?
I wouldn't count timers (setTimeout, setInterval) and scheduling (setImmediate, process.nextTick) as IO, but one could generally say that the events in the event loop come from the outside.
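A small sketch of where these scheduling hooks land relative to synchronous code; process.nextTick always runs before any timer fires:

```javascript
// Sketch: synchronous code finishes first, then the nextTick queue drains,
// and only then does the event loop reach the timers phase.
const order = [];

const done = new Promise(resolve => {
  setTimeout(() => { order.push('setTimeout'); resolve(order); }, 0);
  process.nextTick(() => order.push('nextTick'));
  order.push('sync');
});

done.then(o => console.log(o)); // ['sync', 'nextTick', 'setTimeout']
```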
and what exactly is an IO job?
That depends on the context. Every program gets a certain input from the user and generates a certain output. In a terminal, for example, the input is your keypresses and the output is what is displayed. When talking about Node.js IO, one usually refers to network or file operations, or more generally: code not written in JS.
For example, let's say a request comes into Node.js, which then makes an outbound HTTP request to an API to get some data while not blocking the user in the meantime.
Is that an IO job, and how would Node.js handle it?
Node.js hands the request off to the operating system via libuv (network I/O uses non-blocking sockets; file I/O and DNS lookups use a thread pool), and the main thread continues with other events on the event queue in the meantime. When the async request completes, its result is pushed onto the event queue, and the event loop picks it up and runs the registered callbacks.
what if instead of the HTTP request I wanted to asynchronously make a lengthy calculation and then return a value to the user?
You have to offload it to another thread (for example with worker_threads) or a child process; otherwise a lengthy calculation runs synchronously and blocks the event loop.
Is that handled by the event loop too, despite being CPU bound?
Everything eventually lands on the event loop, and everything is ultimately executed on the CPU; the difference is that a CPU-bound callback blocks the loop for as long as it runs.

What happens with socket's client scope on 'disconnect' event (server side)

So what happens to the socket's client scope on the 'disconnect' event?
I'm trying to avoid bad race conditions and logic flaws in my node.js + mongoose + socket.io app.
What I mean by scope is:
io.on('connection', function (client) {
  ///CLIENT SCOPE///
  //define private vars, store and operate client's session info//
  //receiving and sending client's socket signals//
});
Some background:
Let's say, for example, I implement some function that operates on the db by finding a room and writing a user into that room.
BUT, at the moment when the room (to be written into) has been found but the user is not yet written in, he disconnects. On the disconnect event I must pull him out of his last room in the db, but I cannot: it has not been saved in the db yet.
The only way I see is to set a bool value on the 'disconnect' event, check it before saving the guy into the room, and if it is true, not save him at all.
What I'm confused about is: would this bool survive the disconnect event, since it is saved in the client's scope?
What happens to the scope? Is it completely wiped out on disconnect? Or is it wiped out only when everything that relies on this scope has finished?
I'm using 'forceNew': true to force socket.connect() to reconnect immediately if something (hypothetically) goes wrong and a socket error fires without the user really leaving the site.
If the user reconnects through this 'old' socket, does he get his scope on the server back, or was the socket's previous scope wiped out on disconnection, or wiped out on reconnection by the 'connection' event?
The client closure will remain alive as long as there is code running that uses that closure so you generally don't have to worry about that issue. A closure is essentially an object in Javascript and it will only be garbage collected when there is no active code that has a reference to anything inside the closure.
As for your concurrency issue with a socket being disconnected while you are writing to the DB, you are correct to recognize that this is an issue to be careful with. Exactly what you need to do about it depends a bit on how your database behaves. Because node.js runs single threaded, your node.js code writing to the database will run to completion before any disconnect event gets processed. This doesn't mean that the database write will have completed before the disconnect event starts processing, but it does mean that it will have already been sent to the database. So, if your database processes multiple requests in the order it receives them, then it will likely just do the right thing and you won't have any extra code to worry about.
If your database could actually process the delete before the write finishes (which seems unlikely), then you'd have to code up some protection for that. The simplest approach is to implement a queue for all database operations for a given user. I'd probably create an object with common DB methods on it to implement this queue, and create a separate instance in each client closure so it is local to a given user. When a database operation is in progress, this object sets a flag. If another database operation is called while the flag is set, that second operation goes into a queue rather than being sent directly to the database. Each time a database operation finishes, it checks the queue to see whether a next operation is waiting to run, and if so, runs it.
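A minimal sketch of such a per-user operation queue, using promise chaining instead of an explicit flag (OpQueue and the queued operations are hypothetical, not from the answer's actual app):

```javascript
// Sketch: serialize async operations per user so a delete can never
// overtake an in-flight write.
class OpQueue {
  constructor() {
    this.tail = Promise.resolve(); // the most recently queued operation
  }
  // Enqueue an async operation; it runs only after all prior ones finish.
  run(op) {
    const result = this.tail.then(op, op);
    this.tail = result.catch(() => {}); // keep the chain alive on errors
    return result;
  }
}

// Usage sketch: operations complete strictly in submission order, even
// though the first one is slower.
const queue = new OpQueue();
const log = [];
const finished = Promise.all([
  queue.run(() => new Promise(resolve =>
    setTimeout(() => { log.push('write'); resolve(); }, 20))),
  queue.run(() => { log.push('delete'); return Promise.resolve(); }),
]);
finished.then(() => console.log(log)); // ['write', 'delete']
```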
FYI, I have a simple node.js app (running on a Raspberry Pi with temperature sensors) that records temperature data, and every so often it writes that data to disk using async writes. Because new temperature data could arrive while I'm in the middle of my async writes, I have a similar issue. I abstracted all operations on the data in an object, implemented a flag that indicates whether I'm in the process of writing the data and, if any method calls arrive to modify the data, those operations go in a queue and are processed only when the write has finished.
As to your last question, the client scope you have in the io.on('connection', ...) closure is associated only with that particular connection event. If you get another connection event and thus this code is triggered to run again, that will create a new and separate closure that has nothing to do with the prior one, even if the client object is the same. Normally, this works out just fine because your code in this function will just set things up again for a new connection.

WebRTC: Unable to successfully complete signalling process using DataChannel

I've been having trouble establishing a WebRTC session and am trying to simplify the issue as much as possible. So I've written up a simple copy & paste example, where you just paste the offer/answer into webforms and click submit.
The HTML+JS, all in one file, can be found here: http://pastebin.com/Ktmb3mVf
I'm on a local network, and am therefore removing the ICE server initialisation process to make this example as bare-bones as possible.
Here are the steps I'm carrying out in the example:
Page 1
Load the page, enter a channel name (e.g. test) and click create.
A new Host object is created, new PeerConnection() and createDataChannel are called.
createOffer is called, and the resulting offerSDP is pasted into the offer textarea.
Page 2
Copy offerSDP from Page 1 and paste into offer textarea on Page 2, click join.
A new Guest object is created; a PeerConnection is created and an ondatachannel handler is set.
setRemoteDescription is called for the Guest object, with the offerSDP data.
createAnswer is called and the result is pasted into the answer textarea box.
Page 1
The answerSDP is copied from Page 2 and pasted into the answer textarea of Page 1, submit answer is clicked.
Host.setRemoteDescription is called with the answerSDP data. This creates a SessionDescription, then peer.setRemoteDescription is called with the resulting data.
Those are the steps carried out in the example, but it seems I'm missing something critical. After the offerer's remoteDescription is set with the answerSDP, I try to send a test message on the dataChannel:
Chrome 40
"-- complete"
> host.dataChannel.send('hello world');
VM1387:2 Uncaught DOMException: Failed to execute 'send' on 'RTCDataChannel': RTCDataChannel.readyState is not 'open'
Firefox 35
"-- complete"
ICE failed, see about:webrtc for more details
> host.dataChannel.send('hello world');
InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable
I also had a more complicated demo operating, with a WebSocket signalling server, and ICE candidates listed, but was getting the same error. So I hope this simplification can help to track down the issue.
Again, the single-file code link: http://pastebin.com/Ktmb3mVf
To enable WebRTC clients to connect to each other, you need ICE. STUN and TURN servers, which you don't need for a local test, are part of that, but even without those helpers you still need ICE itself to tell the other end which IP/port/protocol to connect to.
There are two ways to do this. The first is "trickle ICE", where the SDP (offer/answer) is passed on without any ICE candidates; the candidates are then transported over a separate signaling channel and added as they are discovered. This speeds up the connection process, as ICE gathering takes time and some late ICE candidates might not be needed.
The classic method is to wait until all ICE candidates have been gathered, and then generate the SDP with these already included.
I have modified your latest version to do that: http://pastebin.com/g2YVvrRd
You also need to wait for the data channel to open before you can use it, so I've moved the sending of the message into the channel's onopen event.
The significant changes to the original code:
The interface callbacks were removed from Host.prototype.createOffer and Guest.prototype.createAnswer; instead, we attach the provided callback function to the respective objects for later use.
self.cb = cb;
Both Host and Guest have an added ICE handler for the PeerConnection:
var self = this;
this.peer.onicecandidate = function (event) {
  // This event is called for every discovered ICE candidate.
  // If this were trickle ICE, you'd pass the candidate on here.
  // An event without an actual candidate signals the end of the
  // ICE gathering process, which is what classic ICE waits for.
  if (!event.candidate) {
    // Fetch the up-to-date description from the PeerConnection;
    // it now contains lines for the gathered ICE candidates.
    self.offer = self.peer.localDescription;
    // Now we move on to the deferred callback function
    self.cb(self.offer);
  }
};
For the guest self.offer becomes self.answer
The interface handler $("#submitAnswer").click() does not send the message anymore; instead, it is sent once the data channel is ready, in the onopen event defined in setChannelEvents().
channel.onopen = function () {
  console.log('** channel.onopen');
  channel.send('hello world!');
};

Poco C++ websockets - how to use in non-blocking manner?

I am using the Poco C++ libraries to set up a websocket server which clients can connect to in order to stream some data to their web interface. So I have a loop that continuously sends data, and I also want to detect when the client closes the connection by using the receiveFrame() function; apart from that, the client is totally passive and never sends any data. The problem is that receiveFrame() blocks, which is not what I want. I basically want to check whether the client has called the JavaScript close() function yet, and stop streaming data if it has. I tried using
ws.setBlocking(false);
But now receiveFrame() throws an exception every time it is called. I also tried removing receiveFrame() entirely, which works if the connection is terminated by closing the browser, but if the client calls close(), the server keeps trying to send data to it. So how can I pull this off? Is there a way to check whether there are client frames to be received, and if not, just continue?
You can repeatedly call Socket::select() (with a timeout) in a separate thread; when you detect a readable socket, call receiveFrame(). In spite of the misleading name, Socket::select() wraps epoll() or poll() on platforms where those are available.
You can also implement this in somewhat more complicated but perhaps a more elegant fashion with Poco::NotificationQueues, posting a notification every time when a socket is readable and reading data in the handler.
setBlocking() does not do what you would expect it to. Here's a little info on it:
http://www.scottklement.com/rpg/socktut/nonblocking.html
What you probably want to do is use setReceiveTimeout() on your socket to control how long it will wait for before giving you back control. Then test your response and loop everything if needed. The Poco docs have more info on how to use that part of the API. Just look up WebSockets.
