What happens with socket's client scope on 'disconnect' event (server side) - javascript

So what happens to a socket's client scope on the 'disconnect' event?
I'm trying to avoid race conditions and logic flaws in my node.js + mongoose + socket.io app.
By "scope" I mean:
io.on('connection', function (client) {
    /// CLIENT SCOPE ///
    // define private vars, store and operate on the client's session info
    // receive and send the client's socket signals
});
Some background:
Let's say, for example, I implement some function that operates on the db by finding a room and writing a user into that room.
BUT, at the moment the room (to be written into) has been found but the user has not yet been written in, he disconnects. On the disconnect event I must pull him out of his last room in the db, but I can't: at that moment it hasn't been saved to the db yet.
The only way I see is to set a boolean on the 'disconnect' event, check it before saving the guy into the room, and if it is true, not save him at all.
What I'm confused about is whether this boolean would survive the disconnect event, since it is saved in the client's scope.
What happens to the scope? Is it completely wiped out on disconnect? Or is it wiped out only when everything that relies on it has finished?
I'm using 'forceNew': true to force socket.connect() to reconnect immediately if (hypothetically) something goes wrong and a socket error fires without the user really leaving the site.
If the user reconnects through this 'old' socket, does he get his scope on the server back? Or was the socket's previous scope wiped out on disconnection, or wiped out on reconnection by the 'connection' event?

The client closure will remain alive as long as there is code running that uses that closure, so you generally don't have to worry about that issue. A closure is essentially an object in JavaScript, and it will only be garbage collected when no active code holds a reference to anything inside it.
As for your concurrency issue with a socket being disconnected while you are writing to the DB, you are correct to recognize that this is an issue to be careful with. Exactly what you need to do about it depends a bit on how your database behaves. Because node.js runs single threaded, your node.js code writing to the database will run to completion before any disconnect event gets processed. This doesn't mean that the database write will have completed before the disconnect event starts processing, but it does mean that it will have already been sent to the database. So, if your database processes multiple requests in the order it receives them, then it will likely just do the right thing and you won't have any extra code to worry about.
If your database could actually process the delete before the write finishes (which seems unlikely), then you'd have to code up some protection for that. The simplest concept there is to implement a database queue for all database operations on a given user. I'd probably create an object with common DB methods on it to implement this queue, and create a separate instance in each client closure so it is local to a given user. When a database operation is in process, this object would set a flag indicating an operation is in progress. If another database operation is called while this flag is set, that second operation goes into a queue rather than being sent directly to the database. Each time a database operation finishes, it checks the queue to see whether a next operation is waiting to run; if so, it runs it. A sketch of this idea follows.
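Here is a minimal sketch of that per-user serialization, using a promise chain instead of an explicit flag-and-queue pair (the join_room event, the Room model, and the mongoose calls are illustrative, not from the original):

function DbQueue() {
    this.chain = Promise.resolve(); // tail of the queue
}

DbQueue.prototype.run = function (op) {
    // op is a function returning a promise (e.g. a mongoose query).
    // It starts only after every previously queued operation settles;
    // a rejection is swallowed so the chain stays usable.
    var result = this.chain.then(op);
    this.chain = result.catch(function () {});
    return result;
};

io.on('connection', function (client) {
    var queue = new DbQueue(); // one queue per connection, held in the closure

    client.on('join_room', function (roomId) {
        queue.run(function () {
            return Room.updateOne({ _id: roomId }, { $push: { users: client.id } }).exec();
        });
    });

    client.on('disconnect', function () {
        // guaranteed to run only after the join write above has settled
        queue.run(function () {
            return Room.updateMany({}, { $pull: { users: client.id } }).exec();
        });
    });
});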
FYI, I have a simple node.js app (running on a Raspberry Pi with temperature sensors) that records temperature data and every so often writes that data to disk using async writes. Because new temperature data can arrive while I'm in the middle of an async write, I have a similar issue. I abstracted all operations on the data into an object and implemented a flag that indicates whether a write is in progress; any method calls that arrive to modify the data while the flag is set go into a queue and are processed only when the write has finished.
As to your last question, the client scope you have in the io.on('connection', ...) closure is associated only with that particular connection event. If you get another connection event and thus this code is triggered to run again, that will create a new and separate closure that has nothing to do with the prior one, even if the client object is the same. Normally, this works out just fine because your code in this function will just set things up again for a new connection.
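Tying this back to the boolean from the question: because an async callback closes over the connection's scope, a flag set there does survive the disconnect event for as long as the callback can still run. A minimal sketch (the join_room event and Room model are again illustrative):

io.on('connection', function (client) {
    var disconnected = false; // lives in this connection's closure

    client.on('disconnect', function () {
        disconnected = true;
    });

    client.on('join_room', function (roomId) {
        Room.findById(roomId, function (err, room) {
            if (err || disconnected) return; // user left while we were querying
            room.users.push(client.id);
            room.save();
        });
    });
});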

Related

Meteor, communication between the client and the server

This is a snippet from the todo list tutorial. The variable checked is represented on both the client and the server side. How do the client and the server communicate to keep checked consistent?
Template.task.events({
  'click .toggle-checked'() {
    // Set the checked property to the opposite of its current value
    Tasks.update(this._id, {
      $set: { checked: !this.checked },
    });
  },
  'click .delete'() {
    Tasks.remove(this._id);
  },
});
checked is an attribute defined on a Tasks object, as defined in this app.
In Meteor, the definitive record of this object is stored on the server (in MongoDB), however there is a client side cache that is also being manipulated here, known as MiniMongo. The Meteor framework does a lot of work in the background (via the DDP protocol) to keep the server and client side objects in sync.
In this case, the following happens when a user clicks a checkbox (firing the 'click .toggle-checked' event code and its Tasks.update call):
1. First, the client-side MiniMongo cache is updated. This is known as Optimistic UI, and it enables the client UI to respond fast (without waiting for the server).
2. A message is sent to the server (a Meteor Method) saying that the client wants to update the Tasks object by setting the checked variable to a new value.
3. The server receives the message requesting the update, checks that this is a valid operation, and either processes it (updating the MongoDB version of the Tasks object) or refuses to process the update, as appropriate.
4. The server sends out a DDP update of the resulting status of the Tasks object to all clients that have subscribed to a publication that includes it.
5. Clients that have previously subscribed receive this DDP update and replace their MiniMongo version with the server's version of the Tasks object, ensuring that all clients are in sync with the server.
Now in the ideal case, when the server accepts the client's changes, the new version of Tasks received (in step 5) by the initiating client will match the object it optimistically updated (in step 1).
However by implementing all these steps the Meteor framework also synchronizes other clients, and handles the case when the server rejects the update, or possibly modifies additional fields, as appropriate for the application.
Luckily though, this is all handled by the Meteor framework, and all you need to do is call Tasks.update for all this magic to happen!
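For reference, the tutorial's direct Tasks.update call can later be expressed as an explicit Meteor method, which makes steps 2 and 3 visible in your own code. A minimal sketch (the method name tasks.setChecked is made up):

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';

Meteor.methods({
  'tasks.setChecked'(taskId, checked) {
    check(taskId, String);
    check(checked, Boolean);
    // Runs first against MiniMongo on the client (the optimistic update),
    // then again for real on the server, whose result wins and is
    // published back out over DDP.
    Tasks.update(taskId, { $set: { checked } });
  },
});

// the client-side event handler would then call:
// Meteor.call('tasks.setChecked', this._id, !this.checked);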
Meteor likes to blur the line between client and server. There are things you can do to separate code; for instance, JavaScript files (like all files) inside the /server directory are restricted to the server. This means that client users can't see this code.
/client is obviously the opposite. You can also check where code is running with Meteor.isClient and Meteor.isServer.
Now, what does this mean for your code?
Depending on where your code lives, there are different access levels. However, inside the script itself there is basically no difference: checked is known on both server and client inside that script, because that's how Meteor runs; the blurred line between client and server makes this possible.
Meteor employs something called "database everywhere", which means it doesn't matter where the code is called, because it will run against the database API on either side.
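A minimal sketch of those checks (note that only placement under /server actually hides code from clients; the flags just control where it executes):

import { Meteor } from 'meteor/meteor';

if (Meteor.isServer) {
  // executes only on the server
  console.log('server-only setup');
}

if (Meteor.isClient) {
  // executes only in the browser
  console.log('client-only setup');
}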

Can Socket.io emits arrive out of order? What if volatile?

I've been looking around for a definitive answer to this, but I seem to keep finding contradictory answers (e.g. this and this).
Basically, if I
socket.emit('game_update', {n: 1});
from a node.js server and then, 20 ms later,
socket.emit('game_update', {n: 2});
from the same server, is there any way that the n:2 message arrives before the n:1 message? In other words, does the n:1 message "block" the receiving of the n:2 message if the n:1 message somehow got lost on the way?
What if they were volatile emits? My understanding is that the n:1 message wouldn't block the n:2 message -- if the n:1 message got dropped, the n:2 message would still be received whenever it arrived.
Background: I'm building a node.js game server and want to better understand how my game updates are traveling. I'm using volatile emit right now and I would like to increase the server's tick rate, but I want to make sure that independent game updates wouldn't block each other. I would rather the client receive an update every 30 ms with a few dropped updates scattered here and there than have the client receive an update, receive nothing for 200 ms, and then receive 6 more updates all at once.
Disclaimer: I'm not completely familiar with the internals of socket.io.
is there any way that the n:2 message arrives before the n:1 message?
It depends on the transport that you're using. For the polling transport, I think it's fair to say that it's perfectly possible for messages to arrive out-of-order, because each message can arrive over a different connection.
With the websocket transport, which maintains a persistent connection, the message order is reasonably guaranteed.
What if they were volatile emits?
With volatile emits, all bets are off, it's fire-and-forget. I think that in normal situations, the server will wait (and queue up messages) for a client to be ready to receive messages, unless those messages are volatile, in which case the server will just drop them.
From what you're saying, I think volatile emits are what you want, although once a websocket connection has been established I don't think the described scenario ("receive an update, receive nothing for 200 ms, and then receive 6 more updates all at once") is likely to happen. Perhaps only when the connection gets lost and is re-established.
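If ordering matters, one option is to skip the polling transport entirely so every message travels over the same persistent connection. A minimal client-side sketch using socket.io's transports option (the URL is a placeholder):

// connect over websocket only, never HTTP long-polling,
// so all emits share one ordered TCP connection
const socket = io('https://game.example', {
  transports: ['websocket'],
});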
The answer is: yes, it can possibly arrive later, but it is highly unlikely, given that sockets are by nature persistent connections and ordering is all but guaranteed.
According to the Socket.io documentation, messages will be discarded when the client is not connected. This doesn't necessarily fit your use case; however, the documentation itself describes volatile events with an interesting example: sending the position of a character.
// server-side
io.on("connection", (socket) => {
  console.log("connect");
  socket.on("ping", (count) => {
    console.log(count);
  });
});

// client-side
let count = 0;
setInterval(() => {
  socket.volatile.emit("ping", ++count);
}, 1000);
If you restart the server, you will see in the console:
connect
1
2
3
4
# the server is restarted, the client automatically reconnects
connect
9
10
11
Without the volatile flag, you would see:
connect
1
2
3
4
# the server is restarted, the client automatically reconnects and sends its buffered events
connect
5
6
7
8
9
10
11
Note: The documentation explicitly states that this will happen during a server restart, meaning that your connection to the client likely has to be lost in order for the volatile emits to be dropped.
I would say a good practice would be to write your emits as volatile just in case you do get a dropped client; however, this will depend heavily on your game requirements.
As for the goal, I would recommend client-side prediction using some sort of dynamic time system or delta time, with the client and server keeping a synced clock to help alleviate some of the problems you can incur. Here's an example of how you can do that; though I'm not a fan of the creator's syntax, it can easily be adapted to your needs.
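As a minimal sketch of the synced-clock part, socket.io acknowledgements can be used to estimate the client/server clock offset (the 'time' event name is made up):

// server-side: answer with the current server time
io.on("connection", (socket) => {
  socket.on("time", (clientSentAt, ack) => {
    ack(Date.now());
  });
});

// client-side: estimate offset = serverClock - clientClock
const sentAt = Date.now();
socket.emit("time", sentAt, (serverNow) => {
  const rtt = Date.now() - sentAt;
  const offset = serverNow + rtt / 2 - Date.now();
  // server timestamps on game updates can now be mapped to local time
});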
Hope this helps anyone who hits this topic.
Socket.io - Volatile events
Client Side Prediction

Return JS function as string from server and execute?

I'm trying to limit the socket.io connection time on a node.js server. I asked a previous question as to whether this was possible without causing huge overhead on the server and/or blocking the main thread if we had, say, 1000 concurrent socket connections in various rooms, through something like:
io.on('connection', function (socket) {
    var params = socket.handshake.query; // e.g. { roomId, maxTime }
    socket.join(params.roomId);
    setTimeout(function () {
        socket.leave(params.roomId);
    }, 180000);
});
The best case scenario, from a resources perspective, would be to handle this on the client side, but it isn't exactly secure to send the timeout/disconnection value: any client-side code that dealt with it could be easily manipulated, and a knowing user could in effect prevent the disconnect event/functionality from being called.
Could I execute a function sent from the server as a string on the client side? Say:
setTimeout(function () { /* disconnect */ }, 18000);
socket.emit('timeout_set', { foo: 'bar' });
Then handle appropriately on the server with a response knowing that the timeout has indeed been set:
socket.on('timeout_set', function (params) {
    socket.emit('proceed_with_stuff', { foo: 'bar' }); // includes critical info for proceeding
});
I'm thinking this depends on a few things:
Can you take a string from a server response and execute said string as JS?
Can a client still disrupt the setTimeout function without also triggering the socket.disconnect event?
Is this logic or anything similar possible?
Would the first scenario work on a node.js server given a number of concurrent connections?
Use the Function constructor; see https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/Function
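A minimal client-side sketch of that idea (the timeout_script event name is hypothetical, and the string contents are whatever the server chooses to send):

socket.on('timeout_script', function (src) {
  // src might be "setTimeout(function () { socket.disconnect(); }, 180000);"
  var run = new Function('socket', src);
  run(socket);
  socket.emit('timeout_set', { ok: true }); // confirm to the server
});

Note that this doesn't change the trust problem from the question: the user can still inspect or neuter whatever the constructed function does, so a server-side timeout remains the only enforceable one.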

MEAN.JS setInterval process for event loop (gets data from another server)

I have a mean.js server running that allows a user to check their profile. I want a setInterval-like process running every second which, based on a condition, retrieves data from another server and updates MongoDB (simple polling / long polling). This updates the values that the user sees as well.
Q: Is this kind of event loop allowed in node.js? If so, where does the logic go that starts the interval when the server starts? Or can events only be caused by user actions (e.g. the user clicking their profile to view the data)?
Q: What are the implications of having both ends reading and writing to the same DB? Will colliding writes just overwrite each other, or fault? Is there info on how much read/write traffic would overload it?
I think you can safely run a MongoDB cron job that updates every x days/hours/minutes. In the case of a user profile, I assume that's not critical data that requires you to update your DB in real time.
If you need real-time updates, then set up DB replication and point reads at a DB that is replicated in real time.
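On the first question: a timer started in the server's entry point is fine on node's event loop as long as its callback is non-blocking. A minimal sketch (the request library usage, the shouldPoll() condition, and the Profile model are all hypothetical placeholders):

// in server.js, after mongoose has connected
var request = require('request'); // plain HTTP client

setInterval(function () {
    if (!shouldPoll()) return; // your app-specific condition
    request('https://other-server.example/data', function (err, res, body) {
        if (err) return console.error(err);
        var data = JSON.parse(body);
        // MongoDB applies writes to a single document atomically, so a user
        // reading their profile never sees a half-applied update.
        Profile.updateOne(
            { _id: data.userId },
            { $set: { stats: data.stats } },
            function (err) { if (err) console.error(err); }
        );
    });
}, 1000);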

Poco C++ websockets - how to use in non-blocking manner?

I am using the Poco C++ libraries to set up a websocket server which clients can connect to and which streams some data to their web interface. So I have a loop that continuously sends data, and I also want to listen for the client closing the connection using the receiveFrame() function; apart from that, the client is totally passive and doesn't send any data whatsoever. The problem is that receiveFrame() blocks the connection, which is not what I want. I basically want to check whether the client has called the close() JavaScript function yet, and stop streaming data if it has. I tried using
ws.setBlocking(false);
But now receiveFrame() throws an exception every time it is called. I also tried removing receiveFrame() entirely, which works if the connection is terminated by closing the browser, but if the client calls close(), the server still tries to send data to it. So how can I pull this off? Is there a way to check whether there are client frames to be received and, if not, just continue?
You can repeatedly call Socket::select() (with a timeout) in a separate thread; when you detect a readable socket, call receiveFrame(). In spite of the misleading name, Socket::select() wraps epoll() or poll() on platforms where those are available.
You can also implement this in a somewhat more complicated but perhaps more elegant fashion with Poco::NotificationQueues, posting a notification every time a socket is readable and reading the data in the handler.
setBlocking() does not do what you would expect it to. Here's a little info on it:
http://www.scottklement.com/rpg/socktut/nonblocking.html
What you probably want to do is use setReceiveTimeout() on your socket to control how long it will wait for before giving you back control. Then test your response and loop everything if needed. The Poco docs have more info on how to use that part of the API. Just look up WebSockets.
