I'm using socket.io on both the client and the server side. It is a great library, but I must say that the default behavior is not described in the docs, which makes using the library confusing. At least I didn't find any references for the default behavior.
Basically, I watched tutorials where a basic chat app was built with socket.io. In the tutorials, the server automatically sent a message to all connected clients. Is this the default behavior on the server side?
I'm not sure about this. I'm developing an app where the user can (un)subscribe to specific topics and receive values from the server. Let's say I have two topics (topic1 and topic2). I opened two clients (client1 and client2) and subscribed to topic1 from client1. I noticed that client1 received value1 of topic1, but client2 received nothing.
const io = require('socket.io')(3000); // create server

io.on('connection', socket => {
  console.log("client is connected over sockets");
  socket.on('subscribe', () => {
    socket.emit('send-msg', "send to client");
  });
});
In the case above, will the server send to all clients or only to the one client? What is the default behavior of socket.io?
PS: Another thing I noticed about socket.io is that there are many ways to do the same thing, and it is not documented well. For example, I'm instantiating a client with the socketIOClient function: const socket = socketIOClient("my_host"). But I've seen many tutorials that use the openSocket function, or even the io function directly (there, for some reason, the author added <script defer src="http://localhost:3000/socket.io/socket.io.js"></script> in the HTML).
All these functions do the same thing, right?
You're looking at the difference between namespace.emit and socket.emit. A socket is one specific connection, and emitting an event on it sends it only to that one connection. A namespace on the other hand is a group of several sockets, and emitting an event on it emits it to every socket in the group. The entire io server is one namespace by default, so io.emit(...) sends a message to all connected clients. You can group your sockets into arbitrary namespaces and rooms to make it easy to send messages to selected groups.
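The difference is easy to see in a runnable sketch. Below is a simplified in-memory model of those scopes; FakeServer and FakeSocket are illustrative stand-ins, not the real socket.io API. A socket-level emit reaches one client, a room-level emit reaches that room's members, and a server-level emit reaches everyone:

```javascript
// Simplified model of socket.io's emit scopes -- NOT the real library,
// just the grouping logic it implements.
class FakeSocket {
  constructor() { this.inbox = []; }
  emit(event, msg) { this.inbox.push([event, msg]); } // one client only
}

class FakeServer {
  constructor() { this.sockets = new Set(); this.rooms = new Map(); }
  connect(socket) { this.sockets.add(socket); }
  join(socket, room) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(socket);
  }
  emit(event, msg) {                    // like io.emit: all connected sockets
    this.sockets.forEach((s) => s.emit(event, msg));
  }
  to(room) {                            // like io.to(room).emit: room members only
    const members = this.rooms.get(room) || new Set();
    return { emit: (event, msg) => members.forEach((s) => s.emit(event, msg)) };
  }
}

const io = new FakeServer();
const client1 = new FakeSocket();
const client2 = new FakeSocket();
io.connect(client1);
io.connect(client2);

io.join(client1, 'topic1');                 // client1 subscribes to topic1
io.to('topic1').emit('send-msg', 'value1'); // only client1 receives this
io.emit('send-msg', 'broadcast');           // every connected client receives this

console.log(client1.inbox.length); // 2
console.log(client2.inbox.length); // 1
```

In real socket.io code the same shape appears as socket.join('topic1') inside your 'subscribe' handler and io.to('topic1').emit(...) when a value for that topic arrives.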
Related
So, I am still in the experimental phase of Socket.io, but I just can't figure out why my code is doing this. I have the code below, and the connection message is logged over and over even when there is only one connection. Do you know a solution?
io.on('connection', (socket) => {
  console.log("A new user is connected.")
})
Client side:
<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io()
</script>
Node.js Console:
A new user is connected.
A new user is connected.
A new user is connected.
A new user is connected.
A new user is connected.
A new user is connected.
A new user is connected.
...
(Note: there is only one connection, and I have already cleared the browser cache)
Here are some of the possible reasons for socket.io connecting over and over:
Your socket.io client and server versions do not match and this causes a connection failure and an immediate retry.
You are running with some infrastructure (like a proxy or load balancer) that is not configured properly to allow lasting webSocket connections.
You are running a clustered server without sticky webSocket connections.
You have put the server-side io.on('connection', ...) code inside some other function that is called more than once, registering multiple handlers for the same event. It then looks like you're getting multiple events, but you actually have multiple listeners for one occurrence of the event.
Your client code is calling its var socket = io() more than once.
Your client page is reloading (and thus restarting the connection on each reload) either because of a form post or for some other reason.
FYI, you can sometimes learn something useful by installing listeners for all the possible error-related events on both the client and server connections, then logging which ones occur and any parameters they offer. The full list of client-related error events you can listen to and log is in the socket.io client documentation.
To solve the repetition problem, remove any previously registered handler before adding a new one:
io.off('connection').on('connection', (socket) => {
  console.log("A new user is connected.")
})
I ran into a weird problem when using socket.io again after many years.
Years ago, I could use the following code on client side
socket.emit('user', {userId: 2});
// and somewhere else in the code I'd listen for incoming 'user' replies
socket.on('user',(reply) => {
// do something with user data received from server
});
Now, when I have the same code on the client side and I emit the "user" request, the socket.on('user') callback is immediately fired with the request payload that was supposed to go to the server (which is offline).
I thought socket.on() listeners were triggered only by replies from the server, not by outgoing messages from the client itself.
Is socket.io supposed to work like this, or am I missing something in the configuration?
I think I solved this after accidentally stumbling on another post about how socket.io keeps the connection alive.
The socket.emit('user', {userId: 2}); was just an example.
In my real app, I used the event name "ping", which is reserved by socket.io.
The client constantly sends and listens for pings to keep the connection alive.
So when I added my own socket.on('ping') listener, it hooked into socket.io's internal ping/pong system (at least I think that's the case).
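One way to defend against this is a small guard around event registration. The reserved list below roughly matches socket.io v2; newer versions reserve a different, smaller set, so check the docs for your version. safeOn is an illustrative helper, not part of socket.io, and the fakeSocket stub only exists so the example is self-contained:

```javascript
// Event names socket.io v2 uses internally -- emitting or listening on these
// collides with the library's own protocol. Verify against your version's docs.
const RESERVED_EVENTS = new Set([
  'connect', 'connect_error', 'connect_timeout', 'disconnect',
  'disconnecting', 'error', 'ping', 'pong',
  'reconnect', 'reconnect_attempt', 'reconnect_error', 'reconnect_failed'
]);

// Hypothetical helper: refuse to register a handler under a reserved name.
function safeOn(socket, event, handler) {
  if (RESERVED_EVENTS.has(event)) {
    throw new Error('"' + event + '" is reserved by socket.io; pick another name');
  }
  socket.on(event, handler);
}

// Works with anything exposing .on(); here a stub instead of a real socket.
const fakeSocket = { handlers: {}, on(e, h) { this.handlers[e] = h; } };
safeOn(fakeSocket, 'user', () => {});     // fine
let rejected = false;
try {
  safeOn(fakeSocket, 'ping', () => {});   // throws: 'ping' is reserved
} catch (e) {
  rejected = true;
  console.log(e.message);
}
```

Renaming the application event (e.g. 'app-ping') is the simpler fix; the guard just makes the mistake loud instead of silent.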
I have a node server which connects to CloudMQTT and receives messages in app.js. My client web app runs on the same node server, and I want to display the messages received in app.js elsewhere in a .ejs file. I'm struggling with how best to do this.
app.js
// Create an MQTT client
var mqtt = require('mqtt');
// Create a client connection to CloudMQTT for live data
var client = mqtt.connect('xxxxxxxxxxx', {
  username: 'xxxxx',
  password: 'xxxxxxx'
});

client.on('connect', function() { // When connected
  console.log("Connected to CloudMQTT");
  // Subscribe to the Motion topic
  client.subscribe('Motion');
});

// When a message arrives, do something with it
client.on('message', function(topic, message, packet) {
  // ** Need to pass message out **
});
Basically you need a way for the client (browser code with EJS - HTML, CSS and JS) to receive live updates. There are two main ways to do this between the client and the node service:
A websocket session instantiated by the client.
A polling approach.
What's the difference?
Under the hood, a websocket is a full-duplex communication mechanism. That means you can open a socket from the client (browser) to the node server and they can talk to each other both ways over a long-lived session. The pro is that updates are often instantaneous, without incurring the cost of another HTTP request as in the polling case. The con is that it uses a socket connection that may be long-lived, and any server typically has a socket pool with limited capacity for many concurrent sockets. There are ways to scale around this issue, but if it's a big concern for you, you may want to go with polling.
Polling is where you set up an endpoint on your server that the client JS code hits every now and then. That endpoint returns the updated information. The con is that you are now making a new request for every update, which may not be desirable if many updates are expected and the app must be updated in the timeliest manner possible (most of the time polling is sufficient, though). The pro is that you do not have a live connection open on the server indefinitely.
Again, there are many more pros and cons, these are just the obvious ones. You decide how to implement it. When the client receives the data from either of these mechanisms, you may update the UI in any suitable manner.
From the server end, you will need a way to persist the information coming from CloudMQTT. There are multiple ways to do this. If you do not care about memory consumption and are ok with potentially throwing away old data if a client does not ask for it for a while, then it may be ok to just store this in memory in a regular javascript object {}. If you do care about persisting the data between server restarts/crashes (probably best), then you can persist to something like Redis, Mongo, any of the SQL stores if your data is relational in nature, or even a regular JSON file on disk (see fs.writeFile).
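If you go the in-memory route, a minimal sketch might look like this. The record/get names are illustrative, not from mqtt or any library; the idea is just to keep the latest message per topic so your websocket handler or polling endpoint can read it back:

```javascript
// Minimal in-memory store: latest message per MQTT topic, in a plain object.
// Data is lost on restart -- use Redis/Mongo/etc. if that matters (see above).
const latest = {};

function record(topic, message) {
  // MQTT payloads arrive as Buffers, so normalize to a string here.
  latest[topic] = { message: message.toString(), at: Date.now() };
}

function get(topic) {
  return latest[topic] || null;
}

// Inside the MQTT handler you would call:
//   client.on('message', function(topic, message) { record(topic, message); });
record('Motion', Buffer.from('detected'));
console.log(get('Motion').message);   // "detected"
console.log(get('Temperature'));      // null (no message seen yet)
```

A polling endpoint then simply serializes get(topic), while a websocket handler would push record's argument out as it arrives.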
Hope this helped give you a step in the right direction!
I'm running a node.js server. On my website I use different URLs for different pages. For instance:
mydomain.com/ -- Index
mydomain.com/register -- Register
mydomain.com/profile -- Profile
I am using socket.io to send chat messages and notifications to the client. However, whenever the user switches pages or performs a POST request, the socket connection is disconnected and then reconnected.
I'm wondering what the cost of socket.io's connect/disconnect handling is, and whether it is sustainable for all clients to reconnect their sockets each time they perform an action on my website. I've looked at the socket.io documentation without finding an answer.
That's what I have found too, but it really depends on your code: if each page opens its own connection, the connection will be reestablished on every navigation. I am also in the habit of storing messages so that state can be reestablished on reconnect - something to this effect:
const events = []

const onSomeEvent = function(myEvent) {
  events.push(myEvent)                  // remember the event
  socket.emit("onSomeEvent", myEvent)
}

socket.on("connect", function() {
  // on each (re)connect, replay everything recorded so far
  events.forEach(function(myEvent) {
    socket.emit("onSomeEvent", myEvent)
  })
})
Today, I integrated Redis into my node.js application and am using it as a session store. Basically, upon successful authentication, I store the corresponding user object in Redis.
When I receive http requests after authentication, I attempt to retrieve the user object from Redis using a hash. If the retrieval was successful, that means the user is logged in and the request can be fulfilled.
The act of storing the user object in Redis and the retrieval happen in two different files, so I have one Redis client in each file.
Question 1:
Is it ok having two Redis clients, one in each file? Or should I instantiate only one client and use it across all areas of the application?
Question 2:
Does the node-redis library provide a method to show a list of connected clients? If it does, I will be able to iterate through the list, and call client.quit() for each of them when the server is shutting down.
By the way, this is how I'm implementing the "graceful shutdown" of the server:
//Gracefully shutdown and perform clean-up when kill signal is received
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);
function cleanup() {
  server.stop(function() {
    //todo: quit all connected redis clients
    console.log('Server stopped.');
    //exit the process
    process.exit();
  });
}
In terms of design and performance, it's best to create one client and use it across your application. This is pretty easy to do in node. I'm assuming you're using the redis npm package.
First, create a file named redis.js with the following contents:
const redis = require('redis');

// Node caches a module after the first require, so every file that
// requires this one gets the same client instance back.
const RedisClient = (function() {
  return redis.createClient();
})();

module.exports = RedisClient;
Then, say in a file set.js, you would use it as so:
const client = require('./redis');
client.set('key', 'value');
Then, in your index.js file, you can import it and close the connection on exit:
const client = require('./redis');
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);
function cleanup() {
  client.quit(function() {
    console.log('Redis client stopped.');
    server.stop(function() {
      console.log('Server stopped.');
      process.exit();
    });
  });
}
Using multiple connections may be required by how the application uses Redis.
For instance, as soon as a connection is used for the purpose of listening to a pub/sub channel, it can only be used for that and nothing else. Per the documentation on SUBSCRIBE:
Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
So if your application needs to subscribe to channels and also use Redis as a general value cache, then it needs two clients at a minimum: one for subscribing to channels and one for using Redis as a cache.
There are also Redis commands that are blocking, like BLPOP. A busy web server normally handles multiple requests at once. Suppose that to answer request A the server uses its Redis client to issue a blocking command. Then request B comes in and the server needs to issue a non-blocking command to Redis, but the client is still waiting for the blocking command issued for request A to finish. Now the response to request B is delayed by another request. This can be avoided by using a different client for the second request.
If you do not use any of the facilities that require more than one connection, then you can and should use just one connection.
If the way you use Redis is such that you need more than one connection, and you just need a list of connections but no sophisticated connection management, you could just create your own factory function: it would call redis.createClient() and save the client before returning it. Then at shutdown time, you could go over the list of saved clients and close them. Unfortunately, node-redis does not provide such functionality built-in.
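A minimal sketch of such a factory follows. The client-creating function is injected, so with node-redis you would pass redis.createClient; here a stub stands in so the example is self-contained, and makeClientFactory is an illustrative name, not a library API:

```javascript
// Hypothetical factory: every client created through it is tracked,
// so they can all be closed together at shutdown.
function makeClientFactory(createClient) {
  const clients = [];
  return {
    create(...args) {
      const client = createClient(...args);
      clients.push(client);       // remember it for shutdown
      return client;
    },
    quitAll() {
      clients.forEach((client) => client.quit());
      return clients.length;      // how many clients were closed
    }
  };
}

// Demonstration with a stub instead of a real Redis connection.
// In a real app: const factory = makeClientFactory(redis.createClient);
let quits = 0;
const factory = makeClientFactory(() => ({ quit: () => { quits += 1; } }));
factory.create();   // e.g. the subscriber client
factory.create();   // e.g. the cache client
const closed = factory.quitAll();
console.log(closed, quits);   // 2 2
```

Your SIGINT/SIGTERM cleanup handler would then call factory.quitAll() before stopping the server.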
If you need more sophisticated client management than the factory function described above, then the typical way to manage the multiple connections created is to use a connection pool but node-redis does not provide one. I usually access Redis through Python code so I don't have a recommendation for Node.js libraries, but an npm search shows quite a few candidates.