Node.js live text update with CloudMQTT

I have a node server which connects to CloudMQTT and receives messages in app.js. My client web app runs on the same node server, and I want to display the messages received in app.js elsewhere, in a .ejs file. I'm struggling with how best to do this.
app.js
// Create an MQTT client
var mqtt = require('mqtt');
// Create a client connection to CloudMQTT for live data
var client = mqtt.connect('xxxxxxxxxxx', {
  username: 'xxxxx',
  password: 'xxxxxxx'
});
client.on('connect', function() { // When connected
  console.log("Connected to CloudMQTT");
  // Subscribe to the Motion topic
  client.subscribe('Motion', function() {
    // When a message arrives, do something with it
    client.on('message', function(topic, message, packet) {
      // ** Need to pass message out **
    });
  });
});

You need a way for the client (browser code with EJS, i.e. HTML, CSS and JS) to receive live updates. There are essentially two ways to do this between the client and the node service:
A websocket session instantiated by the client.
A polling approach.
What's the difference?
Under the hood, a websocket is a full-duplex communication mechanism. That means you can open a socket from the client (browser) to the node server and they can talk to each other both ways over a long-lived session. The pro is that updates are often instantaneous, without incurring the cost of making another HTTP request as in the polling case. The con is that it uses a socket connection that may be long-lived, and any server typically has a socket pool with a limited ability to handle many sockets. There are ways to scale around this issue, but if it's a big concern for you, you may want to go with polling.
Polling is where you set up an endpoint on your server that the client JS code hits every now and then. That endpoint returns the updated information. The con is that you are now making a new request in order to get updates, which may not be desirable if a lot of updates are expected and the app needs to be updated in the timeliest manner possible (most of the time polling is sufficient, though). The pro is that you do not have a live connection open on the server indefinitely.
Again, there are many more pros and cons, these are just the obvious ones. You decide how to implement it. When the client receives the data from either of these mechanisms, you may update the UI in any suitable manner.
From the server end, you will need a way to persist the information coming from CloudMQTT. There are multiple ways to do this. If you do not care about memory consumption and are OK with potentially throwing away old data when a client does not ask for it for a while, then it may be fine to just store it in memory in a regular JavaScript object {}. If you do care about persisting the data between server restarts/crashes (probably best), then you can persist to something like Redis, Mongo, any of the SQL stores if your data is relational in nature, or even a regular JSON file on disk (see fs.writeFile).
Hope this helped give you a step in the right direction!

Related

Better alternative to pinging the database over and over?

I want to create a dashboard that automatically updates when new data is posted.
My first thought was to just make a javascript function and put a fetch statement in it and then loop the function every second or every couple of seconds...
Obviously, this is not a great solution. But I don't know what the better way is...
Some notes:
-PHP Server-Side Language
-Ran on Localhost so traffic is not going over the internet
Can anyone advise what I should be doing or if this is an acceptable approach?
Thanks in advance!
Server Side:
You can look for any onUpdate events, if your database supports such events.
Or else just run a query at a timed interval to fetch new updates from the database. (The connection to the database is made just once and all subsequent requests go through the same connection, so this isn't a bad approach.)
But when it comes to the client side and receiving those updates, you can make it efficient in either of two ways:
[Simple] Use Socket.IO: push an event with your new data and listen for it on the client side. (This way the socket connection is made just once and all subsequent responses are received over the same connection.)
Docs: https://socket.io/docs/v4/index.html
[Complex] Use HTTP stream
Example: https://gist.github.com/igrigorik/5736866

Can WebSockets replace AJAX when it comes to Database requests?

This may seem like an extremely dumb question, but I am currently switching my Website from using the EventSource Polling constructor to the WebSocket standard which is implemented in Node.js.
Originally, all the backend on my website was handled with PHP. With the introduction of Node.js, I am trying to switch as much as I can without going outside of the "standard". By standard, I mean that I typically see WebSocket implementations that send small data and receive small data back, versus performing database queries and then sending large amounts of data back to the client.
Can WebSockets replace AJAX when it comes to Database requests?
Let's consider a small hello world program in PHP/JavaScript (AJAX) vs Node.js/JavaScript (WebSockets)
PHP/JavaScript (AJAX)
// HelloWorld.php with Laravel in the Backend
Table::update([ 'column' => $_POST['message'] ]);
echo $_POST['message'];
Ajax.js with a custom ajax function
Global.request("HelloWorld.php").post({
  message: "Hello World"
}).then(message => alert(message));
Node.js/JavaScript (WebSockets)
// skip all the server setup
server.on('connection', function (socket) {
  // Listen on the individual connection, not on the server itself
  socket.on('message', function (message) {
    sqlConnection.query("UPDATE `table` SET `column` = ?", [message], function () {
      socket.send(message);
    });
  });
});
WebSocket.js:
let socket = new WebSocket('ws://example.com');
socket.onmessage = function (event) {
  alert(event.data);
};
// Wait for the connection to open before sending
socket.onopen = function () {
  socket.send("Hello World");
};
They both essentially do the same thing, in a slightly different way. Now, at this scale it would not make sense to use WebSockets. It's just an example, though: imagine it scaled up to a point where Node.js is processing bigger queries and sending lots of data to the client. Is this acceptable?
Yes, theoretically you could trigger a DB query over WebSockets. Both HTTP and WebSockets are built on TCP, which does the job of transferring data and acts as the bridge between network requests and responses.
The bigger point is that WebSockets were intended to lessen the burden of opening and closing connections, which you would otherwise incur for each AJAX request. This comes with several application-level benefits, including real-time media streaming.
So what's the benefit of sticking with HTTP if you don't have a specific use case for WebSockets? HTTP has a robust ecosystem of tools and is largely plug & play; think of things like security and standardization. WebSockets are a relatively newer technology and haven't developed the same ecosystem.

Node.js: Closing all Redis clients on shutdown

Today, I integrated Redis into my node.js application and am using it as a session store. Basically, upon successful authentication, I store the corresponding user object in Redis.
When I receive http requests after authentication, I attempt to retrieve the user object from Redis using a hash. If the retrieval was successful, that means the user is logged in and the request can be fulfilled.
The act of storing the user object in Redis and the retrieval happen in two different files, so I have one Redis client in each file.
Question 1:
Is it ok having two Redis clients, one in each file? Or should I instantiate only one client and use it across all areas of the application?
Question 2:
Does the node-redis library provide a method to show a list of connected clients? If it does, I will be able to iterate through the list, and call client.quit() for each of them when the server is shutting down.
By the way, this is how I'm implementing the "graceful shutdown" of the server:
// Gracefully shut down and perform clean-up when a kill signal is received
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);

function cleanup() {
  server.stop(function() {
    // todo: quit all connected redis clients
    console.log('Server stopped.');
    // exit the process
    process.exit();
  });
}
In terms of design and performance, it's best to create one client and use it across your application. This is pretty easy to do in node. I'm assuming you're using the redis npm package.
First, create a file named redis.js with the following contents:
const redis = require('redis');

const RedisClient = (function() {
  return redis.createClient();
})();

module.exports = RedisClient;
Then, say in a file set.js, you would use it as so:
const client = require('./redis');
client.set('key', 'value');
Then, in your index.js file, you can import it and close the connection on exit:
const client = require('./redis');

process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);

function cleanup() {
  client.quit(function() {
    console.log('Redis client stopped.');
    server.stop(function() {
      console.log('Server stopped.');
      process.exit();
    });
  });
}
Using multiple connections may be required by how the application uses Redis.
For instance, as soon as a connection is used for listening to a pub/sub channel, it can only be used for that and nothing else. Per the documentation on SUBSCRIBE:
Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
So if your application needs to subscribe to channels and use Redis as general value cache, then it needs two clients at a minimum: one for subscribing to channels and one for using Redis as a cache.
There are also Redis commands that are blocking, like BLPOP. A busy web server normally replies to multiple requests at once. Suppose that to answer request A the server uses its Redis client to issue a blocking command. Then request B comes in and the server needs to issue a non-blocking command to Redis, but the client is still waiting for the blocking command issued for request A to finish. Now the response to request B is delayed by another request. This can be avoided by using a different client for the second request.
If you do not use any of the facilities that require more than one connection, then you can and should use just one connection.
If the way you use Redis is such that you need more than one connection, and you just need a list of connections but no sophisticated connection management, you could just create your own factory function: it would call redis.createClient() and save the client before returning it. Then at shutdown time, you could go over the list of saved clients and close them. Unfortunately, node-redis does not provide such functionality built-in.
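A sketch of such a factory, with the client-creating function injected so the tracking logic is independent of node-redis; makeClientFactory, quitAll and count are my own names, not part of any library:

```javascript
// Factory that remembers every client it creates so they can all be
// closed at shutdown. `create` would be redis.createClient in a real app.
function makeClientFactory(create) {
  const clients = [];
  return {
    create: function () {
      const client = create();
      clients.push(client);
      return client;
    },
    // Call this from your SIGINT/SIGTERM handler before process.exit().
    quitAll: function () {
      clients.forEach(function (client) { client.quit(); });
      clients.length = 0;
    },
    count: function () { return clients.length; }
  };
}
```

In the shutdown code from the question, cleanup() would then call factory.quitAll() before server.stop().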
If you need more sophisticated client management than the factory function described above, then the typical way to manage the multiple connections created is to use a connection pool but node-redis does not provide one. I usually access Redis through Python code so I don't have a recommendation for Node.js libraries, but an npm search shows quite a few candidates.

How to call particular node.js method from client side javascript

In my application I have created many methods in a Node.js file. How can I call a particular method from client-side JavaScript?
Below is my node.js file
exports.method1=function(){
}
exports.method2=function(){
}
exports.method3=function(){
}
Your client should send a message, for example:
socket.emit("callMethod", {"methodName":"method3"});
And in your server:
socket.on("callMethod", function(data) {
  if (data["methodName"] == "method3") {
    exports.method3();
  }
});
You don't call methods directly, you send events/messages.
I would avoid using sockets unless you really need to; from my experience they can be expensive. Sockets are great for intensive applications where a user stays engaged for a while. Otherwise I would suggest using a RESTful setup with JavaScript and node.js, for example:
http://blog.modulus.io/nodejs-and-express-create-rest-api
This way a socket doesn't always have to be open, which saves overhead. REST uses HTTP requests, whereas with sockets you have a direct connection via TCP. REST is better if your app won't be constantly engaging a user, but rather has updates here and there.

Node.js - Is this a good structure for frequently updated content?

As a follow-up to my question yesterday about Node.js and communicating with clients, I'm trying to understand how the following would work.
Case:
So, I have this website where content is updated very frequently. Let's assume this time, this content is a list of locations with temperatures. (yes, a weather service)
Now, every time a client checks for a certain location, he or she goes to a url like this: example.com/location/id where id corresponds to the id of the location in my database.
Implementation:
At the server, checktemps.js loops (every other second or so) through all the locations in my (mySQL) database and checks for the corresponding temperature. It then stores this data in an array within checktemps.js. Because temperatures can change all the time, it's important to keep checking for updates in the database.
When a request to example.com/location/id is made, checktemps.js looks in the array for a record with id = id. Then, it responds with the corresponding temperature.
Question:
Plain text, HTML or an AJAX call is not relevant at the moment. I'm just curious if I have this right? Node.js is a rather unusual thing to get a grasp on, so I'm trying to figure out whether this is logical.
At the server, checktemps.js loops (every other second or so) through all the locations in my (mySQL) database and checks for the corresponding temperature. It then stores this data in an array within checktemps.js
This is extremely inefficient. You should not be looping like that (every other second or so).
Modules
Below I will try to make a list of the modules (both Node.js modules and other tools) I would use to do this efficiently:
npm is a package manager for node. You can use it to install and publish your node
programs. It manages dependencies and does other cool stuff.
I sincerely hope that you already know about npm; if not, I recommend you learn about it as soon as possible. In the beginning you just need to learn how to install packages, and that is very easy: you just type npm install <package-name>. Later I would really advise you to learn to write your own packages to manage the dependencies for you.
Express is a high-performance, high-class web development framework for Node.js.
This sinatra-style framework from TJ is really sweet and you should read the documentation/screencasts available to learn its power.
Socket.IO aims to make realtime apps possible in every browser and mobile device, blurring the differences between the different transport mechanisms.
Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
Like Raynos said, this extremely fast/sexy database has pubsub semantics, which are needed to answer your question efficiently. I think you should really play with this database (tutorial) to appreciate its raw power. Installing is as easy as pie: make install
Node_redis is a complete Redis client for node.js. It supports all Redis commands, including MULTI, WATCH, and PUBLISH/SUBSCRIBE.
Prototype
I just remembered I helped another user out in the past with a question about pubsub. I think when you look at that answer you will have a better understanding of how to do it correctly. The code was posted a while back and should be updated (minor changes in Express) to:
var PORT = 3000,
    HOST = 'localhost',
    express = require('express'),
    io = require('socket.io'),
    redis = require('redis'),
    app = module.exports = express.createServer(),
    socket = null;

app.use(express.static(__dirname + '/public'));

if (!module.parent) {
  app.listen(PORT, HOST);
  console.log("Express server listening on port %d", app.address().port);
  socket = io.listen(app);

  socket.on('connection', function(client) {
    var subscribe = redis.createClient();
    subscribe.subscribe('pubsub'); // listen to messages from channel pubsub

    subscribe.on("message", function(channel, message) {
      client.send(message);
    });

    client.on('message', function(msg) {
    });

    client.on('disconnect', function() {
      subscribe.quit();
    });
  });
}
I have compressed the updated code with all the dependencies inside, but you will still need to start Redis first.
Questions
I hope this gives you an idea of how to do this.
With node.js you could do it even better. The request/response model has been ingrained in our heads since the beginning of the web, but you can make just one AJAX request when you open the website/app and never end that call. Node.js can then send data to the client whenever you have updates. Search on YouTube for "Introduction to Node.js with Ryan Dahl" (the creator of node.js); there he explains it. Then you have realtime updates without the client having to make requests all the time.
