I am trying to create a simple server which will hand every new request to a different worker. The DATA object is a simple JavaScript object in a separate file. The problem I am facing is the CONSISTENCY of this DATA object.
How do I prevent a worker from handling a request while the previous request is still being processed? For example, the first request is an UPDATE that takes longer, and the next request is a DELETE that completes faster. What Node tool or pattern do I need to use to be 100% sure that the DELETE happens after the UPDATE?
I also need to run every worker on a different port.
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

cluster.schedulingPolicy = cluster.SCHED_RR;
const PORT = 4000;

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer((req, res) => {
    if (req.url === '/users' && req.method === "PUT") {
      updateUser(req)
    } else if (req.url === '/users' && req.method === "DELETE") {
      deleteUser(req)
    }
  }).listen(PORT++);
}
Each worker must reserve ("lock") the DATA object for exclusive use before it can change it. This can be done by writing a lock file and deleting it again after successful object change.
const fs = require("fs");

try {
  // "wx+" fails with EEXIST if the lock file already exists
  const fd = fs.openSync("path/to/lock/file", "wx+");
  /* Change DATA object */
  fs.closeSync(fd);
  fs.rmSync("path/to/lock/file");
} catch (err) {
  if (err.code === "EEXIST") throw "locking conflict";
  throw err; // propagate unexpected errors
}
The worker executing the first (UPDATE) request will succeed in writing the lock file, but a concurrent worker executing a second (DELETE) request will experience a locking conflict. It can then either report the failure to the user, or re-try after a short waiting time.
(If you decide to implement the lock in this way, the asynchronous fs methods may be more efficient.)
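For instance, a minimal sketch of the same idea with the asynchronous fs.promises API and a naive retry loop (the lock path, the withLock helper name, and the 50 ms delay are illustrative placeholders):
const fs = require("fs/promises");
const LOCK_PATH = "path/to/lock/file";

async function withLock(changeData) {
  for (;;) {
    let fd;
    try {
      fd = await fs.open(LOCK_PATH, "wx+"); // acquire the lock
    } catch (err) {
      if (err.code !== "EEXIST") throw err;      // unexpected error
      await new Promise(r => setTimeout(r, 50)); // lock is held, retry shortly
      continue;
    }
    try {
      await changeData(); // change the DATA object
    } finally {
      await fd.close();
      await fs.rm(LOCK_PATH); // release the lock
    }
    return;
  }
}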
Your code won't even create multiple servers, let alone bind them to different ports, and the PORT variable is a const, so it won't increment either.
What Node tool or pattern do I need to use to be 100% sure that the DELETE happens after the UPDATE?
Use some sort of lock; plain JavaScript does not provide one out of the box.
Use a semaphore/Mutex variable lock (See code).
Remember, JavaScript is a single-threaded language.
need to run every worker on a different port
For each worker, set the listening port based on the worker ID (see code). Remember that the CPU may not be able to spawn as many workers as it has cores.
Sample working code:
const express = require('express')
const cluster = require('cluster')
const os = require('os')
if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork()
  }
} else {
  const app = express()

  // Semaphore/mutex variable isUpdating (one per worker process)
  var isUpdating = false;

  const worker = {
    handleRequest(req, res) {
      console.log("handleRequest on worker /" + cluster.worker.id);
      if (req.method == "GET") { // FOR BROWSER TESTING, CHANGE IT LATER TO PUT
        isUpdating = true;
        console.log("updateUser GET");
        // do updateUser(req);
        isUpdating = false;
      } else if (req.method == "DELETE") {
        if (!isUpdating) { // Check for update lock
          console.log("deleteUser DELETE");
          // do deleteUser(req)
        }
      }
      res.end(); // respond so the client does not hang
    },
  }

  app.get('/users', (req, res) => {
    worker.handleRequest(req, res)
  })

  // Now each worker will run on a different port
  app.listen(4000 + cluster.worker.id, () => {
    console.log(`Worker ${cluster.worker.id} started listening on port ${4000 + cluster.worker.id}`)
  })
}
I'm running a Fastify server, and when I send a request, I get a ton of data from MongoDB and start a for loop to process each item. This can take ~30 minutes. Each item is "processed" by sending it to ffmpeg, then Redis pub/sub, and then a socket to the client.
// Streams controller
exports.renderStreams = async function (req, reply) {
  const streams = await Stream.find({}).sort({ createdAt: -1 }) //.limit(5)
  const renderedStreams = renderStreams(this, streams);
  return { success: true, streams: streams.length };
}

// renderStreams
const renderStreams = (fastify, streams = []) => {
  const { redis } = fastify;
  const channel = "streams";
  for (let i = 0; i < streams.length; i++) {
    setTimeout(async () => {
      const stream = streams[i];
      await renderStream(redis, channel, stream);
    }, i * 200)
  }
}
I am wondering in this for loop, how can I either "pause" it or stop it completely (or both?) via another request, maybe when I call /api/streams/stop.
How would this be possible?
You can use Socket.IO to communicate with your script while it is still running: create a function that stops the loop, and call it when you receive the stop notification from Socket.IO. Documentation link: https://socket.io/docs/v4/
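A minimal sketch of that idea, whether the stop signal arrives over Socket.IO or via a hypothetical /api/streams/stop route: keep a shared cancellation flag and check it inside the loop (the stopRequested flag, the stopStreams handler, and the await-based pacing are assumptions, not from the original code).
// Hypothetical shared cancellation flag
let stopRequested = false;

// A stop handler (a Socket.IO event handler or an /api/streams/stop route) just flips the flag
exports.stopStreams = async function (req, reply) {
  stopRequested = true;
  return { success: true };
}

// renderStreams rewritten as an async loop so it can check the flag between items
const renderStreams = async (fastify, streams = []) => {
  const { redis } = fastify;
  const channel = "streams";
  stopRequested = false; // reset before a new run
  for (let i = 0; i < streams.length; i++) {
    if (stopRequested) break;                   // stop when requested
    await renderStream(redis, channel, streams[i]);
    await new Promise(r => setTimeout(r, 200)); // keep the original 200 ms pacing
  }
}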
I would like this function to run by itself at regular time intervals. As it is now, I have to visit the '/getCompanyInfo' path to trigger it. I would like it to run every minute, as if I were visiting the '/getCompanyInfo' path each minute. The app is on Heroku and I would like the function to execute without any pages open.
The original function that is triggered by visiting the path.
const express = require('express');
const app = express();
/**
* getCompanyInfo ()
*/
app.get('/getCompanyInfo', function(req, res){
  const companyID = oauthClient.getToken().realmId;
  console.log(companyID)
  const url = OAuthClient.environment.production;
  oauthClient.makeApiCall({url: url + 'v3/company/0000000000/salesreceipt/8?minorversion=41'})
    .then(function(authResponse){
      console.log("The response for API call is :"+JSON.parse(JSON.stringify(authResponse)));
      res.send(authResponse);
    })
    .catch(function(e) {
      console.error(e);
    });
});
One of my attempts here was to put it in a function that executes each minute using node-schedule.
This one doesn't do anything other than print 'This will run once a minute.' to the console.
I tried removing
app.get(function(req,res){
and the
})
below it but that made the app (hosted on Heroku) fail to build.
const express = require('express');
const app = express();
var schedule = require('node-schedule');
var j = schedule.scheduleJob('* * * * *', function(){
  console.log('This will run once a minute.');
  app.get(function(req, res){
    const companyID = oauthClient.getToken().realmId;
    console.log(companyID)
    const url = OAuthClient.environment.production;
    oauthClient.makeApiCall({url: url + 'v3/company/0000000000/salesreceipt/8?minorversion=41'})
      .then(function(authResponse){
        console.log("The response for API call is :"+JSON.parse(JSON.stringify(authResponse)));
        res.send(authResponse);
      })
      .catch(function(e) {
        console.error(e);
      });
  });
});
More Context:
It is inside an app I have on Heroku. I would like to set the app up to request JSON data from the API every x amount of time without me having to touch it.
app.get registers an API handler, i.e. this is your API route definition: the thing that will respond when you call GET /getCompanyInfo via a web browser or some other client. You should not redefine it repeatedly from your scheduled action.
The failed build after you've removed the route handler is probably because of the res.send(authResponse); left behind.
You could have something like:
// function that will be used to get the data
const getCompanyInfo = (done) => {
  const companyID = oauthClient.getToken().realmId
  console.log(companyID)
  const url = OAuthClient.environment.production
  oauthClient.makeApiCall({url: url + 'v3/company/0000000000/salesreceipt/8?minorversion=41'})
    .then((authResponse) => {
      console.log("The response for API call is :"+JSON.parse(JSON.stringify(authResponse)))
      done(authResponse)
    })
    .catch((e) => {
      console.error(e)
    })
}

// this will trigger the function regularly on the specified interval
const j = schedule.scheduleJob('* * * * *', () => {
  getCompanyInfo((companyInfo) => {
    // ...do whatever you want with the info
  })
})

// this will return you the data by demand, when you call GET /getCompanyInfo via browser
app.get('/getCompanyInfo', function(req, res) {
  getCompanyInfo((companyInfo) => {
    res.send(companyInfo)
  })
})
Heroku has an add on called Heroku Scheduler that does what you want. The node-schedule npm package might do the job, but as you mentioned, you probably aren't going to be able to see the execution/results/logs of your jobs that run every 24 hours without making some interface for it on your own.
For your issue, calling app.get doesn't make a lot of sense. That's just telling node about the route. Assuming you have your /getCompanyInfo route up and running, you just need to call it in your scheduled job, not re-register it every time.
You could also just do this (http being the http client you're using):
var j = schedule.scheduleJob('* * * * *', async function(){
  console.log('This will run once a minute.');
  const result = await http.get('/getCompanyInfo');
  console.log(result);
});
So I'm using Server.js for a middleware application that polls data every few seconds and emits the data to all clients. The problem I'm running into is that a new set of pollers is being created for every new socket connection, with each of those pollers fetching and emitting to all clients (way too much data). I only want one poller that emits to all. Is there a way to do this with Server.js using Socket.IO?
const server = require('server')
const { get, post, socket } = require('server/router')

socket('connect', socket => {
  // New connection: this starts a new poller for every client that connects
  setInterval(() => getData(socket), 60000);
});

function getData(socket) {
  let sets = db.newSet()
  if (typeof socket != 'undefined') {
    socket.io.emit("set", sets);
  }
  return '';
}
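One way to get a single poller, sketched below under the assumption that the server.js socket context exposes the same socket.io reference used above, is to start the interval once behind a guard flag instead of once per connection (the io and pollerStarted variables are hypothetical names):
const server = require('server')
const { get, post, socket } = require('server/router')

let io = null             // shared reference to the socket server
let pollerStarted = false // guard so only one interval ever runs

socket('connect', ctx => {
  io = ctx.io // every connection sees the same underlying socket server
  if (!pollerStarted) {
    pollerStarted = true
    setInterval(() => {
      let sets = db.newSet()
      if (io) {
        io.emit("set", sets) // one emit, broadcast to all connected clients
      }
    }, 60000)
  }
});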
Strange issue I haven't really found documentation about. I think it may end up being a simple case of "you don't understand how the product works" and I'm hoping someone can fill the gap(s).
Here's what's going on... I have 3 separate apps which are socket.io servers. They're all listening on different ports. Each server is intended for a different specialized purpose. I'm building the application so that I can expand it in parts and only impact the individual isolated pieces I need to change/update.
This was working fine, until I realized that for each application running there's an extra socket connection per server. So if I have 3 apps, then I have 3 connections on each server.
The evidence of this is that if I add a console.log("Connected") to each server then connect a client, each server reports as many connections as there are servers. Hopefully this makes sense.
My goal is to have 1 connection per server. It seems like each connection is acting as a generic connection to all socket servers. My server listeners are set up like this:
io = require('socket.io').listen(26265) // can use up to 26485
My clients connect like this:
socket = new io('http://localhost:26265')
EDIT:
To add on to my original question so that you can see more code...
Full client code:
importJS('/js/pages/admin_base.js', function(){
  AdminIO = new io('http://localhost:26266');

  AdminIO.on('send_users', function(rows){
    toggleLoad();
    /*
    if(typeof rows === 'object'){
      rows = Array(rows);
    }
    */
    appendUsers(rows);
    console.log(rows);
  });

  AdminIO.on('failed_users', function(){
    toggleLoad();
    dropInfo("Failed to retrieve userlist", {level: "error"});
  });

  AdminIO.on('test', function (q) {
    console.log(q);
  });

  queryUsers(AdminIO);
});
The server code is pretty long... So the relevant pieces are:
var io = require('socket.io').listen(26266); // can use up to 26484

//.... imported additional modules and defined simple functions here

io.on('connection', function (socket) {
  socket.on('restart_request', function(req){
    var success = false
      , session = JSON.parse(req.session)
      , sessionID = session.sessionID;
    checkSession(sessionID, function (ses) {
      if (ses === false) { console.error('CheckSession failed: No session exists'); return; }
      if (ses.user.uuid !== session.uuid) { console.error('CheckSession failed: UUID mismatched'); return; }
      if (ses.user.role < conf['Permissions']['lm_restart']){ socket.emit('restart_fail','Insufficient permissions.'); return; }
      if(process.platform === 'win32'){
        executeCMD('START "" .\\windows\\scripts\\restart_lm.bat', function(err, res){
          var errSent = false;
          if(err){
            console.error(err);
            if(!errSent){ socket.emit('restart_fail','Restart failed'); }
            errSent = true;
            if(res === null){ return; }
          }
          console.log(res);
          socket.emit('restart_success','LM successfully restarted.');
        });
      }
      else if(process.platform === 'linux'){
      }
    });
  });

  socket.on('get_users', function(req){
    var success = false
      , session = JSON.parse(req.session)
      , opts = req.opts || null
      , sessionID = session.sessionID
      , col = opts.col || null
      , where = opts.where || null
      , range = opts.range || null
      ;
    checkSession(sessionID, function (ses) {
      if (!ses) { console.error('CheckSession failed: No session exists'); return; }
      if (ses.user.uuid !== session.uuid) { console.error('CheckSession failed: UUID mismatched'); return; }
      if (ses.user.role < conf['Permissions']['lm_user_query']){ socket.emit('userQuery_fail','Insufficient permissions.'); return; }
      Query.users({col: col, where: where, range: range}, function(err, res){
        if(!err){ socket.emit('send_users', res); }
        else { socket.emit('failed_users', true); }
      });
    });
  });

  socket.on('test', function(q){
    socket.emit('test', q);
  });
});
Try removing the 'new' keyword from the io thing.
You shouldn't use 'new' there since it would make new instances every time you reload the page or a new client connects.
So, it should look like:
Server side:
var io = require('socket.io')(26265);
Client side:
var socket = io('http://localhost:26265');
I think this is what you were looking for.
I have an app in Node.js and Express. I need to write tests for it. I have a problem with handling Express app errors. I found this: How do I catch node.js/express server errors like EADDRINUSE?, but it doesn't work for me, I don't know why. I want to handle errors that can occur while expressApp.listen() is executing (EADDRINUSE, EACCES, etc.).
express = require('express')
listener = express()

#doesn't work for me
listener.on('uncaughtException', (err) ->
  #do something
)

#doesn't work either
listener.on("error", (err) ->
  #do something
)

#this works, but it catches all errors in the process, I want only the listener's errors
process.on('uncaughtException', (err) ->
  #do something
)

listener.listen(80) #for example 80 to get error
Any ideas?
This should do the trick:
listener.listen(80).on('error', function(err) { });
What listener.listen actually does is create a HTTP server and call listen on it:
app.listen = function(){
  var server = http.createServer(this);
  return server.listen.apply(server, arguments);
};
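If you want to distinguish the listen errors you mentioned, the handler can branch on err.code (a small sketch, not part of the original answer):
listener.listen(80).on('error', function(err) {
  if (err.code === 'EADDRINUSE') {
    console.error('Port 80 is already in use');
  } else if (err.code === 'EACCES') {
    console.error('No permission to bind to port 80');
  } else {
    throw err; // let unexpected errors surface
  }
});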
First off, expressJS does not emit the uncaughtException event, process does, so it's no surprise your code doesn't work.
So use: process.on('uncaughtException',handler) instead.
Next, expressJS already provides a standard means of error handling which is to use the middleware function it provides for this purpose, as in:
app.configure(function(){
  app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
});
This function returns an error message to the client, with optional stacktrace, and is documented at connectJS errorHandler.
(Note that errorHandler is actually part of connectJS and is only exposed by expressJS.)
If the behavior the existing errorHandler provides is not sufficient for your needs, its source is located at connectJS's errorHandler middleware and can be easily modified to suit your needs.
Of course, rather than modifying this function directly, the "correct" way to do this is to create your own errorHandler, using the connectJS version as a starting point, as in:
var myErrorHandler = function(err, req, res, next){
  ...
  // note, using the typical middleware pattern, we'd call next() here, but
  // since this handler is a "provider", i.e. it terminates the request, we
  // do not.
};
And install it into expressJS as:
app.configure(function(){
  app.use(myErrorHandler);
});
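For example, one possible body for such a handler might look like this (a sketch only: the JSON response shape and the err.status fallback are assumptions, not the connectJS implementation):
var myErrorHandler = function(err, req, res, next){
  console.error(err.stack || err);
  res.statusCode = err.status || 500;
  res.setHeader('Content-Type', 'application/json');
  // this handler is a "provider": it terminates the request instead of calling next()
  res.end(JSON.stringify({ error: err.message || 'Internal Server Error' }));
};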
See Just Connect it, Already for an explanation of connectJS's idea of filter and provider middleware and How To Write Middleware for Connect/Express for a well-written tutorial.
You might also find these useful:
How to handle code exceptions in node.js?
Recover from Uncaught Exception in Node.JS
Finally, an excellent source of information regarding testing expressJS can be found in its own tests.
Note: Marius Tibeica's answer is complete and great, as is david_p's comment. Rob Raisch's answer is also interesting to explore.
https://stackoverflow.com/a/27040451/7668448
https://stackoverflow.com/a/13326769/7668448
NOTICE
This first method is a bad one; I leave it as a reference. See the update section for the good versions, and for the explanation of why.
Bad version
For those who will find this useful, here is a function that implements busy-port handling
(if the port is busy, it will try the next port, until it finds one that is free).
app.portNumber = 4000;

function listen(port) {
  app.portNumber = port;
  app.listen(port, () => {
    console.log("server is running on port :" + app.portNumber);
  }).on('error', function (err) {
    if (err.code === 'EADDRINUSE') { // note: the error code lives on err.code, not err.errno
      console.log(`----- Port ${port} is busy, trying with port ${port + 1} -----`);
      listen(port + 1)
    } else {
      console.log(err);
    }
  });
}

listen(app.portNumber);
The listen function calls itself recursively whenever the port is busy, incrementing the port number each time.
Update (completely redone)
Callback full version
First of all, this version is the one that follows the same signature as the Node.js http.Server.listen() method.
function listen(server) {
  const args = Array.from(arguments);
  // __________________________________ overriding the callback method (closure to pass port)
  const lastArgIndex = arguments.length - 1;
  let port = args[1];
  if (typeof args[lastArgIndex] === 'function') {
    const callback = args[lastArgIndex];
    args[lastArgIndex] = function () {
      callback(port);
    }
  }
  const serverInstance = server.listen.apply(server, args.slice(1))
    .on('error', function (err) {
      if (err.code === 'EADDRINUSE') {
        console.log(`----- Port ${port} is busy, trying with port ${port + 1} -----`);
        port += 1;
        serverInstance.listen.apply(serverInstance, [port].concat(args.slice(2, lastArgIndex)));
      } else {
        console.log(err);
      }
    });
  return serverInstance;
}
Signature:
listen(serverOrExpressApp, [port[, host[, backlog]]][, callback])
just as per
https://nodejs.org/api/net.html#net_server_listen_port_host_backlog_callback
The callback signature is changed to
(port) => void
Usage:
const server = listen(app, 3000, (port) => {
  console.log("server is running on port :" + port);
});

// _____________ another example with port and host
const server = listen(app, 3000, 'localhost', (port) => {
  console.log("server is running on port :" + port);
});
Explanation
In contrast to the old example, this method doesn't call itself.
Key elements:
The first app.listen() call returns a net.Server instance.
After binding the event once, calling listen again on the same net.Server instance will attempt to bind again.
The error event listener is always there.
Each time an error happens, we re-attempt.
The port variable relies on the closure over the callback; when the callback is called, the right value is passed.
Importantly
serverInstance.listen.apply(serverInstance, [port].concat(args.slice(2, lastArgIndex)));
Why are we skipping the callback here?
Once added, the callback is held internally in the server instance in an array. If we added it again, we would get multiple triggers: one per attempt (attempts + 1 in total). So we only include it in the first attempt.
That way the server instance is returned directly, we keep using it to re-attempt, and it's done cleanly.
Simple version (port only)
This one, too, can help you understand better at a glance:
function listen(server, port, callback) {
  const serverInstance = server.listen(port, () => { callback(port) })
    .on('error', function (err) {
      if (err.code === 'EADDRINUSE') {
        console.log(`----- Port ${port} is busy, trying with port ${port + 1} -----`);
        port += 1;
        serverInstance.listen(port);
      } else {
        console.log(err);
      }
    });
  return serverInstance;
}
Here the port parameter relies on the closure.
ES6 full version
function listen(server, ...args) {
  // __________________________________ overriding the callback method (closure to pass port)
  const lastArgIndex = args.length - 1;
  let port = args[0];
  if (typeof args[lastArgIndex] === 'function') {
    const callback = args[lastArgIndex];
    args[lastArgIndex] = function () {
      callback(port);
    }
  }
  const serverInstance = server.listen(...args)
    .on('error', function (err) {
      if (err.code === 'EADDRINUSE') {
        console.log(`----- Port ${port} is busy, trying with port ${port + 1} -----`);
        port += 1;
        serverInstance.listen(...[port, ...args.slice(1, lastArgIndex)])
      } else {
        console.log(err);
      }
    });
  return serverInstance;
}
Why the old version is bad
To be fair, it's not really that bad. But with the first version, we call the function itself on every failure, and each call creates a new server instance, so the garbage collector has to flex some muscles.
It doesn't matter much, because this function only executes once, at startup.
More importantly, the old version didn't return the server instance.
Extra (for #sakib11)
You can look at #sakib11's comment to see what problem he ran into; it is worth a thought.
Also in the comments I mentioned a promise version and a closure-getter pattern. I don't find them interesting: the way above simply respects the same signature as Node.js, the callback does just fine, and we get our server reference right away. With a promise version, a promise gets returned and at resolution we would pass all the elements: serverInstance + port.
And if you wonder about the closure-getter pattern (it's a bad fit here):
Within our method we create a ref that references the server instance. If we couldn't return the server instance as we are doing (imagine it were impossible, say because a new instance is created each time), the pattern consists of creating a closure (a method at that scope) and returning it.
So, for usage:
const getServer = listen(port, () => {
  console.log('Server running at port ' + getServer().address().port);
  const io = socketIo(getServer(), {});
});
But it's just overhead, especially since we need to wait for the server to be ready, unless we set it up to use a callback or return a promise.
It just overcomplicates things and isn't good at all; I describe it only because I mentioned it.
The method above can be tweaked to add a limit on the number of attempts, and to add some events or hooks. But generally we only need a simple function that just attempts until it succeeds; for me the above is more than sufficient.
Good links
https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener
https://nodejs.org/api/http.html#http_class_http_server
https://expressjs.com/en/4x/api.html#app.listen
From the doc
The app.listen() method returns an http.Server object and (for HTTP) is a convenience method for the following:
app.listen = function () {
  var server = http.createServer(this)
  return server.listen.apply(server, arguments)
}