Node.js Clustering - Is there automatically a Load Balancer in place? - javascript

I have a 4-core CPU with 8 logical processors, so this code creates 8 workers plus 1 master process. When a socket connection is formed, it tends to connect to the last worker, CPU 8. Does using this method automatically add a load balancer, or would I need to add one myself? Is there a way to test whether the load balancing is working? I've tried adding hundreds of clients, but they all connect to CPU 8. I'm not sure if that's because there is barely any processing to handle in this example.
Simple Node.js Clustering
const os = require('os'),
      cluster = require('cluster'),
      cores = os.cpus();
var clusterCount = 0;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);
  // Fork one worker per logical CPU
  for (let i = 0; i < cores.length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  const http = require('http'),
        express = require('express'),
        socketio = require('socket.io'),
        socketioRedis = require('socket.io-redis'),
        config = require('./config'); // assumed local config with redis_host/redis_port

  // Note: each worker is a separate process with its own copy of
  // clusterCount, so this is always cores[0] inside a worker.
  var cpu = cores[clusterCount];
  var app = express();
  var port = process.env.PORT || process.argv[2] || 8080;
  var server = app.listen(port);
  var io = socketio(server);
  io.adapter(socketioRedis({ host: config.redis_host, port: config.redis_port }));
  io.on('connection', (socket) => {
    console.log(`User ${socket.id} connected to worker ${process.pid}`);
  });
  console.log(`Worker ${process.pid} started on port: ${port} | ${cpu.model}`);
  clusterCount++;
}

It depends on a couple of points; see "How it works" in the Node.js docs: https://nodejs.org/api/cluster.html#cluster_how_it_works
Does using this method automatically add a Load Balancer?
The master handles the load balancing among the workers.
not sure if it could be because there is barely any process handling in this instance
It might simply be that the worker on CPU 8 is not busy and can still absorb all the load. There are two distribution strategies, and which one is the default depends on the OS you are using: round-robin, where the master accepts connections and distributes them across the workers in turn (the default everywhere except Windows), and a second approach, where the master creates the listen socket and lets the OS decide which worker accepts each connection.

Related

Data object consistency with several workers node

I am trying to create a simple server which will give every new request to a different worker. The DATA object is a plain JavaScript object in a separate file. The problem I'm facing is the CONSISTENCY of this DATA object.
How do I prevent a worker from handling a request while the previous request is still being processed? For example, the first request is an UPDATE that takes longer, and the next request is a DELETE that completes faster. What Node tool or pattern do I need to use to be 100% sure that the DELETE happens after the UPDATE?
I also need to run every worker on a different port.
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

cluster.schedulingPolicy = cluster.SCHED_RR;
const PORT = 4000;

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer((req, res) => {
    if (req.url === '/users' && req.method === "PUT") {
      updateUser(req)
    } else if (req.url === '/users' && req.method === "DELETE") {
      deleteUser(req)
    }
  }).listen(PORT++);
}
Each worker must reserve ("lock") the DATA object for exclusive use before changing it. This can be done by writing a lock file and deleting it again after the object has been changed successfully.
const fs = require('fs');

try {
  // "wx+" fails with EEXIST if the lock file already exists
  fs.openSync("path/to/lock/file", "wx+");
  /* Change DATA object */
  fs.rmSync("path/to/lock/file");
} catch (err) {
  if (err.code === "EEXIST") throw "locking conflict";
  throw err; // unexpected error: re-throw instead of swallowing it
}
The worker executing the first (UPDATE) request will succeed in writing the lock file, but a concurrent worker executing a second (DELETE) request will experience a locking conflict. It can then either report the failure to the user, or retry after a short waiting time.
(If you decide to implement the lock in this way, the asynchronous fs methods may be more efficient.)
Your code won't even create multiple servers, let alone on different ports: the PORT variable is a const, so PORT++ throws a TypeError instead of incrementing.
What node tool or pattern I need to use to be 100% percent sure that DELETE will happen after UPDATE?
Use some sort of lock; JavaScript has no built-in lock primitive.
Use a semaphore/mutex-style variable as the lock (see code).
Remember, JavaScript is a single-threaded language.
I need to run every worker on a different port
For each worker, derive the port from the worker ID (see code). Note that forking more workers than there are CPU cores rarely helps, since they will contend for the same cores.
Sample working code:
const express = require('express')
const cluster = require('cluster')
const os = require('os')

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork()
  }
} else {
  const app = express()
  // Global semaphore/mutex variable isUpdating (per worker process)
  var isUpdating = false;
  const worker = {
    handleRequest(req, res) {
      console.log("handleRequest on worker /" + cluster.worker.id);
      if (req.method == "GET") { // FOR BROWSER TESTING, CHANGE IT LATER TO PUT
        isUpdating = true;
        console.log("updateUser GET");
        // do updateUser(req);
        isUpdating = false;
        res.send("updated");
      } else if (req.method == "DELETE") {
        if (!isUpdating) { // Check for update lock
          console.log("updateUser DELETE");
          // do deleteUser(req)
          res.send("deleted");
        } else {
          res.status(409).send("update in progress");
        }
      }
    },
  }
  app.get('/users', (req, res) => {
    worker.handleRequest(req, res)
  })
  app.delete('/users', (req, res) => {
    worker.handleRequest(req, res)
  })
  // Now each worker will run on a different port
  app.listen(4000 + cluster.worker.id, () => {
    console.log(`Worker ${cluster.worker.id} started listening on port ${4000 + cluster.worker.id}`)
  })
}

Error during webSocket handshake unexpected response code: 400

I'm trying to implement cluster in my Node app with socket.io. If I don't use cluster, everything works fine, but when I use cluster the following error occurs in the client browser.
WebSocket connection to 'ws://localhost:8000/socket.io/?EIO=3&transport=websocket&sid=Ff8LkaCbF5g92lKOAAAS' failed: Error during WebSocket handshake: Unexpected response code: 400
socket.js:2 POST http://localhost:8000/socket.io/?EIO=3&transport=polling&t=MAjySbD&sid=Ff8LkaCbF5g92lKOAAAS 400 (Bad Request)
Here is server.js:
var http = require('http');
var app = require('../app');
cluster = module.exports = require('cluster');
const numCPUs = require('os').cpus().length;

var server = http.createServer(app);
io = module.exports = require('socket.io').listen(server, {
  pingTimeout: 7000,
  pingInterval: 10000
});
io.set("transports", ["xhr-polling", "websocket", "polling"]);

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  var port = 8000;
  var host = '0.0.0.0';
  server.listen(port, host, function () {
    console.log('server is running on ' + host + ':' + port);
  });
}
Here is client.js:
var socket = io.connect('http://localhost:8000/dashboard',{transports: ['websocket']});
Check whether you are using express-status-monitor as middleware on Express; it makes the HTTP call for the first WebSocket handshake (request) fail. Another factor could be a proxy (e.g. nginx) in front of the server, or something similar.
Look here for more details about this error

why web socket behave differently on nodejs ?

I have a Node.js server.js with this code:
First concept:
var http = require('http');
var express = require('express');
var app = express();
var path = require('path');
var conn = http.createServer(app).listen(3000, function () {
  console.log("server Running at Port 3000");
});

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ server: conn });
and I have an index.html with client-side JavaScript:
<html>
<body>
<script src="myscript.js"></script>
</body>
</html>
Inside myscript.js I have:
var connection = new WebSocket('ws://localhost:3000');
This works fine when I open http://localhost:3000 in the browser.
Second concept:
My server.js:
var WebSocketServer = require('ws').Server,
    wss = new WebSocketServer({ port: 3000 });

wss.on('connection', function (connection) {
});

wss.on('listening', function () {
  console.log("Server started...");
});
The HTML and client JavaScript are the same as above.
This does not work when I open http://localhost:3000 in the browser. Why? I want to clear up my doubt: why does the first method work while the second does not?
To specifically answer your question (why does the web socket behave differently on Node.js?): it shouldn't. In the second version of your code you are not serving any HTML or JS files to the client on port 3000, so the browser can't download any HTML.
If you want it to work as expected, then you need to serve some HTML and JS files to the browser that visits http://localhost:3000/, or otherwise it will not be able to connect.
I wrote some example code - both server-side and client-side - on how to use WebSocket to do exactly what you are trying to do here. It's available on GitHub and I originally wrote it for this answer: Differences between socket.io and websockets.
The relevant parts of the source code for your question here are:
WebSocket Server
WebSocket server example using Express.js:
var path = require('path');
var app = require('express')();
var ws = require('express-ws')(app);

app.get('/', (req, res) => {
  console.error('express connection');
  res.sendFile(path.join(__dirname, 'ws.html'));
});

app.ws('/', (s, req) => {
  console.error('websocket connection');
  for (var t = 0; t < 3; t++)
    setTimeout(() => s.send('message from server', () => {}), 1000 * t);
});

app.listen(3001, () => console.error('listening on http://localhost:3001/'));
console.error('websocket example');
Source: https://github.com/rsp/node-websocket-vs-socket.io/blob/master/ws.js
WebSocket Client
WebSocket client example using vanilla JavaScript:
var l = document.getElementById('l');
var log = function (m) {
  var i = document.createElement('li');
  i.innerText = new Date().toISOString() + ' ' + m;
  l.appendChild(i);
};

log('opening websocket connection');
var s = new WebSocket('ws://' + window.location.host + '/');
s.addEventListener('error', function (m) { log("error"); });
s.addEventListener('open', function (m) { log("websocket connection open"); });
s.addEventListener('message', function (m) { log(m.data); });
Source: https://github.com/rsp/node-websocket-vs-socket.io/blob/master/ws.html
Instead of debugging code that is not working, sometimes it's better to start from something that works and go from there. Take a look at how it all works, and feel free to change it and use it in your projects: it's released under the MIT license.

Getting socket.io, express & node-http2 to communicate though HTTP/2

I wrote a WebSocket server using socket.io, node-http2 and express in Node.js. The server works as intended, except that according to Chrome's DevTools, socket.io's negotiation requests go through HTTP/1.1 (shown below). The "Protocol" column should display h2 if the request was sent over HTTP/2.
This only happens in Chrome, other browsers use the correct protocol.
The server code (shortened):
var PORT = 8667,
    config = require('./config'),
    socketioServer = require('socket.io'),
    express = require('express'),
    app = express(),
    https = require('http2'),
    fs = require('fs'),
    cors = require('cors');

app.use(cors(function (req, callback) {
  var corsOptions = { origin: false };
  if (/^https:\/\/mlpvc-rr\.lc/.test(req.header('Origin')))
    corsOptions.origin = true;
  callback(null, corsOptions);
}));

app.get('/', function (req, res) {
  res.sendStatus(403);
});

var server = https.createServer({
  cert: fs.readFileSync(config.SSL_CERT),
  key: fs.readFileSync(config.SSL_KEY),
}, app);
server.listen(PORT);

var io = socketioServer.listen(server);
// ...
Browser connection code:
var conn = io('https://ws.'+location.hostname+':8667/', { reconnectionDelay: 5000 });
conn.on('connect', function(){
console.log('[WS] Connected');
});
conn.on('disconnect',function(){
console.log('[WS] Disconnected');
});
Output of testssl.sh:
What do I need to change to make the socket.io requests go through HTTP/2?
A little bit late, but with Express 4 and spdy (npm) it's working great.
bin/www:
var app = require('../app');
var debug = require('debug')('gg:server');
var spdy = require('spdy');
var fs = require('fs');

var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

var options = {
  key: fs.readFileSync(__dirname + '/server.key'),
  cert: fs.readFileSync(__dirname + '/server.crt')
};

var server = spdy.createServer(options, app);
var io = app.io;
io.attach(server);

server.listen(port);
server.on('error', onError);
server.on('listening', onListening);
...
app.js:
...
var app = express();
var io = app.io = require('socket.io')();
...
client screenshot:
As discussed in the comments, Chrome has recently stopped allowing the older NPN negotiation for HTTP/2 and insists on the newer ALPN protocol instead. See this article for more info: https://ma.ttias.be/day-google-chrome-disables-http2-nearly-everyone-may-31st-2016/
So you basically need Node.js to support ALPN, which so far has only been added in v5: https://github.com/nodejs/node/pull/2564 . An alternative is to route your Node.js traffic through a web server that is easier to upgrade to an ALPN-capable OpenSSL (e.g. Nginx or Apache) and let it terminate HTTP/2.
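If you go the web-server route, here is a sketch of an Nginx server block that terminates HTTP/2 over ALPN and proxies traffic (including WebSocket upgrades) to the Node process. This assumes Nginx ≥ 1.9.5 built against an ALPN-capable OpenSSL (1.0.2+); the host name and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl http2;              # HTTP/2 is negotiated via ALPN here
    server_name ws.example.com;        # placeholder host name

    ssl_certificate     /etc/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:8667;        # the Node/socket.io server
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # keep WebSocket upgrades working
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```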
You confirmed this was the issue by running the testssl.sh program, which reported no ALPN support, and by the fact that Firefox still negotiates HTTP/2 against the same server.

Kue crashes parse server

I'm trying to use Kue for scheduled jobs on my Parse Server (hosted on Heroku). For now I've modified my index.js file as shown below, following the several tutorials I found about Kue:
var express = require('express')
  , kue = require('kue')
  , redis = require('redis');
var ParseServer = require('parse-server').ParseServer;

var databaseUri = process.env.DATABASE_URI || process.env.MONGOLAB_URI;
if (!databaseUri) {
  console.log('DATABASE_URI not specified, falling back to localhost.');
}

var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
  appId: process.env.APP_ID || 'myAppId',
  masterKey: process.env.MASTER_KEY || '',
  serverURL: process.env.SERVER_URL
});
// Client-keys like the javascript key or the .NET key are not necessary with parse-server
// If you wish to use them, you can set them as options in the initialization above:
// javascriptKey, restAPIKey, dotNetKey, clientKey

// connect to REDIS
var client = redis.createClient(process.env.REDIS_URL);

var app = express();

// Serve the Parse API on the /parse URL prefix
var mountPath = process.env.PARSE_MOUNT || '/parse';
app.use(mountPath, api)
   .use(kue.app); // wire up Kue (see /active for queue interface)

// Parse Server plays nicely with the rest of your web routes
app.get('/', function (req, res) {
  res.status(200).send('I dream of being a web site.');
});

var port = process.env.PORT || 1337;
app.listen(port, function () {
  console.log('parse-server-example running on port ' + port + '.');
});
I've found out that the app crashes at the line .use(kue.app). Here is the error I get:
Starting process with command `node index.js`
parse-server-example running on port 22995.
/app/node_modules/parse-server/lib/index.js:298
throw err;
^
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
at Object.exports._errnoException (util.js:890:11)
at exports._exceptionWithHostPort (util.js:913:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1057:14)
Process exited with status 7
State changed from starting to crashed
I don't know why this is happening.
The line .use(kue.app) can be removed. All that is needed is to add:
var jobs = kue.createQueue({ redis: process.env.REDIS_URL });
to access the current queue. Without it, Kue falls back to the default 127.0.0.1:6379, which is what causes the ECONNREFUSED crash.
Hope it helps somebody.
