Sockets, unlike HTTP, don't have a req/res pair; it is always something like:
client.on('data', function(data)...
The event fires whenever there is data on the stream.
Now I want to do server-to-server communication. I'm writing a game where I'm going to have a main server, and this main server communicates with the game's desktop client.
One server is a World server and the other is a Login server. The client connects directly to the World server, and if the data is login data then the World server passes it on to the Login server.
But I can't wrap my head around how to do this in node. As a former web dev I can only think of:
login.send(dataToSendToOtherServer, function(responseOfOtherServer) {
if (responseOfOtherServer === 1)
client.write(thisDataIsGoingToTheDesktopClient)
})
So how can I do something like this for the sockets in node.js?
I tried something like:
Client.prototype.send = function(data, cb) {
// convert json to string
var obj = JSON.stringify(data)
this.client.write(obj)
// wait for the response of this request
this.client.on('data', function(req) {
var request = JSON.parse(req)
// return response as callback
if (data.type === request.type) cb(request)
})
}
But with this, every send adds another 'data' listener, so with each request the callback fires one more time than before.
Since you're dealing with plain TCP/IP, you need to come up with your own higher-level protocol to specify things like how to determine when a message is complete (since TCP offers no guarantee it will all arrive in one gulp). Common ways of dealing with this are:
Fixed-length messages: buffer up received data until it's the right length.
Prefixing each message with a length count: buffer up received data until the specified length has been reached.
Designating some character or sequence as an end-of-message indicator: buffer up received data until it ends with that sequence/character.
In your case, you could buffer up received data until JSON.parse succeeds on the accumulated data, assuming each message consists of legal JSON.
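For example, here's a minimal sketch of the delimiter approach using newline-delimited JSON (the handleMessage and send names are just placeholders for illustration):

const net = require('net');

const server = net.createServer((socket) => {
  let buffered = '';
  socket.on('data', (chunk) => {
    buffered += chunk.toString('utf8');
    // Split on the delimiter; the last piece may be an incomplete message,
    // so keep it in the buffer for the next 'data' event.
    const parts = buffered.split('\n');
    buffered = parts.pop();
    for (const part of parts) {
      if (part.trim() === '') continue;
      handleMessage(socket, JSON.parse(part));
    }
  });
});

// Placeholder for whatever your protocol does with a complete message.
function handleMessage(socket, msg) {
  console.log('got message', msg);
}

// Sending side: stringify and append the delimiter.
function send(socket, obj) {
  socket.write(JSON.stringify(obj) + '\n');
}

server.listen(5000);

The same buffering idea applies on the client side, and it also gives you a natural place to match responses to requests (for example by including a request id in each JSON message) instead of adding a new 'data' listener per send.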
Related
I have a bulk-create-participants function that uses Promise.allSettled to send 100 axios POST requests. The backend is Express and the frontend is React. Each request calls a single "add new participant" REST API. I have set the backend timeout to 15s using connect-timeout, and the frontend timeout is 10s.
My issue is that when I click the bulk add button, the bulk create is triggered and the Promise.allSettled batch of concurrent requests starts. However, I cannot send a new request before all the concurrent requests are done, and because I have set up a timeout on the frontend, the new request gets cancelled.
Is there a way I can still make the concurrent requests, but have them not block other new requests?
This is the frontend code, createParticipant is the API request.
const PromiseArr = []
for (let i = 0; i < totalNumber; i++) {
const participant = participantList[i]
const participantNewDetail = {
firstName: participant.firstName,
lastName: participant.lastName,
email: participant.email,
}
PromiseArr.push(
createParticipant(participantNewDetail)
.then((createParticipantResult) => {
processedTask++
processMessage = `Processing adding participant`
dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } })
})
.catch((error) => {
processedTask++
processMessage = `Processing adding participant`
dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } })
throw new Error(
JSON.stringify({
status: "failed",
value: error.data.message ? error.data.message : error,
})
)
})
)
}
const addParticipantResults = await Promise.allSettled(PromiseArr)
PromiseArr is the promise array, with length 100.
Is it possible to split this big batch into smaller promise arrays and send them to the backend, so that in the gaps between batches I can send another new request like retriveUserDetail?
If you're sending 100 requests at a time to your server, that's just going to take a while for the server to process. It would be best to find a way to combine them all into one request or into a very small number of requests. Some server APIs have efficient ways of doing multiple queries in one request.
If you can't do that, then you should probably be sending them 5-10 at a time max so the server isn't being asked to handle so many simultaneous requests, which causes your additional request to go to the end of the line and take too long to process. That will allow you to send other things and get them processed while you're chunking away on the 100, without waiting for all of them to finish.
If this is being done from a browser, you also have some browser safeguard limitations to deal with, where the browser refuses to send more than N requests to the same host at a time. If you send more than that, it queues them up and holds onto them until some prior requests have completed. This keeps one client from massively overwhelming the server, but it also creates a long line of requests that any new request has to go to the end of. The way to deal with that is to never send more than a small number of requests to the same host at a time; then that queue/line will be short when you want to send a new request.
You can look at these snippets of code that let you process an array of data N-at-a-time rather than all at once. Each of these has slightly different control options so you can decide which one fits your problem the best.
mapConcurrent() - Process an array with no more than N requests in flight at the same time
pMap() - Similar to mapConcurrent with more argument checking
rateLimitMap() - Process max of N requestsPerSecond
runN() - Allows you to continue processing upon error
These all replace both Promise.all() and whatever code you had for iterating your data, launching all the requests and collecting the promises into an array. Each function takes an input array of data and a function to call that is passed one item of data and returns a promise resolving to the result of that request, and each returns a promise that resolves to an array of results in the original array order (the same return value as Promise.all()).
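As a rough illustration of the idea (this is not any of the linked implementations, just a minimal sketch; createParticipant is assumed to return a promise, as in the question):

// Process items with at most `limit` requests in flight at the same time.
// Resolves to an array of results in the original order.
async function mapConcurrent(items, limit, fn) {
  const results = new Array(items.length);
  let nextIndex = 0;

  async function worker() {
    while (nextIndex < items.length) {
      const i = nextIndex++;
      results[i] = await fn(items[i], i);
    }
  }

  // Start `limit` workers that pull items off a shared index.
  const workers = [];
  for (let w = 0; w < Math.min(limit, items.length); w++) {
    workers.push(worker());
  }
  await Promise.all(workers);
  return results;
}

// Usage: send participants 5 at a time instead of all 100 at once.
// const results = await mapConcurrent(participantList, 5, p => createParticipant(p));

Note that, like Promise.all(), this sketch rejects on the first error; wrap the fn(...) call in a try/catch inside worker if you want Promise.allSettled-style behavior.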
I've got a small Express JS API that I'm building to handle and process multiple incoming requests from the browser, and I'm having some trouble figuring out the best approach to handle them.
The use case is that there's a form, with potentially up to 30 or so people submitting form data to the Express JS API at any given time. The API then POSTs this data off to another service using axios, and each request needs to return a response back to the browser of the person that submitted the data. My endpoint so far is:
app.post('/api/process', (req, res) => {
if (!req.body) {
res.status(400).send({ code: 400, success: false, message: "No data was submitted" })
return
}
const application = req.body.Application
axios.post('https://example.com/api/endpoint', application)
.then(response => {
res.status(200).send({ code: 200, success: true, message: response })
})
.catch(error => {
res.status(200).send({ code: 200, success: false, message: error })
});
})
If John and James submit form data from different browsers to my Express JS api, which is forwarded to another api, I need the respective responses to go back to the respective browsers...
To be clear: the response to a request is only sent to that requester. But if you want to accept a processing request and reply with something like "hey, I received your request, and you can use another GET route to fetch the result later", then you need a way to identify which job is meant. So you can generate a UUID when the server receives a process request and send it back as the response: "Hey, I received your process request; you can check the result of the processing later, and this UUID is your reference code." Then the client passes that UUID as a path or query parameter and the server returns the corresponding result.
This is also the usual pattern when you are using WebSocket: the client sends a process request, the server sends back a UUID reference code, and some time later the server pushes the result to the requester's WebSocket, saying "Hey, this is the result of the process with that UUID reference code."
I hope that's clear enough.
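A minimal sketch of that pattern in Express might look like this (the /api/process and /api/result routes, the jobs map, and doTheActualWork are made-up names for illustration, not part of the original code):

const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

// In-memory job store; a real app would use a DB or cache instead.
const jobs = new Map();

app.post('/api/process', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: 'pending', result: null });

  // Kick off the work without awaiting it, then reply immediately.
  doTheActualWork(req.body)
    .then((result) => jobs.set(jobId, { status: 'done', result }))
    .catch((err) => jobs.set(jobId, { status: 'failed', result: err.message }));

  res.status(202).send({ jobId });
});

app.get('/api/result/:jobId', (req, res) => {
  const job = jobs.get(req.params.jobId);
  if (!job) return res.status(404).send({ message: 'unknown job' });
  res.send(job);
});

// Placeholder for whatever the long-running processing actually is.
async function doTheActualWork(data) {
  return { echoed: data };
}

app.listen(3000);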
I am setting up a new HTTP server to execute a long command and return the response from that shell command to the client.
I'm running Express v4.17.1. Requests from clients repeatedly time out when running this command. (I app.use(cors()), if that makes any difference.)
app.get("/dl", (req, res) => {
require("child_process").exec("command -url".concat(req.query.url), (err, stdout, stderr) => {
if (err || stderr) return res.status(500).send(`err: ${err.message}, stderr: ${stderr}`);
res.status(200).send(stdout);
});
});
Browsers just time out when I try to run this command because it just takes A LONG TIME. If I can't use 102 Processing that's fine, I would just like another solution. Thanks!
I'd suggest not using an HTTP 102. You can read more about why: https://softwareengineering.stackexchange.com/a/316211/79958
I'd also STRONGLY recommend against your current logic using a query parameter. Someone could inject commands that would be executed on the server.
"If I can't use 102 Processing..."
Don't use 102 Processing as it is designed specifically for WebDAV. Please check RFC2518 for detail information.
"I would like another solution"
You can return 200 OK for GET /dl once the HTTP request is received and the child process is launched, indicating: "Hey, client, I've received your request and started the job successfully":
app.get("/dl", (req, res) => {
require("child_process").exec("command -url".concat(req.query.url));
res.status(200).end();
});
Then, in the child process, save the execution result somewhere (in a file, in a DB, etc.), and map the result to the query url:
query url A --> child process result A
query url B --> child process result B
query url C --> child process failed information
On the client side, after receiving 200 OK for the GET /dl request, start polling: send a request to the server every 5 seconds (or whatever time interval you need), with the previously submitted query url as a parameter, trying to look up its result in the above mapping (see the sketch after this list). It would be:
If the result is found in the above mapping, the client gets what it wants and stops polling.
If nothing is found in the above mapping, the client polls again after another 5 seconds.
If failure information is found, or polling times out, the client gives up, stops polling, and displays the error message.
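A rough sketch of that flow, assuming a made-up in-memory results map keyed by the query url and a hypothetical GET /result route (neither is part of the original code):

const express = require('express');
const { exec } = require('child_process');

const app = express();

// url -> { status: 'pending' | 'done' | 'failed', output: string }
const results = new Map();

app.get('/dl', (req, res) => {
  const url = req.query.url;
  results.set(url, { status: 'pending', output: '' });

  exec(`command -url ${url}`, (err, stdout, stderr) => {
    if (err || stderr) {
      results.set(url, { status: 'failed', output: stderr || String(err) });
    } else {
      results.set(url, { status: 'done', output: stdout });
    }
  });

  // Reply right away; the client will poll /result for the outcome.
  res.status(200).end();
});

app.get('/result', (req, res) => {
  const entry = results.get(req.query.url);
  if (!entry) return res.status(404).end();
  res.send(entry);
});

app.listen(3000);

(This sketch keeps the command string from the question, so it still has the command-injection problem pointed out above; a real implementation should validate or escape req.query.url.)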
I have a node.js service and an Angular client using socket.io to transport some messages during a long-running http request.
Service:
export const socketArray: SocketIO.Socket[] = [];
export let socketMapping: {[socketId: string]: number} = {};
const socketRegister: hapi.Plugin<any> = {
register: (server) => {
const io: SocketIO.Server = socket(server.listener);
// Whenever a session connected to socket, create a socket object and add it to socket array
io.on("connection", (socket) => {
console.log(`socket ${socket.id} connected`);
logger.info(`socket ${socket.id} connected`);
// Only put socket object into array if init message received
socket.on("init", msg => {
logger.info(`socket ${socket.id} initialized`);
socketArray.push(socket);
socketMapping[socket.id] = msg;
});
// Remove socket object from socket array when disconnected
socket.on("disconnect", (reason) => {
console.log(`socket ${socket.id} disconnected because: ${reason}`)
logger.info(`socket ${socket.id} disconnected because: ${reason}`);
for(let i = 0; i < socketArray.length; i ++) {
if(socketArray[i] === socket) {
socketArray.splice(i, 1);
return;
}
}
});
});
},
name: "socketRegister",
version: "1.0"
}
export const socketSender = async (socketId: string, channel: string, content: SocketMessage) => {
try {
// Add message to db here
// await storeMessage(socketMapping[socketId], content);
// Find corresponding socket and send message
logger.info(`trying sending message to ${socketId}`);
for (let i = 0; i < socketArray.length; i ++) {
if (socketArray[i].id === socketId) {
socketArray[i].emit(channel, JSON.stringify(content));
logger.info(`socket ${socketId} send message to ${channel}`);
if (content.isFinal == true) {
// TODO: delete all messages of the process if isFinal is true
await deleteProcess(content.processId);
}
return;
}
}
} catch (err) {
logger.error("Socket sender error: ", err.message);
}
};
Client:
connectSocket() {
if (!this.socket) {
try {
this.socket = io(socketUrl);
this.socket.emit('init', 'some-data');
} catch (err) {
console.log(err);
}
} else if (this.socket.disconnected) {
this.socket.connect();
this.socket.emit('init', 'some-data');
}
this.socket.on('some-channel', (data) => {
// Do something
});
this.socket.on('disconnect', (data) => {
console.log(data);
});
}
They usually work fine but produce disconnection errors randomly. From my log file, we can see this:
2018-07-21T00:20:28.209Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN connected
2018-07-21T00:20:28.324Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN initialized
2018-07-21T00:21:48.314Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN disconnected because: ping timeout
2018-07-21T00:21:50.849Z[x]INFO: socket C6O7Vq38ygNiwGHcAAAO connected
2018-07-21T00:23:09.345Z[x]INFO: trying sending message to C6O7Vq38ygNiwGHcAAAO
And at the same time as the disconnect message, the front-end also saw a disconnect event saying transport close.
From the log, the workflow is this:
The front-end started a socket connection and sent an init message to the back-end. It also saved the socket.
The back-end detected the connection and received the init message.
The back-end put the socket into the array so that it can be used anytime, anywhere.
The first socket was disconnected unexpectedly and another connection was established without the front-end's awareness, so the front-end never sent an init message for it.
Since the front-end's saved socket did not change, it used the old socket id when making the http request. As a result, the back-end sent the message with the old socket, which had already been removed from the socket array.
The situation doesn't happen frequently. Does anyone know what could cause the disconnect and unknown connect issue?
It really depends what "long time http request" is doing. node.js runs your Javascript as a single thread. That means it can literally only do one thing at a time. But, since many things that servers do are I/O related (read from a database, get data from a file, get data from another server, etc...) and node.js uses event-driven asynchronous I/O, it can often have many balls in the air at the same time so it appears to be working on lots of requests at once.
But, if your complex http request is CPU-intensive, using lots of CPU, then it's hogging the single Javascript thread and nothing else can get done while it is hogging the CPU. That means that all incoming HTTP or socket.io requests have to wait in a queue until the one node.js Javascript thread is free so it can grab the next event from the event queue and start to process that incoming request.
We could only really help you more specifically if we could see the code for this "very complex http request".
The usual way around CPU-hogging things in node.js is to offload CPU-intensive stuff to other processes. If it's mostly just this one piece of code that causes the problem, you can spin up several child processes (perhaps as many as the number of CPUs you have in your server) and then feed them the CPU-intensive work and leave your main node.js process free to handle incoming (non-CPU-intensive) requests with very low latency.
If you have multiple operations that might hog the CPU, then you either have to farm them all out to child processes (probably via some sort of work queue) or you can deploy clustering. The challenge with clustering is that a given socket.io connection will be to one particular server in your cluster and if it's that process that just happens to be executing a CPU-hogging operation, then all the socket.io connections assigned to that server would have bad latency. So, regular clustering is probably not so good for this type of issue. The work-queue and multiple specialized child processes to handle CPU-intensive work are probably better because those processes won't have any outside socket.io connections that they are responsible for.
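As a hedged illustration of the offloading approach described above (the worker.js filename, runCpuHeavyJob, and doExpensiveWork are invented for this sketch, not part of the original code):

// main.js - keep the main node.js process free for I/O and socket.io traffic
const { fork } = require('child_process');

function runCpuHeavyJob(payload) {
  return new Promise((resolve, reject) => {
    const worker = fork('./worker.js');
    worker.once('message', (result) => {
      resolve(result);
      worker.kill();
    });
    worker.once('error', reject);
    worker.send(payload);
  });
}

// worker.js - does the CPU-intensive part and reports back to the parent
process.on('message', (payload) => {
  const result = doExpensiveWork(payload);
  process.send(result);
});

// Stand-in for the real CPU-heavy computation.
function doExpensiveWork(payload) {
  return payload;
}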
Also, you should know that if you're using synchronous file I/O, that blocks the entire node.js Javascript thread. node.js cannot run any other Javascript during a synchronous file I/O operation. node.js gets its scalability and its ability to have many operations in flight at the same time from its asynchronous I/O model. If you use synchronous I/O, you completely break that and ruin scalability and responsiveness.
Synchronous file I/O belongs only in server startup code or in a single purpose script (not a server). It should never be used while processing a request in a server.
Two ways to make asynchronous file I/O a little more tolerable are by using streams or by using async/await with promisified fs methods.
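For example, a minimal sketch of the promisified-fs approach (config.json is just a hypothetical file for illustration):

const fs = require('fs/promises');

async function loadConfig() {
  // Non-blocking: the event loop stays free while the file is read.
  const text = await fs.readFile('./config.json', 'utf8');
  return JSON.parse(text);
}

loadConfig()
  .then((config) => console.log('loaded config', config))
  .catch((err) => console.error('failed to load config', err));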
I'm trying to use streams in Node.js to basically build a running buffer of HTTP data until some processing is done, but I'm struggling with the specifics of streams. Some pseudocode will probably help:
var server = http.createServer(function(request, response) {
// Create a buffer stream to hold data generated by the asynchronous process
// to be piped to the response after headers and other obvious response data
var buffer = new http.ServerResponse();
// Start the computation of the full response as soon as possible, passing
// in the buffer stream to hold returned data until headers are written
beginAsyncProcess(request, buffer);
// Send headers and other static data while waiting for the full response
// to be generated by 'beginAsyncProcess'
sendObviousData(response, function() {
// Once obvious data is written (unfortunately HTTP and Node.js have
// certain requirements for the order data must be written in) then pipe
// the stream with the data from 'beginAsyncProcess' into the response
buffer.pipe(response);
});
});
Most of this is almost legitimate code, but it doesn't work. The basic issue is figuring out a way to take advantage of the asynchronous nature of Node.js when there are certain order requirements associated with HTTP requests, namely that headers must always be written first.
While I would definitely appreciate any answers with little hacks to get around the order problem without directly addressing streams, I wanted to use the opportunity to get to know them better. There are plenty of similar situations, but this scenario is more to open the can of worms than anything else.
Let's make use of callbacks and streams in Node.js, along with the .pause() / .resume() stream functions:
var server = http.createServer(function(request, response) {
// Handle the request first, then..
var body = new Stream(); // <-- you can implement stream.Duplex for read / write operations
body.on('open', function(){
body.pause();
// API generate data
// body.write( generated data ) <-- write to the stream
body.resume();
});
var firstPartOfThePage = getHTMLSomeHow();
response.writeHead(200, { 'Content-Type': 'text/html'});
response.write(firstPartOfThePage, function(){ // <-- callback after sending first part, our body already being processed
body.pipe( response ); // <-- This should fire after being resumed
body.on('end', function(){
response.end(); // <-- end the response
});
});
});
Check this: http://codewinds.com/blog/2013-08-31-nodejs-duplex-streams.html for custom duplex stream creation.
Note: it's still pseudo code.
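As an alternative, here is a hedged sketch of the same idea using the built-in stream.PassThrough as the buffer (beginAsyncProcess is kept as a placeholder from the question, and the static HTML strings stand in for sendObviousData):

const http = require('http');
const { PassThrough } = require('stream');

const server = http.createServer((request, response) => {
  // A PassThrough buffers whatever is written to it until it is piped somewhere.
  const buffer = new PassThrough();

  // Start generating the body right away; it accumulates inside `buffer`.
  beginAsyncProcess(request, buffer);

  // Write headers and the static part of the page first...
  response.writeHead(200, { 'Content-Type': 'text/html' });
  response.write('<html><body>');

  // ...then let the buffered body flow into the response. When the producer
  // calls buffer.end(), the 'end' event fires and we close the response.
  buffer.pipe(response, { end: false });
  buffer.on('end', () => response.end('</body></html>'));
});

// Placeholder for the asynchronous work from the question.
function beginAsyncProcess(request, writable) {
  setTimeout(() => {
    writable.write('<p>generated content</p>');
    writable.end();
  }, 100);
}

server.listen(3000);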