HTTP request can't send a response immediately in Node.js server - javascript

I send JSON requests one by one to the Node.js server. After the 6th request, the server can't reply to the client immediately; it takes a while (15 seconds or a bit more) before it sends back a 200 OK. The request writes a JSON value into MongoDB, and response time matters to me for this REST call. How can I find the cause of the delay in this case (which tool or script could help me)? My server-side code is like this:
var controlPathDatabaseSave = "/save";

app.use('/', function(req, res) {
    console.log("req body app use", req.body);
    var str = req.path;
    if (str.localeCompare(controlPathDatabaseSave) == 0) {
        console.log("controlPathDatabaseSave");
        mongoDbHandleSave(req.body);
        res.setHeader('Content-Type', 'application/json');
        res.write('Message taken: \n');
        res.write('Everything all right with database saving');
        res.send("OK");
        console.log("response body", res.body);
    }
});
My client-side code is below:
function saveDatabaseData() {
    console.log("saveDatabaseData");
    var oReq = new XMLHttpRequest();
    oReq.open("POST", "http://192.168.80.143:2800/save", true);
    oReq.setRequestHeader("Content-type", "application/json;charset=UTF-8");
    oReq.onreadystatechange = function() { // Call a function when the state changes.
        if (oReq.readyState == 4 && oReq.status == 200) {
            console.log("http responseText", oReq.responseText);
        }
    };
    oReq.send(JSON.stringify({links: links, nodes: nodes}));
}
--MongoDB save code
function mongoDbHandleSave(reqParam) {
    // Connect to the db
    MongoClient.connect(MongoDBURL, function(err, db) {
        if (!err) {
            console.log("We are connected in accordance with saving");
        } else {
            return console.dir(err);
        }
        /*
        db.createCollection('user', {strict: true}, function(err, collection) {
            if (err)
                return console.dir(err);
        });
        */
        var collection = db.collection('user');
        // when saving into database only use req.body. Skip JSON.stringify() function
        var doc = reqParam;
        collection.update(doc, doc, {upsert: true});
    });
}
You can see my REST calls in the Google Chrome developer tools. (The first six calls return 200 OK; the last one stays in the pending state.)
--Client output
--Server output
Thanks in advance,

Since it looks like these are Ajax requests from a browser, each browser has a limit on the number of simultaneous connections it will allow to the same host. Browsers have varied that setting over time, but it is likely in the 4-6 range. So, if you are trying to run 6 simultaneous ajax calls to the same host, then you may be running into that limit. What the browser does is hold off on sending the latest ones until the first ones finish (thus avoiding sending too many at once).
The general idea here is to protect servers from being hit too hard by one single client and thus allow the load to be shared more fairly across many clients. Of course, if your server has nothing else to do, it doesn't really need protecting from a few more connections, but the browser doesn't adapt to that; the limit is simply hard-wired.
If there are any other requests in flight (loading images or scripts or CSS stylesheets) to the same origin, those will count toward the limit too.
If you run this in Chrome and you open the network tab of the debugger, you can actually see on the timeline exactly when a given request was sent and when its response was received. This should show you immediately whether the later requests are being held up at the browser or at the server.
Here's an article on the topic: Maximum concurrent connections to the same domain for browsers.
Also, keep in mind that, depending upon what your requests do on the server and how the server is structured, there may be a maximum number of server requests that can be efficiently processed at once. For example, if you had a blocking, threaded server that was configured with one thread for each of four CPUs, then once the server has four requests going at once, it may have to queue the fifth request until the first one is done, causing it to be delayed more than the others.
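If the later requests are indeed being queued by the browser, one workaround is simply to issue them sequentially so you never have more in flight than the per-host limit allows. A minimal sketch, assuming the same /save endpoint and payload shape as in the question (the URL and payload list here are just placeholders):

async function saveSequentially(payloads) {
    for (const payload of payloads) {
        // each request is started only after the previous one has completed
        const res = await fetch('http://192.168.80.143:2800/save', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json;charset=UTF-8' },
            body: JSON.stringify(payload)
        });
        console.log('save status', res.status);
    }
}

Whether this helps depends on where the delay actually is; if the network tab shows the server itself taking 15 seconds, serializing the calls on the client won't change anything.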

Related

MySQL queries taking longer when response is empty

I'm building my first website using Node.js as the web server and fetch to make requests to the server. When the MySQL database is run from localhost, my requests take less than 30 ms each and there are no issues with the webpage. However, when the server is hosted somewhere else, I get very long response times ONLY when the data returned by the query is '[]'. I'm trying to figure out why this would be the case only when requesting from a non-localhost database (or why this is happening at all).
See the difference in response times between these two links:
{ link removed }
{ link removed }
You'll see that the one which returns '[]' takes significantly longer than the other.
Here's the serverside code used:
router.get('/getgames/:matchid/:season', (req, res) => {
    let sql = `SELECT * FROM games WHERE match_id='${req.params.matchid}' AND season='${req.params.season}'`;
    let query = db.query(sql, (err, result) => {
        if (err) throw err;
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(result));
    });
});
I don't think it's necessary to include the client-side code because the request takes just as long when the URL is put straight into the address bar.
It's also important to note that the tables being accessed have fewer than 10 rows, so I don't think indexing (or posting my EXPLAINs) would actually fix this problem (or would it?).
Edit: I've added a covering index to the 'games' table, and the query is still just as slow.
Thanks for your help.
For some reason I'm not sure of, Node seems to wait for the keep-alive connection to be closed before sending that empty array body. Disabling keep-alive mitigates the issue; see https://nodejs.org/api/http.html#http_server_keepalivetimeout (the default keep-alive timeout is 5 s).
I suppose a longer body avoids the issue, which is why the response returns immediately for queries with actual SQL results.
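As a rough sketch of what "disabling keep-alive" can look like on the Node side (assuming an Express-style app started with app.listen(); the port number is arbitrary):

const express = require('express');
const app = express();

// app.listen() returns the underlying http.Server
const server = app.listen(3000);

// the default keepAliveTimeout is 5000 ms; setting it to 0 disables the
// keep-alive idle timeout so idle connections are not held open
server.keepAliveTimeout = 0;

Treat this as a mitigation rather than a fix; it would still be worth finding out why the empty-body response interacts badly with keep-alive in the first place.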

Make HTTP requests without opening a new connection

If a browser opens a connection to a remote server, is it possible to access that same connection via Javascript?
I have a small Ethernet module on my network that I program sort of like this (pseudocode):
private var socket

while(true) {
    if(socket is disconnected) {
        open socket
        listen on socket (port 80)
    }
    if(connection interrupt) {
        connect socket
    }
    if(data receive interrupt) {
        serve
    }
    if(disconnection interrupt) {
        disconnect socket
    }
}
The point is that it listens on one socket for HTTP requests and serves them.
In my web browser, I can connect to the device, making an HTTP GET request for some HTML/JS that I've written, and it works. A connection is opened on the socket and the files come back as HTTP responses.
Now I want to click a button on the webpage and have the browser send an HTTP POST request over that same connection. In my Javascript, I have (edited and formatted for clarity):
// This function sends an HTTP request
function http(type, url, data, callbacks) {
    // make a new HTTP request
    var request = new XMLHttpRequest();
    // open a connection to the URL
    request.open(type, url + (data ? "?" + data : ""));
    // add headers
    if (type == "POST")
        request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    // register callbacks for the request
    Object.keys(callbacks).forEach(function(callback) {
        request[callback] = function() {
            callbacks[callback](request.responseText);
        };
    });
    // send and return the request
    request.send();
    return request;
}

// Here is where I call the function
http("POST",               // use POST method
    "http://192.168.1.99", // IP address of the network device
    dataToSend,            // the data that needs to be sent
    {                      // callbacks
        onloadend: function(data) {
            console.log("success. got: " + data); // print 'success' when the request is done
        },
        onerror: function(data) {
            console.log("There was an error."); // print 'error' when it fails
            console.log(data);
        }
    }
);
The issue here is that this opens a new connection to the device, but I want to use the same socket that the browser is already connected to. Is this possible, and if so, how?
There is no application control inside the browser to decide if a new connection is used for the next request or if an existing connection is used. In fact, it is perfectly normal that the browser will use multiple connections in parallel to the same server and your server has to be able to deal with this.
Since your server architecture seems to be able to deal with only one connection at a time, you either need to change the architecture to handle multiple parallel connections or make sure that you only ever need to handle a single connection at a time. The latter could be achieved by not supporting HTTP keep-alive, i.e. by closing the connection immediately after each response. That way a new request will result in a new connection (which is not what you asked for), but your server will also be able to handle that new connection (which is what you likely ultimately need), since the previous one was closed.
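To illustrate the keep-alive point (this is a plain Node.js sketch, not code for the Ethernet module itself): a server that answers with a Connection: close header tells the browser not to reuse the socket, so a one-connection-at-a-time server is freed up after every response.

const http = require('http');

http.createServer(function(req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/plain',
        'Connection': 'close'   // opt out of keep-alive for this response
    });
    res.end('ok\n');
}).listen(80);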

Socket.io disconnected unexpectedly

I have a Node.js service and an Angular client that use socket.io to transport messages during a long-running HTTP request.
Service:
export const socketArray: SocketIO.Socket[] = [];
export let socketMapping: {[socketId: string]: number} = {};

const socketRegister: hapi.Plugin<any> = {
    register: (server) => {
        const io: SocketIO.Server = socket(server.listener);

        // Whenever a session connected to socket, create a socket object and add it to socket array
        io.on("connection", (socket) => {
            console.log(`socket ${socket.id} connected`);
            logger.info(`socket ${socket.id} connected`);

            // Only put socket object into array if init message received
            socket.on("init", msg => {
                logger.info(`socket ${socket.id} initialized`);
                socketArray.push(socket);
                socketMapping[socket.id] = msg;
            });

            // Remove socket object from socket array when disconnected
            socket.on("disconnect", (reason) => {
                console.log(`socket ${socket.id} disconnected because: ${reason}`);
                logger.info(`socket ${socket.id} disconnected because: ${reason}`);
                for (let i = 0; i < socketArray.length; i++) {
                    if (socketArray[i] === socket) {
                        socketArray.splice(i, 1);
                        return;
                    }
                }
            });
        });
    },
    name: "socketRegister",
    version: "1.0"
}

export const socketSender = async (socketId: string, channel: string, content: SocketMessage) => {
    try {
        // Add message to db here
        // await storeMessage(socketMapping[socketId], content);

        // Find corresponding socket and send message
        logger.info(`trying sending message to ${socketId}`);
        for (let i = 0; i < socketArray.length; i++) {
            if (socketArray[i].id === socketId) {
                socketArray[i].emit(channel, JSON.stringify(content));
                logger.info(`socket ${socketId} send message to ${channel}`);
                if (content.isFinal == true) {
                    // TODO: delete all messages of the process if isFinal is true
                    await deleteProcess(content.processId);
                }
                return;
            }
        }
    } catch (err) {
        logger.error("Socket sender error: ", err.message);
    }
};
Client:
connectSocket() {
    if (!this.socket) {
        try {
            this.socket = io(socketUrl);
            this.socket.emit('init', 'some-data');
        } catch (err) {
            console.log(err);
        }
    } else if (this.socket.disconnected) {
        this.socket.connect();
        this.socket.emit('init', 'some-data');
    }

    this.socket.on('some-channel', (data) => {
        // Do something
    });

    this.socket.on('disconnect', (data) => {
        console.log(data);
    });
}
They usually work fine but produce a disconnection error at random. From my log file, we can see this:
2018-07-21T00:20:28.209Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN connected
2018-07-21T00:20:28.324Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN initialized
2018-07-21T00:21:48.314Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN disconnected because: ping timeout
2018-07-21T00:21:50.849Z[x]INFO: socket C6O7Vq38ygNiwGHcAAAO connected
2018-07-21T00:23:09.345Z[x]INFO: trying sending message to C6O7Vq38ygNiwGHcAAAO
At the same time as the disconnect message, the front-end also noticed a disconnect event saying transport close.
From the log, the workflow is this:
The front-end started a socket connection and sent an init message to the back-end. It also saved the socket.
The back-end detected the connection and received the init message.
The back-end put the socket into the array so that it can be used anytime, anywhere.
The first socket was disconnected unexpectedly and another connection was established without the front-end's awareness, so the front-end never sent a message to initialize it.
Since the front-end's saved socket never changed, it used the old socket id when making the HTTP request. As a result, the back-end sent the message on the old socket, which had already been removed from the socket array.
The situation doesn't happen frequently. Does anyone know what could cause the disconnect and the unnoticed reconnect?
It really depends on what the "long time http request" is doing. node.js runs your Javascript as a single thread. That means it can literally only do one thing at a time. But, since many things that servers do are I/O related (read from a database, get data from a file, get data from another server, etc...) and node.js uses event-driven asynchronous I/O, it can often have many balls in the air at the same time, so it appears to be working on lots of requests at once.
But, if your complex http request is CPU-intensive, using lots of CPU, then it's hogging the single Javascript thread and nothing else can get done while it is hogging the CPU. That means that all incoming HTTP or socket.io requests have to wait in a queue until the one node.js Javascript thread is free so it can grab the next event from the event queue and start to process that incoming request.
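A tiny illustration of that effect (a hypothetical route, not your code): while the loop below is running, no other request or socket.io event on this process can be serviced.

const http = require('http');

http.createServer(function(req, res) {
    if (req.url === '/block') {
        // simulate a CPU-bound "complex request" by burning CPU for 5 seconds
        const end = Date.now() + 5000;
        while (Date.now() < end) { /* busy wait */ }
    }
    res.end('done\n');
}).listen(3000);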
We could only really help you more specifically if we could see the code for this "very complex http request".
The usual way around CPU-hogging things in node.js is to offload CPU-intensive stuff to other processes. If it's mostly just this one piece of code that causes the problem, you can spin up several child processes (perhaps as many as the number of CPUs you have in your server) and then feed them the CPU-intensive work and leave your main node.js process free to handle incoming (non-CPU-intensive) requests with very low latency.
If you have multiple operations that might hog the CPU, then you either have to farm them all out to child processes (probably via some sort of work queue) or you can deploy clustering. The challenge with clustering is that a given socket.io connection will be to one particular server in your cluster and if it's that process that just happens to be executing a CPU-hogging operation, then all the socket.io connections assigned to that server would have bad latency. So, regular clustering is probably not so good for this type of issue. The work-queue and multiple specialized child processes to handle CPU-intensive work are probably better because those processes won't have any outside socket.io connections that they are responsible for.
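A minimal sketch of the child-process approach, assuming a hypothetical worker script named heavy-job.js (the file name and message shape are made up for illustration):

// main process
const { fork } = require('child_process');

function runHeavyJob(payload) {
    return new Promise((resolve, reject) => {
        const worker = fork('./heavy-job.js');   // hypothetical worker script
        worker.once('message', (result) => {     // worker sends back its result
            worker.kill();
            resolve(result);
        });
        worker.once('error', reject);
        worker.send(payload);                    // hand the work to the child
    });
}

// heavy-job.js
process.on('message', (payload) => {
    // ...do the CPU-intensive work here...
    process.send({ done: true });
});

The main process stays free to handle socket.io traffic while the child does the heavy lifting; a real setup would typically reuse a pool of workers instead of forking per job.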
Also, you should know that if you're using synchronous file I/O, that blocks the entire node.js Javascript thread. node.js can not run any other Javascript during a synchronous file I/O operation. node.js gets its scalability and its ability to have many operations in flight at the same time from its asynchronous I/O model. If you use synchronous I/O, you completely break that and ruin scalability and responsiveness.
Synchronous file I/O belongs only in server startup code or in a single purpose script (not a server). It should never be used while processing a request in a server.
Two ways to make asynchronous file I/O a little more tolerable are by using streams or by using async/await with promisified fs methods.
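For example, a small sketch of the async/await style with the promisified fs API (the file path is just a placeholder):

const fs = require('fs').promises;

async function readConfig(path) {
    // readFile here does not block the event loop while the disk I/O is in flight
    const text = await fs.readFile(path, 'utf8');
    return JSON.parse(text);
}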

Node.js client request hangs

I have a node.js process that uses a large number of client requests to pull information from a website. I am using the request package (https://www.npmjs.com/package/request) since, as it says: "It supports HTTPS and follows redirects by default."
My problem is that after a certain period of time, the requests begin to hang. I haven't been able to determine if this is because the server is returning an infinite data stream, or if something else is going on. I've set the timeout, but after some number of successful requests, some of them eventually get stuck and never complete.
var options = { url: 'some url', timeout: 60000 };

request(options, function (err, response, body) {
    // process
});
My questions are: can I shut down a connection after a certain amount of data is received using this library, and can I stop the request from hanging? Do I need to use the http/https libraries and handle the redirects and protocol switching myself in order to get the kind of control I need? If I do, is there a standardized practice for that?
Edit: Also, if I stop the process and restart it, they pick right back up and start working, so I don't think it is related to the server or the machine the code is running on.
Note that with request(options, callback), the callback is fired only when the request has completed, and there is no way to break off the request from inside it.
You should listen on the data event instead:
var request = require('request');

var stream = request(options);
var len = 0;

stream.on('data', function(data) {
    // TODO process your data here

    // break the stream if len > 1000
    len += Buffer.byteLength(data);
    if (len > 1000) {
        stream.abort();
    }
});

Node.js http.ServerRequest response never arrives

I'm creating a reverse HTTP proxy using Node.js for fun. The code is pretty simple at the moment. It listens on 127.0.0.1:8080 for HTTP requests and forwards these to hostname.com; responses from hostname.com are then forwarded back to the client. Nothing fancy is done yet, such as rewriting redirect headers, etc. The code is as follows:
var http = require('http');

var server = http.createServer(function(request, response) {
    var proxy = http.createClient(8080, 'hostname.com');
    var proxyRequest = proxy.request(request.method, request.url, request.headers);

    proxyRequest.on('response', function(proxyResponse) {
        proxyResponse.on('data', function(chunk) {
            response.write(chunk, 'binary');
        });
        proxyResponse.on('end', function() {
            response.end();
        });
        response.writeHead(proxyResponse.statusCode, proxyResponse.headers);
    });

    request.on('data', function(chunk) {
        proxyRequest.write(chunk, 'binary');
    });
    request.on('end', function() {
        proxyRequest.end();
    });

    proxyRequest.on('close', function(err) {
        if (err) {
            console.log('close error: ' + err + ' for ' + request.url);
        }
    });
});

server.listen(8080);

server.on('clientError', function(exception) {
    console.log('boo a clientError occured :(');
});
All appears to work well until I browse to a page that requires many additional resources (such as images) to be fetched. Naturally the browser will generate a number of GET requests to the reverse proxy to fetch these additional resources.
When I do browse to such a page, some of the http.ServerRequests for the additional resources never receive responses. If I restart the page request it almost always succeeds, because all the resources that were successfully fetched on the first attempt were cached (hence the browser doesn't try to GET them again), so now the browser only needs to grab the few missing ones.
At a guess I would imagine I'm hitting some kind of connection limit although I'm not sure. Any help would be greatly appreciated!
If you set up Wireshark on the proxy, you'll almost certainly see what's happening. (Note that you may need a second machine for this, because some TCP/IP stacks don't provide anything that Wireshark can listen on for loopback traffic - see this)
I'm almost certain that the problem(s) you are running into here are all down to the Connection: header - proxies MUST parse this header and handle it correctly. At a guess, I would say your code is handling the first request in a Connection: keep-alive stream and ignoring the rest. As a proxy, you are supposed to parse and remove/replace this header, and any associated headers (in this case the Keep-Alive: header), before forwarding the request to the server.
If you want to build a HTTP/1.1 proxy, it's very important that you read RFC 2616 and adhere to the many, many rules that it places on their behaviour. The particular problem you are running into here is documented in section 14.10.
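As a rough sketch of what "parse and remove/replace this header" can look like in the proxy (the helper name and exact header list are illustrative, not a complete RFC 2616 implementation):

function stripHopByHopHeaders(headers) {
    var cleaned = Object.assign({}, headers);
    // headers named in the Connection header are hop-by-hop and must not be forwarded
    var listed = (cleaned['connection'] || '').split(',').map(function(h) {
        return h.trim().toLowerCase();
    }).filter(function(h) { return h; });
    var hopByHop = ['connection', 'keep-alive', 'proxy-authenticate', 'proxy-authorization',
                    'te', 'trailers', 'transfer-encoding', 'upgrade'].concat(listed);
    hopByHop.forEach(function(name) {
        delete cleaned[name];
    });
    return cleaned;
}

// before forwarding:
// var proxyRequest = proxy.request(request.method, request.url, stripHopByHopHeaders(request.headers));

Whether the proxy then keeps its own connections alive to the client and to the upstream server is a separate decision from which headers it forwards.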
