I am setting up a new HTTP server to execute a long-running shell command and return its output to the client.
I run Express v4.17.1. Client requests have repeatedly timed out while this command runs. (I call app.use(cors()), if that makes any difference.)
app.get("/dl", (req, res) => {
require("child_process").exec("command -url".concat(req.query.url), (err, stdout, stderr) => {
if (err || stderr) res.status(500).send(`err: ${err.message}, stderr: ${stderr}`);
res.status(200).send(stdout);
}
});
Browsers just time out when I run this command because it takes a LONG time. If I can't use 102 Processing that's fine; I would just like another solution. Thanks!
I'd suggest not using an HTTP 102. You can read more about why: https://softwareengineering.stackexchange.com/a/316211/79958
I'd also STRONGLY recommend against your current logic, which passes a query parameter straight into exec(): someone could inject shell commands that would be executed on the server.
"If I can't use 102 Processing..."
Don't use 102 Processing, as it was designed specifically for WebDAV. See RFC 2518 for details.
"I would like another solution"
You can return 200 OK for GET /dl once the HTTP request is received and the child process is launched, indicating: "Hey, client, I've received your request and started the job successfully":
app.get("/dl", (req, res) => {
require("child_process").exec("command -url".concat(req.query.url));
res.status(200).end();
});
Then, in the child process callback, save the execution result somewhere (in a file, in a DB, etc.), mapping each query URL to its result:
query url A --> child process result A
query url B --> child process result B
query url C --> child process failed information
On the client side, after receiving 200 OK for the GET /dl request, start polling: send a request to the server every 5 seconds (or whatever interval you need), with the previously submitted query URL as a parameter, trying to look up its result in the above mapping. It would be:
If the result is found in the mapping, the client gets what it wants and stops polling.
If nothing is found yet, the client polls again after another 5 seconds.
If failure information is found, or polling times out, the client gives up, stops polling, and displays the error message.
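Here's a minimal sketch of that pattern in Express. The in-memory results map and the /result route are my own illustrative names, not from the original code, and the URL would still need validation before being passed to exec:

const { exec } = require("child_process");
const results = {}; // query url --> { done, stdout, error }

app.get("/dl", (req, res) => {
  const url = req.query.url; // note: still needs validation/escaping before reaching exec
  results[url] = { done: false };
  exec("command -url".concat(url), (err, stdout, stderr) => {
    results[url] = { done: true, stdout: stdout, error: (err || stderr) ? String(err || stderr) : null };
  });
  res.status(200).end(); // job started
});

app.get("/result", (req, res) => {
  const entry = results[req.query.url];
  if (!entry || !entry.done) return res.status(404).end(); // not ready: keep polling
  if (entry.error) return res.status(500).send(entry.error);
  res.status(200).send(entry.stdout);
});

On the client, a setInterval that fetches /result?url=... every 5 seconds and stops on anything other than a 404 completes the loop.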
I've got a small Express.js API that I'm building to handle and process multiple incoming requests from the browser, and I'm having some trouble figuring out the best approach.
The use case: there's a form, with potentially up to 30 or so people submitting form data to the Express.js API at any given time. The API then POSTs this data off to another service using axios, and each request needs to return a response back to the browser of the person who submitted the data. My endpoint so far is:
app.post('/api/process', (req, res) => {
  if (!req.body) {
    res.status(400).send({ code: 400, success: false, message: "No data was submitted" })
    return
  }
  const application = req.body.Application
  axios.post('https://example.com/api/endpoint', application)
    .then(response => {
      res.status(200).send({ code: 200, success: true, message: response })
    })
    .catch(error => {
      res.status(200).send({ code: 200, success: false, message: error })
    });
})
If John and James submit form data from different browsers to my Express.js API, and it is forwarded to another API, I need the respective responses to go back to the respective browsers...
Let's make this clear: the response to a request is only sent to that requester. But if you want to accept a processing request and reply with something like, "Hey, I received your request; you can use another GET route to fetch the result later," then you need a way to identify which job you mean. So you can generate a UUID when the server receives a processing request and send it back in the response: "Hey, I received your request; you can check the result of processing later, and this UUID is your reference code." The client then passes that UUID as a path or query parameter, and the server returns the matching result.
This is also the usual pattern with WebSockets: send a processing request to the server, the server sends back a UUID reference code, and some time later the server pushes the result to the requester's WebSocket, saying, "Hey, this is the result of the job with that UUID reference code."
I hope that's clear enough.
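A minimal sketch of that UUID pattern, using the endpoint from the question; the in-memory jobs map, the /api/result route, and crypto.randomUUID() (Node 14.17+) are my own choices, not from the original answer:

const crypto = require("crypto");
const jobs = {}; // UUID --> { done, result }

app.post("/api/process", (req, res) => {
  const id = crypto.randomUUID();
  jobs[id] = { done: false };
  axios.post("https://example.com/api/endpoint", req.body.Application)
    .then(response => { jobs[id] = { done: true, result: response.data }; })
    .catch(error => { jobs[id] = { done: true, result: { error: String(error) } }; });
  res.status(202).send({ id }); // reply immediately with the reference code
});

app.get("/api/result/:id", (req, res) => {
  const job = jobs[req.params.id];
  if (!job) return res.status(404).end(); // unknown reference code
  if (!job.done) return res.status(202).send({ pending: true }); // still processing
  res.status(200).send(job.result);
});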
We need to reply to a notification that comes in through Zapier with HTTP code 200 and a body of 'OK'.
Is it possible to use the following code in Zapier:
var http = require('http');
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.end('OK');
}).listen(80);
It returns an error:
Error: You did not define `output`! Try `output = {id: 1, hello: "world"};`
And the reply doesn't work.
David here, from the Zapier Platform team.
To cut to the chase - though it might be possible to start an http server (there's no reason it wouldn't be, as far as I know), it's not going to do what it seems like you're hoping to do. Namely, you can't send a custom response to an incoming webhook. From the docs:
There is no way to customize the response to the request you send to the Catch Hook URL, as the response is sent before the Zap triggers and runs on the webhook request.
If you need behavior like that, I'd suggest running a webserver.
The specific Code step error you're seeing has to do with not defining `output` in the function. Something goes in and something must come out. You can customize the output based on the input and use that output, but something has to be returned from the function (even if it's just `{}`).
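For example, a Code step body can be as small as this (the field name here is just a placeholder):

// Whatever you assign to `output` becomes the step's result.
// It must be set even if you have nothing meaningful to return.
output = { ok: true };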
I send JSON requests one by one to the Node.js server. After the 6th request, the server can't reply to the client immediately; it takes a while (15 seconds or a bit more) before it sends back a 200 OK. Each request writes a JSON value into MongoDB, and response time matters for my REST calls. How can I find the error in this case? (Which tool or script could help me?) My server-side code is like this:
var controlPathDatabaseSave = "/save";
app.use('/', function(req, res) {
  console.log("req body app use", req.body);
  var str = req.path;
  if (str.localeCompare(controlPathDatabaseSave) == 0)
  {
    console.log("controlPathDatabaseSave");
    mongoDbHandleSave(req.body);
    res.setHeader('Content-Type', 'application/json');
    res.write('Message taken: \n');
    res.write('Everything all right with database saving');
    res.end(); // use end() here: send() after write() conflicts with the streamed body
    console.log("response body", res.body); // note: res.body is undefined on a response
  }
  // note: requests to any other path never receive a response and will hang
});
My client-side code is as below:
function saveDatabaseData()
{
  console.log("saveDatabaseData");
  var oReq = new XMLHttpRequest();
  oReq.open("POST", "http://192.168.80.143:2800/save", true);
  oReq.setRequestHeader("Content-type", "application/json;charset=UTF-8");
  oReq.onreadystatechange = function() { // call a function when the state changes
    if (oReq.readyState == 4 && oReq.status == 200) {
      console.log("http responseText", oReq.responseText);
    }
  }
  oReq.send(JSON.stringify({links: links, nodes: nodes}));
}
--Mongodb save code
function mongoDbHandleSave(reqParam){
  // Connect to the db
  MongoClient.connect(MongoDBURL, function(err, db)
  {
    if(!err)
    {
      console.log("We are connected in accordance with saving");
    } else
    {
      return console.dir(err);
    }
    /*
    db.createCollection('user', {strict:true}, function(err, collection) {
      if(err)
        return console.dir(err);
    });
    */
    var collection = db.collection('user');
    // when saving into database only use req.body; skip JSON.stringify()
    var doc = reqParam;
    collection.update(doc, doc, {upsert:true});
    // note: a new connection is opened on every request and never closed here
  });
}
You can see my REST calls in the Google Chrome developer tools. (The first six calls get 200 OK; the last one is stuck in a pending state.)
--Client output
--Server output
Thanks in advance,
Since it looks like these are Ajax requests from a browser, each browser has a limit on the number of simultaneous connections it will allow to the same host. Browsers have varied that setting over time, but it is likely in the 4-6 range. So, if you are trying to run 6 simultaneous ajax calls to the same host, then you may be running into that limit. What the browser does is hold off on sending the latest ones until the first ones finish (thus avoiding sending too many at once).
The general idea here is to protect servers from getting beat up too much by one single client and thus allow the load to be shared across many clients more fairly. Of course, if your server has nothing else to do, it doesn't really need protecting from a few more connections, but this isn't an interactive system, it's just hard-wired to a limit.
If there are any other requests in process (loading images or scripts or CSS stylesheets) to the same origin, those will count toward the limit too.
If you run this in Chrome and open the network tab of the debugger, you can actually see on the timeline exactly when a given request was sent and when its response was received. This should show you immediately whether the later requests are being held up at the browser or at the server.
Here's an article on the topic: Maximum concurrent connections to the same domain for browsers.
Also, keep in mind that, depending upon what your requests do on the server and how the server is structured, there may be a maximum number of server requests that can be efficiently processed at once. For example, if you had a blocking, threaded server configured with one thread for each of four CPUs, then once the server has four requests going at once, it may have to queue the fifth request until the first one is done, causing it to be delayed more than the others.
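If you want to see the limit in action, here is a standalone sketch (my own, not from the question): a server that holds every response for 5 seconds and serves a page firing 10 parallel fetches. In the network tab, requests beyond the per-host limit sit in a pending/stalled state until earlier ones finish.

var http = require('http');
http.createServer(function(req, res) {
  if (req.url === '/') {
    // page that fires 10 parallel requests to the same host
    res.setHeader('content-type', 'text/html');
    return res.end('<script>for (var i = 0; i < 10; i++) fetch("/slow?i=" + i);</script>');
  }
  setTimeout(function() { res.end('done'); }, 5000); // hold each connection for 5s
}).listen(8080);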
Sockets, unlike HTTP, don't have anything like req and res; it is always:
client.on('data', function(data)...
The event fires whenever there is data on the stream.
Now I want to do server-to-server communication. I'm writing a game where I'm going to have a main server, and this main server communicates with the game's desktop client.
One server is a world server and the other is a login server. The client connects directly to the world server, and if the data is login data, the world server passes it on to the login server.
But I can't wrap my head around how to do this in Node. As a former web dev I can only think of:
login.send(dataToSendToOtherServer, function(responseOfOtherServer) {
  if (responseOfOtherServer === 1)
    client.write(thisDataIsGoingToTheDesktopClient)
})
So how can I do something like this for the sockets in node.js?
I tried something like:
Client.prototype.send = function(data, cb) {
  // convert json to string
  var obj = JSON.stringify(data)
  this.client.write(obj)
  // wait for the response of this request
  // (note: this adds ANOTHER 'data' listener on every send() call)
  this.client.on('data', function(req) {
    var request = JSON.parse(req)
    // return response as callback
    if (data.type === request.type) cb(request)
  })
}
But with this, each request's callback fires one more time than the last, because every send() adds another 'data' listener.
Since you're dealing with plain TCP/IP, you need to come up with your own higher-level protocol to specify things like how to determine when a message is complete (since TCP offers no guarantee it will all arrive in one gulp). Common ways of dealing with this are:
Fixed-length messages: buffer up received data until it's the right length.
Prefixing each message with a length count: buffer up received data until the specified length has been reached.
Designating some character or sequence as an end-of-message indicator: buffer up received data until it ends with that sequence/character.
In your case, you could buffer up received data until JSON.parse succeeds on the accumulated data, assuming each message consists of legal JSON.
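As a sketch of the length-prefix approach (my own illustration; the 4-byte header and the helper names are assumptions, not an established protocol):

// Each message: a 4-byte big-endian length, then that many bytes of JSON.
function sendMessage(socket, obj) {
  var body = Buffer.from(JSON.stringify(obj));
  var header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  socket.write(Buffer.concat([header, body]));
}

function onMessages(socket, handler) {
  var buffered = Buffer.alloc(0);
  socket.on('data', function(chunk) {
    buffered = Buffer.concat([buffered, chunk]);
    // peel off as many complete messages as the buffer currently holds
    while (buffered.length >= 4) {
      var len = buffered.readUInt32BE(0);
      if (buffered.length < 4 + len) break; // wait for the rest of the message
      handler(JSON.parse(buffered.slice(4, 4 + len).toString()));
      buffered = buffered.slice(4 + len);
    }
  });
}

Registering the 'data' listener once (instead of once per send) also avoids the duplicated-callback problem from the question; matching responses to requests can then be done with an id field inside each message.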
Looking at the example given at the Node.js domain doc page, http://nodejs.org/api/domain.html, the recommended way to restart a worker using cluster is to first call disconnect in the worker, and listen for the disconnect event in the master. However, if you just copy/paste the example given, you will notice that the disconnect() call does not shut down the current worker:
What happens here is:
try {
  var killtimer = setTimeout(function() {
    process.exit(1);
  }, 30000);
  killtimer.unref();
  server.close();
  cluster.worker.disconnect();
  res.statusCode = 500;
  res.setHeader('content-type', 'text/plain');
  res.end('Oops, there was a problem!\n');
} catch (er2) {
  console.error('Error sending 500!', er2.stack);
}
I do a GET request at /error
A timer is started: in 30s the process will be killed if it hasn't exited already
The HTTP server is shut down
The worker is disconnected (but still alive)
The 500 page is displayed
I do a second GET request at /error (before the 30s elapse)
A new timer is started
The server is already closed => an error is thrown
The error is caught in the "catch" block and no result is sent back to the client, so on the client side the page waits without any message.
In my opinion, it would be better to just kill the worker, and listen for the 'exit' event in the master to fork again. This way, the 500 error is always sent during an error:
try {
  var killtimer = setTimeout(function() {
    process.exit(1);
  }, 30000);
  killtimer.unref();
  server.close();
  res.statusCode = 500;
  res.setHeader('content-type', 'text/plain');
  res.end('Oops, there was a problem!\n');
  cluster.worker.kill();
} catch (er2) {
  console.error('Error sending 500!', er2);
}
I'm not sure about the side effects of using kill instead of disconnect, but it seems disconnect waits for the server to close; however, that does not seem to be working (at least not as it should).
I would just like some feedback about this. There could be a good reason this example is written this way that I've missed.
Thanks
EDIT:
I've just checked with curl, and it works well.
However, I was previously testing with Chrome, and it seems that after receiving the 500 response, Chrome makes a second request BEFORE the server actually finishes closing.
In this case, the server is closing but not yet closed (which means the worker is also disconnecting but not yet disconnected), so the second request is handled by the same worker as before:
It prevents the server from finishing its close
When the second server.close(); line is evaluated, it throws an exception because the server is already closing.
All following requests will trigger the same exception until the killtimer callback is called.
I figured it out: when the server is in the middle of closing and receives a request at the same time, its closing process stalls.
So it still accepts connections, but can no longer finish closing.
Even without cluster, this simple example illustrates this:
var PORT = 8080;
var domain = require('domain');
var server = require('http').createServer(function(req, res) {
  var d = domain.create();
  d.on('error', function(er) {
    try {
      var killtimer = setTimeout(function() {
        process.exit(1);
      }, 30000);
      killtimer.unref();
      console.log('Trying to close the server');
      server.close(function() {
        console.log('server is closed!');
      });
      console.log('The server should now not accept new requests; it should be in "closing state"');
      res.statusCode = 500;
      res.setHeader('content-type', 'text/plain');
      res.end('Oops, there was a problem!\n');
    } catch (er2) {
      console.error('Error sending 500!', er2);
    }
  });
  d.add(req);
  d.add(res);
  d.run(function() {
    console.log('New request at: %s', req.url);
    // intentional error: flerb is not defined, so this throws asynchronously
    setTimeout(function() {
      flerb.bark();
    });
  });
});
server.listen(PORT);
Just run:
curl http://127.0.0.1:8080/ http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should now not accept new requests; it should be in "closing state"
New request at: /
Trying to close the server
Error sending 500! [Error: Not running]
Now a single request:
curl http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should now not accept new requests; it should be in "closing state"
server is closed!
So with Chrome making one more request (for the favicon, for example), the server is not able to shut down.
For now I'll keep using worker.kill(), which makes the worker exit without waiting for the server to stop.
I ran into the same problem around 6 months ago; sadly I don't have any code to demonstrate, as it was from my previous job. I solved it by explicitly sending a message to the worker and calling disconnect at the same time. Disconnect prevents the worker from taking on new work, and in my case, since I was tracking all the work the worker was doing (it was for an upload service with long-running uploads), I was able to wait until all of it finished and then exit with 0.
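A rough sketch of that approach, reconstructed from the description above (the 'shutdown' message name and the pending counter are my assumptions):

// master side (assuming `worker` came from cluster.fork())
cluster.on('exit', function() {
  cluster.fork(); // replace the dead worker
});
worker.send('shutdown');
worker.disconnect(); // stop routing new connections to this worker

// worker side
var pending = 0; // increment when an upload starts, decrement when it finishes
process.on('message', function(msg) {
  if (msg !== 'shutdown') return;
  var check = setInterval(function() {
    if (pending === 0) { // all tracked work drained
      clearInterval(check);
      process.exit(0);
    }
  }, 1000);
});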