I'm trying to make a request that takes a long time (2-5 minutes), but I'm getting this error message in the Chrome console after around 1m30s:
Failed to load resource: net::ERR_EMPTY_RESPONSE
So I tried to give my XHR request a generous timeout, like this:
client-side code:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/refresh', true);
xhr.timeout = 600000; // 10 minutes
xhr.ontimeout = function () {
    console.log('TIMEOUT');
};
xhr.onload = function () {
    window.location.reload();
};
xhr.send(null);
server-side code:
app.get("/refresh", (req, res) => {
var process = spawn('python3', ["scrape.py"]);
process.stdout.on('data', function (data){
console.log("-> " + data.toString());
if ((data.toString().indexOf("done") > -1)) {
res.json({done: "done"});
}
});
});
But I'm still getting the same error after around 1m30s, along with the console log 'TIMEOUT'. Why?
If I run the exact same code with a Python script that lasts 20 seconds, it works perfectly.
Thanks
To prevent the browser from timing out due to lack of response, you must send it a response within its timeout window (which sounds like about 90 seconds based on your test). This means your application most likely needs to be redesigned. Since it doesn't look like you're using WebSockets, and everything is stateless and synchronous on the server, you will have to make the requests pseudo-asynchronous by hand.
The historical pattern is to:
1. Have the spawned process write its result somewhere.
2. Have the server expose the token as an endpoint (/refresh?token=1234) that, when requested, returns the result if it is ready, and otherwise returns something like status 204.
3. Return the token in the initial response.
4. In your client, keep checking the token on a timer: if the status is 204, reset the timer; otherwise you have the complete response and can break out of the timer (sketched below).
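Here is a minimal sketch of that pattern, assuming an Express app and an in-memory job store; the /result route, the jobs object, the port, and the 5-second poll interval are illustrative choices, not part of your code:
var express = require('express');
var spawn = require('child_process').spawn;
var app = express();

// Hypothetical in-memory job store: token -> result (null while running)
var jobs = {};
var nextToken = 1;

// Kick off the long-running job and hand back a token immediately
app.get('/refresh', function (req, res) {
    var token = String(nextToken++);
    jobs[token] = null;
    var job = spawn('python3', ['scrape.py']);
    job.stdout.on('data', function (data) {
        if (data.toString().indexOf('done') > -1) {
            jobs[token] = { done: 'done' };
        }
    });
    res.json({ token: token });
});

// Poll endpoint: 204 while the job is still running, the result once ready
app.get('/result', function (req, res) {
    var result = jobs[req.query.token];
    if (!result) {
        return res.status(204).end();
    }
    res.json(result);
});

app.listen(3000); // hypothetical port
On the client, the timer just keeps re-requesting the token until something other than 204 comes back:
function poll(token) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/result?token=' + token, true);
    xhr.onload = function () {
        if (xhr.status === 204) {
            setTimeout(function () { poll(token); }, 5000); // not ready yet
        } else {
            window.location.reload(); // complete response received
        }
    };
    xhr.send(null);
}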
Related
I'm running some JavaScript which uses setInterval to trigger an AJAX request and then performs some actions based on the returned output. I'm quite confused by it, because it works perfectly on my home server, but now that I've put it out onto the web, I'm having problems.
The following error appears on Google Chrome:
http://www.domain.com/ajax/sound.php
Failed to load resource: net::ERR_EMPTY_RESPONSE
The error doesn't occur consistently however. Sometimes the scripts run for several minutes before an error occurs. Sometimes it all breaks down in seconds.
I've already checked the obvious solution - that my server-side script is returning nothing. I did this by commenting out the entire script and having it do nothing but return information. That didn't help.
I have several AJAX requests running from the same page, and all of them eventually return the same error (with their respective pages of code). I've tried isolating the requests and performing them one at a time at a slowed-down rate, and have determined that the requests work in a general sense; but as soon as one of them throws an error, they all completely stop working and start returning the same error.
Once the errors occur, I get no response when I try to access any part of my site (even parts with no AJAX). Safari says "...the server unexpectedly dropped the connection. This sometimes occurs when the server is busy. Wait for a few minutes, and then try again." I've tried this in Explorer, Chrome, and Firefox as well with similar results. Thankfully, the site does come back up after a few minutes of making no AJAX requests.
An example of one of the AJAX requests is as follows:
//At the set interval, we create a string for the request:
function alef(){
    var string = "a='a'";
    request(sound, "ajax/sound.php", string);
}
//That function fires off an AJAX request:
function request(fix, url, string){
    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (xhttp.readyState == 4 && xhttp.status == 200) {
            fix(xhttp.responseText);
        }
    };
    xhttp.open("POST", url, true);
    xhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
    xhttp.send(string);
}
//The AJAX request returns a result to be processed by the following function:
function sound(text){
    if(text == "sound"){
        insound.play();
    }
}
Presume that my sound.php file says:
<?php echo "sound"; ?>
It doesn't say only that, but even when it did for testing purposes, I had the same problem.
Any solutions?
I have a node.js process that uses a large number of client requests to pull information from a website. I am using the request package (https://www.npmjs.com/package/request) since, as it says: "It supports HTTPS and follows redirects by default."
My problem is that after a certain period of time, the requests begin to hang. I haven't been able to determine if this is because the server is returning an infinite data stream, or if something else is going on. I've set the timeout, but after some number of successful requests, some of them eventually get stuck and never complete.
var request = require('request');

var options = { url: 'some url', timeout: 60000 };
request(options, function (err, response, body) {
    // process the response here
});
My questions are: can I shut down a connection after a certain amount of data is received using this library, and can I stop the request from hanging? Do I need to use the http/https libraries and handle the redirects and protocol switching myself in order to get the kind of control I need? If I do, is there a standardized practice for that?
Edit: Also, if I stop the process and restart it, they pick right back up and start working, so I don't think it is related to the server or the machine the code is running on.
Note that with request(options, callback), the callback is only fired once the request has completed, and there is no way to break off the request partway through.
You should listen on the data event instead:
var request = require('request');

var stream = request(options);
var len = 0;

stream.on('data', function (data) {
  // TODO process your data here
  len += Buffer.byteLength(data);
  // Break off the stream once more than 1000 bytes have been received
  if (len > 1000) {
    stream.abort();
  }
});
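Note that request's timeout option only covers the wait for the response headers; if the connection stalls mid-body, it won't fire. As an extra guard against hangs, you could add your own watchdog timer and abort when no data has arrived for a while. This is just a sketch under that assumption; the 30-second idle window is an arbitrary illustrative value:
var request = require('request');

var options = { url: 'some url', timeout: 60000 };
var stream = request(options);
var watchdog = null;

// Hypothetical watchdog: abort the request if no data arrives for 30 seconds
function resetWatchdog() {
  clearTimeout(watchdog);
  watchdog = setTimeout(function () {
    stream.abort();
  }, 30000);
}

resetWatchdog();
stream.on('data', resetWatchdog);
stream.on('end', function () {
  clearTimeout(watchdog);
});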
Looking at the example given on the Node.js domain doc page (http://nodejs.org/api/domain.html), the recommended way to restart a worker using cluster is to first call disconnect in the worker part, and to listen to the disconnect event in the master part. However, if you just copy/paste the example given, you will notice that the disconnect() call does not shut down the current worker:
try {
    // Make sure we close down within 30 seconds
    var killtimer = setTimeout(function() {
        process.exit(1);
    }, 30000);
    // But don't keep the process open just for that!
    killtimer.unref();
    // Stop taking new requests
    server.close();
    // Let the master know we're dead
    cluster.worker.disconnect();
    // Try to send an error to the request that triggered the problem
    res.statusCode = 500;
    res.setHeader('content-type', 'text/plain');
    res.end('Oops, there was a problem!\n');
} catch (er2) {
    console.error('Error sending 500!', er2.stack);
}
What happens here is:
1. I do a GET request at /error.
2. A timer is started: in 30s the process will be killed if it has not already exited.
3. The HTTP server is shut down.
4. The worker is disconnected (but still alive).
5. The 500 page is displayed.
6. I do a second GET request at /error (before the 30s are up).
7. A new timer is started.
8. The server is already closed => it throws an error.
9. The error is caught in the "catch" block and no result is sent back to the client, so on the client side, the page waits without any message.
In my opinion, it would be better to just kill the worker, and listen to the 'exit' event on the master part to fork again. This way, the 500 error is always sent when an error occurs:
try {
    var killtimer = setTimeout(function() {
        process.exit(1);
    }, 30000);
    killtimer.unref();
    server.close();
    res.statusCode = 500;
    res.setHeader('content-type', 'text/plain');
    res.end('Oops, there was a problem!\n');
    cluster.worker.kill();
} catch (er2) {
    console.error('Error sending 500!', er2);
}
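For completeness, the master part of this approach could be wired up roughly as follows; this is just a sketch of the fork-on-exit loop, not code from the original example:
var cluster = require('cluster');

if (cluster.isMaster) {
    cluster.fork();
    // Fork a replacement whenever a worker dies, so one is always serving
    cluster.on('exit', function (worker, code, signal) {
        console.error('worker %d died (%s), forking a new one',
                worker.id, signal || code);
        cluster.fork();
    });
} else {
    // ... worker code with the server and error handling shown above ...
}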
I'm not sure about the downsides of using kill instead of disconnect, but it seems disconnect waits for the server to close, yet this does not appear to work (at least not the way it should).
I would just like some feedback on this. There may be a good reason this example is written the way it is that I've missed.
Thanks
EDIT:
I've just checked with curl, and it works well.
However, I was previously testing with Chrome, and it seems that after receiving the 500 response, Chrome issues a second request BEFORE the server actually finishes closing.
In this case, the server is closing but not yet closed (which means the worker is also disconnecting without being disconnected), so the second request is handled by the same worker as before:
- It prevents the server from finishing its close.
- When the second server.close(); line is evaluated, it throws an exception because the server is no longer running.
- All following requests trigger the same exception until the killtimer callback fires.
I figured it out: when the server is closing and receives a request at the same time, it stops its closing process.
So it still accepts connections, but can no longer be closed.
Even without cluster, this simple example illustrates this:
var PORT = 8080;
var domain = require('domain');
var server = require('http').createServer(function(req, res) {
    var d = domain.create();
    d.on('error', function(er) {
        try {
            var killtimer = setTimeout(function() {
                process.exit(1);
            }, 30000);
            killtimer.unref();
            console.log('Trying to close the server');
            server.close(function() {
                console.log('server is closed!');
            });
            console.log('The server should not accept new requests now, it should be in "closing state"');
            res.statusCode = 500;
            res.setHeader('content-type', 'text/plain');
            res.end('Oops, there was a problem!\n');
        } catch (er2) {
            console.error('Error sending 500!', er2);
        }
    });
    d.add(req);
    d.add(res);
    d.run(function() {
        console.log('New request at: %s', req.url);
        // Trigger an asynchronous error: flerb is not defined
        setTimeout(function() {
            flerb.bark();
        });
    });
});
server.listen(PORT);
Just run:
curl http://127.0.0.1:8080/ http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not accept new requests now, it should be in "closing state"
New request at: /
Trying to close the server
Error sending 500! [Error: Not running]
Now single request:
curl http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not accept new requests now, it should be in "closing state"
server is closed!
So with Chrome making one more request (for the favicon, for example), the server is not able to shut down.
For now I'll keep using worker.kill(), which makes the worker exit without waiting for the server to stop.
I ran into the same problem around 6 months ago. Sadly, I don't have any code to demonstrate, as it was from my previous job. I solved it by explicitly sending a message to the worker and calling disconnect at the same time. Disconnect prevents the worker from taking on new work, and in my case, as I was tracking all the work the worker was doing (it was for an upload service that had long-running uploads), I was able to wait until all of it had finished and then exit with 0.
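Since the original code is gone, this is only a hypothetical reconstruction of the idea; the 'shutdown' message name and the pendingUploads counter are invented for illustration:
// In the master: tell the worker to wrap up, and stop routing new work to it
worker.send('shutdown');
worker.disconnect();

// In the worker: pendingUploads is assumed to track in-flight work
var pendingUploads = 0;
process.on('message', function (msg) {
    if (msg !== 'shutdown') return;
    // Wait until all tracked work has drained, then exit with 0
    var check = setInterval(function () {
        if (pendingUploads === 0) {
            clearInterval(check);
            process.exit(0);
        }
    }, 1000);
});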
I am sending an AJAX XMLHttpRequest using the POST method. When the request is sent, I get a readyState of 4 with a status of 12030. I know 12030 is a Microsoft-specific status code indicating the connection was not sustained. However, I can't seem to find what in my code would be causing this error. If I navigate to the page without using the AJAX request, it loads fine. Below are the JavaScript method and the call line.
AJAX Method
/* Sends an AJAX request with POST data that updates the content view on completion
 * @param message : message shown after completion of the AJAX request
 * @param url : url to request
 * @param params : POST parameters as a string
 */
function changeAjaxPost(message, url, params) {
    var ajx;
    if (window.XMLHttpRequest) {
        UtilLogger.log(HtmlLogger.FINE, "Using XMLHttpRequest");
        ajx = new XMLHttpRequest();
    }
    else {
        UtilLogger.log(HtmlLogger.FINE, "Using ActiveXObject");
        ajx = new ActiveXObject("Microsoft.XMLHTTP");
    }
    ajx.open("POST", url, true);
    ajx.setRequestHeader("X-Requested-With", "XMLHttpRequest");
    ajx.setRequestHeader("Content-Type", "text/html");
    ajx.setRequestHeader("Content-length", params.length);
    ajx.setRequestHeader("Connection", "close");
    // Attach the handler before send() so no ready-state change is missed
    ajx.onreadystatechange = function () {
        document.write(ajx.readyState + ":" + ajx.status);
        if (ajx.readyState == 4 && ajx.status == 200) {
            alert(message);
            updateContent();
        }
        else if (ajx.readyState == 4 && ajx.status == 400) {
            alert("Page Error. Please refresh and try again.");
        }
        else if (ajx.readyState == 4 && ajx.status == 500) {
            alert("Server Error. Please refresh and try again.");
        }
    };
    ajx.send(params);
}
Call Line
changeAjaxPost("Excerpt Saved", "./AJAX/myadditions_content.aspx", params);
http://danweber.blogspot.com/2007/04/ie6-and-error-code-12030.html
IE6 and error code 12030
If you are running Internet Explorer 6 and using Ajax, you may get some XMLHttpRequests terminated with code 12030.
Microsoft's knowledge base at http://support.microsoft.com/kb/193625 shows that this code is
12030 ERROR_INTERNET_CONNECTION_ABORTED
The connection with the server has been terminated.
Googling turned up no help; the people encountering this don't seem to be aware of how network sockets work, so I had to actually figure it out on my own.
This happens when the client thinks a connection has been kept open, and the server thinks it is closed. The server has sent a FIN, and the client has responded to that with an ACK. Running "netstat" on the Windows client shows that the connection is in the CLOSE_WAIT state, so IE6 really ought to have realized this before trying. This is entirely the client's fault. If you wait about 60 seconds, the Windows OS stack will retire the connection.
If you need to support IE6, you have some solutions, in varying degrees of ugliness:
1. Retry the AJAX request in case of error code 12030 (a sketch follows below).
2. If the browser is IE, send an empty request to the server ahead of each AJAX request.
3. Bundle up your AJAX requests such that the time between them is greater than server_timeout AND less than server_timeout + one minute.
IE7, fwiw, will issue a RST over the CLOSE_WAIT socket as soon as it realizes it has an outgoing connection to make. That, and the socket will only stay in that CLOSE_WAIT state for about 5 seconds anyway.
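A minimal sketch of the first option (retrying once on 12030) might look like this; the doPost wrapper name and the single-retry policy are illustrative, not from the blog post:
function doPost(url, params, onSuccess, retried) {
    var ajx = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    ajx.onreadystatechange = function () {
        if (ajx.readyState != 4) return;
        if (ajx.status == 200) {
            onSuccess(ajx.responseText);
        } else if (ajx.status == 12030 && !retried) {
            // IE reported a dropped connection: retry once on a fresh socket
            doPost(url, params, onSuccess, true);
        }
    };
    ajx.open("POST", url, true);
    ajx.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    ajx.send(params);
}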
Sometimes, using
setRequestHeader("Connection","close");
can cause problems in some browsers.
Removing this solves the problem.
Credit goes to @MikeRobinson.
I'm working on a project which uses user authentication. I'm facing an issue with my AJAX requests when there is no authenticated session present at the time the request is made.
I have a session timeout of 3 minutes, so if the user stays idle for 3 minutes and then performs some action that triggers an AJAX request, the request will fail and return a 403 error. What I'm planning to do is intercept all the AJAX requests from the page and send a ping to the server, which will return a JSON object saying whether there is a valid session. If there is one, the client will continue with the current request; otherwise it will reload the current page, which will take the user to the login page, where the user has to provide their credentials again.
Here is my implementation.
$("#param-ajax").ajaxSend(function(evt, request, settings) {
var pingurl = GtsJQuery.getContextPath() + '/ping.json';
var escapedurl = pingurl.replace(/\//g, "\\/");
var regexpr1 = eval('/^' + escapedurl + '\\?.*$/');
var regexpr2 = eval('/^' + escapedurl + '$/');
// Proceed with the ping only if the url is not the ping url else it will
// cause recursive calls which will never end.
if (!regexpr1.test(settings.url) && !regexpr2.test(settings.url)) {
var timeout = false;
$.ajax({
url : pingurl,
cache : false,
data : {
url : settings.url
},
async : false,
complete : function(request, status) {
if (status == "error") {
try {
// GtsJQuery.getJsonObject() converts the string
// response to a JSON object
var result = GtsJQuery
.getJsonObject(request.responseText)
if (result.timeout) {
timeout = true;
return;
}
} catch (e) {
// ignore the error. This should never occure.
}
}
}
});
// Reload the window if there is a timeout -- means there is no valid
// sesstion
if (timeout) {
window.location.reload();
}
}
});
Here everything works fine, including the window.location.reload(), but the original AJAX request is not aborted: even after the page reload is triggered, it is still sent to the server. I want some mechanism that will allow me to abort the original request if the timeout turns out to be true.
This post offers some answers, but the issue remains with third-party plugins like DataTables which use AJAX internally; we cannot write an error handler for those AJAX requests.
Thank you.
If I am understanding the situation, you do not need any of that. In your original AJAX request, simply add an error function that will redirect the user.
var errHandler = function(XMLHttpRequest, textStatus, errorThrown) {
    // 403 means the session has expired; send the user back to login
    if (XMLHttpRequest.status == 403) {
        redirectUserToLoginHere();
    }
};

$.ajax({
    success: yourFunctionHere,
    error: errHandler
});
Then you might be able to make some AJAX wrapper which always includes that errHandler, so you don't have to place it in every single AJAX call; a sketch of such a wrapper follows.
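For example (the ajaxWithAuth name and the /some/endpoint URL are made up for illustration):
function ajaxWithAuth(options) {
    // Fill in the default error handler unless the caller supplies their own
    return $.ajax($.extend({ error: errHandler }, options));
}

// Used exactly like $.ajax, but 403 handling comes for free
ajaxWithAuth({
    url: '/some/endpoint',
    success: yourFunctionHere
});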
EDIT:
After some experimentation, I found that if an ajaxSend handler throws an Error, the original request is never sent.
Also, if the handler does
document.location = '/login';
then the original request is never sent.
Hopefully that helps :)
I've changed my approach now: I'm checking for the XMLHttpRequest on the server side using the request header 'x-requested-with'.
If it is an XMLHttpRequest, then 'x-requested-with' will have the value 'XMLHttpRequest'. Both of the JavaScript libraries I'm using (Ext JS and jQuery) set this header correctly.
Here is my server side code
boolean isAjaxRequest = StringUtils.endsWithIgnoreCase(request.getHeader("x-requested-with"), "XMLHttpRequest");
EDIT
If the given request is an AJAX request, the response will be JSON data with status 403, and it will contain a key called timeout with the value true,
e.g. {timeout: true, ....}
Then the $.ajaxError() event handler will handle the error: if the error status is 403 and the timeout value is true, I'll reload the page.
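A minimal sketch of that handler might be (assuming the 403 body really is the {timeout: true} JSON shown above):
$(document).ajaxError(function (event, jqXHR) {
    if (jqXHR.status == 403) {
        try {
            var result = JSON.parse(jqXHR.responseText);
            if (result.timeout) {
                // No valid session anymore: reload to land on the login page
                window.location.reload();
            }
        } catch (e) {
            // The response body was not JSON; let other handlers deal with it
        }
    }
});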