We have recently noticed an issue where a common page on our site temporarily freezes when navigated to from Internet Explorer with the message "This web page is not responding due to a long running script".
After investigating, I can see that it is caused by an AJAX XMLHttpRequest that takes 30-45 seconds to complete. Normally when one of our AJAX calls has a performance problem, the time is spent waiting for the server's response; here, it is creating and sending the request that takes so long:
Note that there are no issues with this request in Google Chrome at all; there it only takes 200ms:
These results are consistent on every page refresh. Note also that this is neither a large request nor a large response. The request body is actually empty:
and the response is quite small:
I figured since the problem appears to be client-side, there must be something off with our scripts, but we use the same generic function for all of our AJAX calls and don't have this problem with anything else:
JSONRequest: function (url, type, data, success, error) {
    // Fall back to logging if no error callback was supplied
    var customError = function (er) {
        console.log(er);
    };
    if (typeof error !== 'undefined' && error != null)
        customError = error;
    $.ajax({
        url: url,
        type: type,
        cache: false,
        data: JSON.stringify(data),
        contentType: 'application/json; charset=utf-8',
        success: success,
        error: customError
    });
},
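For reference, a typical call site looks something like this (the URL, payload, and callback here are illustrative, not taken from our actual pages):
JSONRequest('/api/items', 'POST', { id: 1 }, function (resp) {
    // handle the parsed JSON response
    console.log(resp);
});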
I am quite puzzled here. Is this simply a matter of "Avoid IE", or is there something I am missing? 30+ seconds to create a small request and send it seems absurdly long, especially when it is fast in Chrome. What gives?
*Note that I am testing with IE 11.
For some reason, a line of jQuery that unchecks all the check boxes in a list was causing the issue:
$('.list-check-buttons.check-button.only-button input:checkbox').prop('checked', false)
I verified this by stepping through in the Chrome debugger, and removing the line resolved the issue for the most part. (It turns out we didn't need the line anyway: the same logic was also handled elsewhere, which, strangely enough, was not causing performance issues.)
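If we had needed to keep the line, one thing I would have tried is narrowing the work to boxes that are actually checked, so IE has fewer property writes to perform; a rough sketch (same selector as above, behaviour in IE not verified):
// Only clear boxes that are currently checked instead of writing the
// property on every checkbox in the list
$('.list-check-buttons.check-button.only-button input:checkbox:checked').prop('checked', false);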
Related
Using Vue Resource with Vue.js v1
This is also running with vue-router and built with Browserify.
I have vue-resource working on the whole for POSTs and GETs, but I have one particular call which occasionally times out, or so it seems. I can confirm the server is getting the POST request and then sending the response back, but the web page sits waiting as if it has missed the response. It seems to happen randomly; if I reload the page and resend, then it works.
I've tried to replicate it by pausing the server before it sends the response, waiting a bit, then continuing. In that case it works every time and the web page continues as expected.
My POST function is as follows (slightly edited to make it easier to read):
this.saving = true;
// POST save to the database
this.$http.post('/api/savebr',
{
tNumber: parseInt(this.tNumber),
bNumber: parseInt(this.cNumber)
},
// I added the timeout for testing
{ timeout: 2000 }
).then((response) => {
this.completed.push(parseInt(this.cNumber));
this.saving = false;
}, (response) =>
{
// error callback
console.log(response);
this.saving = false;
alert('Could not save to server. Please try again.');
})
}
In the above, the timeout works if I pause the server. When the server continues, the web page ignores the response, which I guess is correct.
When the web page randomly misses the response, the error callback never fires. It works if I alter the response from the server to be an error, which is expected too.
I added the timeout option recently to see if it makes any difference, and I also tried it with a zero setting.
Is it possible to detect the timeout, cancel the current post and send a new one?
If I repeat the post, suddenly both seem to work, and so the success code runs twice even though the first request originally seemed to be stuck.
Sorry for the long post; all pointers gratefully received.
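On the "detect the timeout, cancel the current post and send a new one" question, one possible shape is a small retry wrapper around the POST. This is only a sketch under a couple of assumptions: the method name saveBr and the retry count are illustrative, and it assumes vue-resource reports a timed-out request through the error callback (typically with status 0):
saveBr: function (payload, retriesLeft) {
    // POST save to the database, retrying when the request times out
    return this.$http.post('/api/savebr', payload, { timeout: 2000 })
        .then((response) => {
            return response;
        }, (response) => {
            // A request that never completed (timeout/abort) usually shows up
            // here with status 0 rather than a real HTTP status code
            if (response.status === 0 && retriesLeft > 0) {
                return this.saveBr(payload, retriesLeft - 1);
            }
            return Promise.reject(response);
        });
}
A call such as this.saveBr({ tNumber: parseInt(this.tNumber), bNumber: parseInt(this.cNumber) }, 1) would then try at most twice before falling through to the existing error handling.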
I am in quite an interesting situation. I work for a mid-sized company where IE 11 is the standard browser, but for the particular application I am programming, the document mode is set to IE 5. I am making calls to a web service located within the company intranet (the service URL is https) from a website that is http (not sure if I can set document.domain or anything here). I am currently using AJAX and am able to communicate with the service and get data back using the code below. However, each time I get the message that says, "the information you're accessing is coming from another source." If I hit yes, I get the information back from the server; if I hit no, the data does not come through. I tried to call this same method from a different page on the same website and instead get an access denied error, without even seeing the prompt. That one really has me confused.
I've read through many different articles on the subject of CORS at this point and know there are a few things to try to avoid this message. Credentials are omitted below. I am using the minified jQuery 1.9.1.
$.support.cors = true;
$.ajax
({
type: "POST",
crossDomain: true,
beforeSend: function(request)
{
request.setRequestHeader("Authorization", "");
return true;
},
url: webServiceURL,
dataType: "xml",
async: true,
data: soapMessage,
contentType: "text/xml; charset=\"utf-8\"",
success: OnSuccess,
error: OnError
});
}
function OnSuccess(data, status)
{
alert(status);
alert(data);
var documentValArr = parseXMLRrec(data);
insertDataIntoTable(documentValArr);
}
function OnError(request, status, error)
{
alert(request.responseText);
alert(error);
alert(status);
}
Enable the cross-domain sources IE setting. I suppose this could be set for that specific URL in the intranet zone, but it still feels wrong to me. Not sure how the security group at work would feel about it either.
Add the URL I'm trying to access to the intranet trusted sites. I think this would fix the issue I'm seeing, and it is something I definitely want to try.
Also, when testing on my local workstation and calling the web service through IE 11 in doc mode 5, I do not get the message.
Work with the group that owns the server where the service lives and see if they can somehow allow the calling URLs to access the service. This would be interesting because there are quite a lot of production sites.
Because I'm using IE 11 in doc mode 5, I could try the XDomainRequest object (a rough sketch of that is below), but I am not sure it would be a worthwhile effort if the earlier options work out.
So there is my interesting scenario. I would very much like some feedback from people on the situation: what they would do, and what kind of solutions they would go for. Thank you very much for all input and ideas!
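For what it's worth, a rough sketch of the XDomainRequest route mentioned above follows (the handlers are illustrative and reuse names from the jQuery version; note that XDomainRequest cannot send custom headers such as Authorization or a text/xml content type, and it generally requires the target to use the same scheme as the calling page, which may already rule it out for an http page calling an https service):
var xdr = new XDomainRequest();
xdr.open("POST", webServiceURL);
xdr.onload = function () {
    // XDomainRequest only exposes responseText, so parse it before reusing
    // the existing XML handling
    var xmlDoc = $.parseXML(xdr.responseText);
    var documentValArr = parseXMLRrec(xmlDoc);
    insertDataIntoTable(documentValArr);
};
xdr.onerror = function () {
    alert("XDomainRequest failed");
};
// Assigning onprogress/ontimeout as well is reportedly needed to stop IE
// from aborting the request in some cases
xdr.onprogress = function () {};
xdr.ontimeout = function () {};
xdr.send(soapMessage);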
I have a scenario where I need to make a cross-domain request upon leaving a page. The endpoint has all the proper CORS headers configured.
This is the request:
$.ajax({
type: "POST",
url: URL,
data: {
// some stuff
},
async: false
});
When tested as-is, it works great. When I try to trigger this request from an unload handler:
$(window).unload(function() {
// same as above
});
it intermittently fails. On the occasion that it fails, I get this stack trace:
Error: A network error occurred.
at Object.x.ajaxTransport.x.support.cors.e.crossDomain.send (http://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js:6:9344)
at Function.x.extend.ajax (http://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js:6:4804)
at http://my.web.site.com/main.js:129:11
at x.event.dispatch (http://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js:5:10006)
at x.event.add.y.handle (http://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js:5:6789)
With a jQuery DOMException and error code 19.
What's going on here? It looks like there's some race condition I'm unaware of, but other than making the request synchronous, I'm not sure what else I can do.
You simply can't rely on unload handlers, whether it's jQuery's implementation or the native onbeforeunload. (Maybe onbeforeunload will work better than jQuery's implementation.)
There are so many ways a user can leave the page (crash, ALT+F4, close tab, ...).
Most browsers just don't wait long enough, so your async requests will fail in most cases.
You should probably reconsider your design.
I ended up using the beforeunload event instead:
$(window).on('beforeunload', function() {
// same code
});
which works just fine in all modern browsers. The beforeunload event behavior is consistent, and as long as the request is sent synchronously, the event execution - and thus the window unloading - is paused.
This is the exact event that services such as Gmail, Facebook, and even Stack Overflow use to confirm that a user genuinely wants to leave a page while in the middle of an action.
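As an aside, a more recent option for this kind of fire-and-forget request on page exit is navigator.sendBeacon, which queues the POST and lets the page unload without blocking. A minimal sketch, assuming the browser supports sendBeacon and the endpoint accepts form-encoded data (the URL and payload mirror the jQuery example above and are otherwise illustrative):
$(window).on('beforeunload', function () {
    // The browser delivers the queued POST in the background, even after the
    // page has unloaded, subject to the endpoint's CORS configuration
    var payload = new URLSearchParams({ /* some stuff */ });
    navigator.sendBeacon(URL, payload);
});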
I'm developing this extension https://builder.addons.mozilla.org/addon/1022928/latest/
The code central to this question is in Data/panel.js
And it's working pretty well, except that whenever I hit "Gem" to post a jQuery call, it just hangs at the loading icon; I don't get any feedback in the console as to why the call is not going through and being processed as it should.
So how do I debug that with the new Firefox Add-on SDK Builder beta? I've tried writing to console.log(), and I've read that it's supposed to work for others, but I really can't see any of my log messages, just errors that are thrown synchronously in code, and hence not AJAX errors.
Returning to my question: How do I debug a hanging ajax call in my firefox extension's panel?
The HTTPFox extension shows that your request was sent successfully and that the result is a 500 Internal Server Error response. So jQuery would have called an error callback - but you didn't give it one (see the jQuery.post() documentation; the third parameter is the success callback). To define an error callback you should ideally use the jQuery.ajax() method directly:
$.ajax({
    type: "POST",
    url: url,
    data: {title: $("#txtTitle").val(), url: encodeURIComponent(taburl)},
    success: function(data, textStatus) {
        ...
    },
    error: function(data, textStatus) {
        ...
    }
});
Or you could use the request package of the Add-on SDK, which provides a similar API.
To sum up: you don't see an error message because there was no error. In case of errors you should indeed expect an exception that will be visible in the Error Console if not caught.
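For comparison, a rough sketch of the same call using the SDK's request module (depending on the SDK version it is required as "request" or "sdk/request", and it runs in the add-on's main code rather than in panel.js; field names follow the documented Request API, but treat the details as unverified for the Builder beta, and the title/taburl values are placeholders you would pass over from the panel):
var Request = require("sdk/request").Request;

Request({
    url: url,
    content: { title: title, url: encodeURIComponent(taburl) },
    onComplete: function (response) {
        // onComplete fires for both success and failure, so check the
        // HTTP status yourself (e.g. 200 vs 500)
        if (response.status === 200) {
            // use response.text or response.json here
        } else {
            console.error("Request failed with status " + response.status);
        }
    }
}).post();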
The problem I am having is that when I use jquery ajax post, with very low frequency (< 2%), the post parameters never make it to the server. I do see the post request in the access log. It seems to happen only on IE (I've observed it on 7, 8, and 9 in the logs).
When I switch the call from type "post" to type "get" the issue goes away.
Has anyone else ever seen this odd behavior on IE? Thanks!
I have seen this for various ajax calls, but here is a typical one:
var data= {
"guess" : "m1",
"eas" : "hello world"
};
$.ajax({
url: "http://myco.com/ajaxcall.action",
data: data,
type : 'post',
dataType: 'json',
success: function(data) {},
error: function() {}
});
Update: passing "cache: false" does not fix the issue.
I have spent the last week tracking down a similar problem in my own application (it uses Dojo, not jQuery). From your description and frequency of occurrence, I would say it's the same issue.
When HTTP persistent connections are used between browser and server (the default behavior), an HTTP connection can be closed down by the server at any time. This creates a very small timing hole when the browser starts to send a new request at the same time the server closes the connection. Most browsers will use a different connection or open a new connection and resend the request. This is the behavior suggested in RFC 2616 section 8.1.4:
A client, server, or proxy MAY close the transport connection at any
time. For example, a client might have started to send a new request
at the same time that the server has decided to close the "idle"
connection. From the server's point of view, the connection is being
closed while it was idle, but from the client's point of view, a
request is in progress.
This means that clients, servers, and proxies MUST be able to recover
from asynchronous close events. Client software SHOULD reopen the
transport connection and retransmit the aborted sequence of requests
without user interaction so long as the request sequence is
idempotent (see section 9.1.2).
Internet Explorer does try to resend the request when this happens, but when it happens to be a POST, it mangles it by sending the headers (with Content-Length) but no actual data. That is a malformed request and should always lead to an HTTP error (usually after some timeout waiting for the data that never comes).
This bug is documented by Microsoft as KB 895954 (see http://support.microsoft.com/kb/895954). Microsoft first recognized this bug in IE 6. They provided a hotfix, and appear to have shipped the hotfix with every version of IE since then including IE 9. There are two problems with the fix:
The hotfix is not activated by default. You have to create a really weird key using regedit to activate the fix: HKEY_LOCAL_MACHINE\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_SKIP_POST_RETRY_ON_INTERNETWRITEFILE_KB895954.
The fix doesn't really fix the problem. The "fixed" behavior is that when the connection is closed when trying to send a request, it does not even try to resend it. It simply passes the error along to the javascript application.
It appears that you have to add error handlers in your code and re-post the request yourself if it fails. I am looking into this solution for my application. My concern is that I'm not sure how to tell if the error I get is caused by a failed attempt to send the query, or some error sent back from the server as a result of the query (in which case I don't want to resend it).
I wrote a C program to simulate a web server and explicitly close a connection to see how the browser handles it. I have found that IE reproduces the errant behavior 100% of the time, while Firefox, Safari and Chrome recover by properly resending the POST on another connection 100% of the time. Perhaps the answer is, "don't use IE."
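Following on from that, a minimal sketch of the re-post approach in jQuery (the function name and retry count are illustrative; it assumes a request that never produced a response surfaces in the error callback with jqXHR.status of 0, which is also how it can be told apart from a genuine server error):
function postWithRetry(url, payload, retriesLeft) {
    $.ajax({
        url: url,
        type: 'post',
        data: payload,
        dataType: 'json',
        success: function (data) { /* handle data */ },
        error: function (jqXHR, textStatus) {
            // status 0 means no response was received (connection dropped,
            // request aborted), not an error returned by the server
            if (jqXHR.status === 0 && retriesLeft > 0) {
                setTimeout(function () {
                    postWithRetry(url, payload, retriesLeft - 1);
                }, 1000);
            }
        }
    });
}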
As a direct answer to your question: yes, we have just come across this issue and could not find a reasonable explanation. It only affects IE, and with a very low frequency - it took a long while to conclude that it is a sporadic jQuery AJAX bug in IE. We had to 'fix' the issue by returning a failure from the server under this condition and re-posting the data after a 1-second delay!
Hacky as hell but seemed to be the only way.
There was definitely no clash with DOM elements etc., and no logical reason for this to happen; the page can be updated many times by the user successfully, with intermittent failures.
Must be a bug.
I think you have to prevent caching in Internet Explorer. Try setting the cache option to false.
Example:
$.ajax({
url: "http://myco.com/ajaxcall.action",
data: data,
type : 'post',
dataType: 'json',
success: function(data) {},
error: function() {},
cache: false
});
The params sent via POST from IE arrive at the PHP script as GET:
$.ajax({
    url: "path/to/ajax.php",
    method: "POST",
    data: {
        var1: "value1",
        var2: true,
        varX: 123123
    },
    cache: false,
    success: function (data) {
        alert(data);
    }
});
Then in PHP you should use $_REQUEST instead of $_POST:
$var1 = $_REQUEST["var1"]; // value1
$var2 = $_REQUEST["var2"]; // true
$varX = $_REQUEST["varX"]; // 123123
You could use this approach for compatibility with IE7.