Chrome and Microsoft IE are aborting a POST response, but it works fine in Firefox. When I run a test through Fiddler I notice the POST headers in Chrome/IE do not contain Cache-Control: max-age=0 and Origin: ... but are otherwise the same.
In Chrome, when I press the submit button, the POST is sent to the server and processed, but the client aborts the response. After a few reposts the client finally accepts a response; however, the server has already processed each request, so this results in duplicate info. This never happens in Firefox; it always accepts the first response.
It seems to only happen if the request is large (i.e., it contains a lot more fields for the server to process), leading me to think this has something to do with the time the server takes to process the request (in Firefox the request shows as taking about 9 seconds).
Is there something in Firefox that would cause it to wait longer for this response? Or, vice versa, something in IE/Chrome that could make it give up prematurely?
It may not be relevant, but this is a Perl Mason site. These are the response headers for the page containing the form being submitted:
HTTP/1.1 200 OK
Date: Tue, 07 Aug 2018 19:08:57 GMT
Server: Apache
Set-Cookie: ...; path=/
Set-Cookie: TUSKMasonCookie=...; path=/
Expires: Mon, 1 Jan 1990 05:00:00 GMT
Pragma: no-cache
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, max-age=0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
It turns out JavaScript on the page was responsible for the reposting. A setTimeout() that recursively scheduled its own function kept being set even after the form data had been posted within that function.
In Chrome/IE/Edge the form would be POSTed, and the function would still set another timeout calling itself. The subsequent call would POST again and abort the connection waiting on the original response.
Firefox, however, would not repost, although it too kept setting the timeout and re-invoking the function.
The fix was to add a flag that tracks when the POST has been submitted and, once set, stops the timeout cycle:
function countdown(secs, starttime) {
    var d = new Date();
    if (!starttime) {
        starttime = d.getTime() / 1000;
    }
    var nowtime = d.getTime() / 1000;
    var timeleft = (starttime + secs) - nowtime;
    timeleft = Math.max(0, timeleft);
    var isposted = false; // <-- Added as fix
    if (timeleft == 0) {
        isposted = true; // <-- Added as fix
        alert("Time is up. Click OK.");
        var frm = document.getElementById("frm");
        frm.submit();
    }
    if (!isposted) { // <-- Added as fix
        // Reschedule with a function instead of an eval'd string.
        timeout = setTimeout(function () {
            countdown(secs, starttime);
        }, 1000);
    }
}
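For comparison, here is an alternative sketch (new code, not from the original page; startCountdown, tick, and deadline are made-up names) that avoids the flag entirely by computing a fixed deadline once and only rescheduling while time remains:
function startCountdown(secs) {
    var deadline = Date.now() + secs * 1000;
    function tick() {
        if (Date.now() >= deadline) {
            alert("Time is up. Click OK.");
            document.getElementById("frm").submit();
            return; // nothing is rescheduled after the submit
        }
        setTimeout(tick, 1000);
    }
    setTimeout(tick, 1000);
}
Because the only setTimeout call sits after the submit check, no stray callback can ever fire another POST.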
I am using XHR to get some HTML. The HTML response contains an img element, and this image is for some reason cached.
The server serves the image along with the following headers:
cache-control: no-cache, must-revalidate
content-security-policy: default-src 'self'; ...
content-type: image/png
date: Mon, 08 Oct 2018 03:41:00 GMT
expires: Sat, 26 Jul 1997 05:00:00 GMT
server: nginx (Ubuntu)
status: 200
strict-transport-security: max-age=30879000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
And XHR is used like this:
var xmlHttp = new XMLHttpRequest();
xmlHttp.onreadystatechange = function () {
    if (xmlHttp.readyState === 4 && xmlHttp.status === 200) {
        addToPage(xmlHttp.responseText);
    }
};
xmlHttp.open("GET", url, true);
xmlHttp.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xmlHttp.send();
Example XHR response:
<h1>Title</h1>
<p>Stuff</p>
<img src="/captcha-generator">
It works as expected when not using XHR. For some reason the image gets cached when it's linked to from within the XHR response.
Why is the image cached, and how do I force the browser to fetch the new image?
I can of course append a unique parameter to break the cache (e.g. /captcha-generator?r={random-string}), but I would like to avoid that.
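For reference, that cache-busting fallback would look something like the sketch below (bustImageCache is a made-up helper; the parameter name is arbitrary), applied to the container after the XHR response has been inserted:
// Sketch of the fallback mentioned above: rewrite the captcha image URL so
// the browser treats it as a brand-new resource on every insert.
function bustImageCache(container) {
    var img = container.querySelector('img[src^="/captcha-generator"]');
    if (img) {
        img.src = '/captcha-generator?r=' + Date.now();
    }
}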
If this only happens in development mode: in Google Chrome, go to the top right corner, click Customize and control Google Chrome, and open a New Incognito window to use for development.
Is it possible to know the nature of a URL before GETting it?
I have one URL in particular that ends with m3u but is not a simple file I can download; it is actually a radio stream. Since I expect a (finite) file, the GET method never ends. The timeout option does not work in this case (as expected).
const options = {timeout: 5000};
return HTTP.call('GET', "http://av.rasset.ie/av/live/radio/junior.m3u", options);
The safe solution should be to ask for the type of the response before actually getting the file.
How can I do that?
Thanks,
Mickael.
I guess you can run a HEAD request first (instead of GET) and verify the headers. Then, when you make the GET, you will know how to react.
Unfortunately in this particular case HEAD works for the first request (which returns a redirect):
curl -v -X HEAD http://icecast1.rte.ie/junior http://av.rasset.ie/av/live/radio/junior.m3u
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
* Trying 104.16.107.29...
* TCP_NODELAY set
* Connected to av.rasset.ie (104.16.107.29) port 80 (#0)
> HEAD /av/live/radio/junior.m3u HTTP/1.1
> Host: av.rasset.ie
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 302 FOUND
< Date: Mon, 13 Nov 2017 11:07:33 GMT
< Content-Type: text/html; charset=utf-8
< Connection: keep-alive
< Set-Cookie: __cfduid=d89353ae357a0452208835b3092f0fbee1510571253; expires=Tue, 13-Nov-18 11:07:33 GMT; path=/; domain=.rasset.ie; HttpOnly
< Location: http://icecast1.rte.ie/junior
< X-Server-Name: djd
< Cache-Control: max-age=0
< Accept-Ranges: bytes
< X-Varnish: 2802121867
< X-Served-By: MISS: mt-www2.rte.ie
< CF-Cache-Status: MISS
< Server: cloudflare-nginx
< CF-RAY: 3bd1449d2764410c-HAM
* no chunk, no close, no size. Assume close to signal end
<
^C
But it fails for the second (probably HEAD is not supported, though in that case it should return 405):
curl -v -X HEAD http://icecast1.rte.ie/junior
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
* Trying 89.207.56.171...
* TCP_NODELAY set
* Connected to icecast1.rte.ie (89.207.56.171) port 80 (#0)
> HEAD /junior HTTP/1.1
> Host: icecast1.rte.ie
> User-Agent: curl/7.51.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad Request
< Server: Icecast 2.4.2
< Date: Mon, 13 Nov 2017 11:07:42 GMT
< Content-Type: text/html; charset=utf-8
< Cache-Control: no-cache
< Expires: Mon, 26 Jul 1997 05:00:00 GMT
< Pragma: no-cache
<
<html><head><title>Error 400</title></head><body><b>400 - unknown request</b></body></html>
* Curl_http_done: called premature == 0
* Closing connection 0
Even though in this particular case running HEAD runs into an HTTP parse error while I know the stream is OK (if someone can explain to me why, I would be grateful), I think Opal gave the general solution:
You can run a HEAD request first
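As a sketch of that approach (assuming the Meteor HTTP package from the question; fetchIfFinite is a hypothetical helper name), one could inspect the Content-Type of a HEAD response before committing to the full GET, keeping in mind from the curl output above that some servers, such as Icecast, reject HEAD outright:
// Sketch only: HEAD returns headers without streaming a (possibly endless)
// body; bail out when the Content-Type looks like a live audio stream.
function fetchIfFinite(url) {
    var head = HTTP.call('HEAD', url, { timeout: 5000 });
    var type = head.headers['content-type'] || '';
    if (type.indexOf('audio/') === 0) {
        throw new Error('Refusing to GET a stream: ' + type);
    }
    return HTTP.call('GET', url, { timeout: 5000 });
}
HTTP.call will also throw if the server rejects the HEAD itself (as Icecast does above), so callers should be prepared to handle that error too.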
I have a page that has some heavy content and takes around 1.5 minutes to fully load. I have added retrieve-data-on-scroll functionality, where I make an ajax request to retrieve data when the user scrolls.
I have wrapped the data-retrieval behavior in a function handleEventsLoad() and also call this function on load, to cover the case where the user scrolls to the end before the page has fully loaded:
$(document).ready(function () {
    $(window).scroll(function () {
        handleEventsLoad();
    });
    handleEventsLoad();
});
The ajax request is called properly; however, the first time it is called I cannot read any session data in $_SESSION at the PHP level, even though the session is set on page load and I can see its value if I print it in the HTML.
After the page fully loads the first time and I refresh it, the same request is called, but this time I can read the session data properly and the auto-load on scroll works as expected (the difference here is that all the data has been cached and the page takes about 7 seconds to load).
Does the loading process have anything to do with reading the session during the ajax call?
Here is a sample of the ajax :
function handleEventsLoad() {
    if ($('#sidebar').length > 0
        && $(window).scrollTop() >= $('#sidebar').offset().top + $('#sidebar').outerHeight() - window.innerHeight) {
        $.ajax({
            url: "ajaxUrl.php",
            type: 'GET',
            dataType: 'json',
            data: {/* passing some data */},
            success: function (data) {
                // some code here
            },
            error: function (xhr, status, error) {
                // Parse the error payload instead of eval'ing it.
                var err = JSON.parse(xhr.responseText);
                console.log(err.Message);
            }
        });
    }
}
ajaxUrl.php
<?php
session_start();
print_r($_SESSION);
return;
?>
UPDATE
Please find below the differences between the HTTP requests and responses of the first and second ajax calls:
Request:
First Call:
Connection: keep-alive
Second Call:
Cookie: _ga=GA1.2.1253088293.1457289524; _gat=1; PHPSESSID=79c38493322374f1bc19541f4c538b02
Connection: keep-alive
Cache-Control: max-age=0
Response:
First Call:
Set-Cookie: PHPSESSID=79c38493322374f1bc19541f4c538b02; expires=Mon, 07-Mar-2016 18:38:41 GMT; path=/; domain=www.mydomain.com; secure; HttpOnly
Set-Cookie: PHPSESSID=79c38493322374f1bc19541f4c538b02; expires=Mon, 07-Mar-2016 18:38:41 GMT; path=/
Second Call:
Set-Cookie: PHPSESSID=79c38493322374f1bc19541f4c538b02; expires=Mon, 07-Mar-2016 18:40:06 GMT; path=/
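The diff suggests the first ajax call goes out without the PHPSESSID cookie, so session_start() creates a fresh, empty session for it. A hedged workaround sketch (sessionReady.php is a hypothetical warm-up endpoint, not part of the original code) would be to make one cheap round trip first and only then enable the scroll loader:
$(document).ready(function () {
    // Warm-up request; by the time it completes, the PHPSESSID cookie from
    // its Set-Cookie header is committed, so later calls will carry it.
    $.get("sessionReady.php").always(function () {
        $(window).scroll(handleEventsLoad);
        handleEventsLoad();
    });
});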
I've read that it can be wise (for caching purposes) to request a file using its last-modified date in the name and let the server resolve it to the original file. That way you can set caching to, for example, 10 years, and use a static name for a given version of the file.
However, since I also load JavaScript asynchronously on my site, I need to be able to do the same in javascript/jQuery.
This is my current code; how would I be able to get the last-modified date of the script being loaded?
//load new js
if (typeof loadedScripts[url] === 'undefined') {
    $.getScript("javascript/" + url + ".js", function () {
        if (typeof window["load_" + url] !== 'undefined') {
            promises = promises.concat(window["load_" + url](html));
        }
        loadedScripts[url] = 1;
    });
}
else {
    if (typeof window["load_" + url] !== 'undefined') {
        promises = promises.concat(window["load_" + url](html));
    }
}
(It also executes a function called on load, but that is not interesting for this question)
I know it is possible to get the last modified date of the current document with document.lastModified, but I'm unsure how it would translate into a $.getScript call.
I have also enabled caching:
//set caching to true
$.ajaxSetup({
    cache: true
});
For caching purposes, I would rather suggest the ETag:
http://en.wikipedia.org/wiki/HTTP_ETag
We use it for busting the cache if needed and it works great.
However, jQuery's Ajax function provides an ifModified param:
http://api.jquery.com/jquery.ajax/
Here's the explanation:
Allow the request to be successful only if the response has changed
since the last request. This is done by checking the Last-Modified
header. Default value is false, ignoring the header. In jQuery 1.4
this technique also checks the 'etag' specified by the server to catch
unmodified data.
Using this param, the first request for the script would look like this:
GET /script.js HTTP/1.1
Host: www.orange-coding.net
HTTP/1.1 200 OK
Last-Modified: Wed, 02 Jan 2013 14:20:58 GMT
Content-Length: 4096
And a second request would look like this:
GET /script.js HTTP/1.1
Host: www.orange-coding.net
If-Modified-Since: Wed, 02 Jan 2013 14:20:58 GMT
HTTP/1.1 304 Not Modified
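Applied to the loading code from the question, a minimal sketch (using $.ajax directly, since $.getScript is just shorthand for $.ajax with dataType: 'script' and takes no extra options) could look like this:
if (typeof loadedScripts[url] === 'undefined') {
    $.ajax({
        url: "javascript/" + url + ".js",
        dataType: "script",
        cache: true,       // don't append jQuery's _=timestamp cache buster
        ifModified: true,  // send If-Modified-Since / If-None-Match
        success: function () {
            if (typeof window["load_" + url] !== 'undefined') {
                promises = promises.concat(window["load_" + url](html));
            }
            loadedScripts[url] = 1;
        }
    });
}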
I am told that MSXML2 follows redirects. However, I get an "HTTP 0" error from the script when accessing a URL that has moved.
The reason I need to make this work is that this is a Windows (Sidebar) Gadget used by 300,000 users, and I am moving the website; I want all calls from old versions to still go through.
This is the code, simplified:
function MyHttpCall() {
    var httpReq = new ActiveXObject("Msxml2.XMLHTTP.6.0");
    httpReq.onreadystatechange = function () {
        if (httpReq.readyState < 4) return;
        if (httpReq.status != 200) {
            alert("HTTP " + httpReq.status);
            return; // don't fall through to the success message
        }
        alert("Houston we have contact");
    };
    httpReq.open("GET", myURL, true);
    httpReq.setRequestHeader("Cache-Control", "no-store, no-cache, must-revalidate");
    httpReq.setRequestHeader("Cache-Control", "post-check=0, pre-check=0");
    httpReq.setRequestHeader("Pragma", "no-cache");
    httpReq.setRequestHeader("If-Modified-Since", "Tue, 01 Jan 2008 00:00:00 GMT");
    httpReq.send();
}
I assume this has to do with httpReq.status != 200, but I thought readystatechange fires continuously as the state changes: one event for the HTTP 301 and another for the HTTP 200.
According to a Microsoft article, cross-domain redirects are not allowed in MSXML. That could very well be the case here.
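If that is indeed the cause, a hedged fallback sketch (myHttpCallWithFallback and the newUrl parameter are made-up names; this goes no further than what the article states) is to detect the failed request and retry the new host directly instead of relying on the redirect being followed:
function myHttpCallWithFallback(oldUrl, newUrl) {
    var httpReq = new ActiveXObject("Msxml2.XMLHTTP.6.0");
    httpReq.onreadystatechange = function () {
        if (httpReq.readyState < 4) return;
        if (httpReq.status == 200) {
            alert("Houston we have contact");
            return;
        }
        // Anything else (e.g. the status 0 seen on a blocked cross-domain
        // redirect): go straight to the new location.
        var retry = new ActiveXObject("Msxml2.XMLHTTP.6.0");
        retry.onreadystatechange = function () {
            if (retry.readyState === 4 && retry.status == 200) {
                alert("Houston we have contact (via new host)");
            }
        };
        retry.open("GET", newUrl, true);
        retry.send();
    };
    httpReq.open("GET", oldUrl, true);
    httpReq.send();
}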