I'm a developer of a landing page authoring system. To make our pages load faster we use a force-cache technique: all the resources (images/JS/CSS/fonts) on a landing page are set to expire in a year. The file paths are unique and the files never change.
This should work fine, but there is a problem when browsing those pages in Chrome: when it loads JavaScript files that were already cached (*), the browser sometimes contacts the server with an If-Modified-Since header (getting a 304 reply) and sometimes it doesn't. I don't want the browser to contact the server at all for JavaScript files that are already cached; it slows down the page load.
(*) already cached: just enter the page more than once, not with F5 (which obviously forces a reload) but by pressing Enter in the address bar, or by opening a new tab and entering the same address.
An example page can be viewed here: http://mati.ravpage.co.il/placeholders
I analyzed the issue in Chrome's dev tools (F12) by inspecting the Network tab and sorting by Status. Sometimes only the "placeholders" entry gets a 304; sometimes the JavaScript entries do too.
For the JavaScript file bugs_mati_...1391870865.js, the HTTP response headers are:
Access-Control-Allow-Origin:*
Cache-Control:max-age=31536000, public
Connection:keep-alive
Content-Encoding:gzip
Content-Type:application/javascript
Date:Sat, 08 Feb 2014 14:57:08 GMT
Expires:Sun, 08 Feb 2015 14:55:25 GMT
Last-Modified:Sat, 08 Feb 2014 14:47:48 GMT
Server:NetDNA-cache/2.2
Transfer-Encoding:chunked
Vary:Accept-Encoding,User-Agent
X-Cache:HIT
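To clarify the scheme: every deploy stamps each file name with the build time, so a given URL's content never changes and old copies simply stop being referenced. Roughly (a made-up sketch, not our actual build code):

// Made-up sketch of the unique-path scheme (not our actual build code).
// A changed file gets a new timestamp, hence a new URL; the old URL's
// cached copy is simply never requested again.
function versionedAssetUrl(name, ext, buildTimestamp) {
  return '/' + name + '_' + buildTimestamp + '.' + ext;
}

// versionedAssetUrl('bugs_mati', 'js', 1391870865)
// -> "/bugs_mati_1391870865.js"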
I don't understand why Chrome behaves the way it does. I can't even reproduce the issue reliably; the behavior is erratic. I am lost.
I would appreciate any help,
Thank you in advance,
Mati
The problem definition
On my page, www.xxx.com/page, there is a script:
<script type="text/javascript" src="main.1234.js"></script>
The browser resolves it as www.xxx.com/main.1234.js instead of www.xxx.com/page/main.1234.js.
More details about my setup
In fact, there are two applications running on the same server machine:
www.xxx.com/ (app #1)
www.xxx.com:82/ (app #2)
The page actually exists in app #2 as www.xxx.com:82/page. If I access the page directly, everything works as it should (i.e. the browser resolves links as expected).
But my setup is a bit more complicated. My goal is to hide app #2 from any public access and make it available only via app #1 as www.xxx.com/page. To achieve that, I set up app #1 so that when a user requests www.xxx.com/page, app #1 under the hood performs a request to www.xxx.com:82/page of app #2 and returns the received content back to the user.
From the user's point of view, everything should look as if the content of www.xxx.com:82/page resides under www.xxx.com/page. And it almost works. The only problem is that for some reason the browser resolves URLs as described under "The problem definition". How do I fix it?
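For reference, a minimal sketch of that forwarding setup; the question names no stack, so this assumes Node/Express with the http-proxy-middleware package:

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// A request for www.xxx.com/page is forwarded under the hood to
// www.xxx.com:82/page, and the received content is returned to the user.
app.use('/page', createProxyMiddleware({
  target: 'http://localhost:82',
  changeOrigin: true
}));

app.listen(80);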
Additional info, hope it may help
I suppose the answer is hidden in the responses. I suspect the cause is that the browser receives different response headers in the two cases. Here are the headers the browser receives in each case:
Response from app #1 (www.xxx.com/page), where the browser incorrectly resolves URLs:
Cache-Control:private
Content-Length:775
Content-Type:text/html;charset=UTF-8
Date:Fri, 19 Jan 2018 11:34:40 GMT
Expires:Thu, 01 Jan 1970 00:00:00 UTC
Set-Cookie:zimidy-initialSessionIdHash=-226086716; Path=/
Strict-Transport-Security:max-age=31536000 ; includeSubDomains
X-Content-Type-Options:nosniff
X-Frame-Options:SAMEORIGIN
X-XSS-Protection:1; mode=block
Response from app #2 (www.xxx.com:82/page), where the browser correctly resolves URLs:
Accept-Ranges:bytes
Cache-Control:public, max-age=0
Connection:keep-alive
Date:Fri, 19 Jan 2018 11:33:16 GMT
ETag:W/"307-1610e1964c4"
Last-Modified:Fri, 19 Jan 2018 11:06:40 GMT
X-Powered-By:Express
The URL main.1234.js starts from the location your page is in. The URL /main.1234.js starts from the base URL. You probably meant the latter. If your path is foo/bar/mypage, then linking main.1234.js will search for the file in foo/bar/. If you put a slash at the start, the browser will search for the file from the base URL, which should be the root folder.
RolandStarke gave me advice which helped me solve the problem.
Also, an explanation of the behaviour can be found here.
For relative URLs to work properly, a trailing slash is required. I used one in the link to app #2, but not in the link to app #1. After adding the trailing slash, everything started to work.
So this link doesn't work properly: www.xxx.com/app
But this one, with a trailing slash, works as expected: www.xxx.com/app/
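The resolution rules can be checked directly with the WHATWG URL API (my illustration, not from the original answer):

// Without a trailing slash, "page" is treated as a file, so the relative
// reference is resolved against its parent, the site root:
new URL('main.1234.js', 'http://www.xxx.com/page').href
// -> "http://www.xxx.com/main.1234.js"

// With a trailing slash, "page/" is treated as a directory:
new URL('main.1234.js', 'http://www.xxx.com/page/').href
// -> "http://www.xxx.com/page/main.1234.js"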
I have a web page which references a large number of JS and image files. When the page is loaded a second time, 304 requests go to the server. I would like to get HTTP 200 (cache) status for cached objects rather than 304.
I am using ASP.NET 4 and IIS 7.
Setting the Expires header is not working; the browser still sends requests that come back as 304. I want HTTP status 200 (cache).
Please let me know if there is any technique for this.
You've said that setting the expiry doesn't help, but it does if you set the right headers.
Here's a good example: Google's library hosting. Here are the (relevant) headers Google sends back if you ask for a fully-qualified version of a library (e.g., jQuery 1.7.2, not jQuery 1 or jQuery 1.7):
Date:Thu, 07 Jun 2012 14:43:04 GMT
Cache-Control:public, max-age=31536000
Expires:Fri, 07 Jun 2013 14:43:04 GMT
(Date isn't really relevant, I just include it so you know when the headers above were generated.) There, as you can see, Cache-Control has been set explicitly with a max-age of a year, and the Expires header also specifies an expiration date a year from now. Subsequent access (within a year) to the same fully-qualified library doesn't even result in an If-Modified-Since request to the server, because the browser knows the cached copy is fresh.
This description of caching in HTTP 1.1 in RFC 2616 should help.
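To produce equivalent headers yourself, something along these lines would do. This is a sketch only: Node/Express stands in for the question's ASP.NET/IIS stack, where the same headers are normally configured declaratively in web.config (the staticContent/clientCache element) rather than in code:

const express = require('express');
const path = require('path');
const app = express();

app.get('/scripts/:file', function (req, res) {
  // One year, matching Google's headers above.
  res.set('Cache-Control', 'public, max-age=31536000');
  res.set('Expires', new Date(Date.now() + 31536000 * 1000).toUTCString());
  res.sendFile(path.join(__dirname, 'scripts', path.basename(req.params.file)));
});

app.listen(80);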
I have a Flash application that sends a getURL request for an image file every 60 seconds.
This works fine in all browsers except IE9 with the Internet Option "check for newer versions of stored pages" set to "Automatically".
I set up Charles Proxy (http://xk72.com) to watch the requests being sent by my Flash app and confirmed that the request is suppressed by IE9 when the setting is "Automatically", but it works fine if I change the setting to check every time I visit the webpage. This, however, is not an option! I need this to work in all browsers regardless of how the options are set.
Even if I do a page refresh (F5), the ASP page does not reload. The only way to get it to reload is to close the browser and restart it.
I have tried adding response headers to disable caching, but it does not appear to work.
For example, here are my response headers:
HTTP/1.1 200 OK
Date Sun, 02 Oct 2011 23:58:31 GMT
Server Microsoft-IIS/6.0
X-Powered-By ASP.NET
Expires Tue, 09 Nov 2010 14:59:39 GMT
Cache-control no-cache
max-age 0
Content-Length 9691
Content-Type text/html
Set-Cookie ASPSESSIONIDACQBSACA=ECJPCLHADMFBDLCBHLJFPBPH; path=/
Cache-control private
I have read the Microsoft blog post (http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx), which states that if I add these caching headers the browser should respect my request, but it obviously does not.
I don't think this is a Flash issue, since the HTML page that holds the Flash object will not even reload.
You can append a random number to the end of the URL.
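For example, a sketch (the function name and image path are made up):

// Make every request URL unique so IE cannot satisfy it from cache.
// The parameter name is arbitrary; only the changing value matters.
function noCacheUrl(url) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'nocache=' + new Date().getTime();
}

// In the Flash wrapper, request a fresh URL each time, e.g.
// getURL(noCacheUrl('/images/status.jpg'));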
I am working on an e-shop with a loan calculator. I need some fresh insight on this... imagine this situation:
User clicks one of the buttons; a POST request (jQuery) fills in the required data.
User clicks "add to cart" and goes to the cart.
User clicks the back button (browser).
The page loads and the server fills in the calculator's default data, BUT once it's done, the browser fills in the JS data from cache and a funny thing happens: the data gets combined, and when the user adds the product to his cart he gets a wrong, but valid, price. The valid part is what the server fills in; the rest comes from cache.
I have tried using meta tags to prevent caching, I have told jQuery not to cache the POST request, and my responder even sends multiple headers that say DO NOT CACHE. Yet the POST data still gets cached, and I have no idea how to turn that off or refresh it... When I look at the headers that come back with the JSON data, the Expires header is set to the year 2099, even though I set it to a date in the past. Apart from that, I really don't know what could be causing this issue.
Here are the headers set in the responder and what actually comes back:
header("Expires: Mon, 26 Jul 1999 05:00:00 GMT" );
header("Last-Modified: " . gmdate( "D, d M Y H:i:s" ) . "GMT" );
header("Cache-Control: no-cache, must-revalidate" );
header("Pragma: no-cache" );
header("Content-type: text/x-json");
This is what comes back (from Firebug):
Date Fri, 17 Sep 2010 08:39:38 GMT
X-Powered-By PHP/5.1.6
Connection Keep-Alive
Content-Length 126
Pragma no-cache
Last-Modified Fri, 17 Sep 2010 08:39:38GMT
Server Apache
Content-Type text/x-json
Cache-Control no-cache, must-revalidate
Expires Mon, 26 Jul 2099 05:00:00 GMT
Note: when I disable the cache in the browser preferences, it works like a charm.
Any ideas are welcome!
Edit:
I tried the dummy parameter a few hours ago, but that doesn't solve it. To put it simply, the whole problem is that when the user clicks the back button, the page isn't refreshed but read from cache; or at least the data that came via Ajax is cached and gets filled in.
So basically I need to refresh the page in a smart way when the user clicks the back button. Note that cookies are not an option, because some people (maybe a small percentage, but still) don't have cookies enabled.
If you want to handle back/forward buttons, one way is to use the BBQ plugin for jQuery, which is capable of modifying the # part of the URL.
Thus you will be able to hook into back/forward events and have complete control over what's executed and when, firing whichever Ajax requests you need, as sketched below.
It does mean, though, that you'll have to change the concept of your page flow: no direct posting to the server, only via Ajax, plus client-side rendering via some template mechanism.
This is somewhat of an emerging trend, an example being Google with their instant search.
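A rough sketch of that flow with jQuery BBQ (the element IDs, state key, and endpoint are made up):

// User actions push state into the # fragment instead of posting directly.
$('#calculate').click(function () {
  $.bbq.pushState({ amount: $('#amount').val() });
});

// Back/forward buttons fire 'hashchange'; rebuild the calculator from a
// fresh server response instead of whatever the browser cached.
$(window).bind('hashchange', function () {
  var amount = $.bbq.getState('amount') || 0;
  $.post('/calculator', { amount: amount }, function (data) {
    // ...render the calculator from data...
  }, 'json');
});

// Render the initial state on page load.
$(window).trigger('hashchange');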
Add a dummy parameter to all your Ajax queries: '&t=' + new Date().getTime()
This will ensure that a fresh request is sent every time.
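For instance, a sketch (the endpoint and data are made up; note that jQuery's cache: false option appends such a parameter automatically, but only for GET and HEAD requests, hence the manual parameter on this POST):

// The timestamp makes each request URL unique, so nothing stale is served.
$.ajax({
  url: '/calculator?t=' + new Date().getTime(),
  type: 'POST',
  data: { amount: 10000 },
  dataType: 'json',
  success: function (data) {
    // ...fill the calculator with data...
  }
});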
Solved it somehow with a little hack, since it looks like a pure browser issue.
I'm sending two cookies to the browser. One is a browser identifier which expires in one year, and the other is a session tracker without an expiration. The response headers for a fresh request look like this:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-XSS-Protection: 0
ETag: "b502a27282a5c621f34d522c3fcc8e3e"
Set-Cookie: bid=ahFmaXJld29ya3Njb21wdXRlcnIPCxIHQnJvd3NlchimigcM; expires=Fri, 12-Aug-2011 05:21:55 GMT; Path=/
Set-Cookie: rid=1281569589; Path=/about
Expires: Wed, 11 Aug 2010 23:33:09 GMT
Cache-Control: private, max-age=345600
Date: Wed, 11 Aug 2010 23:33:09 GMT
I'm trying to access both cookies from JavaScript on the page.
In Firefox and Chrome document.cookie gives me this
"rid=1281568223; bid=ahFmaXJld29ya3Njb21wdXRlcnIPCxIHQnJvd3Nlchj2nAYM"
In IE6, IE7, IE8 document.cookie only gives me this
"bid=ahFmaXJld29ya3Njb21wdXRlcnIPCxIHQnJvd3Nlchj2nAYM"
Is the 'path' attribute in my rid cookie throwing off IE, or would it be the missing expiration date (which I thought was supposed to be optional)? I assume it's not the fact that I'm setting more than one cookie, because that's done all the time.
IE will only allow you access to those cookies if you are in a subdirectory! So if you set the cookie's path to /about, and your page is actually /about, then you cannot access it.
So it seems that in IE you can access the cookie on pages beneath /about, like /about/us, but not on the /about page itself. Go figure :/
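If the path can't be restructured, the usual workaround is to send the cookie with Path=/ and read it by name on the client. A sketch (the helper name is mine):

// Read a cookie by name from document.cookie.
function getCookie(name) {
  var pairs = document.cookie.split('; ');
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf('=');
    if (pairs[i].substring(0, eq) === name) {
      return pairs[i].substring(eq + 1);
    }
  }
  return null;
}

var rid = getCookie('rid'); // null in IE while the cookie is scoped to Path=/about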
Alexis and Rishi have this spot on, I think. This is the only place on the web where I've found information on how IE treats cookies with paths. And what a ball ache! IE strikes again.
Incidentally, in IE 11 at least, it performs a 'starts with' comparison on the full path, so a cookie set with a path of '/abou' can be accessed on the '/about' page. In my current project this is little consolation, though, as I'm not in a position to assume that taking one character off the end of the path will reliably identify unique paths in a site.
I am also having a similar issue with IE. I'm setting three cookies without a path (so the path is assumed to be "/"). I am running in a dev environment on my own machine. When I open the page as http://localhost/page.aspx, I get the expected result and my JavaScript can find the cookies. However, if I load the same page as http://mymachine.mydomain.com/page.aspx, I can watch (in the debugger) the same three cookies being added to the response, but when I get to the JavaScript function that looks for them, all my cookies are null. Needless to say, this works fine in Firefox.