I am working on an e-shop with a calculator that works out your loan. I need some fresh insight on this... imagine this situation:
User clicks one of the buttons, which fires a POST request (jQuery) and fills in the required data.
User clicks "add to cart" and goes to the cart.
User clicks the back button (browser).
The page loads and the server fills the calculator with the default data, BUT once that's done, the browser fills in the JS data from cache and a funny thing happens. The data gets combined, and when the user adds the product to his cart he gets a wrong, but valid, price. The valid part is what the server fills in; the rest comes from cache.
I have tried using meta tags to prevent caching, I have told jQuery not to cache the POST request, and my responder even sends multiple headers that say DO NOT CACHE. Yet the POST data still gets cached and I have no idea how to turn it off or refresh it or whatever... When I look at the headers that come back with the JSON data, the Expires header is set to the year 2099, even though I told it to use a date in the past. Apart from that, I really don't know what could be causing this issue.
Here are the headers set in the responder and what actually comes back:
header("Expires: Mon, 26 Jul 1999 05:00:00 GMT" );
header("Last-Modified: " . gmdate( "D, d M Y H:i:s" ) . "GMT" );
header("Cache-Control: no-cache, must-revalidate" );
header("Pragma: no-cache" );
header("Content-type: text/x-json");
This is what comes back (from Firebug):
Date Fri, 17 Sep 2010 08:39:38 GMT
X-Powered-By PHP/5.1.6
Connection Keep-Alive
Content-Length 126
Pragma no-cache
Last-Modified Fri, 17 Sep 2010 08:39:38GMT
Server Apache
Content-Type text/x-json
Cache-Control no-cache, must-revalidate
Expires Mon, 26 Jul 2099 05:00:00 GMT
Note: when I disable cache in browser preferences it works like a charm.
Any ideas are welcome!
Edit:
I tried the dummy parameter a few hours before, but that doesn't solve it. To put it simply, the whole problem is that when the user clicks the back button the page won't refresh; it's read from cache, or at least the data that came from Ajax is cached and gets filled in.
So basically I need a smart way to refresh the page when the user clicks the back button. Note that cookies are not an option, because some people (maybe a small percentage, but still) don't have cookies enabled.
If you want to handle the back/forward buttons, one way is to use the BBQ plugin for jQuery, which can modify the # part of the URL.
That way you can hook into back/forward events and have complete control over what's executed and when, firing whichever Ajax requests you need.
It does mean, though, that you'll have to change the concept of your page flow - no direct posting to the server, only via Ajax, plus client-side rendering via some template mechanism.
This is somewhat of an emerging trend, one example being Google with their instant search.
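A minimal sketch of that flow with jQuery BBQ (the URL, field IDs and state keys are placeholders, not taken from your code):

// Push the calculator state into the # part of the URL whenever it changes
$('#loan-amount, #loan-months').change(function () {
    $.bbq.pushState({
        amount: $('#loan-amount').val(),
        months: $('#loan-months').val()
    });
});
// BBQ normalises the hashchange event across browsers, so back/forward
// navigation lands here and the data is re-fetched instead of read from cache
$(window).bind('hashchange', function () {
    $.post('/calculator.php', {
        amount: $.bbq.getState('amount'),
        months: $.bbq.getState('months')
    }, function (data) {
        // render the fresh values into the calculator
    }, 'json');
});
// Fire once on load so a URL reached via the back button is honoured
$(window).trigger('hashchange');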
Add a dummy parameter to all your Ajax queries: '&t=' + new Date().getTime();
This will ensure that a fresh request is sent every time.
Solve it with a little hack, since it looks like a pure browser issue.
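For example (a sketch only; the URL, fields and callback are placeholders, not from your code):

// Manually append a timestamp so the browser can never reuse a cached response
$.post('/calculator.php?t=' + new Date().getTime(), { amount: 10000, months: 24 }, function (data) {
    // fill the calculator with the fresh values returned by the server
}, 'json');
// jQuery can also do this globally, but note that cache: false only appends
// the "_=<timestamp>" parameter to GET and HEAD requests, not to POSTs
$.ajaxSetup({ cache: false });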
I'm trying to get a web page that constantly updates throughout the day to continue functioning normally and, if possible, to cache correctly/better.
Currently I'm using the following htaccess file:
<FilesMatch "\.(html|htm|js|css|php)$">
FileETag None
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</FilesMatch>
This basically makes everything re-download every time the page is refreshed. It would be good if there were a way to make it re-download only when something has changed (I have no idea how to do this - could it be done by checking one specific file or something?).
I am also trying to find a way to have a "Go Offline" or "Cache Website" option (better if it is automatic) so that it caches/downloads a copy of the entire web page plus some other pages of my choice (if possible), which would mean that if I loaded the page on a mobile phone, disabled wifi/data and refreshed the page, it would still be accessible.
I've tried using manifest files (.appcache and .manifest), but they just seem to cache the entire page regardless, and when I make a change to the web page it doesn't update even when data/wifi is enabled; I have to clear the cache before it works...
Any ideas?
I'm POSTing an HTML string to an Apache server with the receiving function simply writing the string to a database. There's nothing big or clever about it and it works on every other implementation.
I am encodeURIComponent-ing the string before POSTing, and the server is returning a 301 redirect, diverting to a GET request on the non-www version of the domain. There's nothing in .htaccess beyond a standard WordPress configuration. In any case, this is simply a text string. I have also replaced the function on the server with a simple exit() to take that out of the equation.
I use the same mechanism to post numerous other data to this server via the same target function without a problem.
I've discovered the problem is sending the characters >< together - as it's sending HTML these occur a lot.
So I'm sending via: encodeURIComponent("<span class='teststring'><span>")
which POSTS.. action=updatemenu&mstring=%3Cspan%20class%3D'teststring'%3E%3Cspan%3E
and returns.. POST http://www.DOMAINREMOVED.co.uk/twdc/CMS/TellMe.php 301 Moved Permanently 301ms
followed by..
GET http://DOMAINREMOVED.co.uk/twdc/CMS/TellMe.php 301 Moved Permanently 108ms
If I remove either the > or the < from the >< pattern it works fine! Reducing the above encoded URI string to just >< results in the same error.
I am at a complete loss. Has anyone come across this before, or have any ideas? I guess ultimately I can replace the string in question with something safe, but that has implications, as all user input would have to be encoded/decoded just in case. Surely this shouldn't be necessary?
I just tried switching out all the >< in the POST string with replace(/%3E%3C/g,"~~") on the encodeURIComponent result and it's passed to the server without the redirect/error.
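Roughly what that looks like on the client (a sketch; the field names are just the ones from my example above, and the replacement back on the server is assumed):

// Encode the HTML, then swap the offending "%3E%3C" ("><") pairs for a
// placeholder; the receiving script is assumed to swap them back before saving
var html = "<span class='teststring'><span>";
var mstring = encodeURIComponent(html).replace(/%3E%3C/g, "~~");
$.post('/twdc/CMS/TellMe.php', 'action=updatemenu&mstring=' + mstring);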
Edit 00:00 19th July..
I've noticed that with the >< in the POST, these are the response headers. The X-Pingback header doesn't appear in the response without the offending characters.
Cache-Control no-cache, must-revalidate, max-age=0
Connection Keep-Alive
Content-Length 0
Content-Type text/html; charset=UTF-8
Date Thu, 18 Jul 2013 22:57:07 GMT
Expires Wed, 11 Jan 1984 05:00:00 GMT
Keep-Alive timeout=5, max=100
Location http://*domain*.co.uk/twdc/CMS/TellMe.php
Pragma no-cache
Server Apache
X-Pingback http://*domain*.co.uk/xmlrpc.php
X-Powered-By PHP/5.3.17
I guess this is WordPress-related. Can anyone shed any light on this?
To be clear, the current site is WordPress-based; the replacement is not, but they are coexisting during the build of the new one.
I'm still not sure what the issue is with WordPress and POSTing HTML characters; however, I have found that if I encode the data I want to send as a JSON object, I get no objection from the server.
I presume that stops whatever function WordPress uses from parsing the HTML in some way... I would still like to know the explanation, though!
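In case it helps anyone, this is roughly the shape of the JSON workaround (the exact content type and field names are my guesses, based on my example above):

// Wrapping the HTML in a JSON-encoded body rather than sending it as a plain
// form-encoded field is what avoided the redirect for me (details are a guess)
var payload = JSON.stringify({
    action: 'updatemenu',
    mstring: "<span class='teststring'><span>"
});
$.ajax({
    url: '/twdc/CMS/TellMe.php',
    type: 'POST',
    contentType: 'application/json',
    data: payload
});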
I have a web page which references a large number of JS and image files. When the page is loaded a second time, 304 requests go to the server. I would like to get an HTTP 200 (cache) status for cached objects rather than 304.
I am using ASP.NET 4, IIS 7.
Setting the Expires header is not working; it still sends 304 requests. I want HTTP status 200 (cache).
Please let me know if there is any technique for this.
You've said that setting the expiry doesn't help, but it does if you set the right headers.
Here's a good example: Google's library hosting. Here are the (relevant) headers Google sends back if you ask for a fully-qualified version of a library (e.g., jQuery 1.7.2, not just jQuery 1 or jQuery 1.7):
Date:Thu, 07 Jun 2012 14:43:04 GMT
Cache-Control:public, max-age=31536000
Expires:Fri, 07 Jun 2013 14:43:04 GMT
(Date isn't really relevant, I just include it so you know when the headers above were generated.) There, as you can see, Cache-Control has been set explicitly with a max-age of a year, and the Expires header also specifies an expiration date a year from now. Subsequent access (within a year) to the same fully-qualified library doesn't even result in an If-Modified-Since request to the server, because the browser knows the cached copy is fresh.
This description of caching in HTTP 1.1 in RFC 2616 should help.
I have a Flash application that sends a getURL request for an image file every 60 seconds.
This works fine in all browsers except IE9 with the Internet Option set to automatically check for newer versions of stored pages.
I set up Charles proxy (http://xk72.com) to watch the requests being sent by my Flash app and confirmed that the request is being suppressed by IE9 when the setting is on Auto, but it works fine if I change the setting to check every time I visit the web page. This, however, is not an option! I need this to work in all browsers regardless of how the options are set.
Even if I do a page refresh (F5), the ASP page does not reload. The only way to get it to reload is close the browser and restart it.
I have tried adding content headers to disable caching but it does not appear to work.
For example, here are my response headers:
HTTP/1.1 200 OK
Date Sun, 02 Oct 2011 23:58:31 GMT
Server Microsoft-IIS/6.0
X-Powered-By ASP.NET
Expires Tue, 09 Nov 2010 14:59:39 GMT
Cache-control no-cache
max-age 0
Content-Length 9691
Content-Type text/html
Set-Cookie ASPSESSIONIDACQBSACA=ECJPCLHADMFBDLCBHLJFPBPH; path=/
Cache-control private
I have read the Microsoft blog (http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx) which states that if I add the content headers, the browser should respect my request, but it obviously does not.
I don't think this is a Flash issue since the html page that holds the Flash object will not even reload.
You can append a random number to the end of the URL.
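For example, if the page itself (rather than the Flash movie) were fetching the image, it could look something like the sketch below (the URL and element ID are placeholders); the same trick applies to whatever URL the Flash app builds:

// A throw-away query parameter makes every request URL unique,
// so IE9 has nothing in its cache to match against
function refreshImage() {
    var img = document.getElementById('status-image');
    img.src = '/images/status.jpg?nocache=' + new Date().getTime();
}
// Poll every 60 seconds, matching the Flash app's interval
setInterval(refreshImage, 60000);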
I'm sending 2 cookies to the browser. One is a browser identifier which expires in 1 year, and the other is a session tracker without an expiration. The response headers for a fresh request look like this
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-XSS-Protection: 0
ETag: "b502a27282a5c621f34d522c3fcc8e3e"
Set-Cookie: bid=ahFmaXJld29ya3Njb21wdXRlcnIPCxIHQnJvd3NlchimigcM; expires=Fri, 12-Aug-2011 05:21:55 GMT; Path=/
Set-Cookie: rid=1281569589; Path=/about
Expires: Wed, 11 Aug 2010 23:33:09 GMT
Cache-Control: private, max-age=345600
Date: Wed, 11 Aug 2010 23:33:09 GMT
I'm trying to access both cookies from JavaScript on the page.
In Firefox and Chrome document.cookie gives me this
"rid=1281568223; bid=ahFmaXJld29ya3Njb21wdXRlcnIPCxIHQnJvd3Nlchj2nAYM"
In IE6, IE7, IE8 document.cookie only gives me this
"bid=ahFmaXJld29ya3Njb21wdXRlcnIPCxIHQnJvd3Nlchj2nAYM"
Is the 'path' attribute in my rid cookie throwing off IE or would it be the missing expiration date (which I thought was supposed to be optional)? I assume it is not the fact that I'm setting more than 1 cookie, because that's done all the time.
IE will only allow you access to those cookies if you are in a subdirectory! So if you set the cookie's path to be /about, and your page is actually /about then you cannot access it.
So it seems that in IE you can access the cookie on pages beneath /about, like /about/us, but not on the /about page itself. Go figure :/
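For what it's worth, a quick way to check this from the page itself (a hypothetical helper, not from the question):

// Returns the value of a named cookie from document.cookie, or null if the
// browser hasn't exposed it to this page (as IE does for a Path=/about cookie
// when the page itself is /about)
function getCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : null;
}
// In IE this returns null on /about but the expected value on /about/us
getCookie('rid');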
Alexis and Rishi, I think, have this spot on. This is the only place on the web I've found this information on how IE treats cookies with paths. What a ball ache! IE strikes again.
Incidentally, in IE 11 at least, it performs a 'starts with' comparison on the full path, so a cookie set with a path of '/abou' can be accessed on the '/about' page. In my current project this is little consolation, though, as I'm not in a position to assume that taking one character off the end of the path will reliably identify unique paths in a site.
I am also having a similar issue with IE. I'm setting three cookies without a path (so assumed to be "/"). I am running in a dev environment on my own machine. When I open the page as http://localhost/page.aspx, I get the expected result and my JavaScript can find the cookies; however, if I load the same page as http://mymachine.mydomain.com/page.aspx, I can watch (in the debugger) the same three cookies being added to the response, but when I get to the JavaScript function that looks for them, all my cookies are null. Needless to say, this works fine in Firefox.