I am writing an extension for the Chrome browser (and later hope to port it to Firefox). The extension downloads a configuration file from my server - an XML file fetched via XMLHttpRequest. What I'm finding is that it downloads the file once, and every subsequent call simply seems to use the cached original version of the file. It doesn't matter whether or not I change the file on the server.
I read that you could try
xmlhttp.setRequestHeader( 'Pragma', 'Cache-Control: no-cache');
and so I've done this, but it doesn't seem to make any difference. The only way I can get the new file seems to be to delete the browser cache - which obviously is not a solution for my growing user base.
This seems like a problem I wouldn't be the first person to experience - so given that caching rules seem to uphold this as a policy that can't be easily avoided, my question is: what's the better design? Is there a best practice I don't know about? Should I be pushing rather than pulling somehow?
An easy way is to add a useless parameter containing the time to the request. Since time tends to go forwards and never backwards, you can be reasonably sure that your query is unique and therefore won't be cached.
For instance (assuming the URL is in a string url):
url += '?_time=' + (new Date()).getTime();
or, if your URL already has query parameters,
url += '&_time=' + (new Date()).getTime();
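For instance, a small helper (a sketch; the function name is just illustrative) that picks the right separator automatically:
function bustCache(url) {
    // append a timestamp, using '?' or '&' depending on whether a query string already exists
    var separator = url.indexOf('?') === -1 ? '?' : '&';
    return url + separator + '_time=' + new Date().getTime();
}
xmlhttp.open('GET', bustCache(url), true);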
I have a JavaScript application that uses a REST API server as a data provider.
There is one method on the API that takes a GET request and returns a raw response containing an email (as far as I can see it is some kind of .eml content).
I use a simple XMLHttpRequest.
The question is: how can I take the response (the file content) and hand it to the browser so that the browser begins the download process?
Is it possible to do this at all with a GET method?
Javascript does not support downloading and saving arbitrary files on a user's computer due to obvious security concerns.
There are, however, a few ways to indirectly trigger the download using JavaScript. One of those ways is to use an invisible iframe and set its source to the path of the file.
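A rough sketch of that approach (the endpoint URL is an assumption, and the server must also send a Content-Disposition: attachment header so the browser downloads the response instead of rendering it):
var iframe = document.createElement('iframe');
iframe.style.display = 'none';           // keep the frame invisible
iframe.src = '/api/messages/123/raw';    // hypothetical GET endpoint returning the .eml
document.body.appendChild(iframe);       // the browser handles the download from here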
You might be waiting for browsers to implement window.saveAs, see also the question Using HTML5/Javascript to generate and save a file
There are several snippets you could try, for instance https://github.com/eligrey/FileSaver.js or https://gist.github.com/MrSwitch/3552985
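As a rough illustration of what such snippets do under the hood, you can wrap the response text in a Blob and hand it to the browser through a temporary object URL (a sketch assuming xhr is the completed XMLHttpRequest from the question; the filename and MIME type are assumptions, and the download attribute needs a reasonably modern browser):
var blob = new Blob([xhr.responseText], { type: 'message/rfc822' }); // .eml MIME type
var link = document.createElement('a');
link.href = URL.createObjectURL(blob);
link.download = 'message.eml';           // suggested filename
document.body.appendChild(link);
link.click();                            // triggers the save dialog / download
document.body.removeChild(link);
URL.revokeObjectURL(link.href);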
Depending on how you have your client running, you could use local storage.
To store the item:
localStorage.setItem('NAME', DATA);
to retrieve it:
localStorage.getItem('NAME');
and to delete it:
localStorage.removeItem('NAME');
and then set up a callback or promise to render it into the HTML. If you use axios you can set this up with a promise: https://github.com/mzabriskie/axios
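For example, a minimal sketch with axios (the URL and element id are assumptions):
axios.get('/api/data')
    .then(function (response) {
        // cache the payload so later loads can skip the request
        localStorage.setItem('NAME', JSON.stringify(response.data));
        document.getElementById('output').textContent = JSON.stringify(response.data);
    })
    .catch(function (error) {
        console.error('Request failed:', error);
    });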
I've noticed that Firefox does not cache GET requests automatically. Following is the code I use:
var ajax = new XMLHttpRequest();
ajax.open("GET","page.php?val=" + val,true);
ajax.send();
With jQuery it is possible to set cache: true; how can I save to the cache with vanilla JavaScript (client side)? Is it also possible to decide for how long? Can you give me an example of code? Thanks in advance!
Web caching is largely controlled by the headers sent from the server (Expires:, etc.). Browsers sometimes "cheat" and don't really cache even though the headers would allow them to, probably because the user has used the UI to turn off caching, for example by setting the cache size to zero. But browsers that "cheat" in the other direction, caching anyway even though the headers don't allow it, are (for good reason) extremely uncommon.
If caching is not happening for you, it's a function of the file and the server (or perhaps the browser configuration), not of any browser type or version. (To say the same thing a different way, your Firefox would cache just fine if the server sent the needed headers.) The headers are controlled in a variety of ways by different servers and different providers. For the Apache server, the nitty-gritty may be in an ".htaccess" file, pre-written templates of which are often available.
To a first approximation, with HTML4, you simply cannot control web caching from the client side, no matter what tool you use and no matter what your program does. A generic exception is provided by the "offline application cache" or "appcache" in HTML5, but it comes with other restrictions, for example those about "one per site" and "same origin".
You can cache responses using a simple hash, something like:
var cache = {};

function getData(variable, callback) {
    // serve from the in-memory cache if we already have it
    if (cache[variable]) {
        callback(cache[variable]);
        return;
    }
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "page.php?val=" + encodeURIComponent(variable), true);
    xhr.onload = function () {
        cache[variable] = xhr.responseText; // store the response for later calls
        callback(xhr.responseText);
    };
    xhr.send();
}
That's a naive implementation of a caching mechanism: it only works for the lifetime of the page (i.e., until the page is refreshed or navigated away from), it has no expiration mechanism, and it has other shortcomings I'm sure. For instance, you could use localStorage to get around the refresh issue.
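For example, a minimal sketch of a localStorage-backed variant with a simple expiry (the key handling and max age are illustrative):
var MAX_AGE_MS = 60 * 60 * 1000; // keep entries for one hour

function setCached(key, data) {
    localStorage.setItem(key, JSON.stringify({ time: Date.now(), data: data }));
}

function getCached(key) {
    var raw = localStorage.getItem(key);
    if (!raw) return null;
    var entry = JSON.parse(raw);
    if (Date.now() - entry.time > MAX_AGE_MS) {
        localStorage.removeItem(key); // expired, drop it
        return null;
    }
    return entry.data;
}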
But hey, I'm not getting paid to write this :).
I'm trying to determine the best way to cache my JavaScript and CSS files.
There are several ways of doing this:
1. Using the Date, Expires and Cache-Control headers
2. Using the ETag header
3. Cache forever and change the filename when the file changes
4. Append a querystring to the filename in the HTML with the last mod time or an MD5 of the file contents
I was under the impression that the last method (4) was the most reliable and would result in the fewest unnecessary requests, but my friend just told me that sometimes the querystring method is unreliable and you actually need to change the filename.
Are there any downsides to setting the HTTP headers to cache forever and just using a query-string with the last mod time, or are there scenarios where another method would be more beneficial?
I'm a big fan of method 4, but I use the session ID for it. So, a user that enters my website will load it once per session (a session usually dies if the visitor stays inactive for more than 20 minutes or closes the browser window).
In ASP.NET, I use this syntax:
<script src="js/DetalhesCurso.js?<%=Session.SessionID%>"></script>
Your third method is the most reliable. Some CDNs/proxies ignore the query string altogether, and just serve the same cached file regardless of the query string value.
Amazon and Azure do support it, but others might not.
Do note that in method #3 you don't actually have to update the filename itself. You can just use some URL rewriting to always get that same file. You'll only have to update your HTML.
I'm looking to make one XMLHttpRequest (every 30 minutes -- retrieving a weather forecast) and use the XML response across multiple HTML documents. I've looked near and far and can only get the parsed XML to show on one document.
Is there a way to reference different documents from one javascript function?
No framework, just straight javascript/ajax.
forecastXMLreq = new XMLHttpRequest();
forecastXMLreq.open("GET",forecastURL,false);
forecastXMLreq.send();
forecastXML = forecastXMLreq.responseXML;
var day1 = forecastXML.getElementsByTagName("weekday_short")[1].childNodes[0].nodeValue;
document.getElementById("day1").innerHTML = day1.toUpperCase();
Multiple html files, one XHR call is what I'm looking for
The easiest way would be to leverage regular http caching. On the next pages, you still need to request the file in your code, but the browser can potentially skip the request and just automatically and transparently fetch from the local disk cache instead. You aren't guaranteed it will be cached for the full 30 minutes, as the browser has a limited amount of cache space and it decides what to purge, and when.
Just configure the server to send the following HTTP header for that XML response:
Cache-Control: max-age=1800
More info on HTTP caching:
http://www.mnot.net/cache_docs/
An alternative is to make use of the limited support that browsers offer for HTML5 local storage. No web server config is needed and you don't need to re-request the file in your code, although browser support is limited and you will have different code for retrieving it from local storage.
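A minimal sketch of that alternative for this case (the storage keys are assumptions; forecastURL is the variable from the question):
var THIRTY_MINUTES = 30 * 60 * 1000;
var cachedText = localStorage.getItem('forecastXML');
var cachedAt = Number(localStorage.getItem('forecastXMLTime'));
var xmlText;

if (cachedText && (Date.now() - cachedAt) < THIRTY_MINUTES) {
    xmlText = cachedText; // still fresh, skip the request on this page
} else {
    var req = new XMLHttpRequest();
    req.open("GET", forecastURL, false); // synchronous, as in the question's snippet
    req.send();
    xmlText = req.responseText;
    localStorage.setItem('forecastXML', xmlText);
    localStorage.setItem('forecastXMLTime', String(Date.now()));
}

// localStorage only stores strings, so re-parse the XML on each page
var forecastXML = new DOMParser().parseFromString(xmlText, "application/xml");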
Is it possible to use JavaScript to dynamically change the HTTP Headers received when loading an image from an external source? I'm trying to control the caching of the image (Expires, Max-Age, etc...) client-side since I do not have access to the server.
As the others have said, no, it is not possible to manipulate HTTP headers and caching directives from the server in client code.
What is possible
What you do have the ability to do is ensure you get a new file. This can be done by appending a unique string to the URL of the request as a query string parameter.
e.g. if you wanted to ensure you got a new file each hour:
<script type="text/javascript">
    var d = new Date();
    url += ("?" + d.getFullYear() + "_" + d.getMonth() + "_" + d.getDate() + "_" + d.getHours());
</script>
What this does is add a value containing the year, month, day and hour to the URL, so it will be unique for each hour, hence ensuring a new file request. (Not tested!)
Obviously this can be made much more generic and fine tuned, but hopefully you'll get the idea.
What is impossible
What you can't do is force the browser to keep using its cached copy, i.e. you cannot ensure that it will not retrieve a new version from the server.
Caching directives are the server's responsibility; you can't manipulate them on the client side.
Maybe it's an option for you to install a proxy server, e.g. if you are aiming at company employees?
I do not think JavaScript can actually do that: the images are requested by the browser, and it's up to the browser to decide which HTTP headers to send.
One way to use some custom headers would be with some kind of Ajax request, not going through any <img> tag; but you'd have to know what to do with the returned data... I don't think it would help much.
If you want your images to be kept in cache by the browser, your server has to send the right headers in the responses (like ETag and/or Expires -- see mod_expires for Apache, for instance).
If you want to be absolutely sure the browser will download a new image, and not use the version it has in cache, you should use a different URL each time.
This is often done by using the timestamp as a parameter in the URL, like example.com/image.jpg?123456789 (123456789 being, more or less, the current timestamp: each second, the browser will see that the URL has changed).
EDIT after the edit of the question :
The Expires header is generated by the server, and is one of the headers that come in the response (it's not a header the client sends in the request; see List of HTTP headers).
So, you have absolutely no control over it from the client side: it's the server that must be configured to do the work here...
If you want more answers: what exactly are you trying to do, and why?