I'm trying to determine the best way to cache my JavaScript and CSS files.
There are several ways of doing this:
1. Using the Date, Expires and Cache-Control headers
2. Using the ETag header
3. Caching forever and changing the filename when the file changes
4. Appending a query string to the filename in the HTML with the last-modified time or an MD5 of the file contents
I was under the impression that the last method (4) was the most reliable and would result in the fewest unnecessary requests, but my friend just told me that sometimes the querystring method is unreliable and you actually need to change the filename.
Are there any downsides to setting the HTTP headers to cache forever and just using a query-string with the last mod time, or are there scenarios where another method would be more beneficial?
I'm a big fan of method 4, but I use the session ID in it. So a user who enters my website loads the file once per session (a session usually dies if the visitor stays inactive for more than 20 minutes or closes the browser window).
In ASP.NET, I use this syntax:
<script src="js/DetalhesCurso.js?<%=Session.SessionID%>"></script>
Your third method is the most reliable. Some CDNs/proxies ignore the query string altogether, and just serve the same cached file regardless of the query string value.
Amazon and Azure do support it, but others might not.
Do note that in method #3 you don't actually have to update the filename itself. You can just use some URL rewriting to always get that same file. You'll only have to update your HTML.
I need to embed a parameter with all my pages url. Like:
index page = www.abc.com?param=value
about us page = www.abc.com/about-us.html?param=value
When I googled it, I found the param tag, but that is a child tag of the object tag, so I don't know how to use it to address my issue.
Note: I'm adding the parameter to manage my version upgrades, so that whenever new updates are added the browser fetches them from the server instead of from its cache.
How do I achieve that?
When I googled it, I found the param tag, but that is a child tag of the object tag, so I don't know how to use it to address my issue.
You can't. It has nothing to do with your issue. Object parameters and query string parameters are entirely unrelated.
I'm adding the parameter to manage my version upgrades, so that whenever new updates are added the browser fetches them from the server instead of from its cache.
That technique is used when linking to resources that change infrequently and that you normally want heavily cached, but which occasionally change in a way that would break parts of a site if not refreshed in the browser. It primarily applies to stylesheets and JavaScript files.
For regular pages, you usually don't want such strict caching rules so you should configure your HTTP server to put appropriate cache control headers in the HTTP response for the HTML document.
For instance:
Cache-Control: max-age=3600
ETag: "44ab-51ae9454a67e2"
mnot has a good guide if you want a more in-depth explanation of how to control caching.
I am writing an extension for the Chrome browser (and later hope to port it to Firefox). The extension downloads a configuration file from my server, an XML file, via XMLHttpRequest. What I find is that it downloads the file once and every subsequent call simply uses the cached original version of the file. It doesn't matter whether or not I change the file on the server.
I read that you could try
xmlhttp.setRequestHeader( 'Pragma', 'Cache-Control: no-cache');
and so I've done this, but it doesn't seem to make any difference. The only way I can get the new file seems to be to delete the browser cache, which obviously is not a solution for my growing user base.
This seems like a problem I can't be the first person to experience. Given that caching rules seem to uphold this as a policy that can't easily be avoided, my question is: what's the better design? Is there a best practice I don't know about? Should I be pushing rather than pulling somehow?
An easy way is to add a useless parameter containing the current time to the request. Since time goes forwards and never backwards, you can be reasonably sure that your query is unique and therefore won't be cached.
For instance (assuming the URL is in a string url):
url += '?_time=' + (new Date()).getTime();
or, if your URL already has query parameters,
url += '&_time=' + (new Date()).getTime();
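The two cases above can be folded into one small helper (the function and parameter names are made up for illustration): it picks '?' or '&' depending on whether the URL already has a query string.

```javascript
// Append a throwaway _time parameter so the request URL is always unique.
function bustCache(url, time) {
  const sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + '_time=' + time;
}

// Usage: xmlhttp.open('GET', bustCache('config.xml', Date.now()));
```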
Is there any way (server- or client-side) to force the browser to pull a new version of a file (an image) from the server? The image in question is otherwise cached for a long time. I know I can append a random number, for instance, to the URL of the image, but this is not acceptable in this situation. I need the image to be refreshed from the exact same URL.
What I'm doing: a YouTube-like portal where users upload videos. Each video has a thumbnail which is shown on various pages on the portal. The user can, at any time, change the thumbnail (he can select from three generated thumbnails). So when this happens (a new image overwrites the 'original' image), I want to refresh the video's thumbnail so that the owner (I don't care if other users see the old thumbnail) will see the new thumbnail no matter where the thumbnail is shown.
I'm afraid this can't be done but I'm asking here just to be sure.
update: I'm using nginx and PHP on the server side
You could use ETags on your thumbnails. This would prevent the transmission of the actual thumbnail data if it hasn't changed (i.e. still has the same hash). However, you would still face the client's HTTP requests to check whether the ETag has changed (normally answered by HTTP 304).
But combined with a rather short freshness threshold (say, a couple of minutes), you could achieve a tradeoff between caching and freshness while still conserving resources. If you need absolute freshness, you might have to stick with ETags, though. If you create a clever hash function, you could handle the ETag requests on (or at least near) your frontend load balancer, which could thus be rather cheap.
Edit: Add alternative from my other comment.
An alternative could be to use added request parameters to force a re-fetch when the resource changed as suggested in another answer. A variation of that schema (which is used by many Rails applications) is to append the timestamp of the last change (or some kind of hash) as a parameter to the file which only changes when the file actually does change. Something like this, or one of the above methods, is actually the only way to be really sure to not have unnecessary cache validation requests while at the same time having always the freshest resource.
Add a GET parameter at the end of the filename, such as:
example.jpg?refresh=yesplease
You could also force a re-fetch of that image on each visit by using a rand() parameter.
In PHP:
example.jpg?refresh=<?php echo rand(1,999); ?>
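The same idea can be applied client-side in JavaScript (a hypothetical sketch, not the answer's code): rewrite the src of every displayed copy of the thumbnail with a fresh timestamp parameter, so the owner sees the new image immediately without the underlying URL changing.

```javascript
// Pure part: replace any previous refresh parameter with a new one.
function refreshedUrl(url, stamp) {
  const base = url.split('?')[0]; // drop any existing query string
  return base + '?refresh=' + stamp;
}

// Browser-only usage sketch: refresh all <img> tags matching a selector.
function refreshThumbnails(selector) {
  document.querySelectorAll(selector).forEach(function (img) {
    img.src = refreshedUrl(img.src, Date.now());
  });
}
```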
Let's say I have a page that refers to a .js file. In that file I have the following code that sets the value of a variable:
var foo;
function bar()
{
foo = //some value generated by some type of user input
}
bar();
Now I'd like to be able to navigate to another page that refers to the same script, and have this variable retain the value set by bar(). What's the best way to transport the value of this variable, assuming the script will be running anew once I arrive on the next page?
You can use cookies.
Cookies were originally invented by Netscape to give 'memory' to web servers and browsers. The HTTP protocol, which arranges for the transfer of web pages to your browser and browser requests for pages to servers, is stateless, which means that once the server has sent a page to a browser requesting it, it doesn't remember a thing about it. So if you come to the same web page a second, third, hundredth or millionth time, the server once again considers it the very first time you ever came there.
This can be annoying in a number of ways. The server cannot remember if you identified yourself when you want to access protected pages, it cannot remember your user preferences, it cannot remember anything. As soon as personalization was invented, this became a major problem.
Cookies were invented to solve this problem. There are other ways to solve it, but cookies are easy to maintain and very versatile.
See: http://www.quirksmode.org/js/cookies.html
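Minimal cookie helpers in the spirit of the quirksmode article (a sketch, not the article's exact code): parseCookies is a pure function, while writeCookie and readCookie use document.cookie and therefore only work in a browser.

```javascript
// Parse a cookie string like "foo=1; bar=2" into a plain object.
function parseCookies(cookieString) {
  const jar = {};
  cookieString.split(';').forEach(function (pair) {
    const eq = pair.indexOf('=');
    if (eq > -1) jar[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  });
  return jar;
}

// Browser-only: store a value that survives navigation between pages.
function writeCookie(name, value, days) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  document.cookie = name + '=' + value + '; expires=' + expires + '; path=/';
}

// Browser-only: read it back on the next page.
function readCookie(name) {
  return parseCookies(document.cookie)[name];
}
```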
You can pass the value in the query string.
When the user navigates to the other page, append the value to the query string, then read it back on the next page.
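A sketch of that round trip (getParam is a made-up helper name): build the link with the value appended, then parse location.search on the destination page. The parser takes the search string as an argument so it stays pure.

```javascript
// Extract a named parameter from a search string like "?foo=42&bar=x".
function getParam(search, name) {
  const pairs = search.replace(/^\?/, '').split('&');
  for (const pair of pairs) {
    const [key, value] = pair.split('=');
    if (key === name) return decodeURIComponent(value || '');
  }
  return null;
}

// On page A: location.href = 'next.html?foo=' + encodeURIComponent(foo);
// On page B: var foo = getParam(location.search, 'foo');
```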
Another option is jStorage. jStorage is probably best used for cached data and lossy user preferences (e.g. a saved username in a login form), as it doesn't have full browser support (though IE 6+ and most other common browsers support it) and, like cookies, cannot be fully relied upon.
You can use YUI's Cookie Library http://developer.yahoo.com/yui/cookie/
Is it possible to use JavaScript to dynamically change the HTTP Headers received when loading an image from an external source? I'm trying to control the caching of the image (Expires, Max-Age, etc...) client-side since I do not have access to the server.
As the others have said, no, it is not possible to manipulate HTTP headers and caching directives from the server in client code.
What is possible
What you do have the ability to do is ensure you get a new file. This can be done by appending a unique string to the URL of the request as a query string parameter.
e.g. if you wanted to ensure you got a new file each hour
<script type="text/javascript">
var d = new Date();
url += ("?" + d.getFullYear() + "_" + d.getDate() + "_" + d.getHours());
</script>
What this does is add a value containing the year, day of the month and hour to the URL, so it will be unique for each hour, hence ensuring a new file request.
Obviously this can be made much more generic and fine tuned, but hopefully you'll get the idea.
What is impossible
What you can't do is ensure you will not retrieve a new version from the server.
Caching directives are in the server's responsibility. You can't manipulate them on the client side.
Maybe it's an option for you to install a proxy server, e.g. if you are aiming at company employees?
I do not think JavaScript can actually do that: the images are requested by the browser, and it's up to the browser to decide which HTTP headers to send.
One way to use some custom headers would be some kind of Ajax request, not going through an <img> tag; but you'd have to know what to do with the returned data... I don't think it would help much.
If you want your images to be kept in cache by the browser, your server has to send the right headers in the responses (such as ETag and/or Expires; see mod_expires for Apache, for instance).
If you want to be absolutely sure the browser will download a new image, and not use the version it has in cache, you should use a different URL each time.
This is often done using the timestamp as a parameter in the URL, like example.com/image.jpg?123456789 (123456789 being, more or less, the current timestamp: each second, the browser will see that the URL has changed).
EDIT after the edit of the question :
The Expires header is generated by the server and is one of the headers that come in the response (it's not a header the client sends in the request; see the list of HTTP headers).
So you have absolutely no control over it from the client side: it's the server that must be configured to do the work here...
If you want more answers: what are you trying to do exactly, and why?