I have a really simple site that I created. I am trying to test JS caching in the browser but it doesn't seem to be working. I thought that most major browsers cached your JS file by default as long as the file name doesn't change. I have the site running in IIS 7 locally.
For my test I have a simple JS file that does a document.write on body load. If I make a change to the JS file (change the text that document.write writes), then save the file, I see the update when I refresh the browser. Why is this? Shouldn't I see the original output as long as the JS file name hasn't changed?
Here is the simple site I created to test.
When you refresh your browser, the browser sends a request to the server for all the resources required to display the page. If the browser has a cached version of any of the required resources, it may send an If-Modified-Since header in the request for that resource. When a server receives this header, rather than just serving up the resource, it compares the modified time of the resource to the time submitted in the If-Modified-Since header. If the resource has changed, the server will send back the resource as usual with a 200 status. But, if the resource has not changed, the server will reply with a status 304 (Not Modified), and the browser will use its cached version.
In your case, the modified date has changed, so the browser sends the new version.
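To make the exchange concrete, here is a rough sketch (Node.js, purely for illustration; IIS performs this check for you automatically for static files, and the file name and port below are just placeholders) of the conditional check the server performs:

    // Minimal sketch of the conditional GET described above (Node.js used only
    // for illustration; IIS does this automatically for static files).
    var fs = require('fs');
    var http = require('http');

    http.createServer(function (req, res) {
      fs.stat('script.js', function (err, stats) {
        if (err) { res.writeHead(404); res.end(); return; }
        var lastModified = stats.mtime.toUTCString();
        var ifModifiedSince = req.headers['if-modified-since'];
        if (ifModifiedSince && new Date(ifModifiedSince) >= new Date(lastModified)) {
          res.writeHead(304);    // not modified: the browser uses its cached copy
          res.end();
        } else {
          res.writeHead(200, { 'Content-Type': 'application/javascript',
                               'Last-Modified': lastModified });
          fs.createReadStream('script.js').pipe(res);  // changed (or first request): send the file
        }
      });
    }).listen(8080);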
The best way to test caching in your browser would probably be to use Fiddler and monitor requests and responses while you navigate your site. Avoid using the refresh button in your testing, as that frequently causes the browser to request fresh copies of all resources (i.e., omitting the If-Modified-Since header).
Edit: The above may be an over-simplification of what's going on. Surely a web search will yield plenty of in-depth articles that can provide a deeper understanding of how browser caching works in each browser.
Related
I understand how to use version numbers to force the download of updated files on my own website, but what can be done in this circumstance?
I have some small scripts I've written for public use, and about 200 different websites link to my JS file from their sites. When I make an update to the file, I have to get them all to manually change the version number of the file so they and their users re-download the latest update.
Is there anything I can do on my server (the host) that would force the other sites to re-download the latest version, without anything manual on their end?
There are 2 persistent problems in computing: Cache invalidation, naming things, and off-by-one errors.
If you want clients to get new versions of a file without changing the name of the file then you simply have to lower the max-age you set in the caching headers so that they check more frequently and get the new version in a reasonable period of time.
That's it. End of list.
You can somewhat mitigate the effects of the increased request load by also implementing an ETag header that the client will send back on subsequent requests; it can be used to detect whether the resource is unchanged and, if so, serve a 304 Not Modified response.
However, depending on the cost of implementing and running ETag checks you might just want to re-serve the existing resource and be done with it.
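As a rough sketch of both ideas (illustrative only, Node-style; the file name, max-age value, and port are placeholders): send a short max-age so clients re-check frequently, plus an ETag so the re-check can be answered with a cheap 304.

    // Sketch of the combination described above: short max-age + ETag checks.
    var fs = require('fs');
    var http = require('http');
    var crypto = require('crypto');

    http.createServer(function (req, res) {
      fs.readFile('widget.js', function (err, body) {
        if (err) { res.writeHead(404); res.end(); return; }
        var etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';
        if (req.headers['if-none-match'] === etag) {
          res.writeHead(304);                 // unchanged: the client keeps its copy
          res.end();
        } else {
          res.writeHead(200, {
            'Content-Type': 'application/javascript',
            'Cache-Control': 'max-age=300',   // re-check every 5 minutes
            'ETag': etag
          });
          res.end(body);
        }
      });
    }).listen(8080);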
Or use a CDN which should handle all the ETag nonsense for you.
I am writing a social media engine using Javascript and PHP, with flat files as my main information transfer tool. When my program adds to text files that are over a day old, they will not show up when requested by an AJAX program, until they are accessed directly by a URL and refreshed twice. Is there a way to prevent this from happening? Please do not suggest the use of a database.
The most likely reason why you need to access the flat files directly by URL and refresh twice is that your browser is caching them. Refreshing updates the browser's cache with the latest version.
When a web server is serving static content it tells the web browser to cache the content for quite a while since the content being static is unlikely to change for some time.
When a web server is serving dynamic content that almost always means that the content is going to change very fast and that it might be a bad idea to cache it.
Now, the reason why you shouldn't access your flat files directly with AJAX is not the cache issue (although it does resolve that) but security. What happens if you have some secret information in the file? Sure, you can tell the browser not to fetch that part, but the user will still have full access (by URL) to the file.
The same way you don't let the browser access your database you don't let the browser access your flat files directly. This also means that they should be stored outside the document root or protected from public access by other means.
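As a sketch of that gatekeeper idea (the question uses PHP, but the shape is the same in any language; the paths and the access check below are placeholder assumptions): the flat file lives outside the document root and is only ever served through an endpoint, with caching disabled so AJAX always sees fresh content.

    // Illustrative gatekeeper: serve a flat file stored outside the docroot.
    var fs = require('fs');
    var http = require('http');
    var path = require('path');

    var DATA_DIR = path.join(__dirname, '..', 'private-data');  // outside the docroot

    http.createServer(function (req, res) {
      if (!userMayRead(req)) {                  // hypothetical auth check
        res.writeHead(403); res.end(); return;
      }
      res.writeHead(200, {
        'Content-Type': 'text/plain',
        'Cache-Control': 'no-store'             // dynamic content: don't cache it
      });
      fs.createReadStream(path.join(DATA_DIR, 'feed.txt')).pipe(res);
    }).listen(8080);

    function userMayRead(req) {
      // placeholder: check a session cookie, token, etc.
      return true;
    }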
I am building a simple web page that will run on a computer connected to a large TV, displaying some relevant information for whoever passes it.
The page will (somehow) fetch some text files which are located on an svn server and then render them into HTML.
So I have two choices for how to do this:
Set up a cron job that periodically checks the svn server for any changes, and if there are any, updates the files from svn and (somehow) updates the page. This has the problem of violating the Access-Control-Allow-Origin policy, since the files now exist locally. Also, what is a good way to refresh a page that runs in full-screen mode?
Make the JavaScript do the whole job: set it up to periodically request the files via AJAX directly from the svn server, check for differences, and then render the page. This somehow does not seem as elegant.
Update
The Access-Control-Allow-Origin policy doesn't seem to be a problem when running on a web server, since the content is then on the same domain.
What I did in the end was a split between the two:
A cron job updates the files from svn.
The JavaScript periodically requests the files using window.setInterval, turning on the ifModified flag on the AJAX request so that the HTML is only updated if a change has occurred (a sketch follows below).
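Roughly, the polling part looks like this (assumes jQuery; the URL, interval, and render function are placeholders for my actual setup):

    // Poll the cron-updated file; ifModified makes jQuery send If-Modified-Since
    // and report a 304 as textStatus "notmodified".
    window.setInterval(function () {
      $.ajax({
        url: '/data/feed.txt',       // placeholder: file kept up to date by the cron job
        ifModified: true,
        success: function (data, textStatus) {
          if (textStatus !== 'notmodified') {
            render(data);            // placeholder: rebuild the HTML from the text
          }
        }
      });
    }, 60000);                       // poll once a minute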
On what basis do JavaScript files get cached? Say I load a file with the name 'm-script.js' from one site, and on another website I use the same name 'm-script.js' but with different contents. Will the browser fetch the new one, or just look at the name and load it from the cache? The URLs for the two m-script.js files are different (obviously).
Thanks.
If the url is different the cached copy will not be used. A new request will be made and the new file will be downloaded.
There would be a huge security and usability issue with the browser if a Javascript file cached from one website was used on another.
Browsers cache files by their full URI.
This thread (How to force browser to reload cached CSS/JS files?) will help you understand.
Since nobody has mentioned it yet, there is a lot more involved in HTTP caching than just the URI. There are various headers that control the process, e.g. Cache-Control, Expires, ETag, Vary, and so on. Requesting a different URI is always guaranteed to fetch a new copy, but these headers give more control over how requests to the potentially-cached resource are issued (or not issued, or issued but receive back a 304 Not Modified, or...).
Here is a detailed document describing the process. You can also google things like "caching expires" or "caching etag" for some more specific resources.
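For example (just an illustration; the version value is whatever your build produces), giving the file a new URI via a query string bypasses the cached copy:

    // A different URI is always treated as a brand new resource.
    var version = '2.0.1';  // hypothetical build/version identifier
    var script = document.createElement('script');
    script.src = '/js/m-script.js?v=' + encodeURIComponent(version);  // new URI => fresh fetch
    document.head.appendChild(script);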
Yes, I need to enable cross-site scripting for internal testing of an application I am working on. I would have used Chrome's disable-xss-auditor or disable-web-security switches, but it looks like they are no longer included in the Chrome build:
http://src.chromium.org/svn/trunk/src/chrome/common/chrome_switches.cc
What I am basically trying to achieve is to have a javascript application running locally on pages served by Apache (also running locally) be allowed to run scripts from a resource running on another server on our network.
Failing a way to enable XSS for Firefox, Chrome, or my least favourite, IE, would there be a way to run some kind of proxy process to modify headers to allow the XSS to happen? Any quick way to use Apache mod_rewrite or some such to do this?
Again, this is for testing only. In production, all these scripts run from the same server, so there isn't even a need to sign them, but during development and testing it is much easier to work only on the parts of the application you are concerned with and not have to run the rest, which requires a full-on application server setup.
What you need is just a little passthrough service running on the first server that passes requests over to the second server, and returns the results it gets back from the second server.
You don't say what language the server side of your application is written in or what kind of data is passed to or returned from your service, so I can't be more specific than that, but it really should be about 15 lines of code to write the passthrough service.
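As a rough illustration only (since the question doesn't say what the server side is written in, Node and the upstream host name here are assumptions), such a passthrough could look like this:

    // Tiny passthrough: forward every request to the second server and stream
    // the response back to the caller.
    var http = require('http');

    http.createServer(function (clientReq, clientRes) {
      var upstream = http.request({
        host: 'internal-api.example',     // hypothetical second server
        port: 80,
        path: clientReq.url,              // same path and query string
        method: clientReq.method,
        headers: clientReq.headers        // may need to adjust the Host header
      }, function (upstreamRes) {
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);      // stream the upstream response back
      });
      clientReq.pipe(upstream);           // stream any request body upstream
    }).listen(8080);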
What you are asking for isn't cross-site scripting, which is a type of security vulnerability in which user input (e.g. from the URL) is injected into the page in such a way that third-party scripts could be added via a link.
If you just want to run a script on a different server, then just use an absolute URI.
<script src="http://example.com/foo.js"></script>
If you need to perform Ajax requests to a remote server, use CORS or run a proxy on the current origin.
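If you control the remote server, the CORS route boils down to it sending an Access-Control-Allow-Origin header; a minimal sketch (Node-style, with a placeholder origin and payload) would be:

    // Respond with a CORS header so the locally served pages may call this origin.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': 'http://localhost'  // placeholder: wherever your Apache pages run
      });
      res.end('{"ok": true}');
    }).listen(8081);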
Again, this is for testing only
Just for testing, look at Charles Proxy. Its Map Remote feature allows you to (transparently) forward some requests to a remote server (based on wildcard URL matching).