How do we do a hard reload/refresh in Chrome using Javascript?
window.location.reload(true);
location.reload(true);
didn't do it for me.
===========================
What I meant by 'didn't do it for me...'
The session cookies (e.g. JSESSIONID) were not renewed, especially the HttpOnly ones.
What I want to achieve...
I wanted to reload the page like it's the first time I accessed the URL.
I wanted to simulate the steps below (which is like accessing the URL for the first time)
- Open browser
- Type URL
- Hit Enter
I wonder if there is a more powerful Javascript command that reloads as if it is the first time.
The best (and practically only) way to ensure a page is hard reloaded is on the server. One option is to serve headers, such as Cache-Control, that tell the browser not to cache resources and to always revalidate them, which means everything must be redownloaded each time.
I would not recommend serving this header on every request in your production application, though.
Cache-Control: no-store, max-age=0
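For illustration, here is a minimal sketch of serving that header, assuming a Node/Express server (the routes and middleware are hypothetical, not part of the original answer):

const express = require('express');
const app = express();

// Tell the browser to store nothing, so every request goes back to the server.
// Use sparingly in production: this disables caching entirely.
app.use((req, res, next) => {
  res.set('Cache-Control', 'no-store, max-age=0');
  next();
});

app.get('/', (req, res) => {
  res.send('<html>...</html>'); // every visit is treated like a first visit
});

app.listen(3000);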
On your page you can also add meta tags that tell the browser what to do, provided equivalent headers weren't already sent. Depending on how you serve your content, you can add the following to your head element to tell the browser to store absolutely nothing, which is equivalent to hitting the page for the first time:
<head>
...
<meta http-equiv="Cache-Control" content="no-store, max-age=0">
...
</head>
location.reload() should work. Works for me!
I'm working on a browser extension that needs to read the data in a pdf file that pops up.
When the popup comes up and I go to inspect, I only find the following information:
<embed id="plugin" type="application/x-google-chrome-pdf" src="https://thisisnottherealurl-soignorethis part......../something.aspx" stream-url="chrome-extension://xxxxxx/xxxx" headers="cache-control: no-cache, no-store,must-revalidate
content-type: application/pdf
date: Wed, 03 Mar 1999 15:31:26 GMT
expires: -1
pragma: no-cache
server: Microsoft-IIS/10.0
x-aspnet-version: 4.0.30319
x-powered-by: ASP.NET
" background-color="0xFF525659" top-toolbar-height="0" javascript="allow" full-frame="" pdf-viewer-update-enabled="">
I know for a fact that the information is in XML format, and I am certain that it is found in the embed tag. I can view it by changing the settings to 'save' the file rather than to view it. What I cannot seem to find, neither in the Network information nor in the Source, is where that information is, nor how I can have the browser extension go through it for me.
For anyone else interested in this method, I found some interesting workarounds.
Apparently the pdf documents create a dynamic extension and use Chrome APIs inside the browser, which appears to run the code for making the pdf.
This makes it somewhat more difficult than usual to get a look at the network traffic and the processes.
An interesting workaround, aside from the above comment, that I found is that the pdf document can be selected and cut/pasted into the clipboard, or even into a variable.
After some testing, I found that my browser extension does have capability in the new pdf window. Thus I was able to extract the information that way.
This isn't exactly what I had been looking for, but I found it to be quite interesting and thought someone else could use it.
Remember to take into account that the code runs asynchronously.
The code for select/copy that I generally use is:
let sel = window.getSelection(), range = document.createRange();
range.selectNodeContents(document.documentElement);
sel.removeAllRanges();
sel.addRange(range);              // addRange() returns nothing, so read the text via toString()
let textStuff = sel.toString();
sel.removeAllRanges();
The problem, however, is that the pdf document might actually be embedded in the CSS, thus avoiding the usual method of copy/paste from the DOM.
If the copy/paste doesn't work for you, I also found a somewhat interesting method of simulating the copy paste at:
How to implement ctrl click behavior to copy text from an embedded pdf in a webapp?
Is there a way I can put some code on my page so when someone visits a site, it clears the browser cache, so they can view the changes?
Languages used: ASP.NET, VB.NET, and of course HTML, CSS, and jQuery.
If this is about .css and .js changes, then one way is "cache busting" by appending something like "_versionNo" to the file name for each release. For example:
script_1.0.css // This is the URL for release 1.0
script_1.1.css // This is the URL for release 1.1
script_1.2.css // etc.
or after the file name:
script.css?v=1.0 // This is the URL for release 1.0
script.css?v=1.1 // This is the URL for release 1.1
script.css?v=1.2 // etc.
You can check this link to see how it could work.
Look into the Cache-Control and Expires META tags.
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="EXPIRES" CONTENT="Mon, 22 Jul 2002 11:12:01 GMT">
Another common practice is to append constantly-changing strings to the end of the requested files. For instance:
<script type="text/javascript" src="main.js?v=12392823"></script>
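A small variation on this idea, shown here only for illustration, is to derive the suffix from the file's modification time so it changes only when the file does; a minimal Node sketch, with hypothetical paths and helper name:

const fs = require('fs');

// Hypothetical helper: build a versioned URL from the file's mtime,
// so the query string changes only when the file actually changes.
function versionedUrl(publicPath, filePath) {
  const mtime = fs.statSync(filePath).mtimeMs;
  return `${publicPath}?v=${Math.round(mtime)}`;
}

// Produces something like <script type="text/javascript" src="main.js?v=1679071234567"></script>
const tag = `<script type="text/javascript" src="${versionedUrl('main.js', './public/main.js')}"></script>`;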
Update 2012
This is an old question but I think it needs a more up to date answer because now there is a way to have more control of website caching.
In Offline Web Applications (which is really any HTML5 website) applicationCache.swapCache() can be used to update the cached version of your website without the need for manually reloading the page.
This is a code example from the Beginner's Guide to Using the Application Cache on HTML5 Rocks explaining how to update users to the newest version of your site:
// Check if a new cache is available on page load.
window.addEventListener('load', function(e) {
window.applicationCache.addEventListener('updateready', function(e) {
if (window.applicationCache.status == window.applicationCache.UPDATEREADY) {
// Browser downloaded a new app cache.
// Swap it in and reload the page to get the new hotness.
window.applicationCache.swapCache();
if (confirm('A new version of this site is available. Load it?')) {
window.location.reload();
}
} else {
// Manifest didn't change. Nothing new to serve.
}
}, false);
}, false);
See also Using the application cache on Mozilla Developer Network for more info.
Update 2016
Things change quickly on the Web.
This question was asked in 2009 and in 2012 I posted an update about a new way to handle the problem described in the question. Another 4 years passed and now it seems that it is already deprecated. Thanks to cgaldiolo for pointing it out in the comments.
Currently, as of July 2016, the HTML Standard, Section 7.9, Offline Web applications includes a deprecation warning:
This feature is in the process of being removed from the Web platform.
(This is a long process that takes many years.) Using any of the
offline Web application features at this time is highly discouraged.
Use service workers instead.
So does Using the application cache on Mozilla Developer Network that I referenced in 2012:
Deprecated This feature has been removed from the Web standards.
Though some browsers may still support it, it is in the process of
being dropped. Do not use it in old or new projects. Pages or Web apps
using it may break at any time.
See also Bug 1204581 - Add a deprecation notice for AppCache if service worker fetch interception is enabled.
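For readers landing here now, a rough sketch of the service-worker replacement might look like the following (the cache name and file list are hypothetical); bumping CACHE_NAME on each release and deleting old caches on activate plays the same role the manifest's version comment used to:

// sw.js - a minimal sketch, not a production setup
const CACHE_NAME = 'site-cache-v2'; // bump this on every release

self.addEventListener('install', (event) => {
  // Pre-cache the core files for this release.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/style.css', '/main.js']))
  );
});

self.addEventListener('activate', (event) => {
  // Drop caches left over from previous releases.
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((k) => k !== CACHE_NAME).map((k) => caches.delete(k)))
    )
  );
});

self.addEventListener('fetch', (event) => {
  // Serve from the current cache, falling back to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});

The page would register it with navigator.serviceWorker.register('/sw.js').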
Not as such. One method is to send the appropriate headers when delivering content to force the browser to reload:
Making sure a web page is not cached, across all browsers.
If you search for "cache header" or something similar here on SO, you'll find ASP.NET specific examples.
Another way, less clean but sometimes the only one if you can't control the headers on the server side, is to add a random GET parameter to the resource being requested:
myimage.gif?random=1923849839
I had a similar problem and this is how I solved it:
In the index.html file I added the manifest attribute:
<html manifest="cache.manifest">
In the <head> section I included the script that updates the cache:
<script type="text/javascript" src="update_cache.js"></script>
In the <body> section I inserted the onload function:
<body onload="checkForUpdate()">
In cache.manifest I put all the files I want to cache. In my case (Apache) it works just by updating the "version" comment each time. Naming files with "?ver=001" or similar at the end is also an option, but it's not needed: changing just # version 1.01 triggers the cache-update event.
CACHE MANIFEST
# version 1.01
style.css
imgs/logo.png
#all other files
It's important to include the three items above (the manifest attribute, the update script, and the onload handler) only in index.html. Otherwise
GET http://foo.bar/resource.ext net::ERR_FAILED
occurs because every "child" file tries to cache the page while the page is already cached.
In the update_cache.js file I put this code:
function checkForUpdate()
{
    // Only run if the browser supports the application cache.
    if (window.applicationCache != undefined && window.applicationCache != null)
    {
        window.applicationCache.addEventListener('updateready', updateApplication);
    }
}
function updateApplication(event)
{
    // status 4 == UPDATEREADY: a new cache has been downloaded and is ready to use.
    if (window.applicationCache.status != 4) return;
    window.applicationCache.removeEventListener('updateready', updateApplication);
    window.applicationCache.swapCache();
    window.location.reload();
}
Now you just change your files and update the version comment in the manifest. The next visit to index.html will update the cache.
The parts of this solution aren't mine; I found them around the internet and put them together so that it works.
For static resources, the right kind of caching is to use query parameters with the value of each deployment or file version. This has the effect of clearing the cache after each deployment.
/Content/css/Site.css?version={FileVersionNumber}
Here is an ASP.NET MVC example.
<link href="@Url.Content("~/Content/Css/Reset.css")?version=@this.GetType().Assembly.GetName().Version" rel="stylesheet" type="text/css" />
Don't forget to update assembly version.
I had a case where I would take photos of clients online and would need to update the div if a photo changed. The browser was still showing the old photo, so I used the hack of appending a random GET variable, which would be unique every time. Here it is, in case it helps anybody:
<img src="/photos/userid_73.jpg?random=<?php echo rand() ?>" ...
EDIT
As pointed out by others, the following is a much more efficient solution, since it reloads images only when they change, identifying the change by the file's modification time:
<img src="/photos/userid_73.jpg?modified=<?php echo filemtime("/photos/userid_73.jpg") ?>"
A lot of answers are missing the point - most developers are well aware that turning off the cache is inefficient. However, there are many common circumstances where efficiency is unimportant and default cache behavior is badly broken.
These include nested, iterative script testing (the big one!) and workarounds for broken third-party software. None of the solutions given here are adequate to address such common scenarios. Most web browsers cache far too aggressively and provide no sensible means to avoid these problems.
Updating the URL to the following works for me:
/custom.js?id=1
By adding a unique number after ?id= and incrementing it for new changes, users do not have to press CTRL + F5 to refresh the cache. Alternatively, you can append a hash or a string version of the current time or epoch after ?id=
Something like ?id=1520606295
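As an illustration, a tiny sketch of injecting the script with an epoch-based id (the path is hypothetical):

// Load custom.js with a timestamp so the browser never reuses a cached copy.
const script = document.createElement('script');
script.src = '/custom.js?id=' + Date.now();   // e.g. /custom.js?id=1520606295000
document.head.appendChild(script);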
<meta http-equiv="pragma" content="no-cache" />
Also see https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
Here is the MSDN page on setting caching in ASP.NET.
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.SetValidUntilExpires(False)
Response.Cache.VaryByParams("Category") = True
If Response.Cache.VaryByParams("Category") Then
'...
End If
Not sure if this really helps you, but that's how caching should work in any browser. When the browser requests a file, it should always send a request to the server unless there is an "offline" mode. The server reads some parameters like the modified date or ETags.
The server returns a 304 Not Modified response and the browser has to use its cache. If the ETag doesn't validate on the server side, or the modified date is older than the current modified date, the server returns the new content with a new modified date or ETags, or both.
If no caching data is sent to the browser, I guess the behavior is undetermined: the browser may or may not cache files that don't say how they should be cached. If you set caching parameters in the response, it will cache your files correctly, and the server may then choose to return a 304 Not Modified, or the new content.
This is how it should be done. Using random params or version numbers in URLs is more like a hack than anything.
http://www.checkupdown.com/status/E304.html
http://en.wikipedia.org/wiki/HTTP_ETag
http://www.xpertdeveloper.com/2011/03/last-modified-header-vs-expire-header-vs-etag/
After reading, I saw that there is also an expiry date. If you have a problem, it might be that you have an expiry date set up. In other words, once the browser caches your file, since it has an expiry date, it won't have to request it again before that date. It will never ask the server for the file and will never receive a 304 Not Modified; it will simply use the cache until the expiry date is reached or the cache is cleared.
So that is my guess: you have some sort of expiry date, and you should use Last-Modified, ETags, or a mix of them all, and make sure that there is no expiry date.
If people tend to refresh a lot and the file doesn't change a lot, then it might be wise to set a big expiry date.
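To make the 304 flow above concrete, here is a minimal sketch assuming a Node/Express server and a hypothetical styles.css on disk; it only illustrates the validator handshake, not a production setup:

const express = require('express');
const fs = require('fs');
const crypto = require('crypto');

const app = express();

app.get('/styles.css', (req, res) => {
  const body = fs.readFileSync('./styles.css');
  // Validator derived from the file contents.
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end();     // browser keeps using its cached copy
  }

  res.set('ETag', etag);
  res.type('text/css').send(body);    // new or changed content
});

app.listen(3000);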
My 2 cents!
I implemented this simple solution that works for me (not yet in a production environment):
function verificarNovaVersio() {
    // Version last seen for this page, stored in localStorage.
    var sVersio = localStorage['gcf_versio'+ location.pathname] || 'v00.0.0000';
    $.ajax({
        url: "./versio.txt"
        , dataType: 'text'
        , cache: false
        , contentType: false
        , processData: false
        , type: 'post'
    }).done(function(sVersioFitxer) {
        console.log('App version: '+ sVersioFitxer +', cached version: '+ sVersio);
        // If the server-side version is newer, remember it and force a reload.
        if (sVersio < (sVersioFitxer || 'v00.0.0000')) {
            localStorage['gcf_versio'+ location.pathname] = sVersioFitxer;
            location.reload(true);
        }
    });
}
I have a little file located where the HTML files are:
"versio.txt":
v00.5.0014
This function is called in all of my pages, so on load it checks whether the localStorage version value is lower than the current version and, if so, does a
location.reload(true);
...to force a reload from the server instead of from the cache.
(obviously, instead of localStorage you can use cookies or other persistent client storage)
I opted for this solution for its simplicity, because maintaining only a single "versio.txt" file will force the full site to reload.
The queryString method is hard to implement and is also cached (if you change from v1.1 back to a previous version, it will load from cache; that means the cache is not flushed and keeps all previous versions).
I'm a bit of a newbie and I'd appreciate your professional check & review to ensure my method is a good approach.
Hope it helps.
In addition to setting Cache-control: no-cache, you should also set the Expires header to -1 if you would like the local copy to be refreshed each time (some versions of IE seem to require this).
See HTTP Cache - check with the server, always sending If-Modified-Since
There is one trick that can be used: append a parameter/string to the file name in the script tag and change it when your file changes.
<script src="myfile.js?version=1.0.0"></script>
The browser interprets the whole string as the file path even though what comes after the "?" are parameters. So what happens now is that the next time you update your file, you just change the number in the script tag on your website (Example <script src="myfile.js?version=1.0.1"></script>) and each user's browser will see that the file has changed and grab a new copy.
Force browsers to clear the cache or reload the correct data? I have tried most of the solutions described on Stack Overflow; some work, but after a little while the browser eventually caches again and displays the previously loaded script or file. Is there another way to clear the cache (css, js, etc.) that actually works on all browsers?
I found so far that specific resources can be reloaded individually if you change the date and time of your files on the server. "Clearing cache" is not as easy as it should be. Instead of clearing the cache on my browsers, I realized that "touching" the cached server files actually changes the date and time of the source file on the server (tested on Edge, Chrome and Firefox), and most browsers will then automatically download the most current fresh copy of what's on your server (code, graphics and any multimedia too). I suggest you copy the most current scripts onto the server and "do the touch thing" before your program runs, so it changes the date of all your problem files to the most current date and time; the browser then downloads a fresh copy:
<?php
touch('/www/sample/file1.css');
touch('/www/sample/file2.js');
?>
then ... the rest of your program...
It took me some time to resolve this issue (many browsers act differently to different commands, but they all check the time of files and compare it to the downloaded copy in the browser; if the date and time differ, they do the refresh). If you can't go the supposedly right way, there is always another usable and better solution. Best regards and happy camping. By the way, touch() or an alternative works in many programming languages, including JavaScript, Bash, sh and PHP, and you can include or call it from HTML.
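For instance, a minimal Node sketch of the same "touch" idea (the paths are hypothetical, mirroring the PHP example above):

const fs = require('fs');

// Set both access and modification time to "now", like PHP's touch().
const now = new Date();
['/www/sample/file1.css', '/www/sample/file2.js'].forEach((file) => {
  fs.utimesSync(file, now, now);
});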
For webpack users:
I added the time along with the chunkhash in my webpack config. This solved my problem of invalidating the cache on each deployment. We also need to take care that index.html / asset.manifest is not cached, either by the CDN or by the browser. The chunk name config in the webpack config will look like this:
filename: `[chunkhash]-${Date.now()}.js`
or, if you are using contenthash, then
filename: `[contenthash]-${Date.now()}.js`
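Put together, the relevant part of the config might look roughly like this (a sketch only; paths are hypothetical and I'm assuming webpack 5 option names):

// webpack.config.js - sketch only
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    // Content hash plus build timestamp, so every deployment gets a new file name.
    filename: `[contenthash]-${Date.now()}.js`,
  },
};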
This is the simple solution I used in one of my applications, using PHP.
All JS and CSS files are placed in a folder with a version name, for example "1.0.01":
root\1.0.01\JS
root\1.0.01\CSS
I created a helper and defined the version number there:
<?php
function system_version()
{
return '1.0.07';
}
And linked the JS and CSS files like below:
<script src="<?= base_url(); ?>/<?= system_version();?>/js/generators.js" type="text/javascript"></script>
<link rel="stylesheet" type="text/css" href="<?= base_url(); ?>/<?= system_version(); ?>/css/view-checklist.css" />
Whenever I make changes to any JS or CSS file, I change the system version in the helper, rename the folder, and deploy.
I had the same problem. All I did was change the names of the files linked from my index.html file, then go into index.html and update those names. Not the best practice, but if it works, it works. The browser sees them as new files, so they get re-downloaded onto the user's device.
example:
I want to update a CSS file named styles.css, so I rename it to styless.css.
Then I go into index.html, update the <link> tag, and change styles.css to styless.css.
In case anyone is interested, I found my solution to get browsers to refresh .css and .js files in the context of .NET MVC (.NET Framework 4.8) and the use of bundles.
I wanted to make browsers refresh cached files only after a new assembly is deployed.
Building on Paulius Zaliaduonis' response, my solution is as follows:
Store your application base URL in the web.config app settings (the HttpContext is not yet available at runtime during RegisterBundles...), then make this parameter change according to the configuration (debug, staging, release...) via an XML transform.
In BundleConfig.RegisterBundles, get the assembly version by means of reflection, and...
...change the default tag format of both styles and scripts so that the bundling system generates link and script tags with a query string parameter appended to them.
Here is the code
public static void RegisterBundles(BundleCollection bundles)
{
string baseUrl = System.Configuration.ConfigurationManager.AppSettings["by.app.base.url"].ToString();
string assemblyVersion = Assembly.GetExecutingAssembly().GetName().Version.ToString();
Styles.DefaultTagFormat = $"<link href='{baseUrl}{{0}}?v={assemblyVersion}' rel='stylesheet'/>";
Scripts.DefaultTagFormat = $"<script src='{baseUrl}{{0}}?v={assemblyVersion}'></script>";
}
You'll get tags like
<script src="https://example.org/myscriptfilepath/script.js?v={myassemblyversion}"></script>
You just need to remember to build a new version before deploying.
Ciao
Do you want to clear the cache, or just make sure your current (changed?) page is not cached?
If the latter, it should be as simple as
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
If you search on Google 'new york state beach cleanup', you'll see that the first result is for the website http://najomawi.com, but the title doesn't look quite right for such a site. You'll also notice that if you click this link it instead takes you to a website for Nike shoes. It only happens if you use the Google results link though (and I believe it happens in Bing, Yahoo and others). If you put http://najomawi.com directly into your browser bar, it takes you to the correct site. Confused, I checked the page source code (both with 'View Page Source' and Chrome's inspector) and found this...
<script>
var s=document.referrer;
if(s.indexOf("google")>0 || s.indexOf("bing")>0 || s.indexOf("aol")>0 || s.indexOf("yahoo")>0)
{
self.location='http://www.theredkicks.com';
}
</script>
I have no idea how this got there. It appears in the head tags of the home page, which is index.html. There is no PHP code, no other JS, nothing other than CSS stylesheets that I am aware of. The entire site is pretty much static HTML and CSS sheets. So how did this get there? And how can I get rid of it?
The JavaScript code is very simple. It just checks if document.referrer contains the name of the most relevant search engines and, if so, redirects the load to another page, in this case, http://www.theredkicks.com.
Your site certainly was hacked somehow or your host provider is not very honest.
Notice that there's nothing attached to the query string in this redirect, so this is not an "affiliate" (wrong) way to make money. The only one gaining anything from this is the redirect target.
Also, it's very interesting that your page is apparently being processed through ASP. That is strange, given that you say your site is made only of static HTML and CSS.
Look at the cookie; it is something like this:
ASPSESSIONIDSATCSAAC=INMLBOADDKNKMPACCK
And also the headers:
HTTP/1.1 200 OK
Date: Fri, 10 Jan 2014 01:30:49 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 15168
Content-Type: text/html
Cache-control: private
I don't know where you are hosting your site, but you should claim urgent solution for this problem there.
No. This is an injection being done conditionally, pointing to your DNS server/records being compromised.
Your DNS primary and secondary records are being routed through siteprotect.com. I have no idea if you have chosen siteprotect as your DNS handlers, but siteprotect.com doesn't actually resolve at the moment. I also have no idea who "siteprotect" are.
If your actual host is not "siteprotect" and you have not heard of them, reset your DNS records to those of your host and change your passwords etc. If your host is "siteprotect" they may be aware of the problem and working on it.
PROBLEM:
I am hosting a widget on a client's website that will be different for each page on the site.
To render the widget, the client includes a script tag on their pages. This script tag is loaded for every page on the site and the code that it returns depends on the page.
So, if this script gets cached, the end result is that we serve a widget for the wrong page.
Right now, when we serve the script, we set in the response headers
Cache-Control: max-age=0
Expires : 24 hours in the past
yet sometimes browsers still cache the script.
QUESTION:
Is there a way to use http headers to stop caching in all cases or are we going to have to take a completely different approach?
UPDATE:
The headers that topek recommended greatly improved the non-cacheability of the scripts. However (again in Chrome, which seems to be the most cache-aggressive), when using the back, forward, or reload buttons, the script is still cached. If you actually click on anything, it will be fetched from the server.
It seems that the only foolproof way to stop caching will be to set script sources that are guaranteed to be different for each page load (as suggested by esilija and tejs).
Those two headers should do the trick:
response.setHeader("Cache-Control", "no-cache, must-revalidate");
response.setHeader("Expires", "Sat, 26 Jul 1997 05:00:00 GMT");
Or you set the name according to the current page, e.g. when the user requests the page http://domain/posts/1 the script name could be http://domain/script/scriptname/posts/1. With this approach the script would still be cacheable per page.
Do not append a query string to the script like script.js?random_string; proxies don't play well with this approach. If you want to place a random string in the name, put it before .js, like script-0934234234.js, and rewrite the request on your server.
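As a rough sketch of that rewrite, assuming a Node/Express server (the route pattern and file path are hypothetical): the random part of the name is accepted but ignored, so every page can reference a distinct path while the server always serves the same file.

const express = require('express');
const path = require('path');
const app = express();

// Matches script-0934234234.js, script-abc123.js, etc. and always serves the real script.
app.get(/^\/script-[^\/]+\.js$/, (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'script.js'));
});

app.listen(3000);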