How can I force the web browser to do a hard refresh of the page via JavaScript?
Hard refresh means getting a fresh copy of the page AND refreshing all the external resources (images, JavaScript, CSS, etc.).
⚠️ This solution won't work on all browsers. MDN page for location.reload():
Note: Firefox supports a non-standard forceGet boolean parameter for location.reload(), to tell Firefox to bypass its cache and force-reload the current document. However, in all other browsers, any parameter you specify in a location.reload() call will be ignored and have no effect of any kind.
Try:
location.reload(true);
When this method receives a true value as argument, it will cause the page to always be reloaded from the server. If it is false or not specified, the browser may reload the page from its cache.
More info:
The location object
window.location.href = window.location.href
The accepted answer above no longer does anything except trigger a normal reload in most current browsers. I tried everything on my recently updated Chrome, including location.reload(true), location.href = location.href, and <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />. None of them worked.
My solution is to use a server-side capability to append a non-repeating query string to every included source file reference, as in the example below (PHP):
<script src="script.js?t=<?=time();?>"></script>
You also need to control dynamically when to keep the previous file and when to update it. The only issue is when file inclusion is performed by plugin scripts you cannot modify. Don't worry about flooding the cache with source files: once an older file is no longer referenced, it is eventually evicted automatically.
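If you cannot touch the server-side templates, a rough client-side variant of the same idea is to inject the script tag yourself; a minimal sketch, where APP_VERSION is an assumed constant you bump whenever the file changes:
// Hypothetical client-side variant: inject the script with a version constant
// so browsers re-download it only after real changes.
var APP_VERSION = '1.0.3'; // assumed constant, not part of the original answer
var s = document.createElement('script');
s.src = 'script.js?v=' + encodeURIComponent(APP_VERSION);
document.head.appendChild(s);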
Changing the current URL by adding a search parameter causes the browser to send that same parameter to the server, which, in other words, forces a refresh.
(No guarantees if you intercept requests with a Service Worker, though.)
const url = new URL(window.location.href);
url.searchParams.set('reloadTime', Date.now().toString());
window.location.href = url.toString();
If you want to support older browsers:
if ('URL' in window) {
  const url = new URL(window.location.href);
  url.searchParams.set('reloadTime', Date.now().toString());
  window.location.href = url.toString();
} else {
  window.location.href = window.location.origin
    + window.location.pathname
    + window.location.search
    + (window.location.search ? '&' : '?')
    + 'reloadTime='
    + Date.now().toString()
    + window.location.hash;
}
That said, forcing all your CSS and JS to refresh is a bit more laborious. You would want to apply the same process of adding a search param to all the src attributes in <script> and href attributes in <link>. It won't unload the already-running JS, but it works fine for CSS.
document.querySelectorAll('link').forEach((link) => link.href = addTimestamp(link.href));
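The addTimestamp helper above is not a built-in; a minimal sketch of it could be:
// Hypothetical helper used above; resolves relative URLs against the page
function addTimestamp(href) {
  const url = new URL(href, window.location.href);
  url.searchParams.set('t', Date.now().toString());
  return url.toString();
}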
I won't bother with a JS sample since it'll likely just cause problems.
You can save this hassle by adding a timestamp as a search param in your JS and CSS links when compiling the HTML.
This is a 2022 update with 2 methods, considering SPAs with # in the URL:
METHOD 1:
As mentioned in other answers, one solution is to put a random parameter in the query string. In JavaScript it can be achieved like this:
function urlWithRndQueryParam(url, paramName) {
  const urlArr = url.split('#');
  const urlQry = urlArr[0].split('?');
  const usp = new URLSearchParams(urlQry[1] || '');
  usp.set(paramName || '_z', `${Date.now()}`);
  urlQry[1] = usp.toString();
  urlArr[0] = urlQry.join('?');
  return urlArr.join('#');
}
function handleHardReload(url) {
  window.location.href = urlWithRndQueryParam(url);
  // This is to ensure the reload works with URLs containing '#'
  window.location.reload();
}
handleHardReload(window.location.href);
The bad part is that it changes the current URL, and in otherwise clean URLs it can look a little ugly to users.
METHOD 2:
Taking the idea from https://splunktool.com/force-a-reload-of-page-in-chrome-using-javascript-no-cache, the process is to fetch the URL without cache first and then reload the page:
async function handleHardReload(url) {
  await fetch(url, {
    headers: {
      Pragma: 'no-cache',
      Expires: '-1',
      'Cache-Control': 'no-cache',
    },
  });
  window.location.href = url;
  // This is to ensure the reload works with URLs containing '#'
  window.location.reload();
}
handleHardReload(window.location.href);
It could even be combined with method 1, but I think the headers alone should be enough:
async function handleHardReload(url) {
  const newUrl = urlWithRndQueryParam(url);
  await fetch(newUrl, {
    headers: {
      Pragma: 'no-cache',
      Expires: '-1',
      'Cache-Control': 'no-cache',
    },
  });
  window.location.href = url;
  // This is to ensure the reload works with URLs containing '#'
  window.location.reload();
}
handleHardReload(window.location.href);
UPDATED to refresh all the external resources (images, JavaScript, CSS, etc.)
Put this in file named HardRefresh.js:
function hardRefresh() {
  const t = Math.floor(Date.now() / 10000); // 10s ticks
  const x = localStorage.getItem("t");
  localStorage.setItem("t", t);
  if (x != t) { // last server refresh was more than 10s ago
    location.reload(true); // force refresh from server (non-standard argument, Firefox only)
  } else { // refreshed from the server within the last 10s
    const a = document.querySelectorAll("a, link, script, img");
    let n = a.length;
    while (n--) {
      const tag = a[n];
      if (!tag.href && !tag.src) continue; // skip tags without a URL
      const url = new URL(tag.href || tag.src);
      url.searchParams.set('r', t.toString());
      if (tag.src) tag.src = url.toString(); // refresh script/img sources
      else tag.href = url.toString(); // refresh a/link targets
    }
  }
}
window.addEventListener("DOMContentLoaded", hardRefresh);
window.addEventListener("deviceorientation", hardRefresh, true);
This code does a fully controlled, forced hard refresh for every visitor, so that any update shows up without a caching problem.
Duplicated DOM rendering is not a performance issue, because the first render comes from the cache and stops at <script src="js/HardRefresh.js">, where the page is reloaded from the server. When the refreshed page runs, it also refreshes the URLs in the page.
The last refresh time x is stored in localStorage and compared with the current time t, truncated to 10-second ticks. Assuming a load from the server takes no more than 10 seconds, this stops a page-refresh loop, so do not make the window shorter than 10s.
For a first-time or long-absent visitor, x != t is true, so the page is fetched from the server. On the immediate re-run the difference is under 10s and x == t, so the else branch appends query strings to the href and src attributes of the resources to refresh.
The hardRefresh() function can also be called from a button or under other conditions. Full control is achieved by refining which URLs your code includes or excludes.
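For instance, wiring it to a button could look like this (the button id is assumed):
// Assumes a <button id="hard-refresh"> exists somewhere in the page
document.getElementById('hard-refresh').addEventListener('click', hardRefresh);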
For Angular users, as found here, you can do the following:
<form [action]="myAppURL" method="POST" #refreshForm></form>
import { Component, OnInit, ViewChild } from '@angular/core';

@Component({
  // ...
})
export class FooComponent {
  @ViewChild('refreshForm', { static: false }) refreshForm;

  forceReload() {
    this.refreshForm.nativeElement.submit();
  }
}
The reason why it worked was explained on this website: https://www.xspdf.com/resolution/52192666.html
You'll also find how the hard reload works for every framework and more in this article
Explanation: the Location.reload() method reloads the current URL, like the refresh button. Using only location.reload() is not a solution if you want to perform a force-reload (as done with e.g. Ctrl + F5) in order to reload all resources from the server and not from the browser cache. The solution to this issue is to execute a POST request to the current location, as this always makes the browser reload everything.
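Outside Angular, the same POST trick can be sketched in plain JavaScript (not from the original answer; note the server must accept POST on the current URL):
// Submit an empty throwaway POST form to the current location to force a full reload
function forceReloadViaPost() {
  var form = document.createElement('form');
  form.method = 'POST';
  form.action = window.location.href; // server must allow POST here
  document.body.appendChild(form);
  form.submit();
}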
The most reliable way I've found is to use a cache buster by adding a value to the query string.
Here's a generic routine that I use:
function reloadUrl() {
  // cache busting: reliable, but modifies the URL
  var queryParams = new URLSearchParams(window.location.search);
  queryParams.set("lr", new Date().getTime());
  var query = queryParams.toString();
  window.location.search = query; // navigates
}
Calling this will produce something like this:
https://somesite.com/page?lr=1665958485293
after a reload.
This works to force reload every time, but the caveat is that the URL changes. In most applications this won't matter, but if the server relies on specific parameters this can cause potential side effects.
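If the changed URL is a problem, the parameter can be stripped again right after the page loads, without another navigation; a sketch:
// Remove the "lr" cache-busting param from the address bar after reload
var cleanParams = new URLSearchParams(window.location.search);
if (cleanParams.has('lr')) {
  cleanParams.delete('lr');
  var rest = cleanParams.toString();
  history.replaceState(null, '', window.location.pathname + (rest ? '?' + rest : '') + window.location.hash);
}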
I have an injected external widget in my React application that has some links, and when these links are clicked a query param is appended to the URL.
Imagine something like this:
base url: http://localhost:8080/
url after adding the query param: http://localhost:8080/?shouldOpenModal
Is there any way to detect this appended query param using React? It's important here that the page isn't fully reloaded, since there's information that cannot be lost, and we also can't use external libraries such as React Router DOM (which is widely used in similar questions here).
So far, the way I append to the URL without triggering a reload is this:
if (history.pushState) {
  const newUrl = window.location.protocol + "//" +
    window.location.host +
    window.location.pathname + '?stOpenBenefitsModal';
  window.history.pushState({ path: newUrl }, '', newUrl);
}
You also need to update some app state when you update the URL, so your component tree re-renders:
....
// assumes: import { useState, useEffect } from 'react';
const [state, setState] = useState(window.location);

const navigate = (url) => {
  window.history.pushState({}, '', url);
  setState({ ...window.location }); // copy, so React sees a new object and re-renders
};

const handlePopState = () => {
  setState({ ...window.location });
};

useEffect(() => {
  // sync the back and forward browser buttons with state
  window.addEventListener('popstate', handlePopState);
  return () => window.removeEventListener('popstate', handlePopState);
}, []);
....
// render something based on the state value
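With the location mirrored into state like this, detecting the widget's parameter is then just a matter of reading state.search; for example:
// e.g. inside the component, using the param name from the question
const shouldOpenModal = new URLSearchParams(state.search).has('shouldOpenModal');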
We use an internal system (with Firefox as the default browser).
We need to prevent users from opening the same URL in different tabs.
As the tabs share the same PHP session, we get a mess.
So I'm looking for a way to programmatically check whether a certain URL is already open in one of the tabs,
client side (JS) or server side (PHP).
We currently use the Firefox extension "Duplicate Tabs Closer", which helps.
But I'd prefer to keep full control (show a warning, choose which URLs it applies to).
You can write a cookie after your page loads in the first tab, check it on the server side, and show the user a warning instead of the actual page content if that cookie is set when a second tab is opened. To handle the case where the user closes the only open tab, remove that cookie in an onbeforeunload handler.
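A minimal client-side sketch of that idea (cookie name and details assumed):
// Mark the current URL as open; server-side code can check this cookie
window.addEventListener('load', function () {
  document.cookie = 'openUrl=' + encodeURIComponent(window.location.href) + '; path=/';
});
// Clear the mark when the (only) tab is closed
window.addEventListener('beforeunload', function () {
  document.cookie = 'openUrl=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT';
});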
Working off of Oleksandr's answer, you can store a map of the number of tabs each URL is open in, in a cookie. When a page is opened, increment the number or initialize it to 1. When a page is closed, decrement it, deleting it when it reaches 0.
function incrementTabsOpen() {
  let tabsOpen = readObjCookie('tabsOpen') || {};
  // count one more tab, starting at 1 for the first open tab
  if (tabsOpen[window.location.href]) tabsOpen[window.location.href]++;
  else tabsOpen[window.location.href] = 1;
  writeObjCookie('tabsOpen', tabsOpen);
}

function decrementTabsOpen() {
  let tabsOpen = readObjCookie('tabsOpen') || {};
  if (tabsOpen[window.location.href]) tabsOpen[window.location.href]--;
  if (tabsOpen[window.location.href] === 0) delete tabsOpen[window.location.href];
  writeObjCookie('tabsOpen', tabsOpen);
}
// https://stackoverflow.com/a/11344672/3783155
function readObjCookie(name) {
  let result = document.cookie.match(new RegExp(name + '=([^;]+)'));
  if (result) result = JSON.parse(result[1]);
  return result;
}

function writeObjCookie(name, value) {
  document.cookie = name + '=' + JSON.stringify(value);
}
and
window.addEventListener('load', function() {
  incrementTabsOpen();
});
window.addEventListener('unload', function() {
  decrementTabsOpen();
});
I am trying to track Facebook ad results using the Facebook Pixel during appropriate events (page views, lead generation, order form view, purchase). I can do all of this for GA using GTM with no problem, but on Facebook I only have partial success.
The main issue is I have a cross domain setup as shown below:
domain1.com/offer - landing page (FB Page View Pixel should fire)
domain1.com/ordergate - request email before showing order form page (FB Page View Pixel should fire)
crm.com/formsubmission - the actual form submits to my crm (FB Lead Pixel should fire)
crm.com/orderform - order form (FB order form view pixel should fire)
domain1.com/thankyou - the thank you page (FB order pixel should fire)
So my trigger on GTM to fire FB pixel was the "referrer" containing "facebook". However, because of the multi-step process, the referrer is lost by the time the order form or sale is completed.
I have since then learned I need to do the following:
User lands from Facebook; write a cookie with an appropriately short expiration time that stores this information on domain1.com (see the sketch after this list).
When the user clicks a link and is redirected to crm.com, check if the user has the cookie, and if they do, add something like ?reffacebook=true to the redirect URL.
On crm.com, if the URL has ?reffacebook=true, write the same cookie you wrote in (1), with an equally short expiration time.
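A sketch of step 1, with the cookie name and lifetime assumed:
// On domain1.com: remember for 30 minutes that this visitor came from Facebook
if (document.referrer.indexOf('facebook') !== -1) {
  document.cookie = 'reffacebook=true; max-age=1800; path=/';
}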
UPDATE
So I have figured out step 2 using the following script on page view when the Facebook cookie is set:
function updateLinks(parameter, value)
{
    var links = document.getElementsByTagName('a');
    var includeDomains = self.location.host;
    for (var i = 0; i < links.length; i++)
    {
        // Ignore links with an empty src attribute, links to the site root, and anchor tags (#)
        if (links[i].href != "#" && links[i].href != "/" && links[i].href != "" && links[i].href != window.location)
        {
            var updateLink = true;
            if (links[i].href.toLowerCase().indexOf(includeDomains.toLowerCase()) != -1) // Domain of current link is in includeDomains; no update required
            {
                updateLink = false;
            }
            if (!updateLink)
            {
                // Do nothing - link is internal
            }
            else
            {
                var queryStringComplete = "";
                var paramCount = 0;
                var linkParts = links[i].href.split("?");
                if (linkParts.length > 1) // Has query string params
                {
                    queryStringComplete = "?";
                    var fullQString = linkParts[1];
                    var paramArray = fullQString.split("&");
                    var found = false;
                    for (var j = 0; j < paramArray.length; j++)
                    {
                        var currentParameter = paramArray[j].split("=");
                        if (paramCount > 0)
                            queryStringComplete = queryStringComplete + "&";
                        if (currentParameter[0] == parameter) // Parameter exists in URL; refresh its value
                        {
                            queryStringComplete = queryStringComplete + parameter + "=" + value;
                            found = true;
                        }
                        else
                        {
                            queryStringComplete = queryStringComplete + paramArray[j]; // Unrelated parameter - re-include in URL
                        }
                        paramCount++;
                    }
                    if (!found) // Add the new param to the end of the query string
                        queryStringComplete = queryStringComplete + "&" + parameter + "=" + value;
                }
                else
                {
                    queryStringComplete = "?" + parameter + "=" + value;
                }
                links[i].href = links[i].href.split("?")[0] + queryStringComplete;
            }
        }
        else
        {
            // Do nothing
        }
    }
}
So with this code I can now properly attribute people with the facebook referral across domains...
...but I still have a problem with form submits.
So when the contact gets to step 4, it is a redirect from the form submission. It does not carry any cookie or query string, so neither of the FB pixels (order form view or order) is being fired.
I'm not sure how I would handle this. My first thought is to pass a hidden field into the form submission (say reffacebook=true). Then somehow expose that in the url in a form of a query string so that it can be detected by GTM.
This seems to be somewhat complicated though, as I would have to edit all my forms to have this variable, edit my CRM so it knows to receive it, and then edit the form landing page to expose that variable in the url.
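A sketch of that hidden-field idea, assuming you can run a script on the page that hosts the forms:
// If the visitor carries the Facebook cookie, copy it into every form as a hidden field
if (/(?:^|;\s*)reffacebook=true/.test(document.cookie)) {
  document.querySelectorAll('form').forEach(function (form) {
    var input = document.createElement('input');
    input.type = 'hidden';
    input.name = 'reffacebook';
    input.value = 'true';
    form.appendChild(input);
  });
}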
Hey, I hope I understood what this is all about: you want to track traffic between cross domains, right? I'm not into coding to achieve such tracking (I apologize for not even trying to learn; I do realize that knowing JavaScript has a lot of benefits in advanced marketing). My point is this: if we want to track traffic between domains and retarget those visitors later, wouldn't Facebook do that itself, just by using the same pixel in both domains? This is what I used to believe for multiple domains when running Facebook ads. The important thing is that the audience should be the same from domain A to domain B (in your case it looks like it is, so I think there is no issue doing that). But I'm not sure whether Facebook will successfully track the traffic between domains just by placing the same FB Pixel in both.
Thank you.
@SalihKp, I think you have a point; however, the issue is that I believe Facebook does cross-domain tracking with third-party cookies, which are not working optimally nowadays.
@David Avellan actually, since the user returns to the landing domain for the thank-you page, the final conversion should work using first-party cookies, but what you want in between might be an issue.
I am now looking at a case where the user lands on a.com and converts.
I was on Facebook and realised that when I change pages, the address changes but the page does not redirect; it loads via AJAX instead.
You can tell because the console does not clear when you click the link, yet the URL changes.
Weird, but does anyone know how it is done?
Facebook runs on massive AJAX calls that change the page state and sections.
To make a page linkable by copying the URL address, every time a relevant AJAX function is called they update the URL with a fake anchor "#!" plus the real address.
When you load the real page (by pressing F5 or sending the link to somebody), a JS parser catches the string after #! (if there is one) and redirects you to the base address plus that path.
I believe it is something like this (untested):
var urlstr = String(location.href);
var urlparm = urlstr.split('#!');
var last = urlparm.length - 1;
if ((urlparm[last] != urlparm[0]) && (urlparm[last] != "/")) {
    var redir = "http://www.facebook.com" + urlparm[last];
    location.href = redir;
}
In Google Chrome, on the other hand, the URL really changes; I gather there is a hash involved somewhere, but I don't know where and how.
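For what it's worth, the reload-free URL change in Chrome is done with the HTML5 History API rather than the #! hack; a minimal sketch:
// Change the address bar without reloading the page
history.pushState({ page: 'profile' }, '', '/profile');
// React to the back/forward buttons
window.addEventListener('popstate', function (event) {
  console.log('restored state:', event.state);
});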