I have a PWA in which I render JS files bundled with Webpack:
{% render_bundle 'app' 'js' %}
After launching the PWA in mobile Chrome, the file is not updated; most probably Chrome is using a cached version.
I tried deleting the PWA and installing it again, but it did not help.
Afterwards I cleared the mobile Chrome cache manually and the files were refreshed. However, most users won't do that, so I need another solution that does not require any action from end users.
Answers to similar questions suggest adding a parameter or version number to the JS file:
<script type="text/javascript" src="myfile.js?REVISION"></script>
However, it is not clear how I can do that with Webpack.
Another popular answer explains that I can use hash or chunkhash to generate the file name with Webpack:
output: {
  path: '/',
  filename: '[hash].js',
  chunkFilename: '[chunkhash].js',
},
This solution won't work for me because I cannot change the name of the file every time something changes in it. The file name needs to stay the same because I use Django's collectfast app, which checks the MD5 checksum of each static file and updates only the ones that have changed.
The name of the static JS file should stay the same. At the same time, I need a mechanism that forces mobile Chrome to update the changed file.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('service-worker.js')
    .then(reg => {
      // Fired whenever a new version of the service worker is found
      reg.onupdatefound = () => {
        const installingWorker = reg.installing;
        installingWorker.onstatechange = () => {
          switch (installingWorker.state) {
            case 'installed':
              if (navigator.serviceWorker.controller) {
                // new update available: clear the caches and reload the page
                setTimeout(() => document.location.reload(true), 1000);
                caches.keys().then(keys => {
                  keys.forEach(key => caches.delete(key));
                })
              }
              break;
          }
        };
      };
    })
}
I was trying to do the same thing with Svelte PWA code here:
https://github.com/kuhlaid/svelte2/releases/tag/v0.1.6
I resorted to running the app build process and then using the 'replace-in-file' plugin (see the rollup.config.js script). If you search the source code for '__cVersion__' you will see where I am adding the file revision string to try to force a file cache update (not localStorage).
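To illustrate the idea, here is a minimal sketch using @rollup/plugin-replace rather than the exact replace-in-file setup from the linked repo; the input/output paths and the version string are just placeholders:

// rollup.config.js (sketch)
import replace from '@rollup/plugin-replace';

export default {
  input: 'src/main.js',
  output: { file: 'public/build/bundle.js', format: 'iife' },
  plugins: [
    replace({
      preventAssignment: true,
      // every occurrence of __cVersion__ in the source becomes "v0.1.6"
      __cVersion__: JSON.stringify('v0.1.6'),
    }),
  ],
};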
The OTHER thing that needs to be done for a PWA is making sure you clear the CacheStorage in the user's browser if you are building your service worker with something like Workbox to precache files. This is something you are NOT ABLE to do during the build process, since the code to clear the CacheStorage needs to run at the time the app is accessed in the browser. To clear this cache you can insert something along these lines in your app's JavaScript:
const l = console.log
if ('caches' in window) {
  l('CacheStorage is present for this app');
  caches.keys().then(function(cacheArray) {
    l(cacheArray); // just print the list of CacheStorage names to the console
    // for each cache, try and delete it
    cacheArray.forEach(function(cache) {
      caches.delete(cache).then((bool) => {
        l('deleted cache: ' + cache); // print the successful deletion to console
      }).catch((err) => { l(err) });
    });
  });
}
This is all well and good, BUT it raises the next question: how do you execute this only ONCE per new code build/update? Well, possibly a 'code version' variable could be added to your JavaScript somewhere, like:
const codeVersion = __cVersion__;
Then during the build/rollup of your code, you dynamically replace __cVersion__ with your new version string (e.g. v0.112) and store this codeVersion value in localStorage for the app. On each load you would then check localStorage first for a change in the version string, and if it has changed, run the code to delete the CacheStorage for the app (as mentioned above, and sketched below). This version of my PWA code handles these cases:
https://github.com/kuhlaid/svelte2/releases/tag/v0.1.7
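As a rough sketch of that check (the localStorage key name and the exact clearing logic here are illustrative, not copied from the repo):

// Runs on every app load; __cVersion__ is replaced at build time (e.g. "v0.112").
const codeVersion = __cVersion__;
const storedVersion = localStorage.getItem('appCodeVersion');

if (storedVersion !== codeVersion) {
  // New build detected: clear CacheStorage once, then remember this version.
  if ('caches' in window) {
    caches.keys().then((keys) =>
      Promise.all(keys.map((key) => caches.delete(key)))
    );
  }
  localStorage.setItem('appCodeVersion', codeVersion);
}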
Automatically clearing the cache for end users is one of the complexities of a PWA (or any app) that should be understood up front by the developer. No one likes an app where the developer tells YOU to manually clear your browser cache each time they push a code update.
You could try putting one of these in your HTML; they force the cache to expire:
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
Related
In a PWA, I have a big data file that periodically gets updated. I figured it'd be nice to keep the latest version cached in my service worker for offline support, but only the latest version. I don't want old versions of the data file hanging around using disk space.
I'm using Workbox (version 6.1.2) so I tried writing this in my service worker:
registerRoute(
  new RegExp("/gen/real-player-data-*"),
  new CacheFirst({
    cacheName: "real-player-data",
    plugins: [
      new ExpirationPlugin({
        maxEntries: 1,
        purgeOnQuotaError: true,
      }),
    ],
  }),
);
For completeness, my full service worker is:
import * as googleAnalytics from "workbox-google-analytics";
import { ExpirationPlugin } from "workbox-expiration";
import {
  cleanupOutdatedCaches,
  createHandlerBoundToURL,
  precacheAndRoute,
} from "workbox-precaching";
import { CacheFirst } from "workbox-strategies";
import { NavigationRoute, registerRoute } from "workbox-routing";

registerRoute(
  new RegExp("/gen/real-player-data-*"),
  new CacheFirst({
    cacheName: "real-player-data",
    plugins: [
      new ExpirationPlugin({
        maxEntries: 1,
        purgeOnQuotaError: true,
      }),
    ],
  }),
);

// Will be filled in by tools/build-sw.js
precacheAndRoute(self.__WB_MANIFEST);

const handler = createHandlerBoundToURL("/index.html");
const navigationRoute = new NavigationRoute(handler, {
  denylist: [
    new RegExp("^/files"),
    new RegExp("^/fonts"),
    new RegExp("^/gen"),
    new RegExp("^/ico"),
    new RegExp("^/img"),
    new RegExp("^/manifest"),
    new RegExp("^/sw.js"),
  ],
});
registerRoute(navigationRoute);

// https://developers.google.com/web/tools/workbox/guides/migrations/migrate-from-v3
cleanupOutdatedCaches();

googleAnalytics.initialize();
My data files are named /gen/real-player-data-HASH.json so I figure this will do what I want - notice when my app requests a new version of my data file, add it to the cache, and remove the old one.
In practice, this seems to only partially work. It does create the cache and store the data there, but old versions of the file never seem to get deleted.
Try it yourself. Going to https://play.basketball-gm.com/new_league/real will install the service worker and request the data file, if you let it fully load. You might need to reload it once to see it show up in the Chrome dev tools. The latest version of the data file is https://play.basketball-gm.com/gen/real-player-data-2a6c8e9b0b.json at the time of writing this question:
(Side note - I'm not sure why it says Content-Length is 0, you can clearly see the data in the bottom pane.)
Now, with the service worker installed, if you just go to an old version of my data file like https://play.basketball-gm.com/gen/real-player-data-540506bc45.json that should get picked up by the route defined above, and I believe it should result in the previous file being removed from the cache.
It does indeed get picked up, but now there are two entries in the cache, the other one did not get deleted:
And they aren't just empty placeholders, you can view the data in both files in the bottom pane.
Try more and you get more in the list, there seems to be no limit:
https://play.basketball-gm.com/gen/real-player-data-18992d5073.json
https://play.basketball-gm.com/gen/real-player-data-fe8f297ea7.json
https://play.basketball-gm.com/gen/real-player-data-fd28409152.json
I'm using Chrome 89 on Ubuntu.
Any idea what I'm doing wrong? Or is there some better way to achieve my goal?
Next day update
I did a bit of console.log debugging within my service worker. It seems that Workbox is basically working correctly, except this block of code which is what actually deletes old entries from the cache:
for (const url of urlsExpired) {
  await cache.delete(url, this._matchOptions);
}
Here's what I edited it to, for debugging:
console.log('urlsExpired', urlsExpired);
console.log('keys', await cache.keys());
for (const url of urlsExpired) {
  console.log('delete', url, this._matchOptions);
  const deleted = await cache.delete(url, this._matchOptions);
  console.log('after delete', url, deleted);
}
console.log('keys2', await cache.keys());
And here's the output I see when I do what I wrote above (load the service worker, load the 1st data file, load the 2nd data file, observe this output as it tries and fails to delete the 1st data file from the cache):
So it does identify the old URL it needs to delete from the cache. It does see both the old and new URLs in the cache. But that cache.delete call resolves to false. MDN says:
resolves to true if the cache entry is deleted, or false otherwise
This article says:
If it doesn't find the item, it resolves to false.
So I guess that implies it's not finding the item? But look at the screenshots, the URL matches an entry in the cache. And MDN says the first argument to cache.delete can be a Request object or a URL.
Is this a bug in Chrome? A bug in Workbox? Something else? I'm out of ideas here.
The problem was that the "Vary" header is set in my responses for the data file, which means that ExpirationPlugin won't work unless the ignoreVary option is enabled.
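Concretely, the route ends up looking roughly like this (a sketch based on the code in the question; ExpirationPlugin's matchOptions is passed through to cache.delete):

registerRoute(
  new RegExp("/gen/real-player-data-*"),
  new CacheFirst({
    cacheName: "real-player-data",
    plugins: [
      new ExpirationPlugin({
        maxEntries: 1,
        purgeOnQuotaError: true,
        // Without this, cache.delete() won't match entries whose responses
        // were stored with a "Vary" header, so old files never get removed.
        matchOptions: { ignoreVary: true },
      }),
    ],
  }),
);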
I'm trying to show users the newest version of my app when an update is available.
The React app was created with create-react-app; it's hosted on S3 and uses AWS CloudFront for caching.
When I push updates, clients don't see them unless they do a hard refresh, so I looked into updating the service worker.
First:
CloudFront is set to a cache time of 0, so it should not serve any old code.
The headers in index.html
<meta http-equiv="cache-control" content="max-age=0" />
I register the service worker in index.js when the app loads and pass in a config to update the service worker if there are new changes:
import swConfig from "./swConfig";
serviceWorker.register(swConfig);
swConfig.js
export default {
  onUpdate: (registration) => {
    registration.waiting.postMessage("skipWaiting");
    registration.unregister().then(() => {
      let confirm = window.confirm(
        "New updates to the app - please close the tab and reload"
      );
      if (confirm) {
        window.location.reload(true);
      } else {
        alert("Please close this tab and reload the page");
        window.location.reload(true);
      }
    });
  },
  onSuccess: (registration) => {
    console.info("sw on success state");
  },
};
What happens when a new update is pushed to S3 and CloudFront is invalidated:
The user sees an alert to update the app.
The app refreshes.
The user sees the NEW content on the homepage.
If the user refreshes, they see the OLD content again (or if they come back to the page later, even after closing the tab).
What is causing the old content to be displayed?
I have a feeling it's related to "skipWaiting".
If I remove registration.waiting.postMessage("skipWaiting");, then the page will refresh and the service worker will update, but the user WON'T see the NEW content.
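For what it's worth, posting "skipWaiting" from the page only has an effect if the waiting service worker actually listens for that message and calls self.skipWaiting() itself. A sketch of that listener, matching the plain string posted by swConfig.js above (a Workbox-generated worker, if that is what you use, may expect a different message shape such as { type: 'SKIP_WAITING' }):

// Inside the service worker script, not the page.
self.addEventListener('message', (event) => {
  if (event.data === 'skipWaiting') {
    self.skipWaiting();
  }
});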
I'm developing a React-based application and have set up a service worker for it. After 'Add to Home Screen', the application never checks the server for new updates.
I also tried:
window.location.reload(true);
But it doesn't load the new version.
I'm using an Apache server to serve the build folder; to update, I create a new build of my project and serve that from Apache.
I finally resolved my problem after two days. The problem was in the service worker file: I had to add an event listener so that when the page is reloaded and the files on the server have changed, the files get updated.
So I added this section to the service worker file:
// 'activate' is a service worker event, so this listener runs in the
// service worker's own scope ('self'), not on window.
self.addEventListener('activate', function(event) {
  event.waitUntil(
    caches.keys().then(function(cacheNames) {
      return Promise.all(
        cacheNames.filter(function(cacheName) {
          // Return true if you want to remove this cache,
          // but remember that caches are shared across
          // the whole origin
          return true;
        }).map(function(cacheName) {
          return caches.delete(cacheName);
        })
      );
    })
  );
});
Just don't forget: this listener is called when the page is reloaded. So I made an API service to check whether there is a new version or not; if there is a new version, the page has to be reloaded to get the new files.
This question was very helpful: How to clear cache of service worker?
Update (December 1, 2019):
I found a better way to update a PWA. The approach above does not work on iOS 13, so I decided to check for updates via an API instead. The PWA sends its current version to the API, and if a new version has been released, the PWA should delete all of its caches:
caches.keys().then(function(names) {
  for (let name of names)
    caches.delete(name);
});
And after that, reload the application:
window.location.href = "./";
After the reload, because there is no cache left to load pages from in offline mode, the PWA will check the server and get the new version.
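Putting those pieces together, the whole check might look something like this (the /api/version endpoint and the CURRENT_VERSION constant are hypothetical, just to illustrate the flow):

const CURRENT_VERSION = '1.4.0'; // baked into this build (hypothetical)

fetch('/api/version') // hypothetical endpoint returning the latest released version
  .then((res) => res.text())
  .then((latestVersion) => {
    if (latestVersion.trim() !== CURRENT_VERSION) {
      // Delete all caches, then reload so the PWA fetches fresh files.
      caches.keys()
        .then((names) => Promise.all(names.map((name) => caches.delete(name))))
        .then(() => {
          window.location.href = './';
        });
    }
  });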
This worked for me:
src/index.tsx
// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://cra.link/PWA
serviceWorkerRegistration.register({
  onUpdate: (registration) => {
    // Tell the waiting worker to activate immediately, then refresh the page
    // once the registration has been updated.
    if (registration.waiting) {
      registration.waiting.postMessage({ type: 'SKIP_WAITING' });
    }
    registration.update().then(() => {
      window.location.reload();
    });
  },
});
My team and I have a project that was originally built as a PWA, but have since decided to scrap that idea as we realized it would need to change much more frequently than originally intended. However, the service worker is already live, as well as a newly redesigned landing page for the website. Despite all our efforts to clear the PWA caching, our clients are still reporting that they are receiving the old cached version of the website.
Currently, we have the service worker set up to delete all caches upon install (and whenever anything at all happens as a precaution), as well as some JavaScript to unregister the service worker when the new page actually loads. However, the problem is that none of this runs until the user makes a request to the website, and at that point the browser is already loading the cached content. Is it possible to clear this cache and prevent the browser from loading any content that was already cached?
Current service-worker.js
// Caching
var cacheCore = 'mkeSculptCore-0330121058';
var cacheAssets = 'mkeSculptAssets-0330121058';

self.addEventListener('install', function (event) {
  self.skipWaiting();
  caches.keys().then(function (names) {
    for (let name of names)
      caches.delete(name);
  });
});

self.addEventListener('activate', function (event) {
  caches.keys().then(function (names) {
    for (let name of names)
      caches.delete(name);
  });
});

self.addEventListener('fetch', function (event) {
  caches.keys().then(function (names) {
    for (let name of names)
      caches.delete(name);
  });
});
Script in index.html
(function () {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.getRegistrations().then(function (registrations) {
      // returns installed service workers
      if (registrations.length) {
        for (let registration of registrations) {
          registration.unregister();
        }
      }
    });
  }
})();
So far, I've read a few other similar StackOverflow answers, including this one, but they tend to rely on users manually doing something to fetch the new content, i.e. via a hard reload or disabling the service worker manually through the browser settings. However, in my case, we cannot rely on manual user actions.
One way to solve this issue is to add a timestamp to the end of the file name (js, css), so that on each request the cache key is not found in the service worker's cache and a new version of the file is fetched on every load.
<script type="text/javascript" src="/js/scipt1.js?t=05042018121212"/>
For appending a new timestamp dynamically in the file name, please check this answer
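Since that link isn't reproduced here, a minimal sketch of one way to append a timestamp dynamically (the file path is just an example):

// Inject the script tag at runtime with a cache-busting timestamp.
var script = document.createElement('script');
script.src = '/js/script1.js?t=' + Date.now();
document.body.appendChild(script);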
But this may not be reliable if HTML itself is cached.
Add this beforehand to "update" all contents:
$.each(['index.html','file1.js','file2.js','file3.js'], function(index, file) {
  $.get(file + '?t=' + new Date().getTime(), function(){});
});
location.reload(true);
For the service worker to stop, all windows using it must be closed.
If it's a web app you can use window.close();
This code just loads a fresh version of the files in the list.
If there are any internal caches they will all be updated.
I recently started to get my feet wet with Electron. I really like the principles behind it but I find it a little confusing to do some things.
For example, how do you process user input? I have a main.js and a BrowserWindow pointing to a local HTML file (containing some user settings with an input field).
How do I access this data when the HTML form is submitted (to either the same file or another one)?
main.js
const {app, BrowserWindow} = require('electron')

let win

function createWindow () {
  win = new BrowserWindow({width: 800, height: 600})
  win.loadURL('file://' + __dirname + '/index.html')

  // Emitted when the window is closed.
  win.on('closed', () => {
    win = null
  })

  // Open the DevTools.
  // win.webContents.openDevTools()
}

app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit()
  }
})

app.on('activate', () => {
  if (win === null) {
    createWindow()
  }
})

// In this file you can include the rest of your app's specific main process
// code. You can also put them in separate files and require them here.

// Start the main window
app.on('ready', createWindow)
index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Document</title>
</head>
<body>
  <form action="" method="post">
    <input type="text" name="test-1">
  </form>
</body>
</html>
With Electron, node.js is not acting as a webserver with routes like it would be in a typical web application scenario. Instead of sending requests to routes, you would create a single page application using a javascript framework like Angular, React, Knockout, etc. At that point, you no longer need to handle routing. You would tie your 'Submit' click event to a javascript function directly within the page, and process the input from there.
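As a quick illustration, handling the form from the question's index.html in the renderer process might look like this (the handler itself is made up; only the test-1 input comes from the question):

// Runs in the renderer process, e.g. a <script> at the bottom of index.html.
document.querySelector('form').addEventListener('submit', (event) => {
  event.preventDefault(); // stop the default form POST / page navigation
  const value = document.querySelector('input[name="test-1"]').value;
  console.log('User typed:', value);
  // ...validate it, write it to disk, send it to the main process, etc.
});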
You can do everything from the page's javascript context that you can do from the node.js main process context. For instance, if you needed to access the file system from your page, you would use the Remote module to gain access to the node.js native APIs.
For example:
// Gain access to the node.js file system api
function useNodeApi() {
  const remote = require('electron').remote;
  const fs = remote.require('fs');
  fs.writeFile('test.txt', 'Hello, I was written by the renderer process!', (err) => {
    if (err) console.error(err);
  });
}
I've rarely come across a situation where I needed to pass control back to the main process to accomplish something. Once the BrowserWindow launches, anything you could ever need to do could be done from the renderer process. This pretty much eliminates the need to do things like submit form posts via http.