I am making a web application with offline capabilities using a service worker generated by a Node.js module called sw-precache. Everything works fine, and I have access to the HTML files and images offline.
But since no server-side language is available, is there a way to rewrite URLs client-side, the way an .htaccess file would? For example, showing a "404 Page not found" page when no file matches the URL? I know that redirections are possible using JavaScript or meta tags, but what about rewriting the URL?
By default, sw-precache will only respond to fetch events when the requested URL corresponds to a resource it has cached. If someone navigates to a URL for a non-existent web page, sw-precache won't respond to the fetch event.
That means you have a chance to run your own code in an additional fetch event handler, implementing custom behavior such as returning a 404.html page when a user navigates to a non-existent page while offline. You need to jump through a couple of hoops, but here's how to do it:
// In custom-offline-import.js:
self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request)
        // {ignoreSearch: true} is needed, since sw-precache appends a search
        // parameter with versioning information.
        .catch(() => caches.match('404.html', {ignoreSearch: true}))
    );
  }
});
// In your sw-precache config:
{
  // Make sure 404.html is picked up in one of the glob patterns:
  staticFileGlobs: ['404.html'],
  // See https://github.com/GoogleChrome/sw-precache#importscripts-arraystring
  importScripts: ['custom-offline-import.js'],
}
This shouldn't interfere with anything that sw-precache is doing, as it will just be used as a fallback.
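For completeness, the service worker that sw-precache generates still has to be registered from your page; here's a minimal sketch using the standard registration API, assuming sw-precache's default output file name of service-worker.js:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('service-worker.js')
    .then(registration => {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(error => {
      console.error('Service worker registration failed:', error);
    });
}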
In a service-worker.js file, we can do the following:
self.addEventListener("fetch", event => {
  // See if the requested resource already exists in the cache and use it
  // instead of re-downloading it; only attempt to get the file from the
  // remote server if it is not found in any of the caches.
  event.respondWith(
    caches.match(event.request).then(cachedResponse => {
      return cachedResponse || fetch(event.request);
    })
  );
});
Yes, but I'm wondering: in what case or cases would a browser not search through the caches before trying to download a needed file, making this piece of code necessary?
Isn't looking for assets locally before fetching them from far away already the default behavior of every browser?
Or is this only there to keep the app/page going even if the internet connection is lost, i.e. to prevent the browser from complaining even though the device has gone offline?
Any useful clarification is welcome and appreciated.
I made a sw.js file that caches my chat website so users can open it in offline mode. However, the service worker file caused a lot of issues, including users not being able to see new messages, and a lot of website crashes, so I was forced to delete it. Sadly, none of my current users can delete the cache manually! Note that I kept the sw.js file, but it's now empty. Is there any code I can write to delete all of my current users' caches?
I don't think this is relevant, but my app uses Django.
To delete the caches, you can use the built-in Cache API.
caches.keys().then(cacheNames => {
  cacheNames.forEach(value => {
    caches.delete(value);
  });
});
Removing the content from your sw.js file is not enough. If a service worker is already installed and running, I would suggest you also unregister it. You can do so programmatically using the code below.
navigator.serviceWorker.getRegistrations().then(function(registrations) {
  for (let registration of registrations) {
    registration.unregister();
  }
});
Please note that this code only needs to run once in each user's browser.
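Putting the two together, here is a sketch of a one-time cleanup you could ship in a page script (not in sw.js itself); the 'sw-cleanup-done' flag name is made up for this example:
// Hypothetical one-time cleanup, run from a page script.
if (!localStorage.getItem('sw-cleanup-done')) {
  Promise.all([
    // Delete every cache this origin has created.
    caches.keys().then(cacheNames =>
      Promise.all(cacheNames.map(name => caches.delete(name)))
    ),
    // Unregister every service worker registered for this origin.
    navigator.serviceWorker.getRegistrations().then(registrations =>
      Promise.all(registrations.map(registration => registration.unregister()))
    )
  ]).then(() => {
    localStorage.setItem('sw-cleanup-done', 'true');
    // Reload so the page is no longer controlled by the old worker.
    window.location.reload();
  });
}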
I'm new to Vue and created a project with the PWA service worker plugin. After deploying a new version of my app, I get messages in the console saying that new content is available.
After refreshing the page (F5), these messages still appear the same way, and the app is still in its old state. I tried everything to clear the cache, but it still won't load the new content.
I haven't changed anything from the default config after creating my project and didn't add any code that interacts with the service worker. What is going wrong? Am I missing something?
As I figured out, this question really only concerns beginners in PWA who don't know that you can (and need to) configure the PWA plugin to achieve this. If you feel addressed now (and are using VueJS), remember:
To automatically download the new content, you need to configure the PWA plugin. In my case (VueJS) this is done by creating a file vue.config.js in the root directory of the project (on the same level as package.json).
Inside this file you need this:
module.exports = {
  pwa: {
    workboxOptions: {
      skipWaiting: true
    }
  }
}
With skipWaiting: true, a newly installed service worker activates immediately instead of waiting, so your new content is picked up automatically when it is detected.
However, the content won't be displayed to your client yet, since the page needs to refresh after the content is downloaded. I did this by adding window.location.reload(true) to the updated() hook in registerServiceWorker.js in my src/ directory:
updated () {
  console.log('New content is available: Please refresh.')
  window.location.reload(true)
},
Now, if the Service Worker detects new content, it will download it automatically and refresh the page afterwards.
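For context, the updated() hook shown above is one of the callbacks passed to the register-service-worker helper that the Vue CLI PWA plugin scaffolds; here's a trimmed sketch of what src/registerServiceWorker.js can look like with the reload added (the surrounding structure is the default scaffold, simplified):
import { register } from 'register-service-worker'

if (process.env.NODE_ENV === 'production') {
  register(`${process.env.BASE_URL}service-worker.js`, {
    updated () {
      console.log('New content is available: Please refresh.')
      // Reload so the page picks up the assets served by the new worker.
      window.location.reload(true)
    }
  })
}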
I figured out a different approach to this and from what I've seen so far it works fine.
updated() {
  console.log('New content is available; please refresh.');
  caches.keys().then(function(names) {
    for (let name of names) caches.delete(name);
  });
},
What's happening here is that when the updated hook gets called, it goes through and deletes all the caches. This means that your app will start up more slowly when there is an update, but otherwise it will serve the cached assets. I like this approach better because service workers can be complicated to understand, and from what I've read, using skipWaiting() isn't recommended unless you know what it does and what side effects it has. This also works with injectManifest mode, which is how I'm currently using it.
Pass the registration argument, then call update() on it.
The argument is a ServiceWorkerRegistration object.
updated (registration) {
  console.log('New content is available; please refresh.')
  registration.update()
},
I want to create a custom profiler for JavaScript as a Chrome DevTools extension. To do so, I'd have to instrument all JavaScript code of a website (parse to AST, inject hooks, generate new source). This should have been easily possible using chrome.devtools.inspectedWindow.reload() and its preprocessorScript parameter described here: https://developer.chrome.com/extensions/devtools_inspectedWindow.
Unfortunately, this feature has been removed (https://bugs.chromium.org/p/chromium/issues/detail?id=438626) because nobody was using it.
Do you know of any other way I could achieve the same thing with a Chrome extension? Is there any other way I can replace incoming JavaScript source with a changed version? This question is very specific to Chrome extensions (and maybe extensions for other browsers); I'm asking this as a last resort before going a different route (e.g. a dedicated app).
Use the Chrome Debugging Protocol.
First, use DOMDebugger.setInstrumentationBreakpoint with eventName: "scriptFirstStatement" as a parameter to add a breakpoint at the first statement of each script.
Second, in the Debugger domain, there is an event called scriptParsed. Listen for it and, when it fires, use Debugger.setScriptSource to change the source.
Finally, call Debugger.resume each time after you have edited a source file with setScriptSource.
Example in semi-pseudo-code:
// Prevent code from being executed.
cdp.sendCommand("DOMDebugger.setInstrumentationBreakpoint", {
  eventName: "scriptFirstStatement"
});

// Enable the Debugger domain to receive its events.
cdp.sendCommand("Debugger.enable");

cdp.addListener("message", (event, method, params) => {
  // Script is ready to be edited.
  if (method === "Debugger.scriptParsed") {
    cdp.sendCommand("Debugger.setScriptSource", {
      scriptId: params.scriptId,
      scriptSource: `console.log("edited script ${params.url}");`
    }, (err, msg) => {
      // After editing, resume code execution.
      cdp.sendCommand("Debugger.resume");
    });
  }
});
The implementation above is not ideal. It should probably listen for the breakpoint event, find the script via the associated event data, edit it, and then resume. Listening to scriptParsed and then resuming the debugger are two things that shouldn't go together; it could create problems. It makes for a simpler example, though.
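In the same semi-pseudo-code style, the breakpoint-driven variant could look roughly like this; note that the exact pause reason reported for instrumentation breakpoints is an assumption worth verifying against the protocol docs:
cdp.addListener("message", (event, method, params) => {
  // Paused at the instrumentation breakpoint on a script's first statement.
  if (method === "Debugger.paused") {
    // The top call frame points at the script that is about to execute.
    const scriptId = params.callFrames[0].location.scriptId;
    cdp.sendCommand("Debugger.setScriptSource", {
      scriptId: scriptId,
      scriptSource: `console.log("edited script ${scriptId}");`
    }, (err, msg) => {
      // Resume only after this script's source has been swapped.
      cdp.sendCommand("Debugger.resume");
    });
  }
});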
On HTTP pages you can use the chrome.webRequest API to redirect requests for JS code to data URLs containing the processed JavaScript code.
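A minimal sketch of that approach, assuming a Manifest V2 background script with the webRequest, webRequestBlocking, and host permissions declared; the instrumentation step is reduced to a placeholder stub here:
// background.js of a Manifest V2 extension.
chrome.webRequest.onBeforeRequest.addListener(
  details => {
    // Placeholder: real code would fetch and instrument details.url first;
    // here we just substitute a stub script to show the mechanism.
    const code = `console.log("intercepted script: ${details.url}");`;
    return {
      redirectUrl: "data:application/javascript," + encodeURIComponent(code)
    };
  },
  { urls: ["http://*/*"], types: ["script"] },
  ["blocking"]
);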
However, this won't work for inline script tags. It also won't work on HTTPS, since data URLs are considered unsafe there. And data URLs can't be longer than 2MB in Chrome, so you won't be able to redirect to large JS files.
If the exact order of execution of each script isn't important, you could cancel the script requests and later send a message with the script content to the page. This would make it work on HTTPS.
To address both issues you could redirect the HTML page itself to a data URL, in order to gain more control. That has a few negative consequences, though:
1. Can't reload the page, because the URL is fixed to the data URL
2. Need to add or update the <base> tag to make sure stylesheet/image URLs point to the correct URL
3. Breaks AJAX requests that require cookies/authentication (not sure if this can be fixed)
4. No support for localStorage on data URLs
Not sure if this works: in order to fix #1 and #4 you could consider setting up an HTML page within your Chrome extension and then using that as the base page instead of a data URL.
Another idea that may or may not work: Use chrome.debugger to modify the source code.
Using the text! plugin, is there a way of forcing RequireJS to reload a file rather than returning the cached data?
RequireJS only caches the file for the lifetime of the page; a page reload will fetch it again.
If you see something different, it is because:
Either you have caching enabled on your server,
or your browser caches the request. You can of course disable this in your browser.
If you want browsers to fetch a clean file every time, you should set a no-cache header for these resources on your server.
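On the client side, RequireJS also has a documented urlArgs option (the next answer refers to it) that appends a query string to every module and text! request, which effectively busts the cache; a minimal sketch:
// Append a cache-busting query parameter to every module/text! request.
// A timestamp disables caching entirely; use a fixed version string in
// production so caches still work between releases.
require.config({
  urlArgs: "bust=" + (new Date()).getTime()
});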
I think you could add the HTML5 application cache feature by providing a cache manifest: http://www.html5rocks.com/en/tutorials/appcache/beginner/
Then you could use the RequireJS "domReady" module to get the proper load event:
http://requirejs.org/docs/api.html#pageload
and then listen for the proper event (code taken from the first link):
window.applicationCache.addEventListener('updateready', function(e) {
  if (window.applicationCache.status == window.applicationCache.UPDATEREADY) {
    // Browser downloaded a new app cache.
    if (confirm('A new version of this site is available. Load it?')) {
      window.location.reload();
    }
  } else {
    // Manifest didn't change. Nothing new from the server.
  }
}, false);
At this point, whenever you update urlArgs you will get the new JS files, and with the cache manifest file you will get the new HTML files.