Two service workers at the same time

I'd like to use two service workers on my site: one to provide a classic offline cache (/sw.js) for my PWA and another for something like a local database "server" which uses background sync and push (/sw-db.js). Since the latter tends to do heavy work (blocking the event loop for a few ms) it's better to keep it separate.
Since the database sw is not used for fetch requests, I would give it a dummy scope, whereas sw.js is scoped for the whole domain.
Does the first worker, which responds to "fetch" events, also serve the code/URL for /sw-db.js (keeping it somewhat in sync with site updates), or are service workers always updated via the network?

The script URL that you pass in to navigator.serviceWorker.register('/path/to/sw.js') will always be fetched bypassing any other service workers when it's time to check for updates. So, to answer your question, the other service worker's fetch handler won't be triggered.
The HTTP cache does come into play whenever there's an update check for a service worker script. So you should make sure you're setting proper HTTP cache control headers for your use case.
Usually, a service worker update check is triggered by a navigation to a page controlled by a service worker, but if you have a service worker with a "dummy" scope, it won't end up controlling any pages. That being said, when a service worker handles sync or push events, you'll also end up triggering the update check. I'm not sure if every sync or push triggers the check, or just a subset of them, such as the ones which cause a new service worker to spawn. But it will happen at least some of the time.
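For reference, a minimal sketch of how the two registrations could look (the /sw-db-dummy/ scope path is just a hypothetical placeholder):
// Handles fetch events and controls the whole origin.
navigator.serviceWorker.register('/sw.js', { scope: '/' });

// Never controls any page; only woken up for sync/push work.
navigator.serviceWorker.register('/sw-db.js', { scope: '/sw-db-dummy/' });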

Related

What are the restrictions on what can and cannot be done in a service worker?

Further to Can I http poll or use socket.io from a Service Worker on Safari iOS? what is the list of what can and cannot be done in a service worker? The answer referenced above says "You cannot ... have an open connection of any sort to your server" which makes sense, but where is that fact documented and how is the restriction enforced?
For example, are certain browser APIs unavailable to Service Workers? Or is there an execution quota which prevents a long-running process?
E.g. if my service worker has ...
setInterval(() => { console.log('foo') }, 1000)
... will it throw an exception? Will it run and then fail? Is the behaviour browser dependent?
Service Workers are only supposed to process the events attached to them.
And those are to be registered by some script from the outside.
Even delaying execution via event.waitUntil(promise) is not supported in some cases on Safari.
Once your event queue is empty, your user agent is free to decide whether it kills off the worker. There is no guarantee that anything from then on is going to get executed.
A service worker is not just another thread but a very specific kind of thread: it is meant to intercept network and resource fetch requests and do something with them. In its most basic form, it serves from a cache if the network is not available, but it can also return a different resource than what was requested, an older version, a placeholder, etc.
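To illustrate that basic form, here's a minimal sketch of a network-first fetch handler with a cache fallback (it assumes the request was previously cached; otherwise the fallback yields undefined):
self.addEventListener('fetch', event => {
  event.respondWith(
    // Try the network first; fall back to a previously cached copy.
    fetch(event.request).catch(() => caches.match(event.request))
  );
});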
E.g. if my service worker has ... setInterval(() => { console.log('foo') }, 1000) ... will it throw an exception? Will it run and then fail? Is the behaviour browser dependent?
It will likely work. However, there is very little point in doing this, since you neither have any DOM access nor can you directly interact with the user. At most, you can print out errors and warnings, though I don't know what warning would require interval polling.
From the question, it sounds like you are trying to accomplish some background work without blocking the main thread. In that case, the more generic Worker API is your friend.
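A minimal sketch of that approach (the file name heavy-work.js is hypothetical):
// main.js: offload the heavy computation to a dedicated worker
const worker = new Worker('/heavy-work.js');
worker.onmessage = e => console.log('result:', e.data);
worker.postMessage([1, 2, 3]);

// heavy-work.js: runs off the main thread, so long-running loops are fine here
self.onmessage = e => {
  const sum = e.data.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
};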

When does Workbox's precaching go into effect?

I've run into a problem: Workbox is precaching my static files (JS and CSS, for example), but the routes I set up with workbox.routing.registerRoute don't work.
If I remove the Workbox precaching (after making sure the service worker has already cached the files), the files are served from the cache after a refresh.
Responses won't come from the service worker until the registered service worker takes control of the current page. Depending on how you're testing things, that might not happen until you've closed all of your previously open tabs for your origin.
You can learn more at "The Service Worker Lifecycle".
I'd recommend starting from scratch by using a Chrome Incognito window, going through the SW registration, and then reloading that Incognito tab. At that point, the newly registered SW should be in control of the page, and you should see your precached JavaScript being used to satisfy the subresource request.
In general, if you are using Workbox precaching and runtime routing in the same service worker, and your call to precaching comes first (which is what you're doing), then precaching will take precedence.
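A sketch of that ordering, assuming Workbox is loaded from the CDN (the precache entries and revision hashes are hypothetical):
importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js');

// Registered first, so the precache route claims the listed URLs
// before any runtime route sees them.
workbox.precaching.precacheAndRoute([
  { url: '/main.js', revision: 'abc123' },
  { url: '/styles.css', revision: 'def456' },
]);

// Runtime routes only handle requests the precache route didn't claim.
workbox.routing.registerRoute(
  /\.png$/,
  new workbox.strategies.CacheFirst()
);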

What is the use of `self.clients.claim()`

To register a service worker, I can call
navigator.serviceWorker.register('/worker.js')
Every time the page loads it checks for an updated version of worker.js. If an update is found, the new worker won't be used until all the page's tabs are closed and then re-opened. The solution I read was:
self.addEventListener('install', function(event) {
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', function(event) {
  event.waitUntil(self.clients.claim());
});
I can understand the skipWaiting part, but what exactly does clients.claim() do? I've done some simple tests and it seems to work as expected even without it.
I'm excerpting the following from a guide to the service worker lifecycle:
clients.claim
You can take control of uncontrolled clients by calling clients.claim() within your service worker once it's activated.
Here's a variation of the demo above which calls clients.claim() in its activate event. You should see a cat the first time. I say "should", because this is timing sensitive. You'll only see a cat if the service worker activates and clients.claim() takes effect before the image tries to load.
If you use your service worker to load pages differently than they'd load via the network, clients.claim() can be troublesome, as your service worker ends up controlling some clients that loaded without it.
Note: I see a lot of people including clients.claim() as boilerplate, but I rarely do so myself. It only really matters on the very first load, and due to progressive enhancement the page is usually working happily without the service worker anyway.
A service worker takes control starting from the next page reload after its registration. By using self.skipWaiting() and self.clients.claim(), you can have the service worker take control of the client on the first load itself.
e.g.
Let's say I cache a file hello.txt. If I then request hello.txt again, the request will still hit the server even though I have the resource in my cache. This is the scenario when I don't use self.clients.claim(). However, when requesting hello.txt after the next page reload, the resource will be served from the cache.
To tackle this problem, I have to use the combination of self.skipWaiting() and self.clients.claim() so that the service worker starts serving content as soon as it is activated.
P.S.:
"next page reload" means a page revisit.
"first load" signifies the moment when the page is visited for the first time.
I had trouble wrapping my head around clients.claim() as well, and none of the explanations made sense to me, so hopefully this answer helps anyone else struggling.
To understand Clients.claim we have to look at the worker lifecycle.
Installing: This is the first phase after registration. When the oninstall handler completes, the service worker is considered installed.
Installed: The service worker is waiting for clients using other service workers to be closed.
Activating: There are no clients controlled by other service workers. When the onactivate handler completes, the service worker is considered activated.
Activated: The service worker now controls the page.
skipWaiting() and clients.claim() are designed to solve different problems.
clients.claim() ONLY has an effect the very first time your webpage goes from an uncontrolled page to a page controlled by a service worker, i.e. when you first register one.
skipWaiting() is exactly what it says: it skips the waiting phase and moves directly to activating. Once activated, it is the active service worker for all clients, clients being any window or tab that has a webpage open within the scope of your service worker.
So why do we need clients.claim() then?
This confused me and I bet it confused you too. The answer is best described in an example.
Imagine your webpage DOES NOT register a service worker and is therefore uncontrolled. You have two tabs (clients) of your webpage open. You make an update to your webpage so that it now registers a service worker.
You decide to reload the first tab (client); it will now fetch the script that registers the service worker and start installing it. Once installed, the worker notices that no client is being controlled by another service worker, so it does not have to wait and can safely activate immediately. Any fetch for a resource will now go through your service worker.
However, here is the catch: your other tab (client) will also have an active service worker now, BUT it is not yet being controlled, meaning fetches from it will not go through the service worker yet. You need to reload any other tab (client) in order for it to be controlled by the active service worker. This is confusing, since if any new service worker hereafter becomes active by forcing it with skipWaiting(), the other tabs (clients) will immediately be controlled by the new active service worker. So I emphasize: the reload is needed ONLY when an uncontrolled webpage becomes controlled.
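As an aside, a page can observe that transition itself; a small sketch of detecting when it becomes controlled:
// in the page: fires when clients.claim() (or a skipWaiting'd worker)
// takes control of this client
navigator.serviceWorker.addEventListener('controllerchange', () => {
  console.log('now controlled by:', navigator.serviceWorker.controller);
});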
Enter clients.claim(). When you call self.clients.claim() in the first service worker once it becomes activated, like so:
self.addEventListener('activate', event => {
  event.waitUntil(clients.claim());
});
it will make sure the other tabs (clients) that were uncontrolled, but have an active service worker, get controlled by the active service worker, meaning any fetch for a resource will now go through it. Without clients.claim(), the service worker is not used in those tabs until the page is reloaded.
Again, if all the webpages are being controlled by a service worker already and a NEW service worker is detected and installed, it normally waits until all tabs with the webpage (clients) are closed. The next time you visit the webpage, the new service worker will have activated and the webpage will be controlled by it.
However, if you don't close all the clients and you force the new service worker with skipWaiting(), it will immediately become active and in control for all clients, meaning any new fetch for a resource from ANY of the clients will go through your new service worker. In that case, you don't need clients.claim() for the other clients to start using your new service worker.
This was my attempt, hopefully it helped someone.
clients.claim() makes the service worker take control of the page when you first register a service worker. If there is already a service worker controlling the page, it makes no difference. skipWaiting() makes a new service worker replace an old one; without it, you would have to close the page (and any other open tabs containing a page in the same scope) before the new service worker was activated.

Prevent Service Worker from automatically stopping

Service Worker seems to automatically stop at some point. This behaviour unintentionally closes the WebSocket connection established on activate.
When and why does it stop? How can I programmatically disable this unexpected behaviour and keep the Service Worker running?
What you're seeing is the expected behavior, and it's not likely to change.
Service workers intentionally have very short lifespans. They are "born" in response to a specific event (install, activate, message, fetch, push, etc.), perform their task, and then "die" shortly thereafter. The lifespan is normally long enough that multiple events might be handled (i.e. an install might be followed by an activate followed by a fetch) before the worker dies, but it will die eventually. This is why it's very important not to rely on any global state in your scripts, and to bootstrap any state information you need via IndexedDB or the Cache Storage API when your service worker starts up.
Service workers are effectively background processes that get installed whenever you visit certain web pages. If those background processes were allowed to run indefinitely, there's an increased risk of negative impact on battery and performance of your device/computer. To mitigate this risk, your browser will only run those processes when it knows it's necessary, i.e. in response to an event.
A use case for WebSockets is having your client listen for some data from the server. For that use case, the service worker-friendly alternative to using WebSockets is to use the Push Messaging API and have your service worker respond to push events. Note that in the current Chrome implementation, you must show a user-visible notification when handling a push event. The "silent" push use case is not supported right now.
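For instance, a minimal sketch of a push handler that satisfies that user-visible notification requirement (the title and body are placeholders):
self.addEventListener('push', event => {
  // Keep the worker alive until the notification has been shown.
  event.waitUntil(
    self.registration.showNotification('New data available', {
      body: 'Open the app to see the latest updates.'
    })
  );
});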
If instead of listening to data from the server, you were using WebSockets as a way of sending data from your client to your server, there's unfortunately no great service worker-friendly way of doing that. At some point in the future, there may be a way of registering your service worker to be woken up via a periodic/time-based event, at which point you could use fetch() to send data to the server, but that's currently not supported in any browsers.
P.S.: Chrome (normally) won't kill a service worker while you have its DevTools interface open, but this is only to ease debugging and is not behavior you should rely on for a real application.
The Theory
Jeff's answer explains the theory part - why and how, in detail.
It also includes many good points on why you might not want to pursue this.
However, in my case, the downsides are nonexistent since my app will run on desktop machines which are reserved to run only my app. But I needed to keep the SW alive even when the browser window is minimized. So, if you are working on a web app which will run on a variety of devices, keeping the SW alive might not be a good idea, for the reasons discussed in the above answer.
With that said, let's move onto the actual, practical answer.
My "Practical" Solution
There should be many ways to keep the SW alive, since SWs stay alive a bit after responding to many different events. In my case, I've put a dummy file to server, cached it in the SW, and requested that file periodically from the document.
Therefore the steps are;
create a dummy file on the server, say ping.txt
cache the file on your SW
request that file from your html periodically to keep the SW alive
Example
// in index.html
setInterval(function() {
  fetch('/ping.txt');
}, 20000);
The request will not actually hit the server, since it will be served from the SW cache. Nonetheless, it will keep the SW alive, since the SW has to respond to the fetch event triggered by the request.
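For completeness, a sketch of the service-worker side (the cache name 'keepalive' is an arbitrary choice):
// in sw.js
self.addEventListener('install', event => {
  // Cache the dummy file so later pings never hit the network.
  event.waitUntil(
    caches.open('keepalive').then(cache => cache.add('/ping.txt'))
  );
});

self.addEventListener('fetch', event => {
  if (new URL(event.request.url).pathname === '/ping.txt') {
    event.respondWith(
      caches.match('/ping.txt').then(cached => cached || fetch(event.request))
    );
  }
});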
PS: I've found 20 seconds to be a good interval to keep the SW alive, but it might change for you, you should experiment and see.

Service Worker vs Shared Worker

What is the difference between Service Worker and Shared Worker?
When should I use Service Worker instead of Shared Worker and vice versa?
A service worker has additional functionality beyond what's available in shared workers, and once registered, they persist outside the lifespan of a given web page.
Service workers can respond to message events, like shared workers, but they also have access to additional events. Handling fetch events allows service workers to intercept any network traffic (originating from a controlled page) and take specific actions, including serving responses from a Request/Response cache. There are also plans to expose a push event to service workers, allowing web apps to receive push messages in the "background".
The other major difference relates to persistence. Once a service worker is registered for a specific origin and scope, it stays registered indefinitely. (A service worker will automatically be updated if the underlying script changes, and it can be either manually or programmatically removed, but that's the exception.) Because a service worker is persistent, and has a life independent of the pages active in a web browser, it opens the door for things like using them to power the aforementioned push messaging—a service worker can be "woken up" and process a push event as long as the browser is running, regardless of which pages are active. Future web platform features are likely to take advantage of this persistence as well.
There are other, technical differences, but from a higher-level view, those are what stand out.
A SharedWorker context is a stateful session and is designed to multiplex web pages into a single app via asynchronous messaging (client/server paradigm). Its life cycle is domain based, rather than single page based like DedicatedWorker (two-tier paradigm).
A ServiceWorker context is designed to be stateless. It actually is not a persistent session at all - it is the inversion of control (IoC) or event-based persistence service paradigm. It serves events, not sessions.
One purpose is to serve concurrent, secure, asynchronous events for long-running queries (LRQs) to databases and other persistence services (i.e. clouds), exactly what a thread pool does in other languages.
For example if your web app executes many concurrent secure LRQs to various cloud services to populate itself, ServiceWorkers are what you want. You can execute dozens of secure LRQs instantly, without blocking the user experience. SharedWorkers and DedicatedWorkers are not easily capable of handling many concurrent secure LRQs. Also, some browsers do not support SharedWorkers.
Perhaps, for clarity, they should have called ServiceWorkers CloudWorkers instead, but not all services are clouds.
Hopefully this explanation should lead you to thinking about how the various Worker types were designed to work together. Each has its own specialization, but the common goal is to reduce DOM latency and improve user experience in web based applications.
Throw in some WebSockets for streaming and WebGL for graphics and you can build some smoking hot web apps that perform like multiplayer console games.
2022 06 Update
WebKit added support for the SharedWorker recently, see the details of the resolution in the issue link mentioned below.
2020 11 Update
Important detail for anyone interested in this discussion: SharedWorker is NOT supported by WebKit (it was intentionally removed around v6 or so).
The WebKit team explicitly suggests using ServiceWorker wherever SharedWorker might seem relevant.
For the community's wish to get this functionality back into WebKit, see this (unresolved as of now) issue.
Adding to the previous great answers:
The main difference is that a ServiceWorker is stateless (it will shut down and later start with a clear global scope) while a SharedWorker maintains state for the duration of the session.
Still, there is a possibility to request that the ServiceWorker maintain state for the duration of a message handler.
self.onmessage = e => e.waitUntil((async () => {
  // do things here
  // for example, issue a fetch and store the result in IndexedDB;
  // the ServiceWorker will live until this promise resolves
})());
The above code requires that the ServiceWorker not shut down until the promise passed to waitUntil resolves. If many messages are handled concurrently in this manner, the ServiceWorker will not shut down until all the promises have resolved.
This could possibly be used to prolong a ServiceWorker's life indefinitely, making it effectively a SharedWorker. Still, keep in mind that the browser might decide to force a shutdown if the ServiceWorker runs for too long.
