A hard reload or hard refresh (e.g., shift-reload in Chrome) appears to bypass the service worker.
For example, loading a service-worker controlled page such as https://airhorner.com/ or https://wiki-offline.jakearchibald.com/, setting the network to "offline" in devtools, and then hard reloading the page leads to a broken "there is no internet connection" page. (A regular reload shows the cached page, as expected.)
Is there a way to prevent this, or use the service worker as a fallback in the event that the device is offline?
This behavior is explicitly called out in the service worker specification:
navigator.serviceWorker.controller returns null if the request is a
force refresh (shift+refresh). The ServiceWorker objects returned from
this attribute getter that represent the same service worker are the
same objects.
So it's not just a browser implementation detail.
If you felt like there was a strong reason why a service worker shouldn't behave that way, the best approach would be to bring up your concerns in the spec's issue tracker.
Related
Further to "Can I http poll or use socket.io from a Service Worker on Safari iOS?", what is the list of what can and cannot be done in a service worker? The answer referenced above says "You cannot ... have an open connection of any sort to your server", which makes sense, but where is that fact documented, and how is the restriction enforced?
For example, are certain browser APIs unavailable to Service Workers, or is there an execution quota that prevents a long-running process?
Eg. if my service worker has ...
setInterval(() => { console.log('foo') }, 1000)
... will it throw an exception? Will it run and then fail? Is the behaviour browser dependent?
Service Workers are only supposed to process the events attached to them, and those events are registered by some script from the outside.
Even delaying execution via ExtendableEvent.waitUntil(promise) is not supported in some cases on Safari.
Once your event queue is empty, your user agent is supposed to decide whether to kill off the service worker. There is no guarantee that anything from then on is going to get executed.
A service worker is not just another thread but a very specific kind of thread: it is meant to intercept network and resource fetch requests and do something with them. In its most basic form, it serves from a cache when the network is not available, but it can also return a different resource than what was requested, an older version or a placeholder, etc.
Eg. if my service worker has ... setInterval(() => { console.log('foo') }, 1000) ... will it throw an exception? will it run and then fail? is the behaviour browser dependent?
It will likely work. However, there is very little point in doing this, since you have no DOM access and cannot directly interact with the user. At most, you can print out errors and warnings, though I don't know what warning would require interval polling.
From the question, it sounds like you are trying to accomplish some background work without blocking the main thread. If so, the more generic Worker API is your friend.
I got an error: Workbox precaches my static files (for example, JS or CSS), but the workbox.routing.registerRoute I set up doesn't work.
If I remove the Workbox precaching (while making sure the service worker still caches the files), then after a refresh the files are served from the cache.
Responses won't come from the service worker until the registered service worker takes control of the current page. Depending on how you're testing things, that might not happen until you've closed all of your previously open tabs for your origin.
You can learn more at "The Service Worker Lifecycle".
I'd recommend starting from scratch by using a Chrome Incognito window, going through the SW registration, and then reloading that Incognito tab. At that point, the newly registered SW should be in control of the page, and you should see your precached JavaScript being used to satisfy the subresource request.
In general, if you are using Workbox precaching and runtime routing in the same service worker, and you list your call to precaching first (which is what you're doing), then precaching will take precedence.
Service Worker seems to automatically stop at some point. This behaviour unintentionally closes the WebSocket connection established on activate.
When and why does it stop? How can I programmatically disable this unexpected behavior to keep the Service Worker running?
What you're seeing is the expected behavior, and it's not likely to change.
Service workers intentionally have very short lifespans. They are "born" in response to a specific event (install, activate, message, fetch, push, etc.), perform their task, and then "die" shortly thereafter. The lifespan is normally long enough that multiple events might be handled (i.e. an install might be followed by an activate followed by a fetch) before the worker dies, but it will die eventually. This is why it's very important not to rely on any global state in your scripts, and to bootstrap any state information you need via IndexedDB or the Cache Storage API when your service worker starts up.
Service workers are effectively background processes that get installed whenever you visit certain web pages. If those background processes were allowed to run indefinitely, there's an increased risk of negative impact on battery and performance of your device/computer. To mitigate this risk, your browser will only run those processes when it knows it's necessary, i.e. in response to an event.
A use case for WebSockets is having your client listen for some data from the server. For that use case, the service worker-friendly alternative to using WebSockets is to use the Push Messaging API and have your service worker respond to push events. Note that in the current Chrome implementation, you must show a user-visible notification when handling a push event. The "silent" push use case is not supported right now.
If, instead of listening for data from the server, you were using WebSockets as a way of sending data from your client to your server, there's unfortunately no great service worker-friendly way of doing that. At some point in the future, there may be a way of registering your service worker to be woken up via a periodic/time-based event, at which point you could use fetch() to send data to the server, but that's currently not supported in any browsers.
P.S.: Chrome (normally) won't kill a service worker while you have its DevTools interface open, but this is only to ease debugging and is not behavior you should rely on for a real application.
The Theory
Jeff's answer explains the theory part - why and how, in detail.
It also includes many good points on why you might not want to pursue this.
However, in my case, the downsides are nonexistent, since my app will run on desktop machines that are reserved to run only my app. But I needed to keep the SW alive even when the browser window is minimized. So, if you are working on a web app that will run on a variety of devices, keeping the SW alive might not be a good idea, for the reasons discussed in the answer above.
With that said, let's move onto the actual, practical answer.
My "Practical" Solution
There should be many ways to keep the SW alive, since SWs stay alive for a bit after responding to many different events. In my case, I've put a dummy file on the server, cached it in the SW, and requested that file periodically from the document.
The steps are:
create a dummy file on the server, say ping.txt
cache the file on your SW
request that file from your html periodically to keep the SW alive
Example
// in index.html
setInterval(function () {
  fetch('/ping.txt')
}, 20000)
The request will not actually hit the server, since it will be answered from the SW's cache. Nonetheless, it will keep the SW alive, since the SW responds to the fetch event evoked by the request.
PS: I've found 20 seconds to be a good interval to keep the SW alive, but it may differ for you; you should experiment and see.
What is the difference between Service Worker and Shared Worker?
When should I use Service Worker instead of Shared Worker and vice versa?
A service worker has additional functionality beyond what's available in shared workers, and once registered, they persist outside the lifespan of a given web page.
Service workers can respond to message events, like shared workers, but they also have access to additional events. Handling fetch events allows service workers to intercept any network traffic (originating from a controlled page) and take specific actions, including serving responses from a Request/Response cache. There are also plans to expose a push event to service workers, allowing web apps to receive push messages in the "background".
The other major difference relates to persistence. Once a service worker is registered for a specific origin and scope, it stays registered indefinitely. (A service worker will automatically be updated if the underlying script changes, and it can be either manually or programmatically removed, but that's the exception.) Because a service worker is persistent, and has a life independent of the pages active in a web browser, it opens the door for things like using them to power the aforementioned push messaging—a service worker can be "woken up" and process a push event as long as the browser is running, regardless of which pages are active. Future web platform features are likely to take advantage of this persistence as well.
There are other, technical differences, but from a higher-level view, those are what stand out.
A SharedWorker context is a stateful session and is designed to multiplex web pages into a single app via asynchronous messaging (client/server paradigm). Its life cycle is domain based, rather than single page based like DedicatedWorker (two-tier paradigm).
A ServiceWorker context is designed to be stateless. It actually is not a persistent session at all - it is the inversion of control (IoC) or event-based persistence service paradigm. It serves events, not sessions.
One purpose is to serve concurrent secure asynchronous events for long-running queries (LRQs) to databases and other persistence services (i.e. clouds), exactly what a thread pool does in other languages.
For example if your web app executes many concurrent secure LRQs to various cloud services to populate itself, ServiceWorkers are what you want. You can execute dozens of secure LRQs instantly, without blocking the user experience. SharedWorkers and DedicatedWorkers are not easily capable of handling many concurrent secure LRQs. Also, some browsers do not support SharedWorkers.
Perhaps, for clarity, ServiceWorkers should have been called CloudWorkers instead, but not all services are clouds.
Hopefully this explanation should lead you to thinking about how the various Worker types were designed to work together. Each has its own specialization, but the common goal is to reduce DOM latency and improve user experience in web based applications.
Throw in some WebSockets for streaming and WebGL for graphics and you can build some smoking hot web apps that perform like multiplayer console games.
2022 06 Update
WebKit added support for the SharedWorker recently, see the details of the resolution in the issue link mentioned below.
2020 11 Update
Important detail for anyone interested in this discussion: SharedWorker is NOT supported by WebKit (it was intentionally removed around v6 or so).
The WebKit team explicitly suggests using ServiceWorker wherever SharedWorker might seem relevant.
For the community's wish to get this functionality back into WebKit, see this (unresolved as of now) issue.
Adding up to the previous great answers.
The main difference is that a ServiceWorker is stateless (it will shut down and then start again with a clear global scope), while a SharedWorker maintains state for the duration of the session.
Still, it is possible to request that a ServiceWorker maintain state for the duration of a message handler:
self.onmessage = (e) => e.waitUntil((async () => {
  // do things here
  // for example, issue a fetch and store the result in IndexedDB
  // the ServiceWorker will live until this promise resolves
})())
The above code requests that the ServiceWorker not be shut down until the promise passed to waitUntil resolves. If many messages are handled concurrently in this manner, the ServiceWorker will not shut down until all the promises have resolved.
This could conceivably be used to prolong the ServiceWorker's life indefinitely, making it effectively a SharedWorker. Still, keep in mind that the browser might decide to force a shutdown if the ServiceWorker runs for too long.
Is there any way I can access the chrome.* apis (specifically chrome.history) from a Web Worker?
If I pass the chrome.history or chrome object in with postMessage, it fails because of a conversion error to the Transferable type.
I can successfully query the history from my extension and pass the results, but I would like to leave the heavy lifting to the worker instead of the main thread and then pass the results.
Web Workers are meant to be lightweight and do not inherit any permissions (not even host permissions) from the extension; besides, chrome is not even defined in a Web Worker.
If you're doing really heavy work with the results of the chrome.history API, then you could pass the result of a callback to a worker for processing (with Transferables, the overhead is minimal). Before doing that, profile whether the performance impact is really significant enough to warrant anything like this.