workbox on https://localhost:<port> fails to fetch random assets - javascript

I've got an Express server with a self-signed SSL certificate which just serves assets for an SPA frontend. When I visit https://localhost:8433 the application starts up and successfully fetches all required assets. However, at the same time the app's service worker (Workbox) also sends its own requests to cache those same assets, and those requests randomly fail with a TypeError: Failed to fetch.
Network tab:
It looks like some requests are being randomly canceled by the service worker.
I've tried searching for similar issues, but to no avail. Some answers recommend playing around with CORS, which didn't help; swapping localhost for 127.0.0.1 or changing caching strategies had the same effect. In the end I was down to a very minimal configuration with precaching only (no runtime caching), but unfortunately nothing really helped.
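For reference, the stripped-down worker was essentially just precaching, roughly along these lines (a sketch, assuming the workbox-precaching module with the asset manifest injected at build time by workbox-build or the Workbox CLI):

// sw.js - precache only, no runtime caching
import { precacheAndRoute } from 'workbox-precaching';

// self.__WB_MANIFEST is replaced at build time with the list of hashed
// assets to precache.
precacheAndRoute(self.__WB_MANIFEST);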
I'm pretty sure it should be a common issue, i'm just missing it :)
Does anyone have any thoughts?
By the way, when the app is served over plain HTTP there are no errors at all.

Related

Serving Angular app as a static content in Express Server

I am serving an Angular app as static content from an Express server. When serving static files with Express, Express by default adds an ETag to the files, so each subsequent request first checks whether the ETag matches and, if it does, the files are not sent again. I know that the Service Worker works similarly and tries to match a hash. Does anyone know what the main difference is between these two approaches (caching with ETag and caching with Service Workers), and when we should use one over the other? What would be the most efficient when it comes to performance:
1. Server side caching and serving the Angular app's static files
2. Implementing the Angular Service Worker for caching
3. Do both 1 and 2
To give a better perspective, I'll address a third cache option as well, to clarify the differences.
Types of caching
Basically we have 3 possible layers of caching, listed here in the priority order in which they are checked from the client:
Service Worker cache (client-side)
Browser Cache, also known as HTTP cache (client-side)
Server side cache (CDN)
PS: Some browsers, like Chrome, have an extra memory cache layer in front of the service worker cache.
Characteristics / differences
The service worker cache is the most reliable of the client-side options, since it defines its own rules for how caching is managed and provides extra capabilities and fine-grained control over exactly what is cached and how.
Browser caching is driven by HTTP headers on the asset response (Cache-Control and Expires), but the main issue is that there are many conditions under which those headers are ignored.
For instance, I've heard that files bigger than 25 MB are normally not cached, especially on mobile, where memory is limited (and I believe browsers are getting even stricter lately, due to the increase in mobile usage).
So between those two options, I'd always choose the Service Worker cache for reliability.
Now, turning to the 3rd option: the CDN checks the HTTP headers, looking at the ETag to decide when to bust its cache.
The idea of server-side caching is to only call the origin server when the asset is not found on the CDN.
Now, between the 1st and 3rd options, the main difference is that Service Workers work best for slow or failing network connections and for offline use: since the cache lives client-side, if the network is down the service worker serves the last cached version, allowing for a smooth user experience.
Server-side caching, on the other hand, only works when we can actually reach the server, but in exchange the caching happens off the user's device, saving local space and reducing the application's memory consumption.
So as you see, there are no right or wrong answers, just what works best for your use case.
Some Sources
MDN Cache
MDN HTTP caching
Great article from web.dev
Facebook study on caching duration and efficiency
Let's answer your questions:
what is the main difference between these two approaches (caching with ETag and caching with Service Workers)
Both solutions cache files; the main difference is whether you need to reach the server or can stay local:
For the ETag, the browser hits the server asking for a file and sends along a hash (the ETag). Depending on the file stored on the server, the server will either answer "the file was not modified, use your local copy" with a 304 HTTP response, or "here is a new version of that file" with a 200 HTTP response and the new file. In both cases the server decides, and the user waits for a round trip.
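For illustration only (Express's static middleware does this for you; the route and file paths below are placeholders), the round trip on the server side works like this:

const express = require('express');
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');

const app = express();

// Content hash of the file, used as the ETag value.
function fileEtag(filePath) {
  const data = fs.readFileSync(filePath);
  return '"' + crypto.createHash('md5').update(data).digest('hex') + '"';
}

app.get('/app.js', (req, res) => {
  const etag = fileEtag(path.join(__dirname, 'dist/app.js'));
  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end(); // "not modified, use your local copy"
  }
  res.set('ETag', etag);
  res.sendFile(path.join(__dirname, 'dist/app.js')); // "here is a new version of that file"
});

app.listen(8080);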
With the Service Worker approach you decide locally what to do. You can write logic to control what and when to use a local (cached) copy, and when to go to the server. This is very useful for offline capabilities, since the logic runs in the client and there is no need to hit the server.
when we should use one over the other?
You can use both together. For example, you can define logic in the service worker that returns the local copies when there is no connection and otherwise goes to the server.
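A minimal sketch of that logic with the plain Service Worker APIs (the cache name is an arbitrary placeholder):

self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Network reachable: refresh the cached copy, then return the response.
        const copy = response.clone();
        event.waitUntil(
          caches.open('app-cache-v1').then((cache) => cache.put(event.request, copy))
        );
        return response;
      })
      // Network failed: fall back to whatever was cached last time
      // (if nothing was cached either, the request fails as it would have anyway).
      .catch(() => caches.match(event.request))
  );
});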
What would be the most efficient when it comes to performance:
1. Server side caching and serving the Angular app's static files
2. Implementing the Angular Service Worker for caching
3. Do both 1 and 2
My recommended approach is to use both, but treat your files differently. The index.html file can change, so let the service worker handle it when there is no internet access, and when there is, let the web server answer with the ETag. All the other static files (CSS and JS) should be immutable: add a hash to each file's name so every build produces unique file names, and then you can cache them aggressively because you can be sure the local copy is valid. When you ship a new version of the app, you only modify index.html to point to the new immutable files.
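A sketch of that header strategy in Express (paths and max-age values are illustrative):

const express = require('express');
const path = require('path');
const app = express();

// Hashed JS/CSS bundles: safe to cache "forever", because the file name
// changes whenever the content changes.
app.use('/static', express.static(path.join(__dirname, 'dist/static'), {
  immutable: true,
  maxAge: '1y',
}));

// index.html: always revalidate; Express's default ETag handling answers
// with a 304 when nothing has changed.
app.get('*', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist/index.html'));
});

app.listen(8080);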

JS service-worker and cache storage API with http 302 - how to cache the redirect?

I have a webpage, served via HTTPS from Tomcat, with a service worker set up according to the examples for fetching a resource and storing it in the cache. If Tomcat is not running, the page is served from the cache storage - so far so good.
My Tomcat configuration contains the redirectPort attribute to redirect HTTP to HTTPS. The problem I have: when Tomcat is not running and my webpage is accessed via HTTP, the browser shows "Connection refused", since the HTTP 302 redirect is not stored in the cache. How could I achieve that?
Unfortunately, service workers require HTTPS for safety, so you cannot have a service worker intercept a plain HTTP request.
If you would like to force browsers to visit your page in https you could enable HSTS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
The HSTS list can be preloaded in the browser, allowing it to work offline. Note, however, that you need to be careful when enabling HSTS: if you make a mistake, it can be difficult to correct.
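For illustration, the header itself is the same regardless of what serves it; in a Node/Express front end it could be added with a middleware like this (in Tomcat the equivalent is usually configured via a filter or at the reverse proxy):

const express = require('express');
const app = express();

// Send HSTS on every response (only meaningful when the app is actually
// served over HTTPS). max-age is in seconds; includeSubDomains and preload
// are optional extras.
app.use((req, res, next) => {
  res.set('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
});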
Alternatively, browsers are slowly moving towards loading HTTPS by default unless the user explicitly types http://. For example:
https://blog.chromium.org/2021/03/a-safer-default-for-navigation-https.html

Force serviceWorker update when existing "bad" serviceWorker is active and cache first

I have a problem, much like this question, which is a few years old now.
The problem I have is that (a) new JS added to the app code to unregister the ServiceWorker or skip waiting is never parsed, because the app code is still being served by the old ServiceWorker, and (b) although I have full control of the server, using the Clear-Site-Data: response header isn't working, because all of the app's files are served from the ServiceWorker or its cache, so the browser never sees that header. The only fresh requests are being made to an API on a different subdomain.
There are users out in the world using this application. What can I do to get clients with the old serviceWorker installed and running in their browser to observe new updates?
The request for the underlying service worker script file during an update check should always bypass both the HTTP cache and the existing service worker's fetch handler.
You should be able to add the Clear-Site-Data: header to the response for the underlying service worker script file returned by your web server and recover in that manner.
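A sketch of that, assuming the worker file is served from a Node/Express server (adapt the path and server to your own stack):

const express = require('express');
const path = require('path');
const app = express();

// Serve the service worker file itself with Clear-Site-Data so the browser
// wipes its caches and storage the next time it runs an update check.
app.get('/service-worker.js', (req, res) => {
  res.set('Clear-Site-Data', '"cache", "storage"');
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist/service-worker.js'));
});

app.listen(8080);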
Additionally, you should be able to include code in the service worker script file along the lines of what's in that "kill switch" example, and it will run whenever a browser makes an update check, since, again, that update check bypasses caches and goes straight to the web server.
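For reference, a minimal sketch of such a kill-switch worker (deployed at the same URL and scope as the old one; once a browser runs its update check it will install this, unregister itself, and reload any open tabs):

self.addEventListener('install', () => {
  // Take over immediately instead of waiting for old clients to close.
  self.skipWaiting();
});

self.addEventListener('activate', (event) => {
  event.waitUntil(
    self.registration.unregister()
      // Reload every open tab so it is served by the network again.
      .then(() => self.clients.matchAll({ type: 'window' }))
      .then((clients) => clients.forEach((client) => client.navigate(client.url)))
  );
});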

How to SSR on specific routes on angular-universal? [duplicate]

I used @angular/service-worker to generate a SW for an Angular 4 web app. After updating the ngsw-manifest.json to handle dynamic requests from the server,
I get "Status Code: 503 OK (from ServiceWorker)" when offline (after the first load).
Any thoughts?
Your problem is related to the Content Security Policy (CSP) configuration on your web server or nginx proxy. Even though you are connecting properly through nginx, the PWA offline feature is not enabled in your browser.
Read more at https://content-security-policy.com/.
The performance strategy works (you can try it); the caveat is that dynamic content is only cached on the second page load, due to the async nature of the service worker. In your ngsw-config.json, the cacheConfig sits inside a dataGroups entry (the group name and URL pattern below are placeholders for your own dynamic routes):
"cacheConfig": {
"strategy": "performance",
"maxSize": 100,
"maxAge": "3d"
}
By default, files that are fetched at runtime are not cached, but by creating a dynamic cache group, matching files fetched through the Service Worker will also be cached. Choosing the optimizeFor setting of performance will always serve from the cache first and then attempt to cache any updates. Choosing freshness means the Service Worker will try to fetch the file over the network, and only if the network fails will it serve the cached copy.

Consuming REST service with Ajax - Same origin policy

I'm writing my first Knockout.js application and I'm stuck trying to make an AJAX request to my service (I'm new to web development in general).
I already found out that the problem is the same-origin policy, and I think the reason I'm being blocked has to do with my development setup: I'm using WebStorm to write my HTML/JS and launching the page with its built-in web server, which serves on port 63342, while my REST service is self-hosted, written in Go, and running on port 8080.
When the application is finished I'd like to serve both the REST API and the web app from my Go server, but while developing, the WebStorm server is really convenient.
Do any of you have similar problems? How do you work around it? Should I try to serve everything from my Go server even during development? My server isn't ready to serve any static content yet. Or should I try to use JSONP, even though I don't think I need it in the final app?
This is the error I get in my Chrome development tools:
XMLHttpRequest cannot load http://localhost:8080/lines/03/pos. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access.
You could CORS-enable your REST service, and make sure that your web app is sending CORS request headers.
I'm not proficient in either Go or WebStorm, but I recommend investigating CORS.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
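Purely as an illustration of the headers involved (shown here as an Express middleware for brevity; the Go service would set the same headers on its responses, and the dev origin comes from the error message above):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'http://localhost:63342'); // the WebStorm dev server origin
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // answer the preflight without hitting the handlers
  }
  next();
});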
Turns out it took only a couple of lines of code to serve static content from my Go server, so I just did that and now everything is working fine.
Thanks for your help though!
Best regards
