Serving Angular app as static content in Express Server - javascript

I am serving an Angular app as static content from an Express server. When serving static files, Express adds an ETag to them by default, so each subsequent request first checks whether the ETag matches and, if it does, the files are not sent again (see the sketch after the list below). I know that a Service Worker works similarly, trying to match a hash. Does anyone know the main difference between these two approaches (caching with ETags and caching with Service Workers), and when should we use one over the other? What would be the most efficient when it comes to performance:
Server side caching and serving Angular app static files
Implementing Angular Service Worker for caching
Do both 1 and 2
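For reference, a minimal sketch of the setup described above, assuming the Angular build output lives in a dist/ folder (the folder name and port are assumptions):

const express = require('express');
const path = require('path');
const app = express();

// express.static sends an ETag header by default (etag: true),
// so repeat requests for unchanged files get a 304 Not Modified.
app.use(express.static(path.join(__dirname, 'dist')));

app.listen(3000);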

To give a better perspective, I'll address a third cache option as well, to clarify the differences.
Types of caching
Basically, we have 3 possible layers of caching, listed by the priority in which they are checked from the client:
Service Worker cache (client-side)
Browser Cache, also known as HTTP cache (client-side)
Server side cache (CDN)
PS: Some browsers, like Chrome, have an extra memory cache layer in front of the service worker cache.
Characteristics / differences
The service worker is the most reliable of the client-side options, since it defines its own rules for how to manage caching and provides extra capabilities and fine-grained control over exactly what is cached and how.
Browser caching is defined by HTTP headers on the asset response (Cache-Control and Expires), but the main issue is that there are many conditions under which those headers are ignored.
For instance, I've heard that files bigger than 25 MB are normally not cached, especially on mobile, where memory is limited (I believe this is getting even stricter lately, due to the increase in mobile usage).
So between those 2 options, I'd always choose the Service Worker cache for reliability.
Now, turning to the 3rd option: the CDN checks the HTTP headers (the ETag, for instance) to decide when to bust its cache.
The idea of server-side caching is to call the origin server only when the asset is not found on the CDN.
Now, between the 1st and the 3rd, the main difference is that Service Workers work best on slow or failing network connections and offline, since the cache is kept client-side: if the network is down, the service worker retrieves the last cached information, allowing for a smooth user experience.
The server-side cache, on the other hand, only works when we are able to reach the server, but in exchange the caching happens off the user's device, saving local space and reducing the application's memory consumption.
So as you can see, there are no right or wrong answers, just what works best for your use case.
Some Sources
MDN Cache
MDN HTTP caching
Great article from web.dev
Facebook study on caching duration and efficiency

Let's answer your questions:
what is the main difference between these two approaches (caching with ETags and caching with Service Workers)
Both solutions cache files; the main difference is whether you need to reach the server or can stay local:
For the ETag, the browser hits the server asking for a file and sends along a hash (the ETag, in an If-None-Match header). Depending on the file stored on the server, the server will either answer "the file was not modified, use your local copy" with a 304 HTTP response, or "here is a new version of that file" with a 200 HTTP response and the new file. In both cases the server decides, and the user waits for a round trip.
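A minimal sketch of that round trip in plain Node.js (Express does this for you via express.static; the file name and hash algorithm here are just for illustration):

const http = require('http');
const crypto = require('crypto');
const fs = require('fs');

http.createServer((req, res) => {
  const body = fs.readFileSync('./index.html'); // assumed to exist
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304); // hash matches: the local copy is still valid
    res.end();
  } else {
    res.writeHead(200, { 'ETag': etag, 'Content-Type': 'text/html' });
    res.end(body); // new or changed file: send it with its ETag
  }
}).listen(3000);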
With the Service Worker approach you decide locally what to do. You can write logic to control what and when to use a local (cached) copy, and when to go to the server. This is very useful for offline capabilities, since the logic runs in the client and there is no need to hit the server.
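For example, a minimal cache-first fetch handler in a hand-written service worker script (a sketch, not Angular's generated worker):

// sw.js
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve the local copy if we have one; otherwise go to the network.
      return cached || fetch(event.request);
    })
  );
});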
when should we use one over the other?
You can use both together. You can define logic in the service worker: if there is no connection, return the local copies; otherwise, go to the server.
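A minimal network-first sketch of that logic (the cache name is an assumption):

// sw.js
self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // only cache GET requests
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        const copy = response.clone(); // store a fresh copy for offline use
        caches.open('app-cache-v1').then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() => caches.match(event.request)) // offline: fall back to cache
  );
});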
What would be the most efficient when it comes to performance:
Server side caching and serving Angular app static files
Implementing Angular Service Worker for caching
Do both 1 and 2
My recommended approach is to use both. But treat your files differently: the index.html file can change, so for it use the service worker (to cover the case where there is no internet access) and, when there is internet access, let the web server answer with the ETag. All the other static files (CSS and JS) should be immutable, meaning you can be sure the local copy is valid; for those, add a hash to the file names (so they are always unique files) and cache them aggressively. When you release a new version of your app, you modify index.html to point to the new immutable files.
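A sketch of those headers in Express (folder names are assumptions; the Angular CLI already emits hashed file names like main.abc123.js):

const express = require('express');
const path = require('path');
const app = express();

// Hashed JS/CSS never changes under the same name: cache it "forever".
app.use(express.static(path.join(__dirname, 'dist'), {
  immutable: true,
  maxAge: '1y',
  index: false, // don't let the static middleware serve index.html
}));

// index.html must always be revalidated (the ETag handles the 304s).
app.get('*', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

app.listen(3000);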

Related

Cache all the http requests to a domain

As a frontend engineer, I'm wondering if there is a way to speed up my development by using a tool at the browser level (or any other kind of tool) to cache all the requests to a server.
With every hot restart, I have to wait for a lot of requests to fetch before the page refreshes and returns to the desired state of the application.
A lot of the time I don't need this information to be live; a cached version is fine (and that would also avoid useless server workload).
In short, to put it simply: is there a way to cache a request with all its parameters, like "http://myserver/endpoint/?params", and make the browser return the cached result?
Basically, a whole server as a cached backend.

A question about how web applications work and how server-client is implemented

This is kind of a weird question to ask, I think, but I have been browsing about for some time and cannot find a clear, definite answer.
I understand that a client connects to its own server and communicates with the web server through sockets, and I kind of see how that works in PHP (I have never used PHP, but I have used sockets before, so I understand the concept).
The issue is I'm trying to get a real view of this.
The question is: do websites generally use sockets and contact a web server to fetch data or the actual HTML? Or is it a rare choice made in some areas?
If it is generally used, then is the "real" JS usually on the server, or is it client-side (for performance's sake)?
Context:
Let me explain a bit where I'm coming from, I'm not a web expert, but I am a computer engineering student so most concepts are easy to understand. A "real"-er view of this would be very helpful.
Now, onto why I'm asking this. I'm developing a web app as part of a project and have made a fair bit of progress on it, but everything was done on a local dev server (so basically a client?).
I've started wondering about this because I want to use a database for my website, and since I want to connect to something, I will need to connect to a web server first (for security's sake).
My question's intent is to guide me on how and most importantly, where, to setup this server.
I don't think showing any code would be of help here, but assume I have my client running on localhost:1234 and my database on localhost:3306. I think I should have a web server on another port so I can establish this communication, but I want to do it in a clean and legitimate way, so that all of my current solutions can be ported online with little to no changes (except the obvious).
There's a bunch to unpack here.
First of all, servers can be distant or local. Usually they are distant; local servers are mostly used for development purposes.
Even if your server is on your local machine, it still isn't the client. The client is the part that connects to your server; for web development it is usually the user's browser.
Javascript is a language that can be used server-side, with a NodeJS server, but more often client-side, in your user browser.
Your website, or web application, communicates with your server through various means. The most common one is the HTTP protocol, used to make requests to the server, such as data requests to populate your page (in the case of an API server, REST or otherwise), or simply a request for the actual page to display in the browser. The HTTP protocol works by resolving URLs and making requests to the server registered at that URL, using methods such as GET, POST, DELETE, etc.
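For example, a browser-side data request to a hypothetical REST endpoint (the URL and response shape are made up for illustration):

fetch('https://example.com/api/items')
  .then((response) => response.json())
  .then((items) => {
    console.log(items); // use the data to populate the page
  });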
Sockets are used to create a persistent connection with your server that works both ways. They are mostly used for realtime updates, such as a live chat, as they allow you to push updates from the server instead of having the client request everything.
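A minimal browser-side sketch of such a persistent, two-way connection (the URL is hypothetical):

const socket = new WebSocket('wss://example.com/chat');

socket.addEventListener('open', () => {
  socket.send('hello'); // the client can send at any time
});

socket.addEventListener('message', (event) => {
  // The server can push updates without the client asking first.
  console.log('update from server:', event.data);
});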
In most cases the database can be found on the same server as the one serving the website or application, as that is a lot easier to handle and often faster, avoiding the extra network requests to get the data. However, it can be placed on another server, with its own API to get the data (not necessarily web related).
Ports such as 1234 or 3306 are often used for local development. However, once you move your project to a hosting service, they are usually replaced by URLs, and the hosting service will provide you with a config to access the associated database. If you are building your own server you might still use ports. It is heavily dependent on your server config.
Hope this clears some things up.
In addition to @Morphyish's answer: in the simplest case, a web browser (the client) requests a URL from a server. The URL contains the domain name of the server and some parameters. The server responds with HTML code. The browser interprets the code and renders the webpage.
The browser and the server communicate using the HTTP protocol. HTTP is stateless and closes the connection after each request.
The server can respond with static HTML, e.g. by serving a static HTML file, or with dynamic HTML. Serving dynamic HTML requires some kind of server language (e.g. Node.js, PHP, Python) that essentially concatenates strings to build the HTML code. Usually, the HTML is created by filling templates with data from a database (e.g. MySQL, Postgres).
There are countless languages, frameworks, libraries that help to achieve this.
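A minimal sketch of this idea without any framework (the user object stands in for a database query result):

const http = require('http');

http.createServer((req, res) => {
  const user = { name: 'Ada' }; // stand-in for data fetched from a database
  const html = `<html><body><h1>Hello, ${user.name}!</h1></body></html>`;
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html); // dynamic HTML built by filling a template with data
}).listen(3000);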
In addition to HTML, the server can also serve JavaScript that is interpreted in the browser and adds dynamics to the webpage. However, these are 2 kinds of JavaScript that should not be mixed: Node.js code runs on the server and formats the server response, while client JavaScript runs in the browser. Remember, client and server are completely isolated and can communicate only through an HTTP connection.
That said, there are ways to make persistent connections between client and server, e.g. with WebSockets, and to add all kinds of exotic solutions. The core principle remains the same.
It does not matter whether the server software (e.g. Apache, nginx) is running on your local machine or anywhere else. The browser makes a request to an address, and the DNS and network stack figure out how to reach the server and make it work.

How to SSR on specific routes on angular-universal? [duplicate]

I used @angular/service-worker to generate a SW for an Angular 4 web app.
After updating the ngsw-manifest.json to handle dynamic requests from the server,
I get "Status Code: 503 OK (from ServiceWorker)" when offline (after the first load).
Any thoughts?
Your problem is likely related to the Content Security Policy (CSP) configuration on your web server or nginx proxy. Even though you are connecting properly to nginx, the PWA offline feature is not enabled in your browser.
Read more at https://content-security-policy.com/.
The performance strategy works (you can try it), the caveat being that dynamic content is only cached on the second page load, due to the async nature of the service worker. In your ngsw-config.json, inside a dataGroups entry (the group name and URL pattern below are placeholder assumptions):
"dataGroups": [{
  "name": "api-performance",
  "urls": ["/api/**"],
  "cacheConfig": {
    "strategy": "performance",
    "maxSize": 100,
    "maxAge": "3d"
  }
}]
By default, files that are fetched at runtime are not cached, but by creating a dynamic cache group, matching files that are fetched by the Service Worker will also be cached. Choosing the optimizeFor setting of performance will always serve from the cache first and then attempt to cache any updates. Choosing freshness means that the Service Worker will attempt to fetch the file via the network, and if the network fails, it will serve the cached copy.

Is it possible to cache, client-side, dynamically created files?

I'm working on a websocket based project. The main server, which includes both a web and websocket server, mainly acts as a forwarding hub to other websocket servers. These secondary websocket servers are not expected to also have web servers running, but are expected to be hosting files that may or may not need to be downloaded (transferred directly via websocket depending on need).
While the files aren't expected to be very large (the current test file hovers around 2 KB, but we expect a standard of around 10-20 KB, and possibly much larger if we allow encoding of images and other data-heavy material), files are expected to be demanded from an array of hosts repeatedly. That is, clients may 'request' a file from multiple, independent websocket servers (not at the same time). However, it would be expected that a single client may request the same file from a single websocket server as much as 20 times a day or more.
So, to cut down on bandwidth, I am wondering if it is possible to cache these dynamic files doing purely client-side work.
With HTML5 you can leverage the browser's Application Cache by using a manifest.
In such a manifest you can specify files which should be cached on the client side. Invalidation of these caches happens through JavaScript, so that is also on the client side.
More on Application Cache and manifest files here: http://www.html5rocks.com/en/tutorials/appcache/beginner/ (note that Application Cache has since been deprecated in favor of Service Workers).

Theoretical: Is It Possible / Feasible To Serve Static Content Via Websockets?

In the web world, a web browser makes a new request for every static file it has to retrieve, so a stylesheet, a JavaScript file, or an inline image each initiates a new server request. Whilst my knowledge of the web is pretty good, the underlying technologies like websockets are somewhat new to me in how they work and what they are capable of.
My question is rather theoretical, but I am wondering if it's possible now or would ever be possible to serve static files via a websocket? Considering websockets are a persistent connection from the client (web browser) to the server, it makes sense that websockets could be used for serving some if not all static content as it would just be one connection as opposed to many.
To clarify a little bit.
I realise my wording about connections was incorrect, as pointed out by Greg below. But from what I understand, the reason CDNs were created, and are still used today, is to address the issue of browsers and/or servers having a hard limit on the number of concurrent downloads; once you hit that limit, your requests are queued, adding to the page load time. I am aware they were also created to provide cookie-less requests. So really my question should be: "Can websockets be used in place of a CDN?"
BrowserScope has some useful metrics; it appears that the request limit is about 6 per hostname for most modern browsers, and even IE8. But as I said, sometimes a page has more than 6 resources. Does this mean the extra requests are queued, slowing the page load, where websockets could potentially reduce this to one connection?
It's definitely possible but there are a few reasons why you probably don't want to use this for static resources:
You need at least one resource that is statically delivered over the standard HTTP mechanism, which means you need something capable of serving static resources anyway. Generally you want to keep your JavaScript separate from your HTML, which would mean another static load. Or you can be messy and embed the WebSocket code in the main page, but you still aren't really any better off.
You can't open WebSocket connections until a script on the page starts running. Establishing the WebSocket connection adds some initial latency.
Most browsers will load non-conflicting static resources in parallel (some older browsers have a severe limit on the number of parallel connections, but they still have some parallelization). You could open multiple WebSocket connections for different static resources, but doing this reliably and efficiently is going to take a lot of effort. Browsers have already solved most of these issues for static resources.
Each WebSocket connection is a guaranteed-order, message-based transport. Combined with the serialized nature of JavaScript execution, this effectively means you get to process one WebSocket message at a time. You could use Web Workers to process more than one WebSocket connection in parallel, but the main render script will still be serialized across those connections. You could certainly make this efficient, but once again, this isn't a trivial problem, and browsers have already solved a lot of these static resource loading problems.
Many web servers support gzipping resources before delivering them. WebSocket does not yet have compression support (it's being discussed as an extension in the working group). This means that if you want to compress your resources over WebSocket, you will have to do it in JavaScript, which will add more latency.
If you have parts of your page that are dynamically updated using static resources (e.g. loading new images into an HTML5 canvas game), then WebSockets may be your best option, because an already established WebSocket connection will have lower latency and overhead for getting pushed updates from the server than having these delivered over HTTP. But I wouldn't recommend using WebSockets for the initial static resources when your page first loads.
This answer does not really address your web sockets question, but it may make it obsolete:
The next-generation technology that is supposed to solve the problem of transferring several assets over a single connection is SPDY, which was a candidate for HTTP 2.0. It has working implementations in Chrome and Firefox, and already some experimental server-side support by the likes of Google and Twitter.
Edit: the SPDY protocol is now deprecated. You can, however, look into it for research purposes.
