Configure browser request timeout for external resources - JavaScript

Is it possible to configure the browser's timeout for pending requests via JavaScript?
By browser requests I mean requests that the browser executes in order to fetch static resources like images referenced by URL in HTML files, fonts, etc. (I'm not talking about XMLHttpRequests made intentionally by the application for fetching dynamic content from a back-end server. I'm talking about requests that I'm unable to control, made automatically by the browser, like the aforementioned ones.)
Sometimes the requests for such resources stay pending, occupying a connection (and the number of connections is limited). I'd like to time out (or cancel) those requests so they don't occupy a connection.
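For contrast, a request the application issues itself can be given a timeout explicitly; the sketch below (with a hypothetical '/api/data' URL) uses AbortController with fetch. Requests the browser makes on its own for images or fonts never pass through code like this, which is exactly the problem described above.

```javascript
// Sketch: timing out a fetch that the application itself controls.
function fetchWithTimeout(url, ms) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms); // cancel after `ms`
  return fetch(url, { signal: controller.signal })
    .finally(() => clearTimeout(timer));
}

fetchWithTimeout('/api/data', 5000)
  .then((res) => res.json())
  .catch((err) => console.error('timed out or failed:', err));
```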

Related

Cache all the http requests to a domain

As a frontend engineer, I'm wondering if there is a way to speed up my development by using a tool at the browser level (or any other kind of tool) to cache all the requests to a server.
On every hot restart, I have to wait for a lot of requests to be fetched before the page refreshes and returns to the desired state of the application.
A lot of the time I don't need this information to be live; a cached version is fine (and that would also avoid useless server load).
In short, is there a way to cache a request with all its parameters, like "http://myserver/endpoint/?params", and make the browser return the cached result?
Basically, a whole server as a cached backend.
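One possible approach, sketched below purely as an illustration, is a development-only service worker that answers repeat GET requests from a local cache and only goes to the server on a cache miss. The cache name 'dev-api-cache' and the '/endpoint/' path prefix are assumptions, not part of the question.

```javascript
// sw-dev-cache.js - minimal cache-first service worker for development.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  // Only intercept GET calls to the API we want to cache; let the rest pass.
  if (event.request.method !== 'GET' || !url.pathname.startsWith('/endpoint/')) {
    return;
  }
  event.respondWith(
    caches.open('dev-api-cache').then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached;                   // serve the stored response
      const response = await fetch(event.request);
      cache.put(event.request, response.clone());  // store it for the next reload
      return response;
    })
  );
});
```

The worker would still need to be registered once from the page, e.g. with navigator.serviceWorker.register('/sw-dev-cache.js').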

Serving Angular app as static content in Express Server

I am serving an Angular app as static content in an Express server. When serving static files with Express, Express adds an ETag to the files by default. So, each subsequent request will first check whether the ETag matches and, if it does, the files will not be sent again. I know that a Service Worker works similarly in that it tries to match a hash. Does anyone know what the main difference is between these two approaches (caching with ETags and caching with Service Workers), and when we should use one over the other? What would be the most efficient when it comes to performance:
Server side caching and serving Angular app static files
Implementing Angular Service Worker for caching
Do both 1 and 2
To give a better perspective, I'll address a third cache option as well, to clarify the differences.
Types of caching
Basically we have 3 possible layers of caching, listed in the order in which the client checks them:
Service Worker cache (client-side)
Browser Cache, also known as HTTP cache (client-side)
Server side cache (CDN)
PS: Some browsers, like Chrome, have an extra memory cache layer in front of the service worker cache.
Characteristics / differences
The service worker cache is the most reliable of the client-side options, since it defines its own rules for how caching is managed, and provides extra capabilities and fine-grained control over exactly what is cached and how caching is done.
Browser caching is driven by HTTP headers on the asset response (Cache-Control and Expires), but the main issue is that there are many conditions under which those headers are ignored.
For instance, I've heard that files bigger than 25 MB are normally not cached, especially on mobile, where memory is limited (I believe this is getting even stricter lately, due to the increase in mobile usage).
So between those 2 options, I'd always choose the Service Worker cache for more reliability.
Now, regarding the 3rd option, the CDN checks the HTTP headers, looking at the ETag to decide when to bust its cache.
The idea of server-side caching is to call the origin server only when the asset is not found on the CDN.
Now, between the 1st and the 3rd, the main difference is that the Service Worker cache works best for slow / failing network connections and offline use, since the cache lives client-side: if the network is down, the service worker retrieves the last cached information, allowing for a smooth user experience.
Server-side caching, on the other hand, only works when we are able to reach the server, but at the same time the caching happens off the user's device, saving local space and reducing the application's memory consumption.
So as you see, there are no right / wrong answers, just what works best for your use case.
Some Sources
MDN Cache
MDN HTTP caching
Great article from web.dev
Facebook study on caching duration and efficiency
Let's answer your questions:
what is the main difference between these two approaches (caching with ETag and caching with Service Workers)
Both solutions cache files; the main difference is whether you need to reach the server or can stay local:
With the ETag, the browser hits the server asking for a file and sends along a hash (the ETag). Depending on the file stored on the server, the server will answer either "the file was not modified, use your local copy" with a 304 HTTP response, or "here is a new version of that file" with a 200 HTTP response and the new file. In both cases the server decides, and the user waits for a round trip.
With the Service Worker approach you can decide locally what to do. You can write some logic to control what/when to use a local copy (cached) and when to go to the server. This is very useful for offline capabilities, since the logic runs on the client and there is no need to hit the server.
when we should use one over the other?
You can use both together. You can define some logic in the service worker: if there is no connection, return the local copies; otherwise, go to the server.
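A minimal sketch of that combined logic, assuming a cache named 'app-cache' that has already been populated (e.g. during the service worker's install step): try the network first and fall back to the local copy when the request fails.

```javascript
// Sketch: network-first with cache fallback inside a service worker.
// 'app-cache' is an assumed cache name, populated elsewhere (e.g. on install).
self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // only cache idempotent requests
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Online: refresh the cached copy and return the fresh response.
        const copy = response.clone();
        caches.open('app-cache').then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() => caches.match(event.request)) // offline: use the local copy
  );
});
```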
What would be the most efficient when it comes to performance:
Server side caching and serving Angular app static files
Implementing Angular Service Worker for caching
Do both 1 and 2
My recommendation is to use both approaches, but treat your files differently. The 'index.html' file can change, so use the service worker for it when there is no internet access, and when there is, let the web server answer with the ETag. All the other static files (CSS and JS) should be immutable files, meaning you can be sure the local copy is valid: add a hash to each file's name (so the names are always unique) and cache them. When you release a new version of your app, you modify 'index.html' to point to the new immutable files.
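To illustrate that split in practice, here is a hedged Express sketch (not from the original answer): hashed bundles get a long, immutable Cache-Control lifetime, while 'index.html' is always revalidated using the ETag Express adds by default. The 'dist/app' path and the one-year max-age are assumptions.

```javascript
// Sketch: serving an Angular build from Express with split caching rules.
const express = require('express');
const path = require('path');
const app = express();

// Hashed JS/CSS bundles never change under the same name: cache them aggressively.
app.use(express.static(path.join(__dirname, 'dist/app'), {
  index: false,       // don't auto-serve index.html from here
  immutable: true,
  maxAge: '1y',
}));

// index.html can change with each release: force revalidation on every request,
// letting Express answer 304 when the ETag still matches.
app.get('*', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist/app/index.html'));
});

app.listen(3000);
```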

Does the browser have a max time limit for network requests?

When initiating a network request in the browser via fetch or XMLHttpRequest, what happens if the network request takes a very long time? E.g. 20 minutes. Does the browser have a time limit after which it rejects requests? Is it possible to extend this?
I am thinking about a large file upload using a single network request to a server endpoint, but which might take a very long time over slow connections. Though I am only asking about browser behavior.
Usually these values are set on the web server.
You may want to reach out to the web administrator and see if they can adjust the XMLHttpRequest timeout value.
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/timeout
Additional Unsolicited Suggestion: For large uploads / big data, try to utilize Jumbo Frames if possible.
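On the client side, each API also lets you set (or raise) your own limit for requests the page initiates; any server or proxy timeout still applies on top of this. A minimal sketch, where the '/upload' endpoint and the 20-minute value mirror the example in the question:

```javascript
// Sketch: client-side timeouts for XMLHttpRequest and fetch.
const fileBlob = new Blob(['placeholder payload']); // stands in for the real file

// XMLHttpRequest exposes a timeout property in milliseconds (0 = no client limit).
const xhr = new XMLHttpRequest();
xhr.open('POST', '/upload');
xhr.timeout = 20 * 60 * 1000;                        // allow up to 20 minutes
xhr.ontimeout = () => console.error('upload timed out');
xhr.send(fileBlob);

// fetch has no built-in timeout; an AbortController provides one.
const controller = new AbortController();
setTimeout(() => controller.abort(), 20 * 60 * 1000);
fetch('/upload', { method: 'POST', body: fileBlob, signal: controller.signal })
  .catch((err) => console.error('aborted or failed:', err));
```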

What happens at the browser level when a max concurrent HTTP request limit is hit?

I know that different browsers have different amounts of concurrent connections they can handle to the same hostname, but what exactly happens to a new request when that limit is hit?
Does it automatically wait and retry again later or is there something I need to do to help this process along?
Specifically, if this is a XMLHttpRequest executed via JavaScript and not just some assets being loaded by the browser from markup, could that automatically try again?
I have a client-side library that makes multiple API requests, and occasionally it tries to send too many too quickly. When this happens, I can see server-side API errors, but this doesn't make sense: if the concurrency limit stopped the requests, they would never have hit the server, would they?
Update: Thanks to #joshstrike and some more testing, I've discovered that my actual problem was not related to concurrent HTTP request limits in the browser. I am not sure these even apply to JavaScript API calls. I have a race condition in the specific API calls I'm making, which gave an error that I initially misunderstood.
The browser will not retry any request on its own if that request times out on the server (for whatever reason, including if you exceed the API's limits). It's necessary to check the status of each request and handle retrying it in some way that's graceful for the application and the user. For failed requests you can check the status code. However, for requests that simply hang for a long time it may be necessary to attach a counter to your request and "cancel" it after a delay... Then, if a result comes back bearing the number of one that has already been canceled, ignore that result if a newer one has already returned. This is what typically happens in a long-polling application that is hitting a server constantly without knowing whether some pings will return later or never return at all.
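A sketch of that counter idea, with a hypothetical poll() helper and '/api/poll' endpoint: each request gets a sequence number, hung requests are aborted after a delay, and any result older than the latest one already applied is ignored.

```javascript
// Sketch: number each request, cancel hung ones, ignore stale results.
let latestRequest = 0;  // id of the most recent request sent
let latestApplied = 0;  // id of the most recent result actually used
const render = (data) => console.log('latest data:', data); // stand-in for real rendering

function poll() {
  const id = ++latestRequest;
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 15000); // cancel if it hangs

  fetch('/api/poll', { signal: controller.signal })
    .then((res) => res.json())
    .then((data) => {
      if (id <= latestApplied) return;  // a newer response already came back
      latestApplied = id;
      render(data);
    })
    .catch(() => { /* timed out, aborted, or failed: drop this result */ })
    .finally(() => clearTimeout(timer));
}
```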
When the limit is reached in Chrome, it pauses any further requests. Once one request has been responded to, the browser sends the next one. In Chrome that limit is six for me.

reducing roundtrip api requests on initial page load

Whenever I make a single-page HTML5 app, I generally have the following procedure:
The user requests a page from the server, and the server responds with the appropriate HTML.
Once the page returns, JavaScript on the client side requests documents from the server (the current user, requested docs, etc.)
The client waits for yet another response before rendering, often resulting in a 'flicker' or necessitating a loading icon.
Are there any strategies for preloading the initial document requests and somehow attaching them (as a JavaScript object or array) to the initial page response?
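One common pattern, sketched here with hypothetical names, is to have the server embed the initial documents as JSON inside the HTML it already returns, so the client can render immediately instead of firing follow-up requests:

```javascript
// Server side (Express sketch): inline the initial data into the page.
const express = require('express');
const app = express();
const loadInitialDocs = async (user) => ({ user, docs: [] }); // stand-in for real data loading

app.get('/', async (req, res) => {
  const initialData = await loadInitialDocs(req.user);
  // Note: in production the JSON should be escaped so it can't break out of the <script> tag.
  res.send(`<!doctype html>
    <html><body>
      <div id="app"></div>
      <script>window.__INITIAL_DATA__ = ${JSON.stringify(initialData)};</script>
      <script src="/app.js"></script>
    </body></html>`);
});

app.listen(3000);

// Client side (app.js): read the embedded object instead of requesting it again.
const initial = window.__INITIAL_DATA__;
renderApp(initial); // renderApp() is an assumed client entry point
```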
