Reducing roundtrip API requests on initial page load - JavaScript

Whenever I make a single-page HTML5 app, I generally have the following procedure:
The user requests a page from the server, and the server responds with the appropriate HTML.
Once the page returns, JavaScript on the client side requests documents from the server (the current user, requested docs, etc.).
The client waits for yet another response before rendering, often resulting in a 'flicker' or necessitating a loading icon.
Are there any strategies for preloading the initial document requests and somehow attaching them -- as a JavaScript object or array -- to the initial page response?
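One common approach (a sketch, not from the original question; getCurrentUser, getRequestedDocs, and the render call are placeholders) is to have the server serialize the initial data into the HTML response, so the client can render immediately without extra round trips:

```javascript
// Server side (Express-style sketch): inline the bootstrap data into the initial HTML.
app.get('/', async (req, res) => {
  const [user, docs] = await Promise.all([getCurrentUser(req), getRequestedDocs(req)]);
  // Escape "<" so the embedded JSON cannot close the script tag early.
  const bootstrap = JSON.stringify({ user, docs }).replace(/</g, '\\u003c');
  res.send(`<!DOCTYPE html>
<html>
  <body>
    <div id="app"></div>
    <script>window.__BOOTSTRAP__ = ${bootstrap};</script>
    <script src="/app.js"></script>
  </body>
</html>`);
});

// Client side (app.js): read the preloaded data instead of firing the usual initial API calls.
const { user, docs } = window.__BOOTSTRAP__ || {};
render(user, docs); // placeholder render function; fall back to fetching if the data is absent
```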

Related

Does getServerSideProps generate the whole page on each request on the server?

I learnt that getServerSideProps will generate the whole page on each request instead of doing it at build time. Does getServerSideProps generate the whole page on each request on the server, or does it only regenerate the page on the server if the data has changed in the database?
If it does generate a whole page on each request on the server, isn't vanilla React doing the same thing?
(Except that it generates the page on the client's end, and it calls the APIs regardless of whether the data has changed or not.)
getServerSideProps pre-renders the page on every request. That means the page is sent with all the data already filled in, which is very different from sending an empty page and then making API calls from the client.
There is also an option to cache by setting cache headers in the response, so if the data remains the same, a particular browser can cache the page and the API calls are not made again.
As long as the props sent to the page can change, SSR has to occur on every request; getServerSideProps always runs against the incoming request unless caching is activated.
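For reference, here is a minimal getServerSideProps sketch (the /api/items endpoint and the Items component are made up for illustration); the Cache-Control header shows the caching option mentioned above:

```javascript
// pages/items.js -- pre-rendered on the server for every request.
export async function getServerSideProps({ res }) {
  // Optional: let a CDN/browser reuse the response briefly instead of re-rendering every hit.
  res.setHeader('Cache-Control', 'public, s-maxage=10, stale-while-revalidate=59');

  const data = await fetch('https://example.com/api/items').then((r) => r.json());
  return { props: { data } }; // the HTML arrives with the data already filled in
}

export default function Items({ data }) {
  return <ul>{data.map((item) => <li key={item.id}>{item.name}</li>)}</ul>;
}
```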

How to determine, in a service worker, which HTML5 page originated the fetch request?

Is there a mechanism in a service worker, which I am unaware of, that allows me to know from which page a fetch request was fired?
Example:
I have an HTML page uploaded to my website at /aPageUploadedByUser/onMySite/index.html, and I want to make sure none of the authorization headers get passed along with any fetch request made by /aPageUploadedByUser/onMySite/index.html to any source.
If my service worker does somehow know the originating page of a request, I can adjust the request for safety.
To answer your question (but with a caveat that this is not a good idea!), there are two general approaches, each with their own drawbacks:
The Referer: request header is traditionally set to the URL of the web page that made a request. However, this header may not always be set for all types of requests.
A FetchEvent has a clientId property, and that value can be passed to clients.get() to obtain a reference to the Client that created the request. The Client, in turn, has a url property. The caveats here are that clients.get() is asynchronous (meaning you can't use its result to determine whether or not to call event.respondWith()), and that the URL of the Client may have changed in the interval between the request being made and when you read the url property.
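A rough sketch of that second approach (illustrative only, given the caveats above and below; the path check is just an example):

```javascript
// Inside the service worker: look up the page that issued the request.
self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    // clients.get() is asynchronous, so the decision has to happen inside respondWith().
    const client = await self.clients.get(event.clientId);
    const pageUrl = (client && client.url) || event.request.headers.get('Referer');

    if (pageUrl && new URL(pageUrl).pathname.startsWith('/aPageUploadedByUser/')) {
      // Re-issue the request without credentials (note: this sketch drops any request body).
      const headers = new Headers(event.request.headers);
      headers.delete('Authorization');
      return fetch(event.request.url, { method: event.request.method, headers });
    }
    return fetch(event.request);
  })());
});
```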
With those approaches outlined, the bigger problem is that using a service worker in this way is not a safe practice. The first time a user visits your origin, they won't have a service worker installed. And then on follow-up visits, if a user "hard" reloads the page with the shift key, the page won't be controlled by a service worker either. You can't rely on a service worker's fetch event handler to implement any sort of critical validation.

Client access vs broadcast data from web server

I'm looking for a technique or an approach to choose for a new web site.
The site shows real-time data that lives on the server, either as a file or as data in memory.
I'll use Node.js on the server side, but I can't decide how to get the data and show it to the site's users.
The data has to update at least once per second.
I think it is similar to a stock price page.
I know there are a lot of ways to access the data, like AJAX, Angular.js, Socket.io, and so on.
Each has its pros and cons.
Which platform or framework is good in this situation?
This ultimately depends on how much control you have over the server side. For data that needs to be refreshed every second, doing the polling on the client side would place quite a load on the browser.
For instance, you could simply use one of the many available frameworks to make HTTP requests inside some form of interval (sketched after the list below). The downsides to this approach include:
the interval needs to run in the background the whole time the user is on the page
an HTTP request needs to be made on every interval to check whether the data has changed
the comparison of old and new data also needs to be performed by the browser, which can be quite heavy at 1-second intervals
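A sketch of that naive client-side polling (the /api/data endpoint and render call are placeholders):

```javascript
// Client-side polling: the browser does all the work, once per second.
let lastPayload = null;

setInterval(async () => {
  const res = await fetch('/api/data');
  const payload = await res.text();

  if (payload !== lastPayload) { // change detection runs in the browser on every tick
    lastPayload = payload;
    render(JSON.parse(payload)); // placeholder render function
  }
}, 1000);
```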
If you have some control over the server, it would be advisable to poll the data source on the server, e.g. using a proxying microservice, and let the server perform the change checking so data is only sent to clients when it has changed.
You could use WebSockets to communicate those changes via a "push"-style message instead of making the client browser do the heavy lifting. The flow would go something like this:
the server starts polling when a new client starts listening on its socket
the server makes an HTTP request on each polling interval and compares the result with the previous one
when the result has changed, the server broadcasts a socket message with the new data to all connected clients
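A minimal server-side sketch of that flow, assuming the ws package, Node 18+ (for global fetch), and a placeholder data-source URL:

```javascript
// Poll the data source once per second and push changes to all connected clients.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
let lastPayload = null;

setInterval(async () => {
  if (wss.clients.size === 0) return; // only poll while someone is listening

  const res = await fetch('https://example.com/source-of-truth'); // the data source you may not control
  const payload = await res.text();

  if (payload !== lastPayload) { // change detection happens once, on the server
    lastPayload = payload;
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(payload); // broadcast to every browser
    }
  }
}, 1000);
```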
The main advantage to this is that all the client needs to do is "connect and listen". This even works with data sources you don't control – the server you provide can perform any data manipulation needed before it sends a message to the client, the source just needs to provide data when requested.
EDIT: I just published a small library that accomplishes this goal: Mighty Polling ⚡️ Socket Server. It's still young, so evaluate it carefully before using it.

Server rendering: prefetching data performance

I have a React Redux server-rendered application that runs well.
When I go to different pages, I measured the following server response times:
On my /signin page, I got the response in 60 ms. Good.
On my /user page (which requires authentication), I got the response in 300 ms, because the server has to prefetch the current user session and the current user by asking an API whether the user has access to this page.
On my /user/graph page, I got the response in 600 ms, because the server has to prefetch the current user session, the current user, and the graph data by asking an API.
The main issue is that I can't parallelize all these requests.
Here is the server flow:
Receive page request for /user/graph
Fetch /api/user/session and /api/user/me in parallel
Do the route matching with React Router (it needs to know whether the user is authenticated in order to redirect them)
At this point, the server knows the components that will be rendered. For each of them, it fetches the needed data in parallel; that means fetching /api/graph, /api/graphConstants1, /api/graphConstants2, etc.
React rendering
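For illustration, a sketch of steps 2-4 of that flow (api.get, matchAppRoutes, loadData, and renderToHtml are stand-ins for whatever HTTP client, router, and rendering helpers the app actually uses):

```javascript
// Express-style SSR handler: prefetch in parallel, then render.
app.get('*', async (req, res) => {
  // Step 2: session and current user in parallel.
  const [session, user] = await Promise.all([
    api.get('/api/user/session', req),
    api.get('/api/user/me', req),
  ]);

  // Step 3: route matching (needs the auth result to decide on redirects).
  const matched = matchAppRoutes(req.url, { session, user });

  // Step 4: every matched component declares the data it needs; fetch it all in parallel.
  const data = await Promise.all(
    matched.map((component) => (component.loadData ? component.loadData(req) : null))
  );

  // Step 5: render with everything prefetched.
  res.send(renderToHtml(matched, { session, user, data }));
});
```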
Here are my questions:
What is the best practice?
How could I decrease this initial rendering time due to prefetching requests?
Should I do the big requests (like /api/graph) only on the client? But then what is the purpose of server rendering?
Should I ask my API team to create a custom super method, only for the rendering server, that retrieves all the data in one request?
Here is the server flow:
Receive page request for /user/graph
Fetch /api/user/session and /api/user/me in parallel
Do the route matching with React Router (it needs to know whether the user is authenticated in order to redirect them)
At this point, the server knows the components that will be rendered. For each of them, it fetches the needed data in parallel; that means fetching /api/graph, /api/graphConstants1, /api/graphConstants2, etc.
React rendering
This looks perfect to me; it is how a microservices-based architecture should behave.
As you stated, most of the API calls are already made in parallel, so there is not much to worry about regarding the latency between successive API calls.
The main issue is that I can't parallelize all these requests.
However, you could ask the API team to provide an umbrella API for the desired microservices. In that scenario, the API team needs to handle the processing in parallel or with multiple threads.
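Such an umbrella endpoint could look like the following sketch (route and service names are hypothetical; it assumes Node 18+ for global fetch), so the rendering server makes one call instead of several:

```javascript
// Umbrella endpoint on the API side: fan out to the microservices in parallel, return one payload.
app.get('/api/page/user-graph', async (req, res) => {
  const auth = { Authorization: req.get('Authorization') || '' }; // forward only the auth header

  const [session, me, graph] = await Promise.all([
    fetch(`${API_BASE}/api/user/session`, { headers: auth }).then((r) => r.json()),
    fetch(`${API_BASE}/api/user/me`, { headers: auth }).then((r) => r.json()),
    fetch(`${API_BASE}/api/graph`, { headers: auth }).then((r) => r.json()),
  ]);

  res.json({ session, me, graph });
});
```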
Fetch /api/user/session and /api/user/me in parallel
It looks like you are calling /api/user/session to validate the user's session. However, shouldn't you leverage caching for /api/user/me?
I assume /api/user/me is a GET request; one way to reduce this sort of call is that the data returned by this API could easily be included in the signin API's response when signin is successful, and then cached.
On any update to the user data, i.e. on any POST, PUT, or DELETE call, the existing cached data can be purged, and those APIs can return the latest state of the user, which is then used to warm up the cache again.
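A sketch of that cache flow (the in-memory Map, the req.userId middleware, and the authenticate/loadUserFromDb/updateUser helpers are assumptions):

```javascript
// Cache /api/user/me per user: warm it at signin, refresh it on any mutation.
const userCache = new Map(); // userId -> user object

app.post('/signin', async (req, res) => {
  const user = await authenticate(req.body); // hypothetical auth helper
  userCache.set(user.id, user);              // warm the cache with the signin response
  res.json({ user });
});

app.get('/api/user/me', async (req, res) => {
  let user = userCache.get(req.userId);      // req.userId assumed set by auth middleware
  if (!user) {
    user = await loadUserFromDb(req.userId); // fall back to the source of truth on a miss
    userCache.set(req.userId, user);
  }
  res.json(user);
});

app.put('/api/user/me', async (req, res) => {
  const updated = await updateUser(req.userId, req.body);
  userCache.set(req.userId, updated);        // the mutation returns the latest state, re-warming the cache
  res.json(updated);
});
```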
PS: This sort of decision can't (and shouldn't) be made in a Stack Overflow answer; it needs in-depth discussion and agreement between the API provider and the consumer.

Configure browser request timeout for external resources

Is it possible to configure the browser's timeout for pending requests via JavaScript?
By browser requests, I mean requests that the browser executes in order to fetch static resources like images referenced by URL in HTML files, fonts, etc. (I'm not talking about XMLHttpRequests made intentionally by the application to fetch dynamic content from a back-end server; I'm talking about requests that I'm unable to control, made automatically by the browser, like the aforementioned ones).
Sometimes the requests for such resources stay pending, occupying a connection (and the number of connections is limited). I'd like to time out (or cancel) those requests so they don't keep occupying a connection.
