I have a React Redux server-rendered application that runs well.
When I go to different pages, I measured some server response times:
On my /signin page, I got the response in 60ms. Good.
On my /user page (which requires authentication), I got the response in 300ms, because the server has to prefetch the current user session and the current user by asking an API, in order to know if the user has access to this page.
On my /user/graph page, I got the response in 600ms, because the server has to prefetch the current user session, the current user, and the graph data by asking an API.
The main issue is that I can't parallelize all these requests.
Here is the server flow:
Receive page request for /user/graph
Fetch /api/user/session and /api/user/me in parallel
Do the route matching with React-Router (it needs to know if the user is authenticated or not to redirect him)
At this point, the server knows the components that will be rendered. For each of them, it fetches the needed data in parallel. That means fetching /api/graph, /api/graphConstants1, /api/graphConstants2, etc.
React rendering
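A minimal sketch of that flow, assuming an Express handler and Node 18+ (so fetch is global); matchRoutes(), renderApp(), API_URL, and the static fetchData() method on each route component are placeholders for your own routing, rendering, and data-loading code:

```js
app.get('*', async (req, res) => {
  // Step 2: fetch session and current user in parallel
  const [session, me] = await Promise.all([
    fetch(`${API_URL}/api/user/session`, { headers: { cookie: req.headers.cookie } }).then(r => r.json()),
    fetch(`${API_URL}/api/user/me`, { headers: { cookie: req.headers.cookie } }).then(r => r.json()),
  ]);

  // Step 3: route matching needs the auth state to decide whether to redirect
  const { components, redirectTo } = matchRoutes(req.url, { session, me });
  if (redirectTo) return res.redirect(302, redirectTo);

  // Step 4: every matched component's data is fetched in parallel
  await Promise.all(
    components.map(c => (c.fetchData ? c.fetchData({ session, me }) : Promise.resolve()))
  );

  // Step 5: render with all data already in place
  res.send(renderApp(req.url, { session, me }));
});
```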
Here are my questions:
What is the best practice?
How could I decrease this initial rendering time due to prefetching requests?
Should I do big requests (like /api/graph) only on the client? But what is the purpose of server rendering then?
Should I ask my api team to create a custom super method only for the rendering server to retrieve all data in one request?
Here is the server flow:
Receive page request for /user/graph
Fetch /api/user/session and /api/user/me in parallel
Do the route matching with React-Router (it needs to know if the user is authenticated or not to redirect him)
At this point, the server knows the components that will be rendered. For each of them, it fetches the needed data in parallel. That means fetching /api/graph, /api/graphConstants1, /api/graphConstants2, etc.
React rendering
This looks perfect to me; this is how a microservices-based architecture should behave.
As you stated, most of the API calls are made in parallel, so there is not much to worry about regarding the latency between successive API calls.
The main issue is that I can't parallelize all these requests.
However, you could ask the API team to provide an umbrella API for the desired microservices. In that scenario the API team needs to handle the processing in parallel or with multiple threads.
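For example, a hypothetical umbrella endpoint on the API side could fan out to the individual microservices in parallel and return one combined payload, so the rendering server pays for a single round trip (Express and Node 18+ assumed; the /api/page/user-graph route and INTERNAL_API are made-up names):

```js
app.get('/api/page/user-graph', async (req, res) => {
  const headers = { cookie: req.headers.cookie };
  // One fan-out on the API side replaces several round trips from the rendering server
  const [session, me, graph, constants1, constants2] = await Promise.all([
    fetch(`${INTERNAL_API}/api/user/session`, { headers }).then(r => r.json()),
    fetch(`${INTERNAL_API}/api/user/me`, { headers }).then(r => r.json()),
    fetch(`${INTERNAL_API}/api/graph`, { headers }).then(r => r.json()),
    fetch(`${INTERNAL_API}/api/graphConstants1`, { headers }).then(r => r.json()),
    fetch(`${INTERNAL_API}/api/graphConstants2`, { headers }).then(r => r.json()),
  ]);
  res.json({ session, me, graph, constants1, constants2 });
});
```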
Fetch /api/user/session and /api/user/me in parallel
It looks like you are calling /api/user/session to validate the user's session. However, shouldn't you leverage caching for /api/user/me?
I guess /api/user/me is a GET request, so one approach to reduce this sort of call is that the data returned by this API could easily be returned by the signin API when signin is successful, and then cached.
On any update to user data, i.e. on any POST, PUT, or DELETE call, the existing cached data can be purged, and the APIs can return the latest state of the user, which will be used to warm up the cache.
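A rough sketch of that idea on the rendering server, assuming an in-memory cache keyed by the session token (the cache, the API host, and the function names are hypothetical; only /api/user/me comes from the question):

```js
const userCache = new Map();

async function getCurrentUser(sessionToken) {
  // Warm cache hit: no round trip to /api/user/me
  if (userCache.has(sessionToken)) return userCache.get(sessionToken);

  const res = await fetch('https://api.example.com/api/user/me', {
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
  const user = await res.json();
  userCache.set(sessionToken, user);
  return user;
}

// On any POST/PUT/DELETE that touches the user, purge the stale entry;
// the mutating API's response can be used to warm the cache again.
function invalidateUser(sessionToken, freshUser) {
  if (freshUser) userCache.set(sessionToken, freshUser);
  else userCache.delete(sessionToken);
}
```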
PS: This sort of decision cannot and should not be made in a Stack Overflow answer; it needs in-depth discussion and agreement between the API provider and consumer.
Related
I have a very slow API which I don't have access to change.
It has endpoints like /data?param1=option1&param2=option2.
Is it a good idea to create a service worker which will cache the responses of these requests and periodically refetch them (with a JWT for authentication)? The data changes pretty rarely, like once a week. Are there any caveats?
Nope, that's totally fine.
Probably the most important problem you will encounter is, as usual with caching, the cache invalidation process.
In your case, if you strongly decide to cache responses for a limited amount of time without additional conditions, you probably can add a header to every cached response, which will hold a caching date.
Here's the article that explains the idea: How to set an expiration date for items in a service worker cache
In a nutshell, when you intercept a response, you unpack it, add a header with the current timestamp inside, then put it into the SW cache. Next, every time you intercept a request to the same endpoint, you get the response from the cache and check the timestamp in the header you set. Then you return the cached response or refetch the data, depending on the check result.
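A minimal sketch of that idea (the 'api-cache' name, the sw-fetched-on header, the /data URL filter, and the one-week expiry are all assumptions):

```js
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // data changes about once a week

self.addEventListener('fetch', (event) => {
  if (!event.request.url.includes('/data')) return; // only handle the slow API

  event.respondWith((async () => {
    const cache = await caches.open('api-cache');
    const cached = await cache.match(event.request);
    const fetchedOn = cached && cached.headers.get('sw-fetched-on');

    // Fresh enough: serve straight from the cache
    if (cached && fetchedOn && Date.now() - Number(fetchedOn) < MAX_AGE_MS) {
      return cached;
    }

    // Otherwise refetch, stamp the copy with the current time, store it, return it
    const response = await fetch(event.request);
    const headers = new Headers(response.headers);
    headers.set('sw-fetched-on', String(Date.now()));
    const body = await response.clone().blob();
    const stamped = new Response(body, {
      status: response.status,
      statusText: response.statusText,
      headers,
    });
    await cache.put(event.request, stamped.clone());
    return stamped;
  })());
});
```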
Another thing to think about is the refetching strategy. There are different approaches here:
Cache-First. It means that you always return the cached data to your app, and then go refetch. This way, even when it's time to show fresh data, your user will get the data from the cache, and only the next time will they get the fresh data. Not so pleasant for users, but this way the loading time will always be blazing fast (except on the first visit, of course).
Network-First. Vice versa. This way users will wait for the fresh data, but it will always be up to date.
Stale-While-Revalidate. Nice, but harder to implement. You return the cached value; then, when the SW has fetched the fresh one, you trigger your app to re-render the page (see the sketch below).
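A minimal stale-while-revalidate sketch (the cache name is arbitrary, and the postMessage notification is just one way to trigger the re-render):

```js
self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    const cache = await caches.open('api-cache');
    const cached = await cache.match(event.request);

    // Refetch in the background and update the cache when the fresh data lands
    const refetch = fetch(event.request).then(async (response) => {
      await cache.put(event.request, response.clone());
      // Let controlled pages know fresh data is available so they can re-render
      const pages = await self.clients.matchAll();
      pages.forEach((page) => page.postMessage({ type: 'fresh-data', url: event.request.url }));
      return response;
    });

    // Serve the stale copy immediately when we have one, otherwise wait for the network
    return cached || refetch;
  })());
});
```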
Here's a nice tutorial explaining how all this caching works: Service workers tutorial
Is there a mechanism in a service worker, which I am unaware of, that can let me know which page a fetch request was fired from?
Example:
I have an HTML page uploaded to my website at /aPageUploadedByUser/onMySite/index.html. I want to make sure none of the authorization headers get passed to any fetch request made by /aPageUploadedByUser/onMySite/index.html to any source.
If my service worker somehow knows the originating page of a request, I can modify the request for safety.
To answer your question (but with a caveat that this is not a good idea!), there are two general approaches, each with their own drawbacks:
The Referer: request header is traditionally set to the URL of the web page that made a request. However, this header may not always be set for all types of requests.
A FetchEvent has a clientId property, and that value can be passed to clients.get() to obtain a reference to the Client that created the request. The Client, in turn, has a url property. The caveats here are that clients.get() is asynchronous (meaning you can't use its result to determine whether or not to call event.respondWith()), and that the URL of the Client may have changed in the interval between the request being made and when you read the url property.
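For illustration, a sketch of the second approach (logging only; note that the clients.get() lookup has to happen inside the promise passed to respondWith, since it is asynchronous):

```js
self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    // clientId may be empty (e.g. for navigation requests), and clients.get() may return undefined
    const client = event.clientId ? await self.clients.get(event.clientId) : undefined;
    const origin = client ? client.url : 'unknown';
    console.log(`Request for ${event.request.url} came from ${origin}`);
    return fetch(event.request);
  })());
});
```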
With those approaches outlined, the bigger problem is that using a service worker in this way is not a safe practice. The first time a user visits your origin, they won't have a service worker installed. And then on follow-up visits, if a user "hard" reloads the page with the shift key, the page won't be controlled by a service worker either. You can't rely on a service worker's fetch event handler to implement any sort of critical validation.
I'm looking for techniques or skills to decide on an approach for a new web site.
This site shows real-time data which lives on the server as a file or as data in memory.
I'll use Node.js for the server side, but I can't decide how to get the data and show it to web site users,
because this data has to update at least once per second.
I think it is similar to a stock price page.
I know there are a lot of ways to access data, like AJAX, Angular.js, Socket.io...
Also, each has pros and cons.
Which platform or framework is good in this situation?
This ultimately depends on how much control you have over the server side. For data that needs to be refreshed every second, doing the polling on the client side would place quite a load on the browser.
For instance, you could do it by simply using one of the many available frameworks to make HTTP requests inside some form of interval. The downsides to this approach include:
the interval needs to run in the background the whole time the user is on the page
an HTTP request needs to be made on every interval to check whether the data has changed
comparison of the data also needs to be performed by the browser, which can be quite heavy at 1-second intervals
If you have some server control, it would be advisable to poll the data source on the server, e.g. using a proxying microservice, and use the server to perform change checking and only send data to clients when it has changed.
You could use Websockets to communicate those changes via a "push" style message instead of making the client browser do the heavy lifting. The flow would go something like:
server starts polling when a new client starts listening on its socket
server makes http requests for each polling interval, runs comparison for each result
when result has changed, server broadcasts a socket message to all connected clients with new data
The main advantage to this is that all the client needs to do is "connect and listen". This even works with data sources you don't control – the server you provide can perform any data manipulation needed before it sends a message to the client, the source just needs to provide data when requested.
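A rough sketch of that flow with socket.io (the socket.io package, the DATA_URL source, and the 1-second interval are assumptions; Node 18+ is assumed so fetch is global):

```js
const http = require('http');
const { Server } = require('socket.io');

const DATA_URL = 'http://localhost:4000/data'; // hypothetical data source
const server = http.createServer();
const io = new Server(server);

let lastPayload = null;
let pollTimer = null;

async function poll() {
  try {
    const payload = await (await fetch(DATA_URL)).text();
    // The server does the change detection; clients only hear about real changes
    if (payload !== lastPayload) {
      lastPayload = payload;
      io.emit('data', payload);
    }
  } catch (err) {
    console.error('Polling failed:', err);
  }
}

io.on('connection', (socket) => {
  // Start polling once the first client is listening
  if (!pollTimer) pollTimer = setInterval(poll, 1000);
  // Give new clients the latest known value right away
  if (lastPayload !== null) socket.emit('data', lastPayload);

  socket.on('disconnect', () => {
    // Stop polling when nobody is connected anymore
    if (io.engine.clientsCount === 0) {
      clearInterval(pollTimer);
      pollTimer = null;
    }
  });
});

server.listen(3000);
```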
EDIT: I just published a small library that accomplishes this goal: Mighty Polling ⚡️ Socket Server. It is still young, so examine it for your use case before adopting it.
I am using PhoneGap to build apps. Currently I need to control concurrent user access using the same login id.
I need to send an AJAX request to the web API server to check whether another user is using the same login id. This AJAX request needs to be sent every 30 seconds.
If the API server returns 'Y', meaning more than one person is using the same login account, I need to close the app and return to the login screen.
What I worry about is that, if I use setInterval(), the AJAX call will have an impact on the main UI. In this situation, should I use web workers?
Use Ajax requests. They are asynchronous by default, and waiting for the server to return the response will not in any way impact the main UI.
Also note there are many techniques that may provide a better user experience than polling every 30 seconds. Check out web sockets if you are interested in an alternative implementation.
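For example, a plain asynchronous XMLHttpRequest inside setInterval will not block the UI (the /api/checkLogin endpoint, the 'Y' response value, and the loginId variable are assumptions standing in for your setup):

```js
setInterval(function () {
  var xhr = new XMLHttpRequest();
  // Third argument true = asynchronous, so the UI thread is not blocked while waiting
  xhr.open('GET', '/api/checkLogin?loginId=' + encodeURIComponent(loginId), true);
  xhr.onload = function () {
    if (xhr.responseText === 'Y') {
      // Someone else is using this account: drop back to the login screen
      window.location.href = 'login.html';
    }
  };
  xhr.send();
}, 30000); // every 30 seconds
```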
I used the OAuth gem to acquire an access token. In my code, I can write:
access_token.get("/1#{path}")
where path is some API query. But I want to do these queries asynchronously, client-side, with no page refreshing.
I would like to know the best way to pass the API querying to the AJAX after authenticating with OAuth, and examples or an explanation of how to do so.
For example, I wish to display 20 followers per page, but when I click 'next page', it will just refresh the 20 on screen.
Your biggest problem will probably be the Same Origin Policy, i.e. you will not be able to access data on the API provider's domain.
You have two options.
The first is to make your own server-side dispatcher that will do your API calls for you. Call this from your client code. If you need to do any POST requests, this is actually the only solution.
The second option depends on whether your API provider accepts JSONP requests. If it does, you can at least do GET requests directly to the API endpoint without going via your own dispatcher.
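For example, with the first option the client only ever talks to your own origin, and the server-side dispatcher wraps access_token.get behind a normal endpoint (the /followers route, jQuery, and the renderFollowers helper are all assumptions for illustration):

```js
var currentPage = 1;

function loadFollowersPage(page) {
  // Same-origin call to your own dispatcher, which performs the OAuth-signed API request server-side
  $.getJSON('/followers', { page: page, per_page: 20 }, function (followers) {
    renderFollowers(followers); // hypothetical helper that swaps the 20 rows on screen
  });
}

// "Next page" just refetches and re-renders; no full page refresh
$('#next-page').on('click', function () {
  currentPage += 1;
  loadFollowersPage(currentPage);
});
```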