Is it better to compose multiple AJAX calls in parallel or serially? - javascript

I'm developing a single-page application which sends multiple AJAX requests to the server.
The system works with polling, because some data requests can take about 10-20 minutes to calculate:
client asks server for data
server hands out a job-id
client asks server every few seconds for the result
The polling algorithm lowers the polling frequency over time, settling at an interval of 10 seconds.
But when a client sends several different data requests in a short time, it ends up with about 10-20 job-ids and starts polling for all of them.
Is it better to simply do it this way and let the browser handle those requests in parallel or should I schedule every request and serialize them all?
Would it bring performance benefits to serialize them?

If each initial request returns a unique job id and each page has a unique user id, then you can poll once for the status of all of that user's requests.
In the JSON I would return the results for any completed requests, and the current status of those that haven't completed, such as whether processing has started, and perhaps a percentage of completion or how many requests are ahead of it in the queue.
This simplifies the work, as you won't be making several polling calls but just one, getting back a compound result that gives the user feedback on the status of each request.
I find it useful to give some status information for long-running queries; otherwise the user may think the request was lost.
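A minimal sketch of what that single consolidated polling loop might look like on the client. The /api/status endpoint, the response shape, and the handleResult()/updateProgress() callbacks are all assumptions for illustration:

    // Track outstanding job ids and poll for all of them in one request.
    const pending = new Set();

    function track(jobId) {
      pending.add(jobId);
      if (pending.size === 1) poll();        // start the loop on the first job
    }

    async function poll() {
      if (pending.size === 0) return;        // nothing left to wait on
      // one request covers every outstanding job instead of one per job
      const res = await fetch('/api/status?ids=' + [...pending].join(','));
      const statuses = await res.json();     // e.g. [{id, done, progress, result}]
      for (const s of statuses) {
        if (s.done) {
          pending.delete(s.id);
          handleResult(s.id, s.result);      // hand the finished data to the UI
        } else {
          updateProgress(s.id, s.progress);  // show % done or queue position
        }
      }
      setTimeout(poll, 10000);               // re-poll at the capped 10s interval
    }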

Some months ago I faced performance issues due to multiple AJAX calls, but I haven't investigated this topic more deeply since then: High latencies loading stores in an ExtJS 4.1 MVC application.

Related

Fastest way to make a million POST requests to a cloud function?

I have an array with a length of one million. Each element is a string. I have a cloud function that takes a string and processes it. What is the fastest way to POST all million strings in my array to the cloud function? I don't care about the response of the cloud function. Ideally, it would POST, not wait for a response, then move on and POST the next one, iterating through the entire list as fast as possible. The issue is that, apparently, with HTTP you cannot avoid waiting for the response. Each cloud function takes about 10 seconds to execute, so if I need to wait for each response before moving to the next one, this would take 10 million seconds, whereas if I could POST each one without waiting, I could probably run through the entire array in a few seconds.
A lot of this has been covered before in prior questions/answers, but none that I found is a pure duplicate of what you're asking so I'll reference some that have come before and add some explanation. First the ones that have come before:
How to make millions of parallel http requests from nodejs app
How to fire off 1,000,000 requests
Is there a limit to how many promises can or should run concurrently when making requests
In Node js. How many simultaneous requests can I send with the "request" package
What is the limit of sending concurrent ajax requests with node.js?
How to loop many http requests with axios in node.js
Handling large number of outbound HTTP requests
Promise.all consumes all my RAM
Properly batch nested promises in Node
How can I handle a file of 30,000 urls without memory leaks?
First off, you can send a lot of parallel outbound requests. You do not have to wait for a prior response before sending the next one.
Second, you have resource limits on both client and server and ultimately, you will have to explore with testing your local configuration and your target server to find out where those resource limits are and then write your code to stay within those limits. There is no way to reliably send a request and then immediately kill the socket because you don't care about the response. If your socket gets queued by the target server (because you've already overwhelmed it), then killing the socket may drop it from the target server's queue before it gets processed by the target server.
Your local configuration will be limited by how many simultaneous sockets you can have open and how much memory you have (as each outbound request takes some amount of memory to keep track of).
The target server will be limited by its own resources. It may have protections built in to limit how many posts/sec it can receive from one particular source (rate limiting). It may have overall protections on how many incoming requests it can handle at once. Typically servers protect themselves from overload by configuring things so that once the incoming request queue reaches a certain level, they just immediately hang up on new requests. The idea is to preserve some level of service and deflect new requests when they come in too fast.
If this isn't your target server and there isn't any documentation about what its limits are supposed to be, then you will just have to test how many simultaneous requests you can have "in-flight" at the same time. If they implement rate limiting from a given source, then it's not uncommon that this might be a fairly low number such as 5. If there is no rate limiting, then you're really just trying to figure out what their http server can handle without causing it to drop connections in defense of service.
Once you figure out (with testing) how many simultaneous requests in flight the target server can comfortably handle, you will have to structure your code to deliver that. Usually, you would take an approach like the one shown in this mapConcurrent() function, where you code things so that only N requests are in flight at the same time, where N is a number you determined experimentally by testing the target server.
Relevant pieces of helper code:
mapConcurrent(array, maxConcurrent, fn)
rateLimitMap(array, requestsPerSec, maxInFlight, fn)
runN(fn, limit, cnt, options)
pMap(array, fn, limit)
And, if you want a pre-made library, the async library contains a bunch of control flow helpers like these.
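As a rough illustration, here is a sketch of the general technique (not the exact code behind those links): run fn on every item of the array, but keep at most maxConcurrent calls in flight at once.

    function mapConcurrent(array, maxConcurrent, fn) {
      return new Promise((resolve, reject) => {
        const results = new Array(array.length);
        let index = 0;        // next item to start
        let inFlight = 0;     // how many calls are currently running

        function runNext() {
          if (index === array.length && inFlight === 0) {
            return resolve(results);         // all done (or nothing to do)
          }
          while (inFlight < maxConcurrent && index < array.length) {
            const i = index++;
            inFlight++;
            fn(array[i], i).then(result => {
              results[i] = result;
              inFlight--;
              runNext();                     // a slot opened up, start another
            }, reject);
          }
        }
        runNext();
      });
    }

    // usage: at most 5 requests in flight at any time
    mapConcurrent(urls, 5, url => fetch(url).then(r => r.json()))
      .then(results => console.log(results));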

What is the best practice for building an API that takes a long time to respond?

I am building an API endpoint that has to call multiple external services/DBs, and I do not want my users to have to wait for this process to take place; however, the result of this process is essential for my users.
My first thought is to add the request to a queue and return immediately, then at some later time, the user can query a different endpoint for the result.
Is there a better way to go about this? Should there be a webhook response instead of asking users to query the API twice?
Three main ways I've seen:
Client sends the API request and immediately gets back a job number. The client can then send a different API request with that job number every so often (every minute or so, depending upon how long the result usually takes) to check on the progress. On one of those checks the job will be done and the client gets the data.
Client makes a webSocket or socket.io connection. Client sends a request over that websocket/socket.io connection. Server starts working on the result. When the result is done, it is immediately sent over the webSocket/socket.io connection back to the client. The client can then keep the websocket/socket.io connection connected for other requests or close the connection.
Use Server-Sent Events. Send the query, and when the result is done, the server can send it back on that same connection.
I don't think there's a single "best practice" among these three, as each has advantages and other uses which may be relevant. The polling option #1 is the lowest common denominator and will work in any situation, but it requires a polling strategy on the client and may add some latency (the result may be ready before the client polls).
The choices #2 and #3 are both very efficient and their general technology may have other uses also.
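For illustration, here is a minimal sketch of option #3 using Express and Server-Sent Events. The /long-job route and doLongRunningWork() are hypothetical stand-ins:

    const express = require('express');
    const app = express();

    app.get('/long-job', (req, res) => {
      // keep the connection open as an event stream
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
      });
      doLongRunningWork(req.query).then(result => {
        // push the result down the same connection whenever it's ready
        res.write(`data: ${JSON.stringify(result)}\n\n`);
        res.end();
      });
    });

    app.listen(3000);

    // client side: EventSource fires onmessage when the result arrives
    const source = new EventSource('/long-job?q=example');
    source.onmessage = (e) => {
      const result = JSON.parse(e.data);
      source.close();
    };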

Slow third-party APIs clogging Express server

I am creating a question answering application using Node.js + Express for my back-end. Front-end sends the question data to the back-end, which in turn makes requests to multiple third-party APIs to get the answer data.
Problem is, some of those third-party APIs take too long to respond, since they have to do some intense processing and calculations. For that reason, I have already implemented a caching system that saves the answer data for each different question. Nevertheless, that first request each time might take up to 5 minutes.
Since my back-end server waits and does not respond back to the front-end until data arrives (the connections are being kept open), it can only serve 6 requests concurrently (that's what I have found). This is unacceptable in terms of performance.
What would be a workaround to this problem? Is there a way to not "clog" the server, so it can serve more than 6 users?
Is there a design pattern, in which the servers gives an initial response, and then serves the full data?
Perhaps, something that sets the request to "sleep" and opens up space for new connections?
Your server can serve many thousands of simultaneous requests if things are coded properly and it's not CPU intensive, just waiting for network responses. This is something that node.js is particularly good at.
A single browser, however, will only send a few requests at a time (it varies by browser) to the same endpoint, queuing the others until the earlier ones finish. So, my guess is that you're trying to test this from a single browser. That's not going to test what you really want to test, because the browser itself is limiting the number of simultaneous requests. node.js is particularly good at having lots of requests in flight at the same time; it can easily handle thousands.
But, if you really have an operation that takes up to 5 minutes, that probably won't even work for an http request from a browser because the browser will probably time out an inactive connection still waiting for a result.
I can think of a couple possible solutions:
First, you could make the first http request just start the process and have it return immediately with an ID. Then, the client can check every 30 seconds or so after that, sending the ID in an http request, and your server can respond whether it has the result yet for that ID. This would be a client-polling solution (a server-side sketch follows after the next option).
Second, you could establish a webSocket or socket.io connection from client to server. Then, send a message over that socket to start the request. Then, whenever the server finishes its work, it can just send the result directly to the client over the webSocket or socket.io connection. After receiving the response, the client can either keep the webSocket/socket.io connection open for use again in the future or it can close it.
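A minimal Express sketch of that first (client-polling) solution. The route names, the in-memory jobs Map, and fetchFromThirdPartyApis() are all hypothetical:

    const express = require('express');
    const crypto = require('crypto');
    const app = express();
    app.use(express.json());

    const jobs = new Map();                  // id -> { done, result }

    app.post('/answers', (req, res) => {
      const id = crypto.randomUUID();
      jobs.set(id, { done: false, result: null });
      // kick off the slow work, but don't hold the http request open for it
      fetchFromThirdPartyApis(req.body).then(result => {
        jobs.set(id, { done: true, result });
      });
      res.json({ id });                      // respond immediately with the job id
    });

    app.get('/answers/:id', (req, res) => {
      const job = jobs.get(req.params.id);
      if (!job) return res.status(404).end();
      res.json(job);                         // { done: false } until the result arrives
    });

    app.listen(3000);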

Is there a maximum number of Get requests?

Per the title, is there a maximum number of Get requests?
I need to make a couple hundred GET requests to a REST API in order to dynamically load data into a webpage, but I find that if I build a Promise.all array and output the promise result in the .then, eventually I get undefined due to request timeouts.
Is this due to a limit on the number of connections? Is there a best practice for making large number of simultaneous requests?
Thanks for your insight!
A receiving server has a particular capability for how many simultaneous requests it can handle. It could be a small number or a very large number depending upon a whole bunch of things including the server configuration, the server software architecture, the types of request being sent, etc...
If you're getting timeouts from the server, then you are probably sending enough requests that the server can't process all of them before whatever request timeout is configured (on either client or server) and thus you get a timeout error.
The usual way of handling this on the client is to control how many simultaneous requests you will send at once and then when one finishes, you can send the next and so on. You will have to test to find out what the capabilities are of the receiving server and then you should back off a bit from that to allow other load from other sources some room to execute while your requests are running.
Assuming your requests are not unusually heavy-weight things to do on the server, I would typically test 5 or 10 requests at a time and see how the receiving server handles that.
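As a concrete sketch of that throttling (the function name is made up; 5 is just the starting point suggested above): N async workers pull from a shared index, so at most N requests are ever in flight.

    async function fetchWithLimit(urls, limit = 5) {
      const results = new Array(urls.length);
      let next = 0;
      async function worker() {
        while (next < urls.length) {
          const i = next++;                  // claim the next url
          const resp = await fetch(urls[i]);
          results[i] = await resp.json();
        }
      }
      // start `limit` workers and wait for them to drain the list
      await Promise.all(Array.from({ length: limit }, () => worker()));
      return results;
    }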
There's a discussion of a lot of options for controlling this here:
Promise.all consumes all my RAM
Make several requests to an API that can only handle 20 request a minute
Concurrency control is also part of Promise.map() in the Bluebird promise library.
Is there a maximum number of Get requests?
Servers are limited on how many requests they can handle at once for a whole variety of reasons. Every server setup will likely be different and it also depends upon the types of requests you're sending too (and what they have to do). Some servers may be able to handle hundreds of thousands of requests (probably because there's a cluster behind them and they're configured for big load). Smaller configurations may only handle dozens at a time.
Is this due to a limit on the number of connections?
Any receiving server will have a limit on how many incoming connections it will allow to queue. What that is will depend upon many factors and there is no way for you (from the outside) to know exactly what that limit is. Timeout errors usually don't mean you're hitting this limit.

javascript UI components making too many http requests. Any idea how to optimize this?

This question is purely based on assumptions. May or may not be valid problem. Anyway, here it goes
Let's say we have a heavy JavaScript client app with some number of UI components/widgets. Each of these widgets has an endpoint to query data from.
On page load, these components will make HTTP requests. Multiple of them, to multiple different endpoints.
Obviously we see that the number of HTTP requests will increase with a heavy client-side architecture, compared to traditional web apps where the UI is generated on the server side.
Sample case:
widget A requests resource A
widget B requests resource B
Of course, we can minimize the http request by having:
parent widget requests an endpoint that return { resource A, resource B }
parent widget distributes data to widget A
parent widget distributes data to widget B
This can be done by, sort of, grouping related widgets based on business logic
Not all widgets can be framed this way. Even if they can, how would you maintain code modularity?
Is there any well known design pattern for large javascript apps wrt. performance?
Maybe I am overthinking this, as I certainly don't have the numbers here.
Any thoughts?
For starters, I would consider creating a client-side JavaScript library that handles fetching/sending data and making all the widgets use this API.
This way, you can optimize/group the flow of data to/from all of your widgets in one place.
One idea that comes to mind (which wouldn't reduce the amount of data transferred, but would reduce the number of HTTP requests) is to route all your AJAX requests on the client side through some common Javascript interface that you control.
Then, instead of sending out one HTTP request per UI request, you can wait a few milliseconds and batch all the requests that occur within that interval, sending out just one HTTP request for the whole batch (you'd have to be careful to only do this for requests going to your server).
On the server, you could have a special generic "batched" endpoint that internally services all the batched requests (preferably in parallel) and returns the results in a batched response.
Then the client side distributes the batched results to the original requesters.
Note that this only works if the requests all take approximately the same length of time to service; you wouldn't want the batched response waiting 30s for one sub-request to finish when all the others are already done. You might be able to address this with a blacklist or something.
Also, try to identify which requests need to be serviced first and assign them priority!
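A rough sketch of what such a generic batched endpoint might look like on the server. The /batch route, the sub-request shape, and the internal handle() dispatcher are all assumptions:

    const express = require('express');
    const app = express();

    app.post('/batch', express.json(), async (req, res) => {
      // service each sub-request in parallel; keep the caller-supplied id
      const results = await Promise.all(
        req.body.requests.map(r =>
          handle(r.endpoint, r.params)
            .then(data => ({ id: r.id, data }))
            .catch(err => ({ id: r.id, error: String(err) }))
        )
      );
      res.json({ results });
    });

    app.listen(3000);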
It sounds like what you want is a central queue for all the data requests. Each widget on its own can make requests by queuing up a specific request. Then, you would have a common piece of code that examines all the requests in the queue and figures out whether they can be optimized a bit (requesting multiple pieces of data in one request to the same endpoint).
Using this type of design pattern still keeps all the widgets modular, yet has one small library of code that handles the optimization of the requests.
Technically, how this would work is that you'd create a library function for adding a request to the queue. That function would take an endpoint, a callback function to invoke when the data is ready, and a description of the request. Each widget would call this common function to make its data request.
This common function would put each request into a queue and then do a setTimeout() for 0ms (if one wasn't already set). That setTimeout() fires when the current thread of execution is done, which is when all requests from this round of initialization are in the queue. The queue can then be examined, any requests to the same endpoint combined into one request, and that request sent. When the data arrives, the separate pieces of data are parceled out and the appropriate widget's callback is called.
If caching of data would be helpful (if multiple requests over time for the exact same data are happening), this layer could also implement a cache.
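A minimal sketch of that queue. The requestData()/sendBatch() names and the batched response shape are hypothetical:

    const queue = [];
    let flushScheduled = false;

    function requestData(endpoint, params, callback) {
      queue.push({ endpoint, params, callback });
      if (!flushScheduled) {
        flushScheduled = true;
        // fires after the current thread of initialization finishes,
        // by which point every widget's request is already queued
        setTimeout(flush, 0);
      }
    }

    function flush() {
      flushScheduled = false;
      const batch = queue.splice(0);              // take everything queued so far
      const byEndpoint = new Map();               // group by endpoint
      for (const r of batch) {
        if (!byEndpoint.has(r.endpoint)) byEndpoint.set(r.endpoint, []);
        byEndpoint.get(r.endpoint).push(r);
      }
      for (const [endpoint, reqs] of byEndpoint) {
        // one combined request per endpoint
        sendBatch(endpoint, reqs.map(r => r.params)).then(results => {
          // parcel the pieces back out to each widget's callback
          reqs.forEach((r, i) => r.callback(results[i]));
        });
      }
    }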
