JavaScript Promise.allSettled with a large number of requests - javascript

I have a bulk-create participants function that uses Promise.allSettled to send 100 axios POST requests. The backend is Express and the frontend is React. Each request calls the single "add new participant" REST API. I have set the backend timeout to 15s using connect-timeout, and the frontend timeout is 10s.
My issue is that when I click the bulk add button, the bulk create is triggered and the Promise.allSettled concurrency starts. However, I cannot send a new request before all the concurrent requests are done. Because I have set up a timeout on the frontend, the new request gets cancelled.
Is there a way I can still make the concurrent requests without them blocking other new requests?
This is the frontend code; createParticipant is the API request.
const PromiseArr = []
for (let i = 0; i < totalNumber; i++) {
  const participant = participantList[i]
  const participantNewDetail = {
    firstName: participant.firstName,
    lastName: participant.lastName,
    email: participant.email,
  }
  PromiseArr.push(
    createParticipant(participantNewDetail)
      .then((createParticipantResult) => {
        processedTask++
        processMessage = `Processing adding participant`
        dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } })
      })
      .catch((error) => {
        processedTask++
        processMessage = `Processing adding participant`
        dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } })
        throw new Error(
          JSON.stringify({
            status: "failed",
            value: error.data.message ? error.data.message : error,
          })
        )
      })
  )
}
const addParticipantResults = await Promise.allSettled(PromiseArr)
PromiseArr is the promise array, with length 100.
Is it possible to split this big batch into smaller promise arrays and send them to the backend in chunks, so that in the gap between chunks I can send another new request like retriveUserDetail?

If you're sending 100 requests at a time to your server, that's just going to take a while for the server to process. It would be best to find a way to combine them all into one request or into a very small number of requests. Some server APIs have efficient ways of doing multiple queries in one request.
If you can't do that, then you probably should be sending them 5-10 at a time max so the server isn't being asked to handle so many simultaneous requests, which causes your additional request to go to the end of the line and take too long to process. That will allow you to send other things and get them processed while you're chunking away on the 100 without waiting for all of them to finish.
If this is being done from a browser, you also have some browser safeguard limitations to deal with, where the browser refuses to send more than N requests to the same host at a time. If you send more than that, it queues them up and holds onto them until some prior requests have completed. This keeps one client from massively overwhelming the server, but it also creates a long line of requests that any new request has to go to the end of. The way to deal with that is to never send more than a small number of requests to the same host at once, so the queue is short when you want to send a new request.
You can look at these snippets of code that let you process an array of data N-at-a-time rather than all at once. Each of these has slightly different control options so you can decide which one fits your problem the best.
mapConcurrent() - Process an array with no more than N requests in flight at the same time
pMap() - Similar to mapConcurrent with more argument checking
rateLimitMap() - Process max of N requestsPerSecond
runN() - Allows you to continue processing upon error
These all replace both Promise.all() and whatever code you had for iterating your data, launching all the requests and collecting the promises into an array. Each function takes an input array of data and a function that is passed one item of the data and returns a promise resolving to the result of that request; each returns a promise that resolves to an array of results in the original array order (the same return value as Promise.all()).
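To make the idea concrete, here is a minimal sketch of the N-at-a-time pattern. It is not the exact mapConcurrent() implementation referenced above, just an illustration of the technique:

function mapConcurrent(items, limit, fn) {
  return new Promise((resolve, reject) => {
    const results = new Array(items.length);
    let index = 0;      // next item to start
    let inFlight = 0;   // requests currently running
    let failed = false;

    function runNext() {
      if (items.length === 0) return resolve(results);
      while (inFlight < limit && index < items.length) {
        const i = index++;
        inFlight++;
        fn(items[i], i).then((result) => {
          results[i] = result;
          inFlight--;
          if (failed) return;
          if (index >= items.length && inFlight === 0) {
            resolve(results);
          } else {
            runNext();
          }
        }, (err) => {
          failed = true;
          reject(err); // reject on first error, like Promise.all()
        });
      }
    }
    runNext();
  });
}

With the question's code, usage could look like:

const results = await mapConcurrent(participantList, 5, (p) =>
  createParticipant({ firstName: p.firstName, lastName: p.lastName, email: p.email })
);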

Related

Caching/Reusing/Responding to identical concurrent requests in a framework like Koa

I'm trying to work on a caching solution for inflight requests in Koa.
Let's say that I have 100 separate users hitting the same endpoint concurrently, but the endpoint takes ~4-5 seconds to return a response.
For example:
GET http://mykoa.application.com/getresults
In my router middleware, is it possible to cache all of the concurrent inbound requests and then, once the response has been generated, return the same result to all of them? Potentially something similar to the example below?
const inflight = {};

router.use(async function (ctx, next) {
  // Create a cache 'key'
  const hash = `${ctx.request.url}-${ctx.state.user?.data?.id}-${JSON.stringify(ctx.request.body)}`;

  // Check if there is already a request inflight
  if (inflight[hash]) {
    // If there is then wait for the promise resolution
    return await inflight[hash];
  }

  // Cache the request resolution for any other identical requests
  inflight[hash] = next();
  await inflight[hash];

  // Clean it up so that the next request will be fresh
  inflight[hash].then(function (res) {
    delete inflight[hash];
  }, function (err) {
    delete inflight[hash];
  });
});
In my head this should work, and the expectation would be that all 100 concurrent requests resolve at the same time (once the first one has resolved). However, in my tests each request is still run separately, taking 4-5 seconds each in sequence.
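One thing worth noting about the snippet (my observation, not from the original thread): when a second request finds inflight[hash] set, it returns await inflight[hash] without ever running its own downstream middleware, so its own ctx.body is never populated; awaiting another request's next() promise does not copy that request's response into this one's context. A hypothetical reworking that caches the response body itself, as an untested sketch assuming standard Koa middleware semantics:

const inflight = {};

router.use(async function (ctx, next) {
  const hash = `${ctx.request.url}-${ctx.state.user?.data?.id}-${JSON.stringify(ctx.request.body)}`;

  if (!inflight[hash]) {
    // First request: run the downstream middleware and cache a promise
    // that resolves to the response body it produces.
    inflight[hash] = (async () => {
      try {
        await next();
        return ctx.body;
      } finally {
        // Clean up so the next burst of requests is fresh.
        delete inflight[hash];
      }
    })();
    await inflight[hash];
  } else {
    // Identical concurrent requests: wait for the first request's body
    // and copy it into their own context.
    ctx.body = await inflight[hash];
  }
});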

How to detect server data changes without refreshing browser or using useEffect in React?

In a React project, I'm fetching data from a particular API, say 'https://www.api-example.com' (just an example, not a real URL), with method: get, an apiKey, authorization and all those things. Now to be more specific, the JSON data from the server is displayed on the front side in the React code, but how do I detect changes when the JSON data changes after every minute, or seconds, or after an hour, or not at all?
Consider a simple example: I send a Video Call (VC) request to some person, and the data I receive from that person is in JSON format with values such as "vc_status":"accepted" or "vc_status":"rejected" or "vc_status":"reshedule", whatever it may be. It's based on the person's choice. Maybe he accepts the request after 1 minute or 30 minutes, or rejects, or reschedules. My duty is to fetch that data and reflect it in the UI accordingly.
Now to get data from that person, following are the methods I used:
Method 1: useEffect()
const [newRequest, setNewRequest] = useState("")

const newData = () => {
  fetch(`https://www.api-example.com/video/requests`, {
    method: "GET",
    headers: {
      ApiKey: 'hbhasdgasAsas',
      Authorization: '......'
    }
  })
    .then((data) => data.json())
    .then((data) => setNewRequest(data.vc_request))
    .catch((error) => console.error(error))
}

useEffect(() => {
  newData()
}) // note: no dependency array, so this runs after every render
Then newRequest is used later in the code.
Not feasible: useEffect is quite resource intensive and freezes the server at some point
Method 2: setInterval()
/* Same code as above, only a few changes */
useEffect(() => {
  setInterval(() => {
    newData()
  }, 10000)
}, [])
Not feasible: here the server utilization is still intensive; even though it only polls every 10 seconds, it still freezes the server
Ultimately my intention is to get the VC status and reflect it on the front side without using useEffect or setInterval, or at least without them being resource intensive. My duty is only to manage the front side, fetch data from the server, and reflect it in the UI.
So what is the best, optimal solution to get the changed JSON data from the server and display it on the front? Any solution highly appreciated
The only good way to do this is to change/add functionality to the server so that it can tell the client that information has changed, instead of the client having to request information from the server periodically.
This can be done with WebSockets (the same technology that Stack Exchange uses to give users realtime alerts on inbox messages, reputation changes, etc).
If you set up the server to have a socket endpoint that sends the data (JSON) back to the client:
when the socket first connects, and
when the data on the server changes
Then you should be able to have code something like the following on the frontend:
useEffect(() => {
  const socket = new WebSocket('wss://some-website.com/socket');
  socket.addEventListener('message', ({ data }) => {
    const parsed = JSON.parse(data);
    setNewRequest(parsed.vc_request);
  });
  return () => socket.close();
}, []);
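If you also control the server, a minimal Node sketch of such a socket endpoint could look like the following, using the ws package; getCurrentVcStatus() and broadcastVcStatus() are hypothetical stand-ins for wherever your data actually lives and changes:

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // 1) Send the current data when the socket first connects.
  // getCurrentVcStatus() is a hypothetical accessor for your data.
  socket.send(JSON.stringify({ vc_request: getCurrentVcStatus() }));
});

// 2) Call this wherever the server-side data actually changes.
function broadcastVcStatus(vcStatus) {
  const message = JSON.stringify({ vc_request: vcStatus });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message);
    }
  }
}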
If you do not have control over the server, and the server doesn't provide anything like this, you're out of luck, and repeated fetching that you've already mentioned is your only option.
If the server "freezes" after 10 seconds (not sure what the problem actually is there), I suppose you could try increasing the delay to something like a minute or two.
useEffect(() => {
  setInterval(() => {
    newData()
  }, 60000) // e.g. 60 seconds instead of 10
}, []);
Not feasible: useEffect is quite resource intensive and freezes the server at some point
useEffect is not resource-intensive at all.
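Worth adding (my observation about Method 1's code, not part of the original answer): that useEffect has no dependency array, so it runs after every render, and since newData() calls setNewRequest, every fetch triggers a re-render and thus another fetch. That feedback loop, rather than useEffect itself, is the likely source of the load. An empty dependency array makes the effect run once, after the initial mount:

useEffect(() => {
  newData()
}, []) // empty array: run once on mount, not after every render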

Axios requests in a loop after previous completes

The application I am building involves sending video frames to the server for processing. I am using Axios to send POST requests to the server. I intend to send about 3-5 frames (POST requests) per second. I need to send each one after the previous request completes, because the next request's body depends on the previous request's response.
I tried running it in a loop, but that won't do because the next one starts before the previous one completes. In the current code, I have something like this:
const fun = () => {
  axios.post(url, data)
    .then((res) => {
      // process the response and modify data accordingly
      // for the next request
    })
    .finally(() => fun());
};
This way the requests run one after another continuously. But I am unable to control the number of requests sent per second. Is there any way I can limit the rate to, say, at most 5 requests per second?
Additional info: I am using this in a web app where I need to send webcam video frames (as data URIs) to the server for image processing, and get the result back to the client afterwards. I am looking to limit the number of requests to reduce the internet data consumed by my application, and hence would like to send at most 5 requests per second (preferably evenly distributed). If there's a better way than Axios POST requests for this purpose, please do suggest :)
An amazing library called Bottleneck can come to your aid here. You can limit the number of requests per second to any number you want using the following code:
const Bottleneck = require("bottleneck");
const axios = require("axios");

const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 200
});

// Defined before the loop that schedules it; returning the axios promise
// directly avoids wrapping it in an unnecessary new Promise().
const postFrame = (data) => {
  return axios.post('https://httpstat.us/200', data)
    .then((r) => r.data);
};

for (let index = 0; index < 20; index++) {
  limiter.schedule(() => postFrame({ some: 'data' }))
    .then((result) => {
      console.log('Request response:', result);
    })
    .catch((err) => {
      console.log(err);
    });
}
By adjusting the limiter's options,

const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 200
});

you can get your axios requests to fire at any rate with any concurrency. For instance, to limit the rate to 3 requests per second, you would set minTime to 333; likewise, to limit it to 5 per second, set it to 200.
Please refer to the Bottleneck documentation here: https://www.npmjs.com/package/bottleneck
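If you'd rather not add a dependency, here is a rough dependency-free sketch of the same idea: run the requests strictly one after another, but never start a new one sooner than minGapMs after the previous one started (so at most 1000 / minGapMs requests per second). It assumes axios and url as in the question; buildPayload is a hypothetical stand-in for however the next request body depends on the previous response:

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendFramesSequentially(frames, minGapMs = 200) {
  let previousResponse = null;
  for (const frame of frames) {
    const started = Date.now();
    const payload = buildPayload(frame, previousResponse); // hypothetical helper
    const res = await axios.post(url, payload);
    previousResponse = res.data;
    // Pad out the remainder of the time slot so requests stay evenly spaced.
    const elapsed = Date.now() - started;
    if (elapsed < minGapMs) {
      await delay(minGapMs - elapsed);
    }
  }
}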

HTTP request can't send a response immediately in Node.js server

I send JSON requests one by one to the Node.js server. After the 6th request, the server can't reply to the client immediately; it takes a while (15 seconds or a bit more) and then sends back 200 OK. The request writes a JSON value into MongoDB, and response time is important for this REST call. How can I find the error in this case? (Which tool or script could help me?) My server-side code is like this:
var controlPathDatabaseSave = "/save";

app.use('/', function (req, res) {
  console.log("req body app use", req.body);
  var str = req.path;
  if (str.localeCompare(controlPathDatabaseSave) == 0) {
    console.log("controlPathDatabaseSave");
    mongoDbHandleSave(req.body);
    res.setHeader('Content-Type', 'application/json');
    res.write('Message taken: \n');
    res.write('Everything all right with database saving');
    res.send("OK");
    console.log("response body", res.body);
  }
});
My client-side code is as below:
function saveDatabaseData() {
  console.log("saveDatabaseData");
  var oReq = new XMLHttpRequest();
  oReq.open("POST", "http://192.168.80.143:2800/save", true);
  oReq.setRequestHeader("Content-type", "application/json;charset=UTF-8");
  oReq.onreadystatechange = function () { // Call a function when the state changes.
    if (oReq.readyState == 4 && oReq.status == 200) {
      console.log("http responseText", oReq.responseText);
    }
  }
  oReq.send(JSON.stringify({ links: links, nodes: nodes }));
}
-- MongoDB save code

function mongoDbHandleSave(reqParam) {
  // Connect to the db (note: this opens a new connection on every request
  // and never closes it)
  MongoClient.connect(MongoDBURL, function (err, db) {
    if (!err) {
      console.log("We are connected in accordance with saving");
    } else {
      return console.dir(err);
    }
    /*
    db.createCollection('user', { strict: true }, function (err, collection) {
      if (err)
        return console.dir(err);
    });
    */
    var collection = db.collection('user');
    // when saving into database only use req.body. Skip JSON.stringify() function
    var doc = reqParam;
    collection.update(doc, doc, { upsert: true });
  });
}
You can see my REST calls in the Chrome developer tools. (The first six calls get 200 OK; the last one stays pending.)
-- Client output (screenshot omitted)
-- Server output (screenshot omitted)
Thanks in advance,
Since it looks like these are Ajax requests from a browser, each browser has a limit on the number of simultaneous connections it will allow to the same host. Browsers have varied that setting over time, but it is likely in the 4-6 range. So, if you are trying to run 6 simultaneous ajax calls to the same host, then you may be running into that limit. What the browser does is hold off on sending the latest ones until the first ones finish (thus avoiding sending too many at once).
The general idea here is to protect servers from getting beat up too much by one single client and thus allow the load to be shared across many clients more fairly. Of course, if your server has nothing else to do, it doesn't really need protecting from a few more connections, but this isn't an adaptive system; the browser just hard-wires a limit.
If there are any other requests in process (loading images or scripts or CSS stylesheets) to the same origin, those will count to the limit too.
If you run this in Chrome and you open the network tab of the debugger, you can actually see on the timeline exactly when a given request was sent and when its response was received. This should show you immediately whether the later requests are being held up at the browser or at the server.
Here's an article on the topic: Maximum concurrent connections to the same domain for browsers.
Also, keep in mind that, depending upon what your requests do on the server and how the server is structured, there may be a maximum number of server requests that can be efficiently processed at once. For example, if you had a blocking, threaded server configured with one thread for each of four CPUs, then once the server has four requests going at once, it would have to queue the fifth request until one of the first ones is done, delaying it more than the others.
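Separately (my observation, not part of the original answer): the mongoDbHandleSave() in the question opens a new MongoDB connection on every request and never closes it, which can also exhaust driver resources over time. A common pattern is to connect once at startup and reuse the connection; a rough sketch in the same old-driver callback style as the question:

var sharedDb = null;

// Connect once when the server starts, not per request.
MongoClient.connect(MongoDBURL, function (err, db) {
  if (err) return console.dir(err);
  sharedDb = db;
});

function mongoDbHandleSave(reqParam) {
  if (!sharedDb) return console.dir(new Error("DB not connected yet"));
  var collection = sharedDb.collection('user');
  collection.update(reqParam, reqParam, { upsert: true });
}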

Node.js client request hangs

I have a node.js process that uses a large number of client requests to pull information from a website. I am using the request package (https://www.npmjs.com/package/request) since, as it says: "It supports HTTPS and follows redirects by default."
My problem is that after a certain period of time, the requests begin to hang. I haven't been able to determine if this is because the server is returning an infinite data stream, or if something else is going on. I've set the timeout, but after some number of successful requests, some of them eventually get stuck and never complete.
var options = { url: 'some url', timeout: 60000 };
request(options, function (err, response, body) {
// process
});
My questions are: can I shut down a connection after a certain amount of data is received using this library, and can I stop the request from hanging? Do I need to use the http/https libraries and handle the redirects and protocol switching myself in order to get the kind of control I need? If I do, is there a standardized practice for that?
Edit: Also, if I stop the process and restart it, they pick right back up and start working, so I don't think it is related to the server or the machine the code is running on.
Note that in request(options, callback), the callback is fired only when the request has completed, so there is no way to break off the request partway through from there.
You should listen for the data event instead:
var request = require('request');

var stream = request(options); // `options` as defined in the question
var len = 0;

stream.on('data', function (data) {
  // TODO process your data here
  len += Buffer.byteLength(data);
  // break off the stream once more than 1000 bytes have arrived
  if (len > 1000) {
    stream.abort();
  }
});
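To address the hanging part of the question as well (an extension of the above, not from the original answer): the same stream.abort() can be combined with an idle timer that resets on every data event, so a request that silently stops sending data gets cut off. A rough sketch:

var request = require('request');

function requestWithIdleTimeout(options, idleMs) {
  var stream = request(options);
  var timer = null;

  function resetTimer() {
    if (timer) clearTimeout(timer);
    timer = setTimeout(function () {
      stream.abort(); // no data for idleMs: give up on this request
    }, idleMs);
  }

  resetTimer();
  stream.on('data', resetTimer);
  stream.on('end', function () { clearTimeout(timer); });
  stream.on('error', function () { clearTimeout(timer); });
  return stream;
}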
