The application I am building involves sending video frames to the server for processing. I am using Axios to send a POST request to the server, and I intend to send about 3-5 frames (one POST request each) per second. I need to send each request only after the previous one completes, because the next request's body contains data that depends on the previous response.
I tried running it in a loop, but that won't do, because the next request starts before the previous one has completed. In the current code, I have something like this:
const fun = () => {
  axios.post(url, data)
    .then((res) => {
      // process the response and modify data accordingly
      // for the next request
    })
    .finally(() => fun());
};
This way I am able to send the requests one after another continuously, but I am unable to control the number of requests being sent per second. Is there any way I can limit the rate to, say, at most 5 requests per second?
Additional info: I am using this in a web app where I need to send webcam video frames (as data URIs) to the server for image processing and get the result back to the client afterwards. I am looking to limit the number of requests to reduce the internet data consumed by my application, and hence would like to send at most 5 requests per second, preferably evenly distributed. If there's a better way than Axios POST requests for this purpose, please do suggest it :)
An amazing library called Bottleneck can come to your aid here. You can limit the number of requests per second to any number you want using the following code:
const Bottleneck = require("bottleneck");
const axios = require('axios');

const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 200
});

const postFrame = data => {
  return axios.post('https://httpstat.us/200', data)
    .then(r => r.data);
};

for (let index = 0; index < 20; index++) {
  limiter.schedule(() => postFrame({ some: 'data' }))
    .then(result => {
      console.log('Request response:', result);
    })
    .catch(err => {
      console.log(err);
    });
}
By adjusting the limiter's options

const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 200
});

you can get your Axios requests to fire at any rate with any concurrency. minTime is the minimum gap in milliseconds between requests, so to limit the rate to 3 requests per second you would set it to 333; it stands to reason that to limit it to 5 per second, it has to be set to 200.
Please refer to the Bottleneck documentation here: https://www.npmjs.com/package/bottleneck
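Since in your case each request's body depends on the previous response, here is a minimal sketch of how the two requirements could be combined, keeping your recursive pattern but routing every call through the limiter (url, initialFrameData and processResponse() are hypothetical placeholders for your own code):

const Bottleneck = require("bottleneck");
const axios = require('axios');

const limiter = new Bottleneck({
  maxConcurrent: 1, // one request at a time, so each can use the previous response
  minTime: 200      // at least 200ms between requests, i.e. at most 5 per second
});

let data = initialFrameData; // hypothetical: whatever the first request needs

const sendFrame = () => {
  limiter.schedule(() => axios.post(url, data))
    .then((res) => {
      data = processResponse(res); // hypothetical: build the next body from the response
      sendFrame();                 // queue the next frame
    });
};

sendFrame();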
I'm trying to work on a caching solution for in-flight requests in Koa.
Let's say that I have 100 separate users hitting the same endpoint concurrently, but the endpoint takes ~4-5 seconds to return a response.
For example:
GET http://mykoa.application.com/getresults
In my router middleware, is it possible to cache all of the concurrent inbound requests and then, once the response has been generated, return the same result to all of them? Potentially something similar to the example below?
const inflight = {};

router.use(async function(ctx, next) {
  // Create a cache 'key'
  const hash = `${ctx.request.url}-${ctx.state.user?.data?.id}-${JSON.stringify(ctx.request.body)}`;
  // Check if there is already a request inflight
  if (inflight[hash]) {
    // If there is then wait for the promise resolution
    return await inflight[hash];
  }
  // Cache the request resolution for any other identical requests
  inflight[hash] = next();
  await inflight[hash];
  // Clean it up so that the next request will be fresh
  inflight[hash].then(function(res) {
    delete inflight[hash];
  }, function(err) {
    delete inflight[hash];
  });
});
In my head this should work, and the expectation would be that all 100 concurrent requests would resolve at the same time (after the first one has resolved). However, in my tests each request is still being run separately, taking 4-5 seconds each in sequence.
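For comparison, here is a sketch of how this kind of request coalescing is often written. The changes from the code above are assumptions, not a confirmed diagnosis: awaiting the cached next() promise does not populate the waiting requests' ctx.body, so the computed body has to be captured as a value and assigned to every waiter, and cleanup is attached before awaiting so a rejection cannot leave a stale cache entry:

const inflight = {};

router.use(async function(ctx, next) {
  const hash = `${ctx.request.url}-${ctx.state.user?.data?.id}-${JSON.stringify(ctx.request.body)}`;
  if (!inflight[hash]) {
    // First request in: run the downstream middleware and capture its body
    inflight[hash] = next().then(() => ctx.body);
    // Clean up once settled (success or failure) so later requests are fresh
    const clean = () => { delete inflight[hash]; };
    inflight[hash].then(clean, clean);
  }
  // Every request, including the first, responds with the same computed body
  ctx.body = await inflight[hash];
});

Note also that if the 4-5 seconds of work is CPU-bound inside the Node process, no coalescing middleware will help: requests will still queue behind the event loop.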
I have the following code:
await axiosAPICall(dummyData); // works

const sqlQuery = `SELECT column1, column2 FROM table`;
const queryStream = mySqlConnectionInstance.query(sqlQuery, []);

queryStream
  .on('error', function (err) {
    // Handle error, an 'end' event will be emitted after this as well
  })
  .on('result', async (actualData) => {
    // axios api call, the api callback goes to the callstack with any of the following errors:
    // 1. read ECONNRESET
    // 2. Client network socket disconnected before secure TLS connection was established
    await axiosAPICall(actualData); // breaks
  })
  .on('end', function () {
    // all rows have been received
  });
As you can see, I'm getting all the rows from a MySQL table as a stream. When a row arrives from the database stream, I pass its data to the axios API call.
The API call works perfectly fine when called outside of the stream logic, but when I call it inside the streaming logic, it breaks every time.
I am firing API calls as fast as each on('result') handler fires (the async/await does NOT slow down the request rate, i.e. I end up with multiple requests in parallel).
Does anyone know why the API calls are not working inside the streaming logic?
If the question needs any clarification, please comment.
Based on a comment suggesting the error is due to making "too many requests" at once, here is a simple and naive way to wait for the previous request before making the next:
const sqlQuery = `SELECT column1, column2 FROM table`;
const queryStream = mySqlConnectionInstance.query(sqlQuery, []);

const wait = (ms) => new Promise(resolve => setTimeout(resolve, ms));
let previous = Promise.resolve();

queryStream
  .on('error', function (err) {
    // Handle error, an 'end' event will be emitted after this as well
  })
  .on('result', (actualData) => {
    // Chain onto the previous request synchronously, so that even if
    // several 'result' events fire in the same tick, each request still
    // starts only after the one before it has settled
    previous = previous
      .catch(() => {}) // don't let one failed request break the chain
      // optional delay, 100ms for this example -
      // e.g. if there is a limit of 10 requests per second;
      // adjust (or remove) as required
      .then(() => wait(100))
      .then(() => axiosAPICall(actualData));
  })
  .on('end', function () {
    // all rows have been received
    // if you `await previous` here, you can truly
    // wait until all rows are processed
  });
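A more robust alternative, assuming the mysql package (whose connections support pause()/resume() while streaming), is to apply backpressure: stop the flow of rows while each request is in flight. A sketch under that assumption:

queryStream
  .on('error', function (err) {
    // Handle error, an 'end' event will be emitted after this as well
  })
  .on('result', (actualData) => {
    // Stop further 'result' events until this request settles
    mySqlConnectionInstance.pause();
    axiosAPICall(actualData)
      .catch((err) => {
        // handle/log the failed request for this row
      })
      .finally(() => mySqlConnectionInstance.resume()); // let the next row through
  })
  .on('end', function () {
    // all rows have been received
  });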
I have a bulk-create-participants function that uses Promise.allSettled to send 100 axios POST requests. The backend is Express and the frontend is React. Each request calls a single "add new participant" REST API. I have set the backend timeout to 15s using connect-timeout, and the frontend timeout is 10s.
My issue is that when I click the bulk add button, the bulk create is triggered and the Promise.allSettled concurrent requests start. However, I cannot send a new request before all the concurrent requests are done: because of the timeout I set up on the frontend, the new request gets cancelled.
Is there a way I can still make the concurrent requests, but have them not block other new requests?
This is the frontend code; createParticipant is the API request.
const PromiseArr = []
for (let i = 0; i < totalNumber; i++) {
  const participant = participantList[i]
  const participantNewDetail = {
    firstName: participant.firstName,
    lastName: participant.lastName,
    email: participant.email,
  }
  PromiseArr.push(
    createParticipant(participantNewDetail)
      .then((createParticipantResult) => {
        processedTask++
        processMessage = `Processing adding participant`
        dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } })
      })
      .catch((error) => {
        processedTask++
        processMessage = `Processing adding participant`
        dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } })
        throw new Error(
          JSON.stringify({
            status: "failed",
            value: error.data.message ? error.data.message : error,
          })
        )
      })
  )
}
const addParticipantResults = await Promise.allSettled(PromiseArr)
PromiseArr is the promise array, with length 100.
Is it possible to split this big request into smaller promise arrays and send those to the backend, so that within the gaps between them I can send another new request like retriveUserDetail?
If you're sending 100 requests at a time to your server, that's just going to take a while for the server to process. It would be best to find a way to combine them all into one request or into a very small number of requests. Some server APIs have efficient ways of doing multiple queries in one request.
If you can't do that, then you should probably send them 5-10 at a time max, so the server isn't being asked to handle so many simultaneous requests, which causes your additional request to go to the end of the line and take too long to process. That will allow you to send other things and get them processed while you're chunking away on the 100, without waiting for all of them to finish.
If this is being done from a browser, you also have some browser safeguard limitations to deal with: the browser refuses to send more than N requests to the same host at a time. If you send more than that, it queues them up and holds onto them until some prior requests have completed. This keeps one client from massively overwhelming the server, but it also creates a long line of requests that any new request has to go to the end of. The way to deal with that is to never send more than a small number of requests to the same host at once, so the queue is short when you want to send a new request.
You can look at these snippets of code that let you process an array of data N-at-a-time rather than all at once. Each one has slightly different control options, so you can decide which fits your problem best.
mapConcurrent() - Process an array with no more than N requests in flight at the same time
pMap() - Similar to mapConcurrent(), with more argument checking
rateLimitMap() - Process a maximum of N requests per second
runN() - Allows you to continue processing upon error
These all replace both Promise.all() and whatever code you had for iterating your data, launching all the requests and collecting the promises into an array. The functions take an input array of data and a function to call that gets passed an item of the data and returns a promise resolving to the result of that request; they return a promise that resolves to an array of results in the original array order (the same return value as Promise.all()).
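To make the idea concrete, here is a minimal sketch of an N-at-a-time runner in the spirit of mapConcurrent() (a simplified illustration, not the linked implementation):

// Run fn(item) over items with at most `limit` requests in flight,
// resolving to the results in the original array order.
function mapConcurrent(items, limit, fn) {
  return new Promise((resolve, reject) => {
    const results = new Array(items.length);
    let inFlight = 0;
    let nextIndex = 0;
    let completed = 0;
    if (items.length === 0) return resolve(results);
    function runNext() {
      // top up to `limit` in-flight requests
      while (inFlight < limit && nextIndex < items.length) {
        const i = nextIndex++;
        inFlight++;
        fn(items[i]).then((result) => {
          results[i] = result;
          inFlight--;
          if (++completed === items.length) resolve(results);
          else runNext();
        }, reject);
      }
    }
    runNext();
  });
}

// e.g. send the 100 participant requests 5 at a time, leaving room
// in the browser's per-host connection pool for new requests:
// const results = await mapConcurrent(participantList, 5, p => createParticipant(p));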
In a React project, I'm fetching data from a particular API, consider 'https://www.api-example.com' (just an example, not a real URL), with method GET, an API key, authorization and all those things. Now, to be more specific, the JSON data from the server is displayed on the front side in the React code; but how do I detect changes when the JSON data changes, whether after a minute, seconds, an hour, or not at all?
Consider a simple example: I send a Video Call (VC) request to some person, and the data I receive back is in JSON format with values such as "vc_status":"accepted", "vc_status":"rejected" or "vc_status":"reschedule", whatever it may be. It is based on the person's choice; maybe they accept the request after 1 minute or 30 minutes, or reject or reschedule it. My duty is to fetch that data and reflect it in the UI accordingly.
Now, to get data from that person, the following are the methods I used:
Method 1: useEffect()
const [newRequest, setNewRequest] = useState("")

const newData = () => {
  fetch(`https://www.api-example.com/video/requests`, {
    method: 'GET',
    headers: {
      ApiKey: 'hbhasdgasAsas',
      Authorization: '......'
    }
  })
    .then((data) => data.json())
    .then((data) => setNewRequest(data.vc_request))
    .catch((err) => {
      // handle fetch/parsing errors
    })
}

useEffect(() => {
  newData()
})
Then the newRequest data is used later in the code.
Not feasible: useEffect is quite resource intensive and freezes the server at some particular time
Method 2: setInterval()
/* Same code as above, with only a few changes */

useEffect(() => {
  setInterval(() => {
    newData()
  }, 10000)
}, [])
Not feasible: here the server utilization is still intensive; even though it now happens only every 10 seconds, it still freezes the server.
Ultimately, my intention is to get the VC status and reflect it on the front side without using useEffect or setInterval, or, if they are used, without them being resource-intensive. My duty is to manage only the front side: fetch data from the server and reflect it in the UI.
So what is the optimal solution to get the changed JSON data from the server and display it on the front side? Any solution is highly appreciated.
The only good way to do this is to change/add functionality to the server so that it can tell the client that information has changed, instead of the client having to request information from the server periodically.
This can be done with WebSockets (the same technology that Stack Exchange uses to give users realtime alerts on inbox messages, reputation changes, etc).
If you set up the server to have a socket endpoint that sends the data (JSON) back to the client:
when the socket first connects, and
when the data on the server changes
Then you should be able to have code something like the following on the frontend:
useEffect(() => {
  const socket = new WebSocket('wss://some-website.com/socket');
  socket.addEventListener('message', ({ data }) => {
    const parsed = JSON.parse(data);
    setNewRequest(parsed.vc_request);
  });
  return () => socket.close();
}, []);
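For completeness, a minimal sketch of what the server side could look like, assuming a Node backend with the ws package (the port and the getCurrentVcRequest()/broadcast wiring are hypothetical; adapt to however vc_status is actually stored and updated):

const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // 1. send the current data when the socket first connects
  socket.send(JSON.stringify({ vc_request: getCurrentVcRequest() })); // hypothetical lookup
});

// 2. call this from wherever vc_status actually changes on the server
function broadcastVcRequest(vcRequest) {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify({ vc_request: vcRequest }));
    }
  }
}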
If you do not have control over the server, and the server doesn't provide anything like this, you're out of luck; the repeated fetching you've already mentioned is your only option.
If the server "freezes" after 10 seconds (I'm not sure what the problem actually is there), I suppose you could try increasing the delay to something like a minute or two:
useEffect(() => {
  const id = setInterval(() => {
    newData()
  }, 60000) // one minute instead of 10 seconds
  return () => clearInterval(id) // also clean up on unmount
}, []);
As for "useEffect is quite resource intensive and freezes the server at some particular time": useEffect is not resource-intensive at all.
I am running a cron job every 5 minutes to get data from a 3rd-party API. It can be N requests at a time from the NodeJS application. Below are the details and code samples:
1> Running the cron job every 5 minutes:
const cron = require('node-cron');
const request = require('request');
const otherServices = require('./services/otherServices');

cron.schedule("0 */5 * * * *", function () {
  initiateScheduler();
});
2> Get the list of elements for which I want to initiate the request. It can receive N elements. I call the request function (getSingleElementUpdate()) in a forEach loop:
var initiateScheduler = function () {
  // Database call to get elements list
  otherServices.moduleName()
    .then((arrayList) => {
      arrayList.forEach(function (singleElement, index) {
        getSingleElementUpdate(singleElement, 1);
      }, this);
    })
    .catch((err) => {
      console.log(err);
    })
}
3> Start initiating the request for each singleElement. Please note that I don't need any callback if I receive a successful (200) response from the request; I just have to update my database entries on success.
var getSingleElementUpdate = function (singleElement, count) {
  var bodyReq = {
    "id": singleElement.elem_id
  }
  var options = {
    method: 'POST',
    url: 'http://example.url.com',
    body: bodyReq,
    dataType: 'json',
    json: true,
    crossDomain: true
  };
  request(options, function (error, response, body) {
    if (error) {
      if (count < 3) {
        count = count + 1;
        // retry the same element, up to 3 attempts
        getSingleElementUpdate(singleElement, count);
      }
    } else {
      // Request success
      // No callback required;
      // just need to update database entries on a successful response
    }
  });
}
I have already checked this:
request-promise: but I don't need any callback after a successful request, so I didn't see any advantage to using it in my code. Let me know if you have any point in favor of adding it.
I need your help with the following things:
I have checked the performance when I receive 10 elements in the arrayList of step 2. The problem is that I have no clear picture of what will happen when I start receiving 100 or 1000 elements in step 2. So I need your help in determining whether I need to update my code for that scenario, and whether there is anything I missed that would degrade performance. Also, how many requests can I make at a time, at maximum? Any help from you is appreciated.
Thanks!
AFAIK there is no hard limit on the number of requests. However, there are (at least) two things to consider: your hardware limits (memory/CPU) and the remote server's latency (is it able to respond to all requests within the 5 minutes before the next batch?). Without knowing the context, it's also impossible to predict what scaling mechanism you might need.
The question is actually more about app architecture than about a specific piece of code, so you might want to try Software Engineering Stack Exchange instead of SO.
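That said, if the batch does grow to hundreds or thousands of elements, one option is to cap how many requests are in flight at once instead of firing them all from forEach. A sketch, assuming getSingleElementUpdate() is reworked to return a promise that settles when its request completes:

var initiateScheduler = function () {
  otherServices.moduleName()
    .then((arrayList) => {
      const maxInFlight = 10; // tune to your hardware and the remote API
      let index = 0;
      const worker = () => {
        if (index >= arrayList.length) return Promise.resolve();
        const singleElement = arrayList[index++];
        return getSingleElementUpdate(singleElement, 1)
          .catch((err) => console.log(err)) // keep the worker alive on failure
          .then(worker); // pick up the next element
      };
      // start maxInFlight workers that drain the list together
      return Promise.all(Array.from({ length: maxInFlight }, worker));
    })
    .catch((err) => console.log(err));
}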