I am building an application in which I send 3 concurrent axios calls from the frontend to the backend using the axios.all function; these calls make changes in a MongoDB database. The problem is that I want to send these requests in such a way that either all 3 succeed, or, if any one of the 3 fails, none of the others take effect.
How can I do this in JavaScript?
let one = "request link 1";
let two = "request link 2";
let three = "request link 3";

const requestOne = axios.post(one, newInventory);
const requestTwo = axios.post(two, element);
const requestThree = axios.post(three, newObj);

axios.all([requestOne, requestTwo, requestThree])
  .then(axios.spread((...responses) => {
    alert("Changes were made successfully");
    window.location.reload();
  }))
  .catch((err) => {
    alert("Some error has occurred: " + err);
  });
Here is the code. I am making 3 requests (requestOne, requestTwo, requestThree). Let's consider a case where requestOne fails due to some reason while requestTwo and requestThree succeed. This is what I want to prevent. If any of the requests fails, I want to revert the changes made by the other successful requests: either all the requests succeed, or all of them fail.
axios.all and axios.spread are deprecated, and this is mentioned on their GitHub page as well. Here's the link: https://github.com/axios/axios#concurrency-deprecated.
So, since it's deprecated, use Promise.all instead.
Below is your code with Promise.all implemented:

let one = "request link 1";
let two = "request link 2";
let three = "request link 3";

const requestOne = axios.post(one, newInventory);
const requestTwo = axios.post(two, element);
const requestThree = axios.post(three, newObj);

Promise.all([requestOne, requestTwo, requestThree])
  .then((res) => {
    alert("Changes were made successfully");
    window.location.reload();
  })
  .catch((err) => {
    alert("Some error has occurred: " + err);
  });
If any of the promises rejects, only that error is returned; otherwise it resolves with the responses of all the promises. However, the rejection of one promise will not stop the other requests: they have already been fired concurrently by the time Promise.all settles.
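One partial mitigation, sketched below with timers standing in for the axios calls: pass a shared AbortController signal to every request (axios supports a `signal` option since v0.22) and abort the siblings as soon as one rejects. Note this only cancels requests still in flight; it cannot undo writes the server has already committed, so true all-or-nothing behaviour really needs a single backend endpoint that wraps the three MongoDB writes in one transaction. `fakeRequest` and `allOrNothing` are made-up names for this illustration.

```javascript
// Hypothetical sketch: abort the remaining requests as soon as one fails.
// In real code each task would be axios.post(url, body, { signal }).
function fakeRequest(name, ms, shouldFail, signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      if (shouldFail) reject(new Error(`${name} failed`));
      else resolve(`${name} ok`);
    }, ms);
    // If a sibling request fails, this one is cancelled too.
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error(`${name} aborted`));
    });
  });
}

async function allOrNothing(taskFactories) {
  const controller = new AbortController();
  const tasks = taskFactories.map((make) => make(controller.signal));
  try {
    return await Promise.all(tasks);
  } catch (err) {
    controller.abort(); // stop the in-flight siblings
    throw err;
  }
}
```

Again, this only prevents requests that have not completed yet; reverting already-applied database changes has to happen server-side.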
The problem
FetchError: request to https://direct.faforever.com/wp-json/wp/v2/posts/?per_page=10&_embed&_fields=content.rendered,categories&categories=638 failed, reason: connect ECONNREFUSED
I'm doing some API calls for a website using fetch. Usually there are no issues: when a request "fails", the catch block gets it and my website continues to run. However, when the server that hosts the API is down, my fetch calls crash the website entirely (despite being wrapped in a try/catch).
As far as I'm concerned, shouldn't the catch block catch the error and continue to the next call? Why does it crash everything?
My wanted solution
For the website to just move on to the next fetch call, or to catch the error and try again the next time the function is called, rather than crashing the entire website.
The code
Here is an example of my fetch API call (process.env.WP_URL is https://direct.faforever.com):
async function getTournamentNews() {
  try {
    let response = await fetch(`${process.env.WP_URL}/wp-json/wp/v2/posts/?per_page=10&_embed&_fields=content.rendered,categories&categories=638`);
    let data = await response.json();
    // Convert to a JS array rather than an object; otherwise we can't sort it.
    let dataObjectToArray = Object.values(data);
    let sortedData = dataObjectToArray.map(item => ({
      content: item.content.rendered,
      category: item.categories
    }));
    let clientNewsData = sortedData.filter(article => article.category[1] !== 284);
    return clientNewsData; // no await needed: this is already a plain array
  } catch (e) {
    console.log(e);
    return null;
  }
}
Here's the whole code (this whole thing is being called by express.js at line 246; the extractor file is first).
Extractor / Fetch API Calls file
https://github.com/FAForever/website/blob/New-Frontend/scripts/extractor.js
Express.js file in line 246
https://github.com/FAForever/website/blob/New-Frontend/express.js#:~:text=//%20Run%20scripts%20initially%20on%20startup
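One way to get the wanted behaviour is a small wrapper that retries and then resolves to null instead of ever rejecting, so the caller just checks for null and moves on. This is a sketch, and `safeCall` is a made-up name. Note that as written, getTournamentNews already catches its own errors and returns null, so the crash may instead come from a call site that neither awaits nor handles a rejection; every call site needs to be covered.

```javascript
// Sketch of a retry-then-null wrapper: `fetchFn` stands in for any of
// the fetch functions above. It never rejects, so the caller can simply
// check for null and move on to the next call.
async function safeCall(fetchFn, retries = 2) {
  for (let attempt = 1; attempt <= retries + 1; attempt++) {
    try {
      return await fetchFn();
    } catch (e) {
      console.log(`attempt ${attempt} failed: ${e.message}`);
    }
  }
  return null; // the caller treats null as "no data this round"
}
```

Usage would look like `const news = await safeCall(getTournamentNews);` followed by a null check.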
I have a Google Cloud function that sends notifications to a Firebase topic.
The function worked fine until suddenly it started sending more than one notification (2 or 3) at the same time. After contacting the Firebase support team, they told me I should make the function idempotent, but I don't know how, since it's a callable function.
For more details, this is a reference question containing more detail about the case.
Below is the function's code.
UPDATE 2
It was a bug in the Admin SDK, and they resolved it in the latest release.
UPDATE
The function is already idempotent because it is an event-driven function.
The link above contains the function's logs as proof that it runs only once.
After two months of back and forth, it appears the problem is with the Firebase Admin SDK. The getMessaging().sendToTopic() call retries 4 times on top of the original request, so it runs 5 times by default before throwing an error and terminating the function. The reason for the duplicate notifications is that the Admin SDK occasionally can't reach the FCM server: it tries to send the notification to all subscribers, but partway through it hits an error, so it retries from the beginning, and some users receive one notification while others get 2, 3, or 4.
Now the question is how to prevent these default retries, or how to make the retry continue from where it hit the error. I'll probably ask a separate question.
For now I have a naive workaround: prevent the duplicate notifications on the receiver (the mobile client) by showing only one notification when more than one arrives with the same content within a minute.
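The client-side workaround above can be sketched in plain JavaScript (the real client is a mobile app; `shouldShow` is a made-up helper name): drop a notification when one with identical content arrived within the last minute.

```javascript
// Sketch of the dedup-on-the-receiver rule: suppress a notification if
// one with the same content was already shown within the last minute.
const recent = new Map(); // content -> timestamp of last shown

function shouldShow(content, now, windowMs = 60000) {
  const last = recent.get(content);
  if (last !== undefined && now - last < windowMs) {
    return false; // duplicate within the window: suppress it
  }
  recent.set(content, now);
  return true;
}
```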
const functions = require("firebase-functions");
// The Firebase Admin SDK to access Firestore.
const admin = require("firebase-admin");
const {getMessaging} = require("firebase-admin/messaging");
const serviceAccount = require("./serviceAccountKey.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://mylinktodatabase.firebaseio.com",
});

exports.callNotification = functions.https.onCall((data) => {
  // Grab the text parameters.
  const indicator = data.indicator;
  const mTitle = data.title;
  const mBody = data.body;
  // Topic to send to.
  const topic = "mytopic";
  const options = {
    "priority": "high",
    "timeToLive": 3600,
  };
  let message;
  if (indicator != null) {
    message = {
      data: {
        ind: indicator,
      },
    };
  } else {
    message = {
      data: {
        title: mTitle,
        body: mBody,
      },
    };
  }
  // Send a message to devices subscribed to the provided topic.
  return getMessaging().sendToTopic(topic, message, options)
      .then(() => {
        if (indicator != null) {
          console.log("Successfully sent message");
          return {result: "Successfully sent message", status: 200};
        } else {
          console.log("Successfully sent custom");
          return {result: "Successfully sent custom", status: 200};
        }
      })
      .catch((error) => {
        if (indicator != null) {
          console.log("Error sending message:", error);
          return {result: `Error sending message: ${error}`, status: 500};
        } else {
          console.log("Error sending custom:", error);
          return {result: `Error sending custom: ${error}`, status: 500};
        }
      });
});
The blog post Cloud Functions pro tips: Building idempotent functions shows how to make a function idempotent using two approaches:
Use your event IDs
One way to fix this is to use the event ID, a number that uniquely identifies an event that triggers a background function, and— this is important—remains unchanged across function retries for the same event.
To use an event ID to solve the duplicates problem, the first thing is to extract it from the event context that is accessed through function parameters. Then, we utilize the event ID as a document ID and write the document contents to Cloud Firestore. This way, a retried function execution doesn’t create a new document, just overrides the existing one with the same content. Similarly, some external APIs (e.g., Stripe) accept an idempotency key to prevent data or work duplication. If you depend on such an API, simply provide the event ID as your idempotency key.
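As an illustration of the event-ID approach (not the blog's actual code), here is a minimal sketch in plain JavaScript with a `Map` standing in for the Cloud Firestore collection: the event ID is the document key, so a retried execution finds the existing entry and skips the side effect. `handleEvent` is a made-up name.

```javascript
// Conceptual sketch: key the work by event ID so a retry of the same
// event performs the non-idempotent side effect at most once.
const processed = new Map(); // stand-in for a Firestore collection

function handleEvent(eventId, payload, sideEffect) {
  if (processed.has(eventId)) {
    return "duplicate-skipped"; // a retry of an already-handled event
  }
  processed.set(eventId, payload); // "write the document" keyed by event ID
  sideEffect(payload); // the non-idempotent work (e.g. send notification)
  return "processed";
}
```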
A new lease on retries
While this approach eliminates the vast majority of duplicated calls on function retries, there’s a small chance that two retried executions running in parallel could execute the critical section more than once. To all but eliminate this problem, you can use a lease mechanism, which lets you exclusively execute the non-idempotent section of the function for a specific amount of time. In this example, the first execution attempt gets the lease, but the second attempt is rejected because the lease is still held by the first attempt. Finally, a third attempt after the first one fails re-takes the lease and successfully processes the event.
To apply this approach to your code, simply run a Cloud Firestore transaction before you send your email, checking to see if the event has been handled, but also storing the time until which the current execution attempt has exclusive rights to sending the email. Other concurrent execution attempts will be rejected until the lease expires, eliminating all duplicates for all intents and purposes.
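The lease mechanism can likewise be sketched with an in-memory store standing in for a Firestore transaction (`tryAcquireLease` and `markDone` are made-up helper names): an attempt may run the critical section only if the event is not already done and no other attempt holds a still-valid lease.

```javascript
// Sketch of the lease idea: an execution gets exclusive rights to the
// non-idempotent section for leaseMs; concurrent retries are rejected
// until the lease expires or the event is marked done.
const leases = new Map(); // eventId -> { until, done }

function tryAcquireLease(eventId, now, leaseMs = 60000) {
  const entry = leases.get(eventId);
  if (entry && (entry.done || now < entry.until)) {
    return false; // already handled, or another attempt holds the lease
  }
  leases.set(eventId, { until: now + leaseMs, done: false });
  return true;
}

function markDone(eventId) {
  const entry = leases.get(eventId);
  if (entry) entry.done = true;
}
```

In the real version, the check-and-set in `tryAcquireLease` would run inside a Firestore transaction so that two parallel attempts cannot both acquire the lease.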
Also, as stated in this other question:
Q: Is there a need to make these onCall functions idempotent or will they never perform retries?
A: Calls to onCall functions are not automatically retried. It's up to your application's client-side and server-side code, to agree on a retry strategy.
See also:
Retrying Event-Driven Functions - Best practices
Like the title says, I'm working on an app that calls the YouTube APIs, but sometimes my app makes too many requests, which triggers the API quota limit and makes the app stop working due to errors from the API server. The solution I found on the Internet is to create multiple API keys and loop through them. But I'm having a headache figuring out how to make Axios try multiple API URLs before returning the data. Any tips or tricks?
Here is the sample code I tried:
async function getYoutubeVideoSnippet(id) {
const response = await axios.all([
axios.get(
"https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=" +
id +
"&key=API_KEY"
),
axios.get(
"https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics&id=" +
id +
"&key=API_KEY"
),
]);
return response.data;
}
I don't understand exactly what you want to do, but if you want to make many requests and only need one response, you can use Promise.any. For example:
async function fetchData() {
  const requests = [
    axios.get('...'),
    axios.get('...'),
  ];
  const response = await Promise.any(requests);
  return response.data;
}
With this example you receive the first successful response, but be aware that all the requests are still eventually sent.
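For the original quota problem, a sequential fallback may fit better than Promise.any, since later keys are only tried (and their quota only spent) after an earlier key fails. This is a sketch under that assumption; `requestWithKey` stands in for the axios.get call built with a given API key, and `getWithFallback` is a made-up name.

```javascript
// Try the API keys one at a time; the first success wins, and later
// keys are only used when an earlier key fails (e.g. quota exceeded).
async function getWithFallback(keys, requestWithKey) {
  let lastError;
  for (const key of keys) {
    try {
      return await requestWithKey(key); // first success wins
    } catch (e) {
      lastError = e; // quota exceeded or other failure: try the next key
    }
  }
  throw lastError; // every key failed
}
```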
I have a bulk-create participants function that uses Promise.allSettled to send 100 axios POST requests. The backend is Express and the frontend is React. Each request calls a single "add new participant" REST API. I have set the backend timeout to 15s using connect-timeout, and the frontend timeout is 10s.
My issue is that when I click the bulk-add button, the bulk create is triggered and the Promise.allSettled concurrency starts. However, I cannot send a new request before all the concurrent requests are done. Because I have set up a timeout on the frontend, the new request gets cancelled.
Is there a way I can still make the concurrent requests, but without them blocking other new requests?
This is the frontend code; createParticipant is the API request.
const PromiseArr = [];
for (let i = 0; i < totalNumber; i++) {
  const participant = participantList[i];
  const participantNewDetail = {
    firstName: participant.firstName,
    lastName: participant.lastName,
    email: participant.email,
  };
  PromiseArr.push(
    createParticipant(participantNewDetail)
      .then((createParticipantResult) => {
        processedTask++;
        processMessage = `Processing adding participant`;
        dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } });
      })
      .catch((error) => {
        processedTask++;
        processMessage = `Processing adding participant`;
        dispatch({ type: ACTIVATE_PROCESS_PROCESSING, payload: { processedTask, processMessage } });
        throw new Error(
          JSON.stringify({
            status: "failed",
            value: error.data.message ? error.data.message : error,
          })
        );
      })
  );
}

const addParticipantResults = await Promise.allSettled(PromiseArr);
PromiseArr is the promise array, with length 100.
Is it possible to split this big batch into smaller promise arrays and send those to the backend, so that in the gaps between them I can send another new request like retriveUserDetail?
If you're sending 100 requests at a time to your server, that's just going to take a while for the server to process. It would be best to find a way to combine them all into one request, or into a very small number of requests. Some server APIs have efficient ways of doing multiple queries in one request.
If you can't do that, then you should probably send them 5-10 at a time max, so the server isn't being asked to handle so many simultaneous requests, which causes your additional request to go to the end of the line and take too long to process. That will let you send other things and get them processed while you're chunking away on the 100, without waiting for all of them to finish.
If this is being done from a browser, you also have browser safeguard limitations to deal with: the browser refuses to send more than N requests to the same host at a time. If you send more than that, it queues them up and holds onto them until some prior requests have completed. This keeps one client from massively overwhelming the server, but it also creates a long line of requests that any new request has to join at the end. The way to deal with that is to never send more than a small number of requests to the same host at once, so that the queue is short when you want to send a new request.
You can look at these snippets of code that let you process an array of data N-at-a-time rather than all at once. Each of these has slightly different control options so you can decide which one fits your problem the best.
mapConcurrent() - Process an array with no more than N requests in flight at the same time
pMap() - Similar to mapConcurrent with more argument checking
rateLimitMap() - Process max of N requestsPerSecond
runN() - Allows you to continue processing upon error
These all replace both Promise.all() and whatever code you had for iterating your data, launching all the requests, and collecting the promises into an array. Each function takes an input array of data and a function to call; that function gets passed an item of the data and should return a promise that resolves to the result of its request. They return a promise that resolves to an array of results in the original array order (the same return value as Promise.all()).
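As a minimal sketch of the N-at-a-time idea (similar in spirit to the mapConcurrent()/pMap() helpers mentioned above, but not their actual code; `mapLimit` is a made-up name): run `fn` over `items` with at most `limit` promises in flight, resolving to results in the original order, like Promise.all().

```javascript
// Run fn over items with at most `limit` concurrent promises.
// Results come back in the original order, like Promise.all().
async function mapLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index (safe: JS is single-threaded)
      results[i] = await fn(items[i], i);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

With `limit` set to 5-10, the browser's per-host connection queue stays short, so an unrelated request like retriveUserDetail can get through between chunks.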
I'm trying to create a chatbot in DialogFlow that checks the status of your insurance claim.
I have set up a call to an external API (mock), and I use a promise to wait for the response and then return it. However, I consistently get [empty response] from DF, despite getting the correct data from the mock API. Is it just taking too long?
Below is the relevant code:
var callClaimsApi = new Promise((resolve, reject) => {
  try {
    https.get('https://MOCKAPIURL.COM', (res) => {
      res.setEncoding('utf8');
      let rawData = '';
      res.on('data', (chunk) => { rawData += chunk; });
      res.on('end', () => {
        resolve(JSON.parse(rawData));
      });
    });
  } catch (e) {
    reject(e.message);
  }
});

function checkClaims(agent) {
  callClaimsApi
    .then(function(fulfillment) {
      console.log("fulfillment name: " + fulfillment.name);
      agent.add("It looks like you want to find a claim for " + fulfillment.name);
    })
    .catch(function(error) { console.log(error); });
}

intentMap.set('checkClaims', checkClaims);
Here is the output from the logs:
The issue is that, although you're doing all your processing through a Promise, you are not returning that Promise from your handler. The library needs the Promise so it knows there is an asynchronous operation going on and that it should wait until that operation completes before sending a reply.
Fortunately, in your case, you may be able to do this by just adding the return statement before callClaimsApi.
You may also wish to look into using a library such as axios to do the http call, since it has promise support built-in.
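The fix can be sketched like this, with `callClaimsApi` and `agent` replaced by minimal stand-ins just to make it runnable on its own; the only change that matters is the added `return`.

```javascript
// Stand-in for the real API promise, mocked as already resolved.
const callClaimsApi = Promise.resolve({ name: "Jane Doe" });

function checkClaims(agent) {
  // Returning the chain tells the library to wait before replying.
  return callClaimsApi
    .then((fulfillment) => {
      agent.add("It looks like you want to find a claim for " + fulfillment.name);
    })
    .catch((error) => console.log(error));
}
```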
According to the documentation, Dialogflow's wait time is 5 seconds, so if you can optimize your code, that would be awesome. There are some tricks to make Dialogflow wait longer: use follow-up events, or use one intent to make the request and respond to the user with a confirmation (e.g. "Can you wait for 3 seconds? Yes/No"); by that time the response will be available, so you can send it in the next message.
You can check this post for more info.