Let us say I am making multiple API calls from a React application. "Multiple API calls" in this context means clicking a checkbox several times in quick succession. Every time a checkbox is checked, an API call is triggered.
To illustrate this situation more precisely, I will add a screenshot of the user interface:
Every time a checkbox is checked, this function is called:
export function getApplications(authenticationToken, url, query, queryStringIn) {
    return function (dispatch) {
        const config = {
            headers: { 'Authorization': "Bearer " + authenticationToken }
        };
        let queryString;
        if (queryStringIn === "" || queryStringIn === null) {
            queryString = queryStringBuilder.buildQuery(query);
        } else {
            queryString = queryStringIn;
        }
        let fetchTask = fetch(url + "/Applications?" + queryString.toString(), config).then(data => {
            return data.json();
        }).then(applications => {
            dispatch(loadApplicationsSuccess(applications, queryString, query));
        }).catch(error => {
            dispatch(loadApplicationsFailed(error));
        });
        addTask(fetchTask); // ensure that server rendering is waiting for this to complete
        return fetchTask;
    };
}
The problem occurs when I click a checkbox so quickly that the API has not yet returned a response before I make a new call to the same API.
I want a behavior where only the latest API call takes effect, and any earlier calls that are still pending are cancelled.
What do you recommend in such a case?
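One way to get "only the last call wins" behaviour with plain fetch is to abort the previous in-flight request before starting a new one. This is only a minimal sketch, not your original thunk: it assumes the query string is already built, reuses your loadApplicationsSuccess / loadApplicationsFailed action creators, leaves out the addTask server-rendering part, and the module-level currentController is an addition of mine. An aborted request rejects with an AbortError, which is simply ignored.

let currentController = null; // tracks the most recent in-flight request

export function getApplications(authenticationToken, url, queryString) {
    return function (dispatch) {
        // Abort the previous request, if any, so only the last call "wins".
        if (currentController) {
            currentController.abort();
        }
        currentController = new AbortController();

        const config = {
            headers: { 'Authorization': "Bearer " + authenticationToken },
            signal: currentController.signal
        };

        return fetch(url + "/Applications?" + queryString, config)
            .then(response => response.json())
            .then(applications => dispatch(loadApplicationsSuccess(applications, queryString)))
            .catch(error => {
                if (error.name === 'AbortError') {
                    return; // this request was superseded by a newer one; ignore it
                }
                dispatch(loadApplicationsFailed(error));
            });
    };
}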
Below is my .ts file for the Alarm Component; in the HTML template I am using a simple *ngFor over criticalObject.siteList to display the records.
This is not the original code (I have simplified it), but the problem I am facing is that rapidly clicking the refresh button (which fires an HTTP request) adds duplicate siteNames to the list, and that should not happen. I have heard of debounceTime and shareReplay and tried applying them here, but they don't really make sense in this case.
NOTE: I have to fire the HTTP request on every refresh button click.
Keenly waiting for help.
criticalObject = { siteList: [] };
siteList = [{ siteName: "c404" }, { siteName: "c432" }];

onRefresh() {
    this.criticalObject.siteList = [];
    this.siteList.forEach(elem => {
        this.getAlarmStatus(elem);
    });
}

getAlarmStatus(item) {
    this.alarmService.getAlarmStatusBySite(item.siteName).subscribe(data => {
        if (data) {
            // do some calculations
            if (this.criticalObject.siteList.length === 0) {
                this.criticalObject.siteList.push({
                    siteName: item.siteName
                });
            }
            this.criticalObject.siteList.forEach((elem, idx) => {
                if (elem.siteName === item.siteName) {
                    return;
                } else if (idx === this.criticalObject.siteList.length - 1) {
                    this.criticalObject.siteList.push({
                        siteName: item.siteName
                    });
                }
            });
        }
    });
}
I made a silly mistake (I am new to JavaScript): I found out that you cannot return from a forEach loop, and that is why I was getting duplicate records. A return statement inside a forEach callback acts like a continue in JavaScript.
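For anyone hitting the same issue, a duplicate check that does not rely on returning out of a loop avoids the problem entirely. The sketch below is built from the simplified code above (the service and object names are from the question, everything else is assumed): Array.prototype.some() reports whether a matching siteName is already in the list, so the push happens at most once per site.

getAlarmStatus(item) {
    this.alarmService.getAlarmStatusBySite(item.siteName).subscribe(data => {
        if (data) {
            // do some calculations
            // some() returns true if any existing entry already has this siteName
            const alreadyAdded = this.criticalObject.siteList.some(
                elem => elem.siteName === item.siteName
            );
            if (!alreadyAdded) {
                this.criticalObject.siteList.push({ siteName: item.siteName });
            }
        }
    });
}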
I have a button the user can click to make an HTTP call that fetches data.
When the user clicks the button the first time, I make the HTTP call. If the user clicks the button again, I just want to provide the data that was fetched earlier.
How do I do that without storing it in a local variable explicitly? I have tried a few things, but none seem to work; it always makes the HTTP request. I have tried the "shareReplay" and "share" operators.
<button (click)="getData()">Click me</button>
getData() {
    source$.pipe(
        switchMap(sourceVal => {
            const source2$ = this.getSource2();
            // I do not want this call to be made on subsequent button clicks, because it's reference data
            const source3$ = this.getSource3();
            return combineLatest([source2$, source3$]);
        }),
        map(([source2Val, source3Val]) => {
            // do some processing
            return 'done';
        })
    )
}
I am using Angular and RxJS.
You can disable the button or prevent sending multiple requests via a variable.
fetching = false;

getData() {
    if (!this.fetching) {
        this.fetching = true;
        this.http.get('url').pipe(
            shareReplay(1),
            finalize(() => {
                this.fetching = false;
            })
        ).subscribe(data => {
            // use the data
        });
    }
}
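If the goal is to actually reuse the first response on later clicks (rather than only block repeated requests while one is in flight), another option is to create the reference-data observable once and cache it with shareReplay(1), so later subscriptions replay the cached value instead of triggering a new HTTP call. The sketch below reuses the names from the question (source$, getSource2(), getSource3()); the source3Cached$ field is an assumption of mine, and the operators come from rxjs / rxjs/operators as usual.

ngOnInit() {
    // shareReplay(1) caches the latest emission and replays it to every later
    // subscriber, so the HTTP call behind getSource3() is made only once.
    this.source3Cached$ = this.getSource3().pipe(shareReplay(1));
}

getData() {
    source$.pipe(
        switchMap(sourceVal => {
            const source2$ = this.getSource2(); // still fetched on every click
            return combineLatest([source2$, this.source3Cached$]); // reference data reused
        }),
        map(([source2Val, source3Val]) => {
            // do some processing
            return 'done';
        })
    ).subscribe(result => {
        // use the result
    });
}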
I'm using a cache-first caching strategy for my PWA: for every GET request I first check whether that request exists in the cache; if it does, I return it and then update the cache.
The problem is that users can switch between multiple projects, so when they switch to another project, the first time they open some URL they get the content from the previous project if it exists in the cache.
My solution is to try to add a GET parameter ?project=projectId (project=2, for example) in the service worker, so each project would have its own version of the request saved in the cache.
I wanted to concatenate the project id to event.request.url, but I've read here that it is read-only.
After doing that, I would hopefully have URLs like this in the cache:
Instead of: https://stackoverflow.com/questions
I would have: https://stackoverflow.com/questions?project=1
And: https://stackoverflow.com/questions?project=2
That way I would get questions from the project I'm currently on, instead of getting questions from the previous project if /questions is already saved in the cache.
Is there a way to edit request url in service worker?
My service worker code:
self.addEventListener('fetch', function(event) {
    const url = new URL(event.request.clone().url);
    if (event.request.clone().method === 'POST') {
        // update project id in service worker when it's changed
        if (url.pathname.indexOf('/project/') != -1) {
            // update user data on project switch
            let splitUrl = url.pathname.split('/');
            if (splitUrl[2] && !isNaN(splitUrl[2])) {
                console.log(user);
                setTimeout(function() {
                    fetchUserData();
                    console.log(user);
                }, 1000);
            }
        }
        // do other unrelated stuff to post requests
        // .....
    } else { // HANDLE GET REQUESTS
        // ideally, here I would be able to do something like this:
        if (user.project_id !== 'undefined') {
            event.request.url = event.request.url + '?project=' + user.project_id;
        }
        event.respondWith(async function() {
            const cache = await caches.open('CACHE_NAME');
            const cachedResponsePromise = await cache.match(event.request.clone());
            const networkResponsePromise = fetch(event.request.clone());
            if (event.request.clone().url.startsWith(self.location.origin)) {
                event.waitUntil(async function() {
                    const networkResponse = await networkResponsePromise.catch(function(err) {
                        console.log('CACHE');
                        // return caches.match(event.request);
                        return caches.match(event.request).then(function(result) {
                            // If no match, result will be undefined
                            if (result) {
                                return result;
                            } else {
                                return caches.open('static_cache').then((cache) => {
                                    return cache.match('/offline.html');
                                });
                            }
                        });
                    });
                    await cache.put(event.request.clone(), networkResponse.clone());
                }());
            }
            // news and single photos should be network first
            if (url.pathname.indexOf("news") > -1 || url.pathname.indexOf("/photos/") > -1) {
                return networkResponsePromise || cachedResponsePromise;
            }
            return cachedResponsePromise || networkResponsePromise;
        }());
    }
});
It's possible to use any URL as a cache key when reading/writing to the Cache Storage API. When writing to the cache via put(), for instance, you can pass in a string representing the URL you'd like to use as the first parameter:
// You're currently using:
await cache.put(event.request.clone(), networkResponse.clone())
// Instead, you could use:
await cache.put(event.request.url + '?project=' + someProjectId, networkResponse.clone())
But I think a better approach that would accomplish what you're after is to use different cache names for each project, and then within each of those differently-named caches you would not have to worry about modifying the cache keys to avoid collisions.
// You're currently using:
const cache = await caches.open('CACHE_NAME')
// Instead, you could use:
const cache = await caches.open('CACHE_NAME' + someProjectId)
(I'm assuming that you have some reliable way of figuring out what the correct someProjectId value should be inside of the service worker, based on which client is making the incoming request.)
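Putting that together, a trimmed-down sketch of a GET handler using per-project cache names could look like the following. It assumes someProjectId is already known inside the service worker (for example, kept in a variable that your POST /project/ handler updates) and keeps the cache-first, update-in-background behaviour from the question:

// someProjectId is assumed to be tracked elsewhere in the worker
// (e.g. updated by the POST /project/ handler from the question).
self.addEventListener('fetch', function(event) {
    if (event.request.method !== 'GET') {
        return; // let non-GET requests fall through to the default handling
    }
    event.respondWith(async function() {
        // One cache per project, so identical URLs from different projects never collide.
        const cache = await caches.open('CACHE_NAME' + someProjectId);
        const cachedResponse = await cache.match(event.request);

        // Always refresh this project's cache in the background.
        const networkPromise = fetch(event.request).then(function(networkResponse) {
            if (event.request.url.startsWith(self.location.origin)) {
                cache.put(event.request, networkResponse.clone());
            }
            return networkResponse;
        });
        event.waitUntil(networkPromise.catch(function() { /* offline: keep the old cache entry */ }));

        // Cache-first: serve the cached copy if there is one, otherwise wait for the network.
        return cachedResponse || networkPromise;
    }());
});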
I'm scraping a website with Apify. I want to scrape different types of pages and then combine the data into one dataset. Right now I have different sets of data for each kind of page (users, shots). How do I transfer data between pageFunction executions, e.g. to calculate the follower count for each shot's author?
async function pageFunction(context) {
    const { request, log, jQuery } = context;
    const $ = jQuery;
    if (request.url.indexOf('/shots/') > 0) {
        const title = $('.shot-title').text();
        return {
            url: request.url,
            title
        };
    } else if (request.userData.label === "USER") {
        var followers_count = $('.followers .count').first().text();
        return {
            url: request.url,
            followers_count
        };
    }
}
If I understand the question correctly, you can pass the data through the crawled pages and save only one item at the end. For this use case you can work with userData, which you can pass with every request.
For example, if you would like to pass the data from the /shots page to the USER page, you could do it like this (it requires you to enqueue pages manually to control the flow of the data, and this approach also expects that the /shots type of page is the first one you visit, continuing to the user page from there):
async function pageFunction(context) {
    const { request, log, jQuery } = context;
    const $ = jQuery;
    if (request.url.indexOf('/shots/') > 0) {
        const title = $('.shot-title').text();
        const userLink = 'some valid url to user page'
        // add your request to the queue with the title in the userData
        await context.enqueueRequest({
            url: userLink,
            userData: {
                label: 'USER',
                shotsTitle: title
            }
        })
    } else if (request.userData.label === "USER") {
        var followers_count = $('.followers .count').first().text();
        // here you need to get the shotsTitle and return it
        return {
            url: request.url,
            followers_count,
            shotsTitle: request.userData.shotsTitle
        };
    }
}
If you need to share data between runs of the actor, that is another topic; let me know if this helped.
I would also recommend going through the getting started guide, which is here.
I'm using this Gumroad-API npm package in order to fetch data from an external service (Gumroad). Unfortunately, it seems to use a .then() construct which can get a little unwieldy as you will find out below:
This is my meteor method:
Meteor.methods({
    fetchGumroadData: () => {
        const Gumroad = Meteor.npmRequire('gumroad-api');
        let gumroad = new Gumroad({ token: Meteor.settings.gumroadAccessKey });

        let before = "2099-12-04";
        let after = "2014-12-04";
        let page = 1;
        let sales = [];

        // Recursively defined to continue fetching the next page if it exists
        let doThisAfterResponse = (response) => {
            sales.push(response.sales);
            if (response.next_page_url) {
                page = page + 1;
                gumroad.listSales(after, before, page).then(doThisAfterResponse);
            } else {
                let finalArray = R.unnest(sales);
                console.log('result array length: ' + finalArray.length);
                Meteor.call('insertSales', finalArray);
                console.log('FINISHED');
            }
        };

        gumroad.listSales(after, before, page).then(doThisAfterResponse); // run
    }
});
Since the NPM package exposes the Gumroad API using something like this:
gumroad.listSales(after, before, page).then(callback)
I decided to do it recursively in order to grab all pages of data.
Let me try to re-cap what is happening here:
The journey starts on the last line of the code shown above.
The initial page is fetched, and doThisAfterResponse() is run for the first time.
We first dump the returned data into our sales array, and then we check if the response has given us a link to the next page (as an indication as to whether or not we're on the final page).
If so, we increment our page count and we make the API call again with the same function to handle the response again.
If not, this means we're at our final page. Now it's time to format the data using R.unnest and finally insert the finalArray of data into our database.
But a funny thing happens here. The entire execution halts at the Meteor.call() and I don't even get an error output to the server logs.
I even tried switching out the Meteor.call() for a simple: Sales.insert({text: 'testing'}) but the exact same behaviour is observed.
What I really need to do is to fetch the information and then store it into the database on the server. How can I make that happen?
EDIT: Please also see this other (much simpler) SO question I asked:
Calling a Meteor Method inside a Promise Callback [Halting w/o Error]
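For reference, a common reason Meteor.call() stalls silently inside a bare promise callback on the server is that the callback runs outside a Fiber, so Meteor's APIs cannot run there; and because the .then() chain has no .catch(), any error thrown there is swallowed instead of reaching the logs. One frequently suggested workaround, shown here only as a sketch against the code above, is to wrap the handler with Meteor.bindEnvironment and add a .catch():

// Wrapping the handler with Meteor.bindEnvironment lets Meteor.call() and
// collection operations run from inside the promise callback; the added .catch
// makes sure any remaining error actually reaches the server logs.
let doThisAfterResponse = Meteor.bindEnvironment((response) => {
    sales.push(response.sales);
    if (response.next_page_url) {
        page = page + 1;
        gumroad.listSales(after, before, page).then(doThisAfterResponse);
    } else {
        let finalArray = R.unnest(sales);
        Meteor.call('insertSales', finalArray);
        console.log('FINISHED');
    }
});

gumroad.listSales(after, before, page)
    .then(doThisAfterResponse)
    .catch((err) => console.log('listSales failed:', err));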
I ended up ditching the NPM package and writing my own API call. I could never figure out how to make my call inside the .then(). Here's the code:
fetchGumroadData: () => {
    let sales = [];
    const fetchData = (page = 1) => {
        let options = {
            data: {
                access_token: Meteor.settings.gumroadAccessKey,
                before: '2099-12-04',
                after: '2014-12-04',
                page: page,
            }
        };
        HTTP.call('GET', 'https://api.gumroad.com/v2/sales', options, (err, res) => {
            if (err) { // API call failed
                console.log(err);
                throw err;
            } else { // API call successful
                sales.push(...res.data.sales);
                res.data.next_page_url ? fetchData(page + 1) : Meteor.call('addSalesFromAPI', sales);
            }
        });
    };
    fetchData(); // run the function to fetch data recursively
}
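As a side note, on the server Meteor's HTTP.call can also be used without a callback, in which case it blocks inside the method's Fiber and returns the result (or throws). A sketch of the same pagination written that way, reusing the options object from above, might look like this:

fetchGumroadData: () => {
    let sales = [];
    let page = 1;
    let result;
    do {
        // Without a callback, HTTP.call runs synchronously inside the method's
        // Fiber and returns the parsed response (or throws on failure).
        result = HTTP.call('GET', 'https://api.gumroad.com/v2/sales', {
            data: {
                access_token: Meteor.settings.gumroadAccessKey,
                before: '2099-12-04',
                after: '2014-12-04',
                page: page,
            }
        });
        sales.push(...result.data.sales);
        page = page + 1;
    } while (result.data.next_page_url);
    Meteor.call('addSalesFromAPI', sales);
}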