I am getting this error: "(#613) Calls to graph_url_engagement_count have exceeded the rate of 10 calls per 3600 seconds." I want to keep the API calls within the limit for a given link.
Note that the limit of 10 calls per hour applies per link.
This is the function I am using:
const getFacebookStats = async (link) => {
  try {
    const resp = await axios.get(
      `https://graph.facebook.com/v12.0/?id=${link}&fields=engagement&access_token=${BEARER_TOKEN_FACEBOOK}`
    );
    return resp.data.engagement;
  } catch (err) {
    // Handle the error here
    console.error(err);
  }
};
Any help would be much appreciated.
Thank you
Solved this problem using an LRU cache.
cache.set(link, cache.get(link) == undefined ? 1 : cache.get(link) + 1);
Every time a link was passed to the getFacebookStats function, the link and its call count were stored in the LRU cache; if the same link was requested multiple times, its count was incremented.
In the LRU cache settings, maxAge was set to 1 hour:
maxAge: 1000 * 60 * 60,
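For reference, here is a minimal sketch of how the pieces could fit together, assuming the lru-cache package (v6-style options, matching the maxAge shown above); getFacebookStatsLimited and MAX_CALLS_PER_HOUR are illustrative names, not part of the original code:
const LRU = require('lru-cache');

const cache = new LRU({
  max: 500,               // track at most 500 distinct links
  maxAge: 1000 * 60 * 60, // entries expire after 1 hour
});

const MAX_CALLS_PER_HOUR = 10;

const getFacebookStatsLimited = async (link) => {
  const count = cache.get(link) ?? 0;
  if (count >= MAX_CALLS_PER_HOUR) {
    return null; // quota for this link is spent; skip the API call
  }
  // Note: set() refreshes the entry's age, so the hour is measured from
  // the most recent call rather than the first (a conservative window).
  cache.set(link, count + 1);
  return getFacebookStats(link);
};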
I wonder if someone could help me.
I have a non-professional development license for a reverse geocoder within a JavaScript program, where I am only allowed two requests per second, otherwise I get a 429 error. I have 3 sets of co-ordinates I wish to feed into the reverse geocoder; the first two are processed correctly, but after that I get an error and the third one isn't processed. I thought that if I used the SetTimeout function, either in the for loop or in one of the lower-level functions, it would delay the requests enough to process all 3 addresses, but no matter where I place the SetTimeout function it continues to get the 429 error. When I log the time to the console, I can see that the three calls to the reverse geocoder happen at the same time. Can anyone suggest where I can place the timeout to slow down the requests enough?
Thanks (last attempted version of the code below)
for (let i = 0; i < mapMarkers.length; i++) {
  // use a reverse geocode function to build a display address for each of the co-ordinates chosen
  SetTimeout(reverseGeocode(mapMarkers[i].getLatLng()), 1000);
}
function reverseGeocode(coords) {
  var today = new Date();
  var time = today.getHours() + ":" + today.getMinutes() + ":" + today.getSeconds();
  console.log("Into reverse geocoder " + time);
  let REVERSEURL = `https:....`
  let TOKEN = '.....'
  let url = `${REVERSEURL}key=${TOKEN}&lat=${coords.lat}&lon=${coords.lng}`;
  // do a reverse geocoding call
  getData(url, coords);
}
async function getData(url, coords) {
  try {
    const data = await getRequest(url);
    // create a display address with the first three elements containing something in the address dictionary
    let address = createAddressString(data.address, 3) +
      " ( " + coords.lat.toFixed(5) + ", " + coords.lng.toFixed(5) + " ) ";
    // Insert a div containing the address just built
    $("#addresses-container").append("<div class='request-page'>" + address + '</div>');
  } catch (e) {
    console.log(e);
  }
}
async function getRequest(url) {
  const res = await fetch(url);
  if (res.ok) {
    return res.json();
  } else {
    throw new Error("Bad response");
  }
}
Your current logic is invoking the reverseGeocode() method immediately, because you pass the return value of that function call to the timeout. You need to provide a function reference instead.
Even if you correct that issue, you would merely delay all the requests by 1 second; they would still fire at the same time.
To stagger them, you can use the index of the iteration to multiply the delay. For example, the following logic will fire 1 request every 250 ms. The delay can be adjusted to match your API provider's rate limit. Also note that SetTimeout() needs to be setTimeout():
for (let i = 0; i < mapMarkers.length; i++) {
  setTimeout(() => reverseGeocode(mapMarkers[i].getLatLng()), 250 * i);
}
Aside from the immediate problem, it would be worth checking whether the API can accept multiple lookups in a single request, which would alleviate the issue. Failing that, I'd suggest finding an alternative provider that allows more requests per period.
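If you would rather keep the requests strictly sequential instead of scheduling them all up front, an async loop with an awaited delay achieves the same effect. A sketch, reusing the reverseGeocode() function from above:
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function geocodeAll(mapMarkers) {
  for (const marker of mapMarkers) {
    reverseGeocode(marker.getLatLng());
    await sleep(500); // stay under the 2-requests-per-second limit
  }
}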
I am using DiscordJS, and their API has a character limit and will reject a message if the limit is exceeded.
Through fetchData() I successfully build assignedPrint, an array of messages that I would like to send over the API.
So I already have the array ready to go, but I am also using an auto-update feature (courtesy of WOK) where, every set amount of time, the array is flushed and refilled with fresh data and sent again to edit the original message through the message.edit() method.
It works just fine, but I foresee that my array might get bigger over time, and sending a single message may break things because of the API's max character limit.
const getText = () => {
  return assignedPrint
    + greengo + 'Updating in ' + counter + 's...' + greenstop;
};

const updateCounter = async (message) => {
  message.edit(getText());
  counter -= seconds;
  if (counter <= 0) {
    counter = startingCounter;
    // empty the array before fetching again for the message edit
    assignedPrint = [];
    await fetchData();
  }
  setTimeout(() => {
    updateCounter(message);
  }, 1000 * seconds);
};

module.exports = async (bot) => {
  await fetchData();
  const message = await bot.channels.cache.get(tgt).send(getText());
  updateCounter(message);
};
As you can see, the hurdle is that getText() includes everything.
I tried sending one element at a time using for (const e of assignedPrint) and it worked, but how can I edit every message upon refreshing the data, knowing that new data may be added or some already-sent data could be removed?
The easiest approach, of course, is to do it in one single message, but again, that may hit the quota and cause a crash.
Thanks.
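One way to stay under the limit is to split assignedPrint into chunks that each fit in a single message, keep references to the messages already sent, and reconcile on every refresh: edit the ones that exist, send new ones when the data grows, delete extras when it shrinks. A sketch, assuming Discord's 2000-character limit for a regular message; buildChunks, sentMessages, and syncMessages are illustrative names, not part of the original code:
const LIMIT = 2000; // Discord's max length for a normal message

// Group the array elements into strings that each fit in one message.
// Assumes no single element is longer than LIMIT by itself.
const buildChunks = (lines) => {
  const chunks = [];
  let current = '';
  for (const line of lines) {
    if (current.length + line.length + 1 > LIMIT) {
      chunks.push(current);
      current = '';
    }
    current += line + '\n';
  }
  if (current) chunks.push(current);
  return chunks;
};

let sentMessages = []; // Message objects already sent to the channel

const syncMessages = async (channel) => {
  const chunks = buildChunks(assignedPrint);
  for (let i = 0; i < chunks.length; i++) {
    if (sentMessages[i]) {
      await sentMessages[i].edit(chunks[i]); // reuse an existing message
    } else {
      sentMessages[i] = await channel.send(chunks[i]); // data grew: send a new one
    }
  }
  // Data shrank: remove messages that are no longer needed.
  for (const extra of sentMessages.splice(chunks.length)) {
    await extra.delete();
  }
};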
Hi, I'm trying to delete messages that are 7 days old, using cron to schedule when this should happen. In my code I fetch messages from a channel called log_channel and compare the message timestamps to const days = moment().add(7, 'd');. However, I'm stuck at the if statement if (msgs < days), as this does not seem to return anything.
Here is my code for reference:
const cron = require('node-cron');
const moment = require('moment');

module.exports = new Event("ready", client => {
  try {
    const log_channel = client.channels.cache.find(c => c.name == "log");
    cron.schedule("0 18 22 * * *", function () {
      console.log("Attempting to purge log messages...");
      const days = moment().add(7, 'd'); // use moment to set how many days to compare timestamps with
      log_channel.messages.fetch({ limit: 100 }).then(messages => {
        const msgs = Date.now(messages.createdTimestamp);
        if (msgs < days) {
          log_channel.bulkDelete(100);
          console.log("messages deleted");
        }
      });
    });
  } catch (err) {
    console.error(err);
  }
});
First of all, I noticed your function wasn't an asynchronous function. We need an async function because we'll be using asynchronous, promise-based behavior in your code: log_channel.messages.fetch should be await log_channel.messages.fetch. According to the Message#fetch() docs, fetch() returns a Promise that resolves with the fetched messages.
The next part is that you missed out forEach(); forEach() executes the callback function once for each element in the collection.
Finally, you are comparing a millisecond timestamp with a moment object, which will not work and is the reason your if never returns true. If you want the messages that are at least 7 days old, build a cutoff with moment().subtract(7, 'd') and compare each message's createdTimestamp against it. Note that the Discord API limits bulk deletion to messages younger than 14 days, so keep that in mind.
cron.schedule("* 18 22 * * *", async function () {
  console.log("Attempting to purge mod log messages...");
  const cutoff = moment().subtract(7, 'd'); // messages older than this get deleted
  await log_channel.messages.fetch({ limit: 100 })
    .then(messages => {
      messages.forEach(msg => {
        if (moment.utc(msg.createdTimestamp).isBefore(cutoff)) {
          msg.delete();
        }
      });
    });
});
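A variant on the same idea: bulkDelete() also accepts a collection of messages, so you can filter the fetched messages first and remove them in one call. A sketch, reusing the messages collection from above (the second argument tells discord.js to silently drop anything older than the 14-day API limit):
const cutoff = moment().subtract(7, 'd');
const oldMessages = messages.filter(msg =>
  moment.utc(msg.createdTimestamp).isBefore(cutoff)
);
log_channel.bulkDelete(oldMessages, true);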
I have a websocket that sends around 100 messages per second; I want to limit it to only 1 message per 500 ms.
onMessage(data) {
  console.log(data); // This prints around 100 different times within 1 second
}
I tried something like the below. Is this the right approach, or is there a better way to do it? This code still runs 100 times per second.
var lastlog = new Date().getTime();

onMessage(data) {
  var currenttime = new Date().getTime();
  if (currenttime - lastlog > 500) {
    console.log(data);
    lastlog = new Date().getTime();
  }
}
P.S.: I can ignore the remaining data, and could even reduce the 500 ms to 200 ms, i.e. 5 messages per second.
Here is another way of doing it, using the npm package throttle-debounce. This method is not "better"; it can result in less code typed, but you might not want the dependency on the package.
You can use the throttle function and specify how many milliseconds must pass until it can be called again. Setting the second argument to true prevents the last request from being deferred (see https://www.npmjs.com/package/throttle-debounce#notrailing).
The example below uses the library to throttle how often a button is pressed.
const { throttle } = throttleDebounce

const handleRequest = throttle(500, true, () => {
  console.log('this request will be handled')
})

<script src='https://cdn.jsdelivr.net/npm/throttle-debounce@3.0.1/umd/index.js'></script>
<button onClick="handleRequest()">Mimic sending message</button>
Your use case might look like this:
import { throttle } from 'throttle-debounce'

const onMessage = throttle(500, true, (data) => {
  console.log(data);
})
Fewer lines than your example, but that doesn't mean it's "better".
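If you prefer to avoid the dependency, a leading-edge throttle like the one the package provides is only a few lines to hand-roll. A sketch, equivalent in spirit to throttle(500, true, fn): the first call fires immediately and later calls inside the window are dropped:
const throttle = (ms, fn) => {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= ms) { // outside the window: let the call through
      last = now;
      fn(...args);
    }
  };
};

const onMessage = throttle(500, (data) => {
  console.log(data);
});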
I am trying to create an HTTPS function in Google Cloud Functions that, when called, makes copies of a document in a Firebase database, adds an ID and timestamp to each copy, and saves it to a collection in Firestore. The function repeats for a certain amount of time at a given interval. For example, if told to run for 2 minutes making a copy every 10 seconds, it will have made 12 unique copies by the time it finishes.
I have already implemented this function for each specific document in the Firebase database, but that is not scalable: if I eventually had 1000 documents in the database, I would need 1000 unique functions. So I want a single HTTPS function that makes a copy of every document in the database, adds the unique ID and timestamp to each copy, and saves it to Firestore at its unique path.
Here is an example of what I have.
// Duration of function runtime in minutes
var duration = 2;
// Interval at which the function will be called, in seconds
var interval = 10;

// Make a copy of doc 113882052 with the selected duration, interval, and runtime options
exports.makeCopy_113882052 = functions.runWith(runtimeOpts).https.onRequest((request, response) => {
  var set_duration = 0; // counter for each iteration of runEveryXSeconds
  var functId = setInterval(runEveryXSeconds, interval * 1000); // runEveryXSeconds will run every interval (in milliseconds)

  // runEveryXSeconds runs based on the desired parameters
  function runEveryXSeconds() {
    // If runEveryXSeconds has reached its last iteration, clear functId so that it stops after one last iteration
    if (set_duration >= ((duration * 60 / interval) - 1)) {
      console.log("have made ", ((duration * 60 / interval) - 1), " copies, will now exit after one last copy");
      clearInterval(functId);
    }
    set_duration++; // increment set_duration at the beginning of every loop
    console.log("Add a charger every ", interval, " seconds until ", duration, " min is up! Currently on iteration ", set_duration);

    // grab a snapshot of the database and add it to Firestore
    admin.database().ref('/113882052').once('value').then(snapshot => {
      var data = snapshot.val(); // make a copy of the snapshot
      data.station_id = "113882052"; // add the station id to the copy
      data.timestamp = admin.firestore.FieldValue.serverTimestamp(); // add the current server timestamp to the copy
      admin.firestore().collection('copies/113882052/snapShots').add(data); // add the copy to Firestore at the given path
      // only return when all iterations are complete
      if (set_duration >= (duration * 60 / interval)) {
        console.log("return successfully");
        response.status(200).send('Success'); // send success code if completed successfully
      }
      return null;
    }).catch(error => {
      console.log('error at promise');
      response.send(error); // send error code if there was an error
      return null;
    });
  } // End of runEveryXSeconds
}); // End of makeCopy_113882052
So this function makes a copy of doc 113882052 from Firebase, adds the station_id "113882052" and the current timestamp to the copy, and saves it to a Firestore collection at the path "copies/113882052/snapShots". It does this 12 times.
All of the other docs and their associated functions work the same way; just swap out the 9 digits for a different 9.
My initial thought process was to use wildcards, but those are only used with triggers like onCreate and onUpdate. I am not using those, I am using once(), so I am not sure this is possible.
I have tried swapping out the once() method for onUpdate(), like so:
functions.database.ref('/{charger_id}').onUpdate((change, context) => {
  const before_data = change.before.val();
  const after_data = change.after.val();
  //const keys = Object.keys(after_data);
  var data = change.after.val();
  var sta_id = context.params.charger_id;
  data.station_id = sta_id; // add the station id to the copy
  data.timestamp = admin.firestore.FieldValue.serverTimestamp(); // add the current server timestamp to the copy
  admin.firestore().collection('chargers/' + sta_id + '/snapShots').add(data); // add the copy to Firestore at the given path
  // only return when all iterations are complete
  if (set_duration >= (duration * 60 / interval)) {
    console.log("return successfully");
    response.status(200).send('Success'); // send success code if completed successfully
  }
  return null;
});
but it does not work. I believe this is because it would only make a copy if the doc happened to be updated just as the function was called, but I am not sure.
Is there a way to use wildcards to achieve what I want, and is there another way to do this without wildcards?
Thanks!
If you want to make a copy of all nodes in the Realtime Database, you'll need to read the root node of the database.
So instead of admin.database().ref('/113882052'), you'd start with admin.database().ref(). Then in the callback/completion handler, you loop over the children of the snapshot:
admin.database().ref().once('value').then(rootSnapshot => {
  rootSnapshot.forEach(snapshot => {
    var data = snapshot.val(); // make a copy of the snapshot
    data.station_id = snapshot.key; // add the station id to the copy
    data.timestamp = admin.firestore.FieldValue.serverTimestamp(); // add the current server timestamp to the copy
    ...
  })
})
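If you also want the function to wait until every copy is saved before responding, you can collect the add() promises and await them together. A sketch along the same lines, mirroring the copies/<id>/snapShots path from the original function:
admin.database().ref().once('value').then(rootSnapshot => {
  const writes = [];
  rootSnapshot.forEach(snapshot => {
    const data = snapshot.val();
    data.station_id = snapshot.key;
    data.timestamp = admin.firestore.FieldValue.serverTimestamp();
    writes.push(
      admin.firestore().collection(`copies/${snapshot.key}/snapShots`).add(data)
    );
  });
  return Promise.all(writes); // resolve only after every copy is written
});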