Situation:
I have a Node.js API that is called many times a second on a website. I am using console.time('RESPONSE TIME') and console.timeEnd('RESPONSE TIME') to measure how long the API takes to respond to the client for each request.
Inside the API, I am using Promise.all() to aggregate responses from 4 different APIs and then return a final response based on what the 4 APIs returned.
Issue:
Everything works as expected except for an occasional warning being logged: Warning: No such label 'RESPONSE TIME' for console.timeEnd(). Why is this, and how do I properly avoid it?
I speculate that it's because Node is asynchronous: while one request may still be waiting for its 4 APIs to respond, another request will have finished and hit console.timeEnd(), ending both timers since they share the same label. But I can't find an answer anywhere.
Your speculation is correct; you should use a unique label for every request.
Assuming every request has a unique identifier stored in req.id, you can use the following:
console.time(`RESPONSE TIME request ${req.id}`)
// await api call
console.timeEnd(`RESPONSE TIME request ${req.id}`)
I am making API calls to list the paths in ADLS Gen2, using maxResults and continuation as URI parameters.
Initially I get the correct continuation token back in the response headers, but when I make subsequent calls to list the remaining files, ADLS sometimes returns a continuation token that ends with "==" (let's call it type one) and sometimes a normal token (type two). When I make an API call using the first type, ADLS gives me an error, while it works fine with the other token.
I searched this before, and what I found in one of the answers was that we have to ignore the response in which the returned token ends with "==" and keep making calls without that continuation token (reusing the same previous request URI) until we get the second (working) type of token.
This becomes cumbersome with larger numbers of files. Is there a better solution for getting the right type of continuation token for making subsequent calls?
The sample request for making the call is
GET https://storageAccountName.dfs.core.windows.net/sampleDirectory?recursive={recursive}&resource=filesystem&maxresults={maxresults}&continuation={continuation}
The two types of sample continuation tokens returned are
LCJhbGciOiJSUzI1NiIsIng1dCI6ImpTMVhvMU9XRGpfNTJ2Ynd==(Not working one)
LCJhbGciOiJSUzI1NiIsIng1dCI6ImpTMVhvMU9XRGpfNTJ2Ynd (working one)
The Official Documentation for ADLS Gen2 for listing paths is
https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/list
The Stack Overflow answer I was referring to: "can't get ADLS Gen2 REST continuation token to work"
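The retry workaround described above could be sketched as follows. This is an illustration only: the fetch-based client, the `x-ms-continuation` header name, and the URL-encoding of the token are my assumptions, not something confirmed in the question. Note in particular that "==" is base64 padding and "=" is significant in a query string, so encoding the token before placing it in the URI is worth checking before resorting to retries.

```javascript
// Build the list-paths URL; encodeURIComponent turns "==" padding into
// "%3D%3D" so it cannot corrupt the query string (an assumption worth
// testing against the failing type-one tokens).
function buildListUrl(base, continuation) {
  return continuation
    ? `${base}&continuation=${encodeURIComponent(continuation)}`
    : base;
}

// Sketch of the workaround from the answer the asker found: if the
// returned token ends with "==", repeat the same request (same URI,
// same continuation) until a working-type token comes back.
async function listNextPage(base, continuation, accessToken) {
  const res = await fetch(buildListUrl(base, continuation), {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  let next = res.headers.get("x-ms-continuation");
  if (next && next.endsWith("==")) {
    // type-one token: retry the identical request, per the workaround
    return listNextPage(base, continuation, accessToken);
  }
  return { body: await res.json(), continuation: next };
}
```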
I have set up the eSignatures API for our app, and until recently it has been working perfectly. The part that is now broken is the webhook function: when a document gets signed by one of our clients, it triggers our webhook Cloud Function, which then updates our database (seems simple?).
The problem is that eSignatures has now reduced its timeout to only 8 seconds, which means my function does not have enough time to run and respond to their servers. It does still run and update the database correctly, but because it takes longer than 8 seconds, the 200 response never reaches eSignatures, so they retry the endpoint every hour (for ~5-6 hours). I have put in a catch so that data is not duplicated when this happens, but ideally we don't want the retries to take place!
My question is basically: is there a way to send a 200 response to eSignatures at the beginning of the function and then continue with the updates? Or is there another solution to handle this timeout? If anything does fail in the function, I still want to return a 4xx to eSignatures, and in that case we do want the retry.
You can't send a response from your Cloud Function and then continue executing tasks. When the response is sent, the function is stopped. See this link and this one.
But even if you could, sending a response before the tasks in your Cloud Function finish would prevent you from sending a 4xx response if they fail. Hence, eSignatures would never retry.
I don't think you can do much else aside from optimizing your Cloud Function or increasing the eSignatures timeout, which I don't think is possible at the moment.
Basically what I want is, for a particular endpoint: its last execution time, the total number of times the API was executed today, the IP address of each machine that hits the API, and the application response time of each hit. How do I achieve this in Node.js? Basically it is a REST API auditing system.
There's nothing specific to Node here per se; you can log every request and the needed details somewhere (e.g. a plain database). It might be handy to create a middleware that wraps all API endpoints (client -> middleware -> API endpoint function) and contains the logic for this.
Since you're using Node.js, you can create a callback mechanism for measuring response times, i.e. your middleware can provide a callback function to your API endpoint controller(s) that the controller invokes when it sends the response to the user (after res.send() if you're using Express).
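A minimal sketch of such a middleware, assuming Express-style `req`/`res` objects; the in-memory `auditLog` array and its field names are illustrative placeholders (a real auditing system would write to a database):

```javascript
const auditLog = []; // illustrative in-memory store; use a real DB in practice

// Express-style middleware: records path, client IP, timestamp, and
// response time for every request once the response has finished.
function auditMiddleware(req, res, next) {
  const start = process.hrtime.bigint();
  res.on("finish", () => {
    auditLog.push({
      path: req.path,
      ip: req.ip, // may be a proxy address unless trust proxy is set
      at: Date.now(),
      responseTimeMs: Number(process.hrtime.bigint() - start) / 1e6,
    });
  });
  next();
}
```

Registered with `app.use(auditMiddleware)` before the routes, "last execution time" and "calls today per endpoint" then become simple queries over the stored records.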
I can answer some of your queries:
IP -> The IP address of the machine that hits the API is present in the request object as req.connection.remoteAddress in Express. (It may not be correct if your request is routed through a proxy like Akamai; in that case, turn on the True-Client-IP header in Akamai.)
Count -> Make a DB write every time a particular endpoint is requested. Put it right before your entry-point function (API) sends the response.
Response time of the API -> Put this at the start and end of your entry-point function.
At the start :-
let start = process.hrtime();
Before api sends response :-
let endTime = process.hrtime(start);
endTime = Math.round((endTime[0] * 1000) + (endTime[1] / 1000000));
Maybe you can wrap this in a function. It will give you the time elapsed in milliseconds and you can store that too somewhere for each iteration.
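Following that suggestion, the two snippets above can be wrapped in a small helper (the function names are mine):

```javascript
// Start a high-resolution timer; the returned function yields the
// elapsed time in whole milliseconds each time it is called.
function startTimer() {
  const start = process.hrtime();
  return function elapsedMs() {
    const [sec, nano] = process.hrtime(start);
    return Math.round(sec * 1000 + nano / 1000000);
  };
}

// Usage sketch:
//   const done = startTimer();
//   /* handle the request */
//   const responseTimeMs = done(); // store this per request
```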
I'm trying to get the number of likes of a Facebook page with the following line of JavaScript, but I get an "application request limit reached" error even though I only made one API call to a single FB page:
var jsonData = UrlFetchApp.fetch("https://graph.facebook.com/" + name);
Request failed for https://graph.facebook.com/nba returned code 403.
Truncated server response: {"error":{"message":"(#4) Application
request limit reached","type":"OAuthException","code":4}} (use
muteHttpExceptions option to examine full response)
I'm confused about why this is happening. I've looked at similar questions on Stack Overflow regarding this problem, but none seemed to give the right solution or point me in the right direction. Any help is appreciated. Thanks!
App Level Rate Limiting
This rate limiting is applied globally at the application level. When the app uses more than the allowed resources, the error is thrown.
Recommendations:
Do not make many calls in a small amount of time; spread the calls throughout the day.
Fetch data smartly: fetch only the important data, and remove duplicate data as well.
Use the parameters "since", "until", and "limit" to limit/filter the requests.
You can find out more in the official documentation.
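As an illustration of the last point, a request limited with those parameters might look like the following. The access token is a placeholder, and `fan_count` is the Graph API field for a page's like count (page metadata requests need a valid token rather than the bare URL used in the question):

```javascript
// Hypothetical Graph API request URL: ask only for the field needed
// and bound the result set, to stay under the app-level rate limit.
const params = new URLSearchParams({
  fields: "fan_count",         // fetch only what you need
  limit: "25",                 // cap the page size for list endpoints
  access_token: "<APP_TOKEN>", // placeholder: supply a real token
});
const url = `https://graph.facebook.com/nba?${params}`;
```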
setInterval(function () {
    //send ajax request and update chat window
}, 1000);
Is there any better way to update the chat with new messages? Is using setInterval like this the right way?
There are two major (or at least popular) options.
Polling
First is polling; this is what you are doing. Every x (milli)seconds you check whether anything on the server has changed.
This is the HTML4 way (excluding Flash etc., so HTML/JS only). For PHP it is not the best way, because a single user makes a lot of connections per minute (in your example code, at least 60 connections per minute).
It is also recommended to wait for the response before sending the next request. If, for example, you request an update every second but the response takes 2 seconds, you are hammering your server. See tymeJV's answer for more info.
Pushing
Next is pushing. This is more the HTML5 way, implemented with WebSockets. What happens is that the client "listens" on a connection and waits to be updated; when an update arrives, it triggers an event.
This is not great to implement in PHP, because you need a constant connection, and your server will be overrun in no time, since PHP can't push connections to the background (like Java can, if I'm correct).
I personally made a small chat app and used Pusher. It works perfectly. I only used the free version, so I don't know how expensive it is.
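For reference, a minimal sketch of the push approach on the client side, using the browser WebSocket API; the `wss://example.com/chat` URL is a placeholder, and the server would need a WebSocket-capable backend (e.g. Node with the `ws` package, or a hosted service like Pusher):

```javascript
// Browser side: listen for pushed chat messages instead of polling.
function connectChat(url, onMessage) {
  const socket = new WebSocket(url); // e.g. "wss://example.com/chat" (placeholder)
  socket.addEventListener("message", (event) => {
    // each pushed message triggers this event; no timers involved
    onMessage(JSON.parse(event.data));
  });
  return socket;
}

// Sending a message is then a single call on the open socket:
function sendChat(socket, text) {
  socket.send(JSON.stringify({ text }));
}
```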
Pretty much, yes. One minor tweak: rather than encapsulating an AJAX call inside an interval (this could result in a pile-up of unreturned requests if something goes bad on the server), you should put a setTimeout into the AJAX callback to create a recursive call. Consider:
function callAjax() {
    $.ajax(options).done(function () {
        //do your response
        setTimeout(callAjax, 2000);
    });
}
callAjax();