HTTP measure time of each request using interceptor - javascript

I'm trying to measure the time of HTTP requests in my application. Because I think this should be done globally, I'd like to use an interceptor.
I've created one that can log start time and end time of every request:
app.factory('httpTimeInterceptor', [function() {
    var start;
    return {
        request: function(config) {
            start = new Date();
            console.log("START", start);
            return config;
        },
        response: function(response) {
            console.log("START", start);
            var date = new Date();
            console.log("END", date);
            return response;
        }
    };
}])
This logs three values to the console: the start time of my request, then the start time again together with the end time (when the request completes).
My problem begins when I make multiple requests (a second one starts before the first ends). In that case my start variable is overwritten with a new value.
The problem is that my interceptor is a factory, so it is a singleton (please correct me if I'm wrong).
Can I modify my code to easily get the actual time each request took?
I was thinking about creating a dictionary that holds a key for each request, so that when a request ends I can look up its start time, but I don't know how to identify the same request in the request and response functions.
Maybe there is an easier and simpler solution to what I'm trying to do?

You can store the start time on the config object passed to the request function. This object is available in the response interceptor as response.config. Make sure to pick a unique property name that's not already used by Angular.
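A minimal sketch of that approach (the _startTime property name is my own choice, not an Angular convention):
app.factory('httpTimeInterceptor', [function() {
    return {
        request: function(config) {
            config._startTime = new Date(); // travels with this specific request
            return config;
        },
        response: function(response) {
            // response.config is the same object the request hook saw
            var elapsed = new Date() - response.config._startTime;
            console.log(response.config.url, 'took', elapsed, 'ms');
            return response;
        }
    };
}]);
Because each request carries its own timestamp on its config object, concurrent requests no longer overwrite each other.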


Zone.js does not expire per request in Node and its value persists in the stack

I am very new to Express.js/Node.js. In our application, we use the zone.js library to store TTL values in a stack. As I understand it, a zone should only apply to a single request; once the response is sent to the end user, the zone should die.
But in my case it still exists. For example:
In the first request I store a TTL value of 150 using the storeTtl function, and later, while sending the response, I retrieve it with the getMinimumTtlVal function. I get 150 in the response, which is correct because the stack holds only one value; before the next request comes in, the stack should be empty, but that is not happening.
When I make another request and store a TTL value of 180 with storeTtl, then read it with getMinimumTtlVal, the stack now holds two values (150, 180). Because of the min function, it returns 150.
The min function is required here because some requests call storeTtl several times. I cannot figure out what is wrong in this code.
The sample code is below :
require('zone.js/dist/zone-node.js');

class TtlStore {
    storeTtl (ttl) {
        if (!Zone.current.ttlStore) {
            Zone.current.ttlStore = [];
        }
        Zone.current.ttlStore.push(ttl);
    }

    getMinimumTtlVal () {
        const minValue = Math.min(...Zone.current.ttlStore);
        if (isNaN(minValue) || !Number.isInteger(minValue)) {
            return 0;
        }
        return minValue;
    }
}

module.exports = TtlStore;
Thanks in advance
I've never used zone.js, but I think this happens because Express doesn't create a new thread for each request, and zone.js keeps values per thread/zone, so your code always runs in the same root zone. I found another question about Node.js's threading model that might be what you are looking for. If you want an empty ttlStore per request, I suggest adding a middleware that runs at the end of each request and clears your storage.
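A minimal sketch of that middleware idea (assuming the Express app and the TtlStore setup from the question; clearing on the response's 'finish' event is my own choice):
app.use(function (req, res, next) {
    res.on('finish', function () {
        // the response has been sent; reset the shared store so the
        // next request starts with an empty stack
        Zone.current.ttlStore = [];
    });
    next();
});
Note this only helps when requests don't overlap; concurrent requests would still share the same root-zone store.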

Global memoizing fetch() to prevent multiple of the same request

I have an SPA and for technical reasons I have different elements potentially firing the same fetch() call pretty much at the same time.[1]
Rather than going insane trying to prevent multiple unrelated elements from orchestrating the loading of data, I am thinking about creating a globalFetch() call where:
the init argument is serialised (along with the resource parameter) and used as a hash
when a request is made, it's queued and its hash is stored
when another request comes in and the hash matches (which means it's in-flight), another request will NOT be made; it will piggyback on the previous one
async function globalFetch(resource, init) {
    const sigObject = { ...init, resource }
    const sig = JSON.stringify(sigObject)
    // If it's already happening, return that one
    if (globalFetch.inFlight[sig]) {
        // NOTE: I know I don't yet have sig.timeStamp, this is just to show
        // the logic
        if (Date.now - sig.timeStamp < 1000 * 5) {
            return globalFetch.inFlight[sig]
        } else {
            delete globalFetch.inFlight[sig]
        }
    }
    const ret = globalFetch.inFlight[sig] = fetch(resource, init)
    return ret
}
globalFetch.inFlight = {}
It's obviously missing a way to record the requests' timestamps. Plus, it's missing a way to delete old requests in batch. Other than that... is this a good way to go about it?
Or, is there something already out there, and I am reinventing the wheel...?
[1] If you are curious, I have several location-aware elements which will reload data independently based on the URL. It's all nice and decoupled, except that it's a little... too decoupled. Nested elements (with partially matching URLs) needing the same data potentially end up making the same request at the same time.
Your concept will generally work just fine.
Some things missing from your implementation:
Failed responses should either not be cached in the first place or removed from the cache when you see the failure. And failure is not just rejected promises, but also any request that doesn't return an appropriate success status (probably a 2xx status).
JSON.stringify(sigObject) is not a canonical representation of the exact same data because properties might not be stringified in the same order depending upon how the sigObject was built. If you grabbed the properties, sorted them, inserted them in sorted order into a temporary object, and then stringified that, it would be more canonical.
I'd recommend using a Map object instead of a regular object for globalFetch.inFlight because it's more efficient when you're adding/removing items regularly and will never have any name collision with property names or methods (though your hash would probably not conflict anyway, but it's still a better practice to use a Map object for this kind of thing).
Items should be aged out of the cache (as you apparently know already). You can just use a setInterval() that runs every so often (it doesn't have to run very often - perhaps every 30 minutes) and iterates through all the items in the cache, removing any that are older than some amount of time. Since you're already checking the timestamp when you find a cache hit, the cleanup doesn't have to run often - it just prevents the non-stop build-up of stale entries that are never re-requested and therefore never get replaced with newer data.
If you have any case insensitive properties or values in the request parameters or the URL, the current design would see different case as different requests. Not sure if that matters in your situation or not or if it's worth doing anything about it.
When you write the real code, you need Date.now(), not Date.now.
Here's a sample implementation that implements all of the above (except for case sensitivity because that's data-specific):
function makeHash(url, obj) {
    // put properties in sorted order to make the hash canonical
    // the canonical sort is top level only,
    // does not sort properties in nested objects
    let items = Object.entries(obj).sort((a, b) => b[0].localeCompare(a[0]));
    // add URL on the front
    items.unshift(url);
    return JSON.stringify(items);
}

async function globalFetch(resource, init = {}) {
    const key = makeHash(resource, init);
    const now = Date.now();
    const expirationDuration = 5 * 1000;
    const newExpiration = now + expirationDuration;

    const cachedItem = globalFetch.cache.get(key);
    // if we found an item and it expires in the future (not expired yet)
    if (cachedItem && cachedItem.expires >= now) {
        // update expiration time
        cachedItem.expires = newExpiration;
        return cachedItem.promise;
    }

    // couldn't use a value from the cache
    // make the request
    let p = fetch(resource, init);
    p.then(response => {
        if (!response.ok) {
            // if response not OK, remove it from the cache
            globalFetch.cache.delete(key);
        }
    }, err => {
        // if promise rejected, remove it from the cache
        globalFetch.cache.delete(key);
    });
    // save this promise (will replace any expired value already in the cache)
    globalFetch.cache.set(key, { promise: p, expires: newExpiration });
    return p;
}

// initialize cache
globalFetch.cache = new Map();

// clean up interval timer to remove expired entries
// does not need to run that often because .expires is already checked above
// this just cleans out old expired entries to avoid memory increasing
// indefinitely
globalFetch.interval = setInterval(() => {
    const now = Date.now();
    for (const [key, value] of globalFetch.cache) {
        if (value.expires < now) {
            globalFetch.cache.delete(key);
        }
    }
}, 10 * 60 * 1000); // run every 10 minutes
Implementation Notes:
Depending upon your situation, you may want to customize the cleanup interval time. This is set to run a cleanup pass every 10 minutes just to keep it from growing unbounded. If you were making millions of requests, you'd probably run that interval more often or cap the number of items in the cache. If you aren't making that many requests, this can be less frequent. It is just to clean up old expired entries sometime so they don't accumulate forever if never re-requested. The check for the expiration time in the main function already keeps it from using expired entries - that's why this doesn't have to run very often.
This looks at response.ok from the fetch() result and at promise rejection to determine a failed request. There could be some situations where you want to customize what is and isn't a failed request with some different criteria than that. For example, it might be useful to cache a 404 to prevent repeating it within the expiration time if you don't think the 404 is likely to be transitory. This really depends upon your specific use of the responses and the behavior of the specific host you are targeting. The reason not to cache failed results is for cases where the failure is transitory (either a temporary hiccup or a timing issue and you want a new, clean request to go out if the previous one failed).
There is a design question for whether you should or should not update the .expires property in the cache when you get a cache hit. If you do update it (like this code does), then an item could stay in the cache a long time if it keeps getting requested over and over before it expires. But, if you really want it to only be cached for a maximum amount of time and then force a new request, you can just remove the update of the expiration time and let the original result expire. I can see arguments for either design depending upon the specifics of your situation. If this is largely invariant data, then you can just let it stay in the cache as long as it keeps getting requested. If it is data that can change regularly, then you may want it to be cached no more than the expiration time, even if it's being requested regularly.
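For the second design (a hard maximum cache time), the change is simply not refreshing .expires on a hit; a sketch of the relevant branch from the code above:
// variant: hard expiration - do NOT refresh .expires on a cache hit
if (cachedItem && cachedItem.expires >= now) {
    return cachedItem.promise; // expires at its original time
}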
Consider using a ServiceWorker or Workbox to separate caching logic from your application. The Stale-While-Revalidate strategy could apply here.
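A minimal sketch of that service-worker approach (assuming Workbox v6; the /api/ route prefix and cache name are hypothetical):
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

// serve matching requests from the cache immediately while
// refreshing the cached copy in the background
registerRoute(
    ({ url }) => url.pathname.startsWith('/api/'),
    new StaleWhileRevalidate({ cacheName: 'api-cache' })
);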

getJSON done callback

I have the function below, called every 5 seconds to get data from the server (Flask/Python). My question is: how can I adapt the getJSON call to run a callback when the data is successfully retrieved?
I know there's .done, .fail and so on, but I was wondering if I can keep this structure and just add to it. I don't know the syntax in this particular case; hope this isn't too confusing. Thanks for reading, here's the code.
// get data from the server every getDataFromServerInterval milliseconds
var getDataFromServerInterval = 5000;

function getData(){
    // request timesince table entries from server for user...
    $.getJSON($SCRIPT_ROOT + '/_database', {
        action: "getUserTable_timesince",
        username: $('input[name="username"]').val()
    }, function(data) { // do something with the response data
        timesince_dataBuffer = data;
    });
    return false; // prevent get
}

// get data from the server every getDataFromServerInterval milliseconds
setInterval(getData, getDataFromServerInterval);
You could do something like this. Instead of processing the data in getData or using a callback, take advantage of the promise that $.getJSON returns. Have a separate function, called by the timer, that requests the data and then processes it. It neatly separates your code out into more manageable functions.
var getDataFromServerInterval = 5000;

function getData() {
    return $.getJSON($SCRIPT_ROOT + '/_database', {
        action: "getUserTable_timesince",
        username: $('input[name="username"]').val()
    });
}

function wrangleData() {
    getData().then(function (data) {
        console.log(data);
    });
}

setInterval(wrangleData, getDataFromServerInterval);
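If you'd rather keep the .done/.fail style the question mentions, here's a sketch of the same structure (reusing the question's timesince_dataBuffer and updateEntryStruct names):
function wrangleData() {
    getData()
        .done(function (data) { // runs only on success
            timesince_dataBuffer = data;
            updateEntryStruct();
        })
        .fail(function (jqXHR, textStatus) {
            console.error('getJSON failed:', textStatus);
        });
}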
I found a partial solution: I realized that I can add a callback at the end of the function that handles the received data, which is somewhat equivalent to .done in a different getJSON call structure. I'm not sure yet if the function gets called before or after the data is received.
// global timesince buffer
var timesince_dataBuffer;

// get data from the server every getDataFromServerInterval milliseconds
var getDataFromServerInterval = 5000;

function getData(){
    // request timesince table entries from server for user
    $.getJSON($SCRIPT_ROOT + '/_database', {
        action: "getUserTable_timesince",
        username: $('input[name="username"]').val()
    }, function(data) { // do something with the response data
        timesince_dataBuffer = data;
        updateEntryStruct(); // the hope is to call this when data is received
    });
    return false; // prevent get
}

// get data from the server every getDataFromServerInterval milliseconds
setInterval(getData, getDataFromServerInterval);
This is the solution I came up with.
var timesince_dataBuffer;

function getData(){
    // gets user's entries from sql table
    $.getJSON($SCRIPT_ROOT + '/_database', { // $SCRIPT_ROOT, root to the application
        action: "getUserTable_timesince",
        username: $('input[name="username"]').val()
    }, function(data) { // if a response is sent, this function is called
        timesince_dataBuffer = data;
        updateEntryStruct(); // recreate the structure of each content, buttons etc
    });
    return false;
}
I get the data, put it in a global variable, and call another function which takes that data and re-creates a structure for each object received. This way I don't recreate the parts of the structure which are static, most importantly the buttons.
Another function is called every second to update the dynamic parts:
(formatted time) passed since
(event name)
Anyway, this is actually my final project in CS50. I started by communicating with the server via form submissions, refreshing the page each time the user pressed a button; then I did it with Ajax, but I was sending requests to the server every 2 seconds and had unresponsive buttons because I kept re-creating the buttons themselves on a timed interval.
Now the page feels responsive and efficient; it's been a great learning experience.
If anyone wants to check out the code, everything is here.
https://github.com/silvermirai/cs50-final-project
It's basically a bunch of random functionality that came to mind.
The application can be found here as of now.
http://ide502-silvermirai.cs50.io:8080/

Stopping synchronous function after 2 seconds

I'm using the npm library jsdiff, which has a function that determines the difference between two strings. This is a synchronous function, but given two large, very different strings, it will take extremely long periods of time to compute.
diff = jsdiff.diffWords(article[revision_comparison.field], content[revision_comparison.comparison]);
This function is called in a stack that handles a request through Express. How can I, for the sake of the user, make the experience more bearable? I think my two options are:
Cancelling the synchronous function somehow.
Cancelling the user request somehow. (But would this keep the function still running?)
Edit: I should note that given two very large and different strings, I want different logic to take place in the code. Therefore, simply waiting for the process to finish is unnecessary and a needless load - I definitely don't want it to run for any long period of time.
Fork a child process for that specific task; you can even create a queue to limit the number of child processes that can be running at a given moment.
Here is a basic example where a child performs the heavy synchronous operation without blocking the main (master) thread, and once it has finished it returns the outcome back to the master. (Note that the Express req and res objects themselves can't be passed over the IPC channel, so the master keeps res around, keyed by a request id.)
Worker (Fork Example) :
process.on('message', function (msg) {
    /* > Your jsdiff logic goes here */
    // change this for your heavy synchronous work:
    var input = msg.input;
    var outcome = false;
    if (input == 'testlongerstring') { outcome = true; }
    // Pass results back to parent process:
    process.send({ id: msg.id, outcome: outcome });
});
And from your Master :
var cp = require('child_process');
var child = cp.fork(__dirname + '/worker.js');
var pending = {}; // res objects waiting for the child, keyed by request id
var nextId = 0;

child.on('message', function (msg) {
    // Receive results from child process
    console.log('received: ' + msg.outcome);
    var res = pending[msg.id];
    delete pending[msg.id];
    if (res) res.send(msg.outcome); // end response with data
});
You can send the work to the child like this (from the Master), remembering the pending res by id (imagine app = express):
app.get('/stringCheck/:input', function(req, res){
    var id = nextId++;
    pending[id] = res;
    child.send({ id: id, input: req.params.input });
});
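A minimal sketch of the queue idea mentioned above (my own addition, not part of the original example): cap the number of concurrent children and hold extra jobs until a slot frees up.
var MAX_CHILDREN = 2;
var running = 0;
var jobQueue = [];

function runJob(input, done) {
    if (running >= MAX_CHILDREN) {
        jobQueue.push([input, done]); // wait for a free slot
        return;
    }
    running++;
    var worker = cp.fork(__dirname + '/worker.js');
    worker.once('message', function (msg) {
        worker.kill();
        running--;
        done(msg.outcome);
        if (jobQueue.length) runJob.apply(null, jobQueue.shift());
    });
    worker.send({ id: 0, input: input });
}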
I found this on jsdiff's repository:
All methods above which accept the optional callback method will run in sync mode when that parameter is omitted and in async mode when supplied. This allows for larger diffs without blocking the event loop. This may be passed either directly as the final parameter or as the callback field in the options object.
This means that you should be able to add a callback as the last parameter, making the function asynchronous. It will look something like this:
jsdiff.diffWords(article[x], content[y], function(err, diff) {
    //add whatever you need
});
Now, you have several choices:
Return directly to the user and keep the function running in the background.
Set a 2-second timeout (or whatever limit fits your application) using setTimeout, as outlined in this answer.
If you go with option 2, your code should look something like this
var finished = false;
jsdiff.diffWords(article[x], content[y], function(err, diff) {
    if (finished) return; // the timeout below already fired
    finished = true;
    //add whatever you need
    return callback(err, diff);
});
// if this fires first, the above operation took more than 2000ms (2 seconds)
setTimeout(function() {
    if (finished) return;
    finished = true;
    return callback();
}, 2000);

Abort Xhr on cujojs rest.js

I'm using cujojs/rest to send requests to my (Laravel) API. I'm looking for a way to cancel requests that haven't finished when a new one comes in.
There is a cancel method on the client which may be what I need to solve my problem :
My app displays a paginated collection of items, people can browse pages by clicking a "next/prev" button.
If the user clicks the 'next' button multiple times quickly, every new page will fire off a new request. I'd like to make sure that only the latest request gets going and all the other (unfinished) ones are aborted.
I used to do this with a beforeSend method when I was using another tool to perform my requests. It would add the xhr object to an array, and before a new request was fired it would call .abort() on every xhr in that array that had not finished.
Now that I switched to cujojs/rest, I can't figure out how the cancel method could be used to accomplish that.
Of course I don't want to abort every pending request before a new one is run, just the ones that hit the same resource, as I might have unrelated data loading elsewhere.
/users?page=1 -> should be canceled
/users?page=2 -> should be canceled
/preferences -> should NOT be canceled
/users?page=3 -> should be canceled
/users?page=4 -> should go through as its the last one
Any help would be very appreciated.
I am not sure, because there is no proper documentation as far as I can find, but judging from the source code you should be able to do something like this:
// let's say you defined this.request somewhere in the constructor to hold
// the request value, which defaults to an empty object {}
fetchCollection: function(id) {
    var that = this;
    var url = 'collections/' + id;
    if (Object.keys(that.request).length !== 0) {
        if ('function' === typeof that.request.cancel)
            that.request.cancel();
        else
            that.request.canceled = true;
        // here we should re-assign the default {} value again
        that.request = {};
    }
    that.request.path = url;
    return client(that.request).then(
        function(response){
            that.collection = response.entity.data;
        },
        function(response){
            console.error(response.status.code, response.status);
        }
    );
},
If it works I can explain why.
OK, so now that it works, I can explain why using pseudo-code:
Inside the client(request) function, before doing anything else, cujojs:
Checks if request.canceled exists and is equal to true
Aborts the request if that condition is met
If request.canceled doesn't exist, it continues to the next step
cujojs defines a request.cancel function which can abort the request before it is sent (we don't care about the later steps, because cancellation is our intention)
Now what we are doing is using JavaScript's ability to pass objects by reference. By reference means that when you give the client() function a variable request, the function uses that same variable inside of itself instead of creating a brand-new copy of request and using the copy.
The way we exploit this here is that we manipulate the request variable from outside of cujojs, knowing that cujojs is using the same variable, so our manipulation directly affects the request variable cujojs is using at the time client() executes.
Now that we know that cujojs has two methods of canceling a request, and that we can manipulate the request variable after client() has received it, we do a simple check on our side:
We check if this.request is empty (it will only be empty if no request has been sent yet)
If it is not empty, we check whether cujojs has already defined the .cancel function on our request variable by doing 'function' === typeof request.cancel
If it has, we can use this function to cancel the request we sent previously
If it hasn't, we know that cujojs is currently on the step which checks the .canceled variable on our request, so we set it to true with this.request.canceled = true;
After canceling the request, we assign request a brand-new value {}, losing the reference to the previous request variable which we manipulated earlier
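A minimal sketch generalizing this to the per-resource behaviour the question asks for (my own code, built on the same cancel/canceled mechanism): track one in-flight request per path without its query string, so /users?page=1 cancels /users?page=2 while /preferences is untouched.
var inFlight = {};

function fetchResource(path) {
    var key = path.split('?')[0]; // group requests by resource
    var prev = inFlight[key];
    if (prev) {
        if ('function' === typeof prev.cancel)
            prev.cancel(); // already sent: abort it
        else
            prev.canceled = true; // not sent yet: flag it
    }
    var request = { path: path };
    inFlight[key] = request;
    return client(request).then(function (response) {
        if (inFlight[key] === request) delete inFlight[key];
        return response;
    });
}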
I am very poor at explaining things, but I hope you understood; knowing such nuances will help you a lot in your development.
Happy coding
