Internals (client and server) of aborting an XMLHttpRequest - javascript

So I'm curious about the actual underlying behaviours that occur when aborting an async javascript request. There was some related info in this question but I've yet to find anything comprehensive.
My assumption has always been that aborting the request causes the browser to close the connection and stop processing it entirely, thus causing the server to do the same if it's been set up to do so. I imagine, however, that there might be browser-specific quirks or edge cases here I'm not thinking of.
My understanding is as follows; I'm hoping someone can correct it if necessary, and that this can be a good reference for others going forward.
Aborting the XHR request clientside causes the browser to internally close the socket and stop processing it. I would expect this behaviour rather than simply ignoring the data coming in and wasting memory. I'm not betting on IE on that though.
An aborted request on the server would be up to what's running there:
I know with PHP the default behaviour is to stop processing when the client socket is closed, unless ignore_user_abort() has been called. So closing XHR connections saves you server power as well.
I'm really interested to know how this could be handled in node.js, I assume some manual work would be needed there.
I have no idea really about other server languages / frameworks and how they behave but if anyone wants to contribute specifics I'm happy to add them here.

For the client, the best place to look is in the source, so let's do this! :)
Let's look at Blink's implementation of XMLHttpRequest's abort method (lines 1083-1119 in XMLHttpRequest.cpp):
void XMLHttpRequest::abort()
{
    WTF_LOG(Network, "XMLHttpRequest %p abort()", this);

    // internalAbort() clears |m_loader|. Compute |sendFlag| now.
    //
    // |sendFlag| corresponds to "the send() flag" defined in the XHR spec.
    //
    // |sendFlag| is only set when we have an active, asynchronous loader.
    // Don't use it as "the send() flag" when the XHR is in sync mode.
    bool sendFlag = m_loader;

    // internalAbort() clears the response. Save the data needed for
    // dispatching ProgressEvents.
    long long expectedLength = m_response.expectedContentLength();
    long long receivedLength = m_receivedLength;

    if (!internalAbort())
        return;

    // The script never gets any chance to call abort() on a sync XHR between
    // send() call and transition to the DONE state. It's because a sync XHR
    // doesn't dispatch any event between them. So, if |m_async| is false, we
    // can skip the "request error steps" (defined in the XHR spec) without any
    // state check.
    //
    // FIXME: It's possible open() is invoked in internalAbort() and |m_async|
    // becomes true by that. We should implement more reliable treatment for
    // nested method invocations at some point.
    if (m_async) {
        if ((m_state == OPENED && sendFlag) || m_state == HEADERS_RECEIVED || m_state == LOADING) {
            ASSERT(!m_loader);
            handleRequestError(0, EventTypeNames::abort, receivedLength, expectedLength);
        }
    }
    m_state = UNSENT;
}
So from this, it looks like the majority of the grunt work is done within internalAbort, which looks like this:
bool XMLHttpRequest::internalAbort()
{
    m_error = true;

    if (m_responseDocumentParser && !m_responseDocumentParser->isStopped())
        m_responseDocumentParser->stopParsing();

    clearVariablesForLoading();

    InspectorInstrumentation::didFailXHRLoading(executionContext(), this, this);

    if (m_responseLegacyStream && m_state != DONE)
        m_responseLegacyStream->abort();

    if (m_responseStream) {
        // When the stream is already closed (including canceled from the
        // user), |error| does nothing.
        // FIXME: Create a more specific error.
        m_responseStream->error(DOMException::create(!m_async && m_exceptionCode ? m_exceptionCode : AbortError, "XMLHttpRequest::abort"));
    }

    clearResponse();
    clearRequest();

    if (!m_loader)
        return true;

    // Cancelling the ThreadableLoader m_loader may result in calling
    // window.onload synchronously. If such an onload handler contains open()
    // call on the same XMLHttpRequest object, reentry happens.
    //
    // If, window.onload contains open() and send(), m_loader will be set to
    // non 0 value. So, we cannot continue the outer open(). In such case,
    // just abort the outer open() by returning false.
    RefPtr<ThreadableLoader> loader = m_loader.release();
    loader->cancel();

    // If abort() called internalAbort() and a nested open() ended up
    // clearing the error flag, but didn't send(), make sure the error
    // flag is still set.
    bool newLoadStarted = m_loader;
    if (!newLoadStarted)
        m_error = true;

    return !newLoadStarted;
}
I'm no C++ expert but from the looks of it, internalAbort does a few things:
Stops any processing it's currently doing on a given incoming response
Clears out any internal XHR state associated with the request/response
Tells the inspector to report that the XHR failed (this is really interesting! I bet it's where those nice console messages originate)
Closes either the "legacy" version of a response stream, or the modern version of the response stream (this is probably the most interesting part pertaining to your question)
Deals with some threading issues to ensure the error is propagated properly (thanks, comments).
After doing a lot of digging around, I came across an interesting function within HttpResponseBodyDrainer (lines 110-124) called Finish, which looks like something that would eventually be called when a request is cancelled:
void HttpResponseBodyDrainer::Finish(int result) {
  DCHECK_NE(ERR_IO_PENDING, result);

  if (session_)
    session_->RemoveResponseDrainer(this);

  if (result < 0) {
    stream_->Close(true /* no keep-alive */);
  } else {
    DCHECK_EQ(OK, result);
    stream_->Close(false /* keep-alive */);
  }

  delete this;
}
It turns out that stream_->Close, at least in the BasicHttpStream, delegates to HttpStreamParser::Close, which, when given a non-reusable flag (as happens when the request is aborted, as seen in HttpResponseBodyDrainer), does close the socket:
void HttpStreamParser::Close(bool not_reusable) {
  if (not_reusable && connection_->socket())
    connection_->socket()->Disconnect();
  connection_->Reset();
}
So, in terms of what happens on the client, at least in the case of Chrome, it looks like your initial intuitions were correct as far as I can tell. Most of the quirks and edge cases have to do with scheduling/event-notification/threading issues, as well as browser-specific handling, e.g. reporting the aborted XHR to the devtools console.
In terms of the server, in the case of NodeJS you'd want to listen for the 'close' event on the http response object. Here's a simple example:
'use strict';
var http = require('http');

var server = http.createServer(function(req, res) {
  res.on('close', console.error.bind(console, 'Connection terminated before response could be sent!'));
  setTimeout(res.end.bind(res, 'yo'), 2000);
});

server.listen(8080);
Try running that and canceling the request before it completes. You'll see an error at your console.
Hope you found this useful. Digging through the Chromium/Blink source was a lot of fun :)

Related

Different latency time when measured through javascript vs developer console

I am testing the latency of a call through JavaScript and the developer console.
In JS the measurement is done simply by recording start and end time variables, e.g.:
var start_execution = Math.floor(new Date().getTime());

// Call a URL asynchronously
element = doc.createElement("script");
element.src = request_url;
doc.getElementsByTagName("script")[0].parentNode.appendChild(element);

// In the response of the call, initialize the end time and call a function
// to compute the latency
var end_execution = Math.floor(new Date().getTime());
calculateLatency();

function calculateLatency() {
  var latency = end_execution - start_execution;
}
The method works fine if run in isolation, where the latency figure is in line with the browser's developer-console/network panel. But on an actual website with lots of asynchronous content, the numbers measured by JS are inflated up to 5X.
A latency of 1000 ms computed through JS shows as 200 ms in the network panel.
This behavior is very frequent and the difference varies.
I suspect there is some sort of browser queue which handles asynchronous processing, and that under peak load the request/response gets stuck in that queue.
The option I am exploring is the Resource Timing API (http://www.w3.org/TR/resource-timing), but browser support is limited there.
I am looking for some explanation of this behavior and a way to compute the actual latency in JavaScript (the same as shown in the network panel). I'd also like recommendations on how to effectively use a JS cutoff time for network calls, since the inflated values might lead to unexpected behavior.
Why I want to do this: to set a timeout for non-performing network calls. But it is not fair to use setTimeout and reject calls when the actual cause of the latency is browser processing overhead.
You are absolutely right in your suggestion.
Almost everything in JS is driven by events (except some cases like page parsing process).
Browsers have a single thread per window for JavaScript events; every event handler executes sequentially, and every event (including propagation/bubbling and default actions) is processed completely before the next event.
For more information refer to this tutorial
As for recommendations on effective usage of the event queue, here is some advice:
Avoid long-running scripts
Avoid synchronous XMLHttpRequest
Do not allow scripts from different frames to control the same global state
Do not use alert dialog boxes for debugging since they may completely change your program logic.
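The queuing suspicion in the question can be demonstrated directly: callbacks cannot run while the main thread is busy, so JS-side timings include queuing delay on top of network time. A minimal sketch (setTimeout stands in for the network call; the delays are arbitrary):

```javascript
// Sketch: a busy main thread inflates callback-based timings. The
// "network call" here is a 100 ms setTimeout, but a 300 ms synchronous
// block delays its callback, so the measured latency comes out near 300 ms.
var start = Date.now();
var measuredLatency;

setTimeout(function() {              // stand-in for the async network call
  measuredLatency = Date.now() - start;
  console.log('measured latency: ' + measuredLatency + 'ms');
}, 100);

// Simulate other scripts keeping the event loop busy.
var busyUntil = Date.now() + 300;
while (Date.now() < busyUntil) { /* spin */ }
```

This is exactly the 1000 ms-vs-200 ms discrepancy from the question, in miniature: the network panel times the transfer, while the JS measurement also counts the time the callback waited in the queue.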
Personally I would use http://momentjs.com/ for about anything that is time related. In addition to that I would use the duration plugin https://github.com/jsmreese/moment-duration-format.
jQuery
To use it in jQuery, manual style:
var start_execution = moment();
var end_execution = moment();

var jqxhr = $.get("google.com",
  function(data) {
    end_execution = moment();
  })
  .done(function() {
    end_execution = moment();
  })
  .fail(function() {
    end_execution = moment();
  })
  .always(function() {
    var ms = end_execution.diff(start_execution);
    var duration = moment.duration(ms);
    console.log(duration);
  });
This is correctly written and will work even if the request fails or times out. (Note the diff order: end_execution.diff(start_execution) yields a positive number of milliseconds.)
Just for clarification, it would be wrong to write:
var start_execution = moment();

var jqxhr = $.get("google.com",
  function(data) {
    //do something with the data;
  });

var end_execution = moment();
var ms = end_execution.diff(start_execution);
var duration = moment.duration(ms);
console.log(duration);
As it measures nothing other than how long request initialization takes in jQuery; most likely end_execution happens before the actual request for that asset/url is even sent out.
Angular
With Angular you can write an $http interceptor that logs the times when events happened:
var HttpCallsApp = angular.module("HttpCallsApp", []);

HttpCallsApp.config(function ($provide, $httpProvider) {
    $provide.factory("MyHttpInterceptor", function ($q) {
        var log = ApplicationNamespace.Util.logger;
        return {
            request: function (config) {
                log.info("Ajax %s request [%s] initialized", config.method, config.url);
                return config || $q.when(config);
            },
            response: function (response) {
                log.info("Ajax %s response [%s] completed : %s ( %s )",
                    response.config.method, response.config.url, response.status, response.statusText);
                return response || $q.when(response);
            },
            requestError: function (rejection) {
                log.info(rejection);
                // Return the promise rejection.
                return $q.reject(rejection);
            },
            responseError: function (rejection) {
                log.info(rejection);
                // Return the promise rejection.
                return $q.reject(rejection);
            }
        };
    });

    $httpProvider.interceptors.push("MyHttpInterceptor");
});
In the Angular case, the application namespace contains an application-scoped logger instance with timestamps that I set in the app config with logEnhancerProvider.datetimePattern = "HH:mm:ss.SSS";. From a code-quality perspective the Angular version is an order of magnitude better, but I'd prefer not to go into details; it's not that you cannot write the same thing in jQuery, but it is not your default option.
Chrome adhoc test (or about any other modern browser)
ctrl + shift + n (opens new incognito window, ensures assets are not cached client side)
F12 (opens developer tools)
network (shows assets requests)
set to record network log
enter your url you want to test
Click XHR filter
Open the item and "Timing"
You should see a timing breakdown for the selected request.
Fiddler
If you don't trust your web browser, or the JavaScript is run outside a browser (in a Flash, .NET, Java etc. program), you can still get the request timings. In that case you monitor the packets sent.
You can see about anything you would want to know. As a personal preference, I have changed the completed-time timestamp format.
Instead of using datetime, where milliseconds can vary depending on system factors, you could use console.time() and console.timeEnd() (these do not exist in old IE). Even better if you can use performance.now(), but it has its own problems. That is why I prefer to use momentjs.
If you want to do this accurately and in legacy browsers, then at least at Google they have used the following approach: you add a Flash component that can do the timing accurately. This brings other problems, like data-pipeline limits if you log a lot, but those are easier problems to solve than creating support for legacy IE.

What happens with unhandled socket.io events?

Does socket.io ignore/drop them?
The reason why I'm asking is the following.
There is a client with several states. Each state has its own set of socket handlers. At different moments the server notifies the client of a state change and after that sends several state-dependent messages.
But! It takes some time for the client to change state and to set the new handlers. In that window the client can miss some messages, because there are no handlers at that moment.
If I understand correctly, unhandled messages are lost to the client.
Maybe I'm missing the concept or doing something wrong... How do I handle this issue?
Unhandled messages are just ignored. It's just like when an event occurs and there are no event listeners for that event. The socket receives the message and doesn't find a handler for it, so nothing happens with it.
You could avoid missing messages by always having the handlers installed and then deciding in the handlers (based on other state) whether to do anything with the message or not.
jfriend00's answer is a good one, and you are probably fine just leaving the handlers in place and using logic in the callback to ignore events as needed. If you really want to manage the unhandled packets though, read on...
You can get the list of callbacks from the socket internals, and use it to compare to the incoming message header. This client-side code will do just that.
// Save a copy of the onevent function
socket._onevent = socket.onevent;

// Replace the onevent function with a handler that captures all messages
socket.onevent = function (packet) {
    // Compare the list of callbacks to the incoming event name
    if( !Object.keys(socket._callbacks).map(x => x.substr(1)).includes(packet.data[0]) ) {
        console.log(`WARNING: Unhandled Event: ${packet.data}`);
    }
    socket._onevent.apply(socket, Array.prototype.slice.call(arguments));
};
The object socket._callbacks contains the callbacks and the keys are the names. They have a $ prepended to them, so you can trim that off by mapping substr(1) over the list. That results in a nice clean list of event names.
IMPORTANT NOTE: Normally you should not attempt to externally modify any object member starting with an underscore. Also, expect that any data in it is unstable. The underscore indicates it is for internal use in that object, class or function. Though this object is not stable, it should be up to date enough for us to use it, and we aren't modifying it directly.
The event name is stored in the first entry under packet.data. Just check whether it is in the list, and raise the alarm if it is not. Now when the server sends an event the client doesn't know, the client will note it in the browser console.
Now you need to save the unhandled messages in a buffer, to play back once the handlers are available again. So to expand on our client-side code from before...
// Save a copy of the onevent function
socket._onevent = socket.onevent;

// Make buffer and configure buffer timings
socket._packetBuffer = [];
socket._packetBufferWaitTime = 1000; // in milliseconds
socket._packetBufferPopDelay = 50;   // in milliseconds

function isPacketUnhandled(packet) {
    return !Object.keys(socket._callbacks).map(x => x.substr(1)).includes(packet.data[0]);
}

// Define the function that will process the buffer
socket._packetBufferHandler = function(packet) {
    if( isPacketUnhandled(packet) ) {
        // packet can't be processed yet, restart wait cycle
        socket._packetBuffer.push(packet);
        console.log(`packet handling not completed, retrying`);
        setTimeout(socket._packetBufferHandler, socket._packetBufferWaitTime, socket._packetBuffer.pop());
    }
    else {
        // packet can be processed now, start going through buffer
        socket._onevent.apply(socket, Array.prototype.slice.call(arguments));
        if(socket._packetBuffer.length > 0) {
            // note: _packetBufferPopDelay is a number, not a function
            setTimeout(socket._packetBufferHandler, socket._packetBufferPopDelay, socket._packetBuffer.pop());
        }
        else {
            console.log(`all packets in buffer processed`);
            socket._packetsWaiting = false;
        }
    }
};

// Replace the onevent function with a handler that captures all messages
socket.onevent = function (packet) {
    // Compare the list of callbacks to the incoming event name
    if( isPacketUnhandled(packet) ) {
        console.log(`WARNING: Unhandled Event: ${packet.data}`);
        socket._packetBuffer.push(packet);
        if(!socket._packetsWaiting) {
            socket._packetsWaiting = true;
            setTimeout(socket._packetBufferHandler, socket._packetBufferWaitTime, socket._packetBuffer.pop());
        }
    }
    socket._onevent.apply(socket, Array.prototype.slice.call(arguments));
};
Here the unhandled packets get pushed into the buffer and a timer is set running. Once the given amount of time has passed, it starts checking whether the handlers for each item are ready. Each one is handled until all are exhausted or a handler is missing, which triggers another wait.
This can and will stack up unhandled calls until you blow out the client's allotted memory, so make sure that those handlers DO get loaded in a reasonable time span. And take care not to send it anything that will never get handled, because it will keep trying forever.
I tested it with really long strings and it was able to push them through, so what they are calling 'packet' is probably not a standard packet.
Tested with SocketIO version 2.2.0 on Chrome.

Can this ever be a race condition? (JavaScript)

I'm looking for a solid answer on whether the following JavaScript code contains a race condition or not.
The question boils down to this: If I begin listening for the completion of an asynchronous task (such as an AJAX call) immediately after I've initiated the task, could that task complete before I've started listening?
I've found a few similar questions, but none has an answer that seems totally concrete ("there could be a problem ... it is unlikely that..."). Here's a rough example of the kind of situation I'm referring to:
// Publish an event synchronously
function emit(key){}
// Subscribe to an event
function on(key, cb){}
// Request the given url;
// emit 'loaded' when done
function get(url) {
http.get(url, function() {
emit('loaded');
});
}
get(url);
on('loaded', function() {
// Assuming this subscription happens
// within the same execution flow as
// the call to `get()`, could 'loaded'
// ever fire beforehand?
});
Even better if the answer has backing from the actual language specification (or another definitive source)!
No, there can be no race condition.
The asynchronous task could complete before you start listening to the event, but that doesn't matter. The completion of the task creates an event, and that event won't be handled until the function (or code block) ends and the control is returned to the browser.
@Guffa is correct. But there are at least two situations where you can have the appearance of a race condition.
Maybe there is an error during the ajax request that isn't handled. Consider some typical XMLHttpRequest code:
var request = new XMLHttpRequest();
request.onreadystatechange = function() {
    if (request.readyState === 4) {
        if (request.status === 200) {
            call_success_handler();
        }
        else {
            call_error_handler();
        }
    }
};
request.open("GET", url, true);
request.send(null);
If the readyState is never '4', then no handlers will be called and no errors will be reported. Your success handler is never triggered, so you assume that the event fired too fast and you didn't notice.
It's less common now, but there are also cases where browsers may make you think you have a race condition. The specification says what is supposed to happen in error conditions, but it wasn't always that way. Old / non-conforming XMLHttpRequest implementations behave poorly with oddball network conditions. The initial (ca. 2006) versions of the spec didn't even address network level errors.
Hopefully most browsers have conforming implementations now, and hopefully most frameworks should handle error conditions properly.
There's a great article by Pearl Chen on asynchronous debugging that's worth a read if you want to dig into it deeper.
Also, there's more information on ajax problems here: jQuery $.ajax, error handler doesn't work.

Efficient closure structure in node.js

I'm starting to write a server in node.js and wondering whether or not I'm doing things the right way...
Basically my structure is like the following pseudocode:
function processStatus(file, data, status) {
    ...
}

function gotDBInfo(dbInfo) {
    var myFile = dbInfo.file;

    function gotFileInfo(fileInfo) {
        var contents = fileInfo.contents;

        function sentMessage(status) {
            processStatus(myFile, contents, status);
        }

        sendMessage(myFile.name + contents, sentMessage);
    }

    checkFile(myFile, gotFileInfo);
}

checkDB(query, gotDBInfo);
In general, I'm wondering if this is the right way to code for node.js, and more specifically:
1) Is the VM smart enough to run "concurrently" (i.e. switch contexts) between each callback to not get hung up with lots of connected clients?
2) When garbage collection is run, will it clear the memory completely if the last callback (processStatus) finished?
Node.js is event-based; all code is basically event handlers. The V8 engine will execute any synchronous code in a handler to completion and then process the next event.
An async call (network/file IO) posts the blocking IO work to another thread (that's libev/libeio AFAIK, I may be wrong on this). Your app can then handle other clients. When the IO task is done, an event is fired and your callback function is called.
Here's an example of async call flow, simulating a Node app handling a client request:
onRequest(req, res) {
    // we have to do some IO and CPU intensive task before responding the client
    asyncCall(function callback1() {
        // callback1() triggers after asyncCall() has done its part
        // *note that some other code might have been executed in between*
        moreAsyncCall(function callback2(data) {
            // callback2() triggers after moreAsyncCall() has done its part
            // note that some other code might have been executed in between

            // res is in scope thanks to closure
            res.end(data);

            // callback2() returns here, Node can execute other code
            // the client should receive a response
            // the TCP connection may be kept alive though
        });
        // callback1() returns here, Node can execute other code
        // we could have done the processing of asyncCall() synchronously
        // in callback1(), but that would block for too long
        // so we used moreAsyncCall() to *yield to other code*
        // this is kind of like cooperative scheduling
    });
    // tasks are scheduled by calling asyncCall()
    // onRequest() returns here, Node can execute other code
}
When V8 does not have enough memory, it will do garbage collection. It knows whether a chunk of memory is reachable by live JavaScript object. I'm not sure if it will aggressively clean up memory upon reaching end of function.
References:
This Google I/O presentation discussed the GC mechanism of Chrome (hence V8).
http://platformjs.wordpress.com/2010/11/24/node-js-under-the-hood/
http://blog.zenika.com/index.php?post/2011/04/10/NodeJS

JavaScript: Detect AJAX requests

Is there any way to detect global AJAX calls (particularly responses) on a web page with generic JavaScript (not with frameworks)?
I've already reviewed the question "JavaScript detect an AJAX event" here on StackOverflow and tried patching the accepted answer's code into my application, but it didn't work. I've never done anything with AJAX before either, so I don't know enough to modify it to work.
I don't need anything fancy, I just need to detect all (specific, actually, but I'd have to detect all first and go from there) AJAX responses and patch them into an IF statement for use. So, eventually, I'd like something like:
if (ajax.response == "certainResponseType"){
    //Code
}
, for example.
Update:
It seems I should clarify that I'm not trying to send a request - I'm developing a content script and I need to be able to detect the web page's AJAX requests (not make my own), so I can execute a function when a response is detected.
Here's some code (tested by pasting into Chrome 31.0.1650.63's console) for catching and logging or otherwise processing ajax requests and their responses:
(function() {
    var proxied = window.XMLHttpRequest.prototype.send;
    window.XMLHttpRequest.prototype.send = function() {
        console.log( arguments );
        //Here is where you can add any code to process the request.
        //If you want to pass the Ajax request object, pass the 'pointer' below
        var pointer = this;
        var intervalId = window.setInterval(function(){
            if(pointer.readyState != 4){
                return;
            }
            console.log( pointer.responseText );
            //Here is where you can add any code to process the response.
            //If you want to pass the Ajax request object, pass the 'pointer' below
            clearInterval(intervalId);
        }, 1); //I found a delay of 1 to be sufficient, modify it as you need.
        return proxied.apply(this, [].slice.call(arguments));
    };
})();
This code solves the above issue with the accepted answer:
Note that it may not work if you use frameworks (like jQuery), because
they may override onreadystatechange after calling send (I think
jQuery does). Or they can override send method (but this is unlikely).
So it is a partial solution.
Because it does not rely on the 'onreadystatechange' callback being unchanged, but monitors the 'readyState' itself.
I adapted the answer from here: https://stackoverflow.com/a/7778218/1153227
Give this a try. It detects Ajax responses, with a conditional added on the XMLHttpRequest properties readyState and status to run a function if the response status is OK:
var oldXHR = window.XMLHttpRequest;

function newXHR() {
    var realXHR = new oldXHR();
    realXHR.addEventListener("readystatechange", function() {
        if(realXHR.readyState == 4 && realXHR.status == 200){
            afterAjaxComplete(); //run your code here
        }
    }, false);
    return realXHR;
}

window.XMLHttpRequest = newXHR;
Modified from:
Monitor all JavaScript events in the browser console
This can be a bit tricky. How about this?
var _send = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function() {
    /* Wrap onreadystatechange callback */
    var callback = this.onreadystatechange;
    this.onreadystatechange = function() {
        if (this.readyState == 4) {
            /* We are in response; do something,
               like logging or anything you want */
        }
        if (callback) callback.apply(this, arguments); // guard: may not be set
    };
    _send.apply(this, arguments);
};
I didn't test it, but it looks more or less fine.
Note that it may not work if you use frameworks (like jQuery), because they may override onreadystatechange after calling send (I think jQuery does). Or they can override send method (but this is unlikely). So it is a partial solution.
EDIT: Nowadays (the beginning of 2018) this gets more complicated with the new fetch API. The global fetch function has to be overridden as well, in a similar manner.
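Such a fetch wrapper might look like the following sketch. A stand-in fetch is defined first so the sketch is self-contained; in a real page you would wrap the existing window.fetch instead, and the URL here is made up.

```javascript
// Sketch: wrap the global fetch in the same spirit as the send() wrapper
// above, so fetch()-based ajax is detected too.
globalThis.fetch = function(input) {   // stand-in for the real fetch
  return Promise.resolve({ status: 200, url: String(input) });
};

var detectedRequests = [];
var _fetch = globalThis.fetch;
globalThis.fetch = function(input) {
  detectedRequests.push(String(input));          // request detected here
  return _fetch.apply(this, arguments).then(function(response) {
    // Here is where you can add any code to process the response.
    return response;
  });
};

fetch('https://example.com/api').then(function(res) {
  console.log('detected:', detectedRequests, 'status:', res.status);
});
```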
A modern (as of April 2021) answer to the question is to use PerformanceObserver which lets you observe both XMLHttpRequest requests and fetch() requests:
Detect fetch API request on web page in JavaScript
Detect ajax requests from raw HTML
<!-- Place this at the top of your page's <head>: -->
<script type="text/javascript">
var myRequestLog = []; // Using `var` (instead of `let` or `const`) so it creates an implicit property on the (global) `window` object so you can easily access this log from anywhere just by using `window.myRequestLog[...]`.
function onRequestsObserved( batch ) {
myRequestLog.push( ...batch.getEntries() );
}
var requestObserver = new PerformanceObserver( onRequestsObserved );
requestObserver.observe( { type: 'resource' /*, buffered: true */ } );
</script>
I use the above snippet in my pages to log requests so I can report them back to the mothership in my global window.addEventListener('error', ...) callback.
The batch.getEntries() function returns an array of DOM PerformanceResourceTiming objects (because we're only listening to type: 'resource', otherwise it would return an array of differently-typed objects).
Each PerformanceResourceTiming object has useful properties like:
The initiatorType property can be:
An HTML element name (tag name) if the request was caused by an element:
'link' - Request was from a <link> element in the page.
'script' - Request was to load a <script>.
'img' - Request was to load an <img /> element.
etc
'xmlhttprequest' - Request was caused by a XMLHttpRequest invocation.
'fetch' - Request was caused by a fetch() call.
name - The URI of the resource/request. (If there's a redirection I'm unsure if this is the original request URI or the final request URI).
startTime: the time the request started, relative to the page's time origin (navigation start), not to when PerformanceObserver.observe() was called.
duration: the elapsed time of the request, computed as responseEnd - startTime, so it already represents the duration of the request itself.
transferSize: the number of bytes in the response.