I came across this error only on IE9:
SCRIPT575: Could not complete the operation due to error c00c023f.
The error happened on this line: if ((a.responseXML) && (a.readyState==4)) {
I can't figure out why this happens; it seems to work fine in other browsers.
Here is my JavaScript code:
var a = new XMLHttpRequest();
a.open("GET",'/cust/ajax/getresult.php?qk=nnf87&arg1='+pzid,true);
a.onreadystatechange = function () {
if ((a.responseXML) && (a.readyState==4)) {
var N = a.responseXML.getElementsByTagName('result')
sequence = N[0].firstChild.data;
var SEQ = sequence.split(",");
var num = SEQ.length;
var sum = 0;
for(var n=0;n<num;n++){sum = sum + (SEQ[n]*1);}
//document.getElementById("the_number_of").innerHTML = sum;
var date = new Date();
date.setTime(date.getTime()+(2*60*60*1000));
document.cookie='cpa_num='+sum+'; expires= '+date.toGMTString()+'; path=/';
}
}
I don't suppose your request is being aborted? A quick Google search turned up this blog post. It would seem that an aborted request in IE9 will give this error when you try to read any properties off the XMLHttpRequest object.
From the post, their particular problem with this error code could be duplicated by:
Create a XMLHttpRequest object
Assign an onreadystatechange event handler
Execute a request
Abort the request before the response has been handled
You will now see that the readystatechange handler will be called, with the readyState property set to 4. Any attempt to read the XMLHttpRequest object's properties will fail.
The author mitigates this problem by assigning an abort state to the request when the manual-abort is performed, and detecting it and returning before trying to read any other properties. Though this approach would only really work if you are performing the abort yourself.
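For illustration, a minimal sketch of that mitigation might look like the following (the flag name and the placeholder URL are my own, not from the blog post):
var manuallyAborted = false; // our own flag, set only when we abort on purpose
var request = new XMLHttpRequest();
request.open("GET", "/some/url", true); // placeholder URL
request.onreadystatechange = function () {
    if (manuallyAborted) {
        return; // don't touch any properties of an aborted request
    }
    if (request.readyState == 4) {
        // safe to read responseXML / responseText / status here
    }
};
request.send();

// later, when cancelling the request yourself:
// manuallyAborted = true;
// request.abort();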
A similar problem was documented in this WebSync Google Groups post. Towards the end of the discussion there is an implication that this problem only occurs
if you've got the standards and IE9 rendering modes both set
Hope that points you in the right direction.
Within the readyState==4 routine, include a try and catch similar to:
try {
var response=xmlHttp.responseText;
}
catch(e) {
var response="Aborted";
}
We found this to be the most successful resolution to the above.
Switch the
if ((a.responseXML) && (a.readyState==4))
to
if ((a.readyState==4) && (a.responseXML))
The order matters: it seems that on IE9, if the state is not 4, accessing responseXML or responseText yields this error (I have no clue why...).
I was getting this error in my Framework. It only shows up in IE (go figure). I simply wrapped the response like below:
if(request.readyState == 4)
{
// get response
var response = request.responseText;
}
It happens for me with IE9 when I read the "status" property prematurely (before readyState is 4 / DONE).
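In other words, something along these lines avoids it (a minimal sketch with a placeholder URL; the guard simply defers all property reads until the request is DONE):
var xhr = new XMLHttpRequest();
xhr.open("GET", "/some/url", true); // placeholder URL
xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) {
        return; // don't read status, responseText, etc. yet
    }
    if (xhr.status == 200) {
        // safe to handle the response here
    }
};
xhr.send();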
Related
I'm trying to make a request to a route that queries an API. If the API has the data, the response renders another page with the API data; but if the data is not ready yet (it is still processing), the route returns the string "not finished yet".
What I want to do is make a GET request and, if the response is "not finished yet", wait 5 seconds and make the request again, until the response is the data. After that, the script would open the page with the data loaded.
Here is what I have already made:
job_id = document.querySelector("#job_id").getAttribute("value")
code = document.querySelector("#code").getAttribute("value")
var myRequest = new XMLHttpRequest();
myRequest.open('GET', `http://127.0.0.1:5000/status/${job_id}/${code}`);
myRequest.onreadystatechange = function () {
if (myRequest.readyState === 4 && myRequest.responseText != 'not finished yet') {
window.location = `http://127.0.0.1:5000/status/${job_id}/${code}`
}
};
If anyone knows if it works or knows a better way to deal with that, I'd appreciate your help.
Thanks in advance.
Solution:
After some hours, I finally found a way to handle it. Still don't know if it is the best way.
function search() {
    job_id = document.querySelector("#job_id").getAttribute("value");
    code = document.querySelector("#code").getAttribute("value");
    var myRequest = new XMLHttpRequest();
    myRequest.open('GET', `http://127.0.0.1:5000/status/${job_id}/${code}`);
    myRequest.send();
    myRequest.onreadystatechange = function () {
        if (myRequest.readyState === 4 && myRequest.responseText === 'not finished yet') {
            setTimeout(function () { search(); }, 5000);
        } else if (myRequest.readyState === 4 && myRequest.responseText != 'not finished yet') {
            window.location = `http://127.0.0.1:5000/status/${job_id}/${code}`;
        }
    };
}
search()
I use var option = {}; as a global object to handle OOP (object oriented programming).
What you want to do is, when you need to define something, give it a prefix for the function and an identifier so you can avoid conflicts.
You posted some code with decent formatting, so at 1 reputation you're doing a lot better than most people just starting out, kudos. Let's say you're working with a job ID of 79. You'll want to define the following:
option.job_79 = 1;
Now the sub-property is assigned 1 as a status: it's initialized. Since the option object is in global scope, another call to your ajax() function doesn't need to know that a request for that job is already running; you simply check the typeof option.job_79 instead!
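As a rough sketch of what I mean (the ajax() function, its arguments, and the status values here are placeholders for whatever your real request code looks like):
var option = {}; // single global object holding all shared state

function ajax(job_id, url) {
    // If this job is already being tracked, don't start a second request for it.
    if (typeof option['job_' + job_id] !== 'undefined') {
        return;
    }
    option['job_' + job_id] = 1; // 1 = initialized / in progress

    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            option['job_' + job_id] = 2; // 2 = finished (my own convention)
        }
    };
    xhr.send();
}

// ajax(79, '/status/79'); // would define option.job_79 = 1 while the request runs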
Some recommendations:
If you're enthusiastic about programming you'll eventually want to merge all your AJAX functions into one single, well-refined function; it will not only greatly simplify your code, but the up-front cost will pay off, and the earlier you do it the better (though the more you'll have to refine it over time). A sketch of such a helper follows after these recommendations.
Also avoid the evils of frameworks and libraries. People make such a big deal about them, but a few years later when you want to update you can't without spending days or weeks refactoring code. I've never had to refactor pure-JavaScript code for any reason other than my own experience level, never because of a browser update. There are numerous other benefits hidden along that path that most people aren't aware of.
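As a rough illustration of the first recommendation, a single reusable request helper might look something like this (a minimal sketch; the parameter names, content type, and callback shape are just assumptions):
// One place to create, send, and handle every request in the application.
function request(method, url, body, onSuccess, onError) {
    var xhr = new XMLHttpRequest();
    xhr.open(method, url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) {
            return;
        }
        if (xhr.status >= 200 && xhr.status < 300) {
            onSuccess(xhr.responseText, xhr);
        } else if (onError) {
            onError(xhr.status, xhr);
        }
    };
    if (body != null) {
        xhr.setRequestHeader('Content-Type', 'text/plain');
        xhr.send(body);
    } else {
        xhr.send();
    }
}

// request('GET', '/cust/ajax/getresult.php?qk=nnf87', null, function (text) { /* handle text */ });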
I'm using Google App Engine with Java and Google Cloud Endpoints. In my JavaScript front end, I'm using this code to handle initialization, as recommended:
var apisToLoad = 2;
var url = '//' + $window.location.host + '/_ah/api';
gapi.client.load('sd', 'v1', handleLoad, url);
gapi.client.load('oauth2', 'v2', handleLoad);
function handleLoad() {
// this only executes once,
if (--apisToLoad === 0) {
// so this is not executed
}
}
How can I detect and handle when gapi.client.load fails? Currently I am getting an error printed to the JavaScript console that says: Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html). Maybe that's my fault, or maybe it's a temporary problem on Google's end - right now that is not my concern. I'm trying to take advantage of this opportunity to handle such errors well on the client side.
So - how can I handle it? handleLoad is not executed for the call that errs, gapi.client.load does not seem to have a separate error callback (see the documentation), it does not actually throw the error (only prints it to the console), and it does not return anything. What am I missing? My only idea so far is to set a timeout and assume there was an error if initialization doesn't complete after X seconds, but that is obviously less than ideal.
Edit:
This problem came up again, this time with the message ERR_CONNECTION_TIMED_OUT when trying to load the oauth stuff (which is definitely out of my control). Again, I am not trying to fix the error, it just confirms that it is worth detecting and handling gracefully.
I know this is old but I came across this randomly. You can easily test for a fail (at least now).
Here is the code:
gapi.client.init({}).then(() => {
gapi.client.load('some-api', "v1", (err) => { callback(err) }, "https://someapi.appspot.com/_ah/api");
}, err, err);
function callback(loadErr) {
if (loadErr) { err(loadErr); return; }
// success code here
}
function err(err){
console.log('Error: ', err);
// fail code here
}
Unfortunately, the documentation is pretty useless here and it's not exactly easy to debug the code in question. What gapi.client.load() apparently does is insert an <iframe> element for each API. That frame then provides the necessary functionality and allows accessing it via postMessage(). From the look of it, the API doesn't attach a load event listener to that frame and instead relies on the frame itself to indicate that it is ready (which results in the callback being triggered). So the missing error callback is an inherent issue: the API cannot see a failure because no frame will be there to signal it.
From what I can tell, the best thing you can do is attaching your own load event listener to the document (the event will bubble up from the frames) and checking yourself when they load. Warning: While this might work with the current version of the API, it is not guaranteed to continue working in future as the implementation of that API changes. Currently something like this should work:
var framesToLoad = apisToLoad;
document.addEventListener("load", function(event)
{
if (event.target.localName == "iframe")
{
framesToLoad--;
if (framesToLoad == 0)
{
// Allow any outstanding synchronous actions to execute, just in case
window.setTimeout(function()
{
if (apisToLoad > 0)
alert("All frames are done but not all APIs loaded - error?");
}, 0);
}
}
}, true);
Just to repeat the warning from above: this code makes lots of assumptions. While these assumptions might stay true for a while with this API, it might also be that Google will change something and this code will stop working. It might even be that Google uses a different approach depending on the browser, I only tested in Firefox.
This is an extremely hacky way of doing it, but you could intercept all console messages, check what is being logged, and if it is the error message you care about, call another function.
function interceptConsole(){
var errorMessage = 'Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html';
var console = window.console
if (!console) return;
function intercept(method){
var original = console[method];
console[method] = function() {
if (arguments[0] == errorMessage) {
alert("Error Occured");
}
if (original.apply){
original.apply(console, arguments)
}
else {
//IE
var message = Array.prototype.slice.apply(arguments).join(' ');
original(message)
}
}
}
var methods = ['log', 'warn', 'error']
for (var i = 0; i < methods.length; i++)
intercept(methods[i])
}
interceptConsole();
console.log('Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html');
//alerts "Error Occurred", then logs the message
console.log('Found it');
//just logs "Found It"
An example is here - I log two things, one is the error message, the other is something else. You'll see the first one causes an alert; the second one does not.
http://jsfiddle.net/keG7X/
You probably would have to run the interceptConsole function before including the gapi script, as it may make its own copy of console.
Edit - I use a version of this code myself, but just remembered it's from here, so giving credit where it's due.
I use a setTimeout to manually trigger error if the api hasn't loaded yet:
console.log(TAG + 'api loading...');
let timer = setTimeout(() => {
// Handle error
reject('timeout');
console.error(TAG + 'api loading error: timeout');
}, 1000); // time till timeout
let callback = () => {
clearTimeout(timer);
// api has loaded, continue your work
console.log(TAG + 'api loaded');
resolve(gapi.client.apiName);
};
gapi.client.load('apiName', 'v1', callback, apiRootUrl);
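The snippet above references resolve, reject, TAG, and apiRootUrl without defining them, so it presumably sits inside a Promise executor. A self-contained version of the same idea might look like this (a sketch; the API name, root URL, and timeout value are placeholders):
function loadApiWithTimeout() {
    return new Promise(function (resolve, reject) {
        // Fail the promise manually if the load callback hasn't fired within 1 second.
        var timer = setTimeout(function () {
            reject(new Error('timeout while loading api'));
        }, 1000);

        gapi.client.load('apiName', 'v1', function () {
            clearTimeout(timer);          // the API loaded in time
            resolve(gapi.client.apiName); // hand the loaded client to the caller
        }, 'https://example.appspot.com/_ah/api'); // placeholder root URL
    });
}

// loadApiWithTimeout().then(onLoaded, onLoadError);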
So I'm curious about the actual underlying behaviours that occur when aborting an async javascript request. There was some related info in this question but I've yet to find anything comprehensive.
My assumption has always been that aborting the request causes the browser to close the connection and stop processing it entirely, thus causing the server to do the same if it's been setup to do so. I imagine however that there might be browser-specific quirks or edge cases here I'm not thinking of.
My understanding is as follows, I'm hoping someone can correct it if necessary and that this can be a good reference for others going forwards.
Aborting the XHR request clientside causes the browser to internally close the socket and stop processing it. I would expect this behaviour rather than simply ignoring the data coming in and wasting memory. I'm not betting on IE on that though.
An aborted request on the server would be up to what's running there:
I know with PHP the default behaviour is to stop processing when the client socket is closed, unless ignore_user_abort() has been called. So closing XHR connections saves you server power as well.
I'm really interested to know how this could be handled in node.js, I assume some manual work would be needed there.
I have no idea really about other server languages / frameworks and how they behave but if anyone wants to contribute specifics I'm happy to add them here.
For the client, the best place to look is in the source, so let's do this! :)
Let's look at Blink's implementation of XMLHttpRequest's abort method (lines 1083-1119 in XMLHttpRequest.cpp):
void XMLHttpRequest::abort()
{
WTF_LOG(Network, "XMLHttpRequest %p abort()", this);
// internalAbort() clears |m_loader|. Compute |sendFlag| now.
//
// |sendFlag| corresponds to "the send() flag" defined in the XHR spec.
//
// |sendFlag| is only set when we have an active, asynchronous loader.
// Don't use it as "the send() flag" when the XHR is in sync mode.
bool sendFlag = m_loader;
// internalAbort() clears the response. Save the data needed for
// dispatching ProgressEvents.
long long expectedLength = m_response.expectedContentLength();
long long receivedLength = m_receivedLength;
if (!internalAbort())
return;
// The script never gets any chance to call abort() on a sync XHR between
// send() call and transition to the DONE state. It's because a sync XHR
// doesn't dispatch any event between them. So, if |m_async| is false, we
// can skip the "request error steps" (defined in the XHR spec) without any
// state check.
//
// FIXME: It's possible open() is invoked in internalAbort() and |m_async|
// becomes true by that. We should implement more reliable treatment for
// nested method invocations at some point.
if (m_async) {
if ((m_state == OPENED && sendFlag) || m_state == HEADERS_RECEIVED || m_state == LOADING) {
ASSERT(!m_loader);
handleRequestError(0, EventTypeNames::abort, receivedLength, expectedLength);
}
}
m_state = UNSENT;
}
So from this, it looks like the majority of the grunt work is done within internalAbort, which looks like this:
bool XMLHttpRequest::internalAbort()
{
m_error = true;
if (m_responseDocumentParser && !m_responseDocumentParser->isStopped())
m_responseDocumentParser->stopParsing();
clearVariablesForLoading();
InspectorInstrumentation::didFailXHRLoading(executionContext(), this, this);
if (m_responseLegacyStream && m_state != DONE)
m_responseLegacyStream->abort();
if (m_responseStream) {
// When the stream is already closed (including canceled from the
// user), |error| does nothing.
// FIXME: Create a more specific error.
m_responseStream->error(DOMException::create(!m_async && m_exceptionCode ? m_exceptionCode : AbortError, "XMLHttpRequest::abort"));
}
clearResponse();
clearRequest();
if (!m_loader)
return true;
// Cancelling the ThreadableLoader m_loader may result in calling
// window.onload synchronously. If such an onload handler contains open()
// call on the same XMLHttpRequest object, reentry happens.
//
// If, window.onload contains open() and send(), m_loader will be set to
// non 0 value. So, we cannot continue the outer open(). In such case,
// just abort the outer open() by returning false.
RefPtr<ThreadableLoader> loader = m_loader.release();
loader->cancel();
// If abort() called internalAbort() and a nested open() ended up
// clearing the error flag, but didn't send(), make sure the error
// flag is still set.
bool newLoadStarted = m_loader;
if (!newLoadStarted)
m_error = true;
return !newLoadStarted;
}
I'm no C++ expert but from the looks of it, internalAbort does a few things:
Stops any processing it's currently doing on a given incoming response
Clears out any internal XHR state associated with the request/response
Tells the inspector to report that the XHR failed (this is really interesting! I bet it's where those nice console messages originate)
Closes either the "legacy" version of a response stream, or the modern version of the response stream (this is probably the most interesting part pertaining to your question)
Deals with some threading issues to ensure the error is propagated properly (thanks, comments).
After doing a lot of digging around, I came across an interesting function within HttpResponseBodyDrainer (lines 110-124) called Finish which to me looks like something that would eventually be called when a request is cancelled:
void HttpResponseBodyDrainer::Finish(int result) {
DCHECK_NE(ERR_IO_PENDING, result);
if (session_)
session_->RemoveResponseDrainer(this);
if (result < 0) {
stream_->Close(true /* no keep-alive */);
} else {
DCHECK_EQ(OK, result);
stream_->Close(false /* keep-alive */);
}
delete this;
}
It turns out that stream_->Close, at least in the BasicHttpStream, delegates to the HttpStreamParser::Close, which, when given a non-reusable flag (which does seem to happen when the request is aborted, as seen in HttpResponseDrainer), does close the socket:
void HttpStreamParser::Close(bool not_reusable) {
if (not_reusable && connection_->socket())
connection_->socket()->Disconnect();
connection_->Reset();
}
So, in terms of what happens on the client, at least in the case of Chrome, it looks like your initial intuitions were correct as far as I can tell :) It seems like most of the quirks and edge cases have to do with scheduling/event-notification/threading issues, as well as browser-specific handling, e.g. reporting the aborted XHR to the devtools console.
In terms of the server, in the case of NodeJS you'd want to listen for the 'close' event on the http response object. Here's a simple example:
'use strict';
var http = require('http');
var server = http.createServer(function(req, res) {
res.on('close', console.error.bind(console, 'Connection terminated before response could be sent!'));
setTimeout(res.end.bind(res, 'yo'), 2000);
});
server.listen(8080);
Try running that and canceling the request before it completes. You'll see an error at your console.
Hope you found this useful. Digging through the Chromium/Blink source was a lot of fun :)
First, I've looked at related SO questions, and didn't find much in the way of a suitable answer, so here goes:
I've been working on an HTML/Javascript page that acts as a UI to a back-end server. I made some pretty good strides in completing it, all while using synchronous calls in AJAX (aka var xmlhttp = new XMLHttpRequest(); xmlhttp.open(type, action, false);), but have now come to find out that Mozilla apparently doesn't like synchronous requests, and has therefore deprecated some much-needed functionality from them.
To quote https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest:
Note: Starting with Gecko 11.0 (Firefox 11.0 / Thunderbird 11.0 / SeaMonkey 2.8), as well as WebKit build 528, these browsers no longer let you use the responseType attribute when performing synchronous requests. Attempting to do so throws an NS_ERROR_DOM_INVALID_ACCESS_ERR exception. This change has been proposed to the W3C for standardization.
So that's great. I'm about to need to change the response type conditionally, but it won't work. It is now my intention to wrap an AJAX asynchronous request in something that will simulate synchronicity.
The following is a generic "make web request" function that my code uses, that I've started adapting to work for my purposes. Unfortunately, it isn't working quite like I'd hoped.
var webResponse = null;
function webCall(action, type, xmlBodyString) {
console.log("In webCall with " + type + ": " + action);
webResponse = null;
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function()
{
if (xmlhttp.readyState == 4)
{
if (xmlhttp.status == 200) {
webResponse = xmlhttp.responseXML;
} else {
var statusTxt = xmlhttp.statusText;
if (statusTxt == null || statusTxt.length == 0) {
statusTxt = "An Unknown Error Occurred";
}
throw "ERROR " + xmlhttp.status + ":" + statusTxt;
}
}
}
xmlhttp.open(type, action, true);
if (xmlBodyString == null) {
xmlhttp.send();
} else {
xmlhttp.setRequestHeader("Content-Type", "text/xml");
xmlhttp.send(xmlBodyString);
}
for (var i = 0; i < 20; i++) {
if (webResponse != null) {
break;
}
window.setTimeout(nop, 250);
}
if (webResponse == null) {
throw "Waited 5 seconds for a response, and didn't get one.";
}
console.log("Responding with " + webResponse);
return webResponse;
}
function nop() {
}
So, I thought this was pretty straight-forward. Create a global variable (in retrospect, it probably doesn't even have to be global, but for now, w/e), set up the onreadystatechange to assign a value to it once it's ready, make my asynchronous request, wait a maximum of 5 seconds for the global variable to be not null, and then either return it, or throw an error.
The problem is that my code here doesn't actually wait 5 seconds. Instead, it immediately exits, claiming it waited 5 seconds before doing so.
I made a fiddle, for what it's worth. It doesn't work in there either.
http://jsfiddle.net/Z29M5/
Any assistance is greatly appreciated.
You can't do it. Stick to asynchronous requests. Callback hell sucks, but that's what you get in event-driven systems with no language support.
There is simply no way to simulate synchronous code in plain JavaScript in browsers at the moment.
If you could severely limit your supported set of browsers (pretty much just Firefox at the moment AFAIK) you can get synchronous-looking code by using generators.
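To give a rough idea of what that looks like, here is a minimal sketch of a generator "runner" driving an XHR so the calling code reads top-to-bottom (the helper names and URLs are made up; libraries built on the same idea do this far more robustly):
// Drives a generator: each yielded value is a function that takes a callback,
// and the callback's argument is fed back in as the result of the yield.
function run(genFn) {
    var gen = genFn();
    function step(value) {
        var result = gen.next(value);
        if (!result.done) {
            result.value(step);
        }
    }
    step(undefined);
}

// Wrap an async GET so it can be yielded.
function get(url) {
    return function (callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                callback(xhr.responseText);
            }
        };
        xhr.send();
    };
}

run(function* () {
    var first = yield get('/first');   // looks synchronous, isn't
    var second = yield get('/second'); // runs only after the first response arrives
    console.log(first, second);
});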
There are also languages that compile into JS and support synchronous-looking code. One example I can think of (from a few years ago) is this: https://github.com/maxtaco/tamejs
Firstly, for all the pain, using asynchronous code asynchronously is the way to go. It takes a different approach, that's all.
Secondly, for your specific question, this is what your 'delay' loop is doing:
For twenty iterations
if we've had a response, break
set a timeout for 250ms
go round again
(The entire for loop completes all 20 iterations immediately. You won't have a response)
.
.
.
after 250ms
execute the first setTimeout callback, which is nop
execute the second...
I can't think of a quick way to fix this, other than putting your processing code in the AJAX callback, which is where it should be for async code anyway.
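For what it's worth, restructuring the original webCall so the processing happens in a callback would look roughly like this (a sketch, keeping the original parameter names and adding a callback argument of my own):
function webCall(action, type, xmlBodyString, onResponse) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function () {
        if (xmlhttp.readyState == 4) {
            if (xmlhttp.status == 200) {
                onResponse(null, xmlhttp.responseXML); // hand the result to the caller
            } else {
                var statusTxt = xmlhttp.statusText || "An Unknown Error Occurred";
                onResponse("ERROR " + xmlhttp.status + ":" + statusTxt, null);
            }
        }
    };
    xmlhttp.open(type, action, true);
    if (xmlBodyString == null) {
        xmlhttp.send();
    } else {
        xmlhttp.setRequestHeader("Content-Type", "text/xml");
        xmlhttp.send(xmlBodyString);
    }
}

// webCall('/some/action', 'GET', null, function (err, xml) { /* process the response here */ });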
Why not create an array of requests and just pop them off one by one when you get the response from the previous ajax call?
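A rough sketch of that idea (the URL list is a placeholder): keep the pending requests in an array and only issue the next one from the completion handler of the current one.
var pending = ['/req/1', '/req/2', '/req/3']; // placeholder URLs

function sendNext() {
    if (pending.length === 0) {
        return; // nothing left to do
    }
    var url = pending.shift(); // take the next request off the queue
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // handle xhr.responseText here, then move on to the next request
            sendNext();
        }
    };
    xhr.send();
}

sendNext();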
Is there any way to detect global AJAX calls (particularly responses) on a web page with generic JavaScript (not with frameworks)?
I've already reviewed the question "JavaScript detect an AJAX event" here on StackOverflow and tried patching the accepted answer's code into my application, but it didn't work. I've never done anything with AJAX before either, so I don't know enough to modify it to work.
I don't need anything fancy, I just need to detect all (specific, actually, but I'd have to detect all first and go from there) AJAX responses and patch them into an IF statement for use. So, eventually, I'd like something like:
if (ajax.response == "certainResponseType"){
//Code
}
, for example.
Update:
It seems I should clarify that I'm not trying to send a request - I'm developing a content script and I need to be able to detect the web page's AJAX requests (not make my own), so I can execute a function when a response is detected.
Here's some code (tested by pasting into Chrome 31.0.1650.63's console) for catching and logging or otherwise processing ajax requests and their responses:
(function() {
var proxied = window.XMLHttpRequest.prototype.send;
window.XMLHttpRequest.prototype.send = function() {
console.log( arguments );
//Here is where you can add any code to process the request.
//If you want to pass the Ajax request object, pass the 'pointer' below
var pointer = this
var intervalId = window.setInterval(function(){
if(pointer.readyState != 4){
return;
}
console.log( pointer.responseText );
//Here is where you can add any code to process the response.
//If you want to pass the Ajax request object, pass the 'pointer' below
clearInterval(intervalId);
}, 1);//I found a delay of 1 to be sufficient, modify it as you need.
return proxied.apply(this, [].slice.call(arguments));
};
})();
This code solves the above issue with the accepted answer:
Note that it may not work if you use frameworks (like jQuery), because
they may override onreadystatechange after calling send (I think
jQuery does). Or they can override send method (but this is unlikely).
So it is a partial solution.
Because it does not rely on the 'onreadystatechange' callback being un-changed, but monitors the 'readyState' itself.
I adapted the answer from here: https://stackoverflow.com/a/7778218/1153227
Give this a try. It detects Ajax responses; I added a conditional using the XMLHttpRequest properties readyState & status to run a function if the response status is OK.
var oldXHR = window.XMLHttpRequest;
function newXHR() {
var realXHR = new oldXHR();
realXHR.addEventListener("readystatechange", function() {
if(realXHR.readyState==4 && realXHR.status==200){
afterAjaxComplete() //run your code here
}
}, false);
return realXHR;
}
window.XMLHttpRequest = newXHR;
Modified from:
Monitor all JavaScript events in the browser console
This can be a bit tricky. How about this?
var _send = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function() {
    /* Wrap the onreadystatechange callback */
    var callback = this.onreadystatechange;
    this.onreadystatechange = function() {
        if (this.readyState == 4) {
            /* We are in the response; do something,
               like logging or anything you want */
        }
        if (callback) { // the caller may not have set a handler at all
            callback.apply(this, arguments);
        }
    }
    _send.apply(this, arguments);
}
I didn't test it, but it looks more or less fine.
Note that it may not work if you use frameworks (like jQuery), because they may override onreadystatechange after calling send (I think jQuery does). Or they can override send method (but this is unlikely). So it is a partial solution.
EDIT: Nowadays (the beginning of 2018) this gets more complicated with the new fetch API. The global fetch function has to be overridden as well, in a similar manner.
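For completeness, wrapping fetch in the same spirit might look like this (a sketch; it assumes you only need to observe the response, not modify it):
var _fetch = window.fetch;
window.fetch = function () {
    return _fetch.apply(this, arguments).then(function (response) {
        // We are in the response; clone it so reading the body here
        // doesn't consume it for the original caller.
        response.clone().text().then(function (body) {
            console.log('fetch response for', response.url, body);
        });
        return response; // pass the untouched response back to the caller
    });
};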
A modern (as of April 2021) answer to the question is to use PerformanceObserver which lets you observe both XMLHttpRequest requests and fetch() requests:
Detect fetch API request on web page in JavaScript
Detect ajax requests from raw HTML
<!-- Place this at the top of your page's <head>: -->
<script type="text/javascript">
var myRequestLog = []; // Using `var` (instead of `let` or `const`) so it creates an implicit property on the (global) `window` object so you can easily access this log from anywhere just by using `window.myRequestLog[...]`.
function onRequestsObserved( batch ) {
myRequestLog.push( ...batch.getEntries() );
}
var requestObserver = new PerformanceObserver( onRequestsObserved );
requestObserver.observe( { type: 'resource' /*, buffered: true */ } );
</script>
I use the above snippet in my pages to log requests so I can report them back to the mothership in my global window.addEventListener('error', ...) callback.
The batch.getEntries() function returns an array of DOM PerformanceResourceTiming objects (because we're only listening to type: 'resource', otherwise it would return an array of differently-typed objects).
Each PerformanceResourceTiming object has useful properties like:
The initiatorType property can be:
An HTML element name (tag name) if the request was caused by an element:
'link' - Request was from a <link> element in the page.
'script' - Request was to load a <script>.
'img' - Request was to load an <img /> element.
etc
'xmlhttprequest' - Request was caused by a XMLHttpRequest invocation.
'fetch' - Request was caused by a fetch() call.
name - The URI of the resource/request. (If there's a redirection I'm unsure if this is the original request URI or the final request URI).
startTime: Caution: this is actually the time since PerformanceObserver.observe() was called when the request was started.
duration: Caution: this is actually the time since PerformanceObserver.observe() was called when the request completed: it is not the duration of the request alone. To get the "real" duration you need to subtract startTime from duration.
transferSize: the number of bytes in the response.
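For example, to pull just the XHR and fetch() requests out of the log collected above (a small sketch building on the myRequestLog array from the earlier snippet):
function getAjaxEntries() {
    return myRequestLog.filter(function (entry) {
        return entry.initiatorType === 'xmlhttprequest' || entry.initiatorType === 'fetch';
    });
}

getAjaxEntries().forEach(function (entry) {
    console.log(entry.initiatorType, entry.name, entry.transferSize + ' bytes');
});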