Slightly obscure title, but here goes...
I have a Backbone UI that makes a massive number of calls to an API on page load. It uses Backbone Fetch Cache to cache the GET requests. In Chrome, on a cache miss, when many GET requests to the same URL execute at the same time, Chrome makes the duplicate XHRs wait until the first has finished; the subsequent ones then hit the cache.
In Firefox, all XHRs process immediately, even when they are GET requests to the same API endpoint. Refactoring this out of the code would be a pain, so the question is:
Question:
Is there an existing method to patch either the sync() part of Backbone or jQuery so that the Chrome behavior is used across all browsers, i.e. so that Firefox waits on the first of duplicate GET requests before processing the others?
You can modify Backbone.ajax to keep a list of pending requests and wait for the first to complete before emitting the subsequent ones. For example:
// cached requests
Backbone.xhrs = {};

Backbone.ajax = function(opts) {
    // cache GET requests, not the others
    if (opts.type !== 'GET')
        return Backbone.$.ajax.apply(Backbone.$, arguments);

    var xhr;
    // issue the request if a cached version does not exist
    if (!Backbone.xhrs[opts.url]) {
        xhr = Backbone.xhrs[opts.url] = Backbone.$.ajax.call(Backbone.$, opts);
    } else {
        // otherwise, chain onto the stored request so this one
        // only fires once the first has completed
        xhr = Backbone.xhrs[opts.url].then(function() {
            return Backbone.$.ajax.call(Backbone.$, opts);
        });
    }
    return xhr;
};
And a demo: http://jsfiddle.net/nikoshr/vexNP/
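For instance, with this patch in place, duplicate fetches are serialized instead of racing. A sketch, where the model and the /api/item endpoint are hypothetical:

// Hypothetical model and endpoint, purely to illustrate the patched flow.
var Item = Backbone.Model.extend({ url: '/api/item' });

var first = new Item().fetch();  // issues the GET and stores the xhr
var second = new Item().fetch(); // same URL: chained after the first completes

second.done(function() {
    // runs only after both requests have finished, one at a time
});

Note that the patch never evicts entries from Backbone.xhrs, so every later GET to the same URL chains onto the stored promise; you may want to clear entries once the page has finished loading.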
Related
The chrome.webRequest API has the concept of a request ID (source: the Chrome webRequest documentation):
Request IDs
Each request is identified by a request ID. This ID is unique within a browser session and the context of an extension. It remains constant during the life cycle of a request and can be used to match events for the same request. Note that several HTTP requests are mapped to one web request in case of HTTP redirection or HTTP authentication.
You can use it to correlate the requests even across redirects. But how do you initially get hold of the ID when starting a new request with fetch or XMLHttpRequest?
So far, I have not found anything better than to use the URL of the request as a way to make the initial link between the new request and the requestId. However, if there are overlapping requests to the same resource, this is not reliable.
Questions:
If you make a new request (either with fetch or XMLHttpRequest), how do you reliably get access to the requestId?
Does the fetch API or XMLHttpRequest API allow access to the requestId?
What I want to do is to use the functionality provided by the webRequest API to modify a single request, but I want to make sure that I do not accidentally modify other pending requests.
To the best of my knowledge, there is no direct support in the fetch or XMLHttpRequest API. Also, I'm not aware of a completely reliable way to get hold of the requestId.
What I ended up doing was installing an onBeforeRequest listener, storing the requestId, and then immediately removing the listener again. For instance, it could look like this:
function makeSomeRequest(url) {
    let listener;
    const removeListener = () => {
        if (listener) {
            chrome.webRequest.onBeforeRequest.removeListener(listener);
            listener = null;
        }
    };

    let requestId;
    listener = (details) => {
        if (!requestId && urlMatches(details.url, url)) {
            requestId = details.requestId;
            removeListener();
        }
    };
    chrome.webRequest.onBeforeRequest.addListener(listener, { urls: ['<all_urls>'] });

    // install other listeners, which can then use the stored "requestId"
    // ...

    // finally, start the actual request, for instance
    const promise = fetch(url).then(doSomething);

    // and make sure to always clean up the listener
    promise.then(removeListener, removeListener);
}
It is not perfect, and matching the URL is a detail that I left open. You could simply compare whether details.url is identical to url:
function urlMatches(url1, url2) {
    return url1 === url2;
}
Note that it is not guaranteed that you see the identical URL. For instance, if you make a request against http://some.domain.test, you will see http://some.domain.test/ in your listener (see my other question about the details). Or http:// could have been replaced by https:// (here I'm not sure, but it could happen because of other extensions like HTTPS Everywhere).
That is why the code above should only be seen as a sketch of the idea. It seems to work well enough in practice, as long as you do not start multiple requests to the identical URL. Still, I would be interested in learning about a better way to approach the problem.
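One way to soften those matching caveats is to normalize both URLs before comparing them. This is only a sketch, assuming the standard URL constructor is available and that ignoring the scheme is acceptable for your use case:

// Sketch: compare host, path and query while ignoring the scheme, so
// "http://" vs "https://" and a bare domain vs "domain/" still match.
// Falls back to strict equality if either URL cannot be parsed.
function urlMatches(url1, url2) {
    try {
        const a = new URL(url1);
        const b = new URL(url2);
        return a.host === b.host
            && a.pathname === b.pathname
            && a.search === b.search;
    } catch (e) {
        return url1 === url2;
    }
}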
I am using Vue.js and the Choices.js JavaScript plugin, and I have to dynamically populate the values of two select fields via AJAX.
What I am trying to achieve is to initiate a GET request at page load to populate the universities select, and, after a value in the universities select is chosen, start a new GET request to populate the faculties select.
What is happening is that when I pick the university for the first time, everything works normally. For example, if I pick a university option with value="1", an AJAX GET request is sent to /faculties?university_id=1. The console log prints onChange started, so we know the method is running correctly; the appropriate v-model="selectedUniversity" is updating too.
If I now change the value of the select field again, the AJAX function won't be called anymore and no additional requests are made to the server. The console.log still runs, and the v-model is still being updated. Does anyone understand what is going on here?
var Choices = require('choices.js');

module.exports = {
    data: function() {
        return {
            selectedUniversity: '',
            selectedFaculty: '',
            universities: {},
            faculties: {}
        }
    },
    mounted: function () {
        var self = this;
        var universitySelect = new Choices(document.getElementById('university'));
        universitySelect.ajax(function(callback) {
            fetch('/universities')
                .then(function(response) {
                    response.json().then(function(data) {
                        callback(data, 'id', 'name');
                        self.universities = data;
                    });
                })
                .catch(function(error) {
                    console.log(error);
                });
        });
    },
    methods: {
        onChange: function () {
            console.log("onChange started");
            var self = this;
            var url = '/faculties?university_id=' + self.selectedUniversity;
            var facultySelect = new Choices(document.getElementById('faculty'));
            // This part below only runs the first time a university is selected
            facultySelect.ajax(function(callback) {
                fetch(url)
                    .then(function(response) {
                        response.json().then(function(data) {
                            callback(data, 'id', 'name');
                            self.faculties = data;
                        });
                    })
                    .catch(function(error) {
                        console.log(error);
                    });
            });
        }
    }
}
I think your request URL /faculties?university_id=1 is being cached; that's why it worked the first time, and the second time the response comes from the cache.
In your fetch() call, set the cache mode so the cached response is ignored:
fetch(url, {cache: "no-store"}).then(....)
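Applied to your onChange handler, only the fetch call needs to change. A sketch:

// Sketch: same fetch as in the question, with the cache mode set so
// the browser never reads from or writes to the HTTP cache here.
facultySelect.ajax(function(callback) {
    fetch(url, { cache: 'no-store' })
        .then(function(response) {
            response.json().then(function(data) {
                callback(data, 'id', 'name');
                self.faculties = data;
            });
        })
        .catch(function(error) {
            console.log(error);
        });
});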
For the complete list of cache modes for the fetch() API, see:
https://hacks.mozilla.org/2016/03/referrer-and-cache-control-apis-for-fetch/
In case the above link becomes unavailable, here is the relevant content:
Fetch cache control APIs
The idea behind this API is specifying a caching policy for fetch to explicitly indicate how and when the browser HTTP cache should be consulted. It’s important to have a good understanding of the HTTP caching semantics in order to use these most effectively. There are many good articles on the web such as this one that describe these semantics in detail. There are currently five different policies that you can choose from.
“default” means use the default behavior of browsers when downloading resources. The browser first looks inside the HTTP cache to see if there is a matching request. If there is, and it is fresh, it will be returned from fetch(). If it exists but is stale, a conditional request is made to the remote server and if the server indicates that the response has not changed, it will be read from the HTTP cache. Otherwise it will be downloaded from the network, and the HTTP cache will be updated with the new response.
“no-store” means bypass the HTTP cache completely. This will make the browser not look into the HTTP cache on the way to the network, and never store the resulting response in the HTTP cache. Using this cache mode, fetch() will behave as if no HTTP cache exists.
“reload” means bypass the HTTP cache on the way to the network, but update it with the newly downloaded response. This will cause the browser to never look inside the HTTP cache on the way to the network, but update the HTTP cache with the downloaded response. Future requests can use that updated response if appropriate.
“no-cache” means always validate a response that is in the HTTP cache even if the browser thinks that it’s fresh. This will cause the browser to look for a matching request in the HTTP cache on the way to the network. If such a request is found, the browser always creates a conditional request to validate it even if it thinks that the response should be fresh. If a matching cached entry is not found, a normal request will be made. After a response has been downloaded, the HTTP cache will always be updated with that response.
“force-cache” means that the browser will always use a cached response if a matching entry is found in the cache, ignoring the validity of the response. Thus even if a really old version of the response is found in the cache, it will always be used without validation. If a matching entry is not found in the cache, the browser will make a normal request, and will update the HTTP cache with the downloaded response.
Let’s look at a few examples of how you can use these cache modes.
// Download a resource with cache busting, to bypass the cache
// completely.
fetch("some.json", {cache: "no-store"})
.then(function(response) { /* consume the response */ });
// Download a resource with cache busting, but update the HTTP
// cache with the downloaded resource.
fetch("some.json", {cache: "reload"})
.then(function(response) { /* consume the response */ });
// Download a resource with cache busting when dealing with a
// properly configured server that will send the correct ETag
// and Date headers and properly handle If-Modified-Since and
// If-None-Match request headers, therefore we can rely on the
// validation to guarantee a fresh response.
fetch("some.json", {cache: "no-cache"})
.then(function(response) { /* consume the response */ });
// Download a resource with economics in mind! Prefer a cached
// albeit stale response to conserve as much bandwidth as possible.
fetch("some.json", {cache: "force-cache"})
.then(function(response) { /* consume the response */ });
For the past few days, we've been trying to develop a DevTools extension that intercepts only XHR requests. We can use the chrome.webRequest API in a normal extension, but that is not possible in a DevTools extension panel. We tried to use devtools.network, but it catches all requests.
Is there a way to catch only the XHR requests?
Thanks in advance.
You can use the chrome.devtools.network API to get the HAR, and then you can determine whether a request is XHR or not, filtering the output.
I'm not totally sure how DevTools determines this, but the X-Requested-With header is (typically) sent when AJAX requests are made. It is non-standard, but widely used. You can check for the XMLHttpRequest value in the HAR.
It's possible this doesn't catch all the requests, and there might be some other data DevTools uses, but here's a little snippet I created that will filter the HAR based on this header.
chrome.devtools.network.getHAR(function(result) {
    var entries = result.entries;
    var xhrEntries = entries.filter(function(entry) {
        var headers = entry.request.headers;
        var xhrHeader = headers.filter(function(header) {
            return header.name.toLowerCase() === 'x-requested-with'
                && header.value === 'XMLHttpRequest';
        });
        return xhrHeader.length > 0;
    });
    console.log(xhrEntries);
});
Note: you can access the HAR data in the same way, per request, as each one finishes, using the chrome.devtools.network.onRequestFinished event.
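For example, the same header check can be applied to each request as it finishes. A sketch:

// Sketch: flag each finished request as XHR by checking the
// X-Requested-With header in the HAR entry passed to the listener.
chrome.devtools.network.onRequestFinished.addListener(function(request) {
    var isXhr = request.request.headers.some(function(header) {
        return header.name.toLowerCase() === 'x-requested-with'
            && header.value === 'XMLHttpRequest';
    });
    if (isXhr) {
        console.log('XHR finished:', request.request.url);
    }
});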
What I want to do:
- Cache all requests from an app without explicitly specifying urlsToCache, so caching happens inside the fetch event handler.
- Respond to requests from the cache.
- Update the cache when a fetch succeeds.
Initially, I used a network-first approach:
this.addEventListener('fetch', function(event) {
    var fetchReq = event.request.clone(),
        cacheReq = event.request.clone();
    event.respondWith(fetch(fetchReq).then(function(response) {
        var resp = response.clone();
        caches.open(CACHE_NAME).then(function(cache) {
            var req = event.request.clone();
            cache.put(req, resp);
        });
        return response;
    }).catch(function() {
        return caches.match(cacheReq);
    }));
});
Offline situations were handled perfectly well, but slow connections were a problem: the user has to wait until fetch times out or throws an error before the cached response is served. So I switched to letting the cache take precedence:
self.addEventListener('fetch', function(event) {
    var cacheRequest = event.request.clone();
    event.respondWith(caches.match(cacheRequest).then(function(response) {
        if (response) return response;
        var fetchRequest = event.request.clone();
        return fetch(fetchRequest).then(function(response) {
            var responseToCache = response.clone();
            caches.open(cache_name).then(function(cache) {
                var cacheSaveRequest = event.request.clone();
                cache.put(cacheSaveRequest, responseToCache);
            });
            return response;
        });
    }));
});
With the cache taking precedence, the responses served were fine. But the problem now shows up when the code updates: when /public/main.css, served via the service worker, is updated, only the cached version is served on page reload; the updated content never reaches the page.
I also tried changing the cache_name from cache-v1 to cache-v2 (so that a binary diff of the service worker exists, the service worker is updated, and the old cache can be cleared), clearing cache-v1 in the activate event. But that gave rise to new problems, where two service workers were running at the same time under the same registration ID. More on this in this other SO question: How to stop older service workers?
Two service workers running at the same time are not technically a problem; it's working as designed. (See my answer to How to stop older service workers?) Make sure that you close other tabs that might have an older version of your service worker active.
You're running into the inevitable tradeoffs between the different cache vs. network scenarios here. If you haven't yet read through the offline cookbook, it's a great starting point when trying to decide which caching strategy works best for your specific resources.
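For the update problem specifically, one common compromise from the cookbook is stale-while-revalidate: answer from the cache immediately, but refresh the cached entry in the background so the next load picks up changes. A minimal sketch, assuming a CACHE_NAME constant as in your first snippet:

// Minimal stale-while-revalidate sketch: serve the cached response if
// one exists, and update the cache from the network in the background.
self.addEventListener('fetch', function(event) {
    event.respondWith(caches.open(CACHE_NAME).then(function(cache) {
        return cache.match(event.request).then(function(cached) {
            var networked = fetch(event.request).then(function(response) {
                cache.put(event.request, response.clone());
                return response;
            });
            // Fall back to the network when nothing is cached yet.
            return cached || networked;
        });
    }));
});

The tradeoff is that users may see content that is one version old, but updates arrive without waiting on the network.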
I'm trying to implement an iGoogle-like dashboard interface using widgets that get their content from other sites using JSONP calls.
The problem is that if the first widget that calls $.ajax takes 8 seconds to get its content back, it seems that the callbacks of the other widgets are only called after the callback of the first widget has executed. For the user experience, it would be better if each widget were displayed as soon as it gets its content back from the remote site, rather than waiting for those scheduled before it to complete.
Is there a way I can do that?
EDIT:
I use jQuery 1.4.1.
I tested on Chrome, and the behaviour seems to be different than on Firefox.
Here is a script I made up to try to see what happens:
function showTime(add) { console.log(getTime() + ': ' + add); }
function getNow() { return new Date().getTime(); }
var initialTime = getNow();
function getTime() { return getNow() - initialTime; }
function display(data) { showTime('received a response'); }

showTime("Launched a request");
jQuery.getJSON("http://localhost:51223/WaitXSeconds/3?callback=?", display);
showTime("Launched a request");
jQuery.getJSON("http://localhost:51223/WaitXSeconds/4?callback=?", display);
showTime("Launched a request");
jQuery.getJSON("http://localhost:63372/WaitXSeconds/9?callback=?", display);
showTime("Launched a request");
jQuery.getJSON("http://services.digg.com/stories/top?appkey=http%3A%2F%2Fmashup.com&type=javascript&callback=?", display);
showTime("Launched a request");
jQuery.getJSON("http://www.geonames.org/postalCodeLookupJSON?postalcode=10504&country=US&callback=?", display);
The first three calls are just fake calls that wait the specified number of seconds.
Note that I use two different servers implementing this method.
Here is the result in the console on Firefox 3.6.2:
0: Launched a request
3: Launched a request
6: Launched a request
11: Launched a request
14: Launched a request
3027: received a response
7096: received a response
9034: received a response
9037: received a response
9039: received a response
And here is the result in Chrome 4.1.249.1036 (41514):
1: Launched a request
2: Launched a request
3: Launched a request
4: Launched a request
5: Launched a request
165: received a response
642: received a response
3145: received a response
7587: received a response
9157: received a response
It seems that in Firefox, the two requests to the two public APIs get called at the end, after all the other calls succeed.
Chrome, on the other hand, manages to execute the callback as soon as it receives the response.
On both browsers, when the requests go to the same server, they are not done in parallel; they are scheduled one after the other. But I guess this is reasonable behaviour.
Can anybody explain Firefox's behaviour or has any hack to go around this?
In Firefox, if one of the concurrent JSONP requests hasn't finished, the successive JSONP requests aren't executed, even if their responses have already arrived and been written into their <script> tags. This is because the <script> tags used by JSONP are executed synchronously in Firefox: if one <script> hasn't finished, the successive <script> tags aren't executed, even though they are already populated with response data.
The solution is to wrap each concurrent JSONP request in an iframe. There is a project called jquery-jsonp that solves this issue.
Here is a simplified version of iFramed JSONP:
var jsc = (new Date()).getTime();

function sendJsonpRequest(url, data, callback) {
    var iframe = document.createElement("iframe");
    var $iframe = jQuery(iframe);
    $iframe.css("display", "none");
    jQuery("head").append($iframe);

    var iframeWindow = iframe.contentWindow;
    var iframeDocument = iframeWindow.document;
    iframeDocument.open();
    iframeDocument.write("<html><head></head><body></body></html>");
    iframeDocument.close();

    var jsonp = "jsonp" + jsc++;
    url = url + "?callback=" + jsonp;
    var params = jQuery.param(data);
    if (params) {
        url += "&" + params;
    }

    // Handle JSONP-style loading
    iframeWindow[jsonp] = function(data) {
        if (callback) {
            callback(data);
        }
        // Garbage collect
        iframeWindow[jsonp] = undefined;
        try { delete iframeWindow[jsonp]; } catch (e) {}
        if (head) {
            head.removeChild(script);
        }
        $iframe.remove();
    };

    var head = iframeDocument.getElementsByTagName("head")[0];
    var script = iframeDocument.createElement("script");
    script.src = url;
    head.appendChild(script);
}
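Hypothetical usage, reusing the endpoints from the question; each call gets its own iframe, so the slow response no longer delays the fast one's callback:

// Each request runs in its own iframe document, so the slow script
// cannot block the execution of the fast one's callback.
sendJsonpRequest("http://localhost:51223/WaitXSeconds/9", {}, function(data) {
    console.log("slow response received", data);
});
sendJsonpRequest("http://www.geonames.org/postalCodeLookupJSON",
    { postalcode: "10504", country: "US" },
    function(data) { console.log("geonames response received", data); });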
According to the jQuery.ajax() page:
The first letter in Ajax stands for "asynchronous," meaning that the operation occurs in parallel and the order of completion is not guaranteed.
I don't know why the latter-called widgets are returning later, but I don't think it's to do with the jQuery call, unless, as Peter suggested, you've explicitly set async to false.
By default, $.ajax is asynchronous:
async (Boolean), default: true
Make sure you don't have it set to false. Debug the XHR requests using Firebug to see if the requests are correctly sent and why the DOM is not getting updated.
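For instance, a global setting like this somewhere in the codebase is the usual culprit; this is a hypothetical example of the misconfiguration to look for, not something from the asker's code:

// Hypothetical misconfiguration: a global ajaxSetup call like this
// makes jQuery XHR requests synchronous. Remove it, or set async: true.
jQuery.ajaxSetup({ async: false });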
You could have a look at this tutorial to see how to use these tools and how to discover what's wrong with your GUI.