ServiceWorker: No FetchEvent for a JavaScript-triggered request?

I am using Chrome stable 46.
I have a very basic site: one button whose onclick calls fetch('a'), and one <a> tag that opens b. Neither URL a nor b exists; I intend to catch them in the service worker and just return a new Response().
If I click the button that fetches a, the service worker is not involved and I get a 404 error.
But if I click the link that goes to b, the service worker's fetch handler fires and returns the response.
Question: Why is the service worker not getting any FetchEvent for the fetch('a') request?
See the files below
HTML
<html>
<body>
<p><button onclick="fetch('a');">fetch a</button></p>
<p>go to b</p>
<script src="js.js" defer></script>
</body>
</html>
js.js
navigator.serviceWorker.register('sw.js');
sw.js
self.onfetch = function(event) {
  var request = event.request;
  event.respondWith(
    caches.match(request).then(function(response) {
      return new Response('here is your onfetch response');
    })
  );
};

The service worker's fetch event handler isn't being called the first time you load the page because the service worker hasn't yet "claimed" the page, meaning it's not under the service worker's control. By default, the first time you visit a site that registers a service worker, the service worker's install (and potentially activate) event handlers will run, but fetch won't get triggered until the service worker takes control. That happens the next time you navigate to a page under the service worker's scope, or if you reload the current page.
You can override this default behavior and have the service worker take control during the initial navigation by calling self.clients.claim() inside your activate event handler.
I also want to point out that the fetch event handler in your sample has some issues. There's no reason to call caches.match(request) if you're always planning on returning a new Response object. More importantly, you need to do some sort of check of the event.request.url value and only return the new Response if it matches one of your "special" URLs, which in your case are a and b. The way things are implemented now, your fetch handler will return the dummy Response unconditionally, even for a subsequent request for your main page (index.html). That's presumably not what you want.
I put a gist together that modifies your service worker code to accomplish what I believe you're attempting. You can try out a live version thanks to RawGit. For posterity, here's the modified service worker code:
self.addEventListener('install', event => {
  // Bypass the waiting lifecycle stage,
  // just in case there's an older version of this SW registration.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', event => {
  // Take control of all pages under this SW's scope immediately,
  // instead of waiting for reload/navigation.
  event.waitUntil(self.clients.claim());
});

self.addEventListener('fetch', event => {
  // In a real app, you'd use a more sophisticated URL check.
  // Note: a loose pattern like /a|b$/ would match any URL containing
  // an "a", so anchor the match to the end of the path.
  if (/\/(a|b)$/.test(event.request.url)) {
    event.respondWith(new Response('here is your onfetch response'));
  }
});
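The URL check is also easy to factor out and unit-test on its own. Here's a sketch of such a helper; `isSpecialUrl` is an invented name, and the pattern assumes your "special" URLs a and b live at the end of the path:

```javascript
// Hypothetical helper for a fetch handler: only synthesize a response
// for requests whose path ends in /a or /b. Adjust the pattern to
// match your real "special" URLs.
function isSpecialUrl(url) {
  return /\/(a|b)$/.test(new URL(url).pathname);
}
```

Inside the service worker you'd then guard the handler with `if (isSpecialUrl(event.request.url)) { event.respondWith(...); }`, which keeps requests for index.html, js.js, and so on flowing through to the network untouched.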

Related

How to add additional handlers to an existing connection?

Within my MVC 5 application, I am setting up a SignalR connection on the client end upon page load; this works as expected.
At some point later on I want to add an additional handler and make a server-side call. I can see that the server receives this call, which then initiates some client-side calls, but the handlers at the client don't get invoked.
Connection setup upon page load
function initialiseRealTimeDataRetrieval() {
  var hub = $.connection.autoGeneratedProxyForHub;
  hub.client.recieveRealTimeData = function (data) {
    //Do Stuff
  };
  $.connection.hub.start().done(function () {
    hub.server.getRealTimeData();
  });
}
Additional calls made later on
function initialiseFeed () {
  var hub = $.connection.autoGeneratedProxyForHub;
  hub.client.recieveRealTimeDataFeed = function (data) {
    //Do stuff
  };
  if ($.connection.hub.state == $.connection.connectionState.connected) {
    hub.server.getRealTimeDataFeed();
  }
  else {
    $.connection.hub.start().done(function () {
      hub.server.getRealTimeDataFeed();
    });
  }
}
So far I have tried the following:
Made sure that calls made from the client to server are being invoked on the server.
Made sure that the additional calls work as expected when made along with the calls and handlers that execute upon page load.
Reviewed the documentation to see if a connection must be restarted to register new handlers.
Attempted various methods of restarting the connection after new handlers were added.
The below works as expected for the additional calls however makes everything done for the connection upon page load redundant:
function initialiseFeed () {
  var hub = $.connection.autoGeneratedProxyForHub;
  hub.client.recieveRealTimeDataFeed = function (data) {
    //Do stuff
  };
  $.connection.hub.stop();
  $.connection.hub.start().done(function () {
    hub.server.getRealTimeDataFeed();
  });
}
Inspecting the hub object through the debugger does show that all clients are connected, including the additional ones.
According to the SignalR JS API docs, the automatically generated proxy can't be used to register multiple event handlers for a single client method:
When to use the generated proxy
If you want to register multiple event handlers for a client method
that the server calls, you can't use the generated proxy. Otherwise,
you can choose to use the generated proxy or not based on your coding
preference. If you choose not to use it, you don't have to reference
the "signalr/hubs" URL in a script element in your client code.
Also, to register new handlers on an existing connection, that connection must have had at least one handler registered before start() was called:
Note
Normally you register event handlers before calling the start method
to establish the connection. If you want to register some event
handlers after establishing the connection, you can do that, but you
must register at least one of your event handler(s) before calling the
start method. One reason for this is that there can be many Hubs in an
application, but you wouldn't want to trigger the OnConnected event on
every Hub if you are only going to use to one of them. When the
connection is established, the presence of a client method on a Hub's
proxy is what tells SignalR to trigger the OnConnected event. If you
don't register any event handlers before calling the start method, you
will be able to invoke methods on the Hub, but the Hub's OnConnected
method won't be called and no client methods will be invoked from the
server.
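The quoted rule can be modeled in a few lines of plain JavaScript (no SignalR involved; `startConnection` and `serverInvoke` are invented stand-ins, not real SignalR APIs): at start() time the connection only subscribes to hubs that already have a client handler, but afterwards invocations are dispatched by name lookup, so later-added handlers on a subscribed hub still work.

```javascript
// Toy model of the documented behavior (not real SignalR code).
function startConnection(hub) {
  // SignalR decides at start() time whether this hub is "interesting":
  // only hubs with at least one client handler get subscribed.
  hub.subscribed = Object.keys(hub.client).length > 0;
}

function serverInvoke(hub, method, data) {
  if (!hub.subscribed) return false;   // hub never joined the connection
  var handler = hub.client[method];    // dispatch by name, at call time
  if (typeof handler === 'function') {
    handler(data);
    return true;
  }
  return false;
}

var hub = { client: {} };
hub.client.recieveRealTimeData = function () {}; // registered before start
startConnection(hub);
// Registering an additional handler after "connecting" still works:
hub.client.recieveRealTimeDataFeed = function (data) { hub.last = data; };
serverInvoke(hub, 'recieveRealTimeDataFeed', 42); // handler runs, hub.last === 42
```

Under this model, the restart-based workaround shown below "works" because stop()/start() re-runs the subscription step after the new handler exists; keeping one handler registered before the initial start() avoids the restart entirely.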

Why is my Service Worker always waiting to activate?

I have this very basic question
I'm striving to understand the Service Worker lifecycle, or even better, what in practical terms initializes and changes the states.
I got 2 questions right now:
1 - In chrome://inspect/#service-workers there are always 2 or 3 lines, showing service workers all running with the same PID. Why? Why not only one?
2 - When I inspect my service worker on refresh I get this:
#566 activated and is running [stop]
#570 waiting to activate [skipWaiting]
What does that mean? What is 566 and what is 570? I suppose they are instances of the SW, but why are there two of them? And why is 570 still waiting? What do I have to do to make sure it will be registered, installed, and activated?
3- General questions
What ends the install event in a normal life cycle?
What fires the activate event in a normal life cycle?
index.html
<script>
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('./sw.js')
      .then(function(registration) {
        // successful
        console.log('Success: ', registration);
      }).catch(function(err) {
        // registration failed
        console.log('Error: ', err);
      });
  });
}
</script>
sw.js
var cache_name = 'v1';
var cache_files = [
  './',
  './index.html',
  './style.css'
];

self.addEventListener('install', function(e){
  console.log('SW install:', e);
  e.waitUntil(
    caches.open(cache_name)
      .then(function(cache){
        console.log('cache', cache);
        return cache.addAll(cache_files);
      })
      .then(function(cache){
        console.log('Cache completed');
      })
  );
});

self.addEventListener('activate', function(event) {
  console.log('SW activate:', event);
});

self.addEventListener('fetch', function(e){
  console.log('SW fetch:', e.request.url);
  e.respondWith(
    caches.match(e.request)
      .then(function(cache_response){
        if(cache_response) return cache_response;
        return fetch(e.request);
      })
      .catch(function(err){
        console.log('Cache error', err);
      })
  );
});
Thanks!
The IDs shown by Chrome DevTools are internal: DevTools simply labels each service worker version with an ID. That's all 566 and 570 are.
The reason for having two SWs at the "same time" is that you had one, then you reloaded the page, navigated away and came back, or something along those lines, and you got another one. But at this point in time, when you "just got another one", it has yet to be activated and the previous SW is still controlling the page. The new SW will take control over the previous SW when you navigate back to the site from somewhere else, refreshing the page isn't enough. Basically this means closing all tabs and windows of the page and then loading it again, then the new SW takes over.
The time when the new SW hasn't yet taken over is called the waiting state, which happens between installation and activation. It can be skipped by calling self.skipWaiting() from inside the install handler of the SW.
The basic idea behind this flow is that a page shouldn't be controlled by an SW that didn't control it when the page was loaded. For this reason, the first visit to a site that registers an SW will not be controlled by that SW; only on the next navigation will the SW be in control, and so on.
You should REALLY read this brilliant article: The Service Worker Lifecycle

How to check if a ServiceWorker is in the waiting state

I am trying to understand the Service Worker API, and I know the bits and parts about registering a Service Worker.
As stated in the API doc, if a service worker update is found, the service worker is registered and added to the queue. This SW takes over a page if and only if the page is closed and opened again; that is, a window is closed and reopened.
Now, this has a few downfalls:
The user might be seeing a previous version that has a very serious grammatical mistake, or whatever.
The user needs to be somehow notified that the content has changed and that a refresh would fix it.
I know how to tell the SW.js to skipWaiting() and take over. I also know how to send a message to SW.js telling it that the user wants an automatic refresh.
However, what I do not know is how to know whether a new SW is actually in a waiting state.
I have used this:
navigator.serviceWorker.ready.then((a) => {
  console.log("Response, ", a);
  if (a.waiting !== null && a.waiting.state === "installed") {
    console.log("okay");
  }
});
However, it usually returns the waiting state as null. (Possibly because the SW is still installing when the request is fired.)
How can I know on the client page that a waiting service worker is available?
Here's some code that will detect and allow you to handle various states whenever there's a new or updated service worker registration.
Please note that the log message assumes that skipWaiting() is not being called during the service worker's installation; if it is being called, then instead of having to close all tabs to get the new service worker to activate, it will just activate automatically.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', async function() {
    const registration = await navigator.serviceWorker.register('/service-worker.js');
    if (registration.waiting && registration.active) {
      // The page has been loaded when there's already a waiting and active SW.
      // This would happen if skipWaiting() isn't being called, and there are
      // still old tabs open.
      console.log('Please close all tabs to get updates.');
    } else {
      // updatefound is also fired for the very first install. ¯\_(ツ)_/¯
      registration.addEventListener('updatefound', () => {
        registration.installing.addEventListener('statechange', (event) => {
          if (event.target.state === 'installed') {
            if (registration.active) {
              // If there's already an active SW, and skipWaiting() is not
              // called in the SW, then the user needs to close all their
              // tabs before they'll get updates.
              console.log('Please close all tabs to get updates.');
            } else {
              // Otherwise, this newly installed SW will soon become the
              // active SW. Rather than explicitly wait for that to happen,
              // just show the initial "content is cached" message.
              console.log('Content is cached for the first time!');
            }
          }
        });
      });
    }
  });
}
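If you just need a one-shot answer to "is there a waiting worker right now?", the registration checks can be collapsed into a tiny helper. This is a sketch: `updateStatus` is an invented name, and it only reads the `installing`, `waiting`, and `active` properties of a ServiceWorkerRegistration-like object, so plain objects work for exercising it:

```javascript
// Hypothetical helper: classify a ServiceWorkerRegistration-like
// object by which of its worker slots are populated.
function updateStatus(reg) {
  if (reg.waiting && reg.active) return 'update-waiting'; // new SW is parked
  if (reg.installing) return 'installing';                // first or new SW installing
  if (reg.active) return 'active';                        // current SW in control
  return 'none';                                          // nothing registered yet
}
```

In the browser you could feed it a real registration, e.g. `navigator.serviceWorker.getRegistration().then(reg => updateStatus(reg))`, and show the "please refresh" UI when it returns 'update-waiting'.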

Angular: $http.get() only fires every second onpopstate trigger

I have an AngularJS app that makes a call to an API and returns a bunch of data that users can then filter by tags for greater granularity in the results. Each time a tag is clicked to filter the data, the app makes a new $http.get() call, and the URL is modified with the appropriate query parameters so that the user can save the permalink and come back to any particular data set.
I'm trying to give the app proper history handling with window.history.pushState(), and passing the relevant query parameters for each history object as state data. I'm using window.onpopstate to detect when the back/forward buttons are clicked, and using that to make the new $http.get() call with the relevant state data from the history.
For some reason, the $http.get() function only fires on every second popstate, and then it makes two calls. It's almost as if there's some caching going on, but I haven't been able to find the culprit. This behaviour persists in both directions, backwards and forwards, and is consistently every second event. I've verified that window.history.length is only incremented by 1 for every tag added/removed, that the state data is being successfully sent, that new search queries are being correctly assembled, and that the request path is correct. It's just not firing. What's going on??
To illustrate, the behaviour flow looks like this:
Load page at /default
Add first tag: URL is /default&tags=a, $http.get() returns new data
Add second tag: URL is /default&tags=a,b, $http.get() returns new data
Add third tag: URL is /default&tags=a,b,c, $http.get() returns new data
Add fourth tag: URL is /default&tags=a,b,c,d, $http.get() returns new data
First back button event
window.onpopstate fires, URL is now /default&tags=a,b,c
No network changes
Second back button event
window.onpopstate fires, URL is now /default&tags=a,b
$http.get() fires, sends network request for data with /default&tags=a,b,c
$http.get() fires again, sends network request for data with /default&tags=a,b
dataset for /default&tags=a,b loads
Third back button event
window.onpopstate fires, URL is now /default&tags=a
No network changes
Fourth back button event
window.onpopstate fires, URL is now /default
$http.get() fires, sends network request for data with /default&tags=a
$http.get() fires again, sends network request for data with /default
dataset for /default loads
Relevant code snippet:
$scope.apiRequest = function(options, callback) {
  // Omitted: a bunch of functions to build query
  // based on user-selected tags.
  // I've verified that this is working correctly.
  $http.get(path)
    .then(function(response) {
      console.log('http request submitted');
      if (callback) {
        callback(response.data.response, response.data.count, response.data.facets);
        console.log('data returned');
      }
    }, function(response) {
      console.log('there has been an error');
    });
}
Neither the success nor error events fire. I've tried using $http.get().then().catch() to see if there might be something else going on, but for some reason I keep getting an error in my console that says that ...catch() is not a valid function, which is in and of itself bewildering. Any ideas?
Thanks!
This sounds indicative of a change happening outside the $digest loop. In that case, try adding $scope.$apply(); as the last line of your window.onpopstate handler to kick off the $digest cycle and execute your function call.
The article Notes On AngularJS Scope Life-Cycle helped me to better understand the $digest cycle and how you can force it to run with $scope.$apply(). Keep in mind you want to use $scope.$apply() sparingly, but in some cases you are forced to kick off the cycle, especially with async callbacks.
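To see why a missing digest makes handlers appear to "not fire", here's a minimal model of the watcher/digest mechanism in plain JavaScript (no Angular; `makeScope` is invented for illustration): a plain DOM event like popstate can mutate the scope, but watchers only notice during a digest, and $apply() is what triggers one.

```javascript
// Toy model of Angular's digest loop (not real Angular code).
function makeScope() {
  var watchers = [];
  return {
    tags: [],
    $watch: function (getter, listener) {
      watchers.push({ getter: getter, listener: listener, last: undefined });
    },
    $digest: function () {
      var scope = this;
      watchers.forEach(function (w) {
        var value = w.getter(scope);
        if (value !== w.last) {      // only changed values fire listeners
          w.listener(value, w.last);
          w.last = value;
        }
      });
    },
    $apply: function (fn) {
      if (fn) fn(this);
      this.$digest();                // $apply always ends by digesting
    }
  };
}

var $scope = makeScope();
var requests = [];                   // stands in for $http.get() calls
$scope.$watch(function (s) { return s.tags.join(','); },
              function (value) { requests.push(value); });
$scope.$digest();                    // initial digest: requests = ['']

// A popstate-style callback that forgets $apply: the mutation is
// invisible, because no digest runs afterwards.
$scope.tags = ['a', 'b'];
// requests is still [''] here.

// Wrapping the same kind of mutation in $apply triggers the digest:
$scope.$apply(function (s) { s.tags = ['a']; });
// requests now ends with 'a'.
```

This also hints at the "every second event, two requests" symptom: each un-applied mutation sits unnoticed until the next digest runs for some other reason, at which point the accumulated changes get processed together.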

Block javascript execution and wait on event triggered in UI

I have a method in which I want to stop execution (do not return), wait on an event triggered by the UI, then continue that method.
chrome.webRequest.onBeforeSendHeaders.addListener(
  function(details) {
    var newHeaders;
    //I need to stop here, wait on some user event, update the `newHeaders` variable with
    //the content set by the user on the page
    return {requestHeaders: newHeaders};
  },
  {urls: ["<all_urls>"]},
  ["blocking", "requestHeaders"]
);
What I'm doing: developing a Chrome plugin that intercepts requests, modifies them based on the user's input from the HTML, then sends the request. Based on the docs, I assumed I have to modify the request directly in the listener and return it.
I hope this points you in the right direction:
The docs at: http://developer.chrome.com/trunk/extensions/webRequest.html say:
If the optional opt_extraInfoSpec array contains the string 'blocking'
(only allowed for specific events), the callback function is handled
synchronously. That means that the request is blocked until the
callback function returns. In this case, the callback can return a
BlockingResponse that determines the further life cycle of the
request. Depending on the context, this response allows cancelling or
redirecting a request (onBeforeRequest), cancelling a request or
modifying headers (onBeforeSendHeaders, onHeadersReceived), or
providing authentication credentials (onAuthRequired).
My guess is that you can specify "blocking" and then gather information from your user.
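One practical consequence of the synchronous callback: you can't pause inside it to wait for a UI event. Instead, collect the user's header edits ahead of time (from your extension page, into a variable or chrome.storage) and apply them synchronously inside the listener. Here's a sketch of just the synchronous rewrite step; `userHeaders` and `rewriteHeaders` are invented names, and the `{name, value}` shape matches what the webRequest API passes in `details.requestHeaders`:

```javascript
// Edits the user made earlier in the extension's UI, keyed by header name.
var userHeaders = { 'X-Custom': 'from-the-user' };

// Pure, synchronous rewrite step, suitable for calling inside the
// blocking onBeforeSendHeaders listener and returning as
// {requestHeaders: rewriteHeaders(details.requestHeaders)}.
function rewriteHeaders(requestHeaders) {
  return requestHeaders.map(function (header) {
    return Object.prototype.hasOwnProperty.call(userHeaders, header.name)
      ? { name: header.name, value: userHeaders[header.name] }
      : header;                     // pass untouched headers through
  });
}
```

The key design point is that the listener itself stays trivial and synchronous; all the waiting-on-the-user happens before the request is ever made.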
