I have a React app. In that app, certain button click events fire just before the page changes (analytics events):
sendEvent('xyz')
let win = window.open(`${PAGE_PATHS.dashboard}`, "_self");
if (win !== null) {
win.focus();
}
Where event is
async event(name, data) {
await this.sendEvent(name, data)
}
Where sendEvent() is
async sendEvent(name, data) {
  // some code
  // note: the request is started but never awaited, so navigating
  // away can interrupt it before it completes
  axios.post(url, payload, userConfig)
  return
}
Now, because of the current structure, the events sometimes don't get logged. There are two things I could do here:
Use await, but I don't want to, because sending the event and waiting for the response might take some time (bad for user experience) and I don't care about the response.
Use setTimeout.
For some reason I don't like either approach. Is there a way I can have the task execute (if it isn't completed) even after the webpage has changed its href? Maybe using a service worker or web worker?
What you are doing is commonly called "sending a beacon", and there is a method in the Web standards specifically for this case: Navigator.sendBeacon().
This method will allow your script to send a request to the web server, even after the page has been killed.
I am not an axios ninja, so I can't tell you how your code should be rewritten to do the same, but it certainly can be.
The basic usage is:
navigator.sendBeacon(url, data);
where data can be an ArrayBuffer, a TypedArray, a Blob, a DOMString, a FormData object, or a URLSearchParams object.
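For the question's sendEvent, a minimal sketch might look like this (the JSON payload shape and the url variable are assumptions carried over from the question; depending on the server's CORS setup, a simple content type such as text/plain may be required instead of application/json):
sendEvent(name, data) {
  // Blob lets us declare the Content-Type the server expects
  const payload = new Blob([JSON.stringify({ name, data })],
                           { type: 'application/json' });
  // queues the request and returns immediately with a boolean;
  // the browser delivers it even if the page unloads
  return navigator.sendBeacon(url, payload);
}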
I have recently been working on a project with both a client (in the browser) and Node.js. I am using WebSockets to communicate between the two. I have implemented an API on my server that communicates over the WebSockets, and the server works just fine. In the browser, I am implementing the following class (in JavaScript) to interface with the API.
class ApiHandlerV1 {
  constructor(apiUrl) {
    // create the object; store the URL so connect() can use it
    this.url = apiUrl;
    this.ws = null;
  }
  makeRequest(request, callback) {
    // make a request
  }
  connect() {
    // connect the websocket
    this.ws = new WebSocket(this.url);
    this.ws.onmessage = function () {
      // call callback?
    };
  }
}
The issue I am caught up on is that I want to be able to call makeRequest, provide a callback, and have that callback triggered once the socket gets data back. I have thought about just re-defining .onmessage every time I make a request, but that seems dirty to me, and there is most likely a nice and easy solution to this.
Clarifications: Because of how I implemented my server, I will only get a single message back from the server.
As Dominik pointed out in the comments, I should also say that I am going to call .connect() before I make a request. I will be calling makeRequest multiple times in other parts of my code.
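One common pattern, sketched here under two assumptions (the server answers requests in the order they arrive, and connect() has already been called, as stated above): keep the callbacks in a FIFO queue, and let a single onmessage handler, installed once, resolve them in order.
class ApiHandlerV1 {
  constructor(apiUrl) {
    this.url = apiUrl;
    this.ws = null;
    this.pending = []; // callbacks waiting for a response, oldest first
  }
  connect() {
    this.ws = new WebSocket(this.url);
    // one handler, defined once: each message resolves the oldest request
    this.ws.onmessage = (event) => {
      const callback = this.pending.shift();
      if (callback) callback(event.data);
    };
  }
  makeRequest(request, callback) {
    this.pending.push(callback);
    this.ws.send(JSON.stringify(request));
  }
}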
The chrome.webRequest API has the concept of a request ID (source: Chrome webRequest documentation):
Request IDs
Each request is identified by a request ID. This ID is unique within a browser session and the context of an extension. It remains constant during the life cycle of a request and can be used to match events for the same request. Note that several HTTP requests are mapped to one web request in case of HTTP redirection or HTTP authentication.
You can use it to correlate the requests even across redirects. But how do you initially get hold of the ID when starting a new request with fetch or XMLHttpRequest?
So far, I have not found anything better than to use the URL of the request as a way to make the initial link between the new request and the requestId. However, if there are overlapping requests to the same resource, this is not reliable.
Questions:
If you make a new request (either with fetch or XMLHttpRequest), how do you reliably get access to the requestId?
Does the fetch API or XMLHttpRequest API allow access to the requestId?
What I want to do is to use the functionality provided by the webRequest API to modify a single request, but I want to make sure that I do not accidentally modify other pending requests.
To the best of my knowledge, there is no direct support in the fetch or XMLHttpRequest API. Also, I'm not aware of a completely reliable way to get hold of the requestId.
What I ended up doing was installing a onBeforeRequest listener, storing the requestId, and then immediately removing the listener again. For instance, it could look like this:
function makeSomeRequest(url) {
let listener;
const removeListener = () => {
if (listener) {
chrome.webRequest.onBeforeRequest.removeListener(listener);
listener = null;
}
};
let requestId;
listener = (details) => {
if (!requestId && urlMatches(details.url, url)) {
requestId = details.requestId;
removeListener();
}
};
chrome.webRequest.onBeforeRequest.addListener(listener, { urls: ['<all_urls>'] });
// install other listeners, which can then use the stored "requestId"
// ...
// finally, start the actual request, for instance
const promise = fetch(url).then(doSomething);
// and make sure to always clean up the listener
promise.then(removeListener, removeListener);
}
It is not perfect, and matching the URL is a detail that I left open. You could simply compare whether details.url is identical to url:
function urlMatches(url1, url2) {
return url1 === url2;
}
Note that it is not guaranteed that you will see the identical URL: for instance, if you make a request against http://some.domain.test, you will see http://some.domain.test/ in your listener (see my other question about the details). Or http:// could have been replaced by https:// (here I'm not sure, but it could be caused by other extensions like HTTPS Everywhere).
That is why the code above should only be seen as a sketch of the idea. It seems to work well enough in practice, as long as you do not start multiple requests to the identical URL. Still, I would be interested in learning about a better way to approach the problem.
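A slightly more tolerant comparison could normalize both URLs first. This is only a sketch; it deliberately ignores the protocol to tolerate the https:// upgrade mentioned above:
function urlMatches(url1, url2) {
  // new URL() normalizes details such as the missing trailing slash
  // on http://some.domain.test vs. http://some.domain.test/
  const a = new URL(url1);
  const b = new URL(url2);
  return a.host === b.host &&
         a.pathname === b.pathname &&
         a.search === b.search;
}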
I'm introducing a service worker on my site, and I'm using the app-shell approach for responding to requests. Below is my code structure.
serviceWorker.js
self.addEventListener("fetch", function(event) {
if (requestUri.indexOf('-spid-') !== -1) {
reponsePdPage(event,requestUri);
}else{
event.respondWith(fetch(requestUri,{mode: 'no-cors'}).catch(function (error){
console.log("error in fetching => "+error);
return new Response("not found");
})
);
}
});
function reponsePdPage(event, requestUri) {
  var appShellResponse = appShellPro();
  event.respondWith(appShellResponse); // responds with app-shell
  event.waitUntil(
    apiResponse(requestUri) // sends the dynamic content
  );
}
function appShellPro() {
  return fetch('app-shell.html');
}
function apiResponse(requestUri) {
  var message = {'price': '12.45cr'};
  // return the promise so waitUntil keeps the worker alive until done
  return self.clients.matchAll().then(function (clients) {
    clients.forEach(function (client) {
      if (client.url == requestUri)
        client.postMessage(JSON.stringify(message));
    });
  });
}
App-shell.html
<html>
<head>
<script>
if ('serviceWorker' in navigator) {
navigator.serviceWorker.onmessage = function (evt) {
var message = JSON.parse(evt.data);
document.getElementById('price').innerHTML=message['price'];
}
}
</script>
</head>
<body>
<div id="price"></div>
</body>
</html>
serviceWorker.js is my only service worker file. Whenever I get a request with -spid- in the URL, I call the reponsePdPage function. In reponsePdPage I first respond with app-shell.html; after that I call the apiResponse function, which calls postMessage and sends the dynamic data. The listener for the message is registered in app-shell.html.
The issue I'm facing is that sometimes postMessage gets called before the listener is registered. That means apiResponse calls postMessage, but there is no listener registered for that event yet, so I can't capture the data. Is there something wrong in my implementation?
I'm going to focus on just the last bit, about the communication between the service worker and the controlled page. That question is separate from many of the other details you provide, such as using PHP and adopting the App Shell model.
As you've observed, there's a race condition there, due to the fact that the code in the service worker and the parsing and execution of the HTML are performed in separate processes. I'm not surprised that the onmessage handler isn't established in the page yet at the time the service worker calls client.postMessage().
You've got a few options if you want to pass information from the service worker to controlled pages, while avoiding race conditions.
The first, and probably simplest, option is to change the direction of communication, and have the controlled page use postMessage() to send a request to the service worker, which then responds with the same information. If you take that approach, you'll be sure that the controlled page is ready for the service worker's response. There's a full example here, but here's a simplified version of the relevant bit, which uses a Promise-based wrapper to handle the asynchronous response received from the service worker:
Inside the controlled page:
function sendMessage(message) {
// Return a promise that will eventually resolve with the response.
return new Promise(function(resolve) {
var messageChannel = new MessageChannel();
messageChannel.port1.onmessage = function(event) {
resolve(event.data);
};
navigator.serviceWorker.controller.postMessage(message,
[messageChannel.port2]);
});
}
Inside the service worker:
self.addEventListener('message', function(event) {
// Check event.data to see what the message was.
// Put your response in responseMessage, then send it back:
event.ports[0].postMessage(responseMessage);
});
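For completeness, a hypothetical round trip from the page might then look like this (the 'getPrice' command name and the response shape are assumptions, not from the original answer):
// In the controlled page: ask the service worker for the price,
// then update the DOM once the response arrives.
sendMessage({command: 'getPrice'}).then(function(response) {
  document.getElementById('price').textContent = response.price;
});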
Other approaches include setting a value in IndexedDB inside the service worker, which is then read from the controlled page once it loads.
And finally, you could actually take the HTML you retrieve from the Cache Storage API, convert it into a string, modify that string to include the relevant information inline, and then respond with a new Response that includes the modified HTML. That's probably the most heavyweight and fragile approach, though.
I have some content in local storage. I want to send it in an HTTP header every time a request is made to the server, by invoking something like xhr.setRequestHeader('custom-header', 'value'). Instead of calling the function that does this before every request, I want it to be called automatically.
This can be done easily by overwriting the send method:
// save the real `send`
var realSend = XMLHttpRequest.prototype.send;
// replace `send` with a wrapper
XMLHttpRequest.prototype.send = function() {
this.setRequestHeader("X-Foobar", "my header content");
// run the real `send`
realSend.apply(this, arguments);
}
This turns XMLHttpRequest.prototype.send into a function that does some arbitrary operation (here, setting the X-Foobar request header on the XMLHttpRequest instance) and then executes the actual Ajax request with the real send method.
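Since the questioner's header content lives in local storage, the hardcoded string above could be replaced with a lookup. A sketch (the storage key 'custom-header-value' is an assumption):
var realSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function() {
  // read the value fresh on every request, in case it changed
  var value = localStorage.getItem('custom-header-value');
  if (value !== null) {
    this.setRequestHeader('custom-header', value);
  }
  realSend.apply(this, arguments);
};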
Local storage was actually designed not to be sent to the server automatically. This was done to improve on cookies, which cause a lot of overhead (if they hold much data) because they are sent with every page request. That slows things down and is particularly bad for mobile phones. So you will have to continue with the method you are already using, or take one of the alternative suggestions offered in other replies.
I'm trying to figure out a way to cache my knockoutJS SPA data and I've been experimenting with amplifyJS. Here's one of my GET functions:
UserController.prototype.getUsers = function() {
var self = this;
return $.ajax({
type: 'GET',
url: self.Config.api + 'users'
}).done(function(data) {
self.usersArr(ko.utils.arrayMap(data.users, function(item) {
// run each item through model
return new self.Model.User(item);
}));
}).fail(function(data) {
// failed
});
};
Here's the same function, "amplified":
UserController.prototype.getUsers = function() {
var self = this;
if (amplify.store('users')) {
self.usersArr(ko.utils.arrayMap(amplify.store('users'), function(item) {
// run each item through model
return new self.Model.User(item);
}));
} else {
return $.ajax({
type: 'GET',
url: self.Config.api + 'users'
}).done(function(data) {
self.usersArr(ko.utils.arrayMap(data.users, function(item) {
// run each item through model
return new self.Model.User(item);
}));
}).fail(function(data) {
// failed
});
}
};
This works as expected, but I'm not sure about the approach I used, because it will also require extra work on the addUser, removeUser and editUser functions. And seeing as I have many more similar functions throughout my app, I'd like to avoid the extra code if possible.
I've found a way of handling things with the help of ko.extenders, like so:
this.usersArr = ko.observableArray().extend({ localStore: 'users' });
The ko.extenders.localStore function then updates the local storage data whenever it detects a change inside the observableArray. So on init it writes to the observableArray if local storage data exists for the 'users' key, and on changes it updates the local storage data.
My problem with this approach is that I need to run my data through the model and I couldn't find a way to do that from the localStore function, which is kept on a separate page.
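(For reference, such an extender typically looks something like the following sketch, assuming an amplify-backed store; the actual implementation the question refers to is not shown in the post.)
ko.extenders.localStore = function(target, key) {
  // seed the observable from local storage, if data exists for the key
  var stored = amplify.store(key);
  if (stored !== undefined) {
    target(stored);
  }
  // write back to local storage on every change
  target.subscribe(function(newValue) {
    amplify.store(key, newValue);
  });
  return target;
};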
Has any of you worked with KO and Amplify? What approach did you use? Should I use the first one or try a combination of the two and rewrite the extender in a way that it only updates the local storage without writing to the observableArray on init?
Following the discussion in the question's comments, I suggested using native HTTP caching instead of adding another caching layer on the client by means of an extra library.
This would require implementing a conditional request scheme.
Such a scheme relies on freshness information in the Ajax response headers via the Last-Modified (or ETag) HTTP headers and other headers that influence browser caching (like Cache-Control with its various options).
The browser transparently sends an If-Modified-Since (or If-None-Match) header to the server when the same resource (URL) is requested subsequently.
The server can respond with HTTP 304 Not Modified if the client's information is still up-to-date. This can be a lot faster than re-creating a full response from scratch.
From the Ajax request's point of view (jQuery or otherwise), a response works the same way no matter whether it actually came from the server or from the browser's cache; the latter is just a lot faster.
The server side needs careful adaptation for this; the client side, on the other hand, does not need much change.
The benefit of implementing conditional requests is reduced load on the server and faster response behavior on the client.
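As an illustration only (the question does not name a server stack; this sketch assumes a Node.js/Express endpoint, with usersLastModified and loadUsers as hypothetical helpers):
// Express sketch of a conditional GET for /users.
// usersLastModified would be updated whenever the user data changes.
app.get('/users', function(req, res) {
  var ifModifiedSince = req.get('If-Modified-Since');
  // HTTP dates have one-second granularity, so compare at that precision
  if (ifModifiedSince &&
      new Date(ifModifiedSince) >= new Date(usersLastModified.toUTCString())) {
    return res.status(304).end(); // client's copy is still fresh
  }
  res.set('Last-Modified', usersLastModified.toUTCString());
  res.json({ users: loadUsers() });
});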
Knockout has a specialty that can improve this even further:
If you happen to use the mapping plugin to map raw server data to a complex view model, you can define - as part of the options that control the mapping process - a key function. Its purpose is to match parts of your view model against parts of the source data.
This way parts of the data that already have been mapped will not be mapped again, the others are updated. That can help reduce the client's processing time for data it already has and, potentially, unnecessary screen updates as well.
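A sketch of such a key function, assuming the mapping plugin is in use and the raw user objects carry an id property (the 'id' name is an assumption):
// Match incoming raw items to existing view-model entries by id,
// so existing entries are updated in place instead of recreated.
var mappingOptions = {
  users: {
    key: function(item) {
      return ko.utils.unwrapObservable(item.id);
    }
  }
};
ko.mapping.fromJS(data, mappingOptions, viewModel);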