Track XMLHttpRequest results - JavaScript

I have a system built of many modules that use AJAX to do POSTs and GETs. If I monitor the results of these requests, I can tell whether the system is responsive - i.e. whether the browser is still connected to the IP source.
I can inject some callback methods into jQuery's AJAX .fail() by hand. I have actually done this, but it is easy to forget and it adds a lot of extra code, since everything in this system goes through AJAX.
I saw this interesting code that intercepts the XMLHttpRequest open prototype method:
(function(open) {
    XMLHttpRequest.prototype.open = function() {
        this.addEventListener("readystatechange", function() {
            console.log(this.readyState);
        }, false);
        open.apply(this, arguments);
    };
})(XMLHttpRequest.prototype.open);
I looked through the API, https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest, but I did not find anywhere that I could intercept successful/failed/timed-out requests. Is this possible?

It's all about listening to the appropriate events; they appear in the left sidebar of your link under "Events".
That said, I would consider using, or taking some inspiration from, zone.js.
A Zone is an execution context that persists across async tasks. You can think of it as thread-local storage for JavaScript VMs.
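Building on the snippet from the question, one way to catch success/failure/timeout is to listen for the standard load, error, timeout, and abort events. Below is a minimal sketch; instrumentXHR and the onOutcome callback are made-up names, and the constructor is injected so the logic is easy to exercise outside a browser:

```javascript
// Patch an XHR-like constructor so every request reports its outcome.
// instrumentXHR and onOutcome are illustrative names, not a standard API.
function instrumentXHR(XHRClass, onOutcome) {
    var origOpen = XHRClass.prototype.open;
    XHRClass.prototype.open = function () {
        var xhr = this;
        ["load", "error", "timeout", "abort"].forEach(function (type) {
            xhr.addEventListener(type, function () { onOutcome(type, xhr); });
        });
        return origOpen.apply(this, arguments);
    };
}

// In a page you would call this once at startup, before any library issues requests:
// instrumentXHR(XMLHttpRequest, function (outcome) { console.log("request finished:", outcome); });
```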


Is it possible to watch over XHR/Fetch requests happening in the browser?

I want to react to some events that are triggered by a 3rd party on my site.
Those events are Fetch/XHR requests, I'd like to be notified when a "watched" request happens (I assume I would watch them based on the "Request URL"), and I would like to read its request/response headers and payload.
I've never seen that being done before, is it even possible?
The browser is aware of all requests happening, but I'm uncertain whether we can read this, can we?
Eventually, I hope to achieve a watcher that triggers my own code when certain endpoints have been called, while reusing the data sent by those requests. I'm hoping to detect both WHEN a particular endpoint is called and WHAT the response is, so that I can use it for my own needs (without performing yet another identical request).
I'm not sure reading the data will be possible, but I found that reacting to fetch requests is possible with the help of Resource Timing (see https://www.w3.org/TR/resource-timing/).
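As a sketch of the Resource Timing approach, the filtering logic can be kept as a plain function and fed from a PerformanceObserver in the browser (makeResourceFilter and the "/api/" pattern are made-up names for illustration):

```javascript
// Returns a handler that invokes callback for every timing entry whose
// URL contains urlPattern. makeResourceFilter is an illustrative name.
function makeResourceFilter(urlPattern, callback) {
    return function (entries) {
        entries.forEach(function (entry) {
            if (entry.name.indexOf(urlPattern) !== -1) callback(entry);
        });
    };
}

// In the browser you would wire it up like this:
// var handler = makeResourceFilter("/api/", function (e) { console.log(e.name, e.duration); });
// new PerformanceObserver(function (list) { handler(list.getEntries()); })
//     .observe({ entryTypes: ["resource"] });
```

Note that Resource Timing only tells you that a request happened and how long it took; it does not expose headers or response bodies.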
Alternatively, you can create a decorator function for fetch:
const origFetch = fetch;
window.fetch = function(...args) {
    console.log(args);
    return origFetch.apply(this, args);
};
You can also create a decorator for the Response.prototype.json function.
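A sketch of that idea, assuming an environment where Response is available (browsers, or Node 18+); the console.log is just a placeholder for your own handling:

```javascript
// Wrap Response.prototype.json so every parsed body can be observed
// before it reaches the original caller.
const origJson = Response.prototype.json;
Response.prototype.json = function (...args) {
    return origJson.apply(this, args).then((data) => {
        console.log("json body:", data); // placeholder: observe or copy the data here
        return data;                     // pass the body through untouched
    });
};
```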
It's possible using the built-in webRequest API. The following example is specific to the Chrome extension runtime, but something similar is doable for other browsers, too.
For Chrome extensions, it needs to run in a service worker (e.g. background.js).
chrome.webRequest.onBeforeSendHeaders.addListener(
    (requestHeadersDetails) => {
        console.log('requestHeadersDetails', requestHeadersDetails);
    },
    {
        urls: ['https://*/*'], // match patterns require a path component
    },
    [
        'requestHeaders',
        'extraHeaders',
    ],
);
See https://developer.chrome.com/docs/extensions/reference/webRequest/#event-onBeforeRequest and https://developer.chrome.com/docs/extensions/reference/webRequest/#event-onBeforeSendHeaders (both are very similar)
I didn't find a way to re-use the data that were fetched, though.
But at least I'm notified when a request is sent.

Fetch API before the user closes the browser is not behaving consistently [duplicate]

I'm trying to find out when a user leaves a specified page. There is no problem finding out when they used a link inside the page to navigate away, but I also need to detect something like when they closed the window or typed another URL and pressed enter. The second one is not so important, but the first one is. So here is the question:
How can I see when a user closes my page (capture the window.close event)? After that it doesn't really matter (I need to send an AJAX request, but if I can get it to run an alert, I can do the rest).
Updated 2021
TL;DR
Beacon API is the solution to this issue (on almost every browser).
A beacon request is supposed to complete even if the user exits the page.
When should you trigger your Beacon request?
This will depend on your use case. If you are looking to catch any user exit, visibilitychange (not unload) is the last event reliably observable by developers in modern browsers.
NB: Since the implementation of visibilitychange is not consistent across browsers, you can rely on the lifecycle.js library to detect it.
<!-- lifecycle.js (1K) for cross-browser compatibility -->
<!-- https://github.com/GoogleChromeLabs/page-lifecycle -->
<script defer src="/path/to/lifecycle.js"></script>
<script defer>
lifecycle.addEventListener('statechange', function(event) {
    if (event.originalEvent == 'visibilitychange' && event.newState == 'hidden') {
        var url = "https://example.com/foo";
        var data = "bar";
        navigator.sendBeacon(url, data);
    }
});
</script>
Details
Beacon requests are supposed to run to completion even if the user leaves the page - switches to another app, etc - without blocking user workflow.
Under the hood, it sends a POST request along with the user credentials (cookies), subject to CORS restrictions.
var url = "https://example.com/foo";
var data = "bar";
navigator.sendBeacon(url, data);
The question is when to send your Beacon request. Especially if you want to wait until the last moment to send session info, app state, analytics, etc.
It used to be common practice to send it during the unload event, but changes to page lifecycle management - driven by mobile UX - killed this approach. Today, most mobile workflows (switching to new tab, switching to the homescreen, switching to another app...) do not trigger the unload event.
If you want to do things when a user exits your app/page, it is now recommended to use the visibilitychange event and check for transitioning from passive to hidden state.
document.addEventListener('visibilitychange', function() {
    if (document.visibilityState == 'hidden') {
        // send beacon request
    }
});
The transition to hidden is often the last state change that's reliably observable by developers (this is especially true on mobile, as users can close tabs or the browser app itself, and the beforeunload, pagehide, and unload events are not fired in those cases).
This means you should treat the hidden state as the likely end to the user's session. In other words, persist any unsaved application state and send any unsent analytics data.
Details of the Page Lifecycle API are explained in this article.
However, implementation of the visibilitychange event, as well as the Page lifecycle API is not consistent across browsers.
Until browser implementation catches up, using the lifecycle.js library and page lifecycle best practices seems like a good solution.
<!-- lifecycle.js (1K) for cross-browser compatibility -->
<!-- https://github.com/GoogleChromeLabs/page-lifecycle -->
<script defer src="/path/to/lifecycle.js"></script>
<script defer>
lifecycle.addEventListener('statechange', function(event) {
    if (event.originalEvent == 'visibilitychange' && event.newState == 'hidden') {
        var url = "https://example.com/foo";
        var data = "bar";
        navigator.sendBeacon(url, data);
    }
});
</script>
For more numbers about the reliability of vanilla page lifecycle events (without lifecycle.js), there is also this study.
Adblockers
Adblockers seem to have options that block sendBeacon requests.
Cross site requests
Beacon requests are POST requests that include cookies and are subject to CORS spec. More info.
There are unload and beforeunload javascript events, but these are not reliable for an Ajax request (it is not guaranteed that a request initiated in one of these events will reach the server).
Therefore, doing this is highly not recommended, and you should look for an alternative.
If you definitely need this, consider a "ping"-style solution. Send a request every minute basically telling the server "I'm still here". Then, if the server doesn't receive such a request for more than two minutes (you have to take into account latencies etc.), you consider the client offline.
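A minimal sketch of the client half of such a ping (the /ping path, the payload shape, and the makeHeartbeat name are all assumptions; the sender is injected so the same logic works with navigator.sendBeacon or an AJAX call):

```javascript
// Builds a tick function that reports "I'm still here" through the
// provided send function. makeHeartbeat and /ping are illustrative.
function makeHeartbeat(send, url) {
    return function tick() {
        send(url, JSON.stringify({ alive: true, ts: Date.now() }));
    };
}

// In the browser you might schedule it once a minute:
// setInterval(makeHeartbeat(function (u, d) { navigator.sendBeacon(u, d); }, "/ping"), 60000);
```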
Another solution would be to use unload or beforeunload to do a Sjax request (Synchronous JavaScript And XML), but this is strongly discouraged. Doing so will freeze the user's browser until the request completes, which they will not like (even if the request takes little time).
1) If you're looking for a way that works in all browsers, then the safest way is to send a synchronous AJAX request to the server. It is not a good method, but at least make sure that you are not sending too much data to the server, and that the server is fast.
2) You can also use an asynchronous AJAX request and use the ignore_user_abort function on the server (if you're using PHP). However, ignore_user_abort depends a lot on server configuration, so make sure you test it well.
3) For modern browsers you should not send an AJAX request. You should use the new navigator.sendBeacon method to send data to the server asynchronously, without blocking the loading of the next page. Since you want to send data to the server before the user moves away from the page, you can use this method in an unload event handler.
$(window).on('unload', function() {
    var fd = new FormData();
    fd.append('ajax_data', 22);
    navigator.sendBeacon('ajax.php', fd);
});
There also seems to be a polyfill for sendBeacon. It resorts to sending a synchronous AJAX if method is not natively available.
IMPORTANT FOR MOBILE DEVICES: Please note that the unload event handler is not guaranteed to fire on mobile devices, but the visibilitychange event is. So for mobile devices, your data-collection code may need a bit of tweaking.
You may refer to my blog article for the code implementation of all three ways.
I also wanted to achieve the same functionality and came across this answer from Felix ("it is not guaranteed that a request initiated in one of these events will reach the server").
To make the request reach the server, we tried the code below:
onbeforeunload = function() {
    // Your code goes here.
    return "";
}
We are using the IE browser. Now when the user closes the browser, they get the confirmation dialogue because of return ""; the browser waits for the user's confirmation, and this waiting time lets the request reach the server.
Years after posting the question, I made a much better implementation using Node.js and socket.io (https://socket.io) (you can use any kind of socket for that matter, but that was my personal choice).
Basically I open a connection with the client, and when it hangs up I save data / do whatever I need. Obviously this cannot be used to show anything or redirect the client (since you are doing it server side), but it is what I actually needed back then.
io.on('connection', function(socket) {
    socket.on('disconnect', function() {
        // Do stuff here
    });
});
So... nowadays I think this would be a better approach than the unload version, although harder to implement because you need Node, sockets, etc.; it's not that hard, though, and should take about 30 minutes even if it's your first time.
The selected answer is correct that you can't guarantee that the browser sends the xhr request, but depending on the browser, you can reliably send a request on tab or window close.
Normally, the browser closes before xhr.send() actually executes. Chrome and Edge look like they wait for the JavaScript event loop to empty before closing the window. They also fire the xhr request in a different thread than the JavaScript event loop. This means that if you can keep the event loop full for long enough, the xhr will successfully fire. For example, I tested sending an xhr request, then counting to 100,000,000. This worked very consistently in both Chrome and Edge for me. If you're using AngularJS, wrapping your call to $http in $apply accomplishes the same thing.
IE seems to be a little different. I don't think IE waits for the event loop to empty, or even for the current stack frame to empty. While it will occasionally send a request correctly, what seems to happen far more often (80%-90% of the time) is that IE will close the window or tab before the xhr request has completely executed, which results in only a partial message being sent. Basically the server receives a POST request, but there's no body.
For posterity, here's the code I used attached as the window.onbeforeunload listener function:
var xhr = new XMLHttpRequest();
xhr.open("POST", <your url here>);
xhr.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
var payload = {id: "123456789"};
xhr.send(JSON.stringify(payload));
for(var i = 0; i < 100000000; i++) {}
I tested in:
Chrome 61.0.3163.100
IE 11.608.15063.0CO
Edge 40.15063.0.0
Try this one. I solved this problem in JavaScript by sending an AJAX call to the server on browser or tab closing. I had a problem with page refreshes, because the onbeforeunload function fires on refresh as well. performance.navigation.type == 1 should isolate a refresh from a close (in the Mozilla browser).
$(window).bind('mouseover', (function () { // detecting DOM elements
    window.onbeforeunload = null;
}));

$(window).bind('mouseout', (function () { // detecting event out of DOM
    window.onbeforeunload = ConfirmLeave;
}));

function ConfirmLeave() {
    if (performance.navigation.type == 1) { // detecting page refresh (doesn't work in every browser)
    }
    else {
        logOutUser();
    }
}

$(document).bind('keydown', function (e) { // detecting Alt+F4 closing
    if (e.altKey && e.keyCode == 115) {
        logOutUser();
    }
});

function logOutUser() {
    $.ajax({
        type: "POST",
        url: GWA("LogIn/ForcedClosing"), // example controller/method
        async: false
    });
}
I agree with Felix's idea, and I solved my problem with that solution. Now I want to describe the server-side part:
1. Send a request from the client side to the server.
2. Save the time of the last received request in a variable.
3. Check the server time and compare it to the variable holding the last received request.
4. If the result is more than the time you expect, start running the code you want to run when the window is closed...
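The check in steps 3-4 can be sketched in a few lines (the two-minute threshold follows the earlier suggestion; the variable and function names are made up):

```javascript
// Treat the client as gone once no ping has arrived within the threshold.
var TIMEOUT_MS = 2 * 60 * 1000; // two minutes, to allow for latencies etc.

function isClientGone(lastPingMs, nowMs) {
    return nowMs - lastPingMs > TIMEOUT_MS;
}

// On each incoming ping you would update lastPingMs to the current server time,
// and a periodic job would run isClientGone(...) to trigger the "window closed" code.
```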
Use:
<body onUnload="javascript:">
It should capture everything except shutting down the browser program.

Worker using synchronous XMLHttpRequest to get data from GUI

I'd like a Web Worker which is deep in a call stack to be able to make a synchronous request to get information from the GUI.
The GUI itself is not blocked--it's able to process messages. But the JavaScript on the worker's stack is not written in async / await style. It is just a lot of synchronous code. So if the GUI tried to send a response back to the worker with a postMessage, that would just be stuck in the onmessage() queue.
I've found at least one hack that works in today's browsers. The worker can postMessage to the GUI for the information it wants--along with some sort of ID (e.g. a UUID). Then it can make a synchronous XMLHttpRequest--which is not deprecated on workers--to some server out on the web with that ID.
While the worker is waiting on that http request, the GUI processes the information request. When it's done, it does an XMLHttpRequest to POST to that same server with the ID and the data. The server then uses that information to fulfill the blocking request it is holding open for the worker. This fulfills the synchronous request.
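The rendezvous logic such an echo server needs can be sketched in a few lines (park and fulfill are made-up names; a real server would expose them over two HTTP endpoints keyed by the UUID):

```javascript
// One side parks a request under an ID and waits; the other fulfills it.
var pending = new Map();

function park(id) {
    return new Promise(function (resolve) { pending.set(id, resolve); });
}

function fulfill(id, data) {
    var resolve = pending.get(id);
    if (resolve) {
        pending.delete(id);
        resolve(data);
    }
}
```

In the scheme above, the worker's synchronous XMLHttpRequest would be held open until park(id)'s promise resolves, and the GUI's POST would call fulfill(id, data).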
It may seem hare-brained to outsource synchronization between the GUI and the worker to a server. But I'll do it if I have to, because it does not fit the use case to force the worker code to be written in asynchronous style. Also, I'm assuming that someday the browser will be able to do this kind of synchronization natively. But it looks like the one mechanism which could have been used--SharedArrayBuffer, has been disabled for the time being.
UPDATE circa late 2018: SharedArrayBuffer was re-enabled in Chrome for desktop v67. It's not back on for Android Chrome or other browsers yet, and might be a while.
(More bizarre options like compiling a JavaScript interpreter into the worker so the JS stack could be suspended and restarted at will are not on the table--not just due to size and performance, but the inability to debug the worker using the browser's developer tools.)
So...
Is there any way for a synchronous XMLHttpRequest to be fooled into making a request of something coming from within the browser itself (maybe via a custom link scheme)? If the GUI thread could directly answer an XMLHttpRequest, that would cut out the middleman.
Could the same functionality be provided via some kind of plugin? I'm thinking maybe synchronization could be done as an abstraction. If someone doesn't have the plugin, it falls back to using the network as a synchronization surrogate. (And presumably if they ever re-enable SharedArrayBuffer, it could just use that.)
I'm also wondering if there is some kind of stock JS-ready service that already implements the protocol for the echo server... if anyone knows of one. It seems quite easy to write.
I don't see a way to do what you're trying to do. Approaches that appear initially promising eventually run into hard problems.
Service Workers and fetch
In a comment, you suggested service workers as a possible solution. The service workers examples I've seen mention providing "custom responses to requests". However, all examples use the fetch event to provide the custom response. AFAIK, it is produced only when you actually use the fetch API specifically. An xhr won't generate a fetch event. (Yes, I've tried it and it did not work.) And you cannot just use fetch in your specific situation instead of xhr because fetch does not operate synchronously. The specs for fetch mention a "synchronous flag", but it is not part of the API.
Note that the fetch API and the associated event are not specific to service workers so you could use fetch in a plain worker, or elsewhere, if it solved your problem. You often see fetch mentioned with service workers because service workers can be used for scenarios where regular workers cannot be used and some of those scenarios entail providing custom responses to fetch requests.
Fake XMLHttpRequest
Marinos An suggested in a comment using a fake XMLHttpRequest object. In most cases, that would work. Testing frameworks like Sinon provide a fake XMLHttpRequest that allows testing code to have complete control over the responses the code under test gets. However, it does not work for your use-case scenario. If your fake xhr implementation is implemented as one JavaScript object and you try sending it to the worker, the worker will get a complete clone of it. Actions on the fake xhr performed inside the worker won't be seen outside the worker. Actions on the fake xhr performed outside the worker won't be seen inside the worker.
It is theoretically possible to work around the cloning issue by having the fake xhr consist of two objects: a front end through which requests are performed, and a backend through which fake responses are established. You could send the front end to the worker, but the front end and the back end would have to communicate with each other and this brings you right back to the communication problem you were trying to solve. If you could make the two parts of the fake xhr talk to each other in a way that allows you to fake synchronous xhr requests, then by the same token you would be able to solve the communication problem without the fake xhr.
Hm... perhaps you could create your workers on the fly, like so:
function startNewWorker(code) {
    var blob = new Blob([code], {type: "application/javascript"});
    var worker = new Worker(URL.createObjectURL(blob));
    return worker;
}
And then, for each new HTTP request you need, you start its own worker:
const w1 = startNewWorker(yourCodeThatDoesSomething);
w1.onmessage = function () { /* ... */};
const w2 = startNewWorker(yourCodeThatDoesSomething);
w2.onmessage = function () { /* ... */};
Both will be asynchronous and will not block the interface for your user; they both will be able to do their own work, and each of them will have its own listeners.
Notice that code is a string, so if you have a function, you can use its .toString() concatenated with (), like this:
function myWorkerContent() {
    // do something ....
}
const code = "(" + myWorkerContent.toString() + ")()";
// or, if you want to use templateLiterals
// const code = `(${myWorkerContent.toString()})()`;
This way, when your worker starts, the function will be created and executed immediately inside each of your workers.

How to intercept all AJAX requests made by different JS libraries

I am building a web app with different JS libraries (AngularJS, OpenLayers, ...) and need a way to intercept all AJAX responses so that, in case the logged-in user's session has expired (the response comes back with a 401 Unauthorized status), I can redirect them to the login page.
I know AngularJS offers interceptors to manage such scenarios, but I wasn't able to find a way to achieve such injection into OpenLayers requests, so I opted for a vanilla JS approach.
Here I found this piece of code...
(function(open) {
    XMLHttpRequest.prototype.open = function(method, url, async, user, pass) {
        this.addEventListener("readystatechange", function() {
            console.log(this.readyState); // this one I changed
        }, false);
        open.call(this, method, url, async, user, pass);
    };
})(XMLHttpRequest.prototype.open);
...which I adapted, and it looks like it behaves as expected (I only tested it on the latest Google Chrome).
As it modifies the prototype of XMLHttpRequest, I am wondering how dangerous this could turn out to be, or whether it could produce serious performance issues. And by the way, would there be any valid alternative?
Update: how to intercept requests before they get sent
The previous trick works OK. But what if, in the same scenario, you want to inject some headers before the request gets sent? Do the following:
(function(send) {
    XMLHttpRequest.prototype.send = function(data) {
        // in this case I'm injecting an access token (eg. accessToken) in the request headers before it gets sent
        if (accessToken) this.setRequestHeader('x-access-token', accessToken);
        send.call(this, data);
    };
})(XMLHttpRequest.prototype.send);
This type of function hooking is perfectly safe and is done regularly on other methods for other reasons.
And the only performance impact is really just one extra function call for each .open(), plus whatever code you execute yourself, which is probably immaterial when a networking call is involved.
In IE, this won't catch any code that tries to use the ActiveXObject method of doing Ajax. Well-written code looks for the XMLHttpRequest object first and uses it if available, and it has been available since IE 7. But there could be some code that uses the ActiveXObject method if it's available, which would be true through much later versions of IE.
In modern browsers, there are other ways to issue Ajax calls such as the fetch() interface so if one is looking to hook all Ajax calls, you have to hook more than just XMLHttpRequest.
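A sketch of hooking fetch() in the same spirit (hookFetch and the 401 handling are illustrative names, not a standard API; the wrapper takes the fetch implementation as a parameter so the idea is easy to test, but in a page you would wrap window.fetch itself):

```javascript
// Wraps a fetch-like function so every response can be inspected
// before it reaches the caller.
function hookFetch(fetchImpl, onResponse) {
    return function () {
        return fetchImpl.apply(this, arguments).then(function (res) {
            onResponse(res); // e.g. redirect to login when res.status === 401
            return res;      // hand the response back unchanged
        });
    };
}

// In a page:
// window.fetch = hookFetch(window.fetch, function (res) {
//     if (res.status === 401) location.href = "/login";
// });
```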
This won't catch XMLHttpRequests for some versions of IE (9 and below). Depending upon the library, they may look for IE's proprietary ActiveX control first.
And, of course, all bets are off if you are using a non-strict DOCTYPE under IE, but I'm sure you knew that.
Reference: CanIuse
As kindly pointed out by Firefox AMO Editor Rob W,
The following code changes the behavior of XMLHttpRequest. By default,
if the third ("async") parameter is not specified, it defaults to
true. When it is specified and undefined, it is equivalent to false,
which turns the request into a synchronous HTTP request. This causes the
UI to block while the request is being processed, and some features of
the XMLHttpRequest API are disabled too.
...
To fix this, replace open.call(....) with open.apply(this, arguments);
And here is a reference link:
https://xhr.spec.whatwg.org/#the-open()-method
Try this
let oldXHROpen = window.XMLHttpRequest.prototype.open;
window.XMLHttpRequest.prototype.open = function(method, url, async, user, password) {
    console.log({method});
    // Show loader
    this.addEventListener('load', function() {
        console.log('load: ' + this.responseText);
        // Hide loader
    });
    return oldXHROpen.apply(this, arguments);
}

AJAX fire-and-forget, looking for the opposite of server-sent event

Is there an API symmetric to Server-Sent Events for generating fire-and-forget events from browser to server? I know how to not reply to a request on the server side, but how do I tell the browser that it does not need to wait for a reply?
The goal here is to save resources on the client side; say you want to send 10k events to the server as fast as possible, not caring about what the server replies.
Edit: While mostly irrelevant to the question, here is some background about the project I'm working on which would make use of an "AJAX fire-and-forget". I want to build a JavaScript networking library for Scala.js that will have as one of its applications to be the transport layer between Akka actors on the JVM and on a browser (compiled with Scala.js). When WebSockets are not available I want to have some sort of fallback, and having a pending connection for the duration of a round trip on each JS->JVM message is not acceptable.
As you have asked how to tell the browser that it does not need to wait for a reply, I assume that you do not want to process the server reply.
In such a case, it is better to use the one-pixel image response trick, which is implemented by Google for analytics and tracking, and by many other such services.
More details here.
The trick is to create a new image using JavaScript and set its src property; the browser will immediately fire the request for the image, and it can issue multiple such requests in parallel.
var image = new Image();
image.src = "your-script.php?id=123&other_params=also";
PROs:
easy to implement
less load on server/client than an AJAX request
CONs:
you can only send GET requests using this approach.
Edit
For more references:
http://help.yahoo.com/l/us/yahoo/ywa/faqs/tracking/advtrack/3520294.html
https://support.google.com/dfp_premium/answer/1347585?hl=en
How to create and implement a pixel tracking code
Again, they are using the same pixel-image technique.
So, just to be clear, you're trying to use the XMLHttpRequest as a proxy for your network communication, which means you are 100% at the mercy of whatever XMLHttpRequest offers you, right?
My take is that if you're going to stick with XMLHttpRequest for this, you're going to have to just make peace with getting a server response. Just make the call asynchronously and have the response handled by a no-op function. Consider what somebody else suggested, using a queue on the server (or an asynchronous method on the server) so you return immediately to the client. Otherwise, I really think JavaScript is just the wrong tool for the job you're describing.
XMLHttpRequest is going to be a different implementation (presenting a more or less common interface contract) in every browser. I mean, Microsoft invented the thing, then the other browser makers emulated it, then voila, everybody started calling it Web 2.0. Point being, if you push too hard at the doughy center of XMLHttpRequest, you may get different behavior in different browsers.
XMLHttpRequest, as far as I know, strictly uses TCP (no UDP option), so at the very least your client is going to receive a TCP ACK from the server. There is no way to tell the server not to respond at that level. It's baked into the TCP/IP network stack.
Additionally, the communication uses the HTTP protocol, so the server will respond with HTTP headers... right? I mean, that is simply the way the protocol is defined. Telling HTTP to be something different is kind of like telling a cat to bark like a chicken.
Even if you could cancel the request on the client side by calling abort() on XMLHttpRequest, you're not cancelling it on the server side. To do so, even if it were possible with XMLHttpRequest, would require an additional request sent all the way to the server to tell it to cancel the response to the preceding request. How does it know which response to cancel? You'd have to manage request id's of some kind. You would have to be resilient to out-of-order cancellation requests. Complicated.
So here's a thought (I'm just thinking out loud): Microsoft's XMLHttpRequest was based at least in spirit on an even earlier Microsoft technology from the Visual Interdev days, which used a Java applet on the client to asynchronously fire off a request to the server, then it would pass control to your preferred JavaScript callback function when the response showed up, etc. Pretty familiar.
That Java async request thing got skewered during the whole Sun vs. Microsoft lawsuit fiasco. I heard rumors that a certain original Microsoft CEO would blow a gasket any time he learned about Microsoft tech being implemented using Java, and kill the tech. Who knows? I was unhappy when that capability disappeared for a couple of years, then happy again when XMLHttpRequest eventually showed up.
Maybe you see where I'm going, here... :-)
I think perhaps you're trying to squeeze behavior out of XMLHttpRequest that it just isn't built for.
The answer might be to just write your own Java applet, do some socket programming and have it do the kind of communications you want to see from it. But then, of course, you'll have issues with people not having Java enabled in their browsers, exacerbated by all the recent Java security problems. So you're looking at code-signing certificates and so on. And you're also looking at issues that you'll need to resolve on the server side. If you still use HTTP and work through your web server, the web server will still want to send HTTP responses, which will still tie up resources on the server. You could make those actions on the server asynchronous so that TCP sockets don't stay tied up longer than necessary, but you're still tying up resources on the server side.
I managed to get the expected behavior using a very small timeout of 2 ms. The following call is visible to the server, but the connection is closed on the client side before any reply from the server:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (xhr.readyState == 2) {
        alert("Response header received, it was not a fire-and-forget...");
    }
};
xhr.open("POST", "http://www.service.org/myService.svc/Method", true);
xhr.timeout = 2;
xhr.send(null);
This is not fully satisfactory because the timeout may change between browser/computers (for instance, 1ms does not work on my setup). Using a large timeout in the order of 50ms means that the client might hit the limit of maximum concurrent opened connections (6 on my setup).
Using XMLHttpRequest to send an async request (i.e. where you don't care if it succeeds or what the response is):
var req = new XMLHttpRequest();
req.open('GET', 'http://my.url.goes.here.com');
req.send();
You can do much the same thing with an Image object, too, btw:
new Image().src = 'http://my.url.goes.here.com';
The Image approach works particularly well if you're making cross-domain requests, since Images aren't subject to same-origin security restrictions the way XHR requests are. (BTW, it's good practice but not essential to have your endpoint return a 1x1 pixel PNG or GIF response with the appropriate Content-Type, to avoid browser console warnings like 'Resource interpreted as Image but transferred with MIME type text/html'.)
It sounds like you're trying to solve the wrong problem. Instead of dealing with this on the client, why not handle this on the server side.
Take the message from the client and put a message on a service bus or store the data in a database and return to the client. Depending on your stack and architecture, this should be fairly simple and very fast. You can process the message out of band, either a second service listens to the message bus and processes the request, or some sort of batch processor can come along later and process the records in the database.
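A sketch of that shape (the function names and the 202 status are assumptions): the endpoint only enqueues and acknowledges, and a separate pass drains the queue out of band.

```javascript
// Accept messages immediately; process them later, out of band.
var queue = [];

function accept(message) {
    queue.push(message);
    return { status: 202 }; // acknowledged, not yet processed
}

function drain(handler) {
    while (queue.length) {
        handler(queue.shift());
    }
}

// A batch processor or bus listener would call drain(processMessage) periodically.
```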
You won't have the same level of fine-grained control of the connection with XHR as with WebSockets. Ultimately, it's the browser that manages the HTTP connection lifecycle.
Instead of falling back from WebSockets to discrete XHR connections, maybe you can store and batch your events. For instance:
Client JS
function sendMessage(message) {
    WebSocketsAvailable ? sendWithWebSockets(message) : sendWithXHR(message);
}

var xhrQueue = [];

function sendWithXHR(message) {
    xhrQueue.push({
        timestamp: Date.now(), // if this matters
        message: message
    });
}

function flushXhrQueue() {
    if (xhrQueue.length) {
        var req = new XMLHttpRequest();
        req.open('POST', 'http://...');
        req.onload = function() { setTimeout(flushXhrQueue, 5000); };
        // todo: needs to handle errors, too
        req.send(JSON.stringify(xhrQueue));
        xhrQueue = [];
    }
    else {
        setTimeout(flushXhrQueue, 5000);
    }
}

setTimeout(flushXhrQueue, 5000);
setTimeout(flushXhrQueue, 5000);
On the server, maybe you can have two endpoints: one for WebSockets and one for XHR. The XHR handler deserialises the JSON queue object and calls (once per message) the same handler used by the WebSockets handler.
Server pseudo-code
function WSHandler(message) {
    handleMessage(message, Date.now());
}

function XHRHandler(jsonString) {
    var messages = JSON.parse(jsonString);
    for (var messageObj of messages) {
        handleMessage(messageObj.message, messageObj.timestamp);
    }
}

function handleMessage(message, timestamp) {
    ...
}
