Link click tracking does not work in the Safari browser - javascript

I have a basic HTML page with links that point to different sites. What I want to do is track the clicks. I do this by sending a 0-pixel image request on the link's click event, without returning false from the handler.
This works fine in every browser except Safari (on Windows).
When a link is clicked, I use JavaScript to delay the redirect, send an image request to the server, and log the click server-side. I have tried increasing the delay, but with no success. The tracker works great in all browsers except Safari, which does not send the request at all.
I don't know why, but possibly Safari waits for the complete JavaScript to execute before making the image request, and by the time it finishes, the page has already been redirected.
=========================================================
<html>
<head>
<script type="text/javascript">
function logEvent(){
    var image = new Image(1,1);
    image.onload = function(){ alert("Loaded"); };
    image.onerror = function(){ alert("Error"); };
    image.src = 'http://#path_to_logger_php#/log.php?' + Math.random(0, 1000) + '=' + Math.random(0, 1000);
    pauseRedirect(500);
}
function pauseRedirect(millis){
    var date = new Date();
    var curDate = null;
    do { curDate = new Date(); }
    while (curDate - date < millis);
}
</script>
</head>
<body>
<a href="#link_to_site_1#" onclick="logEvent();">Site 1</a><br/>
<a href="#link_to_site_2#" onclick="logEvent();">Site 2</a><br/>
</body>
</html>
=========================================================
The code works in Chrome, Firefox, IE, and Opera. It does not work in Safari only. Any clues?

I had the same issue with all WebKit browsers. In all others you only need to do new Image().src = "url", and the browser will send the request even when navigating to a new page. WebKit will stop the request before it is sent when you navigate to a new page right afterwards. I tried several hacks that inject the image into the document and even force a re-paint through img.clientHeight. I really don't want to use event.preventDefault, since that causes a lot of headaches when a link has target="_blank", a form submit, etc.
I ended up using a synchronous XMLHttpRequest for browsers supporting Cross-Origin Resource Sharing, since it will send the request to the server even though you don't get to read the response. A synchronous request has the unfortunate side effect of locking the UI thread while waiting for the response, so if the server is slow the page/browser will lock up until it receives a response.
var supportsCrossOriginResourceSharing = (typeof XMLHttpRequest != "undefined" && "withCredentials" in new XMLHttpRequest());

function logEvent() {
    var trackUrl = 'http://#path_to_logger_php#/log.php?' + Math.random(0, 1000) + '=' + Math.random(0, 1000);
    if (supportsCrossOriginResourceSharing) {
        xhrTrack(trackUrl);
    } else {
        imgTrack(trackUrl);
    }
}

function xhrTrack(trackUrl) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", trackUrl, false);
    xhr.onreadystatechange = function() {
        if (xhr.readyState >= xhr.OPENED) xhr.abort();
    };
    try { xhr.send(); } catch (e) {}
}

function imgTrack(trackUrl) {
    var trackImg = new Image(1,1);
    trackImg.src = trackUrl;
}
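As a side note not from the original answer: modern browsers also expose navigator.sendBeacon, which queues the request so it survives navigation away from the page. A minimal sketch, reusing the placeholder log.php URL and the imgTrack fallback from above:

function logEventBeacon() {
    var trackUrl = 'http://#path_to_logger_php#/log.php?' + Math.random(0, 1000) + '=' + Math.random(0, 1000);
    if (navigator.sendBeacon) {
        // the browser queues the request and completes it even while navigating away
        navigator.sendBeacon(trackUrl);
    } else {
        imgTrack(trackUrl); // fall back to the image trick for older browsers
    }
}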

Related

JavaScript - how to detect if the Custom URL scheme is available or not available?

On the Windows operating system I have a custom URI scheme which is used from IE, Firefox, Opera, Safari, and Google Chrome to launch a Juniper router VPN SSH client (similar to Cisco's). Basically it works as below: if the SSH client is installed, the VPN SSH client can be launched from the web page.
<a href="juniper:open">VPN SSH Client</a>
Problem:
Sometimes the user has not installed the Juniper router SSH client application from the CD/DVD box, and in that case juniper:open does nothing.
So I need to detect whether or not the URL scheme is available.
I tried a JavaScript approach, but it does not work exactly, because juniper:open is not a regular web link.
How do I detect it, then?
<script>
// Fails
function test1(){
    window.location = 'juniper:open';
    setTimeout(function(){
        if (confirm('Missing. Download it now?')){
            document.location = 'https://www.junper-affiliate.com/setup.zip';
        }
    }, 25);
    //document.location = 'juniper:open';
}
// Fails
function test2(h){
    document.location = h;
    var time = (new Date()).getTime();
    setTimeout(function(){
        var now = (new Date()).getTime();
        if ((now - time) < 400) {
            if (confirm('Missing. Download it now?')){
                document.location = 'https://www.junper-affiliate.com/setup.zip';
            } else {
                document.location = h;
            }
        }
    }, 300);
}
</script>
Then:
<a onclick="test1()">TEST 1</a>
<a onclick="test2('juniper:open')">TEST 2</a>
EDIT
Following suggestions in comments:
function goto(url, fallback) {
    var script = document.createElement('script');
    script.onload = function() {
        document.location = url;
    };
    script.onerror = function() {
        document.location = fallback;
    };
    script.setAttribute('src', url);
    document.getElementsByTagName('head')[0].appendChild(script);
}
and
<a onclick="goto('juniper:open', 'https://www.junper-affiliate.com/setup.zip')">TEST 2</a>
The price you have to pay is a duplicated request for the page.
EDIT
This is a good workaround for the same-origin policy, which prevents an async XMLHttpRequest version from working properly, since SOP restricts cross-domain requests to http and juniper:open would therefore always fail.
function goto(url, fallback) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.open('GET', url, false);
    try {
        xmlhttp.send(null); // Send the request now, synchronously
    } catch (e) {
        document.location = fallback;
        return;
    }
    // Fall back if the request did not return 200 OK
    if (xmlhttp.status === 200) {
        document.location = url;
    } else {
        document.location = fallback;
    }
}
EDIT
The initial solution below doesn't actually work, because no exception is thrown if the protocol is not supported.
try {
    document.location = 'juniper:open';
} catch (e) {
    document.location = 'https://www.junper-affiliate.com/setup.zip';
}
After searching a lot I have not come across anything that solves this, so this is what I am building now (a rough browser-side sketch follows the list):
1) Write a tiny web server that runs on the PC 24/7, starting on boot and restarting on crash. Native Python will do:
python -m SimpleHTTPServer [port]
2) The tiny web server will listen on an unusual TCP port such as 10001, one that never gets used by anything else.
3) From the browser we communicate with http://localhost:10001 just to get a reply.
4) Based on that reply we decide whether the client is installed.
Otherwise there seems to be no way available.
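A rough sketch of the browser-side check, assuming the helper ships a tiny local server on port 10001 that answers GET requests with an Access-Control-Allow-Origin header (the port, the /ping path, and the callback names are made up for illustration):

function detectHelper(onInstalled, onMissing) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://localhost:10001/ping', true);
    xhr.timeout = 1000;                          // don't hang if the port is filtered
    xhr.onload = function() { onInstalled(); };  // the local server answered
    xhr.onerror = function() { onMissing(); };   // nothing listening (or no CORS header)
    xhr.ontimeout = function() { onMissing(); };
    xhr.send(null);
}

detectHelper(
    function() { document.location = 'juniper:open'; },
    function() {
        if (confirm('Missing. Download it now?')) {
            document.location = 'https://www.junper-affiliate.com/setup.zip';
        }
    }
);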
EDIT: Or you can do it this way: InnoSetup - Is there any way to manually create cookie for Internet explorer?

Handling the browser's window / tab close event in javascript [duplicate]

This is the code I used for window.onbeforeunload:
<head>
<script>
window.onbeforeunload = func;
function func()
{
    var request = new XMLHttpRequest();
    request.open("POST", "exit.php", true);
    request.onreadystatechange = stateChanged;
    request.send(null);
}
function stateChanged()
{
    if (request.readyState == 4 || request.readyState == "complete")
        alert("Succes!");
}
</script>
</head>
This works with IE and Mozilla but does not work with Chrome. Please help.
Thanks in advance.
It seems that the only thing you can do with onbeforeunload in recent versions of Chrome is to set the warning message.
window.onbeforeunload = function () {
    return "Are you sure";
};
This will work. Other code in the function seems to be ignored by Chrome.
UPDATE: As of Chrome V51, the returned string will be ignored and a default message shown instead.
I know I'm late to this, but I was scratching my head wondering why my custom beforeunload message wasn't working in Chrome, and was reading this. So in case anyone else does the same: from version 51 onwards, Chrome no longer supports custom messages on beforeunload. Apparently the feature had been misused by various scams. Instead you get a predefined Chrome message, which may or may not suit your purposes. More details at:
https://developers.google.com/web/updates/2016/04/chrome-51-deprecations?hl=en#remove-custom-messages-in-onbeforeload-dialogs
Personally I don't think the message they've chosen is a great one, since it mentions leaving the site, and one of the most common legitimate uses for onbeforeunload is dirty-flag checking on a web form. Much of the time the user will still be on your site and will merely have clicked the cancel or reload button by mistake.
You should try this:
window.onbeforeunload = function(e) {
    e.returnValue = 'onbeforeunload';
    return 'onbeforeunload';
};
This works on the latest Chrome. We had the same issue; setting e.returnValue to 'onbeforeunload' solved my problem.
Your code should be like this:
<head>
<script>
window.onbeforeunload = function(e)
{
    e.returnValue = 'onbeforeunload';
    func();
    return 'onbeforeunload';
};
function func()
{
    var request = new XMLHttpRequest();
    request.open("POST", "exit.php", true);
    request.onreadystatechange = stateChanged;
    request.send(null);
}
function stateChanged()
{
    if (request.readyState == 4 || request.readyState == "complete")
        alert("Succes!");
}
</script>
</head>
Confirmed this behavior on Chrome 21.0.1180.79.
This seems to work with the same restrictions as XSS: if you are refreshing the page or opening a page on the same domain+port, the script is executed; otherwise it will only be executed if you return a string (or similar), and a dialog will be shown asking the user whether they want to leave or stay on the page.
This is an incredibly stupid thing to do, because onunload/onbeforeunload are not only used to ask about or prevent page changes.
In my case I was using it to save some changes made during page editing, and I did not want to prevent the user from changing the page (at the very least Chrome should respect a returned true, or change the page without asking if the return value is not a string); script running-time restrictions would be enough.
This is especially annoying in Chrome because the onblur event is not sent to editing elements when unloading a page; Chrome simply ignores the current page and jumps to the next one. So the only chance of saving the changes was the unload process, and now it can't be done without the STUPID question asking whether the user wants to change pages... of course they do, and I didn't want to prevent that.
I hope Chrome resolves this in a more elegant way soon.
Try this, it worked for me:
window.onbeforeunload = function(event) {
    event.returnValue = "Write something clever here..";
};
Try this. I've tried it and it works. Interestingly, the Succes message doesn't need confirmation like the other message does.
window.onbeforeunload = function()
{
    if ( window.XMLHttpRequest )
    {
        console.log("before"); //alert("before");
        var request = new XMLHttpRequest();
        request.open("POST", "exit.php", true);
        request.onreadystatechange = function () {
            if ( request.readyState == 4 && request.status == 200 )
            {
                console.log("Succes!"); //alert("Succes!");
            }
        };
        request.send();
    }
}
None of the above worked for me. I was sending a message from the content script to the background script in the beforeunload event handler. What did work was setting persistent to true (in fact you can just remove the line altogether) in the manifest:
"background": {
"scripts": [
"background.js"
],
"persistent": true
},
The logic is explained at this SO question here.
Current versions of Chrome require setting the event's returnValue property. Simply returning a string from the event handler won't trigger the alert.
addEventListener('beforeunload', function(event) {
    event.returnValue = 'You have unsaved changes.';
});
I'm running Chrome on macOS High Sierra and have an Angular 6 project in which I handle the window.beforeunload and window.onbeforeunload events. You can do that; this worked for me:
handleUnload(event) {
    // Chrome
    event.returnValue = true;
}
It shows me an error when I try to put a string in event.returnValue; it wants a boolean.
I don't know if it allows custom messages to be displayed in the browser.
<script type="text/javascript">
window.addEventListener("beforeunload", function(e) {
e.preventDefault(); // firefox
e.returnValue = ''; // Chrome
});
</script>

Using gBrowser and observerService to find a tab on startup

I'm trying to create a Firefox addon that will look for a certain page on startup and grab some info from it. I'm having trouble finding the page at load time. Here's what I have so far:
var myfancyaddon = {
    onLoad: function() {
        var observerService = Components.classes["@mozilla.org/observer-service;1"].getService(Components.interfaces.nsIObserverService);
        observerService.addObserver(function restored() {
            observerService.removeObserver(restored, "sessionstore-windows-restored");
            var browser = myfancyaddon.findMySite();
            if (browser) {
                alert("tree falling in the woods"); // THIS LINE NEVER RUNS
                browser.contentWindow.addEventListener("load", function tab_loaded(){
                    browser.contentWindow.removeEventListener("load", tab_loaded, false);
                    alert("mysite loaded!");
                }, false);
            }
        }, "sessionstore-windows-restored", false);
    },
    findMySite: function() {
        var browsers = gBrowser.browsers;
        for ( var i = 0; i < browsers.length; i++ ) {
            var browser = browsers[i];
            if (!browser.currentURI.spec) continue;
            if ( browser.currentURI.spec.match('^https?://(www\\.)?mysite\\.com/') ) return browser;
        }
        return null;
    }
};
window.addEventListener("load", function ff_loaded(){
    window.removeEventListener("load", ff_loaded, false); //remove listener, no longer needed
    myfancyaddon.onLoad();
}, false);
After some investigation, it seems currentURI.spec is "about:blank" for a short time before it becomes mysite.com. Any ideas?
Instead of filtering first and then adding the load listener, you could use gBrowser.addEventListener("DOMContentLoaded", myfunction, false); to listen for page loads on all tab documents and then only run your code based on the url.
https://developer.mozilla.org/en/XUL_School/Intercepting_Page_Loads
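A minimal sketch of that approach, reusing the URL test from findMySite (myfancyaddon.onSiteLoaded is a hypothetical handler, not part of the original code):

gBrowser.addEventListener("DOMContentLoaded", function(event) {
    // event.originalTarget is the content document that just finished loading
    var doc = event.originalTarget;
    if (doc.location && /^https?:\/\/(www\.)?mysite\.com\//.test(doc.location.href)) {
        myfancyaddon.onSiteLoaded(doc); // hypothetical handler for the matched page
    }
}, false);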
The "sessionstore-windows-restored" notification is sent when the tabs from the previous session have been restored and the loading in these tabs has been started (sometimes: "Don't load tabs until selected" option means that the load isn't even started in the background tabs). But the location of these tabs is still about:blank until the server is contacted because the address loaded might redirect or the server might be unreachable (meaning an internal redirect to about:neterror). Firefox only changes browser location when content is definitely being served from the new location.
It should be indeed better to intercept page loads rather than waiting for session restore.

onBeforeUnload with ajax does not work with IE

I'm just wondering why this is not working with IE. It works fine with Chrome and Firefox.
window.onbeforeunload = function()
{
    fetch("http://"+window.location.hostname+"/process_sc.php?cC=" + 1);
}
function fetch(url) {
    var x = (window.ActiveXObject) ? new ActiveXObject('Microsoft.XMLHTTP') : new XMLHttpRequest();
    x.open("GET", url, false);
    x.send(null);
}
How can you tell it isn't working?
In general, there's little time between the beforeunload event, the unload event, and the actual page exit. At page unload all running scripts are dropped (the browser then closes the window or navigates to the address provided by the user, for example).
What might be happening here is that the browser doesn't really have time to send the AJAX request before the page is unloaded.
I've seen a couple of ways to ensure your final request before page unload is completed. One of them is to send the request and then introduce a loop that runs for X number of milliseconds, postponing the unload event and ensuring the AJAX request can be completed.
window.onbeforeunload = function() {
    fetch("http://"+window.location.hostname+"/process_sc.php?cC=" + 1);
    // here we force the browser to wait 300 milliseconds before proceeding with unload
    var t = Date.now() + 300;
    while (Date.now() < t) {};
}
The problem is that you use a GET instead of a POST request. The browser may use a cached result from the very first request.
This explains the fact that "it works only the first time I open IE", as written in response to another answer.
By the way, the AJAX call in onunload seems to work reliably in IE10 only if you use the parameter async = false in XMLHttpRequest.open(...), which you already did.
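If caching is the culprit, a hedged fix (not from the original answers) is to switch the helper to POST, or keep GET but append a throwaway query parameter so IE cannot reuse the cached response. A variant of the fetch helper from the question:

function fetch(url) {
    var x = (window.ActiveXObject) ? new ActiveXObject('Microsoft.XMLHTTP') : new XMLHttpRequest();
    // cache buster: a unique value in the query string defeats IE's AJAX cache
    var bust = (url.indexOf('?') === -1 ? '?' : '&') + '_=' + new Date().getTime();
    x.open("GET", url + bust, false);
    x.send(null);
}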

Ajax Request Not Loading New Data

My application uses polling to update the status of a music player. I'm using setInterval to make an AJAX call every half second to do this. It works in many browsers (Chrome, Firefox, Safari...) except the Nook Color's browser. When the page loads it shows the correct information, but after that it always loads the same information. This was confirmed using alert. Here's the original code:
function getStatus() {
    var request = new XMLHttpRequest();
    request.open("GET", SOME_URL, true);
    request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200)
            updateStatus(request.responseText);
    };
    request.send();
}
setInterval(getStatus, 500);
Any ideas why it always loads the same info (the info it fetches initially)?
Also: it only loads the most current information if you clear the cache. This Nook was rooted and also had Firefox installed, which worked just fine. It's the Nook's native browser that does this (rooted or unrooted).
Internet Explorer has a weird quirk where it caches AJAX content. I imagine you are seeing the same issue in the Nook browser. The solution is to add a "cache buster" parameter, which is basically just a random parameter so the URL is treated as fresh:
"SOME_URL?random=" + Math.random()
