Firefox Extension: Stopping the page load when a suspicious URL is found - javascript

I am working on a simple Firefox extension that tracks the requested URL and calls a web service in the background to detect whether the URL is suspicious or not. Based on the result returned by the service, the extension decides whether to stop the page load and alert the user about possible forgery or whatever; if the user still wishes to go to that page, he can be redirected to the page he originally requested.
I have added an http-on-modify-request observer:
var observerService = Components.classes["@mozilla.org/observer-service;1"]
                                .getService(Components.interfaces.nsIObserverService);
observerService.addObserver(requestObserverListener.observe, "http-on-modify-request", false);
and the observer
var requestObserverListener = {
    observe: function (subject, topic, data) {
        //alert("Inside observe");
        if (topic == "http-on-modify-request") {
            subject.QueryInterface(Components.interfaces.nsIHttpChannel);
            var url = subject.URI.spec; // url being requested. you might want this for something else
            //alert("inside modify request");
            var urlbarvalue = document.getElementById("urlbar").value;
            urlbarvalue = processUrl(urlbarvalue, url);
            //alert("url bar: " + urlbarvalue);
            //alert("url: " + url);
            document.getElementById("urlbar").style.backgroundColor = "white";
            if (urlbarvalue == url && url != "") {
                var browser = getWindowForRequest(subject);
                if (browser != null) {
                    //alert("" + browser.contentDocument.body.innerHTML);
                    alert("inside browser: " + url);
                    getXmlHttpRequest(url);
                }
            }
        }
    },
}
So when the URL in the URL bar and the requested URL match, the REST service is called via AJAX through the getXmlHttpRequest(url) method.
Now when I run this extension the call to the service is made, but the page finishes loading before the service returns any response. That is not acceptable, because the user might enter his credentials in the meantime and be compromised.
I want to first display a warning message in the browser tab, and if the user still wants to visit the page, he can then be redirected to it by clicking a link in the warning message.

I haven't tried this code out, so I'm not sure that suspend and resume will work well, but here's what I would try. You're working with an nsIRequest object as your subject, so you can call subject.suspend() on it. From there, use callbacks from your XHR call to either cancel() or resume() the nsIRequest.
Here's the relevant (untested) snippet of code. My XHR assumes some kind of promise .then() return, but hopefully you understand the intention:
if (urlbarvalue == url && url != "") {
    var browser = getWindowForRequest(subject);
    if (browser != null) {
        // suspend the pending request
        subject.suspend();
        getXmlHttpRequest(url).then(
            function success() { subject.resume(); },
            function failure() { subject.cancel(Components.results.NS_BINDING_ABORTED); });
    }
}
Just some fair warning that you actually don't want to implement an add-on in this way.
It's going to be extremely slow to do a remote call for every HTTP request. The Safe Browsing module makes a single call to download a database of sites considered 'unsafe', and can then quickly check each requested page against that local database, so it doesn't have to make an individual call every time.
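To illustrate that idea only (this is not the actual Safe Browsing code): a minimal sketch where a hypothetical fetchUnsafeHostList() downloads the list once and every later check is a local lookup.
// Hypothetical sketch: download the unsafe-host list once, then check locally.
var unsafeHosts = {};   // used as a set of known-bad hostnames

function fetchUnsafeHostList() {   // assumed helper, call it at startup or on a timer
    // ...fetch the list from your service and fill unsafeHosts...
    // unsafeHosts["evil.example.com"] = true;
}

function isSuspicious(url) {
    var host = Components.classes["@mozilla.org/network/io-service;1"]
                         .getService(Components.interfaces.nsIIOService)
                         .newURI(url, null, null).host;
    return unsafeHosts.hasOwnProperty(host);   // local lookup, no network round trip
}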
Here's some more info on this kind of intercepting worth reading: https://developer.mozilla.org/en-US/docs/XUL/School_tutorial/Intercepting_Page_Loads#HTTP_Observers
Also I'd worry that your XHR request will actually loop, because an XHR call creates an http-on-modify-request event of its own, so your code might end up checking its own validation request before it can check the current URL. You probably want a safety check for your URL-checking domain.
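For example, a minimal guard at the top of the observer, assuming the checking service lives at a hypothetical checker.example.com:
// Hypothetical guard: ignore requests going to the URL-checking service itself,
// otherwise the observer fires again for our own XHR and loops.
if (subject.URI.host == "checker.example.com") {
    return;
}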
And here's another Stack Overflow question similar to yours that might be useful: How to block HTTP request on a particular tab?
Good luck!

Related

This site can't be reached (ERR_CONNECTION_CLOSED) javascript detection for window.open

I have this piece of JavaScript code doing my click-outs, and it should enable correct click-out tracking. The clickDestinations are all different, and there are many (cross-domain).
var response = window.open(clickDestination, randomName);
if (typeof response.focus === 'function') {
    alert('tracking this click-out');
}
The problem with this implementation is that the clickDestination values were supplied by users and some of them are very old, so there is no guarantee that the http or https protocol is set correctly.
When window.open is called with the wrong protocol, e.g. with https on sites where https is not supported, I get the "This site can't be reached" page (ERR_CONNECTION_CLOSED). But my tracker tracks anyway, since var response is a window object.
Any ideas how I can detect the mistake and not track in this case?
First idea, valid if the URL is on the same domain (the same-origin policy applies here):
var w = window.open(url);
// if window opened successfully
if (w) {
    w.onload = function() {
        alert('tracking this click-out');
    };
}
Second idea:
window.open returns a reference to the newly created window.
If the call failed, it will be null instead. Ref.
So if the connection fails because the server at the specified URL does not support https (or http), null will be returned, and you can use this information to skip your tracking code.
Example (not tested):
var response = window.open(clickDestination, randomName);
// if the destination cannot be opened, skip the tracking code
if (!response) {
    return;
}
if (typeof response.focus === 'function') {
    alert('tracking this click-out');
}

Ajax Request in Progress suddenly started returning status = 0

I have a web project in PHP and it accesses a Java Project that uses the Restlet Framework. The web project is running on Apache and I am testing it using localhost. The Restlet Framework also uses localhost as the domain, but the url is slightly different: localhost:8888/
This is the Javascript that, using Ajax, makes a call to one of the Java classes (CollectionPublic) using the URL above.
var url = "<?php echo $config['restServer_url'] ?>collectionPublic";
var params = "pageList=" + facebookPages + "&time=" + time;
var client = new XMLHttpRequest();
client.open("POST", url, true);
client.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
client.onreadystatechange = function () {
    if (client.readyState != 4) return;
    if (client.status != 200 && client.status != 304) {
        alert("error " + client.status);
    } else {
        alert("success");
    }
    callback(client);
}
if (client.readyState == 4) return;
client.send(params);
I have tested and the call is being made correctly, using the URL localhost:8888/collectionPublic, and it is reaching the CollectionPublic class (the class is working fine).
The PROBLEM is: when this call is made, the CollectionPublic class takes a long time to complete its task, and the user should be able to access other pages (on the same server) or reload the page. However, when either of these things happens, the alert("error " + client.status) pops up and the value of client.status is 0. The call is then aborted, but CollectionPublic's task continues normally, and when it finishes, nothing happens on the web page (before, the alert("success") was being fired).
I spent hours trying to figure out what was causing the error, since this was working last week. Most of the posts I found said that it could be a Cross-Origin Resource problem, since localhost and localhost:8888 are not considered as the same domain. To see if that was really the problem, I started Chrome using the --disable-web-security argument (and it was really disabled) but the issue was still there.
The weirdest thing is that it has worked before, and I changed absolutely NOTHING in the code.
I have seen this post Reloading page while an Ajax request in progress gives empty response and status as zero and it seems quite similar to what I am facing.
Hopefully, I have made myself clear, but if you have any doubts regarding this issue, just ask.
Thanks a lot in advance.
I'm not convinced that the AJAX request itself is quite right. if (client.readyState != 4) return; will return early in every case except when readyState is actually 4. This may be better:
client.onreadystatechange = function () {
    if (client.readyState < 4) {
        // not complete yet
        return;
    }
    if (client.status != 200 && client.status != 304) {
        // an error
        alert("error " + client.status);
        return;
    }
    if (client.readyState === 4) {
        // complete
        callback(client);
    }
}
As for the problem whereby the AJAX call is aborted: this is correct behaviour. All XHR calls will be aborted by the browser as soon as the page is reloaded or unloaded. Perhaps this was somehow not the case when viewing pages locally. I would not allow the user to navigate away (or reload) while the AJAX request is in progress. As a work-around, your class could set a session variable that is read by your page.
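A minimal sketch of that work-around on the page side, assuming a hypothetical collectionPublicStatus endpoint that reads the session variable and reports whether the long-running task has finished:
// Hypothetical sketch: after a reload, poll a status endpoint instead of
// relying on the original (now aborted) XHR to report completion.
var statusUrl = "<?php echo $config['restServer_url'] ?>collectionPublicStatus"; // assumed endpoint

function pollStatus() {
    var poller = new XMLHttpRequest();
    poller.open("GET", statusUrl, true);
    poller.onreadystatechange = function () {
        if (poller.readyState != 4) return;
        if (poller.status == 200 && poller.responseText == "done") {
            alert("success");               // the long-running task has finished
        } else {
            setTimeout(pollStatus, 5000);   // not done yet, check again in 5 seconds
        }
    };
    poller.send();
}
pollStatus();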

The request is too large for IE to process properly

I am using WebSync3, the JavaScript API, and subscribing to approximately 9 different channels on one page. Firefox and Chrome have no problems, but IE9 is throwing an alert error stating "The request is too large for IE to process properly".
Unfortunately the internet has little to no information on this. So does anyone have any clues as to how to remedy this?
var client = fm.websync.client;
client.initialize({
    key: '********-****-****-****-************'
});
client.connect({
    autoDisconnect: true,
    onStreamFailure: function(args) {
        alert("Stream failure");
    },
    stayConnected: true
});
client.subscribe({
    channel: '/channel',
    onSuccess: function(args) {
        alert("Successfully connected to stream");
    },
    onFailure: function(args) {
        alert("Failed to connect to stream");
    },
    onSubscribersChange: function(args) {
        var change = args.change;
        for (var i = 0; i < change.clients.length; i++) {
            var changeClient = change.clients[i];
            if (change.type == 'subscribe') {
                // If someone subscribes to the channel
            } else {
                // If something unsubscribes from the channel
            }
        }
    },
    onReceive: function(args) {
        text = args.data.text;
        text = text.split("=");
        text = text[1];
        if (text != "status" && text != "dummytext") {
            //receiveUpdates(id, serial_number, args.data.text);
            var update = eval('(' + args.data.text + ')');
        }
    }
});
This error occurs when WebSync is using the JSON-P protocol for transfers. This mostly happens in IE in cross-domain environments, meaning WebSync is on a different domain than the one your webpage is served from, so IE refuses to make regular XHR requests for security reasons.
JSON-P basically encodes the up-stream data (your 9 channel subscriptions) as a URL encoded string that is tacked onto a regular request to the server. The server is supposed to interpret that URL-encoded string and send back the response as a JavaScript block that gets executed by the page.
This works fine, except that IE also has a limit on the overall request URL for an HTTP request of roughly 2kb. So if you pack too much into a single request to WebSync you might exceed this 2kb upstream limit.
The easiest solution is either to split up your WebSync requests into smaller pieces (i.e. subscribe to only a few channels at a time in JavaScript), or to subscribe to one "master channel" and then program a WebSync BeforeSubscribe event that watches for that channel and rewrites the subscription channel list.
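A rough sketch of the first option, reusing the subscribe call from the question but registering the channels one at a time rather than all nine in one go (the channel names are placeholders, and this assumes each subscribe goes out as its own, much shorter, request):
// Hypothetical sketch: subscribe to the channels individually so no single
// JSON-P request has to carry all nine channels in one over-long URL.
var channels = ['/channel1', '/channel2', '/channel3' /* ...and so on... */];

for (var i = 0; i < channels.length; i++) {
    client.subscribe({
        channel: channels[i],
        onSuccess: function(args) {
            alert("Successfully connected to stream");
        },
        onFailure: function(args) {
            alert("Failed to connect to stream");
        }
    });
}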
I suspect, because you have a key in your example source above, that you are using WebSync On-Demand? If that's the case, the only way to make a BeforeSubscribe event handler is to create a WebSync proxy.
So for the moment, since everyone else is stumped by this question as well, I put a trap in my PHP to not even load this Javascript script if the browser is Internet Destroyer (uhh, I mean Internet Explorer). Maybe a solution will come in the future though.

XDomainRequest POST with XML...what am I doing wrong?

This is (hopefully) an easy question. I have to submit a request to a web service via POST with XDomainRequest. I have found sparse documentation for this across the internet, but I refuse to believe that nobody has figured this out.
Here is my XDomainRequest code:
var testURL = 'http://localhost:4989/testendpoint';
// let us check to see if the browser is IE. If it is not, let's
if ($.browser.msie && window.XDomainRequest) {
    var xdr2 = new XDomainRequest();
    xdr2.open("POST", testURL);
    xdr2.timeout = 5000;
    xdr2.onerror = function () {
        alert('we had an error!');
    };
    xdr2.onprogress = function () {
        alert('we have some progress!');
    };
    xdr2.onload = function () {
        alert('we load the xdr!');
        var xml2 = new ActiveXObject("Microsoft.XMLDOM");
        xml2.async = true;
        xml2.loadXML(xdr2.responseText);
    };
    // what form should my request take to be sending a string for a POST request?
    xdr2.send("thisisastring");
}
My web service (WCF) takes a single parameter which, according to the web service's help page, looks like this:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">String content</string>
I've gotten this to work via other HTTP clients (mobile and desktop APIs, Fiddler) by building a string that concatenates the parameter I am trying to pass to the web service with the rest of the string serialization. For example, I have tried:
xdr2.send("thisisastring");
xdr2.send('<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">thisisastring</string>');
but the onerror handler is always tripped. I don't think it has anything to do with the WCF because:
The WCF service is always successful in every other client I call it from, and
if it were the service, the onerror method would never get tripped; it would return garbage, but it would be returning something.
When I use the console (in the dev tools in IE9) to log the responseText, it says:
LOG:undefined
So I am fairly sure that the issue is in how I use the XDomainRequest.
If anybody comes across this, I ended up converting my web services to return JSON-formatted data. Using JSON negates the need for XDomainRequest, allowing me to use the conventional jQuery AJAX tools instead.
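For anyone taking the same route, a minimal sketch of what that looks like with jQuery; the endpoint and payload shape here are placeholders, and it assumes the cross-domain issue has been resolved (for example, the page and the service now share an origin):
// Hypothetical sketch: with a JSON endpoint, plain $.ajax replaces XDomainRequest.
$.ajax({
    url: 'http://localhost:4989/testendpoint',          // placeholder endpoint
    type: 'POST',
    data: JSON.stringify({ value: 'thisisastring' }),    // placeholder payload
    contentType: 'application/json',
    dataType: 'json',
    success: function (data) {
        alert('got a JSON response');
    },
    error: function () {
        alert('we had an error!');
    }
});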

Query URL for Log In Status, javascript

This may seem like a no-brainer, but I can't find a way to do this that isn't considered a security issue (other than the obvious ways)...
So, I want to build an add-on for Firefox to use with my team. Basically it will be a status bar icon letting us know if the authentication cookie for our tools site has expired, so we can tell without losing any work currently in the browser.
At first I thought I could have the add-on check the cookie, but this seems to be a huge hassle for such a simple idea. Then it occurred to me... DUH... that I could just have the add-on try to access the main page of our site. If it gets an "Access Denied" response, it can show the icon for "Not Logged In", but if it gets anything else, it can show "Signed In".
However, all attempts to do this with AJAX are proving to be almost as difficult as my cookie attempts.
Is there a simple way, preferably with JavaScript, or otherwise in XUL, to say
var url = "http://example.com";
var response = pingURL(url, "blah");
status = (response == "Welcome!") ? "Signed In" : "Not Signed In";
where "pingURL" would be the method of "going" to the url and getting the response?
function checkAccess(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            if (xhr.status == 200) {
                callback(true);
            } else {
                callback(false);
            }
        }
    };
    xhr.send();   // the request is never made without this
}
This should work... Just call with "checkAccess('http://example.com', function(ready){});" as an example where ready is a boolean value.
Exactly why do you consider cookies a huge hassle? That would undoubtedly be faster and probably simpler to implement. Reading cookies from chrome code is simple and well-documented. Ask for help if you can't figure out how to parse the cookie.
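A minimal sketch of that cookie-based approach using the cookie manager service; the host and cookie names are placeholders for your tools site:
// Hypothetical sketch: look up the auth cookie for the tools site directly
// from chrome code and check whether it has expired.
function isSignedIn() {
    var cookieManager = Components.classes["@mozilla.org/cookiemanager;1"]
                                  .getService(Components.interfaces.nsICookieManager);
    var e = cookieManager.enumerator;
    while (e.hasMoreElements()) {
        var cookie = e.getNext().QueryInterface(Components.interfaces.nsICookie);
        if (cookie.host == "tools.example.com" && cookie.name == "authToken") {   // placeholders
            // expires is in seconds since the epoch; 0 means a session cookie
            return cookie.expires == 0 || cookie.expires * 1000 > Date.now();
        }
    }
    return false;   // no auth cookie found, treat as not signed in
}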
