How to prevent Firefox from making a specific request to a URL, e.g. site.com/ajax/something.php?
I have found a lot of add-ons, but they couldn't really do the job: they can block requests to another domain, but not to an absolute URI.
Is there any way to accomplish this?
The solution with add-ons is to use Adblock Plus -> Filter preferences -> Add filter, then simply put the URL after ||, e.g. ||facebook.com/ajax/mercury/change_read_status.php, which will prevent any requests to that URL. Programmatically, the link in #Noitidart's answer is a perfect solution; I was looking for that too.
Here you go man: firefox extension: intercepting url it is requesting and blocking conditionally
That uses the observer service. Ideally you want to use nsIContentPolicy, which I think is more performant, but I don't have a solution with that to share. The Adblock Plus author is on this forum; he may be able to give us a solution I can spam. :P
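For reference, the linked answer's approach boils down to an observer on "http-on-modify-request" that cancels matching channels. A rough sketch (the URL substring is just the one from the question, and this only applies to a legacy, non-WebExtension add-on):

  Components.utils.import("resource://gre/modules/Services.jsm");

  var blocker = {
    observe: function (subject, topic, data) {
      if (topic !== "http-on-modify-request") return;
      var channel = subject.QueryInterface(Components.interfaces.nsIHttpChannel);
      // cancel only the one URL we want to block
      if (channel.URI.spec.indexOf("site.com/ajax/something.php") !== -1) {
        channel.cancel(Components.results.NS_BINDING_ABORTED);
      }
    }
  };

  Services.obs.addObserver(blocker, "http-on-modify-request", false);

Remember to remove the observer (Services.obs.removeObserver) when the add-on shuts down.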
I am trying to get the raw response body inside a Web Extension, using Firefox 55.0.3.
Only "solutions" I have seen for now:
Repeat the request (I absolutely don't want to repeat the request)
Use JavaScript to get the innerHTML attribute of HTML tags such as head and body (tell me if I'm wrong, but with a solution like that I will not always have the whole content; for example, I will get nothing in the case of a response without HTML. So it will never be the real raw response, and in some cases it will simply not work.)
Also, I saw this answer for Chrome (from 2015) using the debugger, but I wasn't able to do it with Firefox. This kind of solution is interesting; I read the Mozilla documentation about devtools, but I didn't find a way of using the network tab of the devtools interface with JavaScript inside a Web Extension.
To give you more details, my goal is to intercept the full request and response from the server (headers and body). This is not a problem, except for the response body.
Here is an example of code to get the request body (background script):
browser.webRequest.onBeforeRequest.addListener(
  function (e) {
    console.log(e);
  },
  {urls: ["http://*/*", "https://*/*"]},
  ["requestBody"]
)
Here is some documentation that I used (there is more, but these links are all official):
Mozilla documentation about Web Extension
Intercept HTTP requests
webRequest
webRequest.onHeadersReceived
webRequest.onBeforeRequest
webRequest.onBeforeSendHeaders
Here are some examples of Web Extensions.
Any ideas, solutions or even explanations of "why this is not possible" are welcome. Thank you in advance for your time!
Cheers++
This is now available, as of Firefox 57:
browser.webRequest.filterResponseData allows you to add a listener via browser.webRequest.onBeforeRequest which receives, and allows you to modify, the response.
You can see an example in Mozilla's webextensions-examples repo on GitHub.
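For example, a minimal sketch of reading (and passing through) a response body with the new API might look like this; the URL pattern and UTF-8 decoding are assumptions on my part, and the extension needs the webRequest and webRequestBlocking permissions:

  browser.webRequest.onBeforeRequest.addListener(
    function (details) {
      var filter = browser.webRequest.filterResponseData(details.requestId);
      var decoder = new TextDecoder("utf-8");
      var body = "";

      filter.ondata = function (event) {
        // event.data is an ArrayBuffer holding one chunk of the response body
        body += decoder.decode(event.data, { stream: true });
        filter.write(event.data); // pass the chunk through so the page still loads
      };

      filter.onstop = function () {
        console.log("raw response body for", details.url, body);
        filter.close();
      };
    },
    { urls: ["https://*/*"] },
    ["blocking"]
  );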
Firefox 57 is going to provide the API browser.webRequest.filterResponseData. This doesn't seem to be documented yet, but you can look through bug 1255894 for details.
Why is this not possible?
For the simple reason that WebRequest was ported over from Chrome extensions, where this is explicitly impossible.
Requests for such functionality (to edit, or just to read) have been around for a very long time (since 2011 and 2015, respectively); they are challenging from both the security and the technical perspective. However, there is agreement in principle that read access is a good idea.
That said, it's simply not yet implemented. Rob W has been doing some work in this direction, but it's not done yet.
Perhaps Firefox has a different implementation?
A cursory glance at the Mozilla bug tracker doesn't turn up any bugs about providing this functionality. So it's not likely that the implementation will diverge anytime soon.
Any workarounds?
Well, only debugger-level access can touch actual response data.
Since the debugger API is not implemented in the WebExtension platform, only an extension using devtools.network can access it, and only while the Dev Tools are open for the tab making said request, which is the main limitation of the devtools.* APIs.
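For completeness, a rough sketch of that devtools-based workaround (run from a page declared as "devtools_page" in the manifest; I'm assuming the promise-based form of getContent() here, and the exact shape of what it resolves with may differ between versions):

  // devtools.js - only receives events while the Dev Tools are open for the tab
  browser.devtools.network.onRequestFinished.addListener(function (request) {
    request.getContent().then(function (content) {
      // "request" is a HAR entry; "content" carries the response body
      console.log(request.request.url, content);
    });
  });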
I'm working on a Chrome extension that reads the domain from window.location.hostname. Now for this extension to work properly, I need to be able to separate subdomains and other URL variations of the same host. Example:
I need all of the following URLs:
www.google.com
accounts.google.com
photos.google.se
example.google.co.uk
https://google.com
All of these need to be resolved to, in this case, "google", in a way that is reliable and will work for any website, including those with sometimes quirky subdomain configurations.
This is my current approach, somewhat simplified:
var url = window.location.hostname.split(".");  // returns an array of strings
for (var i = 0; i < url.length; i++) {
  if (url[i].match(domainregex)) {  // regex for identifying domains ".com", ".se", ".co.uk" etc.
    return url[i - 1];  // usually what I'm after is directly before the domain, thus i-1
  }
}
This approach is a lot of hassle, and has proven unreliable at times... Is there any more straightforward way of doing this?
A more reliable solution to strip the top-level domain part and get the main domain part is to use the Public Suffix List, which is used by Firefox, Chrome and other browsers.
Several JS parsers of the list data are available if you don't want to write your own.
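For instance, with the psl package (one such parser; the package name and exact API are from memory, so treat this as a sketch):

  var psl = require("psl");

  var parsed = psl.parse("example.google.co.uk");
  console.log(parsed.tld);    // "co.uk"  - the public suffix, so quirky cases are handled
  console.log(parsed.domain); // "google.co.uk" - the registrable domain
  console.log(parsed.sld);    // "google" - the part the question is after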
I had to do it for my fork of edit-my-cookies, so it would be able to change the cookie profile per site. (https://github.com/AminaG/swap-my-cookies-multisite/blob/master/js/tools.js)
This is what I did, and it is working for me. I am not sure if it is a complete solution, but I am sure it can help.
var remove_sub_domain = function (v) {
  var is_co = v.match(/\.co\./);   // special-case two-part suffixes like .co.uk
  v = v.split('.');
  v = v.slice(is_co ? -3 : -2);    // keep the last 3 parts for .co.* hosts, otherwise the last 2
  v = v.join('.');
  console.log(v);
  return v;
};
It is working for:
www.google.com
accounts.google.com
photos.google.se
example.google.co.uk
google.com
If you want it to also work for:
http://google.com
You first need to remove the protocol:
var parser = document.createElement('a'); // let the browser parse the URL
parser.href = url;
var host = parser.host;
var newurl = remove_sub_domain(host);
I want to create a list of links that open their targets in new tabs from my private page, and I don't want the referring URL to be passed on.
I tried the following method, but it didn't solve the problem:
<script>
  function op(url) {
    // strip any tags from the link text before opening it in a new tab
    window.open(url.replace(/<(?:.|\n)*?>/gm, ''), '_newtab');
  }
</script>
<span onclick="javascript:op(this.innerHTML);">http://www.google.com</span>
Is there any way to spoof or blank the referrer? In the worst case I might create an iframe and put the page with links on some free hosting, but I'd prefer a more elegant solution. The only requirement is that it should work in Chrome, Opera, IE and FF (2011+ versions); accessibility is not an issue, since it'll be used by very few users I know.
The referring URL is part of the HTTP protocol, not the mark-up. You can't change this.
Also, you never need to specify javascript: in an event handler. It always is, and can only be, JavaScript.
There is a rel="noreferrer" attribute, which is not yet supported by Firefox...
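For browsers that do support it, the markup would simply be:

  <a href="http://www.google.com" rel="noreferrer" target="_blank">http://www.google.com</a>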
See also https://stackoverflow.com/a/8957778/22470
Create a tiny app on Heroku that receives a URL then forwards the user.
You could redirect to an intermediate page that redirects to the final website; this would hide the true referrer.
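A very small sketch of such an intermediate page (the file name and query parameter are made up; depending on the browser, the final site then sees either this page or no referrer at all):

  // redirect.html is opened as redirect.html?to=<encoded target URL>
  var match = /[?&]to=([^&]+)/.exec(window.location.search);
  if (match) {
    window.location.replace(decodeURIComponent(match[1]));
  }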
It seems the easiest is the dirty iframe way.
I have an HTML page on my localhost - get_description.html.
The snippet below is part of the code:
<input type="text" id="url"/>
<button id="get_description_button">Get description</button>
<iframe id="description_container" src="#"/>
When the button is clicked, the src of the iframe is set to the URL entered in the textbox. The pages fetched this way are very big, with lots of linked files. What I am interested in on the page is a block of text contained in a <div id="description"> element.
Is there a way to prevent the downloading of resources linked in the page that loads into the iframe?
I don't want to use curl because the data is only available to logged-in users, and the steps to take with curl to get the content are too complicated. The iframe is simple, as I use this on a box which sends the right cookies to identify the request as coming from a logged-in user, but the problem is that it is very wasteful to get nearly 1 MB of data to keep 1 KB of it and throw out the rest.
Edit
If the proposed method only works in Firefox, that is fine, so I added the Firefox tag. Also, it is possible that the answer actually comes from the realm of Firefox add-on techniques, so I added that tag as well.
The problem is not that I cannot get at what I'm looking for; rather, the problem is that the easy iframe method is wasteful.
I know that Firefox does allow loading only the text of a page. If you open a page and press Ctrl+U you are taken to the 'view page source' window. There, links behave as normal and are clickable; if you click on a link in source view, the source of the new page is loaded into the view-source window without the linked resources being downloaded, which is exactly what I'm trying to get. But I don't know how to access this behaviour.
Another example is the Adblock add-on. It somehow kills elements before they get loaded. With plain JavaScript this is not possible, because it is only triggered too late to intervene in good time.
The Same Origin Policy forbids any web page from accessing the contents of any other web page in a different domain, so basically you cannot do that.
However, it seems that some browsers allow access to a web page's content if you are trying to access it from a local web page, which seems to be your case.
Safari and IE 6/7/8 are browsers that allow a local web page to do so via XMLHttpRequest (source: Google Browser Security Handbook), so you may want to use one of those browsers to do what you need (note that future versions of those browsers may no longer allow this).
Apart from this solution I only see two possibilities:
If the web pages you need to fetch content from are somehow controlled by you, you can create a simpler interface to let other web pages get the content you need (for example by allowing JSONP requests; a rough sketch follows this list).
If the web pages you need to fetch content from are not controlled by you, the only solution I see is to fetch the content server-side, logging in from the server directly (I know that you don't want to do so, but I don't see any other possibility if the ones I mentioned previously are not practicable).
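To illustrate the first possibility, a rough JSONP sketch (the endpoint URL and callback name are made up for illustration):

  // The controlled site returns JavaScript wrapping the data in the callback:
  //   GET http://controlled-site.example/description.js?callback=showDescription
  //   -> showDescription({"description": "..."});

  function showDescription(data) {
    // "data" carries just the text of the remote <div id="description">
    console.log(data.description);
  }

  var script = document.createElement("script");
  script.src = "http://controlled-site.example/description.js?callback=showDescription";
  document.body.appendChild(script);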
Hope it helps.
Actually, I've seen cross-domain jQuery .load requests before, here: http://james.padolsey.com/javascript/cross-domain-requests-with-jquery/
The author claims that code like this, found on that page,
$('#container').load('http://google.com'); // SERIOUSLY!

$.ajax({
  url: 'http://news.bbc.co.uk',
  type: 'GET',
  success: function(res) {
    var headline = $(res.responseText).find('a.tsh').text();
    alert(headline);
  }
});
// Works with $.get too!
would work (the BBC code might not work because of the recent redesign, but you get the idea).
Apparently it uses YQL wrapped in a jQuery plugin to do the trick. Now, I cannot say I fully understand what he is doing there, but it appears to work and fits the bill. Once you load the data, I suppose it is a simple matter of filtering out the data that you need.
If you prefer something that works at the browser level, may I suggest Mozilla's Jetpack framework for lightweight extensions. I've not yet read the documentation in its entirety, but it should contain the APIs needed for this to work.
There are various ways to go about this with AJAX; I'm going to show the jQuery way for brevity as one option, though you could do this in vanilla JavaScript as well.
Instead of an <iframe> you can just use a container, let's say a <div> like this:
<div id="description_container"></div>
Then to load it:
$(function() {
  $("#get_description_button").click(function() {
    // load only the #description element from the page at the URL in the textbox
    $("#description_container").load($("input").val() + " #description");
  });
});
This uses the .load() method, which takes a string in this format: .load("url selector"). It then takes that element in the page and places its content inside the container you're loading into, in this case #description_container.
This is just the jQuery route, mainly to illustrate that yes, you can do what you want, but you don't have to do it exactly like this; the concept is getting what you want from an AJAX request rather than from an <iframe>.
Your description sounds like you are fetching pages from the same domain (you said that you need to be logged in and have session credentials), so have you tried to use an async request via XMLHttpRequest? It might complain if the HTML on a page is particularly messed up, but you should still be able to get the raw text via .responseText and extract what you need with a regex.
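Something along these lines (the page URL is a placeholder; the regex is crude and stops at the first closing </div>):

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/some/page.html", true); // async; same-origin cookies are sent automatically
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var match = /<div id="description">([\s\S]*?)<\/div>/.exec(xhr.responseText);
      if (match) {
        console.log(match[1]); // the contents of the description block
      }
    }
  };
  xhr.send();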
Has anyone else had any problems using Google's Domain Tracking API? I am specifically talking about the _link() method.
The documentation is here
The example provided shows that the _link() method should be used in the onclick event like this:
<a href="http://www.sister-site.com/" onclick="pageTracker._link(this.href); return false;">Go to our sister site</a>
However, this essentially just makes the link...do nothing (most probably because of the 'return false').
My understanding is that the pageTracker._link() method is 'supposed' to add additional parameters to the URL and do its own document.location-style redirect.
Any ideas / catches / previous posts??
Sorry for the obvious question, but did you enable linking on the target page:
You must also enable linking on the target site (pageTracker._setAllowLinker(true);) in order for link to work properly.
Apparently a misinterpretation of the documentation:
You must also enable linking on the target site
So let's also clarify:
pageTracker._setAllowLinker(true); is set on the ORIGINATING page
pageTracker._setAllowLinker(true); is set on the TARGET page
I only had it enabled on the target page, as the docs indicate.
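In other words, a rough sketch of the legacy ga.js setup (the tracker ID and URL are placeholders):

  // On BOTH the originating and the target page:
  var pageTracker = _gat._getTracker("UA-XXXXXX-X");
  pageTracker._setAllowLinker(true);
  pageTracker._trackPageview();

  // The link on the originating page then calls _link() from its onclick handler:
  //   <a href="http://www.sister-site.com/"
  //      onclick="pageTracker._link(this.href); return false;">Go to our sister site</a>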