Let's say I have a PHP-generated JavaScript file that contains the name, ID number, and email address of the currently logged-in user. Would a simple document.location.href lookup prevent remote sites from determining the currently logged-in user?
Would this be safe?
if (window.document.location.hostname == 'domain.com')
    var user = {
        name: 'me',
        id: 234243,
        email: 'email@email.com'
    };
else alert('Sorry, you may not request this info cross sites.');
Initially it appears safe to me.
EDIT: I had initially thought this was obvious, but I am using cookies to determine the currently logged-in user. I am just trying to prevent cross-domain access to the user's info. For example, if the if statement were removed, malicious site A could embed the JavaScript file and access the user's info. With the if statement added, the user JS object should never appear. Cross-site AJAX isn't supported, so only through script insertion could a malicious site attempt to determine the currently logged-in user.
EDIT 2: Would checking the HTTP_REFERER using PHP be safe? What if caching is also enabled for the client? For example, if the user visits my site A, where the user script is downloaded, and then later visits malicious site B, would the script be cached, thereby bypassing the need for the server to check the HTTP_REFERER?
You're basically saying "here are the keys to the bank vault, here's the guard's schedule, and here's the staff schedule. But hey, if you're not from the Acme Security Company, pretend I didn't give this to you".
"Oh, sure, no problem, lemme just pretend to shred this note and go rent a large truck to haul away your vault contents with."
You really just don't want to try something like this. Suppose I'm running an evil site; what do I do?
<script>
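// Sabotage any regex-based hostname check before the target script runs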
RegExp.prototype.test = function() { return true; };
</script>
<script src="http://yoursite.example.com/dynamicjs.php"></script>
<script>
alert("Look at the data I stole: " + user);
</script>
No, what you have there is not "safe" in that it will reveal those details to anyone requesting the HTML page containing that JavaScript. All they have to do is look at the text (including script) returned by the server.
What it comes down to is this: either you have authenticated the other end to your satisfaction, in which case you don't need the check in the JavaScript, or you haven't, in which case you don't want to output the details to the response at all. There's no purpose whatsoever to that client-side if statement. Try this: http://jsbin.com/aboze5. It'll say you can't request the data; then do a View Source, and note that you can see the data.
Instead, you need to check the origin of the request server-side and not output those details in the script at all if the origin of the request is not authenticated.
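For illustration, here is a minimal sketch of that server-side check in Node/Express (the question's stack is PHP, but the principle is identical; the route, session middleware, and session shape here are assumptions):

// Emit the user data only for an authenticated session; otherwise output nothing.
// Assumes session middleware (e.g., express-session) populates req.session.
app.get("/dynamicjs.js", function(req, res) {
    res.type("application/javascript");
    if (req.session && req.session.user) {
        res.send("var user = " + JSON.stringify(req.session.user) + ";");
    } else {
        res.status(403).send("// Not authenticated; no user data emitted.");
    }
});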
Update 1: Below you said:
I was specifically trying to determine if document.location.href could be falsified.
Yes, document.location can be falsified through shadowing the document symbol (although you might be able to detect that if you tried hard enough):
(function() {
    var document; // Shadow the symbol
    document = {
        location: {
            href: "http://example.com/foo.html"
        }
    };
    alert("document.location.href = " + document.location.href);
})();
Cross-domain checks must happen within the browser's internals; nothing at the level of your JavaScript code can do them securely and robustly.
But that really doesn't matter. Even if it couldn't be falsified, the quoted example code doesn't protect the data. By the time the client-side check is done, the data has already been sent to the client.
Update 2: You've added a note about checking the HTTP_REFERER (sic) header (yes, it really is misspelled). Sadly, no, you can't trust that. HTTP_REFERER can be spoofed, and separately it can be suppressed.
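To see why, here is a minimal sketch (with a hypothetical host and path) of a non-browser client sending whatever Referer it likes:

// Any client outside the browser can set the Referer header to an arbitrary
// value, so a server-side HTTP_REFERER check learns nothing trustworthy.
var http = require("http");
http.get({
    host: "yoursite.example.com",
    path: "/dynamicjs.php",
    headers: { "Referer": "http://yoursite.example.com/somepage" } // spoofed
}, function(res) {
    res.pipe(process.stdout); // prints the "protected" script
});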
Off-topic: You're probably already doing this, but: when transferring personal details you've promised to keep confidential (I don't know whether you have, but hopefully so), use HTTPS (SSL/TLS). But it's important to remember that while HTTPS ensures that data cannot be read in transit, it does nothing on its own to ensure that the origin of the request is authenticated. E.g., you know the conversation is secure (within reason and current practice), but you don't necessarily know who you're talking to. That's where authentication comes in.
Related
I'm making a Chrome extension that injects an iframe into a webpage and shows some stuff.
The content loaded in the iframe is from https://example.com, and I have full control over it. I'm trying to access the cookies of https://example.com from the iframe (which I think should be available) via document.cookie. This does not let me access HttpOnly-flagged cookies, and I do not know the reason. After all, this is not cross-domain. Is it?
Here is the code I'm using to get the cookies:
jQuery("#performAction").click(function(e) {
e.preventDefault();
console.log(document.domain); // https://example.com
var cookies = document.cookie;
console.log('cookies', cookies);
var httpFlaggedCookie1 = getCookie("login_sess");
var httpFlaggedCookie2 = getCookie("login_pass");
console.log('httpFlaggedCookie1 ', httpFlaggedCookie1 ); // shows blank
console.log('httpFlaggedCookie2 ', httpFlaggedCookie2 ); // shows blank
if(httpFlaggedCookie2 != "" && httpFlaggedCookie2 != ""){
doSomething();
} else{
somethingElse();
}
});
Any suggestions on what can be done about this?
By default in Chrome, HttpOnly cookies cannot be read or written from JavaScript.
However, since you're writing a Chrome extension, you can use chrome.cookies.get and chrome.cookies.set to read/write them, with the cookies permission declared in manifest.json. Be aware that chrome.cookies can only be accessed from the background page, so you may need to use Message Passing.
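A minimal sketch of that flow (the cookie name and message shape are assumptions; manifest.json is assumed to declare the cookies permission plus a host permission for https://example.com/*):

// Content script (e.g., in the iframe's page): ask the background page.
chrome.runtime.sendMessage({ wantCookie: "login_sess" }, function(response) {
    console.log("login_sess via background page:", response.value);
});

// Background page: read the HttpOnly cookie with chrome.cookies and reply.
chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) {
    if (message.wantCookie) {
        chrome.cookies.get({ url: "https://example.com", name: message.wantCookie },
            function(cookie) {
                sendResponse({ value: cookie && cookie.value }); // null if absent
            });
        return true; // keep the message channel open for the async response
    }
});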
Alright folks. I struggled mightily to make HttpOnly cookies show up in iframes after third-party cookies were deprecated. Eventually I was able to solve the issue.
Here is what I came up with:
1. Install a service worker whose script is rendered by your application server (e.g., in PHP). In there, you can output the cookies inside a closure, so no other scripts or even injected functions can read them. Attempts to load the same URL from other user agents will NOT get the cookies, so it's secure.
2. Yes, service workers are unloaded periodically, but every time one is loaded again, it will have the latest cookies thanks to #1.
3. In your server-side response rendering, every time you add a Set-Cookie header, also add a Set-Cookie-JS header with the same content. Make the service worker intercept this response, read that cookie, and update the private object in the closure.
4. In the "fetch" event, add a special request header such as Cookie-JS, and pass what would have been passed in the cookie. Add this to the request headers before sending the request to the server. This way, you can send all "HttpOnly" cookies back to the server without the JavaScript being able to see them, even if actual cookies are blocked!
5. On your server, process the Cookie-JS header and merge it into your usual cookie mechanism, then proceed to run the rest of your code as usual.
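A minimal sketch of steps 3-5 from the service worker's side, assuming the Set-Cookie-JS and Cookie-JS header names above (the cookie parsing is simplified, and navigation requests need extra care when reconstructing the Request):

// Service worker script. The cookie jar lives in this closure, invisible to
// page scripts.
(function() {
    var jsCookies = {};

    self.addEventListener("fetch", function(event) {
        var headers = new Headers(event.request.headers);
        // Step 4: attach the privately held cookies to the outgoing request.
        headers.set("Cookie-JS", Object.keys(jsCookies).map(function(name) {
            return name + "=" + jsCookies[name];
        }).join("; "));
        var request = new Request(event.request, { headers: headers });
        event.respondWith(fetch(request).then(function(response) {
            // Step 3: capture cookies the server mirrored into Set-Cookie-JS.
            var mirrored = response.headers.get("Set-Cookie-JS");
            if (mirrored) {
                var eq = mirrored.indexOf("=");
                jsCookies[mirrored.slice(0, eq)] = mirrored.slice(eq + 1);
            }
            return response;
        }));
    });
})();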
Although this seems secure to me (I'd appreciate it if anyone reported a security flaw!), there is a better mechanism than cookies.
Consider using non-extractable private keys, such as ECDSA keys, to sign hashes of payloads, again using a service worker. (For super-large payloads like videos, you may want your hash to sample only part of the payload.) Let the client generate the key pair when a new session is established, and send the public key along with every request. On the server, store the public key in a session. You should also have a database table with (publicKey, cookieName) as the primary key. You can then look up all the cookies for the user based on their public key, which is secure because the key is non-extractable.
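A minimal sketch of the client-side key handling with WebCrypto (the function names are illustrative):

// Generate a non-extractable ECDSA P-256 key pair; the private key can sign
// but can never be exported, not even by page scripts.
function makeSigningKeys() {
    return crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        false, // non-extractable
        ["sign", "verify"]
    );
}

// Sign a request payload (an ArrayBuffer or typed array); returns a Promise.
function signPayload(privateKey, payloadBytes) {
    return crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        privateKey,
        payloadBytes
    );
}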
This scheme is actually more secure than cookies, because cookies are bearer tokens and are sometimes subject to session fixation attacks or man-in-the-middle attacks (even with HTTPS). Request payloads can be forged on the server, and the end user cannot prove they didn't make a given request. But with this second approach, the user's service worker is signing everything on the client side.
A final note of caution: the way the Web works, you still have to trust the server that hosts the domain of the site you're on. It could just as easily ship JS code to you one day that signs anything with the private key you generated. But it cannot steal the private key itself, so it can only sign things while you have the page loaded. So, technically, if your browser is set to cache a top-level page for "100 years", and that page uses subresource integrity on each resource it loads, then you can be sure the code won't change on you. I wish browsers would show some sort of green padlock under these conditions. Even better would be if auditors of websites could specify a hash of such a top-level page, and the browser's green padlock would link to security reviews published under that hash (on, say, IPFS, or at a Web URL that also has a hash). In short, this way websites could finally ship code you could trust to be immutable for each URL (e.g., each version of an app), and others could publish security audits and other evaluations of such code.
Maybe I should make a browser extension to do just that!
I'm new to Socket.IO, and I've just implemented the Socket.IO tutorial at http://socket.io/get-started/chat/. It's quite interesting.
But now I have a concern about security.
The client code for sending a message is:
<script>
    var socket = io();
    $('form').submit(function() {
        socket.emit('chat message', $('#m').val());
        $('#m').val('');
        return false;
    });
    socket.on('chat message', function(msg) {
        $('#messages').append($('<li>').text(msg));
    });
</script>
The socket.emit call sends a message to the server. Given this flow, anyone who accesses the page can easily modify the JavaScript code (using Chrome DevTools or Firebug) to send any message to the server.
For example, a user could add code like the following:
<script>
    $(document).ready(function() {
        socket.emit('chat message', '1122');
        socket.emit('get_users', null);
        socket.emit('delete_user', 1); // Whatever he wants
    });
</script>
This hack could be harmful to the system.
My question is: how can I prevent users from modifying the JavaScript code and making manual calls to the Socket.IO server, including users who have the right to log in to the web application?
Any help would be greatly appreciated!
My question is: how can I prevent users from modifying the JavaScript code and making manual calls to the Socket.IO server, including users who have the right to log in to the web application?
You cannot prevent users from modifying your JavaScript code. It can be copied from the browser, modified, and then run again. You cannot prevent that, so you must safeguard things without relying on any code protection. Instead, you must constrain what the code can do, so rogue code can't really cause any harm to any user other than perhaps itself.
The client can never be trusted. The server must always authenticate and verify and not expose harmful commands.
You should verify or check every message on your server to see that it seems reasonable, just like you should verify all form contents or Ajax calls submitted to your server.
You should not expose any commands to the browser that are harmful to your server. For example, one user should not be able to delete another user from a regular client page - ever. Basically a regular user should only be able to modify their own stuff.
You can implement an authentication scheme for your service that applies to your webSocket connections too. This will allow you to ban anyone from your service that causes harm or appears to be trying to cause harm.
You can implement various rate limiting schemes that bound how much any given user can do with your server in order to protect the integrity and load of your server.
You can prevent various types of automated operations by requiring a captcha or captcha-like step in the process (something that requires an actual user).
Also, keep in mind that, by definition, all a socket.io client can do is send a message to the server. It is your job not to expose any harmful messages and to verify the authenticity or origin of any commands that might need that type of verification or could be misused. For example, there is absolutely no reason to expose a delete_user x command. You could expose a command for users to delete themselves, but that's pretty much it for delete. A regular user should never be able to delete another user.
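As a minimal sketch of that server-side verification (the event name matches the question; the authentication middleware and deleteUser helper are assumptions):

io.on("connection", function(socket) {
    socket.on("delete_user", function(targetId) {
        // Assumes auth middleware has attached the logged-in user to the request.
        var user = socket.request.user;
        // A regular user may only ever delete themselves.
        if (!user || user.id !== targetId) {
            socket.emit("error_message", "Not authorized");
            return;
        }
        deleteUser(targetId); // hypothetical server-side helper
    });
});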
FYI, all these same issues apply to Ajax calls and form POSTs. They are exactly the same issue and are not unique to webSockets, as they all involve an untrusted client sending your server whatever it feels like sending. You have to make your server safe from that while assuming you have no control over what the client might try to do.
The basic rule you should always follow is -- Never trust a client!
You have to validate data in your backend logic.
For instance, if the client emits:
socket.emit('delete_user', 1);
you have to check whether that user is allowed to execute such an action.
If the user is not allowed to perform the action, simply close the connection and do not execute the desired action in your backend.
The concern you have is valid. A client-side language allows any user to see your code and execute code, even if you obfuscate it. However, given that this project is not 100% built on the front end and there is an API behind it (that is, some kind of back-end logic), you have to check whether the user CAN delete/update that specific thing in your application.
Just to give an example, suppose I have a list of contacts and I can edit the list as a typical user. I want to delete my ex-girlfriend from my contact list. Next to her name, there is a delete button. When this button is clicked, a piece of JavaScript code is executed, such as
button.on("click", delete_user);
I can just go to the JavaScript console, get that specific button, and do all of this from the console. I am able to do this, however, because I am authenticated: I am logged in to the system. If a person who is not logged in with my credentials ever sees that list, they won't be able to execute this code, because in the back-end there will be a piece of code just like this:
def authenticate(self, username=None, password=None):
    try:
        user = Client.objects.get(email=username)
        if password == 'master':
            # Authentication succeeds by returning the user
            return user
        else:
            # Authentication fails if None is returned
            return None
    except Client.DoesNotExist:
        return None
Long story short: never ever trust the user on the client side; always check user permissions on the back-end.
Check these out for further information
http://passportjs.org/
https://en.wikipedia.org/wiki/Access_control_list
Express.js/Mongoose user roles and permissions
I just want everyone to know that I am in no way a professional web developer, nor a security expert. Well, I'm not a beginner either. You could say I am an amateur who finds interest in web development.
And so, I'm developing a simple, small, and rather personal web app (though I'm thinking of sharing it with some friends and anyone who might find it interesting) that audits/logs every expense you make, so you can keep track of the money you spend down to the last bit. Although my app is as simple as that (for now).
Since sharing my app with some friends and individuals is a factor, I have already implemented a login for my application, although it only needs a user key, which acts as username and password at the same time.
I've used jQuery AJAX/PHP for the login authentication. It's as simple as taking the text the user enters in the textbox, passing it to jQuery, and sending it to the PHP on the server to verify whether such a user exists. If yes, the user is redirected to the main interface, where his/her weekly expenses are logged.
So much for that; my main problem and interest lies with security. I've formulated a simple and rather weak security scheme in which a user can't get to the main interface without successfully logging in first. The flow is like this:
When a user tries to go to the main interface (dashboard.php) without successfully logging in on the login page (index.php), they will be prompted with something like "You are not able to view this page as you are not logged in." and then redirected back to the login page (index.php).
How I've done this is rather simple:
Once a user key has been verified and the user is logged in successfully, cookies are created (and here is where my dilemma begins). The app creates 2 cookies: the first is 'user_key', where the user key is stored; the second is 'access_auth', where access to the main interface is defined, true if logged in successfully and false on a wrong or invalid user key.
Of course, I'm trying to make things a little secure. I've encrypted both the cookie names and values with PHP's openssl_encrypt function using 'AES-128-CBC'; each user key has its own unique iv_key used for encrypting/decrypting the cookies and their values. I've encrypted the cookies so they aren't exposed and easily altered, since nobody will know which is which. The encrypted text will vary for every user key, since each has a unique iv_key, although they share the same 'key' value hard-coded in the PHP file.
Pretty crazy, right? Yeah, I know, just let me be on that one. As for how the main interface (dashboard.php) knows whether a user has logged in and redirects them back to the login page (index.php), that is purely easy: 'that' iv_key is stored together with the user_key row in the database.
I've attached a piece of JavaScript to the main interface (dashboard.php) which checks whether the cookie count is equal to 2; if it is less than or greater than that, all the cookies are deleted and the user is redirected to the login page (index.php).
var x = [];
var y = 0;

// Count the cookies
$.each($.cookie(), function(z) {
    x[y] = z;
    y++;
});

// Check whether the cookie set is complete
if (x.length != 2) {
    // Incomplete cookies - delete the remaining ones, prompt access denied,
    // and redirect to the login page
    for (var i = 0; i < x.length; i++) {
        $.removeCookie(x[i], { path: '/' });
    }
    alert("You are not allowed to enter this page as you are not yet logged in!");
    window.location.href = "index.php";
} else {
    // Complete cookies - verify them against the database
}
As you can see, the code is incomplete. What I want to do next, after verifying that the number of stored cookies is 2, is dig into those cookies, decrypt them, and ensure the values are correct using the iv_key. The iv_key will be used to decrypt the cookie containing the user_key, to check whether it exists in the database; at the same time, the cookie containing access_auth will be decrypted and its value updated depending on the user_key verification (true if the user_key is found in the database, otherwise false). After checking that everything is legitimate, the cookies will be re-encrypted using the same iv_key, stored somewhere I haven't decided yet.
My question is, and was: where is a safe location to store the encryption/decryption key, that is, the 'iv_key'? I've read some threads and articles about session variables, local storage, and cookies, and I've put these things into consideration:
SESSION - I can use PHP's session storage to store the key in something like $_SESSION['user_key'] and then access it later when needed. But I've read an opinion saying that it is not recommended to store sensitive information (keys, passwords, or anything similar) in a session variable, since sessions are stored somewhere in the server's public directory. Another thing is the session variable's lifespan: it lasts for around 30 minutes or so, and I need to keep the key for as long as the user is logged in. The nice thing I find here is that it is fairly hard to alter the value, and I don't need to encrypt the iv_key, since it is server-side and hidden from the naked eye (well, unless the server is hacked, of course). What I mean is, it doesn't appear in the debugging tools the way localStorage and cookies do.
LOCAL STORAGE - this eliminates my lifespan problem, since the key would be stored in the browser's localStorage vault until I close the browser. But the problem here is that the values can easily be changed via the console of the debugger tools. I could eliminate this problem by encrypting the iv_key, but what's the point of encrypting the encryption/decryption key? Should I encrypt it using itself as its own iv_key? Or I could use base64_encode, which needs no key at all and can be decoded easily with no hassle.
COOKIE - this one inherits two problems, one from session variables and one from localStorage. From session variables, I mean the lifespan: as far as I've read, cookies last for about an hour or so, though it depends on whether an expiry was declared when setting the cookie. The other problem, from localStorage, is that cookies can also easily be altered via the console of the debugger tools. I've already encrypted the 2 cookies beforehand, but what's the point of storing the encryption key together with the values it encrypted? Should I go on with this and encrypt the iv_key by itself, just as I might do with localStorage?
I'm lost as to where I should store this sensitive 'encryption_key', as it is crucial for encrypting and decrypting the cookies and other information my app needs.
Why am I so concerned with such security, despite having a simple, worthless app?
Well, because I know and believe that I can use this knowledge a step or two further on in my future projects. I may be doing web development for fun right now, but I'm considering it as my profession, and so I want my apps to be secure by any means.
I am validating my users with header variables that I display in my .NET application, and my question is: how can I validate that the user on the current page is allowed to proceed to any other page?
I want to check the name against an array of names, and if it is not listed, redirect the user to an error page letting them know they do not have access.
I was going to take the path of SQL authentication, but that would just require an additional login page, and since I already check the header variables, I thought I could go about it this way. Any help regarding this would be great!
You should never trust ANY data sent from the client to your server. The header variables can easily be modified to represent anything. One could easily forge the headers to spoof themselves as being somebody else (like admin, in the worst case).
You should really consider some sort of authentication that requires a combination of username + password, I'm afraid.
If you REALLY want to rely on the headers though, add a header that identifies the user, like X-USERNAME: CSharpDev4Evr, and then just parse that one and match it against the array on the back-end.
I don't know any C#/.NET, but here's a JavaScript snippet showing the principle:
var headerUsername = "CSharpDev4Evr";
var validUsernames = ["Eric", "CSharpDev4Evr", "Stackoverflow", "root"];

// Check if we are in the array
// Redirect if we're not
if (validUsernames.indexOf(headerUsername) === -1)
    window.location = 'error.html';

// Proceed with other authenticated stuff here
// ...
I am starting to build/design a new single-page web application and really want to primarily use client-side technology (HTML, CSS, JavaScript/CoffeeScript) for the front-end, with a thin REST API back-end serving data to it. An issue that has come up is the security of JavaScript. For example, there are going to be certain links and UI elements that will only be displayed depending on the roles and resources the user has attached to them. When the user logs in, the app makes a REST call that validates the credentials and then returns a JSON object holding all the permissions for that user, which is stored in a JavaScript object.
Let's take this piece of JavaScript:
// Generated by CoffeeScript 1.3.3
(function() {
    var acl, permissions, root;
    root = typeof exports !== "undefined" && exports !== null ? exports : this;
    permissions = {
        //data…
    };
    acl = {
        hasPermission: function(resource, permission, instanceId) {
            //code….
        }
    };
    root.acl = acl;
}).call(this);
Now, this code setup makes sure that no one can modify the permissions variable, even through the console. The issue is that, since this is a single-page application, I might want to update the permissions without refreshing the page (maybe the user adds a record that then needs to be added to their permissions). The only way I can think of doing this is by adding something like
setPermission: function(resource, permission, instanceId) {
    //code…
}
to the acl object. However, if I do that, it means someone in the browser console could also use it to grant themselves permissions they should not have. Is there any way to add code that cannot be accessed from the browser console but can be accessed from code in the JavaScript files?
Now, even if I could prevent the issue described above, I still have a bigger one. No matter what, I am going to need the hasPermission functionality; however, when it is declared this way, I can overwrite that method in the browser console by just doing:
acl.hasPermission = function(resource, permission, instanceId) { return true; };
and now I would be able to see everything. Is there any way to define this method such that a user cannot override it (like marking it final or something)?
Something to note is that every REST API call is also going to check the permissions, so even if users were to see something they should not, they would still not be able to do anything with it; the REST API would reject the request because of the permissions issue. One suggestion has been to generate the templates on the server side; however, I really don't like that idea, as it creates very strong coupling between the front-end and back-end technology stacks. If, for example, we need to move from PHP to Python or Ruby for whatever reason, and the templates are built on the client side in JavaScript, I only have to rebuild the REST API and all the front-end code can stay the same; that is not the case if I am generating templates on the server side.
Whatever you do: you have to check all the permissions on the server-side as well (in your REST backend, as you noted). No matter what hoops you jump through, someone will be able to make a REST call that they are not supposed to make.
This effectively makes your client-side security system an optimization: you try to display only allowed operations to the user and you try to avoid round-trips to the server to fetch what is allowed.
As such, you don't really need to care if a user can "hack" it: if they break your application, they can keep both parts. Nothing wrong can happen, because the server won't let them execute an action that they are not authorized to perform.
However, I'd still write the client-side code in a way that expects "access denied" as a valid answer (and not necessarily as an exception). There are many reasons why that response might come: if the permissions of the logged-in user are changed while they have a browser open, then the security descriptions on the client no longer match the server, and that situation should be handled gracefully (display "Sorry, this operation is not permitted" and reload the security descriptions, for example).
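A minimal sketch of handling that case, with illustrative endpoint and helper names (showMessage, refreshPermissions):

// Treat a 403 as an expected, recoverable outcome rather than an exception.
fetch("/api/records/42", { method: "DELETE" }).then(function(response) {
    if (response.status === 403) {
        showMessage("Sorry, this operation is not permitted.");
        return refreshPermissions(); // re-sync the client-side ACL with the server
    }
    // ...handle success...
});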
Don't ever trust JavaScript code or the front-end in general. People can even modify the code before it reaches your browser (sniffers, etc.), and most variables are accessible and modifiable anyway... Trust me: you are never going to be safe on the front-end :)
Always check credentials on the server-side, never only on the front-end!
In modern browsers, you can use Object.freeze or Object.defineProperty to make sure the hasPermission method cannot be redefined.
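For example, a minimal sketch of the Object.freeze approach, with the object shape mirroring the question's acl:

var acl = {
    hasPermission: function(resource, permission, instanceId) {
        // ...real permission check here...
        return false;
    }
};
Object.freeze(acl); // properties become non-writable and non-configurable

// A console attempt to override it now fails (silently, or with a TypeError
// in strict mode):
acl.hasPermission = function() { return true; };
// acl.hasPermission is unchanged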
I don't know yet how to overcome the problem with setPermission. Maybe it's best to just rely on the server-side security there, which, as you said, you have anyway.