While on a page on subdomain.foo.com, I know it is possible to set a cookie in JavaScript with a domain=foo.com clause (or domain=.foo.com according to earlier specifications), and have that cookie apply to all subdomains.
When I open up the developer console in Chrome on a GitHub Pages page (say, yelp.github.io) and try this using a domain=github.io clause, the cookie doesn't get set (document.cookie yields the empty string). I only seem to be able to get the cookie to set if I omit the domain clause, or use domain=yelp.github.io.
I can understand why GitHub would want to restrict cookie scope this way for security reasons, but I'm not sure how this is actually working or what's behind the behavior I'm seeing. Is there something special about the github.io domain? Is there a security policy being applied I'm not aware of? Or am I just doing it wrong?
document.cookie = 'foo=1; domain=github.io'
According to the relevant specification (RFC 6265), a cookie whose Domain attribute is a public suffix is rejected unless the request host is that suffix itself:
If the user agent is configured to reject "public suffixes" and the domain-attribute is a public suffix:
If the domain-attribute is identical to the canonicalized request-host:
Let the domain-attribute be the empty string.
Otherwise:
Ignore the cookie entirely and abort these steps.
NOTE: A "public suffix" is a domain that is controlled by a
public registry, such as "com", "co.uk", and "pvt.k12.wy.us".
This step is essential for preventing attacker.com from
disrupting the integrity of example.com by setting a cookie
with a Domain attribute of "com". Unfortunately, the set of
public suffixes (also known as "registry controlled domains")
changes over time. If feasible, user agents SHOULD use an
up-to-date public suffix list, such as the one maintained by
the Mozilla project at http://publicsuffix.org/.
github.io is on the public suffix list.
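You can see this directly in the devtools console on any *.github.io page (yelp.github.io here is just the example from the question):

document.cookie = 'foo=1; domain=github.io';      // silently rejected: github.io is a public suffix
document.cookie = 'bar=1; domain=yelp.github.io'; // accepted
document.cookie;                                  // "bar=1"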
Related
I just started playing around with WebAuthn on localhost. I'm wondering if there is a way to restrict a credential for auth on a specific origin / website. Is that already the case, or can anyone with the public key just call navigator.credentials.get on any host?
There are two levels of controls that prevent passkeys from being used on the wrong website:
“RP” stands for “relying party”. You (the website) are a “relying party” in authentication-speak. An RP ID is a domain name and every passkey has one that’s fixed at creation time. Every passkey operation asserts an RP ID and, if a passkey’s RP ID doesn’t match, then it doesn’t exist for that operation.
This prevents one site from using another’s passkeys. A passkey with an RP ID of foo.com can’t be used on bar.com because bar.com can’t assert an RP ID of foo.com. A site may use any RP ID formed by discarding zero or more labels from the left of its domain name until it hits an eTLD. So say that you’re https://www.foo.co.uk: you can assert www.foo.co.uk (discarding zero labels), foo.co.uk (discarding one label), but not co.uk because that hits an eTLD. If you don’t set an RP ID in a request then the default is the site’s full domain.
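For example, a site at https://www.foo.co.uk that wants its passkeys usable across subdomains might create them with the broader RP ID, roughly like this (a minimal sketch; all names and values are illustrative):

// Inside an async function on https://www.foo.co.uk
const credential = await navigator.credentials.create({
  publicKey: {
    rp: { id: "foo.co.uk", name: "Foo" },  // one label dropped from the left; "co.uk" (an eTLD) would be rejected
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "alice@foo.co.uk",
      displayName: "Alice",
    },
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally supplied by the server
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
  },
});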
Our www.foo.co.uk example might happily be creating passkeys with the default RP ID but later decide that it wants to move all sign-in activity to an isolated origin, https://accounts.foo.co.uk. But none of the passkeys could be used from that origin! It would have needed to create them with an RP ID of foo.co.uk in the first place to allow that.
But you might want to be careful about always setting the most general RP ID because then usercontent.foo.co.uk could access and overwrite them too. That brings us to the second control mechanism: when a passkey is used to sign in, the browser includes the origin that made the request in the signed data. So accounts.foo.co.uk would be able to see that a request was triggered by usercontent.foo.co.uk and reject it, even if the passkey’s RP ID allowed usercontent.foo.co.uk to use it. But that mechanism can’t do anything about usercontent.foo.co.uk being able to overwrite them.
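A rough sketch of that second check, assumed to run on the relying party's server during assertion verification (assertion here stands for the PublicKeyCredential returned by navigator.credentials.get):

// clientDataJSON is covered by the signature (via its hash), so the origin it
// reports can be trusted once the signature verifies.
const clientData = JSON.parse(
  new TextDecoder().decode(assertion.response.clientDataJSON)
);
if (clientData.origin !== "https://accounts.foo.co.uk") {
  throw new Error("unexpected origin: " + clientData.origin);
}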
WebAuthn credentials are origin-bound by design, which is one of the reasons they are phishing-resistant credentials.
I set session cookies, but it creates new cookies. I'm tired of this. Do you know how to fix it?
Code:
document.cookie = ".ROBLOSECURITY=cookie; expires=session; path=/";
Let's check the documentation:
;domain=domain (e.g., 'example.com' or 'subdomain.example.com'). If not specified, this defaults to the host portion of the current document location. Contrary to earlier specifications, leading dots in domain names are ignored, but browsers may decline to set the cookie containing such dots. If a domain is specified, subdomains are always included.
Note: The domain must match the domain of the JavaScript origin. Setting cookies to foreign domains will be silently ignored.
Your first cookie, with domain www.roblox.com, will be accessible only on www.roblox.com pages, but a .roblox.com cookie may be accessed by JS from all roblox.com subdomains.
Here is a good answer
So, as #smac89 wrote in a comment, you should add the domain when creating the new cookie:
document.cookie = ".ROBLOSECURITY=cookie; expires=session; path=/; domain=.roblox.com"
There is no syntax for what you want.
You can either leave the expiration value unset, in which case the cookie will expire at the end of the session, or choose an arbitrarily large value.
Be aware that some browsers have problems with dates past 2038 (when the Unix epoch time exceeds a 32-bit integer).
See : https://stackoverflow.com/a/532660/1901857
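For illustration (the cookie name and values are made up), the two options look like this:

document.cookie = "pref=dark; path=/";                                // no expires/max-age: session cookie
document.cookie = "pref=dark; path=/; max-age=" + 60 * 60 * 24 * 365; // roughly one year
document.cookie = "pref=dark; path=/; expires=Fri, 01 Jan 2038 00:00:00 GMT"; // stays below the 32-bit rollover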
I'm making a Chrome extension that injects an iframe on a webpage and shows some stuff.
The content loaded in the iframe is from https://example.com, and I have full control over it. I'm trying to access the cookies of https://example.com from the iframe (which I think should be available) via document.cookie. This does not let me access HttpOnly-flagged cookies, and I do not know the reason for this. After all, this is not cross-domain. Is it?
Here is the code I'm using to get the cookies:
jQuery("#performAction").click(function(e) {
e.preventDefault();
console.log(document.domain); // https://example.com
var cookies = document.cookie;
console.log('cookies', cookies);
var httpFlaggedCookie1 = getCookie("login_sess");
var httpFlaggedCookie2 = getCookie("login_pass");
console.log('httpFlaggedCookie1 ', httpFlaggedCookie1 ); // shows blank
console.log('httpFlaggedCookie2 ', httpFlaggedCookie2 ); // shows blank
if(httpFlaggedCookie2 != "" && httpFlaggedCookie2 != ""){
doSomething();
} else{
somethingElse();
}
});
Any suggestions on what can be done about this?
By default in Chrome, HttpOnly cookies cannot be read or written from JavaScript.
However, since you're writing a Chrome extension, you could use chrome.cookies.get and chrome.cookies.set to read/write them, with the cookies permission declared in manifest.json. Be aware that chrome.cookies can only be accessed from the background page, so you may need to do something with Message Passing.
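A minimal sketch of that approach (Manifest V3 style; the message type, cookie name, and URL are placeholders, and the manifest would also need the "cookies" permission plus a host permission for https://example.com/*):

// background.js (extension service worker / background page)
chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
  if (msg.type === "getLoginCookie") {
    chrome.cookies.get({ url: "https://example.com", name: "login_sess" }, function (cookie) {
      sendResponse({ value: cookie ? cookie.value : null });
    });
    return true; // keep the channel open for the async sendResponse
  }
});

// content script injected into the iframe's page
chrome.runtime.sendMessage({ type: "getLoginCookie" }, function (response) {
  console.log("login_sess", response.value);
});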
Alright folks. I struggled mightily to make HttpOnly cookies show up in iframes after third-party cookies were deprecated. Eventually I was able to solve the issue.
Here is what I came up with:
1. Install a service worker whose script is rendered by your application server (e.g. in PHP). There you can output the cookies into a closure, so no other scripts or even injected functions can read them. Attempts to load this same URL from other user agents will NOT get the cookies, so it's secure.
2. Yes, service workers are unloaded periodically, but every time one is loaded again, it will have the latest cookies thanks to #1.
3. In your server-side response rendering, every time you add a Set-Cookie header, also add a Set-Cookie-JS header with the same content. Have the service worker intercept this response, read that cookie, and update the private object in the closure.
4. In the "fetch" event, add a special request header such as Cookie-JS carrying what would have been passed in the cookie, and add it to the request headers before sending the request to the server (sketched after this list). This way you can send all "httponly" cookies back to the server without the JavaScript being able to see them, even if actual cookies are blocked!
5. On your server, process the Cookie-JS header, merge it into your usual cookie mechanism, and then run the rest of your code as usual.
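A minimal sketch of steps 3 and 4, assuming the custom header names above and a jsCookies object whose initial values are templated into the script by the server that renders it:

// sw.js (rendered by the application server)
let jsCookies = { /* e.g. login_sess: "..." (filled in server-side) */ };

self.addEventListener("fetch", function (event) {
  const headers = new Headers(event.request.headers);
  headers.set("Cookie-JS", Object.entries(jsCookies)
    .map(function (pair) { return pair[0] + "=" + pair[1]; })
    .join("; "));

  event.respondWith(
    fetch(event.request, { headers: headers }).then(function (response) {
      // Mirror any Set-Cookie-JS header back into the private closure.
      const setCookieJs = response.headers.get("Set-Cookie-JS");
      if (setCookieJs) {
        const pair = setCookieJs.split(";")[0];
        const eq = pair.indexOf("=");
        jsCookies[pair.slice(0, eq).trim()] = pair.slice(eq + 1);
      }
      return response;
    })
  );
});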
Although this seems secure to me — I’d appreciate if anyone reported a security flaw!! — there is a better mechanism than cookies.
Consider using non-extractable private keys such as ECDSA to sign hashes of payloads, also using a service worker. (In super-large payloads like videos, you may want your hash to sample only a part of the payload.) Let the client generate the key pair when a new session is established, and send the public key along with every request. On the server, store the public key in a session. You should also have a database table with the (publicKey, cookieName) as the primary key. You can then look up all the cookies for the user based on their public key — which is secure because the key is non-extractable.
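A rough sketch of that idea with the Web Crypto API (for intuition only; key management, server-side verification, and the hash sampling mentioned above are all omitted):

// Inside an async function, once per new session:
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false,                 // the private key is non-extractable
  ["sign", "verify"]
);

// The public key can still be exported and sent to the server to store in the session.
const publicKeyJwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);

// Sign each outgoing request payload; the server verifies with the stored public key.
const payload = new TextEncoder().encode(JSON.stringify({ action: "transfer", amount: 10 }));
const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keyPair.privateKey,
  payload
);

The server side would verify each signature against the stored public key (crypto.subtle.verify or an equivalent) and look up the user's cookies by that key, as described above.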
This scheme is actually more secure than cookies, because cookies are bearer tokens and are sometimes subject to session fixation attacks, or man-in-the-middle attacks (even with https). Request payloads can be forged on the server and the end-user cannot prove they didn’t make that request. But with this second approach, the user’s service worker is signing everything on the client side.
A final note of caution: the way the Web works, you still have to trust the server that hosts the domain of the site you’re on. It could just as easily ship JS code to you one day to sign anything with the private key you generated. But it cannot steal the private key itself, so it can only sign things when you’ve loaded the page. So, technically, if your browser is set to cache a top-level page for “100 years”, and that page contains subresource integrity on each resource it loads, then you can be sure the code won’t change on you. I wish browsers would show some sort of green padlock under these conditions. Even better would be if auditors of websites could specify a hash of such a top-level page, and the browser’s green padlock would link to security reviews published under that hash (on, say, IPFS, or at a Web URL that also has a hash). In short — this way websites could finally ship code you could trust would be immutable for each URL (eg version of an app) and others could publish security audits and other evaluations of such code.
Maybe I should make a browser extension to do just that!
I updated to Firefox 40 today, and I see a neat new message in my Firebug console:
Found hi-entropy localStorage: 561.0263282209031 bits http://localhost:8080/my_app_path itemName
...where itemName is the name of a particular item I've stuck in localStorage.
The referenced line number is always unhelpful: the last one of the main HTML document (it is a single-page app).
Why does this happen? If you'd like an example of my "hi-entropy localStorage", here are the data in question:
Object {
  id: "c9796c88-8d22-4d33-9d13-dcfdf4bc879a",
  userId: 348,
  userName: "admin"
}
Your browser has the Privacy Badger plugin (1.0), which can detect some types of super-cookies and browser fingerprinting. It flagged your local storage item (a false positive) and produced those cryptic logs.
A high-entropy string can be vaguely defined as complicated, hard to guess/repeat, or likely to contain meaningful information. If there's such a string in your local storage (in your example, the item id), it's possible that advertisers put it there to uniquely identify you. Privacy Badger has rough methods to estimate a string's entropy, which the developers discuss here.
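For intuition only (this is not Privacy Badger's actual algorithm), a crude per-character estimate might look like this:

// Guess the alphabet the string draws from, then assume each character
// contributes log2(alphabet size) bits. Crude, but a UUID like the "id"
// above scores far higher than an ordinary word.
function roughEntropyBits(str) {
  var alphabet = 0;
  if (/[0-9]/.test(str)) alphabet += 10;
  if (/[a-z]/.test(str)) alphabet += 26;
  if (/[A-Z]/.test(str)) alphabet += 26;
  if (/[^0-9a-zA-Z]/.test(str)) alphabet += 10; // punctuation, dashes, etc.
  return str.length * Math.log2(Math.max(alphabet, 2));
}

roughEntropyBits("c9796c88-8d22-4d33-9d13-dcfdf4bc879a"); // ~200 bits
roughEntropyBits("admin");                                // ~24 bits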
You should check out the paper The Web never forgets: Persistent tracking mechanisms in the wild, particularly the section on cookie-syncing:
Cookie synchronization or cookie syncing is the practice of tracker domains passing pseudonymous IDs associated with a given user, typically stored in cookies, amongst each other.
I guess it is a stranded value. I disabled a script from Zopim chat and this started to show. Looking up what entropy means, I found this explanation: "(in data transmission and information theory) a measure of the loss of information in a transmitted signal or message", which makes sense.
You can see what is in Local Storage by opening the Developer Tools (Ctrl+Shift+S) and enabling the Local Storage panel via the Toolbox options on the right side of the menu bar.
To delete the value in question, just follow the steps from here: How to view/delete local storage in Firefox?
Let's say I have a PHP-generated JavaScript file that contains the name, ID number, and email address of the user who is currently logged in. Would a simple document.location.href lookup prevent remote sites from determining the currently logged-in user?
Would this be safe?
if (window.document.location.hostname == 'domain.com') {
  var user = {
    name: 'me',
    id: 234243,
    email: 'email#email.com'
  };
} else {
  alert('Sorry you may not request this info cross sites.');
}
Initially it appears safe to me.
EDIT: I had initially thought this was obvious, but I am using cookies to determine the currently logged-in user. I am just trying to prevent cross-domain access to the user's info. For example, if the if statement were removed, malicious site A could embed the JavaScript file and access the user's info. By adding the if statement, the user JS object should never appear. Cross-site AJAX isn't supported, therefore only through JavaScript insertion could the malicious site attempt to determine the currently logged-in user.
EDIT 2: Would checking the HTTP_REFERER using PHP be safe? What if caching is also enabled on the client? For example, if the user visits my site A, where the user script is downloaded, and then later visits malicious site B, would the script be cached, thereby bypassing the need for the server to check the user's HTTP_REFERER?
You're basically saying "here's the keys to the bank vault, here's the guard's schedule, and here's the staff schedule. But hey, if you're not from the Acme Security Company, pretend I didn't give this to you".
"oh, sure, no problem, lemme just pretend to shred this note and go rent a large truck haul away your vault contents with"
You really just don't want to try something like this. Suppose I'm running an evil site; what do I do?
<script>
RegExp.prototype.test = function() { return true; };
</script>
<script src="http://yoursite.example.com/dynamicjs.php"></script>
<script>
alert("Look at the data I stole: " + user);
</script>
No, what you have there is not "safe" in that it will reveal those details to anyone requesting the HTML page containing that JavaScript. All they have to do is look at the text (including script) returned by the server.
What it comes down to is this: either you have authenticated the other end to your satisfaction, in which case you don't need the check in the JavaScript, or you haven't, in which case you don't want to output the details to the response at all. There's no purpose whatsoever to that client-side if statement. Try this: http://jsbin.com/aboze5. It'll say you can't request the data; then do a View Source, and note that you can see the data.
Instead, you need to check the origin of the request server-side and not output those details in the script at all if the origin of the request is not authenticated.
Update 1: Below you said:
I was specifically trying to determine if document.location.href could be falsified.
Yes, document.location can be falsified through shadowing the document symbol (although you might be able to detect that if you tried hard enough):
(function() {
  var document; // Shadow the symbol
  document = {
    location: {
      href: "http://example.com/foo.html"
    }
  };
  alert("document.location.href = " + document.location.href);
})();
Live copy
Cross-domain checks must happen within the browser's internals; nothing at the level of your JavaScript code can do it securely and robustly.
But that really doesn't matter. Even if it couldn't be falsified, the quoted example code doesn't protect the data. By the time the client-side check is done, the data has already been sent to the client.
Update 2: You've added a note about checking the HTTP_REFERER (sic) header (yes, it really is misspelled). Sadly, no, you can't trust that. HTTP_REFERER can be spoofed, and separately it can be suppressed.
Off-topic: You're probably already doing this, but: When transferring personal details you've promised to keep confidential (I don't know whether you have, but hopefully so), use HTTPS (e.g., SSL). But it's important to remember that while HTTPS ensures that data cannot be read in transit, it does nothing to ensure that the origin of the request is authenticated. E.g., you know the conversation is secure (within reason and current practice), but you don't necessarily know who you're talking to. There's where authentication comes into it.