I just started playing around with WebAuthn on localhost. I'm wondering if there is a way to restrict a credential to authentication on a specific origin / website. Is that already the case, or can anyone with the public key just call navigator.credentials.get on any host?
There are two levels of controls that prevent passkeys from being used on the wrong website:
“RP” stands for “relying party”. You (the website) are a “relying party” in authentication-speak. An RP ID is a domain name and every passkey has one that’s fixed at creation time. Every passkey operation asserts an RP ID and, if a passkey’s RP ID doesn’t match, then it doesn’t exist for that operation.
This prevents one site from using another’s passkeys. A passkey with an RP ID of foo.com can’t be used on bar.com because bar.com can’t assert an RP ID of foo.com. A site may use any RP ID formed by discarding zero or more labels from the left of its domain name until it hits an eTLD. So say that you’re https://www.foo.co.uk: you can assert www.foo.co.uk (discarding zero labels), foo.co.uk (discarding one label), but not co.uk because that hits an eTLD. If you don’t set an RP ID in a request then the default is the site’s full domain.
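For illustration, here is a minimal sketch of creating a passkey with an explicit RP ID. The challenge and user details are placeholders you would normally get from your server, and the call must run in an async context:

// Sketch, assuming the page is served from https://www.foo.co.uk.
// Setting rp.id to the registrable domain lets any subdomain assert it later.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: challengeFromServer, // random bytes fetched from your server (assumption)
    rp: { id: "foo.co.uk", name: "Foo" }, // omit id to default to www.foo.co.uk
    user: {
      id: new TextEncoder().encode("example-user-handle"), // placeholder
      name: "alice@foo.co.uk",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
  },
});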
Our www.foo.co.uk example might happily be creating passkeys with the default RP ID but later decide that it wants to move all sign-in activity to an isolated origin, https://accounts.foo.co.uk. But none of the passkeys could be used from that origin! It would have needed to create them with an RP ID of foo.co.uk in the first place to allow that.
But you might want to be careful about always setting the most general RP ID because then usercontent.foo.co.uk could access and overwrite them too. That brings us to the second control mechanism: when a passkey is used to sign in, the browser includes the origin that made the request in the signed data. So accounts.foo.co.uk would be able to see that a request was triggered by usercontent.foo.co.uk and reject it, even if the passkey’s RP ID allowed usercontent.foo.co.uk to use it. But that mechanism can’t do anything about usercontent.foo.co.uk being able to overwrite them.
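As a sketch, the server-side check of that signed origin might look like this (Node; assertion is assumed to be the PublicKeyCredential returned by navigator.credentials.get, and the allowed-origin list is an assumption for this example):

// The browser embeds the requesting origin in clientDataJSON, which is
// covered by the authenticator's signature, so the server can reject
// requests triggered from origins it doesn't trust.
const clientData = JSON.parse(
  Buffer.from(assertion.response.clientDataJSON).toString("utf8")
);
const allowedOrigins = new Set(["https://accounts.foo.co.uk"]);
if (!allowedOrigins.has(clientData.origin)) {
  throw new Error("unexpected origin: " + clientData.origin);
}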
WebAuthn credentials are origin-bound by design, which is one of the reasons they are phishing resistant credentials.
I was testing out WebAuthn purely on the front end (meaning no backend involvement for things like the challenge, user ID, etc.).
Why does icon matter?
When I first tried, I could only authenticate with a security key. But when I added icon: undefined to publicKey.user.icon, I could authenticate with Windows Hello. And even when I inserted a real icon link, it didn't show up. (Windows 10 Education, latest version.)
How can I implement it?
I've found that I can read .response.attestationObject from the credential that navigator.credentials.create() resolves to. Is this the right way to use WebAuthn?
About physical security key
Let's say I've got a USB security key with fingerprint support. I enroll my fingerprint, then register with WebAuthn. Then my friend comes along and registers with his fingerprint. Would the key material (.response.attestationObject) be the same for both of us because it's the same physical key, or different because the fingerprints are different?
[Partial answer here, I will be happy to see other answers from community members]
The icon parameter has been removed from the new version of the specification.
Webauthn-1: https://www.w3.org/TR/webauthn-1/#dictionary-pkcredentialentity
Webauthn-2: https://www.w3.org/TR/webauthn-2/#dictionary-pkcredentialentity
It was a property that had to be an "a priori authenticated" URL, e.g. a data: URL instead of an https:// one.
Can you be more precise?
A security key is usually used by only one user. New credentials are generated each time a user uses the key to register on an application. With the use case you mention, two sets of credentials will be generated by the key, each associated with its own biometric data. There is no chance for user 2 to be logged in as user 1.
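To illustrate, here is a sketch comparing two registrations on the same key. credentialUser1 and credentialUser2 are assumed to be the results of two separate navigator.credentials.create() calls:

// Each registration yields a distinct credential; comparing the raw
// credential IDs shows they differ even on the same physical key.
const idA = new Uint8Array(credentialUser1.rawId);
const idB = new Uint8Array(credentialUser2.rawId);
const same = idA.length === idB.length && idA.every((b, i) => b === idB[i]);
console.log(same ? "same credential" : "different credentials"); // different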
Not sure if the title summarises my question well.
Basically, I am trying to protect routes such as checking whether a user exists. I only want requests coming from my frontend application to be approved, but since no user is signed in, there is no token to send.
API request:
mywebsite/checkUser/email
This route is unprotected on my backend because no user is logged in.
BUT I want to protect this route in such a way that it's accessible only from the frontend.
One idea I came up with was adding a specific header from the frontend and checking it on the backend, but that could easily be replicated. Is there something more secure, like using tokens?
I am using React and Node.js
The same-origin policy is going to give you some basic protection, but basically, if an API endpoint is exposed publicly, it's exposed publicly. If you don't want that route to be publicly accessible, you need to add access control.
If you use that route to check if a user is already registered, you could, for example, merge it with the user registration route and send a different error code if the user already exists (which is not a great idea because it leaks which emails are registered on your system).
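A hedged Express sketch of that merge, returning a conflict status when the email is taken. findUserByEmail and createUser are hypothetical helpers, and the caveat above about leaking registered emails still applies:

const express = require("express");
const app = express();
app.use(express.json());

app.post("/register", async (req, res) => {
  const existing = await findUserByEmail(req.body.email); // hypothetical lookup
  if (existing) {
    return res.status(409).json({ error: "already registered" }); // leaks existence
  }
  const user = await createUser(req.body); // hypothetical insert
  res.status(201).json({ id: user.id });
});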
You can verify that a request was originated by a user (by authenticating them), but you cannot verify that a request comes from a particular client, for these two reasons:
If you include some API key in your client (web page or other), it's easily retrievable by everyone (the best you can do is obfuscate it, which makes things slightly harder but still possible).
If you send an API key over the network, it's easily retrievable as well.
The only thing you can do is prevent other web pages from calling your backend on behalf of the user, by using CORS (cross-origin access is blocked by default unless you send an Access-Control-Allow-Origin header).
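A minimal sketch of that CORS restriction in Express (the frontend origin and route are placeholders):

const express = require("express");
const app = express();

// Only pages served from this origin may read responses in the browser.
// Note this constrains browsers only; any non-browser client can still
// call the endpoint directly.
app.use((req, res, next) => {
  res.set("Access-Control-Allow-Origin", "https://myfrontend.example"); // placeholder origin
  next();
});

app.get("/checkUser/:email", (req, res) => res.json({ exists: false })); // placeholder route
app.listen(3000);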
I ended up creating a kind of working solution: basically, I create a new base64 string on my frontend and attach it to the header while making a request to the backend. The base64 string changes every minute and is derived from your secret key, so even if the header is copied, it goes stale within a minute.
I have made a package so that people can use it if they want - https://github.com/dhiraj1site/ncrypter
You can use it like so:
var ncrypter = require('ncrypter');
// use encrypt on your frontend with a number of seconds and your secret key
var encodedString = ncrypter.encrypt(2, 'mysecret1');
// use decrypt on your backend with the same seconds and secret
var decodedString = ncrypter.decrypt(encodedString, 2, 'mysecret1');
console.log('permission granted -->', decodedString);
While on a page on subdomain.foo.com, I know it is possible to set a cookie in JavaScript with a domain=foo.com clause (or domain=.foo.com according to earlier specifications), and have that cookie apply to all subdomains.
When I open up the developer console in Chrome on a GitHub Pages page (say, yelp.github.io) and try this using a domain=github.io clause, the cookie doesn't get set (document.cookie yields the empty string). I only seem to be able to get the cookie to set if I omit the domain clause, or use domain=yelp.github.io.
I can understand why GitHub would want to restrict cookie scope this way for security reasons, but I'm not sure how this is actually working or what's behind the behavior I'm seeing. Is there something special about the github.io domain? Is there a security policy being applied I'm not aware of? Or am I just doing it wrong?
document.cookie = 'foo=1; domain=github.io'
According to the relevant specification (RFC 6265), cookies are rejected for public suffixes if set from a subdomain:
If the user agent is configured to reject "public suffixes" and the domain-attribute is a public suffix:
If the domain-attribute is identical to the canonicalized request-host:
Let the domain-attribute be the empty string.
Otherwise:
Ignore the cookie entirely and abort these steps.
NOTE: A "public suffix" is a domain that is controlled by a public registry, such as "com", "co.uk", and "pvt.k12.wy.us". This step is essential for preventing attacker.com from disrupting the integrity of example.com by setting a cookie with a Domain attribute of "com". Unfortunately, the set of public suffixes (also known as "registry controlled domains") changes over time. If feasible, user agents SHOULD use an up-to-date public suffix list, such as the one maintained by the Mozilla project at http://publicsuffix.org/.
github.io is on the public suffix list.
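You can check this yourself with, for example, the psl npm package. A small sketch; the outputs noted in the comments are what I would expect:

const psl = require("psl");

// github.io is itself a public suffix, so the registrable domain is one
// label below it, and a Domain=github.io cookie gets rejected.
console.log(psl.parse("yelp.github.io").tld); // "github.io"
console.log(psl.get("yelp.github.io")); // "yelp.github.io" (registrable domain)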
I'm making a Chrome extension that injects an iframe into a webpage and shows some stuff.
The content loaded in the iframe is from https://example.com, and I have full control over it. I'm trying to access cookies of https://example.com from the iframe (which I think should be available) via document.cookie. This is not letting me access HttpOnly-flagged cookies, and I do not know the reason for this. After all, this is not cross-domain. Is it?
Here is the code I'm using to get the cookies:
jQuery("#performAction").click(function (e) {
  e.preventDefault();
  console.log(document.domain); // "example.com" (document.domain has no scheme)
  var cookies = document.cookie;
  console.log('cookies', cookies);
  // getCookie is a helper (defined elsewhere) that reads one cookie by name.
  var httpFlaggedCookie1 = getCookie("login_sess");
  var httpFlaggedCookie2 = getCookie("login_pass");
  console.log('httpFlaggedCookie1', httpFlaggedCookie1); // shows blank
  console.log('httpFlaggedCookie2', httpFlaggedCookie2); // shows blank
  if (httpFlaggedCookie1 != "" && httpFlaggedCookie2 != "") {
    doSomething();
  } else {
    somethingElse();
  }
});
Any suggestions what can be done for this?
By default in Chrome, HttpOnly cookies cannot be read or written from JavaScript.
However, since you're writing a Chrome extension, you can use chrome.cookies.get and chrome.cookies.set to read/write them, with the cookies permission declared in manifest.json. Be aware that chrome.cookies can only be accessed from the background page, so you may need to do something with Message Passing to hand values to your content script.
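A small sketch of that flow in the background page, assuming manifest.json declares the cookies permission plus host permissions for https://example.com (the cookie name is taken from the question):

// Read the HttpOnly cookie in the background page, then relay it to the
// content script of the active tab via message passing.
chrome.cookies.get(
  { url: "https://example.com", name: "login_sess" },
  function (cookie) {
    if (!cookie) return;
    chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
      chrome.tabs.sendMessage(tabs[0].id, { loginSess: cookie.value });
    });
  }
);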
Alright folks. I struggled mightily to make HttpOnly cookies show up in iframes after third-party cookies were deprecated. Eventually I was able to solve the issue. Here is what I came up with:
1) Install a service worker whose script is rendered by your application server (e.g. in PHP). In there, you can output the cookies, in a closure, so no other scripts or even injected functions can read them. Attempts to load this same URL from other user agents will NOT get the cookies, so it's secure.
2) Yes, service workers are unloaded periodically, but every time one is loaded again, it'll have the latest cookies due to #1.
3) In your server-side response rendering, every time you add a Set-Cookie header, also add a Set-Cookie-JS header with the same content. Make the service worker intercept this response, read that cookie, and update the private object in the closure.
4) In the "fetch" event, add a special request header such as Cookie-JS, and pass what would have been passed in the cookie (see the sketch after this list). Add this to the request headers before sending the request to the server. In this way, you can send all "httponly" cookies back to the server, without the JavaScript being able to see them, even if actual cookies are blocked!
5) On your server, process the Cookie-JS header and merge that into your usual cookies mechanism, then proceed to run the rest of your code as usual.
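A minimal sketch of steps 3 and 4 inside the service worker. The Set-Cookie-JS and Cookie-JS header names come from the steps above; the closure variable is an assumption, and requests with streaming bodies would need more care:

// Cookie values live only in this closure; page scripts cannot read them.
var jsCookies = {}; // name -> value

self.addEventListener("fetch", function (event) {
  var headers = new Headers(event.request.headers);
  var pairs = Object.keys(jsCookies).map(function (k) {
    return k + "=" + jsCookies[k];
  });
  headers.set("Cookie-JS", pairs.join("; ")); // step 4: smuggle cookies to the server
  event.respondWith(
    fetch(new Request(event.request, { headers: headers })).then(function (response) {
      var setCookieJS = response.headers.get("Set-Cookie-JS"); // step 3: pick up updates
      if (setCookieJS) {
        var first = setCookieJS.split(";")[0];
        var eq = first.indexOf("=");
        jsCookies[first.slice(0, eq)] = first.slice(eq + 1);
      }
      return response;
    })
  );
});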
Although this seems secure to me (I'd appreciate it if anyone reported a security flaw!), there is a better mechanism than cookies.
Consider using non-extractable private keys such as ECDSA to sign hashes of payloads, also using a service worker. (In super-large payloads like videos, you may want your hash to sample only a part of the payload.) Let the client generate the key pair when a new session is established, and send the public key along with every request. On the server, store the public key in a session. You should also have a database table with the (publicKey, cookieName) as the primary key. You can then look up all the cookies for the user based on their public key — which is secure because the key is non-extractable.
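A sketch of the client side of that scheme with the Web Crypto API. The payload shape and transport are assumptions; the key point is that the private key is generated as non-extractable:

(async () => {
  // Generate a non-extractable ECDSA key pair: the private key can sign but
  // can never be exported, even by code running on the page.
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // extractable: false
    ["sign", "verify"]
  );

  // Only the public key leaves the client; the server stores it per session.
  const publicKeyJwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);

  // Sign each request payload before sending it.
  const payload = new TextEncoder().encode(JSON.stringify({ action: "demo" }));
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.privateKey,
    payload
  );
  // Send { publicKeyJwk, payload, signature } to the server for verification.
})();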
This scheme is actually more secure than cookies, because cookies are bearer tokens and are sometimes subject to session fixation attacks, or man-in-the-middle attacks (even with https). Request payloads can be forged on the server and the end-user cannot prove they didn’t make that request. But with this second approach, the user’s service worker is signing everything on the client side.
A final note of caution: the way the Web works, you still have to trust the server that hosts the domain of the site you're on. It could just as easily ship JS code to you one day that signs anything with the private key you generated. But it cannot steal the private key itself, so it can only sign things while you have the page loaded. So, technically, if your browser is set to cache a top-level page for "100 years", and that page uses subresource integrity on each resource it loads, then you can be sure the code won't change on you.
I wish browsers would show some sort of green padlock under these conditions. Even better would be if auditors of websites could specify a hash of such a top-level page, and the browser's green padlock would link to security reviews published under that hash (on, say, IPFS, or at a Web URL that also has a hash). In short, this way websites could finally ship code you could trust would be immutable for each URL (e.g. a version of an app), and others could publish security audits and other evaluations of such code.
Maybe I should make a browser extension to do just that!
I've been thinking about services such as pwnedlist.com and shouldichangemypassword.com and the fundamental problem with them - trust.
That is to say the user must trust that these services aren't going to harvest the submitted queries.
Pwnedlist.com offers the option to submit a SHA-512 hash of the user's query, which is a step forward, but it still leaks information if the query does exist in the database. That is, a malicious service would know that the given email address was valid (see also: why you should never click unsubscribe links in spam email).
The solution I came up with is as follows:
1) Instead of the user calculating and submitting the hash herself, the hash (I'll use the much simpler MD5 in my example) is calculated via client-side JavaScript:
md5("user@example.com") = "b58996c504c5638798eb6b511e6f49af"
2) Now, instead of transmitting the entire hash as a query to the server, only the first N bits are transmitted:
GET http://remotesite.com?query=b58996
3) The server responds with all hashes that exist in its database that begin with the same N bits:
[
  "b58996afe904bc7a211598ff2a9200fe",
  "b58996c504c5638798eb6b511e6f49af",
  "b58996443fab32c087632f8992af1ecc",
  ...etc...
]
4) The client side javascript compares the list of hashes returned by the server and informs the user whether or not her email address exists in the DB.
Since "b58996c504c5638798eb6b511e6f49af" is present in the server response, the email exists in the database - inform the user!
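Putting steps 1 to 4 together, a Node sketch of the client side. The remotesite.com endpoint is the hypothetical one from step 2, assumed to return a JSON array of full hashes (Node 18+ for the global fetch):

const crypto = require("crypto");

async function isPwned(email) {
  const hash = crypto.createHash("md5").update(email).digest("hex");
  const prefix = hash.slice(0, 6); // only the first 24 bits leave the client
  const res = await fetch("http://remotesite.com?query=" + prefix);
  const candidates = await res.json(); // all hashes sharing the prefix
  return candidates.includes(hash); // the real match is decided locally
}

isPwned("user@example.com").then(function (pwned) {
  console.log(pwned ? "found in DB: inform the user!" : "not found");
});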
Now, the obvious problem with this solution is that the user must trust the client-side JavaScript to only transmit what it says it is going to transmit. Sufficiently knowledgeable individuals, however, would be able to verify that the query isn't being leaked (by observing the queries sent to the server). It's not a perfect solution, but it would add to the level of trust if a user could (theoretically) verify that the site functions as it says it does.
What does SO think of this solution? Importantly, does anyone know of any existing examples or discussion of this technique?
NOTE: Both pwnedlist.com and shouldichangemypassword.com are apparently run by reputable people/organizations, and I have no reason to believe otherwise. This is more of a thought exercise.
Services like pwnedlist.com are working with public information. By definition everyone has access to this data, so attempting to secure it is a moot point. An attacker will just download it from The Pirate Bay.
However, using a hash function like this is still easy to break, because it's unsalted and lacks key stretching. In all reality, a message digest function like SHA-512 just isn't the right tool for the job.
You are much better off with a Bloom filter. This allows you to create a blacklist of leaked data without any possibility of recovering the plaintext, because a permutation-based brute force is more likely to find collisions than real plaintext. Lookups and insertions have a cool O(1) complexity, and the table itself takes up much less space, maybe 1/10,000th of the space it would use in a traditional SQL database, though this value varies depending on the error rate you specify.
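For concreteness, a toy Bloom filter sketch. The sizes m and k are illustrative, not tuned for a target error rate, and the hashing uses simple FNV/DJB-style mixes for brevity:

// k bit positions are derived from two hashes via double hashing.
class BloomFilter {
  constructor(mBits = 1 << 20, k = 7) {
    this.m = mBits;
    this.k = k;
    this.bits = new Uint8Array(Math.ceil(mBits / 8));
  }
  _indexes(str) {
    let h1 = 2166136261, h2 = 5381;
    for (const c of str) {
      h1 = Math.imul(h1 ^ c.codePointAt(0), 16777619) >>> 0; // FNV-1a style
      h2 = (Math.imul(h2, 33) + c.codePointAt(0)) >>> 0;     // DJB2 style
    }
    // i-th index is (h1 + i*h2) mod m.
    return Array.from({ length: this.k }, (_, i) => (h1 + i * h2) % this.m);
  }
  add(str) {
    for (const idx of this._indexes(str)) this.bits[idx >> 3] |= 1 << (idx & 7);
  }
  mightContain(str) {
    return this._indexes(str).every((idx) => this.bits[idx >> 3] & (1 << (idx & 7)));
  }
}

const leaked = new BloomFilter();
leaked.add("user@example.com");
console.log(leaked.mightContain("user@example.com"));  // true
console.log(leaked.mightContain("other@example.com")); // false (with high probability)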