I'm currently building a single page application using ReactJS.
I read that one of the reasons for not using localStorage is because of XSS vulnerabilities.
Since React escapes all user input, would it now be safe to use localStorage?
In most modern single-page applications, we do indeed have to store the token somewhere on the client side (the most common use case: keeping the user logged in after a page refresh).
There are two options available: Web Storage (sessionStorage, localStorage) and a client-side cookie. Both options are widely used, but widespread use doesn't make them secure.
Tom Abbott summarizes JWT sessionStorage and localStorage security well:
Web Storage (localStorage/sessionStorage) is accessible through JavaScript on the same domain. This means that any JavaScript running on your site will have access to web storage, and because of this can be vulnerable to cross-site scripting (XSS) attacks. XSS, in a nutshell, is a type of vulnerability where an attacker can inject JavaScript that will run on your page. Basic XSS attacks attempt to inject JavaScript through form inputs, where the attacker puts <script>alert('You are Hacked');</script> into a form to see if it is run by the browser and can be viewed by other users.
To prevent XSS, the common response is to escape and encode all untrusted data. React (mostly) does that for you! Here's a great discussion about how much XSS protection React is responsible for.
But that doesn't cover all possible vulnerabilities! Another potential threat is the use of JavaScript hosted on CDNs or other outside infrastructure.
Here's Tom again:
Modern web apps include 3rd party JavaScript libraries for A/B testing, funnel/market analysis, and ads. We use package managers like Bower to import other peoples’ code into our apps.
What if only one of the scripts you use is compromised? Malicious JavaScript can be embedded on the page, and Web Storage is compromised. These types of XSS attacks can get everyone’s Web Storage that visits your site, without their knowledge. This is probably why a bunch of organizations advise not to store anything of value or trust any information in web storage. This includes session identifiers and tokens.
Therefore, my conclusion is that as a storage mechanism, Web Storage does not enforce any security standards during transfer. Whoever reads from Web Storage and uses it must do their due diligence to ensure they always send the JWT over HTTPS and never over HTTP.
Basically, it's OK to store your JWT in localStorage, and I think this is a good approach.
If we are talking about XSS, including XSS via a compromised CDN, there is also the risk of the attacker capturing your users' logins and passwords directly. Storing the token in local storage will at least prevent CSRF attacks.
You need to be aware of both attacks and choose accordingly. And these two are not all you need to be aware of; just remember: YOUR ENTIRE APP IS ONLY AS SECURE AS THE LEAST SECURE POINT OF YOUR APP.
Once again: storing the token is OK; being vulnerable to XSS, CSRF, etc. isn't.
I know this is an old question, but as @mikejones1477 said, modern front-end libraries and frameworks escape text, giving you protection against XSS. The reason cookies are not a secure method for storing credentials is that cookies don't prevent CSRF, while localStorage does (also remember that cookies are accessible by JavaScript too, so XSS isn't the big problem here); this answer summarizes why.
The reason storing an authentication token in local storage and manually adding it to each request protects against CSRF is that key word: manual. Since the browser does not automatically send that auth token, if I visit evil.example and it manages to fire a POST to http://example.com/delete-my-account, it will not be able to attach my auth token, so the request is ignored.
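As a rough sketch of that pattern (the storage key name and the endpoint are just placeholders), the token is read from localStorage and attached by hand, which is exactly what a cross-site attacker cannot do:

// Hypothetical names: 'jwt' as the storage key, example.com as the API.
const token = localStorage.getItem('jwt');

fetch('https://example.com/delete-my-account', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer ' + token } // attached manually
});
// A page on evil.example can't read our localStorage, so it can't
// forge this header; without it, the server rejects the request.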
Of course an httpOnly cookie is the holy grail, but you can't access it from React or any other JS framework, and besides, you still have a CSRF vulnerability. My recommendation would be localStorage, or, if you want to use cookies, make sure you implement some solution to the CSRF problem, as Django does.
Regarding CDNs, make sure you're not using some obscure CDN; CDNs like those Google or Bootstrap provide are maintained by the community and don't contain malicious code, and if you're not sure, you're free to review them.
A way to look at this is to consider the level of risk or harm.
Are you building an app with no users, a POC/MVP? Are you a startup that needs to get to market and test your app quickly? If yes, I would probably just implement the simplest solution and keep the focus on finding product-market fit. Use localStorage, as it's often easier to implement.
Are you building a v2 of an app with many daily active users, or an app that people/businesses are heavily dependent on? Would getting hacked mean little or no room for recovery? If so, I would take a long hard look at your dependencies and consider storing token information in an http-only cookie.
Both localStorage and cookie/session storage have their own pros and cons.
As stated in the first answer: if your application has an XSS vulnerability, neither will protect your user. Since most modern applications have a dozen or more dependencies, it becomes increasingly difficult to guarantee that none of them is vulnerable to XSS.
If your application does have an XSS vulnerability and a hacker is able to exploit it, the hacker will be able to perform actions on behalf of your user either way: he can perform GET/POST requests using a token retrieved from localStorage, or perform POST requests that carry the token automatically if it is stored in an http-only cookie.
The only additional downside of storing your token in local storage is that the hacker will be able to read your token.
One thing to keep in mind is whether the JWTs are:
First-party (i.e. simply for accessing your own server's commands)
Third-party (i.e. a JWT for Google, Facebook, Twitter, etc.)
If the JWT is first-party:
Then it doesn't matter much whether you store the JWT in local storage or in a secured cookie (i.e. HttpOnly, SameSite=Strict, and Secure) [assuming your site is already using HTTPS, which it should be].
This is because, assuming an XSS attack succeeds (i.e. an attacker was able to insert JavaScript code through a JS dependency that is now running in all visitors' browsers), it's "game over" anyway; all the commands that were meant to be secured by JWT verification can now be executed by the attacker simply by having the injected script call all the needed endpoints. Even though the script can't read the JWT itself (because of the cookie's HttpOnly flag), it doesn't matter, because it can just send all the needed commands, and the browser will happily send the JWT along with them.
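To make that concrete, here is a minimal sketch of what an injected script could do against a hypothetical endpoint; it never sees the HttpOnly cookie, but the browser attaches it anyway:

// Injected attacker code: it cannot read the JWT cookie, but
// 'credentials: include' makes the browser send it regardless.
fetch('https://example.com/api/transfer-funds', { // hypothetical endpoint
  method: 'POST',
  credentials: 'include',
  body: JSON.stringify({ to: 'attacker-account', amount: 1000 })
});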
Now while the XSS-attack situation is arguably "game over" either way (whether local-storage or secured cookie), cookies are still a little better, because the attacker is only able to execute the attacks if/when the user has the website open in their browser.
This causes the following "annoyances" for the attacker:
"My XSS injection worked! Okay, time to collect private data on my boss and use it as blackmail. Dang it! He only ever logs in while I'm here at work. I'll have to prepare all my code ahead of time, and have it run within the three minutes he's on there, rather than getting to poke around into his data on the platform in a more gradual/exploratory way."
"My XSS injection worked! Now I can change the code to send all Bitcoin transfers to me instead! I don't have any particular target in mind, so I don't need to wait for anyone. Man though, I wish I could access the JWT token itself -- that way I could silently collect them all, then empty everyone's wallets all at once. With these cookie-protected JWTs, I may only be able to hijack a few dozen visitors before the devs find out and suspend transfers..."
"My XSS injection worked! This'll give me access to even the data that only the admins can see. Hmmm, unfortunately I have to do everything through the user's browser. I'm not sure there's a realistic way for me to download those 3gb files using this; I start the download, but there are memory issues, and the user always closes the site before it's done! Also, I'm concerned that client-side retransfers of this size might get detected by someone."
If the JWT is third-party:
In this case, it really depends on what the third-party JWTs allow the holder to do.
If all they do is let someone "access basic profile information" on each user, then it's not that bad if attackers can access it; some emails may leak, but the attacker could probably get that anyway by navigating to the user's "account page" where that data is shown in the UI. (having the JWT token just lets them avoid the "annoyances" listed in the previous section)
If, instead, the third-party JWTs let you do more substantial things -- such as have full access to their cloud-storage data, send out messages on third-party platforms, read private messages on third-party platforms, etc, then having access to the JWTs is indeed substantially worse than just being able to "send authenticated commands".
This is because, when the attacker can't access the actual JWT, they have to route all commands through your 1st-party server. This has the following advantages:
Limited commands: Because all the commands are going through your server, attackers can only execute the subset of commands that your server was built to handle. For example, if your server only ever reads/writes from a specific folder in a user's cloud storage, then the attacker has the same limitation.
Easier detection: Because all the commands are going through your server, you may be able to notice (through logs, sudden uptick in commands, etc.) that someone has developed an XSS attack. This lets you potentially patch it more quickly. (if they had the JWTs themselves, they could silently be making calls to the 3rd-party platforms, without having to contact your servers at all)
More ways to identify the attacker: Because the commands are going through your server, you know exactly when the commands are being made, and what ip-address is being used to make them. In some cases, this could help you identify who is doing the attacks. The ip-address is the most obvious way, though admittedly most attackers capable of XSS attacks would be aware enough to use a proxy.
A more advanced identification approach might be to, say, have a special message pop up that is unique for each user (or, at least, split into buckets), of such a nature that the attacker (when he loads up the website from his own account) will see that message, and try to run a new command based on it. For example, you could link to a "fake developer blog post" talking about some "new API" you're introducing, which allows users to access even more of their private data; the sneaky part is that the URL for that "new API" is different per user viewing the blog post, such that when the API is attempted to be used (against the victim), you know exactly who did it. Of course, this relies on the idea that the attacker has a "real account" on the site alongside the victim, and could be tempted/fooled by this sort of approach (eg. it won't work if the attacker knows you're onto him), but it's an example of things you can do when you can intercept all authenticated commands.
More flexible controlling: Lets say that you've just discovered that someone deployed an XSS attack on your site.
If the attackers have the 3rd-party JWTs themselves, your options are limited: you have to globally disable/reset your OAuth/JWT configuration for all 3rd-party platforms. This causes serious disruption while you try to figure out the source of the XSS attack, as no one is able to access anything from those 3rd-party platforms. (including your own server, since the JWT tokens it may have stored are now invalid)
If the JWT tokens are instead protected in http-only cookies, you have more options: you can simply modify your server to "filter out" any reads/writes that are potentially dangerous. In some cases adding this "filtering" is a quick and easy process, allowing your site to continue in "read-only"/"limited" mode without disrupting everything; in other cases, things may be complex enough that it's not worth trusting the filter code for security. The point, though, is that you have more options.
For example, maybe you don't know for sure that someone has deployed an XSS attack, but you suspect it. In this case, you may not want to invalidate the JWT tokens of every user (including those your server is using in the background) simply on the suspicion of an XSS attack (it depends on your suspicion level). Instead, you can just "make things read-only for a while" while you look into the issue more closely. If it turns out nothing is wrong, you can just flip a switch and re-enable writes, without everyone having to log back in and such.
Anyway, because of these four benefits, I've decided to always store third-party JWTs in "secured cookies" rather than local storage. While currently the third-party JWTs have very limited scopes (such that it's not so big a deal if they are stolen), it's good future-proofing to do this, in case I'd like my app to request access to more privileged functionalities in the future (eg. access to the user's cloud storage).
Note: The four benefits above (for storing third-party JWTs in secured cookies) may also partially apply for first-party JWTs, if the JWTs are used as authentication by multiple backend services, and the domains/ip-addresses of these other servers/services are public knowledge. In this case, they are "equivalent to third-party platforms", in the sense that "http-only cookies" restrict the XSS attacker from sending direct commands to those other servers, bringing part of the benefits of the four points above. (it's not exactly the same, since you do at least control those other servers, so you can activate read-only mode for them and such -- but it'll still generally be more work than making those changes in just one place)
I'm disturbed by all the answers suggesting not to store in local storage because it is susceptible to an XSS attack or a malicious library. Some of these even go into long-winded discussions, even though the answer is pretty short/straightforward, which I'll get to shortly.
Suggesting that is the equivalent of saying “Don’t use a frying pan to cook your food because if you end up drunk one night and decide to fry, you’ll end up burning yourself and your house”.
If the JWT gets leaked due to an XSS attack or a malicious library, then the site owner has a bigger problem: their site is susceptible to XSS attacks or is using a malicious library.
The answer: if you’re confident your site doesn’t have those vulnerabilities, go for it.
Ref: https://auth0.com/docs/security/data-security/token-storage#browser-local-storage-scenarios
It is not safe if you use CDNs:
Malicious JavaScript can be embedded on the page, and Web Storage is compromised. These types of XSS attacks can get everyone’s Web Storage that visits your site, without their knowledge. This is probably why a bunch of organizations advise not to store anything of value or trust any information in web storage. This includes session identifiers and tokens.
via Stormpath
Any script you require from the outside could potentially be compromised and could grab any JWTs from your clients' storage and send personal data back to the attacker's server.
localStorage is designed to be accessible by JavaScript, so it doesn't provide any XSS protection. As mentioned in other answers, there are a bunch of possible ways to perform an XSS attack, from which localStorage is not protected by default.
However, cookies have security flags which protect against XSS and CSRF attacks. The HttpOnly flag prevents client-side JavaScript from accessing the cookie, the Secure flag only allows the browser to transfer the cookie over SSL, and the SameSite flag ensures that the cookie is sent only to its origin. Although, I just checked, and SameSite is currently supported only in Opera and Chrome, so to protect against CSRF it's better to use other strategies, for example sending an encrypted token in another cookie with some public user data.
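For illustration, here is a minimal sketch of issuing a cookie with those flags from a Node/Express server; the cookie name and the issueJwtFor helper are placeholders:

const express = require('express');
const app = express();

app.post('/login', (req, res) => {
  const signedJwt = issueJwtFor(req); // placeholder for your auth logic
  res.cookie('token', signedJwt, {
    httpOnly: true,    // not readable from client-side JavaScript
    secure: true,      // only sent over HTTPS
    sameSite: 'strict' // not sent on cross-site requests
  });
  res.sendStatus(200);
});

app.listen(3000);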
So cookies are a more secure choice for storing authentication data.
Is neither localStorage nor an httpOnly cookie acceptable? With regard to a compromised 3rd-party library, the only solution I know of that will reduce/prevent sensitive information from being stolen would be enforced Subresource Integrity.
Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match.
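In practice that means pinning the expected hash of the third-party file. A sketch of loading a CDN script with SRI from JavaScript, where the URL and the hash value are placeholders:

// The integrity value is the base64-encoded hash of the exact file
// you audited; the URL and hash below are placeholders.
const script = document.createElement('script');
script.src = 'https://cdn.example.com/lib.min.js';
script.integrity = 'sha384-...';   // hash of the expected contents
script.crossOrigin = 'anonymous';  // required for cross-origin SRI checks
document.head.appendChild(script);
// If the CDN file is tampered with, the hash won't match and the
// browser refuses to execute it.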
As long as the compromised 3rd party library is active on your website, a keylogger can start collecting info like username, password, and whatever else you input into the site.
An httpOnly cookie will prevent access from another computer but will do nothing to prevent the hacker from manipulating the user's computer.
There's a useful article written by Dr. Philippe De Ryck which gives insight into the true impact of vulnerabilities, particularly XSS.
This article is an eye-opener!
In a nutshell, the primary concern of the developer should be to protect the web application against XSS; they shouldn't worry too much about which type of storage area is used.
Dr. Philippe recommends the following three steps:
Don't worry too much about the storage area. Saving an access token in localStorage will save the developer a massive amount of time for the development of the next phases of the application.
Review your app for XSS vulnerabilities. Perform a thorough code review and learn how to avoid XSS within the scope of your templating framework.
Build a defense-in-depth mechanism against XSS. Learn how you could further lock down your application, e.g. utilising Content Security Policy (CSP) and HTML5 sandboxing (see the sketch below).
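As an illustration of that last step, a restrictive Content Security Policy can be set as a response header. A minimal sketch, assuming a Node/Express server (the exact policy depends on your app):

const express = require('express');
const app = express();

// Disallow inline scripts and restrict all resources to our own origin.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self'; object-src 'none'"
  );
  next();
});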
Remember that once you're vulnerable to XSS, it's game over!
TL;DR
Both work, but using a cookie with httpOnly is far safer than using localStorage, as any malicious JavaScript code introduced by XSS can read localStorage.
I'm coming late to the discussion, but with the advantage of more mature and modern auth protocols like OpenID Connect.
TL;DR: The preferred method is to store your JWT in memory: not in a cookie, and not in localStorage.
Details
You want to decouple the responsibility of authenticating users from the rest of the work your app does. Auth is hard to get right, and the handful of teams that spend all their time thinking about this stuff can worry about the details you and I will never get right.
Establish a dedicated Identity Provider for your app, and use the OpenID Connect protocol to authenticate with it. This could be a provider like Google, Microsoft, or Okta, or it could be a lightweight Identity Server that federates to one or more of those other services.
Use the Authorization Code Flow to let the user authenticate and get the access token to your app. Use a respected client library to handle the OpenID Connect details, so you can just have the library notify your app when it has a valid token, when a new valid token has been obtained via refresh, or when the token cannot be refreshed (so the user needs to authenticate again). The library should be configured (probably by default) to avoid storing the token at all.
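As an illustration, here is roughly what that setup looks like with the oidc-client-ts library; the names and URLs are placeholders, and you should check the library's docs for the exact API:

import { UserManager, WebStorageStateStore, InMemoryWebStorage } from 'oidc-client-ts';

const userManager = new UserManager({
  authority: 'https://id.example.com',              // your Identity Provider
  client_id: 'my-spa',                              // placeholder client id
  redirect_uri: 'https://app.example.com/callback', // placeholder callback URL
  response_type: 'code',                            // Authorization Code Flow
  scope: 'openid profile',
  // Keep tokens in memory only, not in localStorage or cookies.
  userStore: new WebStorageStateStore({ store: new InMemoryWebStorage() })
});

// Redirect to the Identity Provider to authenticate:
// userManager.signinRedirect();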
FAQ
What happens when someone refreshes the page? I don't want to make them log in again.
When the app first loads, it should always redirect the user to your Identity Provider. Based on how that identity provider handles things, there's a good chance the user won't have to log in. For example, if you're federating to an identity provider like Google or Microsoft, the user may have selected an option indicating that they are on a trusted device and they want to be remembered. If so, they won't need to log in again for a very long time, long after your auth token would have expired. This is much more convenient for your users.
Then again, if the user indicated they're on a shared device and shouldn't automatically be logged in in the future, you want to force another login: you cannot differentiate between someone who refreshed their browser window and someone who reopened a closed browser and navigated to a page stored in the browser's history.
Isn't the Identity Provider using cookies to keep the user logged in? What about CSRF or XSS attacks there?
Those implementation details are specific to the Identity Provider. If they're using cookies, it's their job to implement Anti-CSRF measures. They are far less likely than you are to use problematic third-party libraries, or import compromised external components, because their app only has one job.
Shouldn't I spend my time addressing XSS attacks instead? Isn't it "game over" if someone injects code into my app?
If it's an either/or proposition, and you have reason to believe your app has XSS or Code Injection vulnerabilities, then those should definitely take precedence. But good security involves following best-practices at multiple levels, for a kind of layered security.
Plus, using a trusted third-party library to connect to trusted third-party security providers should hopefully save you time that you would have spent dealing with a variety of other security-related issues.
It is safe to store your token in localStorage as long as you encrypt it. Below is a compressed code snippet showing one of many ways you can do it.
import SimpleCrypto from 'simple-crypto-js';

// Encrypt the token with a secret key before persisting it.
const saveToken = (token = '') => {
  const simpleCrypto = new SimpleCrypto('PRIVATE_KEY_STORED_IN_ENV_FILE');
  const encryptedToken = simpleCrypto.encrypt(token);
  localStorage.setItem('token', encryptedToken);
};
Then, before using your token, decrypt it using the same PRIVATE_KEY_STORED_IN_ENV_FILE.
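A matching decrypt sketch, assuming the same simple-crypto-js API and key as the snippet above:

import SimpleCrypto from 'simple-crypto-js';

// Read the encrypted token back and decrypt it with the same key.
const loadToken = () => {
  const encryptedToken = localStorage.getItem('token');
  if (!encryptedToken) return null;
  const simpleCrypto = new SimpleCrypto('PRIVATE_KEY_STORED_IN_ENV_FILE');
  return simpleCrypto.decrypt(encryptedToken);
};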
We make a lot of AJAX calls from our web application, and all of them are HTTPS requests (our IT team mandates it); yes, we have opened up the headers to allow cross-domain requests. But the problem is that we use our own custom certificate internally for all of our apps, so I basically get an error when we make an AJAX call:
Failed to load resource: net::ERR_INSECURE_RESPONSE
If I open the URL in the browser and accept the certificate, the AJAX calls work fine. So my question is: is there a way to handle this via JavaScript? Or would adding a trusted certificate resolve this issue? Also, even after we add a trusted certificate, would I face any further issues with AJAX?
Note: we are testing all of this in the Chrome browser.
Yes, adding the certificate to the user's list of trusted certificates would solve your problem. As long as your server is set up to serve CORS correctly (and it seems like it is, based on your success after accepting the certificate), the certificate is your only problem.
HTTPS provides two security benefits:
You know that the communication channel is secure between you and whoever you're talking to (e.g., when Alice talks to Bob, she knows no one else can listen in)
You know that whoever you're talking to is really who they claim to be (e.g., when Alice talks to bob.com she knows it's really the server she knows as bob.com and not an impostor)
The first benefit is automatic. It is guaranteed by the cryptographic protocol and cannot be stripped away (except by very complex attacks on security holes that are usually fixed very quickly).
The second benefit is not strictly a technical benefit: it is a matter of trust. Servers use certificates to link their public key (which grants the first security component) with their own identity. Those public-key certificates are signed by user-trusted institutions called certificate authorities (CAs).
When you try to connect to bob.com, the site gives you a public key to perform secure communication. You're skeptical and say, "Sure, this public key will allow me to talk to you securely, but how do I know you're really bob.com?" The server then gives you a CA-signed certificate that says:
We, the VeriSign certificate authority, who are widely trusted to be thorough in our investigations, hereby attest that the following public key really does truly belong to bob.com:
Public key: ZGdlZGhydGhyaHJ0eWp5cmo...
Verifiably signed,
VeriSign
There are many major certificate authorities who are widely trusted (by operating systems and browsers) to issue a signed certificate if and only if they have reliably verified that a public key really does belong to a domain name. Because your OS already trusts these signatures, we can use them without any confirmation.
When you use a self-signed certificate, no trusted CA has verified that your cert really belongs to your domain name. Anyone can produce a document that says:
Hey, man, this is 100% definitely the public key for bob.com:
Public key: WGdoZmpodHlqa2p1aXl1eWk...
Just trust us, the guys who wrote this note, okay? We're DEFINITELY sure that's the key. The guys who wrote this note are the guys who run bob.com. We promise.
Signed,
The guys who wrote this note
This doesn't have a trusted signature on it. The user must decide if they wish to accept the certificate's attestation that the public key really belongs to bob.com.
Making this decision meaningfully is a difficult process -- you need to inspect the certificate and find a preexisting trusted record of the public key somewhere (or contact a site administrator or help desk representative). In effect, you need to have your trust in the certificate come from somewhere, because it isn't coming from the trusted signature of a major certificate authority.
Allowing JavaScript to automatically make this determination of trust makes no sense. The entire point is that the user must decide whether or not the certificate is trustworthy and then make the appropriate modifications to the system's certificate store.
This could hypothetically be done for Ajax requests, but it wouldn't be pretty. You'd need to show the user a browser-UI screen asking if the user wants to trust the self-signed certificate for whatever domain the script is trying to access. Aside from being very disruptive to the browsing experience, this could be dangerously confusing: if I'm on example.com, and suddenly (due to the actions of a script I didn't know about) I'm asked to trust a certificate for bob.com, I might accept it purely based on my trust of example.com.
Either add the certificate to your systems' trusted certificate list, or get it signed by a CA that is already trusted by your systems.
Adding the trusted certificate should be enough if all the AJAX calls are made to the same domain. If they're not on the same domain, you can use JSONP to get the data with AJAX, and you won't get the error (see the sketch below).
If you would like to know more about JSONP, here is a useful link: http://www.sitepoint.com/jsonp-examples/
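For reference, a minimal JSONP sketch; the endpoint is a placeholder, and the server is assumed to wrap its JSON response in the callback named in the query string:

// The server is expected to respond with: handleData({...});
function handleData(data) {
  console.log('received', data);
}

const script = document.createElement('script');
script.src = 'https://api.example.com/data?callback=handleData'; // placeholder
document.body.appendChild(script);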
Some odd requests have been appearing in our logs since ~October 20, 2014. They've increased to about a few dozen a day, so while not a big problem, it's still interesting to find out the reason.
Earlier ones:
REQUEST[/en/undefinedsf_main.jsp?clientVersion=null&dlsource=null&CTID=null&userId=userIdFail&statsReporter=false] REFERER[http://colnect.com/en/coins]
REQUEST[/fr/undefined/GoogleExtension/deals.html?url=http://colnect.com&subid=STERKLY&appName=HypeNet&pos=2&frameId=buaovbluurbavptkwyaybzjrqweypsbavwrviv] REFERER[http://colnect.com/fr]
REQUEST[/br/stamps/undefined49507173c45043eba6dfb9da540e52de&chnl=slmbBRex&evt=DailyPing&prd=vbates&seg=1&ext=1&rnd=65983fb77b62e25cc2a8ef15af18273d] REFERER[http://colnect.com/br/stamps/countries]
Some current ones:
REQ[/ru/collectors/collector/undefined] REF[http://colnect.com/ru/collectors/collector/jokitsos]
REQ[/th/collectors/collector/undefined] REF[http://colnect.com/th/collectors/collector/VRABEC]
REQUEST[/en/account/undefined] REFERER[http://colnect.com/en/account/request_password]
REQUEST[/pt/stamps/undefined] REFERER[http://colnect.com/pt/stamps/years]
Some requests are by logged in members and some not.
I'd guess some JavaScript in their browsers is trying to call a URL built from an uninitialized variable, hence the "undefined".
Reasons may be similar to Odd requests to non-existing pages that all include "6_S3_" (perhaps malware) but I'm wondering if this might be a different reason.
I doubt it's a bug in our own client-side JavaScript, as that would generate far more than a few dozen such requests a day from about a million daily page views.
Any ideas? Is it worth pursuing?
This is a big concern, but it's not coming from you.
These are JavaScript injection attacks (client-machine malware) using a self-signed root certificate.
Specifically, sf_main and deals.html have been linked to Superfish, which has been shipping on Lenovo machines. As Lenovo has been pushing its new lines of PCs, reports of the attacks have blown up recently.
These man-in-the-middle attacks start by hijacking the client's requests and then injecting HTML and JavaScript.
The reason there are so many undefined symbols is that Superfish, true to its name, is fishing for plugins, extensions, and libraries it can take advantage of, using their expected names, tokens, and paths. This is brute-force XSS.
Oh, no, what can I do??
Little. Not much.
As the requests are being hijacked on the client machine via HTTP request hijacking, you won't know the difference. You could try to "fish" for certain kinds of hostile "indicators", but then you are doing the work of anti-malware software.
Lenovo claims that
SuperFish has completely disabled server side interactions (since January) on all Lenovo products so that the software product is no longer active, effectively disabling SuperFish for all products in the market
While I trust the sincerity of China-based Lenovo, which has serious market interests in the Western world, I wouldn't trust the word of the China-based malware company Superfish.
These attacks are less of a problem for you than for your customers.
Unless you work for a big bank or a popular social networking site, it's highly unlikely that malware like Superfish has targeted you specifically. Your customers' bank and social network accounts are at risk, but not because of anything you did or can do to stop it.
As always, the cure for client-side fishing attacks is good client-side protection.
Any ideas?
There seem to be two different options here:
There are mistakes in your code causing incorrect URLs to be generated
There are (search) bots trying to parse your JavaScript and failing to do it correctly.
(A client side extension is stirring up trouble)
To differentiate between the two, you would need to set up more specific logging. For example, adding the user agents to any log lines containing the string "undefined" will answer this question. If it's your code causing the problem, you would also want to log the Referer header, as it will expose on which pages the faulty URLs are being generated.
Another way to identify the issue: if you have an analytics solution running on your site, such as Google Analytics, you can quite easily limit your report to only URLs containing "undefined". If there are no such requests, you can conclude that it has to be a bot (as a bot wouldn't cause the client-side analytics code to run); otherwise the report provides all the information needed to identify where this problem is being caused.
Lastly, it might be a good idea to include a JavaScript error-logging solution (in its simplest form, a window.onerror handler with an AJAX request to \log.something). If your code is generating undefineds, it's quite likely that some errors are being triggered as well.
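A minimal version of that error logger might look like this; the endpoint path is a placeholder:

// Report every uncaught JS error back to the server for inspection.
window.onerror = function (message, source, lineno, colno) {
  navigator.sendBeacon('/log-js-error', JSON.stringify({
    message, source, lineno, colno,
    page: location.href
  }));
};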
Is it worth pursuing?
If users are actually being served invalid pages, then yes, this is definitely something to be investigated.
Recently I installed Fiddler, and it allowed me to view (decrypt) my SSL requests from any browser.
Although it's not legal, some firewalls also allow installing root certificates, after which the firewall can monitor and track the HTTPS protocol.
We provide sensitive information via SSL. How can we check for and prevent such interception via a proxy or a Fiddler-like tool?
Is there any JavaScript API? There should be something that I can check on page load.
$(function () {
  // isSSLValid() is the hypothetical check being asked about here.
  if (!isSSLValid()) {
    alert('Your traffic is monitored...');
    location.href = '/SSLInstructions.html';
  }
});
Think of it as a mobile or tablet browser, where, as on iOS, there is no way to view the certificate, just an icon.
(This is quite similar to this question on Security.SE.)
As a client, you can verify that your SSL/TLS connection was not intercepted by a MITM proxy (Fiddler or other) by checking its certificate. That's the entire purpose of having certificates to authenticate servers.
You were only able to allow Fiddler to look at the traffic because you chose to trust its certificate. Similarly, MITM proxy servers (mostly used in corporate environments) need to install their CA certificate on the client machines. In environments where this happens, the clients are not really in control of the machine they use anyway: they delegate its administration to whoever controls that proxy.
It's ultimately the sole responsibility of the client to check that (a) SSL/TLS is used and (b) it is used correctly (with a certificate they can trust for the machine they intended to communicate with in the first place). (See this longer explanation on Webmasters.SE for more details.)
How to verify that ssl was not intercepted via proxy etc in browser?
Tell your users not to ignore warnings. If there's a corporate proxy with matching CA certificates installed on their machines, they could in principle look at the details of the certificate. If they don't trust the machine they're using for this, they should use their own, from a network that allows them not to be intercepted.
Mobile devices are indeed quite poor for checking those details unfortunately, but as a server there's not much you can do.
One way to check whether the client received the same server certificate as the one the server sent is to require client-certificate authentication, which will make the client sign the handshake (including the server cert) with its own private key, so the server can check if the signature matches what it expects. This requires a bit more infrastructure to deal with client-certificates (and you'd need to show your users how to use them).
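For what it's worth, here is a sketch of demanding client certificates with Node's built-in https module; the file paths are placeholders, and the CA file is whatever issued your client certificates:

const https = require('https');
const fs = require('fs');

const server = https.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: fs.readFileSync('client-ca.pem'), // CA that issued the client certs
  requestCert: true,                    // ask every client for a certificate
  rejectUnauthorized: true              // refuse handshakes without a valid one
}, (req, res) => {
  // Reaching here means the TLS layer verified the client's certificate,
  // whose signature covers the handshake (including the server cert it saw).
  res.end('client certificate verified');
});

server.listen(8443);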
EDIT: About your comment.
That's a flaw in SSL: we have no way to check whether anyone is watching or not, and it defeats the whole purpose of it.
Not really; it's ultimately always the responsibility of the user to check what/who they're talking to, even in real-life situations. If you fill in the right forms to vote by proxy and delegate this voting power to someone, or if you give your ID and delivery slip to someone to pick up a parcel for you from the post office, it's up to you to make sure that you trust that person. If you give your bank passwords to someone and they phone your bank for you, your bank has no way of telling whether it's you or not: as far as it's concerned, you're identified by those credentials.
Only the user is in a position to check that it's talking to the right server: if not, the legitimate server isn't in contact with the user, so it simply can't give any warning that something wrong is happening.
You simply will never be able to force the users to talk to the right server from your server, because you don't control what they do. They could give their passwords to anyone they want, you wouldn't know. (You can teach them not to do so, at best.)
What you can do is to prevent your server to give data to someone who isn't your user. Following the real-life analogies, this can be done by using mechanisms where you insist on the person to be present (you don't allow someone to act for someone else, even if they turn up with the other person's ID). This can be done with SSL/TLS when using client-certificates: only the holder of the private key can be authenticated, no intermediate party. (Of course, from a practical point of view, users would have to make sure they don't give their private keys.)
How can a user, using one of the major modern browsers, know for sure that he is running my unmodified javascript code even over an untrusted network?
Here is some more info about my situation:
I have a web application that deals with private information.
The login process is an implementation of a password-authenticated key agreement in JavaScript. Basically during login, a shared secret key is established between the client and the server. Once the user logs in all communication with the server is encrypted using the shared key. The system must be safe against ACTIVE man-in-the-middle attacks.
Assuming that my implementation is correct and the user is smart enough not to fall victim to a phishing attack, there remains just one large hole in the system: an attacker can tamper with my application as it is being downloaded and inject code that steals the password. Basically, the entire system relies on the fact that the user can trust the code running on his machine.
I want something similar to signed applets but I would prefer a pure javascript solution, if possible.
Maybe I am misunderstanding your problem, but my first thought is to use SSL. It is designed to ensure that you're talking to the server you think you are, and that no one has modified the content in transit. You do not even have to trust the network in this case, because of the nature of SSL.
The good thing about this approach is that you can fairly easily drop it into your existing web application. In most cases, you can basically configure your HTTP server to use SSL, and change your http:// requests to https://.
This is an old, open question but the answers seemed to not do this justice.
https:// provides integrity, not true identification nor non-repudiation.
I direct you to http://www.matasano.com/articles/javascript-cryptography/
Don't do crypto in JS, because a maliciously injected script can easily grab passwords or alter the library. SJCL is neat, but it offers a blatantly false sense of security (their own words, quoted below):
Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks.
The long-term issue is that JavaScript lacks:
Uniformly working const
The ability to make objects deeply const and not reprototypable.
Code-signing
// codesign: cert:(hex fingerprint) signature:(hex MAC)
Certs would be managed similarly to CA certs. The MAC would be used with appropriate sign/verify constructions.
Crypto and clipboard access are reasons to have native JavaScript plugins (signed, of course).
Getting all JavaScript engines to implement a standard is another matter, but it's doable, and it's absolutely necessary to end a large swath of malware.
You could have an external JavaScript file which takes an MD5 hash of your login JS and sends an AJAX request to the server to verify that it is correct and up to date. Use basic security or encryption practices here (public/private keys or some other method) to be sure that the response came from your server.
You can then confidently display to the user that the client-side scripts are verified, and allow the login script to proceed.
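A rough sketch of that self-check; the script path and verification endpoint are placeholders, and since the browser's SubtleCrypto API doesn't offer MD5, SHA-256 stands in for it here:

// Fetch our own login script, hash it, and ask the server to confirm
// the hash matches the currently deployed version.
async function verifyLoginScript() {
  const source = await (await fetch('/js/login.js')).text(); // placeholder path
  const digest = await crypto.subtle.digest(
    'SHA-256', new TextEncoder().encode(source)
  );
  const hex = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0')).join('');
  const res = await fetch('/verify-script?hash=' + hex); // placeholder endpoint
  return (await res.json()).valid; // server says whether the hash is current
}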