I have a problem: people are cloning my website front end and imitating calls to my API from their own domains to abuse my service. The solution I came up with is for the Angular client to check the URL it is running on, encrypt it, and add it as a header to the API call, and to obfuscate the JS code to prevent reverse engineering. This way the API will receive an encrypted header and can make sure that the domain is the proper one.
So on the client side
headers.append('CustomHeader', this.encryptDomain());
and on the server side
var domainEncrypted = Request.Content?.Headers?.GetValues("CustomHeader").FirstOrDefault();
var domainPlain = Decrypt(domainEncrypted);
if (domainPlain != myDomain)
{
return BadRequest();
}
Can you please help me with code samples for matching JS and C# encrypt and decrypt algorithms, so that encryptDomain works on the JS side and Decrypt works on the C# side? I am aware that this is not a perfect solution, but I want to try. And if anyone has a better idea, you are welcome.
Edit: apparently what I want to do is similar to JScrambler's domain lock feature.
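For illustration, here is a rough sketch of the direction I have in mind for encryptDomain, assuming AES-GCM via the browser's Web Crypto API and a shared key hidden in the obfuscated bundle (the key constant and the payload format are placeholders); the C# side would have to mirror the same algorithm, key and IV handling, e.g. with AesGcm from System.Security.Cryptography:

// Sketch only: encrypt the current origin with AES-GCM using the Web Crypto API.
// KEY_BYTES is a placeholder for a 256-bit key that the obfuscated bundle would embed.
const KEY_BYTES = new Uint8Array(32);

async function encryptDomain(): Promise<string> {
  const key = await crypto.subtle.importKey('raw', KEY_BYTES, 'AES-GCM', false, ['encrypt']);
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV for every request
  const plaintext = new TextEncoder().encode(window.location.origin);
  const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext);
  // Prepend the IV so the server can decrypt, then base64-encode for the header value.
  const payload = new Uint8Array(iv.length + ciphertext.byteLength);
  payload.set(iv);
  payload.set(new Uint8Array(ciphertext), iv.length);
  return btoa(String.fromCharCode(...Array.from(payload)));
}

Note that encryptDomain here is async, so the header would have to be appended after awaiting it.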
TLDR
There is no guaranteed way to prevent communication with your API from different (cloned) clients in cases where white-lists of IP addresses can't/shouldn't be used.
Why
Think about it this way. You have a server with some identification rule: the client should present some identifier that marks it as trusted. In your question, that identifier is a domain.
The domain is public information that can be passed in an HTTP header or in the body of your request. That is easy, but it is just as easy for clients to replace this information on their side.
And if you use any type of cryptography to provide a more secure identification mechanism, you are only making it harder to break and once again pretend to be a trusted client, because every mechanism you use on the client side can be reverse-engineered by a hacker. Just look at this question.
One thing you can use for guaranteed access restriction is a white-list of IP addresses on the server side, because the IP address is part of the TCP/IP transport-level protocol, which has a "handshake" process that identifies the communicating endpoints to each other, and that is quite hard to spoof. Check this question for details.
So what can you do?
CORS
Setting up a CORS policy is a first step toward trusted client-server communication. Most browsers support CORS policies, but of course the client may not be a browser. The question was not about browser-server communication, but I mention it because a browser is a client too.
Client-side encryption
You can use encryption, but I don't see much reason to, because any request to the server can be read through your legitimate client (the website). So even if you encrypt it, anyone has the key and the crypto algorithm on their side and can pretend to be a trusted client. But if you want to...
You need to create a unique key for every request to make the pretenders' lives a little harder. To do that you need a few ingredients:
A public key for key generation (encrypted) on the client side
Obfuscated key-generation JS code
A private key to decrypt the generated key on the server side
JS-side RSA crypto libraries can be found easily (for example)
Obfuscation libraries can also be found with a quick search (like this)
Server-side decryption can be done with the System.Security.Cryptography namespace if you use a C# backend.
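A minimal sketch of that idea on the JS side, assuming the Web Crypto API and an RSA public key shipped to the (obfuscated) client as a base64-encoded SPKI blob; all names are placeholders:

// Sketch: generate a fresh per-request key and encrypt it with the server's RSA public key.
declare const PUBLIC_KEY_B64: string; // hypothetical base64 SPKI public key embedded in the bundle

async function makeRequestKey(): Promise<string> {
  const der = Uint8Array.from(atob(PUBLIC_KEY_B64), c => c.charCodeAt(0));
  const publicKey = await crypto.subtle.importKey(
    'spki', der, { name: 'RSA-OAEP', hash: 'SHA-256' }, false, ['encrypt']);
  const requestKey = crypto.getRandomValues(new Uint8Array(32)); // unique for this request
  const encrypted = await crypto.subtle.encrypt({ name: 'RSA-OAEP' }, publicKey, requestKey);
  return btoa(String.fromCharCode(...Array.from(new Uint8Array(encrypted)))); // header value
}

On the C# side the matching private key (for example via RSA.ImportPkcs8PrivateKey and Decrypt with RSAEncryptionPadding.OaepSHA256) would recover the per-request key.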
Basically, the more complex your key-generation algorithm and the more obfuscated your code, the harder it is for a hacker to pretend to be a trusted client. But as I said, there is no guaranteed way to completely identify a trusted client.
You cannot prevent people from copying your website's FE assets... They are supposed to be publicly available. You could try to make it a little harder by splitting your built app into more chunks (with Angular's lazy loading or by manipulating webpack's config). Still, browsers require the code in plain text, so although this makes copying a little harder, it does not prevent it.
When we build Angular for production, it already obfuscates the code through its optimizations (minification, tree shaking, and so on).
To mitigate the problem of people misusing your server resources, you need to implement robust back-end request authorization practices and some misuse detection.
Configuring CORS will not help here since, as you reported, attackers are using back-end proxies.
Make sure your request authentication is solid. A market-standard approach is a JWT embedded in the Authorization header of each request. It is simple, reliable, and resource-inexpensive.
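For example, in the Angular client from the question, a minimal HTTP interceptor (names and token retrieval are illustrative) could attach the JWT to every outgoing request, leaving validation entirely to the back end:

import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    const token = this.getToken(); // however your app obtains the JWT after login
    const authReq = token
      ? req.clone({ setHeaders: { Authorization: `Bearer ${token}` } })
      : req;
    return next.handle(authReq);
  }
  private getToken(): string | null { return null; /* placeholder */ }
}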
I would also recommend implementing request throttling, but that is a separate question.
If your authentication is already solid, you would need to detect when your real users are misusing your system. There are many tools to monitor traffic (like Azure's), but there is no umbrella definition of "unusual traffic"; detecting it is something you would need to custom-build for the specifics of your system. Once you have a network traffic tool in place, that should help you get started.
A couple of solutions for you. Firstly, you can block requests by applying a CORS policy on the server. If you still want to do it in code, then you can block on the basis of the hostname in C# like this:
var hostname = requestContext.HttpContext.Request.Url.Host;
if (hostname != myDomain)
{
return BadRequest();
}
I'm currently building a single page application using ReactJS.
I read that one of the reasons for not using localStorage is XSS vulnerabilities.
Since React escapes all user input, would it now be safe to use localStorage?
In most of the modern single page applications, we indeed have to store the token somewhere on the client side (most common use case - to keep the user logged in after a page refresh).
There are two options available: Web Storage (sessionStorage, localStorage) and a client-side cookie. Both are widely used, but that doesn't mean they are very secure.
Tom Abbott summarizes JWT sessionStorage and localStorage security well:
Web Storage (localStorage/sessionStorage) is accessible through JavaScript on the same domain. This means that any JavaScript running on your site will have access to web storage, and because of this can be vulnerable to cross-site scripting (XSS) attacks. XSS, in a nutshell, is a type of vulnerability where an attacker can inject JavaScript that will run on your page. Basic XSS attacks attempt to inject JavaScript through form inputs, where the attacker puts <script>alert('You are Hacked');</script> into a form to see if it is run by the browser and can be viewed by other users.
To prevent XSS, the common response is to escape and encode all untrusted data. React (mostly) does that for you! Here's a great discussion about how much XSS protection React is responsible for.
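As a small illustration with hypothetical components: values interpolated into JSX are escaped by React, whereas dangerouslySetInnerHTML deliberately opts out of that protection and should only receive markup you have sanitized yourself:

// Interpolated strings are escaped: "<script>" renders as text and is never executed.
const Comment = ({ text }: { text: string }) => <p>{text}</p>;

// This bypasses React's escaping, so it must only be used with already-sanitized markup.
const RawHtml = ({ html }: { html: string }) => <div dangerouslySetInnerHTML={{ __html: html }} />;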
But that doesn't cover all possible vulnerabilities! Another potential threat is the usage of JavaScript hosted on CDNs or outside infrastructure.
Here's Tom again:
Modern web apps include 3rd party JavaScript libraries for A/B testing, funnel/market analysis, and ads. We use package managers like Bower to import other peoples’ code into our apps.
What if only one of the scripts you use is compromised? Malicious JavaScript can be embedded on the page, and Web Storage is compromised. These types of XSS attacks can get everyone’s Web Storage that visits your site, without their knowledge. This is probably why a bunch of organizations advise not to store anything of value or trust any information in web storage. This includes session identifiers and tokens.
Therefore, my conclusion is that as a storage mechanism, Web Storage does not enforce any secure standards during transfer. Whoever reads Web Storage and uses it must do their due diligence to ensure they always send the JWT over HTTPS and never HTTP.
Basically it's OK to store your JWT in your localStorage.
And I think this is a good approach.
If we are talking about XSS, including XSS via a compromised CDN, there is also the risk of your client's login/password being captured. Storing data in local storage at least prevents CSRF attacks.
You need to be aware of both and choose what you want. These two attacks are not all you need to be aware of; just remember: YOUR ENTIRE APP IS ONLY AS SECURE AS ITS LEAST SECURE POINT.
Once again: storing the token is OK; being vulnerable to XSS, CSRF, etc. isn't.
I know this is an old question, but as #mikejones1477 said, modern front-end libraries and frameworks escape text, giving you protection against XSS. The reason cookies are not a secure method for credentials is that cookies don't prevent CSRF, whereas localStorage does (also remember that cookies are accessible from JavaScript too, so XSS isn't the big problem here); this answer summarizes why.
The reason that storing an authentication token in local storage and manually adding it to each request protects against CSRF is that key word: manual. Since the browser does not automatically send that auth token, if I visit evil.example and it manages to send a POST to http://example.com/delete-my-account, it will not be able to include my auth token, so the request is ignored.
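A bare-bones sketch of that manual pattern (the endpoint and storage key are hypothetical):

// The token is only attached because our own code explicitly adds it, so a forged
// cross-site request from evil.example has no way to include it.
async function deleteMyAccount(): Promise<void> {
  const token = localStorage.getItem('authToken'); // hypothetical storage key
  await fetch('https://example.com/delete-my-account', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
  });
}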
Of course HttpOnly is the holy grail, but you can't access it from React or any JS framework, and you still have the CSRF vulnerability. My recommendation would be localStorage, or if you want to use cookies, make sure you implement some solution to your CSRF problem, like Django does.
Regarding CDNs, make sure you're not using some obscure CDN. CDNs like those Google or Bootstrap provide are maintained by the community and don't contain malicious code, and if you are not sure, you're free to review them.
A way to look at this is to consider the level of risk or harm.
Are you building an app with no users, a POC/MVP? Are you a startup who needs to get to market and test your app quickly? If yes, I would probably just implement the simplest solution and maintain focus on finding product-market fit. Use localStorage, as it's often easier to implement.
Are you building a v2 of an app with many daily active users, or an app that people/businesses are heavily dependent on? Would getting hacked mean little or no room for recovery? If so, I would take a long hard look at your dependencies and consider storing token information in an HttpOnly cookie.
Both localStorage and cookie/session storage have their own pros and cons.
As stated by the first answer: if your application has an XSS vulnerability, neither will protect your user. Since most modern applications have a dozen or more different dependencies, it becomes increasingly difficult to guarantee that none of your application's dependencies is XSS-vulnerable.
If your application does have an XSS vulnerability and a hacker has been able to exploit it, the hacker will be able to perform actions on behalf of your user. The hacker can perform GET/POST requests by retrieving the token from localStorage, or can still perform POST requests if the token is stored in an HttpOnly cookie.
The only downside of storing your token in local storage is that the hacker will be able to read your token.
One thing to keep in mind is whether the JWTs are:
First party (ie. simply for accessing your own server commands)
Third party (ie. a JWT for Google, Facebook, Twitter, etc.)
If the JWT is first-party:
Then it doesn't matter that much whether you store the JWT in local storage, or a secured cookie (ie. HttpOnly, SameSite=strict, and secure) [assuming your site is already using HTTPS, which it should].
This is because, assuming an XSS attack succeeds (ie. an attacker was able to insert Javascript code through a JS dependency that is now running on all visitor browsers), it's "game over" anyway; all the commands which were meant to be secured by the "JWT token verifications", can now be executed by the attacker just by having the script they've inserted into the frontend JS call all the needed endpoints. Even though they can't read the JWT token itself (because of the cookie's http-only flag), it doesn't matter because they can just send all the needed commands, and the browser will happily send the JWT token along with those commands.
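To make that concrete, a hypothetical injected payload never needs to read the cookie; the browser attaches the HttpOnly cookie to the request on its own:

// Hypothetical attacker payload running on the victim's page after a successful XSS.
// It cannot read the JWT, but the browser still sends the HttpOnly cookie along.
fetch('/api/transfer-funds', {
  method: 'POST',
  credentials: 'include', // cookies, including HttpOnly ones, ride along automatically
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ to: 'attacker-wallet', amount: 1000 }),
});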
Now while the XSS-attack situation is arguably "game over" either way (whether local-storage or secured cookie), cookies are still a little better, because the attacker is only able to execute the attacks if/when the user has the website open in their browser.
This causes the following "annoyances" for the attacker:
"My XSS injection worked! Okay, time to collect private data on my boss and use it as blackmail. Dang it! He only ever logs in while I'm here at work. I'll have to prepare all my code ahead of time, and have it run within the three minutes he's on there, rather than getting to poke around into his data on the platform in a more gradual/exploratory way."
"My XSS injection worked! Now I can change the code to send all Bitcoin transfers to me instead! I don't have any particular target in mind, so I don't need to wait for anyone. Man though, I wish I could access the JWT token itself -- that way I could silently collect them all, then empty everyone's wallets all at once. With these cookie-protected JWTs, I may only be able to hijack a few dozen visitors before the devs find out and suspend transfers..."
"My XSS injection worked! This'll give me access to even the data that only the admins can see. Hmmm, unfortunately I have to do everything through the user's browser. I'm not sure there's a realistic way for me to download those 3gb files using this; I start the download, but there are memory issues, and the user always closes the site before it's done! Also, I'm concerned that client-side retransfers of this size might get detected by someone."
If the JWT is third-party:
In this case, it really depends on what the third-party JWTs allow the holder to do.
If all they do is let someone "access basic profile information" on each user, then it's not that bad if attackers can access it; some emails may leak, but the attacker could probably get that anyway by navigating to the user's "account page" where that data is shown in the UI. (having the JWT token just lets them avoid the "annoyances" listed in the previous section)
If, instead, the third-party JWTs let you do more substantial things -- such as have full access to their cloud-storage data, send out messages on third-party platforms, read private messages on third-party platforms, etc, then having access to the JWTs is indeed substantially worse than just being able to "send authenticated commands".
This is because, when the attacker can't access the actual JWT, they have to route all commands through your 1st-party server. This has the following advantages:
Limited commands: Because all the commands are going through your server, attackers can only execute the subset of commands that your server was built to handle. For example, if your server only ever reads/writes from a specific folder in a user's cloud storage, then the attacker has the same limitation.
Easier detection: Because all the commands are going through your server, you may be able to notice (through logs, sudden uptick in commands, etc.) that someone has developed an XSS attack. This lets you potentially patch it more quickly. (if they had the JWTs themselves, they could silently be making calls to the 3rd-party platforms, without having to contact your servers at all)
More ways to identify the attacker: Because the commands are going through your server, you know exactly when the commands are being made, and what ip-address is being used to make them. In some cases, this could help you identify who is doing the attacks. The ip-address is the most obvious way, though admittedly most attackers capable of XSS attacks would be aware enough to use a proxy.
A more advanced identification approach might be to, say, have a special message pop up that is unique for each user (or, at least, split into buckets), of such a nature that the attacker (when he loads up the website from his own account) will see that message and try to run a new command based on it. For example, you could link to a "fake developer blog post" talking about some "new API" you're introducing, which allows users to access even more of their private data; the sneaky part is that the URL for that "new API" is different per user viewing the blog post, such that when the API is attempted to be used (against the victim), you know exactly who did it. Of course, this relies on the idea that the attacker has a "real account" on the site alongside the victim, and could be tempted/fooled by this sort of approach (e.g. it won't work if the attacker knows you're onto him), but it's an example of things you can do when you can intercept all authenticated commands (see the sketch after this list).
More flexible controlling: Lets say that you've just discovered that someone deployed an XSS attack on your site.
If the attackers have the 3rd-party JWTs themselves, your options are limited: you have to globally disable/reset your OAuth/JWT configuration for all 3rd-party platforms. This causes serious disruption while you try to figure out the source of the XSS attack, as no one is able to access anything from those 3rd-party platforms. (including your own server, since the JWT tokens it may have stored are now invalid)
If the JWT tokens are instead protected in http-only cookies, you have more options: You can simply modify your server to "filter out" any reads/writes that are potentially dangerous. In some cases adding this "filtering" is a quick and easy process, allowing your site to continue in "read-only"/"limited" mode without disrupting everything; in other cases, things may be complex enough that it's not worth trusting the filter code for security. The point, though, is that you have more options.
For example, maybe you don't know for sure that someone has deployed an XSS attack, but you suspect it. In this case, you may not want to invalidate the JWT tokens of every user (including those your server is using in the background) simply on the suspicion of an XSS attack (it depends on your suspicion level). Instead, you can just "make things read-only for a while" while you look into the issue more closely. If it turns out nothing is wrong, you can just flip a switch and re-enable writes, without everyone having to log back in and such.
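A rough sketch of that "unique bait URL" idea from the identification point above (server-side, all names hypothetical):

import { createHmac } from 'crypto';

// Derive a per-user path for the fake "new API" mentioned in the bait blog post,
// and work backwards from a hit on that path to the account the attacker was reading.
const CANARY_SECRET = process.env.CANARY_SECRET ?? 'dev-only-secret';

function canaryPathFor(userId: string): string {
  const tag = createHmac('sha256', CANARY_SECRET).update(userId).digest('hex').slice(0, 16);
  return `/api/v2-preview/${tag}`;
}

function userBehindCanaryHit(hitPath: string, knownUserIds: string[]): string | undefined {
  return knownUserIds.find(id => canaryPathFor(id) === hitPath);
}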
Anyway, because of these four benefits, I've decided to always store third-party JWTs in "secured cookies" rather than local storage. While currently the third-party JWTs have very limited scopes (such that it's not so big a deal if they are stolen), it's good future-proofing to do this, in case I'd like my app to request access to more privileged functionalities in the future (eg. access to the user's cloud storage).
Note: The four benefits above (for storing third-party JWTs in secured cookies) may also partially apply for first-party JWTs, if the JWTs are used as authentication by multiple backend services, and the domains/ip-addresses of these other servers/services are public knowledge. In this case, they are "equivalent to third-party platforms", in the sense that "http-only cookies" restrict the XSS attacker from sending direct commands to those other servers, bringing part of the benefits of the four points above. (it's not exactly the same, since you do at least control those other servers, so you can activate read-only mode for them and such -- but it'll still generally be more work than making those changes in just one place)
I’m disturbed by all the answers that suggest not to store in local storage as this is susceptible to an XSS attack or a malicious library. Some of these even go into long-winded discussions, even though the answer is pretty small/straightforward, which I’ll get to shortly.
Suggesting that is the equivalent of saying “Don’t use a frying pan to cook your food because if you end up drunk one night and decide to fry, you’ll end up burning yourself and your house”.
If the jwt gets leaked due to an XSS attack or malicious library, then the site owner has a bigger problem: their site is susceptible to XSS attacks or is using a malicious library.
The answer: if you’re confident your site doesn’t have those vulnerabilities, go for it.
Ref: https://auth0.com/docs/security/data-security/token-storage#browser-local-storage-scenarios
It is not safe if you use CDNs:
Malicious JavaScript can be embedded on the page, and Web Storage is compromised. These types of XSS attacks can get everyone’s Web Storage that visits your site, without their knowledge. This is probably why a bunch of organizations advise not to store anything of value or trust any information in web storage. This includes session identifiers and tokens.
via stormpath
Any script you require from the outside could potentially be compromised and could grab any JWTs from your client's storage and send personal data back to the attacker's server.
localStorage is designed to be accessible by JavaScript, so it doesn't provide any XSS protection. As mentioned in other answers, there are a number of possible ways to carry out an XSS attack, from which localStorage is not protected by default.
However, cookies have security flags that protect against XSS and CSRF attacks. The HttpOnly flag prevents client-side JavaScript from accessing the cookie, the Secure flag only allows the browser to transfer the cookie over SSL, and the SameSite flag ensures that the cookie is sent only to the originating site. Although, at the time of writing, SameSite was supported only in Opera and Chrome, so to protect against CSRF it's better to use other strategies as well. For example, sending an encrypted token in another cookie along with some public user data.
So cookies are a more secure choice for storing authentication data.
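For illustration, a sketch of setting such a cookie, assuming an Express back end (the framework and helper are my assumptions, not part of the question):

import express from 'express';

const app = express();
const issueJwtFor = (req: express.Request): string => 'jwt-placeholder'; // hypothetical helper

app.post('/login', (req, res) => {
  res.cookie('session', issueJwtFor(req), {
    httpOnly: true,     // not readable from JavaScript, so XSS cannot exfiltrate it
    secure: true,       // only ever sent over HTTPS
    sameSite: 'strict', // not attached to cross-site requests, which blunts CSRF
  });
  res.sendStatus(204);
});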
Is neither localStorage nor an HttpOnly cookie acceptable? With regard to a compromised 3rd-party library, the only solution I know of that will reduce/prevent sensitive information from being stolen would be enforced Subresource Integrity.
Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match.
As long as the compromised 3rd party library is active on your website, a keylogger can start collecting info like username, password, and whatever else you input into the site.
An httpOnly cookie will prevent access from another computer but will do nothing to prevent the hacker from manipulating the user's computer.
There's a useful article written by Dr. Philippe De Ryck which gives an insight into the true impact of vulnerabilities, particularly XSS.
This article is an eye opener!
In a nutshell, the developer's primary concern should be protecting the web application against XSS, rather than worrying too much about which type of storage area is used.
Dr. Philippe recommends the following 3 steps:
Don't worry too much about the storage area. Saving an access token in localStorage will save the developer a massive amount of time for the development of the next phases of the application.
Review your app for XSS vulnerabilities. Perform a thorough code review and learn how to avoid XSS within the scope of your templating framework.
Build defense-in-depth mechanisms against XSS. Learn how you can further lock down your application, e.g. by utilising Content Security Policy (CSP) and HTML5 sandboxing (see the sketch below).
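As one example of such a lock-down, a restrictive Content-Security-Policy header can be added to every response; the Express middleware below is only an illustration of where the header would be set, and the policy string is the important part:

import express from 'express';

const app = express();

// Disallow inline scripts and restrict script loading to your own origin,
// which removes the most common XSS injection vectors.
app.use((req, res, next) => {
  res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'; object-src 'none'");
  next();
});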
Remember that once you're vulnerable to XSS, it's game over!
TLDR;
Both work, but using an HttpOnly cookie is much safer than using localStorage, as any malicious JavaScript code introduced by XSS can read localStorage.
I'm coming late to the discussion, but with the advantage of more mature and modern auth protocols like OpenID Connect.
TL;DR: The preferred method is to store your JWT token in memory: not in a cookie, and not in localStorage.
Details
You want to decouple the responsibility of authenticating users from the rest of the work your app does. Auth is hard to get right, and the handful of teams that spend all their time thinking about this stuff can worry about the details you and I will never get right.
Establish a dedicated Identity Provider for your app, and use the OpenID Connect protocol to authenticate with it. This could be a provider like Google, Microsoft, or Okta, or it could be a lightweight Identity Server that federates to one or more of those other services.
Use the Authorization Code Flow to let the user authenticate and get the access token to your app. Use a respected client library to handle the OpenID Connect details, so you can just have the library notify your app when it has a valid token, when a new valid token has been obtained via refresh, or when the token cannot be refreshed (so the user needs to authenticate again). The library should be configured (probably by default) to avoid storing the token at all.
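A minimal sketch of "in memory, not in localStorage" (the OIDC client library would normally manage the token for you; this only illustrates the storage choice):

// A module-scoped variable lives only as long as the page: nothing is persisted to
// localStorage or cookies, and a page refresh simply triggers a new OIDC redirect.
let accessToken: string | null = null;

export function setAccessToken(token: string): void {
  accessToken = token; // called from the OIDC client library's "user loaded" callback
}

export async function apiFetch(url: string, headers: Record<string, string> = {}): Promise<Response> {
  return fetch(url, { headers: { ...headers, Authorization: `Bearer ${accessToken ?? ''}` } });
}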
FAQ
What happens when someone refreshes the page? I don't want to make them log in again.
When the app first loads, it should always redirect the user to your Identity Provider. Based on how that identity provider handles things, there's a good chance the user won't have to log in. For example, if you're federating to an identity provider like Google or Microsoft, the user may have selected an option indicating that they are on a trusted device and they want to be remembered. If so, they won't need to log in again for a very long time, long after your auth token would have expired. This is much more convenient for your users.
Then again, if the user indicated they're on a shared device and shouldn't automatically be logged in in the future, you want to force another login: you cannot differentiate between someone who refreshed their browser window and someone who reopened a closed browser and navigated to a page stored in the browser's history.
Isn't the Identity Provider using cookies to keep the user logged in? What about CSRF or XSS attacks there?
Those implementation details are specific to the Identity Provider. If they're using cookies, it's their job to implement Anti-CSRF measures. They are far less likely than you are to use problematic third-party libraries, or import compromised external components, because their app only has one job.
Shouldn't I spend my time addressing XSS attacks instead? Isn't it "game over" if someone injects code into my app?
If it's an either/or proposition, and you have reason to believe your app has XSS or Code Injection vulnerabilities, then those should definitely take precedence. But good security involves following best-practices at multiple levels, for a kind of layered security.
Plus, using a trusted third-party library to connect to trusted third-party security providers should hopefully save you time that you would have spent dealing with a variety of other security-related issues.
It is safe to store your token in localStorage as long as you encrypt it. Below is a compressed code snippet showing one of many ways you can do it.
import SimpleCrypto from 'simple-crypto-js';
const saveToken = (token = '') => {
const encryptInit = new SimpleCrypto('PRIVATE_KEY_STORED_IN_ENV_FILE');
const encryptedToken = encryptInit.encrypt(token);
localStorage.setItem('token', encryptedToken);
}
Then, before using your token, decrypt it using PRIVATE_KEY_STORED_IN_ENV_FILE.
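For completeness, a matching read path could look like this (a sketch using the same library and the same storage key):

import SimpleCrypto from 'simple-crypto-js';

const loadToken = (): string => {
  const decryptInit = new SimpleCrypto('PRIVATE_KEY_STORED_IN_ENV_FILE');
  const encryptedToken = localStorage.getItem('token') || '';
  // decrypt() returns the original plaintext token that was passed to encrypt()
  return decryptInit.decrypt(encryptedToken) as string;
};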
In order to connect to a third-party application, I have to give my users the capability to select one of their installed SSL client certificates and transfer it to the third party, which is used by the application server. (My web application does not require SSL; it is the third party that requires SSL certificates.)
It seems to me that access to this list of certificates is only possible by the browser itself when connecting to a service that requires SSL. Is it possible to launch the same dialog box through JavaScript, or is there any way for a web application to browse the SSL store of the end user?
If it is not possible, can I simply open a file dialog box and upload the client certificate as any standard file?
I have to support any browser from IE9 onwards, and no plug-ins are allowed in our application.
Thanks.
If it is not possible, can I simply open a file dialog box and upload the client certificate as any standard file ?
Firstly, that's not the way SSL/TLS client authentication works at all. It's simply not a matter of uploading the certificate. The private key matching the certificate is used to sign some content (in the CertificateVerify TLS message) during the TLS handshake. That's what performs the authentication.
Coming back to your main question, for security reasons, the SSL/TLS stack is handled outside the scope of the JavaScript code. Selecting the client certificate is part of that.
You could potentially have some sort of API to let the JavaScript code access some of the cryptographic features of the browser (and there has been work in this area). However, there would be security considerations to take into account.
Even if certificates only contain public information to some extent, that doesn't mean it's public information that is to be distributed to anyone in the world, at least not necessarily in conjunction with the act of browsing any website.
If you had the ability to list the user's certificates from the JavaScript code sent by your server, you'd certainly have the ability to send that list back to yourself almost transparently with an Ajax call. While some people are concerned about the privacy implications of being tracked by cookies, being tracked by which client certs you may have takes this to another level (e.g. Subject DN with CN=John Smith and Issuer DN with CN=Department/Ministry of Health/Defence: that would be a bit of a giveaway).
My web application does not require SSL, it is the third party that requires SSL certificates.
Here, you're not saying whether that third party is accessed directly by the user's browsers, or if you expect the users to delegate their credentials for you to interact with that third party (without direct user involvement).
If the users have direct access to that third party (via another request), their browser should prompt them for the certificate they wish to use.
If it's about credential delegation, that's another problem entirely, since users never give you the private key for their own client certificate so that you can sign in their name. (It might be technically possible for users to just give you their PKCS#12 file, for example, but that defeats the point of putting this sort of authentication in place in the first place.)
There has been work on authentication delegation with certificates using proxy certificates (RFC 3820). Essentially, your EEC (End-Entity Certificate) is used as a mini-CA, despite not having the CA flags, to issue a short-lived certificate that the remote party will accept. This sort of mechanism is generally not well integrated into browsers.
Another, more realistic approach, would be to look into the world of SSO, SAML and Shibboleth, for example. That does work with existing browsers, but the overall architecture is a bit different (so you'll need to discuss that with the third party).
The certificate isn't part of the DOM, so no, this won't be possible.
In a browser environment, is it possible to obtain list of SSL certificates in JavaScript?
The WebCrypto API allows you to discover some things, like shared and derived keys. But looking at their charter and use cases, it's not clear to me whether they allow enumeration and discovery of certificates.
I see it was discussed in the past and an issue was raised. Here's the discussion: Crypto-ISSUE-15: Discovering certificates associated with (private) keys. But I can't find anything on Issue 15 in the WebCrypto Tracker.
Also see the question Will the WebCrypto API allow discovery/enumeration of certificates? on the WebCrypto mailing list. Hopefully there will be a simple YES/NO answer.
But don't be surprised if it's not available through WebCrypto. The browser security engineers have a particular way of looking at things, and that usually does not include client certificates. Client certificates would effectively stop MitM attacks (see, for example, Origin Bound Certificates), and browsers don't make stopping MitM a priority. Instead, they are OK with mishandling credentials like passwords; and they opt for a One Time Password (OTP) using U2F.
In a reality stranger than fiction, the browsers will even (1) use Public Key Pinning for HTTP, and then (2) break a known good pinset because the user was phished! You can't make this stuff up...
It's not that I don't have access to javascript, of course. In most of my CS Web Development courses, we are taught a little bit about server-side validation, and then as soon as javascript is introduced, server-side validation is thrown out the window.
I choose not to rely on JavaScript alone, as the client side is never a secure place. I have gotten into the habit of writing both the client- and server-side code for such things. However, for a web application that I am writing that has optional AJAX, I do not want the password to be sent in plaintext over the wire if someone has JavaScript turned off.
I realize I may be describing a catch-22 situation, so let me just ask this: how do we know our users' passwords will be secure (enough) from malicious users on the same network when all we can rely on is server-side scripting? On that first request from the login page, is there any way to have the browser encrypt a data field?
SSL solves this problem. For the record, passwords should never be "encrypted" or "encoded"; that implies there is a method of "decoding" or "decrypting", which is a clear violation of CWE-257. Passwords must be hashed; SHA-256 is a great choice, but this is meant for storage, not transmission. When you transmit secrets there is a long list of things that can go wrong, and SSL is by far the best choice for addressing these issues.
If the attacker can sniff the traffic then they will be able to see the session id and use it immediately, so it's a moot point. You have to use SSL to protect the authenticated session anyway.
The easy solution is SSL.
I think you're mixing up a couple of concepts. The browser does not encrypt individual fields. Client-side scripting, server-side scripting and AJAX are not means to defend against eavesdropping.
As others have said, SSL is the technology that encrypts the data. The entire request and response, including the fields and scripts, are contained within the SSL session.
You can also use Digest HTTP Authentication.