How to create a secure web app (safe to enter SSN)? - javascript

I would like to create a web app that:
takes in personal data (including SSNs) from a CSV file
processes and manipulates the data in the browser with JavaScript
outputs a PDF
Since everything happens in the browser and none of the data is stored in a database at the web host, does an SSL certificate provide sufficient security?

The SSL certificate only ensures that the script is securely sent from your server to the client's browser and that no manipulation can happen in between. It does not help against:
Manipulation on the server side. It is actually very common for servers to get hacked and for the stored applications to be modified to deliver malware and similar.
Client-side cross-site-scripting attacks, i.e. DOM-based XSS. Since what you are doing is fairly complex, chances are high that you will fail to protect against such an attack.
Other cross-site scripting: if the script you serve to the user depends on the user, cookies, or other dynamic input (i.e. it is not a static script), you might be vulnerable to other XSS attacks too.
So in general the answer is that SSL alone is not sufficient.
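A minimal sketch of the DOM XSS point above (not from the answer; element names are illustrative): when the CSV-derived values reach the page, insert them as text rather than HTML so a crafted field cannot inject script.

// Render one CSV row into a preview table without creating DOM XSS sinks.
function renderRow(tableBody, row) {
  const tr = document.createElement('tr');
  for (const value of row) {
    const td = document.createElement('td');
    // textContent treats the value as plain text; avoid innerHTML,
    // document.write and eval on anything derived from the uploaded CSV.
    td.textContent = value;
    tr.appendChild(td);
  }
  tableBody.appendChild(tr);
}

// Usage (hypothetical markup): renderRow(document.querySelector('#preview tbody'), ['Jane Doe', '123-45-6789']);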

Related

Progressive Web Apps and Private SQL Credentials

I am tasked with converting a PHP application into a progressive web app. This entails converting the existing PHP logic into JavaScript that runs client-side.
However, the PHP application contains sensitive information, including SQL credentials, which must never be leaked. This complicates the conversion because one of the biggest requirements of a progressive web app is Offline First, or the ability to operate without an Internet connection and/or not slow down even if an Internet connection is available.
Encrypting the JavaScript code is not an option because, no matter how strong the encryption, the decryption code must be shipped alongside it, and thus, determined hackers will always be able to crack the encryption. HTTPS cannot prevent hackers from jailbreaking their phones.
On the other hand, sending an Ajax request to a proxy server that holds the sensitive credentials will slow down the application, defeating the whole point of progressive web applications.
I have spent hours looking up solutions online, yet nothing I found is relevant enough. So how should developers go about ensuring that SQL credentials and other sensitive information are never exposed in the progressive web app?
EDIT: I should clarify that, while I understand that synchronizing local data with server data is the preferred behavior of progressive web apps, I am explicitly forbidden from doing so in this particular case. The data must be kept confidential.
To answer your original question on how to store your DB passwords safely on the client side: you can't. The client side is no place for sensitive information like a server-side DB password.
A PWA is, at the end of the day, a web application with some new features. Those features do not give you any added security for performing server-side-like operations that you can hide from users. Even if you use HTTPS, it will only encrypt data over the network.
What happens if you do: if you store a DB password in a PWA, or any web app for that matter, a user can extract the password using Chrome DevTools, for example, and use it to connect to the database directly and read all the data in it, not just their own.
Solution: PHP is a server-side scripting language. When you convert it to HTML/JS, the server-side parts should stay on the server, and you expose the data to the PWA through web services.
On downloading data: caching is not simply equivalent to downloading. Read more here, and if you still don't want caching, you can use the "Network only" mode explained in the same link and still make use of other PWA features, like notifications and install-to-home-screen.
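A minimal sketch of that split, assuming a hypothetical /api/records endpoint: the DB credentials live only in the server-side service, the PWA fetches data over HTTPS, and the service worker uses a network-only strategy for API requests so nothing sensitive gets cached.

// service-worker.js: "Network only" for API requests - never serve them from cache.
self.addEventListener('fetch', (event) => {
  if (event.request.url.includes('/api/')) {
    event.respondWith(fetch(event.request)); // always go to the network
  }
});

// app.js: the PWA never holds DB credentials; it only calls the web service.
async function loadRecords() {
  const response = await fetch('/api/records', { credentials: 'include' });
  if (!response.ok) throw new Error('Request failed: ' + response.status);
  return response.json(); // data, not credentials, crosses the wire
}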

SSL alternative for secure handshake?

I'm curious whether there is any way for a server to validate that a client is running entirely "friendly" code that isn't monitoring 1) the user's input or 2) network requests.
The only way I could conceive of this is if browsers had a built-in, secure, isolated shell / scope that can hash and send data (which can be verified with a complementing server-side unhashing / lookup script).
Is there any browser-supported (non-DOM) input/hashing method that can also be installed on the server to verify the authenticity of user input? I want to avoid Chrome extensions and potential keylogging in general, but I'm not sure any browser supports such a feature.
Thanks
EDIT
I think some form of 2-step auth in a separate window would be the closest, but I don't have SSL, and I don't like the presentation of random "popup" windows
If I understand your question correctly, you are asking for proof that the data entered into a form is neither manipulated nor generated by malicious software. But you (as the operator of the server) don't have control of the client.
This is impossible as long as you don't have control of the client, because it is impossible to distinguish user-generated data from software-generated data at the network level, and that's all you get at the server. Even the output generated by a browser extension can be faked.
I think some form of 2-step auth would be the closest
2FA is relevant only for authentication of the client and provides no way of making user-generated data tamper-resistant.
SSL alternative for secure handshake?
SSL only protects the transport and does not prevent modification of the user input by a malicious browser extension or similar. It also does not protect against a malicious man in the middle on the client's machine (e.g. Superfish or similar).

How to secure EmberJS or any Javascript MVC framework?

I'm looking forward to starting to use Ember.js in a real-life project, but as someone coming from a Java background, I always care about security.
And when I tell my fellow Java developers that I am starting to use a JavaScript MVC framework, they say it's not secure enough: since JavaScript is all about the client side, there is always a way to hack around your JavaScript code and discover backdoors to your services and APIs.
So is there any good practice that can help prevent these kinds of attacks, or at least make them less effective?
There is always a way to hack around your JavaScript code and discover backdoors to your services and APIs.
JavaScript presents few new problems for services and APIs that are already exposed to the web. Your server/service shouldn't blindly trust requests from the web, so using JavaScript doesn't alter your security posture, but it can lull people into a false sense of security by making them think they control the user-agent.
The basic rule of client/server security is still the same: don't trust the user-agent; place the server and the client on different sides of a trust boundary.
A trust boundary can be thought of as line drawn through a program. On one side of the line, data is untrusted. On the other side of the line, data is assumed to be trustworthy. The purpose of validation logic is to allow data to safely cross the trust boundary--to move from untrusted to trusted.
Validate everything that the server receives from the client, and, because XSS vulnerabilities are common and allow attackers to access information sent to the client, send as little sensitive information to the client as possible.
When you do need to round-trip data from the server to the client and back, you can use a variety of techniques.
Cryptographically sign the data so that the server can verify that it was not tampered with.
Store the data in a side-table and only send an opaque, unguessable identifier.
When you do need to send sensitive data to the client, you also have a variety of strategies:
Store the data in HTTP-only cookies which get attached to requests but which are not readable by JavaScript
Divide your client into iframes on separate origins, and sequester the sensitive information in an iframe that is especially carefully reviewed, and has minimal excess functionality.
Signing
Signing solves the problem of verifying that data you received from an untrusted source is data that you previously verified. It does not solve the problem of eavesdropping (for that you need encryption) or of a client that decides not to return data, or that decides to substitute different data signed with the same key.
Cryptographic signing of "pass-through" data is explained well by the Django docs which also outline how their APIs can be used.
The golden rule of Web application security is to never trust data from untrusted sources. Sometimes it can be useful to pass data through an untrusted medium. Cryptographically signed values can be passed through an untrusted channel safe in the knowledge that any tampering will be detected.
Django provides both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications.
You may also find signing useful for the following:
Generating “recover my account” URLs for sending to users who have lost their password.
Ensuring data stored in hidden form fields has not been tampered with.
Generating one-time secret URLs for allowing temporary access to a protected resource, for example a downloadable file that a user has paid for.
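A minimal sketch of that idea in Node.js (an assumption; the answer points at Django's signing API), using an HMAC to sign a value before handing it to the client and verifying it when it comes back:

const crypto = require('crypto');
const SECRET = process.env.SIGNING_SECRET; // server-side only, never sent to the client

// Sign a value so we can recognize it, untampered, when the client returns it.
function sign(value) {
  const mac = crypto.createHmac('sha256', SECRET).update(value).digest('hex');
  return value + '.' + mac;
}

// Verify a signed value; returns the original value, or null if it was tampered with.
function verify(signed) {
  const i = signed.lastIndexOf('.');
  if (i < 0) return null;
  const value = signed.slice(0, i);
  const mac = Buffer.from(signed.slice(i + 1), 'hex');
  const expected = crypto.createHmac('sha256', SECRET).update(value).digest();
  // timingSafeEqual avoids leaking how many bytes of the MAC matched.
  if (mac.length !== expected.length || !crypto.timingSafeEqual(mac, expected)) return null;
  return value;
}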
Opaque Identifiers
Opaque identifiers and side tables solve the same problem as signing, but require server-side storage, and require that the machine that stored the data has access to the same DB as the machine that receives the identifier back.
Side tables with opaque, unguessable identifiers can be easily understood by looking at this diagram:
Server DB Table
+--------------------+---------------------+
| Opaque Primary Key | Sensitive Info      |
+--------------------+---------------------+
| ZXu4288a37b29AA084 | The king is a fink! |
| ...                | ...                 |
+--------------------+---------------------+
You generate the key using a random or secure pseudo-random number generator and send the key to the client.
If the client has the key, all they know is that they possess a random number, and possibly that it is the same as some other random number they received from you, but cannot derive content from it.
If they tamper (send back a different key) then that will not be in your table (with very high likelihood) so you will have detected tampering.
If you send multiple keys to the same misbehaving client, they can of course substitute one for the other.
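A minimal sketch of the side-table idea in Node.js (assumed; the answer is framework-agnostic), using a cryptographically random value as the opaque key:

const crypto = require('crypto');

// In-memory side table for illustration; a real application would use a server-side DB.
const sideTable = new Map();

// Store sensitive data server-side and hand the client only an opaque key.
function storeSensitive(data) {
  const key = crypto.randomBytes(16).toString('hex'); // unguessable, encodes nothing
  sideTable.set(key, data);
  return key;
}

// Look the data up again when the client returns the key; an unknown key signals tampering.
function lookupSensitive(key) {
  return sideTable.has(key) ? sideTable.get(key) : null;
}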
HTTP-only cookies
When you're sending a per-user-agent secret that you don't want to accidentally leak to other user-agents, HTTP-only cookies are a good option.
If the HttpOnly flag (optional) is included in the HTTP response header, the cookie cannot be accessed through client side script (again if the browser supports this flag). As a result, even if a cross-site scripting (XSS) flaw exists, and a user accidentally accesses a link that exploits this flaw, the browser (primarily Internet Explorer) will not reveal the cookie to a third party.
HttpOnly cookies are widely supported on modern browsers.
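A minimal sketch of setting such a cookie from a plain Node.js handler (an assumption; any stack that can emit a Set-Cookie header works the same way):

const http = require('http');

// Sketch only: in production this sits behind HTTPS so the Secure flag is honored.
http.createServer((req, res) => {
  // HttpOnly keeps the cookie out of document.cookie, so an XSS payload cannot read it.
  res.setHeader('Set-Cookie',
    'session=OPAQUE_SESSION_ID; HttpOnly; Secure; SameSite=Strict; Path=/');
  res.end('ok');
}).listen(3000);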
Sequestered iframes
Dividing your application into multiple iframes is the best current way to allow your rich client to manipulate sensitive data while minimizing the risk of accidental leakage.
The basic idea is to have small programs (secure kernels) within a larger program so that your security sensitive code can be more carefully reviewed than the program as a whole. This is the same way that qmail's cooperating suspicious processes worked.
Security Pattern: Compartmentalization [VM02]
Problem: A security failure in one part of a system allows another part of the system to be exploited.
Solution: Put each part in a separate security domain. Even when the security of one part is compromised, the other parts remain secure.
"Cross-frame communication the HTML5 way" explains how iframes can communicate. Since they're on different domains, the security sensitive iframe knows that code on other domains can only communicate with it through these narrow channels.
HTML allows you to embed one web page inside another, in the element. They remain essentially separated. The container web site is only allowed to talk to its web server, and the iframe is only allowed to talk to its originating server. Furthermore, because they have different origins, the browser disallows any contact between the two frames. That includes function calls, and variable accesses.
But what if you want to get some data in between the two separate windows? For example, a zwibbler document might be a megabyte long when converted to a string. I want the containing web page to be able to get a copy of that string when it wants, so it can save it. Also, it should be able to access the saved PDF, PNG, or SVG image that the user produces. HTML5 provides a restricted way to communicate between different frames of the same window, called window.postMessage().
Now the security-sensitive frame can use standard verification techniques to vet data from the (possibly-compromised) less-sensitive frame.
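A minimal sketch of that narrow channel (origins, frame ids, and handler names are illustrative assumptions), with the sensitive frame checking event.origin before trusting anything:

// In the container page (https://app.example.com): hand the document to the sensitive frame.
const documentString = serializeDocument(); // hypothetical: the data to pass across
const frame = document.getElementById('sensitive-frame');
frame.contentWindow.postMessage(
  { type: 'save-document', payload: documentString },
  'https://secure.example.com' // deliver only to the expected origin
);

// In the sensitive iframe (https://secure.example.com): vet every incoming message.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://app.example.com') return; // ignore unexpected senders
  const msg = event.data; // structured data, never eval'd
  if (msg && msg.type === 'save-document' && typeof msg.payload === 'string') {
    handleSave(msg.payload); // hypothetical handler inside the sensitive frame
  }
});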
You can have a large group of programmers working efficiently at producing a great application, while a much smaller group works on making sure that the sensitive data is properly handled.
Some downsides:
Starting up an application is more complex because there is no single "load" event.
Iframe communication requires passing strings that need to be parsed, so the security-sensitive frame still needs to use secure parsing methods (no eval of the string from postMessage.)
Iframes are rectangular, so if the security-sensitive frame needs to present a UI, then it might not easily fit neatly into the larger application's UI.

How do I encode passwords in web forms without javascript?

It's not that I don't have access to JavaScript, of course. In most of my CS web development courses, we are taught a little bit about server-side validation, and then as soon as JavaScript is introduced, server-side validation is thrown out the window.
I choose not to rely on JavaScript alone, as the client side is never a secure place. I have gotten into the habit of writing both the client-side and server-side code for such things. However, for a web application that I am writing that has optional AJAX, I do not want the password to be sent in plaintext over the wire if someone has JavaScript turned off.
I realize I may be describing a catch-22, so let me just ask this: how do we know our users' passwords will be secure (enough) from malicious users on the same network when all we can rely on is server-side scripting? On that first request from the login page, is there any way to have the browser encrypt a data field?
SSL solves this problem. For the record, passwords should never be "encrypted" or "encoded"; that implies there is a method of decoding or decrypting, which is a clear violation of CWE-257. Passwords must be hashed (SHA-256 is a great choice), but this is meant only for storage, not for transmission. When you transmit secrets there is a long list of things that can go wrong, and SSL is by far the best choice for solving these issues.
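A minimal sketch of hashing for storage on the server in Node.js (an assumption; the answer names SHA-256, but a salted, deliberately slow KDF such as scrypt is the usual recommendation for stored passwords):

const crypto = require('crypto');

// Hash a password for storage: random salt plus scrypt, a slow, memory-hard KDF.
function hashPassword(password) {
  const salt = crypto.randomBytes(16);
  const hash = crypto.scryptSync(password, salt, 32);
  return salt.toString('hex') + ':' + hash.toString('hex');
}

// Verify a login attempt against the stored salt:hash pair.
function verifyPassword(password, stored) {
  const [saltHex, hashHex] = stored.split(':');
  const hash = crypto.scryptSync(password, Buffer.from(saltHex, 'hex'), 32);
  return crypto.timingSafeEqual(hash, Buffer.from(hashHex, 'hex'));
}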
If the attacker can sniff the traffic then they will be able to see the session ID and use it immediately, so it's a moot point. You have to use SSL to protect the authenticated session anyway.
The easy solution is SSL.
I think you're mixing up a couple of concepts. The browser does not encrypt individual fields. Client-side scripting, server-side scripting and AJAX are not means to defend against eavesdropping.
As others have said, SSL is the technology that encrypts the data. The entire request and response, including the fields and scripts are contained within the SSL session.
You can also use Digest HTTP Authentication.

JavaScript Code Signing

How can a user, using one of the major modern browsers, know for sure that he is running my unmodified JavaScript code even over an untrusted network?
Here is some more info about my situation:
I have a web application that deals with private information.
The login process is an implementation of a password-authenticated key agreement in JavaScript. Basically during login, a shared secret key is established between the client and the server. Once the user logs in all communication with the server is encrypted using the shared key. The system must be safe against ACTIVE man-in-the-middle attacks.
Assuming that my implementation is correct and the user is smart enough not to fall victim to a phishing attack there remains just one large hole in the system: an attacker can tamper with my application as it is being downloaded and inject code that steals the password. Basically the entire system relies on the fact that the user can trust the code running on his machine.
I want something similar to signed applets but I would prefer a pure javascript solution, if possible.
Maybe I am misunderstanding your problem, but my first thought is to use SSL. It is designed to ensure that you're talking to the server you think you are, and that no one has modified the content midstream. You do not even have to trust the network in this case, because of the nature of SSL.
The good thing about this approach is that you can fairly easily drop it into your existing web application. In most cases, you can basically configure your HTTP server to use SSL, and change your http:// requests to https://.
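A minimal sketch of that drop-in change with a bare Node.js server (an assumed stack; the certificate paths are placeholders): serve the app over HTTPS and redirect plain-HTTP requests to it.

const https = require('https');
const http = require('http');
const fs = require('fs');

// HTTPS server; key and certificate paths are placeholders for your own files.
const options = {
  key: fs.readFileSync('/path/to/server.key'),
  cert: fs.readFileSync('/path/to/server.crt'),
};
https.createServer(options, (req, res) => {
  res.end('served over TLS');
}).listen(443);

// Redirect any plain-HTTP request to the HTTPS origin.
http.createServer((req, res) => {
  res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
  res.end();
}).listen(80);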
This is an old, open question but the answers seemed to not do this justice.
https:// provides integrity, but not true identification or non-repudiation.
I direct you to http://www.matasano.com/articles/javascript-cryptography/
Don't do crypto in JS, because a maliciously injected script can easily grab passwords or alter the library. SJCL is neat, but it offers a blatantly false sense of security (their own quote, reproduced below):
Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks.
The long-term issue is that JavaScript lacks:
Uniformly working const
The ability to make objects deeply const and not reprototypable.
Code-signing
// codesign: cert:(hex fingerprint) signature:(hex MAC)
Certs would be managed similarly to CA certs. The MAC would be used with appropriate sign/verify constructions.
Crypto and clipboard access are reasons to have native JavaScript plugins (signed, of course).
Getting JavaScript engines to all implement a standard is another thing, but it's doable, and it's absolutely necessary to end a large swath of malware.
You could have an external JavaScript file which takes an MD5 hash of your login JS and sends an Ajax request to the server to verify that it is correct and up-to-date. Use basic security or encryption practices here (public/private keys or some other method) to be sure that the response came from your server.
You can then confidently display to the user that the client-side scripts are verified, and allow the login script to proceed.
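A minimal sketch of that verification idea using the browser's SubtleCrypto (SHA-256 here, since SubtleCrypto does not offer MD5) and a hypothetical /verify-script endpoint; note it still assumes the verifier script itself was delivered untampered:

// Hash the text of the login script and ask the server whether it matches the expected value.
async function verifyLoginScript() {
  const source = await (await fetch('/js/login.js')).text();
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(source));
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');

  // Hypothetical endpoint that answers { ok: true } if the hash is current.
  const response = await fetch('/verify-script?hash=' + hex);
  const result = await response.json();
  return result.ok === true;
}

// Only proceed with the login flow if the script checks out.
verifyLoginScript().then((ok) => {
  if (ok) {
    // show the "scripts verified" indicator and enable the login form (application-specific)
  }
});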
