I want to create a JavaScript widget that my users can put on their websites.
The widget is capable of creating audio, which in turn costs my users money.
For the sake of illustration, let's say that every time a widget, placed on my user's site, is loaded by anyone on the internet (i.e. my users' users), I bill my user $1.
The widget is JavaScript code wrapped around an HTML audio player. The JS code makes a request to my backend API every time it is loaded, and upon receiving the response from my backend API, the player is constructed.
Diagram: [image missing; the flow is: widget loads → JS calls backend API → response arrives → player is constructed]
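For concreteness, here is a minimal sketch of what such a widget might look like; the endpoint, response shape, and the use of the sample key below are illustrative assumptions, not a real implementation:

    // widget.js - embedded on the customer's page via a <script> tag
    (function () {
      var API_KEY = 'bbbe3b259f881cfc796f468619eb9d'; // public key identifying the paying user
      var script = document.currentScript;             // remember where the widget was embedded

      // Ask the backend to create/locate the audio for the current page
      fetch('https://api.example.com/widget/audio?key=' + API_KEY +
            '&url=' + encodeURIComponent(location.href))
        .then(function (res) { return res.json(); })
        .then(function (data) {
          // Build the HTML audio player from the backend's response
          var player = document.createElement('audio');
          player.controls = true;
          player.src = data.audioUrl;
          (script ? script.parentNode : document.body).appendChild(player);
        });
    })();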
My concern is malicious usage by people who are not my users.
Let's say someone takes the widget's source code they found on a website that belongs to one of my users, and they put it on their site. They will, therefore, use my service but not pay for it. Instead, my actual user will pay for it (assuming I use a public API key as a way of distinguishing my users).
Usually, this is prevented by having a server-side library be responsible for any usages that might spend money. For example, I use Pusher as my WebSockets IaaS, and whenever I want to publish messages, I have to do it server-side, using their PHP SDK, with both private and public API keys.
In my use case, it's mandatory not to have a server-side library.
Question: how do I make sure that API requests I receive are legitimate?
I considered using the hostname where the widget is placed as a legitimacy measure. During the widget set-up, I could ask my users to whitelist certain (sub)domains and reject all requests that don't match the criteria, but this could be easily spoofed by, for example, a custom local domain or a cURL-crafted request.
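To make the spoofing concrete, a hedged sketch of a forged request from outside the browser (the endpoint is hypothetical; this is Node 18+, where nothing protects the Referer header the way browsers do):

    // forge.js - run with Node; outside a browser, nothing stops us
    // from claiming to be a page on a whitelisted domain
    (async () => {
      const res = await fetch(
        'https://api.example.com/widget/audio' +                 // hypothetical endpoint
        '?key=bbbe3b259f881cfc796f468619eb9d' +                  // someone else's public key
        '&url=' + encodeURIComponent('https://whitelisted-customer.com/article'), {
        headers: { Referer: 'https://whitelisted-customer.com/article' } // spoofed
      });
      console.log(res.status); // the legitimate user gets billed
    })();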
I understand this may not be possible.
It seems like what you're asking is closely related to the topic of client-side encryption. In most cases, the answer would be no, it's not possible. However, in this case, it may be possible to implement something along the following lines.

If you can get your clients to install a plugin (which you would build), you could encrypt your JS code after finishing it and have your server serve this encrypted file. Normally, this falls short because if you're sending an encrypted file, there needs to be a way for the client to decrypt it. That would require you to also serve an unencrypted JS file to do the decoding, and by serving the unencrypted decoder you undo any security gained by encrypting your main JS file (the decryption file could easily be used to reverse engineer your encryption method, or simply run by people other than your intended users).

This is where having those API users (and the ability to communicate with them through means outside of server-client connections) comes in handy. If you build a decryption plugin and give it to the API users (you could issue a unique decryption key for each user, but without server access, implementing unique per-user keys would be very difficult or impossible), the plugin could then decrypt your served file in their browser, essentially guaranteeing that only users you have given the 'key' to can access your software.

However, this approach has a few caveats. It implies that you trust your users enough that they wouldn't distribute the plugin (it would be against their interest to distribute it anyway, as it could lead to higher charges if people impersonate them). There are probably also a couple of other security concerns with this approach that I can't think of right now. If any come to mind, I'll edit this post and add them.
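For what it's worth, a minimal sketch of the decrypt-and-run step such a plugin might perform, assuming AES-GCM via the Web Crypto API (the URL and the out-of-band key delivery are hypothetical):

    // Inside the hypothetical plugin; keyBytes/ivBytes were delivered out of band
    async function runEncryptedWidget(keyBytes, ivBytes) {
      const key = await crypto.subtle.importKey(
        'raw', keyBytes, 'AES-GCM', false, ['decrypt']);
      const ciphertext = await (await fetch(
        'https://api.example.com/widget.js.enc')).arrayBuffer(); // hypothetical URL
      const plaintext = await crypto.subtle.decrypt(
        { name: 'AES-GCM', iv: ivBytes }, key, ciphertext);
      // Execute the decrypted widget code
      new Function(new TextDecoder().decode(plaintext))();
    }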
Apparently, I don't have enough reputation to comment yet, hence the post...
But in response to your post, I think that method seems much better than the one I suggested; I didn't realize you could control the API's response to the server.
I don't quite understand which of the following you mean:
a) Send a JS file to the user, with the sole purpose of determining whether the player should also be sent (i.e., upon arriving, it pings the server with the client's API key/URL), and then the server would serve the file (in which case your approach seems safe to me, but others may find security problems with it).
or
b) Send a file with the JS and the audio player which, upon arriving, determines whether the URL and API key are correct, and then allows the audio player to function normally (sending the API key to the server to track usage, not as a security feature).
If using option b, this would not improve security. If your code relies on security that runs on the client side, and the security system was sent by the same means as the code, then almost without exception, the design is flawed and inherently unsafe.
I hope this helps, and if you disagree / have more questions, feel free to comment!
How about sending the following parameters from the JavaScript widget to the API backend:
Public API key (e.g. bbbe3b259f881cfc796f468619eb9d)
Current URL (e.g. https://example.com/articles/chiang-mai-thailand-january-2016-june-2016)
I will use the API key as a way of distinguishing my user and the current URL as a way of knowing which audio file to create (my widget will create an audio file based on the URL).
Furthermore, and this is crucial, I will have a user whitelist their domains and subdomains on my central site, where my users will get their widget code.
This is the same as what FB does for their integrations. [screenshot missing]
So if for example, my backend API receives the aforementioned sample URL, and the user has set up the widget to only allow URLs that belong to foo.com and bar.baz.com, I will reject the audio creation process and display an error.
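To make that concrete, a minimal sketch of the server-side check, assuming a Node/Express backend; findUserByApiKey and createAudioFor are hypothetical helpers:

    // Express route handling widget requests (input validation omitted for brevity)
    app.get('/widget/audio', async (req, res) => {
      const { key, url } = req.query;
      const user = await findUserByApiKey(key);           // hypothetical lookup
      if (!user) return res.status(403).json({ error: 'unknown API key' });

      const host = new URL(url).hostname;                 // e.g. "example.com"
      const allowed = user.whitelistedDomains.some(d =>
        host === d || host.endsWith('.' + d));            // covers foo.com and sub.foo.com
      if (!allowed) {
        return res.status(403).json({ error: 'domain not whitelisted' });
      }
      // NB: `url` is still client-supplied, which is exactly the weakness discussed above
      res.json({ audioUrl: await createAudioFor(url) });  // hypothetical
    });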
Do you see any issues with this approach?
Related
I'm building a Chrome extension using the Remember the Milk web API. In order to call methods in this API, I need to sign my requests using an API key and a "shared secret" key.
My concern is that any user could just crack open the extension and pull out these values if I include them in the published extension. This may or may not pose a security risk for the user, but he or she could certainly use/abuse my API key and maybe get it revoked.
Is this something I should be concerned about? Are there any best practices for protecting this type of information in published JavaScript applications?
Ultimately you can't truly hide anything within a JS application that's run in the browser; you can obfuscate or minify the code, which will deter casual users from snooping around, but in the end it's always going to be possible to grab your plaintext secret.
If you really need to prevent this from happening, then one option is to pass calls from your extension to a server you have access to. Your server can add any parameters required for signing, forward the call on to the relevant API, and pass the API's response back to the user. Of course, this adds bandwidth/uptime constraints which you may not want.
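A hedged sketch of that proxy, assuming a Node/Express server you control; the signature construction below follows the md5-of-secret-plus-sorted-params pattern RTM describes, but treat the details as an assumption and verify them against their docs:

    // proxy.js - the extension calls this route instead of RTM directly,
    // so the api_key / shared-secret pair never ships inside the extension
    const crypto = require('crypto');

    // assumes app.use(express.json()) is in place
    app.post('/proxy/rtm', async (req, res) => {
      const params = { ...req.body, api_key: process.env.RTM_API_KEY };
      const base = process.env.RTM_SHARED_SECRET +
        Object.keys(params).sort().map(k => k + params[k]).join('');
      params.api_sig = crypto.createHash('md5').update(base).digest('hex');

      const upstream = await fetch(
        'https://api.rememberthemilk.com/services/rest/?' +
        new URLSearchParams(params));
      res.type('text/xml').send(await upstream.text());
    });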
I'm looking forward to using Ember.js in a real-life project, but as someone coming from a Java background, I always care about security.
And when I tell my fellow Java developers that I've started using a JavaScript MVC framework, they say it's not secure enough: as JavaScript is all client-side, there is always a way to hack around your JavaScript code and find backdoors to your services and APIs.
So is there any good practice that can help prevent this kind of attack, or at least make it less effective?
There is always a way to hack around your JavaScript code and find backdoors to your services and APIs.
JavaScript presents few new problems for services and APIs that are already exposed to the web. Your server/service shouldn't blindly trust requests from the web, so using JavaScript doesn't alter your security posture, but it can lull people into a false sense of security by making them think they control the user-agent.
The basic rule of client/server security is still the same: don't trust the user-agent; place the server and the client on different sides of a trust boundary.
A trust boundary can be thought of as line drawn through a program. On one side of the line, data is untrusted. On the other side of the line, data is assumed to be trustworthy. The purpose of validation logic is to allow data to safely cross the trust boundary--to move from untrusted to trusted.
Validate everything that the server receives from the client, and, because XSS vulnerabilities are common and allow attackers to read information sent to the client, send as little sensitive information to the client as possible.
When you do need to round-trip data from the server to the client and back, you can use a variety of techniques:
Cryptographically sign the data so that the server can verify that it was not tampered with.
Store the data in a side-table and only send an opaque, unguessable identifier.
When you do need to send sensitive data to the client, you also have a variety of strategies:
Store the data in HTTP-only cookies which get attached to requests but which are not readable by JavaScript
Divide your client into iframes on separate origins, and sequester the sensitive information in an iframe that is especially carefully reviewed, and has minimal excess functionality.
Signing
Signing solves the problem of verifying that data you received from an untrusted source is data that you previously verified. It does not solve the problem of eavesdropping (for that you need encryption), or of a client that decides not to return data, or that decides to substitute different data signed with the same key.
Cryptographic signing of "pass-through" data is explained well by the Django docs which also outline how their APIs can be used.
The golden rule of Web application security is to never trust data from untrusted sources. Sometimes it can be useful to pass data through an untrusted medium. Cryptographically signed values can be passed through an untrusted channel safe in the knowledge that any tampering will be detected.
Django provides both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications.
You may also find signing useful for the following:
Generating “recover my account” URLs for sending to users who have lost their password.
Ensuring data stored in hidden form fields has not been tampered with.
Generating one-time secret URLs for allowing temporary access to a protected resource, for example a downloadable file that a user has paid for.
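The same low-level idea can be sketched in Node with an HMAC; the function and variable names here are mine, not Django's:

    const crypto = require('crypto');
    const SECRET = process.env.SIGNING_SECRET;

    // Sign a value before handing it to the client
    function sign(value) {
      const mac = crypto.createHmac('sha256', SECRET).update(value).digest('hex');
      return value + '.' + mac;
    }

    // Verify a value the client sent back; returns null on tampering
    function unsign(signed) {
      const i = signed.lastIndexOf('.');
      if (i < 0) return null;
      const value = signed.slice(0, i);
      const expected = crypto.createHmac('sha256', SECRET).update(value).digest();
      const given = Buffer.from(signed.slice(i + 1), 'hex');
      return expected.length === given.length &&
             crypto.timingSafeEqual(expected, given) ? value : null;
    }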
Opaque Identifiers
Opaque identifiers and side tables solve the same problem as signing, but require server-side storage, and require that the machine that stored the data have access to the same DB as the machine that receives the identifier back.
Side tables with opaque, unguessable identifiers can be easily understood by looking at this diagram:
Server DB Table
+--------------------+---------------------+
| Opaque Primary Key | Sensitive Info |
+--------------------+---------------------+
| ZXu4288a37b29AA084 | The king is a fink! |
| ... | ... |
+--------------------+---------------------+
You generate the key using a random or secure pseudo-random number generator and send the key to the client.
If the client has the key, all they know is that they possess a random number, and possibly that it is the same as some other random number they received from you, but cannot derive content from it.
If they tamper (send back a different key) then that will not be in your table (with very high likelihood) so you will have detected tampering.
If you send multiple keys to the same misbehaving client, they can of course substitute one for the other.
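A minimal sketch in Node, with an in-memory Map standing in for the DB table:

    const crypto = require('crypto');
    const sideTable = new Map(); // opaque key -> sensitive info

    function store(sensitiveInfo) {
      const key = crypto.randomBytes(16).toString('hex'); // unguessable
      sideTable.set(key, sensitiveInfo);
      return key; // safe to send to the client
    }

    function retrieve(key) {
      // A key we never issued means the client tampered (with very high likelihood)
      if (!sideTable.has(key)) throw new Error('tampering detected');
      return sideTable.get(key);
    }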
HTTP-only cookies
When you're sending a per-user-agent secret that you don't want to accidentally leak to other user-agents, HTTP-only cookies are a good option.
If the HttpOnly flag (optional) is included in the HTTP response header, the cookie cannot be accessed through client side script (again if the browser supports this flag). As a result, even if a cross-site scripting (XSS) flaw exists, and a user accidentally accesses a link that exploits this flaw, the browser (primarily Internet Explorer) will not reveal the cookie to a third party.
HttpOnly cookies are widely supported on modern browsers.
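Setting such a cookie in Express, as a sketch:

    // The session token is attached to every request to this origin,
    // but document.cookie cannot read it
    res.cookie('session', token, {
      httpOnly: true,  // invisible to client-side script
      secure: true,    // only sent over HTTPS
      sameSite: 'lax'
    });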
Sequestered iframes
Dividing your application into multiple iframes is the best current way to allow your rich client to manipulate sensitive data while minimizing the risk of accidental leakage.
The basic idea is to have small programs (secure kernels) within a larger program so that your security sensitive code can be more carefully reviewed than the program as a whole. This is the same way that qmail's cooperating suspicious processes worked.
Security Pattern: Compartmentalization [VM02]
Problem
A security failure in one part of a system allows another part of the system to be exploited.
Solution
Put each part in a separate security domain. Even when the security of one part is compromised, the other parts remain secure.
"Cross-frame communication the HTML5 way" explains how iframes can communicate. Since they're on different domains, the security sensitive iframe knows that code on other domains can only communicate with it through these narrow channels.
HTML allows you to embed one web page inside another, in the <iframe> element. They remain essentially separated. The container web site is only allowed to talk to its web server, and the iframe is only allowed to talk to its originating server. Furthermore, because they have different origins, the browser disallows any contact between the two frames. That includes function calls and variable accesses.
But what if you want to get some data in between the two separate windows? For example, a zwibbler document might be a megabyte long when converted to a string. I want the containing web page to be able to get a copy of that string when it wants, so it can save it. Also, it should be able to access the saved PDF, PNG, or SVG image that the user produces. HTML5 provides a restricted way to communicate between different frames of the same window, called window.postMessage().
Now the security-sensitive frame can use standard verification techniques to vet data from the (possibly-compromised) less-sensitive frame.
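A minimal sketch of that vetting; the origins and the saveDocument handler are hypothetical:

    // In the security-sensitive frame (https://secure.example.com)
    window.addEventListener('message', function (event) {
      // Only accept messages from the known, less-trusted frame
      if (event.origin !== 'https://app.example.com') return;

      var msg = JSON.parse(event.data); // parse, never eval
      if (msg.type === 'saveDocument' && typeof msg.body === 'string') {
        saveDocument(msg.body); // hypothetical handler
      }
    });

    // In the container frame, sending to the sensitive frame only
    var frame = document.getElementById('secure-frame');
    frame.contentWindow.postMessage(
      JSON.stringify({ type: 'saveDocument', body: documentString }),
      'https://secure.example.com');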
You can have a large group of programmers working efficiently at producing a great application, while a much smaller group works on making sure that the sensitive data is properly handled.
Some downsides:
Starting up an application is more complex because there is no single "load" event.
Iframe communication requires passing strings that need to be parsed, so the security-sensitive frame still needs to use secure parsing methods (no eval of the string from postMessage.)
Iframes are rectangular, so if the security-sensitive frame needs to present a UI, then it might not easily fit neatly into the larger application's UI.
I have an API (1) on which I have built a web application with its own AJAX API (2). The reason for this is not to expose the source API.
However, the web application uses AJAX (through jQuery) to get new data from its AJAX API; the data retrieved is currently in XML.
Lately I have secured the main API (1) with an authorization algorithm. However, I would like to secure the web application as well so it cannot be parsed. Currently it is being parsed to get the hash used to call the AJAX API, which returns XML.
My question: how can I improve the security and decrease the possibility of others being able to parse my web application?
The only ideas I have are: stop sending XML and send HTML instead, or use Flash (yet this is not an option).
I understand that since the site is public, and no login can be implemented, it can be hard to refuse access to bots (non-legitimate users). Also, Flash is not an option... it never is ;)
edit
The Web Application I am referring to: https://bikemap.appified.net/
This is somewhat of an odd request; you wish to lock down a system that your own web application depends on to work. This is almost always a recipe for disaster.
Web applications should always expect to be sidelined, so the real security must come from the server; tarpitting, session tokens, throttling, etc.
If that's already in place, I don't see any reason why you should jump through hoops in your own web application to give robots a tougher time ... unless you really want to separate humans from robots ;-)
One way to reduce the refactoring pain on your side is to wrap the $.ajax function in a piece of code that signs the outgoing requests (or somehow adds fields to them) ... then minify/obfuscate that code and hope it won't get decoded too fast.
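A sketch of such a wrapper; the "signature" here is deliberately obfuscation-grade, in keeping with the hope-it-won't-get-decoded framing, and a real scheme would have to be agreed with the server:

    // ajax-signed.js - route all calls through one wrapper that signs them
    function signedAjax(options) {
      options.data = options.data || {};
      options.data._ts = Date.now();
      // Stand-in signature: trivially reproducible, only slows attackers down
      options.data._sig = simpleHash(options.url + options.data._ts);
      return $.ajax(options);
    }

    function simpleHash(s) {
      var h = 0;
      for (var i = 0; i < s.length; i++) {
        h = (h * 31 + s.charCodeAt(i)) | 0;
      }
      return h.toString(16);
    }

    // Usage: signedAjax({ url: '/ajax/route', dataType: 'xml' }).done(render);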
Theoretically, JS runs in the browser, so after the first download it can easily be copied and made to run directly from a local copy, without going through the remote server. Because I need to sell a JS application (pay-as-you-use), I need to check each request and make the application available ONLY if it is requested by that particular site and, of course, only if the customer has paid.
It doesn't work. As soon as someone has downloaded a copy of the JavaScript file, he or she can always save a copy of it and even redistribute it.
Thus you cannot protect the JavaScript itself - but assuming you rely on some client-server interaction (i.e. AJAX), the server would not respond to requests coming from non-authorized sources, thus rendering the client-side worthless.
If you need to protect your business logic, don't put it into JavaScript. Alternatively, sue everybody who uses your scripts without having obtained a license (not sure if this is practical, though ...).
I wouldn't make the JS file that you plan to sell available directly on a URL like
yourdomain.com/yourfile.js
I would offer it on a URL like
yourdomain.com/getfile
Where /getfile is a URL that is processed by a server-side language (PHP, Java, etc.) where you can check whatever credentials you need to check, be it the requesting domain name, IP address, some token, or something else.
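A minimal sketch of that gate; the answer suggests PHP/Java, but the same shape in Node/Express looks like this (isPayingCustomer is a hypothetical check):

    const fs = require('fs');

    app.get('/getfile', async (req, res) => {
      const referer = req.get('referer') || '';
      const token = req.query.token;

      // Whatever credential checks apply: domain, IP, token, ...
      if (!(await isPayingCustomer(token, referer, req.ip))) { // hypothetical
        return res.status(403).send('// not licensed');
      }
      res.type('application/javascript');
      res.send(fs.readFileSync('./yourfile.js', 'utf8'));
    });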
If your application is written in Java, you can use a ServletFilter to check whether the request is valid (if the IP is correct, or maybe you can use a ticket like the Facebook, Twitter, what-you-want REST APIs do), and if it isn't valid, don't serve anything.
If you aren't using Java, I think something similar can be done in every programming language.
It may be a little more trouble than it's worth. Yes, you could require clients to provide a token and whitelist certain domains, etc. But they can still open any site that uses that particular JavaScript -- even someone else's -- and just Save As... .
A better bet is controlling the script's interaction with your server. If it makes any AJAX calls to a server you control, then take that chance to authenticate. If it doesn't depend on data from you in that way, I think you'll just have to face the problem that anyone dedicated enough will be able to download your script and use it with a little bit of playing around.
Your best bet is, in addition to the above, to keep track of domains that have paid and search every once in a while to find out whether anyone's taking your code.