I'm researching the possibility of using cloud storage directly from client-side JavaScript. However, I ran into two problems:
Security - the architecture is usually built on a per-cloud-client basis, so there is one API key (for example). This is problematic, since I need per-user security. I can't give the same API key to all my users.
Cross-domain AJAX. There are HTTP headers that browsers can use to allow cross-domain requests, but that means I would have to be able to set them on the cloud side. The only thing I need for this to work is the ability to add one custom HTTP response header: Access-Control-Allow-Origin: otherdomain.com.
My scenario involves a lot of simple queue messages from the JS client, and I thought I would use the cloud to take this traffic off my main hosting provider. Windows Azure has its Queue Service, which seems quite close to what I need, except that I don't know whether these problems can be solved.
Any thoughts? It seems to me that JavaScript clients for cloud services are an unavoidable scenario in the near future.
So, is there some cloud storage with a REST API that manages client authentication and does not hand the API key to the clients?
Windows Azure Blob Storage has the notion of a Shared Access Signature (SAS), which can be issued on the server side and is essentially a special URL that a client can write to without having direct access to the storage account key. This is the only mechanism in Windows Azure Storage that allows writing data without access to the storage account key.
A SAS can expire (e.g., give the user 10 minutes to use the SAS URL for an upload) and can be set up so that access can be revoked even after issue. Further, a SAS can be useful for time-limited read access (e.g., give the user one day to watch this video).
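For concreteness, here's a minimal server-side sketch using the current @azure/storage-blob Node SDK (which postdates this answer); the account, container, and blob names are made up:

    const {
      StorageSharedKeyCredential,
      generateBlobSASQueryParameters,
      BlobSASPermissions,
    } = require("@azure/storage-blob");

    // The account key stays on the server; only the signed URL goes to the client.
    const credential = new StorageSharedKeyCredential("myaccount", process.env.AZURE_STORAGE_KEY);

    const sas = generateBlobSASQueryParameters(
      {
        containerName: "uploads",
        blobName: "user-123/file.bin",
        permissions: BlobSASPermissions.parse("w"),       // write-only
        expiresOn: new Date(Date.now() + 10 * 60 * 1000), // valid for 10 minutes
      },
      credential
    ).toString();

    const uploadUrl =
      `https://myaccount.blob.core.windows.net/uploads/user-123/file.bin?${sas}`;
    // Hand uploadUrl to the browser; it can PUT the blob without ever seeing the key.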
If your JavaScript client is also running in a browser, you may indeed have cross-domain issues. I have two thoughts - neither tested! One is a JSONP-style approach (though this is limited to HTTP GET calls). The other (more promising) thought is to host the .js files in blob storage along with your data files so they are on the same domain (hopefully making your web browser happy).
The "real" solution might be Cross-Origin Resource Sharing (CORS) support, but that is not available in Windows Azure Blob Storage, and still emerging (along with other HTML 5 goodness) in browsers.
Yes, you can do this, but you wouldn't want your Azure key available on the client side for the JavaScript to access the queue directly.
I would have the JavaScript talk to a web service which checks the user's access rights and allows/disallows posting a message to the queue.
So the JavaScript only ever talks to the web services, and the web services handle talking to the queues.
It's a little too big a subject to post complete sample code, but hopefully this is enough to get you started.
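Still, a minimal sketch of that proxy, assuming Express and the @azure/storage-queue SDK (the queue name and access check are placeholders):

    const express = require("express");
    const { QueueServiceClient } = require("@azure/storage-queue");

    const app = express();
    app.use(express.json());

    const queueClient = QueueServiceClient
      .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
      .getQueueClient("messages");

    // Stand-in for whatever per-user access check you need.
    function isAuthorized(req) {
      return Boolean(req.headers["x-session-token"]);
    }

    app.post("/enqueue", async (req, res) => {
      if (!isAuthorized(req)) return res.status(403).send("Forbidden");
      await queueClient.sendMessage(JSON.stringify(req.body)); // key never leaves the server
      res.sendStatus(202);
    });

    app.listen(3000);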
I think the existing service providers do not allow you to query storage directly from the client. So, to resolve the issues:
you can write a simple server and expose REST APIs which authenticate based on an API key passed as a request parameter and return your client's specific data.
Have an embedded iframe and make the call to the second domain from the iframe. Get the returned JSON/XML in the parent frame and process the data.
Update:
Looks like Google already solves your problem. Check this out.
On https://developers.google.com/storage/docs/json_api/v1/libraries check the Google Cloud Storage JSON API client libraries section.
This can be done with Amazon S3, but not with Azure at the moment, I think. The reason is that S3 supports CORS:
http://aws.amazon.com/about-aws/whats-new/2012/08/31/amazon-s3-announces-cross-origin-resource-sharing-CORS-support/
but Azure does not (yet). Also, from your question it sounds like you want a queuing solution, which suggests Amazon SQS; however, SQS does not support CORS either.
If you need any complex queue semantics (like message expiry or long polling) then S3 is probably not the solution for you. However, if your queuing requirements are simple, S3 could be suitable.
You would have a web service called from the browser with the desired S3 object URL as a parameter. The role of the service is to authenticate and authorize the request and, if successful, generate and return a URL that gives temporary access to the S3 object using query string authentication.
http://docs.aws.amazon.com/AmazonS3/latest/dev/S3_QSAuth.html
A neat way might be to have the service simply redirect to the query-string-authenticated URL.
For those wondering why this is a Good Thing: it means you don't have to stream all the S3 object content through your compute tier. You just generate a query-string-authenticated URL (essentially just a signed string), which is a very cheap operation, and then rely on the massive scalability of S3 for the actual upload/download.
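For illustration, a sketch with the AWS SDK for JavaScript v3, where presigned URLs are the modern form of the query string authentication linked above (region, bucket, and key are placeholders):

    const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
    const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

    const s3 = new S3Client({ region: "us-east-1" });

    // Called only after the request has been authenticated and authorized.
    async function temporaryDownloadUrl(bucket, key) {
      const command = new GetObjectCommand({ Bucket: bucket, Key: key });
      return getSignedUrl(s3, command, { expiresIn: 600 }); // link dies after 10 minutes
    }

The service can return this URL to the browser, or simply 302-redirect to it as suggested above.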
Update: As of November 2013, Azure now supports CORS on table, queue and blob storage:
http://msdn.microsoft.com/en-us/library/windowsazure/dn535601.aspx
With Amazon S3 and Amazon IAM you can generate very fine-grained API keys for users (not only clients!); however, the full API would be a PITA to use from JavaScript, even if it's possible at all.
However, with CORS headers and a little server-side scripting, you can upload directly to S3 from HTML5 forms. This works by generating an upload link on the server side; the link has an embedded policy document that says what the upload form is allowed to upload: with which kind of key prefix ("directories"), content type, and so forth.
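A sketch of generating such a policy-bearing upload form with the AWS SDK for JavaScript v3 (bucket, key, and limits are made-up examples):

    const { S3Client } = require("@aws-sdk/client-s3");
    const { createPresignedPost } = require("@aws-sdk/s3-presigned-post");

    const s3 = new S3Client({ region: "us-east-1" });

    async function uploadForm() {
      // Returns { url, fields }: render these as hidden inputs in an HTML form.
      return createPresignedPost(s3, {
        Bucket: "my-bucket",
        Key: "uploads/user-123/photo.png",
        Conditions: [
          ["content-length-range", 0, 5 * 1024 * 1024], // cap uploads at 5 MB
          ["starts-with", "$Content-Type", "image/"],   // images only
        ],
        Expires: 600, // policy valid for 10 minutes
      });
    }

The browser form posts the returned fields plus the file directly to S3, and S3 itself enforces the embedded policy.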
Related
First, let me start with this: I understand JavaScript can be tampered with, so I'm not looking for a fool-proof solution. I have a public API that takes requests from external web applications. Sometimes the web applications hit our API directly, and other times they go through another API offered by some of our partners.
In a partner scenario, we want to ensure the API requests are ultimately coming from specific URLs. My idea is this:
We're allowed to offer a script that the web apps can add to their sites, so I was thinking we could set up an API endpoint whose job is to capture the request, verify the origin (URL), and spit out a token that they must ultimately send later with the real API request through the partner API.
Is there a better approach, or am I really just limited to the Origin header to identify the website? I was hoping there were additional data points I could leverage on the client side to verify the traffic is coming from a specific URL.
I need to send a particular header parameter in all AJAX calls, and it is very confidential information. I don't want the end user to see any of the requests made in the network tab of any browser. Is there any way to prevent this? Or is it possible to make the AJAX calls directly from the Node server so they don't go through the browser?
Any call made on the client side cannot be hidden, as it is the "client" side of the website. Even if you succeeded in hiding it in the browser, any software could monitor it with tools such as network sniffers/monitors, Wireshark for instance.
So the answer is no.
When you go to a restaurant and order something, can the waiter subsequently make you forget your last instruction/order? The answer is NO, and the same goes for this question.
It all starts with the client making a request to the server, so the client is the driving force of the whole interaction. The server just serves as per the client's instructions (and quietly does some extra work of its own, say auditing, database updates, cookie additions, etc.).
Hence there is no way a server can prevent the client from seeing its own instructions.
Simply don't send sensitive information directly via headers. Encrypt it in your client-side code and send it within cookies or any other HTTP header(s).
Quoting from the internet:
Client/server architecture is a producer/consumer computing architecture where the server acts as the producer and the client as a consumer. The server houses and provides high-end, computing-intensive services to the client on demand. These services can include application access, storage, file sharing, printer access and/or direct access to the server's raw computing power.

Client/server architecture works when the client computer sends a resource or process request to the server over the network connection, which is then processed and delivered to the client. A server computer can manage several clients simultaneously, whereas one client can be connected to several servers at a time, each providing a different set of services. In its simplest form, the internet is also based on client/server architecture where web servers serve many simultaneous users with website data.
Never trust the client. Ever. Never ever. Whatever you do, assume it's been cracked. Hackers have all the tools and complete control of the client and all software running on it. Assume they've written their own network stack, their own TLS implementation, their own browser, their own operating system...
If you need to keep something secure, keep it on your servers. If you need to communicate 'privileged' information (remembering that once you've sent it to a client they can access it), don't: tokenise it on your server and send them the token. And if you're generating tokens, make sure they're very random and utterly opaque - don't encrypt anything in the token, because you should assume that can be cracked too, regardless of how secure you think the library you're using is (assume it'll one day be cracked).
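For illustration, a minimal sketch of issuing such an opaque token in Node (the in-memory Map stands in for a real datastore):

    const crypto = require("crypto");

    const tokenStore = new Map(); // server-side only; the client never sees the mapping

    function issueToken(privilegedValue) {
      const token = crypto.randomBytes(32).toString("hex"); // 256 random bits, no structure
      tokenStore.set(token, privilegedValue);
      return token; // safe to send: it reveals nothing and can only be redeemed server-side
    }

    function redeemToken(token) {
      return tokenStore.get(token); // undefined if unknown or already revoked
    }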
Never expose confidential data on the client side.
The best practice is to encrypt your confidential data on the server side, send it to the client, and decrypt it on the server end when the client sends it back.
If you don't want encryption, or this confidential information is itself the result of user actions, then make a key-value pair in a database, where the key is something that can be exposed to the client (let's say the username) and the value is the confidential information. Now we have a 1-1 mapping, so fetch the confidential information server-side from the database using the key we get from the frontend.
I hope this will help.
Good Luck!!
I want to create a Javascript widget that my users can put on their websites.
The widget is capable of creating audio, which in turn costs my users money.
For the sake of illustration, let's say that every time a widget, placed on my user's site, is loaded by anyone on the internet (i.e. my users' users), I bill my user $1.
The widget is a Javascript code wrapped around an HTML audio player. The JS code makes a request to my backend API every time it is loaded, and upon receiving the response from my backend API, the player is constructed.
My concern is malicious usage by people who are not my users.
Let's say someone takes the widget's source code they found on a website that belongs to one of my users, and they put it on their site. They will, therefore, use my service but not pay for it. Instead, my actual user will pay for it (assuming I use a public API key as a way of distinguishing my users).
Usually, this is prevented by having a server-side library be responsible for any usages that might spend money. For example, I use Pusher as my WebSockets IaaS, and whenever I want to publish messages, I have to do it server-side, using their PHP SDK, with both private and public API keys.
In my use case, it's mandatory not to have a server-side library.
Question: how do I make sure that API requests I receive are legitimate?
I considered using the hostname where the widget is placed as a legitimacy measure. During the widget set-up, I could ask my users to whitelist certain (sub)domains and reject all requests that don't match the criteria, but this could be easily spoofed by, for example, a custom local domain or a curl-crafted request.
I understand this may not be possible.
It seems like what you're asking is closely related to the topic of client-side encryption. In most cases the answer would be no, it's not possible. However, in this case it may be possible to implement something along the following lines.

If you can get your clients to install a plugin (which you would build), you could encrypt your JS code after finishing it and have your server serve this encrypted file. Normally, where this falls short is that if you're sending an encrypted file, there needs to be a way for the client to decrypt it. This would require you to also serve an unencrypted JS file which would do the decoding, but by serving the unencrypted decoder you undo any security gained by encrypting your main JS file (the decryption file could easily be used to reverse-engineer your encryption method, or just straight up run for people other than your intended users).

Now, this is where having those API users (and the ability to communicate with them through means outside of server-client connections) comes in handy. If you build a decryption plugin and give it to the API users (you could issue a unique decryption key for each user, but without server access implementing unique user keys would be very difficult/impossible), the plugin could then decrypt your served file in their browser, essentially guaranteeing that only users you have given the 'key' to can access your software.

However, this approach has a few caveats. It implies that you trust your users enough that they wouldn't distribute the plugin (it would be against their interest to distribute it anyway, as it could lead to higher charges if people impersonate them). There are also probably a couple of other security concerns with this approach; however, I can't think of them right now. If any come to mind, I'll edit this post and add them.
Apparently I don't have enough reputation to comment yet, hence the post...
But in response to your post, I think that method seems much better than the one I suggested; I didn't realize you could control the API's response to the server.
I don't quite understand which of the following you mean:
a) Send a JS file to the user with the sole purpose of determining whether the player should also be sent (i.e. upon arriving, it pings the server with the client's API key/URL), and then the server serves the file (in which case your approach seems safe to me, but others may find security problems with it).
or
b) Send a file with both the JS and the audio player which, upon arriving, determines whether the URL and API key are correct, and then allows the audio player to function normally (sending the API key to the server to track usage, not as a security feature).
If you use option b, this does not improve security. If your code relies on security that runs on the client side, and the security system was sent by the same means as the code, then almost without exception the resulting design is flawed and inherently unsafe.
I hope this helps, and if you disagree / have more questions, feel free to comment!
How about sending the following parameters from the JavaScript widget to the API backend:
Public API key (e.g. bbbe3b259f881cfc796f468619eb9d)
Current URL (e.g. https://example.com/articles/chiang-mai-thailand-january-2016-june-2016)
I will use the API key to distinguish my users and the current URL to know which audio file to create (my widget will create an audio file based on the URL).
Furthermore, and this is crucial, I will have a user whitelist their domains and subdomains on my central site, where my users will get their widget code.
This is the same as what FB does for their integrations.
So if, for example, my backend API receives the aforementioned sample URL, and the user has set up the widget to only allow URLs belonging to foo.com and bar.baz.com, I will reject the audio creation process and display an error.
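For concreteness, a minimal sketch of that server-side check, assuming Express (the endpoint name and whitelist storage are made up):

    const express = require("express");
    const app = express();

    // Hypothetical per-user whitelist keyed by public API key.
    const whitelists = {
      "bbbe3b259f881cfc796f468619eb9d": ["foo.com", "bar.baz.com"],
    };

    app.get("/widget/audio", (req, res) => {
      const { apiKey, pageUrl } = req.query;
      const allowed = whitelists[apiKey] || [];
      let host;
      try {
        host = new URL(pageUrl).hostname;
      } catch {
        return res.status(400).send("Bad URL");
      }
      const ok = allowed.some((d) => host === d || host.endsWith("." + d));
      if (!ok) return res.status(403).send("Domain not whitelisted");
      // ...create and return the audio for pageUrl...
      res.sendStatus(200);
    });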
Do you see any issues with this approach?
I want to create an API at www.MyDomain.com that is accessible from the public websites www.Customer1.com and www.Customer2.com. These public websites display each customer's inventory and do not have any login features. They will use AJAX calls to read data from my API.
How can I secure the API so that it can be accessed via AJAX from different domains, but no one can use it to scrape all of my customers' data and all of their inventory?
I have tried thinking of different solutions on my own, but they would all either require people to log in to the public websites (which isn't an option) or require some secret "key" to be displayed publicly in the browser source code, where it could easily be stolen.
Any ideas would be greatly appreciated.
Thanks!
P.S. Are there any obstacles I'm going to run into using JavaScript & CORS that I need to look into now?
Anything that is accessible without authentication from a browser is by definition insecure, so you can't stop that. Your best bet is to have a relationship with the owners of customer1.com and customer2.com: the server apps for those two websites would make an HTTP call to you and authenticate with your service. Going this way also avoids the CORS issues you're talking about.
If you've already designed the client functionality, you can still probably do it without much change to the JavaScript: have it point to customer1.com for its AJAX call instead of your API, and customer1.com would accept this request and just act as a proxy to your API. Aside from the authentication, the rest of the request and response could just pass through to your API.
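For example, the proxy on customer1.com's server might look roughly like this (Express and Node 18+ global fetch assumed; the header name and env var are hypothetical):

    const express = require("express");
    const app = express();

    // The browser calls this same-origin route; only this server knows the real key.
    app.get("/inventory", async (req, res) => {
      const upstream = await fetch("https://www.MyDomain.com/api/inventory", {
        headers: { "x-api-key": process.env.MYDOMAIN_API_KEY },
      });
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(3000);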
You can use Microsoft.AspNet.WebApi.Cors.
You just need to add ONE line to the Web API config to use CORS in ASP.NET Web API:
config.EnableCors(new EnableCorsAttribute("*", "*", "*"));
View this for details.
The simplest way to provide a minimum of security here is a token system. Each app has its own token, or combination of tokens, which it must pass to the server to be verified. How you generate these tokens is up to you; other than being linked to the app's access, a token doesn't have to mean anything.
Provide a way for each API implementer to open an account with you. This way you will know who is accessing what and in some cases you can block/stop service.
For instance, a token can just be an MD5 hash:
7f138a09169b250e9dcb378140907378
In the database, this hash is linked to their account. On each request, they send this token with what they want. It is first verified to be valid, then the request is fulfilled. If the token is invalid, you can decide how to deal with it: either don't return anything, or return an "access denied" (or anything you want).
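A rough sketch of that verification as Express middleware (the header name and in-memory token store are stand-ins for your own scheme and database):

    const express = require("express");
    const app = express();

    const tokens = new Map([
      ["7f138a09169b250e9dcb378140907378", { app: "acme" }], // token -> account
    ]);

    function requireToken(req, res, next) {
      const account = tokens.get(req.header("x-api-token"));
      if (!account) return res.status(403).send("access denied");
      req.account = account; // handlers now know which app is calling
      next();
    }

    app.get("/data", requireToken, (req, res) => {
      res.json({ for: req.account.app });
    });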
One thing to avoid is having a single token for everyone, though this can be a starting point. The reason is that if some unauthorized app gets hold of this token and exploits it, you have to change the token for everyone, not just the app that leaked it. You also can't control whether someone has access to something or not.
Since you listed ASP.NET, I can also point you to WCF, which is fairly complex but has all the tools you need to set up a comprehensive web service for both you and your clients.
I hope this gives you a starting point!
EDIT:
There are security concerns here in case someone leaks their token key somehow. Make sure you set things up so that neither the app nor your service exposes the token in any way. Also have a flexible way of blocking a token, both by your clients and by you, if it happens that a token is exploited.
I like the way Google Maps' API is consumed, using a script include, but I'm worried:
My API is "semi-private", that is, accessible over the internet but it should allow for secure transmission of data and some kind of authentication. The data should remain private over the wire, and one consumer shouldn't be able to get at another's data.
How can I use SSL and some kind of authentication to keep the data secure, but still accessible "horizontally" from a plain HTML page with no server-side proxy required? Do I need to manage keys? How will the keys be posted to the server without being intercepted? Can I use OpenId (or some other 3rd-party authentication) to authenticate api users, or do I have to create my own authentication mechanism? I've been all over Google and can't find a good guide to designing and deploying my API securely.
Right now I'm using REST and AJAX to consume them, but cross-domain calls are impossible. Any help or a pointer in the right direction would be much appreciated.
I'd probably use a dynamically-generated script tag with an SSL URL that included a key in the query string that was public-key encrypted. The server would use the private key to decrypt the query string parameter and return script that included the relevant information (or didn't, if the key was invalid). Or something along those lines. But I'll admit that I haven't actually had to do it in practice.
I'd also look for prior art, like Amazon's S3 service.
So:
User provides secret
Client-side code uses public key to encrypt the secret
JavaScript appends a script tag that includes the URL
Server handles the script request, decrypts the secret, checks it, and sends back the relevant response.
You may well need two cycles, because otherwise the request to the server could be re-used via a man-in-the-middle attack. That would be:
JavaScript appends a script tag that requests a unique key (probably with some confounding information, like the source IP and some random further key)
Server responds with a one-time key tied to that IP
User provides secret
Client-side code uses public key to encrypt the secret, including the unique key from #1
JavaScript appends a script tag that includes the URL
Server handles the script request, decrypts the secret, checks it, and sends back the relevant response.
The response could well be encrypted (to some degree) using the random key included in #1
None of which I've actually done. (Or have I? BWAa-ha-ha-ha...) FWIW.
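If you wanted to try this today, the "encrypt the secret, put it in a script URL" step could be sketched with the Web Crypto API, which postdates this answer (the key constant and endpoint are placeholders):

    // PUBLIC_KEY_SPKI_B64: base64 of the server's public key in SPKI format.
    async function requestWithSecret(secret) {
      const spki = Uint8Array.from(atob(PUBLIC_KEY_SPKI_B64), (c) => c.charCodeAt(0));
      const key = await crypto.subtle.importKey(
        "spki", spki.buffer, { name: "RSA-OAEP", hash: "SHA-256" }, false, ["encrypt"]
      );
      const ct = await crypto.subtle.encrypt(
        { name: "RSA-OAEP" }, key, new TextEncoder().encode(secret)
      );
      const param = btoa(String.fromCharCode(...new Uint8Array(ct)));
      const s = document.createElement("script");
      s.src = "https://api.example.com/data.js?k=" + encodeURIComponent(param);
      document.head.appendChild(s); // server decrypts k with its private key
    }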
OAuth might help with this situation by having the user login to the 3rd-party application and allowing your application to access the 3rd-party on their behalf by using a request token when you make xhr requests. http://oauth.net/documentation/getting-started/
========
The reason for using a server-side proxy boils down to the Same-origin policy built into web browsers: http://en.wikipedia.org/wiki/Same_origin_policy
Essentially the browser only allows requests to be made to the address in which the page comes from (e.g. facebook.com can only make requests to facebook.com URIs). A server-side proxy solves this issue by making requests to servers outside the current origin. Server-side proxies are also the best practice for making requests like this.
Check out the open-source JavaScript Forge project. It provides a JavaScript TLS implementation that allows secure cross-domain XHR requests. It might be of use to you:
http://digitalbazaar.com/2010/07/20/javascript-tls-1/
http://digitalbazaar.com/2010/07/20/javascript-tls-2/
https://github.com/digitalbazaar/forge
One potential solution:
Set up an Apache server to run your site.
Get an SSL certificate for your site.
Install the Apache mod that comes with Forge to set up a cross-domain policy that allows other sites to access yours.
Host Forge's TLS implementation on your site along with your site's certificate in PEM format.
Tell other sites to include the javascript from your site and use it to make secure calls to your site to do whatever it is you want to.
(3rd party) Page uses OAuth or something similar to authenticate the user and get a token from your server.
Page loads an IFRAME from your server via SSL passing the token along for authentication.
The IFRAME can communicate securely to your server via SSL
Use easyXDM or something similar to communicate between the IFRAME and the 3rd party page, using some limited RPC-like or socket-like API you create.
Or, if you really don't trust the third party, do your authentication inside the iframe (no need for OAuth then, just use a plain HTML form) and communicate anything the outer page needs to know about the user using easyXDM.
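For reference, the raw postMessage mechanics that easyXDM wraps look roughly like this (the origins are placeholders):

    // On the third-party page:
    const frame = document.getElementById("secure-frame");
    window.addEventListener("message", (event) => {
      if (event.origin !== "https://secure.example.com") return; // trust only our iframe
      console.log("reply:", event.data);
    });
    frame.contentWindow.postMessage({ rpc: "getProfile" }, "https://secure.example.com");

    // Inside the iframe served from https://secure.example.com:
    window.addEventListener("message", (event) => {
      if (event.origin !== "https://thirdparty.example.com") return;
      event.source.postMessage({ name: "alice" }, event.origin);
    });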
Not too sure what the question is exactly; I take it you're attempting to do a jsonp-like call to [https://secure.com] in order to process/display data on [http://regular.com]?
Can the two servers talk to each other? How about something like this:
User logs in on [https://secure.com]
Upon authentication, secure.com generates a token (let's call it syntoken) and passes it directly to regular.com (server-to-server), consisting of, say, a session_id, some arbitrary message, and an OTP cipher (let's call it syncipher).
The browser receives a session_id cookie, and secure.com then redirects the browser to http://regular.com/setcookieandredirect?session_id=blabla&otpencryptedsynmessage=blabla
Regular.com looks up the OTP cipher using session_id as a key, and decrypts otpencryptedsynmessage "blabla."
If the decrypted message matches the original message in the syntoken, we can verify the user is logged in on [regular.com], and regular.com generates another token (let's call it acktoken, lolz) and passes it directly to [secure.com], consisting of the session_id, some arbitrary ack message, and a different OTP cipher (let's call it ackcipher).
Regular.com then sends the browser a cookie consisting of otpencryptedackmessage (let's name this cookie "verified_session").
Finish loading the page.
From there, you can do jsonp-like calls to
https://secure.com/getscript.js?query=dataname&verifiedtoken=(verified_sessions_cookie_value)
where secure.com/getscript.js takes the verifiedtoken, looks up the ackcipher using the session_id from the original cookie sent by [secure.com] as the key, and decrypts the otpencryptedackmessage. If the decrypted message matches the ack message, it renders the script file.
It's kinda like a 3-way handshake. The secret sauce is that the servers have to be able to talk to each other directly to pass secret keys discreetly. You don't have to use the same session_id for both servers; I was just using that as an easy point of reference to find a way to access the syn/ack OTP ciphers. The ciphers must be completely hidden from the public.
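The jsonp-like call itself could be as simple as this sketch (getCookie is a small helper, not a built-in):

    function getCookie(name) {
      const m = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
      return m ? decodeURIComponent(m[1]) : null;
    }

    const s = document.createElement("script");
    s.src = "https://secure.com/getscript.js?query=dataname&verifiedtoken=" +
            encodeURIComponent(getCookie("verified_session"));
    document.head.appendChild(s); // secure.com renders the script only if the token checks out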