Save data on S3 using JavaScript or jQuery

I want to collect data entered by the user in a browser and save it to Amazon S3. Is this something I can do with JavaScript/jQuery?

I know this is an old question, but I had the same issue and think I've found a solution. S3 has a REST interface to which you can POST data directly, without exposing your AWS Secret Key. So you can construct an AJAX POST request to your S3 bucket endpoint using JavaScript or jQuery. You can specify an access policy in the request as well, which restricts upload access to only certain buckets and certain directories.
Amazon verifies the authenticity of your requests using an HMAC signature which you provide in the request. The signature is constructed using details about the request and your AWS Secret Key, which only you and Amazon know, so fraudulent requests can't be made without someone having a valid signature.
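For concreteness, here is a minimal sketch of that server-side signing step in Node.js, assuming a hypothetical bucket and key prefix. This is the HMAC-SHA1 (Signature Version 2) style described above; newer AWS regions require Signature Version 4 instead, but the idea is the same:

```javascript
// Minimal sketch (Node.js): sign an S3 POST policy server-side so the
// browser never sees the secret key. Bucket name and prefix are assumptions.
const crypto = require('crypto');

const AWS_SECRET_KEY = process.env.AWS_SECRET_KEY; // never ship this to the browser

const policy = {
  expiration: new Date(Date.now() + 10 * 60 * 1000).toISOString(), // valid 10 minutes
  conditions: [
    { bucket: 'my-upload-bucket' },            // hypothetical bucket
    ['starts-with', '$key', 'uploads/'],       // restrict uploads to this prefix
    { acl: 'private' },
    ['content-length-range', 0, 10485760],     // max 10 MB
  ],
};

// Base64-encode the policy, then sign it with HMAC-SHA1 using the secret key.
const policyBase64 = Buffer.from(JSON.stringify(policy)).toString('base64');
const signature = crypto
  .createHmac('sha1', AWS_SECRET_KEY)
  .update(policyBase64)
  .digest('base64');

// Hand policyBase64 and signature to the browser; the secret key stays here.
```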

Yes it is possible, and as I already pointed in the comments of the accepted answer there are legitimate and useful uses to do so without compromising security and credentials.
You can post objects to S3 directly from the browser:
http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTObjectPOST.html
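For instance, a browser-side sketch of that POST, assuming your server already returned the base64 policy and signature (as in the signing sketch above); the bucket URL and access key id are placeholders:

```javascript
// Browser-side sketch of posting a file directly to an S3 bucket endpoint.
// policyBase64 and signature come from your server; bucket URL and the
// access key id below are placeholder assumptions.
async function uploadToS3(policyBase64, signature, file) {
  const form = new FormData();
  form.append('key', 'uploads/${filename}'); // literal ${filename}: S3 substitutes the real name
  form.append('AWSAccessKeyId', 'AKIA...');  // your *public* access key id (placeholder)
  form.append('acl', 'private');
  form.append('policy', policyBase64);
  form.append('signature', signature);
  form.append('file', file);                 // the file field must come last

  const res = await fetch('https://my-upload-bucket.s3.amazonaws.com/', {
    method: 'POST',
    body: form,
  });
  if (!res.ok) throw new Error('Upload failed: ' + res.status);
}
```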

Bad idea:
1) Think of how much fun people could have emptying your bank account when they find your S3 credentials embedded in your JavaScript code.
2) The JavaScript would be loaded from your server and would be trying to talk to Amazon's servers - the browser blocks that as cross-domain communication.
Something like this you'd want to handle on the server. You could easily whip up an AJAX interface to send the data client browser -> your server -> Amazon. That way your S3 credentials are stored on your server and not transmitted willy-nilly to everyone using your site.
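A rough sketch of that client -> your server -> Amazon pattern, using Node.js with Express, multer, and the aws-sdk package; the route, bucket, and field names are illustrative assumptions:

```javascript
// Sketch of a proxy endpoint: the browser uploads to your server, and your
// server (which holds the credentials) forwards the file to S3.
const express = require('express');
const multer = require('multer');   // parses multipart/form-data uploads
const AWS = require('aws-sdk');

const app = express();
const upload = multer({ storage: multer.memoryStorage() });
const s3 = new AWS.S3();            // credentials come from the server environment

app.post('/upload', upload.single('file'), (req, res) => {
  s3.upload(
    {
      Bucket: 'my-upload-bucket',   // hypothetical bucket
      Key: 'uploads/' + req.file.originalname,
      Body: req.file.buffer,
    },
    (err, data) => {
      if (err) return res.status(500).send(err.message);
      res.json({ location: data.Location });
    }
  );
});

app.listen(3000);
```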

Maybe have a look at Node.js and try the aws-sdk package:
npm install aws-sdk
There are blog posts and AWS documentation that show how to upload files to S3 with it.
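A minimal Node.js sketch with the aws-sdk (v2) package; the bucket, key, and file name are assumptions:

```javascript
// Upload a local file to S3 with the aws-sdk package.
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3(); // credentials are read from the environment/shared config

s3.putObject(
  {
    Bucket: 'my-upload-bucket',           // hypothetical bucket
    Key: 'docs/report.pdf',               // hypothetical key
    Body: fs.readFileSync('./report.pdf'),
  },
  (err, data) => {
    if (err) throw err;
    console.log('Uploaded, ETag:', data.ETag);
  }
);
```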

There are a variety of issues with attempting to access S3 via client-side code:
- There is no way to secure your credentials.
- Many responses are in XML instead of JSON, and XML parsing in JavaScript is heavy and slow.
- Authenticating the requests would require a JavaScript implementation of HMAC-SHA1.
- There are issues with making cross-domain requests from JavaScript without routing through a proxy.
All in all, there are no feasible solutions for client-side JavaScript at the moment. If you're interested in server-side JavaScript, there are some S3 classes floating around GitHub for Node.js.

Related

Best way to handle Google Drive uploads from a back-end server

I was wondering what could be the best way to handle Google Drive uploads from my web application.
In this web application the Google Drive access tokens are available with the back-end server.
I have two simple ways in my mind currently:
Upload directly via front-end:
The front-end app can make an authenticated request to my back-end for a Drive access token. My back-end can then return the access token to the front-end, which can then upload the file directly to Google Drive.
This approach is relatively simple, and there is only a single point of failure in the upload process. My back-end also doesn't have to deal with any upload-related logic. On the other hand, I have to expose the Drive access token to the front-end, which may not be too bad.
Proxying the upload via my back-end:
With this approach my front-end will first upload the file to my back-end, and my back-end will then upload the file to Google Drive. This way I don't have to expose the access token to the client. But it has a lot of disadvantages: for example, there are now multiple points of failure in the upload process, and my back-end needs to implement all the upload logic and deal with large file uploads. That's why I am not so comfortable with this approach.
Is there any better / standard way of handling this thing?
I would suggest uploading files directly from the front-end. That is how it is usually done, it's more efficient, and it doesn't have to mean less security. If you have any doubts about this, you can open a new question explaining the technologies you use and the purpose of the web app.
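For illustration, a minimal front-end sketch using the Drive v3 simple-upload endpoint; the `accessToken` (returned by your back-end) and `file` (from a file input) are assumptions:

```javascript
// Browser-side sketch: upload straight to Google Drive with a token the
// back-end returned.
async function uploadToDrive(accessToken, file) {
  const res = await fetch(
    'https://www.googleapis.com/upload/drive/v3/files?uploadType=media',
    {
      method: 'POST',
      headers: {
        Authorization: 'Bearer ' + accessToken,
        'Content-Type': file.type || 'application/octet-stream',
      },
      body: file,
    }
  );
  if (!res.ok) throw new Error('Drive upload failed: ' + res.status);
  return res.json(); // contains the new file's id
}
```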

Download Azure blob using purely JavaScript and no Node.js?

I have a Cordova application which downloads a zip file as a blob from Azure. Since I am very new to Azure, I would like to know whether it is okay, security-wise, to access an Azure blob with a SAS URL from the Cordova application.
My point is that I would need to append the shared access signature (SAS) token to the blob url, something like below.
https://myazureportal.container.blobs/myblob?MY_SAS
This way my JavaScript code will have the SAS hard-coded. What is the correct approach? I would prefer to access the blob using JavaScript only, preferably without writing any server-side code if possible.
If I use the SAS inside the JavaScript files of my Cordova application, is it a security flaw? If so, is there any approach to implement the same using purely JavaScript?
Things I tried:
I created a back-end Web API service in ASP.NET Core; this way I am able to download the blob file, but what I am looking for is a pure JavaScript approach.
Apart from the point mentioned by Eric about code being decompiled, there are a few other things you would need to worry about.
If you are embedding the SAS URL in your application, you will have to make it long-lived, i.e. with an expiry date far out in the future. That's a security risk and is against best practices.
A shared access signature is created using an account key and becomes invalid the moment you regenerate your account key. If you're embedding SAS URL in your application and have to regenerate your account key for any reason, your SAS URL becomes essentially useless.
You can learn more about the best practices for SAS Token here: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview#best-practices-when-using-sas.
Yes it is a security flaw as your app can be decompiled and your code inspected. If you want to keep this approach, at least have a login connected to a back-end that sends the SAS back to your front-end.
Ideally you would do everything in the back-end and return the blob to your front-end.
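To make that recommendation concrete, here is a minimal back-end sketch (Node.js) that issues a short-lived, read-only SAS with the @azure/storage-blob package; the account, container, and blob names are assumptions:

```javascript
// Issue a short-lived, read-only SAS URL for one blob on the server side,
// so the account key and long-lived tokens never reach the client.
const {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions,
} = require('@azure/storage-blob');

function issueSas(accountName, accountKey, containerName, blobName) {
  const credential = new StorageSharedKeyCredential(accountName, accountKey);
  const sas = generateBlobSASQueryParameters(
    {
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse('r'),       // read-only
      expiresOn: new Date(Date.now() + 10 * 60 * 1000), // valid 10 minutes
    },
    credential
  ).toString();
  return `https://${accountName}.blob.core.windows.net/${containerName}/${blobName}?${sas}`;
}
```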

How to pass credentials to AWS STS GetSessionToken

I wrote a JavaScript file manager to manage user files on Amazon S3. It uses the AWS JavaScript API. I developed it using hard-coded IAM user credentials, and now for production I want to use temporary credentials instead.
My plan is for our PHP server to generate the temporary credentials from the IAM credentials, via an AJAX callback from the JS code to PHP, using STS GetSessionToken. Seems simple enough, but I can't find any documentation on how to pass the IAM key/secret to GetSessionToken in the URL. The examples in the AWS docs all show something like:
https://sts.amazonaws.com/?Version=2011-06-15&Action=GetSessionToken&DurationSeconds=3600&AUTHPARAMS
Where I guess "AUTHPARAMS" is so obvious that I should not need any further explanation. But sadly, I do need further explanation. All I need from the PHP side of things is this one little call, so I didn't really want to install the whole AWS PHP SDK just for this. If I can just find out how to build the URL for this one call, then I can send it off via cURL and be all set. At least that was the plan.
Is there a way to call GetSessionToken directly via the REST api, and pass it the IAM key/secret, or is it really more complicated than that?
You need to learn about AWS API request signing; the specifics are in the AWS Signature Version 4 documentation. Note that AWS regions opened after January 30, 2014 require v4 signing, while earlier regions accept v2 or v4 signing.
Or just use the PHP SDK which makes it all much simpler.
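As a sketch of the SDK route, here is the same call in this document's JavaScript (Node.js, aws-sdk v2); the PHP SDK call is analogous, and the SDK does all the request signing for you:

```javascript
// Exchange long-term IAM credentials (from the server environment) for
// temporary credentials via STS GetSessionToken.
const AWS = require('aws-sdk');

const sts = new AWS.STS();

sts.getSessionToken({ DurationSeconds: 3600 }, (err, data) => {
  if (err) throw err;
  // Hand these temporary credentials to the browser-side file manager.
  const { AccessKeyId, SecretAccessKey, SessionToken, Expiration } =
    data.Credentials;
  console.log('Temporary key', AccessKeyId, 'expires', Expiration);
});
```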

How to secure an API used only from front-end (Ajax call)

Well, I created an API for our websites to manage attachment uploads and store them into Amazon S3 buckets.
The scenario: a visitor/user fills in the form and wants to submit it with an attachment. Once the file is selected and the button clicked, an Ajax request fires to the microservice API, which stores the file in S3, does some processing, then returns the direct link or identifier.
The question is: how can we authenticate the user, using for example a short-lived token or something like that, without the token being hijacked or misused?
In JavaScript everything is visible to the visitor, and we are trying not to put any heavy processing in the back-end.
If I got your question straight, you have a web interface in which files are uploaded to an S3 bucket and you need to make sure that in a certain back end API (such as REST) all file upload commands will have authentication and authorization.
The answer is highly dependent on your architecture but generally speaking, all Javascript calls are nothing but HTTP calls. So you need HTTP authentication/authorization. In general, the most straightforward method for REST over HTTP is the basic authentication, in which the client sends a credential in every single request. This may sound odd at first but it is quite standard since HTTP is supposed to be stateless.
So the short answer, at least for the scenario I just described, would be to ask the user to provide credentials that the JavaScript keeps on the client side, then send them as basic authentication, which the REST interface can understand. The server-side processes then get that information and decide whether a certain file can be written to a certain S3 bucket.
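For illustration, a minimal browser-side sketch of that basic-authentication call; the endpoint URL and field name are assumptions:

```javascript
// Attach HTTP Basic credentials to the upload request; the micro service
// validates them before writing to S3.
async function uploadAttachment(username, password, file) {
  const form = new FormData();
  form.append('file', file);

  const res = await fetch('https://api.example.com/attachments', {
    method: 'POST',
    headers: {
      // Basic auth is base64("username:password"); only ever send it over HTTPS.
      Authorization: 'Basic ' + btoa(username + ':' + password),
    },
    body: form,
  });
  return res.json(); // e.g. the direct link or identifier from the micro service
}
```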

Cloud API with JavaScript (Amazon, Azure)

I'm researching a possibility of using some cloud storage directly from client-side JavaScript. However, I ran into two problems:
Security - the architecture is usually built on a per-cloud-client basis, so there is one API key (for example). This is problematic, since I need per-user security; I can't give the same API key to all my users.
Cross-domain AJAX. There are HTTP headers that browsers can use to allow cross-domain requests, but this means I would have to be able to set them on the cloud side. The only thing I need for this to work is the ability to add a custom HTTP response header: Access-Control-Allow-Origin: otherdomain.com.
My scenario involves a lot of simple queue messages from the JS client, and I thought I would use the cloud to get this traffic off my main hosting provider. Windows Azure has its Queue Service, which seems quite close to what I need, except that I don't know whether these problems can be solved.
Any thoughts? It seems to me that JavaScript clients for cloud services are unavoidable scenarios in the near future.
So, is there some cloud storage with REST API that offers management of clients' authentication and does not give the API key to them?
Windows Azure Blob Storage has the notion of a Shared Access Signature (SAS) which could be issued on the server-side and is essentially a special URL that a client could write to without having direct access to the storage account API key. This is the only mechanism in Windows Azure Storage that allows writing data without access to the storage account key.
A SAS can be expired (e.g., give user 10 minutes to use the SAS URL for an upload) and can be set up to allow for canceling access even after issue. Further, a SAS can be useful for time-limited read access (e.g., give user 1 day to watch this video).
If your JavaScript client is also running in a browser, you may indeed have cross-domain issues. I have two thoughts - neither tested! One thought is JSONP-style approach (though this will be limited to HTTP GET calls). The other (more promising) thought is to host the .js files in blob storage along with your data files so they are on same domain (hopefully making your web browser happy).
The "real" solution might be Cross-Origin Resource Sharing (CORS) support, but that is not available in Windows Azure Blob Storage, and still emerging (along with other HTML 5 goodness) in browsers.
Yes, you can do this, but you wouldn't want your Azure key available on the client side for the JavaScript to be able to access the queue directly.
I would have the javascript talking to a web service which could check access rights for the user and allow/disallow the posting of a message to the queue.
So the javascript would only ever talk to the web services and leave the web services to handle talking to the queues.
It's a little too big a subject to post sample code, but hopefully this is enough to get you started.
I think that the existing service providers do not allow you to query storage directly from the client. So in order to resolve the issues:
you can write a simple server and expose REST APIs that authenticate based on an API key passed as a request parameter, and return your client's specific data.
Have an embedded iframe and make the call to the 2nd domain from the iframe. Get the returned JSON/XML in the parent frame and process the data.
Update:
Looks like Google already solves your problem. Check this out.
On https://developers.google.com/storage/docs/json_api/v1/libraries check the Google Cloud Storage JSON API client libraries section.
This can be done with Amazon S3, but not Azure at the moment, I think. The reason for this is that S3 supports CORS:
http://aws.amazon.com/about-aws/whats-new/2012/08/31/amazon-s3-announces-cross-origin-resource-sharing-CORS-support/
but Azure does not (yet). Also, from your question it sounds like a queuing solution is what you want, which suggests Amazon SQS, but SQS does not support CORS either.
If you need any complex queue semantics (like message expiry or long polling) then S3 is probably not the solution for you. However, if your queuing requirements are simple then S3 could be suitable.
You would have to have a web service called from the browser with the desired S3 object URL as a parameter. The role of the service is to authenticate and authorize the request, and if successful, generate and return a URL that gives temporary access to the S3 object using query string authentication.
http://docs.aws.amazon.com/AmazonS3/latest/dev/S3_QSAuth.html
A neat way might be to have the service just redirect to the query string authentication URL.
For those wondering why this is a Good Thing, it means that you don't have to stream all the S3 object content through your compute tier. You just generate a query string authenticated URL (essentially just a signed string) which is a very cheap operation and then rely on the massive scalability provided by S3 for the actual upload/download.
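A sketch of that service's core with Node.js and the aws-sdk (v2) package; authorize the caller first, then return (or redirect to) the signed URL instead of streaming the object yourself. The bucket and key are assumptions:

```javascript
// Generate a query-string-authenticated (presigned) URL for an S3 object.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

function getTemporaryUrl(bucket, key) {
  return s3.getSignedUrl('getObject', {
    Bucket: bucket,
    Key: key,
    Expires: 600, // URL is valid for 10 minutes
  });
}

// e.g. res.redirect(getTemporaryUrl('my-bucket', 'videos/intro.mp4'));
```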
Update: As of November 2013, Azure supports CORS on table, queue, and blob storage:
http://msdn.microsoft.com/en-us/library/windowsazure/dn535601.aspx
With Amazon S3 and Amazon IAM you can generate very fine-grained API keys for users (not only clients!); however, the full API would be a PITA to use from JavaScript, even if possible.
However, with CORS headers and a little server scripting, you can upload directly to S3 from HTML5 forms; this works by generating an upload link on the server side. The link has an embedded policy document that tells S3 what the upload form is allowed to upload and with which key prefix ("directories"), content type, and so forth.
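A sketch of what such a policy document might look like; the values are assumptions, and the server base64-encodes and signs it exactly as in the first answer above:

```javascript
// Example policy document: the upload form may only write under the given
// prefix, with the given content type and size limit.
const policy = {
  expiration: '2025-01-01T00:00:00Z',
  conditions: [
    { bucket: 'my-upload-bucket' },
    ['starts-with', '$key', 'user-uploads/'],   // the allowed "directory"
    ['starts-with', '$Content-Type', 'image/'], // images only
    ['content-length-range', 0, 5242880],       // max 5 MB
    { acl: 'private' },
  ],
};
// Base64-encode, sign with the secret key, and embed the result in the form.
```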
