Storing AWS credentials in the frontend - javascript

I am trying to get hold of my image objects in S3 from my JavaScript frontend application.
According to the documentation, these are the steps required:
import * as AWS from "aws-sdk";
AWS.config.update({accessKeyId, secretAccessKey, region});
let s3 = new AWS.S3();
And then, you can get the objects like so:
function listObjects(bucketName, folderName) {
  return new Promise((resolve) => {
    s3.listObjects({Bucket: bucketName, Prefix: folderName}).promise()
      .then((data) => {
        resolve(data.Contents);
      });
  });
}
All seems to work correctly, but what worries me is that I also need to keep the accessKeyId and the secretAccessKey in my frontend application in order to access the bucket.
How does one secure the bucket, or access the objects without providing this confidential data?

You're right to worry. Anyone will be able to take the credentials out of your app. There are a few approaches to this:
If the objects aren't actually sensitive, then nothing is lost if the credentials can only take the actions you wish to allow everyone. For that matter, you should be able to get rid of the need for credentials altogether if you set the permissions on your bucket properly, which I think can include list permissions if necessary.
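If you go the public-bucket route, here is a minimal sketch of what applying such a policy could look like with the v2 SDK, run once from a privileged environment and never from the frontend (the bucket name and region are placeholders, not anything from the question):
// Run once from an admin context, NOT from the frontend.
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ region: "eu-west-1" }); // region is an assumption

const publicReadPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "PublicRead",
      Effect: "Allow",
      Principal: "*",
      Action: ["s3:GetObject"], // add a separate statement for "s3:ListBucket" on the bucket ARN only if you truly need public listing
      Resource: ["arn:aws:s3:::my-public-assets/*"] // hypothetical bucket name
    }
  ]
};

s3.putBucketPolicy({
  Bucket: "my-public-assets",
  Policy: JSON.stringify(publicReadPolicy)
}).promise()
  .then(() => console.log("Bucket is now publicly readable"))
  .catch(console.error);
On newer accounts the Block Public Access settings also have to permit public bucket policies before this takes effect.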
If the objects are sensitive, then you already have some sort of authentication system for your users. If you're using OAuth accounts to authenticate (Google, Amazon, Facebook, etc.) then you can use AWS Cognito to generate short-lived AWS credentials that are associated with that user, which would allow you to differentiate permissions between users. It's pretty slick and a great fit if you're already using OAuth. If you're not using OAuth, consider whether you should be; it's far more secure than having to handle your own auth credentials layer for your users. https://aws.amazon.com/cognito/
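For the Cognito route, a rough sketch of what the browser side could look like with the v2 SDK; the region, identity pool ID, bucket name and the googleIdToken variable are all placeholders for whatever your own OAuth flow produces:
// Browser-side sketch: exchange an OAuth login for short-lived AWS credentials.
AWS.config.region = "eu-west-1"; // assumption
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: "eu-west-1:00000000-0000-0000-0000-000000000000", // placeholder pool id
  Logins: {
    "accounts.google.com": googleIdToken // ID token obtained from your OAuth flow
  }
});

AWS.config.credentials.get((err) => {
  if (err) return console.error(err);
  // The SDK now holds temporary, per-user credentials scoped by the identity pool's IAM role.
  const s3 = new AWS.S3();
  s3.listObjects({ Bucket: "my-private-bucket", Prefix: "user-uploads/" })
    .promise()
    .then((data) => console.log(data.Contents))
    .catch(console.error);
});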
If you don't want to or can't use Cognito, you can still assume an AWS role from the backend and generate temporary credentials that automatically expire (anywhere from 15 minutes to an hour or more) and then pass those credentials to the front end. I'd call it "poor man's Cognito", but it's probably actually more expensive to run the infrastructure to provide the service than Cognito costs.
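A sketch of that "poor man's Cognito" variant, assuming an Express backend and a pre-created, narrowly scoped IAM role; the role ARN and route name are made up:
// Backend-only: hand the browser short-lived credentials for a restricted role.
const express = require("express");
const AWS = require("aws-sdk");
const sts = new AWS.STS();
const app = express();

app.get("/api/s3-credentials", /* your auth middleware here, */ (req, res) => {
  sts.assumeRole({
    RoleArn: "arn:aws:iam::123456789012:role/frontend-s3-readonly", // hypothetical role
    RoleSessionName: "frontend-session",
    DurationSeconds: 900 // 15 minutes, the minimum
  }).promise()
    .then(({ Credentials }) => res.json(Credentials)) // AccessKeyId, SecretAccessKey, SessionToken, Expiration
    .catch((err) => res.status(500).json({ error: err.message }));
});

app.listen(3000);
The frontend would then feed those three values into AWS.config.credentials and refresh them before Expiration.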
Or, as @Tomasz Swinder suggests, you can simply proxy the requests through your application, resolving the asset the user requests to an S3 resource, pulling it in your backend, and then serving it to your user. This is an inferior solution in most cases because your servers are farther away from the end user than S3's endpoints are likely to be, and you have to run infrastructure to proxy. That said, it has its place.
Finally, pre-signed S3 URLs may be a good fit for your application. Typically a backend would sign the S3 URLs directly before providing them to the user. The signature is enough to authorize the operation (which can be PUT or GET) but doesn't itself contain the private key used to sign; in other words, presigned URLs provide an authorized URL but not the credentials used to authorize them, so they're a great way to provide ad hoc authorization to S3.
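For the presigned-URL approach, a minimal backend sketch with the v2 SDK (bucket name, key and region are placeholders):
// Backend: sign a time-limited GET URL; the frontend just uses it as a normal URL.
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ signatureVersion: "v4", region: "eu-west-1" }); // region is an assumption

function getDownloadUrl(key) {
  return s3.getSignedUrlPromise("getObject", {
    Bucket: "my-private-bucket", // hypothetical
    Key: key,
    Expires: 900 // seconds
  });
}

getDownloadUrl("images/cat.jpg").then((url) => {
  // Hand this URL to the browser; no AWS credentials ever leave the backend.
  console.log(url);
});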
Overall it's really awesome to have a backend-free application, and for that you're going to need a third-party auth and something like Cognito. But once you start using it, you can then use all sorts of AWS services to provide what would otherwise be done by a backend. Just be careful with permissions, because AWS is all pay-as-you-go and there's usually no capability to limit the calls to a service, so a malicious internet user could drive up your AWS bill by making tons of calls with the temporary creds you've provided them. One notable exception to that is API Gateway, which does allow per-user rate limits and therefore is a great fit for a Cognito-authorized serverless backend.
Also bear in mind that LISTing S3 objects is both much slower and much more expensive (still cheap per operation, but roughly 10x) than GETting S3 objects, so it's usually best to avoid calling LIST whenever possible. I'm just throwing that out there; I suspect you're just doing that to test out the S3 connection.

Could you request that through your server, or is it a static site?
If this is a static site, you can create an IAM user for S3 that can only read the content that you are going to show in the frontend anyway.

One method we use is to store the credentials in a .env file and use dotenv (https://github.com/motdotla/dotenv) to read in the variables. These can then be accessed through process.env. For example, the .env file would contain:
AWSKEY=1234567abcdefg
AWSSECRET=hijklmn7654321
REGION=eu-west
Then in your code you would call require('dotenv').config() to read the environment variables. You then access them as:
AWS.config.update({accessKeyId: process.env.AWSKEY, secretAccessKey: process.env.AWSSECRET, region: process.env.REGION});
Make sure that the .env file is not committed into your repo. If you want, you could keep a .env.example and instructions on how to create a .env for either a dev or production install.
As for securing the bucket, you can do so by restricting the read/write access to an IAM user that owns the AWS key/secret pair.

Related

Authenticate a Google Kubernetes cluster certificate over https?

I'm a little lost trying to get access to a Kubernetes cluster hosted on Google Kubernetes Engine.
I would like to use a cluster certificate to authenticate to the provided kubernetes endpoint, so that I can run API requests to the kubernetes api, like creating deployments for example.
I'm trying to create deployments from an external API (from a NodeJS app, hosted on google app engine), where I will automatically create deployments on various different kubernetes clusters, not necessarily in the same google project.
I used to use basic auth to authenticate to the Kubernetes API; this was trivial, all I needed was the username and password for a cluster, and then to base64 encode the two and put it in an Authorization header. I did this using axios and had no problems at all.
Now I'd like to switch over to using client certificates and I think I lack some understanding.
I guess I need to get the provided endpoint ip of the cluster, download the cluster certificate provided by google... that looks something like this:
...possibly base64 encode it and save it as a .crt, .cert, or ??.pem file and point axios to the file using a httpagent? (I tried saving the raw data as a .crt and .cert file, setting it as a httpagent and this unsurprisingly didn't work).
Do I need some kind of client/server key pair for the certificate, or maybe an API key?
I also read something about setting a Bearer token as an Authorization header, I guess this needs to be paired with the certificate, but I'm unsure where I can find/generate this token?
If there's anyone who can help with this obscure issue I'd be very grateful,
Thanks in advance!
P.S. I've been trying to decipher the K8s docs and I think I'm close, but I'm still not sure I'm looking at the right docs: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
I've resolved the issue and I thought I'd post an answer in case someone is in my situation and needs a reference.
Alberto's answer is a great one, and I think this is the most secure mindset to have; it's definitely preferable to keep everything internal to the Google environment. But in my case, even if just for testing, I really needed to interface with the Kubernetes API over https from another domain.
It turns out that it was simpler than I anticipated: instead of using the CA certificate, what I needed to do was to fetch a service account token from Google Cloud Shell and place this in the Authorization header (very similar to basic auth). (In addition I should check the validity of the Kubernetes endpoint by using the service account's client certificate, but I did not do this.)
Once in the google cloud shell and authenticated to the relevant cluster run the following:
1. kubectl get serviceaccount
2. kubectl get serviceaccount default -o yaml
3. kubectl get secret default-token-<some_id_string> -o yaml
4. kubectl create clusterrolebinding default-serviceaccount-admin --clusterrole=cluster-admin --serviceaccount=default:default
Point 1 displays whether there is in fact a default service account present, this should be the case with a cluster created on Google Kubernetes Engine.
Point 2 prints out the default service account's details, including the name of its token secret, something like "default-token-abcd"
Point 3 prints out the token as well as its corresponding certificate. The token is printed as a base64-encoded string and needs to be decoded before it can be used in an HTTP request.
Point 4 assigns the default service account a role (part of the Kubernetes RBAC thing) so that it has the correct permissions to create and delete deployments in the cluster.
Then in postman or wherever you're making the Kubernetes api call, set an Authorization header as follows:
Authorization: Bearer <base64 decoded token received in point 3>
Now you can make calls directly to the kubernetes api, for example, to create or delete deployments.
As mentioned above, the certificate, also provided as a base64-encoded string in Point 3, can be used to verify the Kubernetes endpoint, but I haven't tested this (definitely a more secure way of doing things).
In addition I was making the HTTP call in a Node.js script and I had to set an environment variable process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; to override the certificate check... definitely not a safe way to go and not recommended.
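For reference, a hedged sketch of the same call made from Node with axios, passing the decoded token and, instead of disabling TLS checks, the cluster CA via an https agent; the endpoint IP and the token/CA placeholders are made up:
// Node sketch: call the Kubernetes API with a service account token and the cluster CA.
const axios = require("axios");
const https = require("https");

const clusterEndpoint = "https://203.0.113.10";           // placeholder cluster endpoint
const token = "<decoded service account token>";          // from point 3, base64-decoded
const caCert = Buffer.from("<base64 ca.crt from point 3>", "base64"); // decoded CA certificate

const client = axios.create({
  baseURL: clusterEndpoint,
  headers: { Authorization: `Bearer ${token}` },
  httpsAgent: new https.Agent({ ca: caCert }) // verifies the endpoint instead of NODE_TLS_REJECT_UNAUTHORIZED=0
});

// List deployments in the default namespace as a smoke test.
client.get("/apis/apps/v1/namespaces/default/deployments")
  .then((res) => console.log(res.data.items.map((d) => d.metadata.name)))
  .catch((err) => console.error(err.response ? err.response.status : err.message));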
I'm not sure if using manual authentication is the best for this case, as one of the good features of Kubernetes Engine is that it will provide authentication configuration for kubectl for you automatically, via the ‘gcloud container clusters get-credentials’ command.
There are also some resources (like connecting to another cluster) that are controlled via GCP IAM permissions instead of inside Kubernetes.
I would advise you to take a deep look at this page:
https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication
This should also handle the certificate for you.
In general I would advise you to check the GKE documentation first before the kubernetes one, as GKE is not only just Kubernetes but it also makes managing some things way easier.

How to show objects from an s3 bucket that is not public

I am trying to show objects from an S3 bucket that is not public. In order to do this I would have to provide the access and secret keys to AWS.
I have this fiddle (without the keys) but it does not work when I enter the correct keys: http://jsfiddle.net/jsp3wzbu/
<section ng-app data-ng-controller="myCtrl">
<img ng-src="{{s3url}}" id="myimg">
</section>
Also, how is security handled? I would not want to store the access/secret keys in my client code because users will see them. My server code keeps these keys in environment variables, and I fear that if I share them with my client-side JS code, then they will be exposed. Is there any other way for me to show the S3 object in the browser? Can the server provide the images as base64 JSON and the client-side code render them?
There are multiple approaches you can follow to achieve this:
1. Use S3 signed URLs.
2. Use AWS STS to generate temporary access credentials from the backend.
3. Use AWS Cognito federated identities to generate temporary access credentials.
4. Use CloudFront signed URLs or cookies.
Note: Storing or sending permanent IAM credentials to the client side is not recommended.
Here is how I handle providing access to the contents of a private S3 bucket.
I use IAM roles for my EC2 instances. I do not store AWS credentials on the EC2 instance.
I require the user to login. You can use a home brew login setup (database), Cognito or another IDP such as Google or Facebook.
From my back-end code I generate presigned URLs that expire in 15 minutes. If the URLs are for large files, I adjust the timeout to be longer based upon the size of the file assuming a slow Internet connection.
In the JavaScript for my HTML pages, I refresh the URLs before the 15 minutes expire (usually every 5 minutes via AJAX). This can be done via a simple page refresh or (better) by using AJAX to just refresh the URLs. This handles users that leave a page open for a long period of time.
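A small sketch of that refresh pattern, assuming a backend route (here called /api/signed-urls, a made-up name) that returns fresh presigned URLs keyed by element id:
// Browser sketch: re-fetch presigned URLs every 5 minutes so they never go stale.
async function refreshSignedUrls() {
  const res = await fetch("/api/signed-urls"); // hypothetical backend endpoint
  const urls = await res.json();               // e.g. { "hero-img": "https://bucket.s3....signed" }
  for (const [id, url] of Object.entries(urls)) {
    const img = document.getElementById(id);
    if (img) img.src = url;                     // swap in the freshly signed URL
  }
}

refreshSignedUrls();
setInterval(refreshSignedUrls, 5 * 60 * 1000); // well inside the 15-minute expiry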

Securing API with Node

I'm trying to build my first API to be consumed by a mobile application built with Ionic.
Before starting I'm looking into the architecture and I can not understand exactly how to make secure my API routes.
Let's say I have an endpoint like http://myapi/v1/get-items and my application doesn't need an user to be authenticated to view those items in the mobile app.
How should I protect that route from external queries, using Postman for example?
I want that route to be inaccessible unless it is requested by the application.
Looking on Google I can find many solutions using basic authentication, but all of those require a user to log in... What if my app doesn't have users to log in?
I'm a bit confused but I think there is a solution and I don't know it yet...
I hope you can help me to understand it.
EDIT:
My Question is totally different from the following: How to implement a secure REST API with node.js
I'm looking for a solution that does NOT require user authentication.
If you don't want to use user auth through something like Passport, then you can institute a whitelist in your Node API instead. express-ipfilter is an Express middleware module that allows you to filter requests based on the request IP.
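A minimal hand-rolled version of that idea (express-ipfilter wraps the same check up more conveniently); the allowed addresses and route are placeholders:
// Express sketch: reject requests whose source IP is not on an allow-list.
const express = require("express");
const app = express();

const allowList = new Set(["203.0.113.7", "::1"]); // placeholder IPs

app.use((req, res, next) => {
  // req.ip honours the "trust proxy" setting; configure it if you sit behind a load balancer.
  if (allowList.has(req.ip)) return next();
  res.status(403).json({ error: "Forbidden" });
});

app.get("/v1/get-items", (req, res) => res.json([{ id: 1, name: "item" }]));

app.listen(3000);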
Requiring a login would be the cleanest and safest way to make sure your api remains private. However, if you want to keep external users out of your services without requiring a login, you will need to "sign" your requests. By that I mean doing something like encrypting a current timestamp on the client using a key known to both the server and the client app, adding that encrypted string as a header, receiving that header in your server, decrypting it and checking that it's not too old of a timestamp before you return a response.
It's not really safe (if someone can see the code they can see the encryption key) but it's an obstacle and it doesn't require logging in. See this for an example of encryption/decryption.
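A sketch of that signing scheme using Node's built-in crypto, with an HMAC over a timestamp rather than encryption (a deliberate swap, the idea is the same); the shared secret and header names are made up, and as noted above, anything shipped inside the app can be extracted:
// Shared by client and server (the client copy can be extracted, so this only raises the bar).
const crypto = require("crypto");
const SHARED_SECRET = "not-really-secret-once-shipped"; // placeholder

// Client side: sign the current timestamp and send both values as headers.
function signRequest() {
  const timestamp = Date.now().toString();
  const signature = crypto.createHmac("sha256", SHARED_SECRET).update(timestamp).digest("hex");
  return { "x-app-timestamp": timestamp, "x-app-signature": signature }; // hypothetical header names
}

// Server side (e.g. Express middleware): recompute, compare, and reject stale timestamps.
function verifyRequest(req, res, next) {
  const timestamp = req.get("x-app-timestamp") || "";
  const signature = req.get("x-app-signature") || "";
  const expected = crypto.createHmac("sha256", SHARED_SECRET).update(timestamp).digest("hex");
  const fresh = timestamp && Math.abs(Date.now() - Number(timestamp)) < 5 * 60 * 1000; // 5-minute window
  const match = signature.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected));
  if (fresh && match) return next();
  res.status(401).json({ error: "Unauthorized" });
}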

Can users manipulate javascript to overwrite S3 crossdomain.xml?

I allow users to upload to S3 directly, without the need to go through the server; everything works perfectly. My only worry is the security.
In my JavaScript code I check for file extensions. However, I know users can manipulate the JavaScript code (since this is a client-side upload), in my case to allow upload of XML files, and thus wouldn't they be able to replace the crossdomain.xml in my bucket and accordingly be able to control my bucket?
Note: I am using the bucket owner access key and secret key.
Update:
Any possible approaches to overcome this issue...?
If you are not averse to running additional resources, you can accomplish this by running a Token Vending Machine.
Here's the gist:
Your token vending machine (TVM) runs as a reduced privileged user.
Your client code still uploads directly to S3, but it needs to contact your TVM to get a temporary, user-specific token to access your bucket when the user logs in
The TVM calls the Amazon Security Token Service to create temporary credentials for your user to access S3
The S3 API uses the temporary credentials when it makes requests to upload/download
You can define policies on your buckets to limit what areas of the bucket each user can access
A simple example of creating a "dropbox"-like service using per-user access is detailed here.
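A rough sketch of the STS call such a token vending machine could make, scoping the temporary credentials to one user's prefix; the bucket name, prefix, session duration and allowed actions are all assumptions, not anything prescribed by the service:
// TVM backend sketch: mint per-user temporary credentials limited to that user's "folder".
const AWS = require("aws-sdk");
const sts = new AWS.STS();

function vendCredentials(userId) {
  const policy = {
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: ["s3:PutObject", "s3:GetObject"],
      Resource: [`arn:aws:s3:::my-upload-bucket/users/${userId}/*`] // hypothetical bucket and prefix
    }]
  };
  return sts.getFederationToken({
    Name: `user-${userId}`,
    Policy: JSON.stringify(policy),
    DurationSeconds: 3600
  }).promise().then(({ Credentials }) => Credentials); // AccessKeyId, SecretAccessKey, SessionToken
}
The client then uses those temporary credentials with the S3 SDK, and can only touch its own prefix no matter how the frontend code is manipulated.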

Cloud API with JavaScript (Amazon, Azure)

I'm researching a possibility of using some cloud storage directly from client-side JavaScript. However, I ran into two problems:
Security - the architecture is usually built on a per-cloud-client basis, so there is one API key (for example). This is problematic, since I need security per user of mine. I can't give the same API key to all my users.
Cross-domain AJAX. There are HTTP headers that browsers can use to be able to do cross domain requests, but this means that I would have to be able to set them on the cloud-side. But, the only thing I need for this to work is to be able to add a custom HTTP response header: Access-Control-Allow-Origin: otherdomain.com.
My scenario involves lots of simple queue messages from the JS client, and I thought I would use the cloud to get rid of this traffic from my main hosting provider. Windows Azure has this Queue Service part, which seems quite close to what I need, except that I don't know if these problems can be solved.
Any thoughts? It seems to me that JavaScript clients for cloud services are unavoidable scenarios in the near future.
So, is there some cloud storage with REST API that offers management of clients' authentication and does not give the API key to them?
Windows Azure Blob Storage has the notion of a Shared Access Signature (SAS) which could be issued on the server-side and is essentially a special URL that a client could write to without having direct access to the storage account API key. This is the only mechanism in Windows Azure Storage that allows writing data without access to the storage account key.
A SAS can be expired (e.g., give user 10 minutes to use the SAS URL for an upload) and can be set up to allow for canceling access even after issue. Further, a SAS can be useful for time-limited read access (e.g., give user 1 day to watch this video).
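For what it's worth, a hedged sketch of issuing a short-lived read-only SAS from Node with the legacy azure-storage package; the account, key, container and blob names are placeholders, and the newer @azure/storage-blob SDK exposes the same concept through different calls:
// Server-side sketch: mint a 10-minute read-only SAS URL for one blob.
const azure = require("azure-storage");
const blobService = azure.createBlobService("myaccount", "<account key>"); // placeholders

const start = new Date();
const expiry = new Date(start.getTime() + 10 * 60 * 1000);

const sasToken = blobService.generateSharedAccessSignature("videos", "intro.mp4", {
  AccessPolicy: {
    Permissions: azure.BlobUtilities.SharedAccessPermissions.READ,
    Start: start,
    Expiry: expiry
  }
});

// The client gets only this URL, never the account key.
const sasUrl = blobService.getUrl("videos", "intro.mp4", sasToken);
console.log(sasUrl);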
If your JavaScript client is also running in a browser, you may indeed have cross-domain issues. I have two thoughts - neither tested! One thought is JSONP-style approach (though this will be limited to HTTP GET calls). The other (more promising) thought is to host the .js files in blob storage along with your data files so they are on same domain (hopefully making your web browser happy).
The "real" solution might be Cross-Origin Resource Sharing (CORS) support, but that is not available in Windows Azure Blob Storage, and still emerging (along with other HTML 5 goodness) in browsers.
Yes, you can do this, but you wouldn't want your Azure key available on the client side for the JavaScript to be able to access the queue directly.
I would have the javascript talking to a web service which could check access rights for the user and allow/disallow the posting of a message to the queue.
So the javascript would only ever talk to the web services and leave the web services to handle talking to the queues.
It's a little too big a subject to post sample code, but hopefully this is enough to get you started.
I think that the existing service providers do not allow you to query storage directly from the client. So, in order to resolve the issues:
You can write a simple server and expose REST APIs which authenticate based on the API key passed as a request param and return your specific data to your client.
Have an embedded iframe and make the call to the 2nd domain from the iframe. Get the returned JSON/XML on the parent frame and process the data.
Update:
Looks like Google already solves your problem. Check this out.
On https://developers.google.com/storage/docs/json_api/v1/libraries check the Google Cloud Storage JSON API client libraries section.
This can be done with Amazon S3, but not Azure at the moment I think. The reason for this is that S3 supports CORS.
http://aws.amazon.com/about-aws/whats-new/2012/08/31/amazon-s3-announces-cross-origin-resource-sharing-CORS-support/
but Azure does not (yet). Also, from your question it sounds like a queuing solution is what you want, which suggests Amazon SQS, but SQS does not support CORS either.
If you need any complex queue semantics (like message expiry or long polling) then S3 is probably not the solution for you. However, if your queuing requirements are simple then S3 could be suitable.
You would have to have a web service called from the browser with the desired S3 object URL as a parameter. The role of the service is to authenticate and authorize the request, and if successful, generate and return a URL that gives temporary access to the S3 object using query string authentication.
http://docs.aws.amazon.com/AmazonS3/latest/dev/S3_QSAuth.html
A neat way might be have the service just redirect to the query string authentication URL.
For those wondering why this is a Good Thing, it means that you don't have to stream all the S3 object content through your compute tier. You just generate a query-string-authenticated URL (essentially just a signed string), which is a very cheap operation, and then rely on the massive scalability provided by S3 for the actual upload/download.
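A minimal sketch of that redirect pattern with Express and the v2 AWS SDK; the route, bucket name and commented-out auth check are made up:
// Express sketch: authorize the caller, then bounce them straight to a signed S3 URL.
const express = require("express");
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ signatureVersion: "v4" });
const app = express();

app.get("/files/:key", (req, res) => {
  // if (!requestIsAuthorized(req)) return res.sendStatus(403); // your own auth check goes here
  const url = s3.getSignedUrl("getObject", {
    Bucket: "my-private-bucket", // hypothetical
    Key: req.params.key,
    Expires: 300
  });
  res.redirect(url); // the browser downloads directly from S3
});

app.listen(3000);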
Update: As of November this year, Azure now supports CORS on table, queue and blob storage
http://msdn.microsoft.com/en-us/library/windowsazure/dn535601.aspx
With Amazon S3 and Amazon IAM you can generate very fine-grained API keys for users (not only clients!); however, the full setup would be a PITA to use from JavaScript, even if possible.
However, with CORS headers and a little server scripting, you can make uploads directly to S3 from HTML5 forms. This works by generating an upload link on the server side; the link will have an embedded policy document that tells what the upload form is allowed to upload and with which kind of prefix ("directories"), content type and so forth.
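A sketch of generating such an upload policy with the v2 SDK's createPresignedPost; the bucket, prefix, content-type restriction and size limit are assumptions for illustration:
// Server-side sketch: produce a form POST policy limited to one prefix and content type.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

s3.createPresignedPost({
  Bucket: "my-upload-bucket",                       // hypothetical
  Fields: { key: "uploads/user-123/${filename}" },  // S3 substitutes the uploaded file's name
  Conditions: [
    ["starts-with", "$key", "uploads/user-123/"],   // confine uploads to this "directory"
    ["starts-with", "$Content-Type", "image/"],     // images only
    ["content-length-range", 0, 5 * 1024 * 1024]    // max 5 MB
  ],
  Expires: 600
}, (err, data) => {
  if (err) return console.error(err);
  // data.url and data.fields go straight into an HTML5 <form> that posts to S3.
  console.log(data.url, data.fields);
});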
