I receive images for processing on my site and want to skip uploading them to my local server and then adding them to a Dropbox folder from there, by implementing direct upload to Dropbox from the web browser.
The Dropbox API is pretty simple, but it has one problem: I need to expose the API key to the end user, and that key allows downloading all of the pictures from my account.
So if I use the Dropbox API, one user can download images uploaded by others, which is not an acceptable scenario.
Is there any way around the limitation of exposing a full-access API key to the website's end users?
Unfortunately, no, in order to upload directly to Dropbox via the Dropbox API, the client needs the access token. Further, the Dropbox API doesn't offer an upload-only permission.
Fundamentally, clients can't keep secrets, so any access token that exists client-side (in this case, in the browser) can be extracted and abused.
Edit:
The Dropbox API now offers some new pieces of functionality that may be useful here:
A) The Dropbox API now offers "scopes", which you can use to restrict an app or access token to a limited set of functionality, such as the ability to write but not read files.
You can find more information about the release in our blog post here:
https://dropbox.tech/developers/now-available--scoped-apps-and-enhanced-permissions
B) The Dropbox API now offers the /2/files/get_temporary_upload_link endpoint, which can be used to get a URL that a client can POST to in order to upload to a Dropbox account without an access token:
https://www.dropbox.com/developers/documentation/http/documentation#files-get_temporary_upload_link
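For reference, here is a rough sketch of how option B can be wired up, assuming a Node.js (18+) backend so the access token never leaves the server; the /uploads/ folder, environment variable name, and function names are purely illustrative:

```javascript
// Server side (Node 18+): exchange the secret access token for a short-lived
// upload URL. The token itself is never sent to the browser.
async function getTemporaryUploadLink(filename) {
  const res = await fetch("https://api.dropboxapi.com/2/files/get_temporary_upload_link", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DROPBOX_ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      commit_info: { path: `/uploads/${filename}`, mode: "add", autorename: true },
      duration: 3600, // link stays valid for one hour
    }),
  });
  const { link } = await res.json();
  return link; // hand this URL to the browser
}

// Browser side: POST the raw file bytes to the temporary link.
// No Dropbox credentials are involved here.
async function uploadToDropbox(file, temporaryLink) {
  await fetch(temporaryLink, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: file, // a File object from an <input type="file">
  });
}
```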
Related
I am trying to show objects from an S3 bucket that is not public. In order to do this I would have to provide the access and secret keys to AWS.
I have this fiddle (without the keys) but it does not work when I enter the correct keys: http://jsfiddle.net/jsp3wzbu/
<section ng-app data-ng-controller="myCtrl">
<img ng-src="{{s3url}}" id="myimg">
</section>
Also, how is security handled? I would not want to store the access/secret keys in my client code because users will see them. My server code keeps these keys in environment variables, and I fear that if I share them with my client-side JS code they will be exposed. Is there any other way for me to show the S3 object in the browser? Could the server provide the images as base64 JSON and have the client-side code render them?
There are multiple approaches you can follow to achieve this.
Use S3 signed URLs (see the sketch after this list).
Use AWS STS to generate temporary access credentials from the backend.
Use AWS Cognito federated identities to generate temporary access credentials.
Use CloudFront signed URLs or cookies.
Note: Storing or sending permanent IAM credentials to the client side is not recommended.
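For the first approach (S3 signed URLs), a minimal back-end sketch with the AWS SDK for JavaScript v2 might look like this; the bucket name and expiry are illustrative:

```javascript
// Backend sketch: generate a time-limited, read-only URL for a private object.
// Credentials come from the environment or an instance/role profile, never the browser.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

function getImageUrl(key) {
  return s3.getSignedUrl("getObject", {
    Bucket: "my-private-bucket", // illustrative name
    Key: key,
    Expires: 900, // seconds (15 minutes)
  });
}
```

The client then simply sets this URL as the img src; no AWS keys ever reach the browser.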
Here is how I handle providing access to the contents of a private S3 bucket.
I use IAM roles for my EC2 instances. I do not store AWS credentials on the EC2 instance.
I require the user to log in. You can use a home-brewed login setup (database), Cognito, or another IdP such as Google or Facebook.
From my back-end code I generate presigned URLs that expire in 15 minutes. If the URLs are for large files, I adjust the timeout to be longer based upon the size of the file assuming a slow Internet connection.
In the JavaScript for my HTML pages, I refresh the URLs before the 15 minutes expire (usually every 5 minutes). This can be done via a simple page refresh or, better, by using AJAX to refresh just the URLs. This handles users who leave a page open for a long period of time.
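As a concrete illustration of steps 3 and 4, a browser-side refresher might look like this; the /api/image-urls endpoint, its response shape, and the data-s3-key attribute are all assumptions:

```javascript
// Browser sketch: re-fetch fresh presigned URLs every 5 minutes so that
// pages left open do not end up with expired links.
async function refreshImageUrls() {
  const res = await fetch("/api/image-urls", { credentials: "include" });
  const urls = await res.json(); // e.g. { "photos/cat.jpg": "https://my-bucket.s3.amazonaws.com/...signature..." }
  document.querySelectorAll("img[data-s3-key]").forEach((img) => {
    const fresh = urls[img.dataset.s3Key];
    if (fresh) img.src = fresh;
  });
}

refreshImageUrls();
setInterval(refreshImageUrls, 5 * 60 * 1000);
```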
I want to build a static website, hosted on GitHub, which lets anyone make notes and save them to a specific Dropbox account. Later, users could access these files.
My only concern is that if the site is open source, then others can see the token for the Dropbox account and use it elsewhere.
Can I make it so my Dropbox account only allows API calls from my website? Thanks in advance.
No. This is fundamentally impossible. Whatever mechanism you used (e.g. the referer header) could just be faked by a malicious user.
I created an API for our websites to manage attachment uploads and store them in Amazon S3 buckets.
The scenario: a visitor/user fills in the form and wants to submit it with an attachment. Once the file is selected and the button is clicked, an Ajax request fires to the microservice API so it can store the file in S3, do some processing, and then return the direct link or identifier.
The question is: how can we authenticate the user, for example with a short-lived token or something like that, without it being hijacked or misused?
In JavaScript everything is visible to the visitor, and we try not to put any heavy processing in the backend.
If I understood your question correctly, you have a web interface in which files are uploaded to an S3 bucket, and you need to make sure that in a certain back-end API (such as REST) all file-upload commands have authentication and authorization.
The answer is highly dependent on your architecture, but generally speaking, all JavaScript calls are nothing but HTTP calls. So you need HTTP authentication/authorization. In general, the most straightforward method for REST over HTTP is basic authentication, in which the client sends a credential in every single request. This may sound odd at first, but it is quite standard, since HTTP is supposed to be stateless.
So the short answer, at least for the scenario I just described, would be to ask the user to provide credentials, which JavaScript keeps on the client side and sends as basic authentication that the REST interface can understand. The server-side processes then take that information and decide whether a certain file can be written to a certain S3 bucket.
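A minimal sketch of that flow from the browser side (over HTTPS only); the /api/attachments endpoint and its response shape are assumptions:

```javascript
// Browser sketch: send the user's credentials as HTTP basic auth on the
// upload request; the server validates them before touching S3.
async function uploadAttachment(file, username, password) {
  const res = await fetch("/api/attachments", {
    method: "POST",
    headers: {
      Authorization: "Basic " + btoa(`${username}:${password}`),
      "Content-Type": file.type || "application/octet-stream",
    },
    body: file,
  });
  if (!res.ok) throw new Error("Upload rejected: " + res.status);
  return res.json(); // e.g. { key: "uploads/abc123.pdf" }
}
```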
I allow users to upload directly to S3 without going through the server. Everything works perfectly; my only worry is security.
In my JavaScript code I check file extensions. However, I know users can manipulate the script (since the upload is client-side), in my case to allow uploading of XML files. Couldn't they then replace the crossdomain.xml in my bucket and thereby gain control of my bucket?
Note: I am using the bucket owner access key and secret key.
Update:
Any possible approaches to overcome this issue...?
If you are not averse to running additional resources, you can accomplish this by running a Token Vending Machine.
Here's the gist:
Your token vending machine (TVM) runs as a reduced-privilege user.
Your client code still uploads directly to S3, but when the user logs in it needs to contact your TVM to get a temporary, user-specific token to access your bucket.
The TVM calls the Amazon Security Token Service to create temporary credentials for your user to access S3.
The S3 API calls then use the temporary credentials for upload/download requests.
You can define policies on your buckets to limit what areas of the bucket each user can access.
A simple example of creating a "dropbox"-like service using per-user access is detailed here.
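As a rough sketch of what the TVM itself might do (Node with aws-sdk v2 assumed; the bucket name, per-user prefix layout, and duration are illustrative):

```javascript
// Token Vending Machine sketch: hand the logged-in user temporary credentials
// that only work under their own prefix in the bucket.
const AWS = require("aws-sdk");
const sts = new AWS.STS(); // runs with the TVM's own reduced-privilege IAM user credentials

async function vendCredentials(userId) {
  const policy = {
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: ["s3:PutObject", "s3:GetObject"],
      Resource: [`arn:aws:s3:::my-app-bucket/users/${userId}/*`],
    }],
  };
  const { Credentials } = await sts.getFederationToken({
    Name: `user-${userId}`.slice(0, 32), // federated user name, max 32 chars
    Policy: JSON.stringify(policy),
    DurationSeconds: 3600,
  }).promise();
  return Credentials; // AccessKeyId, SecretAccessKey, SessionToken, Expiration
}
```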
I'm researching the possibility of using some cloud storage directly from client-side JavaScript. However, I ran into two problems:
Security - the architecture is usually built on a per-cloud-client basis, so there is one API key (for example). This is problematic, since I need per-user security; I can't give the same API key to all my users.
Cross-domain AJAX. There are HTTP headers that browsers can use to do cross-domain requests, but this means I would have to be able to set them on the cloud side. The only thing I need for this to work is to be able to add a custom HTTP response header: Access-Control-Allow-Origin: otherdomain.com.
My scenario involves a lot of simple queue messages from the JS client, and I thought I would use the cloud to take this traffic off my main hosting provider. Windows Azure has its Queue Service, which seems quite close to what I need, except that I don't know whether these problems can be solved.
Any thoughts? It seems to me that JavaScript clients for cloud services are unavoidable scenarios in the near future.
So, is there some cloud storage with a REST API that offers management of clients' authentication and does not give the API key to them?
Windows Azure Blob Storage has the notion of a Shared Access Signature (SAS) which could be issued on the server-side and is essentially a special URL that a client could write to without having direct access to the storage account API key. This is the only mechanism in Windows Azure Storage that allows writing data without access to the storage account key.
A SAS can be expired (e.g., give user 10 minutes to use the SAS URL for an upload) and can be set up to allow for canceling access even after issue. Further, a SAS can be useful for time-limited read access (e.g., give user 1 day to watch this video).
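For illustration, here's what issuing a write-only SAS looks like with the current @azure/storage-blob package (much newer than the SDKs available when this answer was written); the container name and expiry are illustrative:

```javascript
// Server-side sketch: build a SAS URL that allows creating/writing one blob
// for 10 minutes, without ever exposing the storage account key.
const {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions,
} = require("@azure/storage-blob");

function makeUploadUrl(accountName, accountKey, blobName) {
  const credential = new StorageSharedKeyCredential(accountName, accountKey);
  const sas = generateBlobSASQueryParameters({
    containerName: "uploads",                     // illustrative container
    blobName,
    permissions: BlobSASPermissions.parse("cw"),  // create + write only
    expiresOn: new Date(Date.now() + 10 * 60 * 1000),
  }, credential).toString();
  return `https://${accountName}.blob.core.windows.net/uploads/${encodeURIComponent(blobName)}?${sas}`;
}
```

The browser can then PUT the file bytes to that URL (with an x-ms-blob-type: BlockBlob header) without ever seeing the account key.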
If your JavaScript client is also running in a browser, you may indeed have cross-domain issues. I have two thoughts - neither tested! One thought is a JSONP-style approach (though this will be limited to HTTP GET calls). The other (more promising) thought is to host the .js files in blob storage along with your data files so they are on the same domain (hopefully making your web browser happy).
The "real" solution might be Cross-Origin Resource Sharing (CORS) support, but that is not available in Windows Azure Blob Storage, and still emerging (along with other HTML 5 goodness) in browsers.
Yes, you can do this, but you wouldn't want your Azure key available on the client side for the JavaScript to access the queue directly.
I would have the javascript talking to a web service which could check access rights for the user and allow/disallow the posting of a message to the queue.
So the javascript would only ever talk to the web services and leave the web services to handle talking to the queues.
It's a little too big a subject to post complete sample code, but hopefully this is enough to get you started.
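Just to give the flavor of it, a bare skeleton of such a proxy might look like this (Express and the @azure/storage-queue package assumed; the route, queue name, and isAllowedToPost helper are all hypothetical):

```javascript
// Skeleton: the browser posts messages to this endpoint; the server checks
// access rights and forwards the message to the Azure queue with its own key.
const express = require("express");
const { QueueServiceClient } = require("@azure/storage-queue");

const app = express();
app.use(express.json());

const queueClient = QueueServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getQueueClient("client-messages");

app.post("/api/messages", async (req, res) => {
  if (!isAllowedToPost(req)) {   // your own auth check (session, token, ...)
    return res.status(403).end();
  }
  // Base64-encode the payload so it survives as queue message text.
  await queueClient.sendMessage(Buffer.from(JSON.stringify(req.body)).toString("base64"));
  res.status(202).end();
});

app.listen(3000);
```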
I think the existing service providers do not allow you to query storage directly from the client. So, to resolve the issues:
You can write a simple server and expose REST APIs that authenticate based on an API key passed as a request parameter and return your specific data to your client.
Have an embedded iframe and make the call to the second domain from the iframe. Get the returned JSON/XML in the parent frame and process the data.
Update:
Looks like Google already solves your problem. Check this out.
On https://developers.google.com/storage/docs/json_api/v1/libraries check the Google Cloud Storage JSON API client libraries section.
This can be done with Amazon S3, but not Azure at the moment I think. The reason for this is that S3 supports CORS.
http://aws.amazon.com/about-aws/whats-new/2012/08/31/amazon-s3-announces-cross-origin-resource-sharing-CORS-support/
but Azure does not (yet). Also, from your question it sounds like a queuing solution is what you want, which suggests Amazon SQS, but SQS does not support CORS either.
If you need any complex queue semantics (like message expiry or long polling) then S3 is probably not the solution for you. However, if your queuing requirements are simple then S3 could be suitable.
You would have to have a web service called from the browser with the desired S3 object URL as a parameter. The role of the service is to authenticate and authorize the request, and if successful, generate and return a URL that gives temporary access to the S3 object using query string authentication.
http://docs.aws.amazon.com/AmazonS3/latest/dev/S3_QSAuth.html
A neat way might be to have the service just redirect to the query string authentication URL.
For those wondering why this is a Good Thing, it means that you don't have to stream all the S3 object content through your compute tier. You just generate a query string authenticated URL (essentially just a signed string) which is a very cheap operation and then rely on the massive scalability provided by S3 for the actual upload/download.
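A minimal sketch of that redirect pattern (Express and aws-sdk v2 assumed; the route, bucket name, and requireUser middleware are hypothetical):

```javascript
// Authenticate/authorize in requireUser, then just redirect the browser to a
// short-lived, query-string-authenticated S3 URL.
const AWS = require("aws-sdk");
const express = require("express");

const s3 = new AWS.S3();
const app = express();

app.get("/files/:key", requireUser, (req, res) => {
  const url = s3.getSignedUrl("getObject", {
    Bucket: "my-private-bucket", // illustrative name
    Key: req.params.key,
    Expires: 300,                // 5 minutes
  });
  res.redirect(url);
});

app.listen(3000);
```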
Update: As of November this year, Azure now supports CORS on table, queue and blob storage
http://msdn.microsoft.com/en-us/library/windowsazure/dn535601.aspx
With Amazon S3 and Amazon IAM you can generate very fine-grained API keys for users (not only clients!); however, the full API would be a PITA to use from JavaScript, even if possible.
However, with CORS headers and a little server-side scripting, you can upload directly to S3 from HTML5 forms. This works by generating an upload link on the server side; the link has an embedded policy document that tells S3 what the upload form is allowed to upload, with which kind of prefix ("directories"), content type, and so forth.
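A server-side sketch of generating such a policy with aws-sdk v2 (the bucket name, key, and limits are illustrative):

```javascript
// Produce a signed POST policy for an HTML5 upload form: the browser can then
// POST the file straight to S3, but only within the constraints signed here.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

s3.createPresignedPost({
  Bucket: "my-upload-bucket",
  Fields: { key: "incoming/photo.jpg" },             // object key the form will upload to
  Conditions: [
    ["starts-with", "$Content-Type", "image/"],      // images only
    ["content-length-range", 0, 10 * 1024 * 1024],   // max 10 MB
  ],
  Expires: 600, // policy valid for 10 minutes
}, (err, data) => {
  if (err) throw err;
  // data.url is the form's action attribute; data.fields (policy, signature,
  // key, ...) become hidden inputs in the HTML5 upload form.
  console.log(data.url, data.fields);
});
```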