JavaScript: Hosting User Images in an S3 Bucket for an Application

The "Background" outlines the problem in depth- I added it to make this question a good guide for using S3 to host images(like a profile image)
You can skip right to "HERE I'M HAVING TROUBLE" to help directly.
-----------------------------------------------Background------------------------------------------------------------------
Note: Feel Free to Critique My Assumptions on how to Host Images Properly for future readers.
So for a quick prototype- I'm hosting user avatar images in an AWS S3 Bucket,but I want to model roughly how it is done in production.
Here are my 3 assumptions on how to model industry standard image hosting.(based off sites I've studied):
For Reading - you can use public endpoints(no tokens needed)
To Secure Reading Access - use hashing to store the resources(image).The application will give the hashed URL to users with access.
For Example 2 hashes (1 for file path and 1 for image):
https://myapp.com/images/1024523452345233232000/f567457645674242455c32cbc7.jpg
^The above can be done with S3 using a hashed "Prefix" and a hashed file name.
Write permissions - the user should be logged into the app and be given temporary credentials to write to the storage (i.e. add an image).
So with those 3 assumptions, this is what I'm doing, and here is the problem:
(A Simple Write - Use Credentials)
<script type="text/javascript">
  AWS.config.credentials = ...;
  AWS.config.region = 'us-west-2';

  var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
  ...
  var params = {Key: file.name, ContentType: file.type, Body: file};
  bucket.upload(params, function (err, data) {
    // Do Something Amazing!!
  });
</script>
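One way to provide the "temporary credentials" from assumption #3 in the browser is an Amazon Cognito identity pool. A rough sketch, assuming the standard aws-sdk browser build; the identity pool ID is a placeholder, not something from the question:
<script type="text/javascript">
  // Hypothetical sketch: temporary, scoped credentials from a Cognito identity pool
  AWS.config.region = 'us-west-2';
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-west-2:PLACEHOLDER-IDENTITY-POOL-ID' // placeholder value
  });

  // Resolve the credentials, then run the upload from the snippet above
  AWS.config.credentials.get(function (err) {
    if (err) { console.error(err); return; }
    var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
    // ...bucket.upload(...) as shown above
  });
</script>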
--------------------------------- HERE I'M HAVING TROUBLE ---------------------------------
(A Simple Read - Give The User a Signed URL) Error 403 Permissions
<script type="text/javascript">
  AWS.config.credentials = ...;
  AWS.config.region = 'us-west-2';

  var s3 = new AWS.S3();
  var params = {Bucket: 'myBucket', Key: 'myKey.jpg'};

  document.addEventListener('DOMContentLoaded', function () {
    s3.getSignedUrl('getObject', params, function (err, url) {
      // Getting a 403 Permissions Error!!!
    });
  });
</script>
I figured the signed URL shouldn't really be needed, but I thought it would get me around the permission error. It didn't: I still have to set the object's permission to public manually before the image can be read.
QUESTION:
So how should I make the endpoints completely public (readable) for anyone who has the URL, but writable only when the user has credentials?

To make an object publicly-downloadable when you are uploading it, apply the canned (predefined) ACL called "public-read" with the putObject request.
var params = {
  Key: file.name,
  ContentType: file.type,
  Body: file,
  ACL: 'public-read'
};
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
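Putting the two halves of the question together, a rough sketch of the flow might look like the following: write with credentials (e.g. the temporary ones the logged-in user received) and a public-read ACL, then hand out the plain object URL for reads. The hashed prefix and filename variables here are hypothetical, not from the original code:
// Write: needs credentials
var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
var params = {
  Key: hashedPrefix + '/' + hashedFileName + '.jpg', // hypothetical hashed prefix/filename
  ContentType: file.type,
  Body: file,
  ACL: 'public-read'
};
bucket.upload(params, function (err, data) {
  if (err) { console.error(err); return; }
  // Read: because the object is public-read, anyone with the URL can fetch it,
  // no signing required. data.Location is the object's URL.
  console.log('Public image URL:', data.Location);
});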

Related

Uploading Image to S3 bucket stores it as application/octet-stream, but I want to store it as an image to use it (JavaScript / Node)

I want my users to upload profile pictures. Right now I can choose a file and pass it into a POST request body. I'm working in Node.js + JavaScript.
I am using DigitalOcean's Spaces object storage service to store my images, which is S3-compatible.
Ideally, my storage would store the file as an actual image. Instead, it is stored as a strange file with Content-Type application/octet-stream. I don't know how I'm supposed to work with this: normally, to display an image I simply reference the URL that hosts it, but in this case the URL points to this strange file. It is named something like VMEDFS3Q65JV4B7YQKLS (no extension). The size of the file is 14 KB, which seems right, and it appears to hold the file data. It looks like this:
etc...
I know I'm grabbing the right image, and I know the database is hooked up properly as it's posting to exactly the right place; I'm just unhappy with the file type.
Request on front end:
fetch('/api/image/profileUpload', {
  method: 'PUT',
  body: { file: event.target.files[0] },
  'Content-Type': 'image/jpg',
})
Code in backend:
const AWS = require('aws-sdk')

let file = req.body;
file = JSON.stringify(file);

AWS.config.update({
  region: 'nyc3',
  accessKeyId: process.env.SPACES_KEY,
  secretAccessKey: process.env.SPACES_SECRET,
});

const s3 = new AWS.S3({
  endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com')
});

const uploadParams = {
  Bucket: process.env.SPACES_BUCKET,
  Key: process.env.SPACES_KEY,
  Body: file,
  ACL: "public-read",
  ResponseContentType: 'image/jpg'
};

s3.upload(uploadParams, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});
What I've tried:
Adding a content-type header to the request and content-type parameters in the backend
Other methods of fetching the data in the backend -- they all result in the same thing
Not stringifying the file after grabbing it from the req.body
Changing POST request to PUT request
Would appreciate any insight into either a) converting this octet-stream file into an image or b) getting this image to upload as an image. Thank you
I solved this problem somewhat -- I changed my parameters to this:
const uploadParams = {
  Bucket: 'oscarexpert',
  Key: 'asdff',
  Body: image,
  ContentType: "image/jpeg",
  ACL: "public-read",
};
^^added ContentType.
It still doesn't store the actual image but that's sort of a different issue.
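The remaining issue ("it still doesn't store the actual image") is most likely that the Body is still a JSON-stringified request body rather than the raw file bytes. A hedged sketch of one way to fix that, assuming an Express app and the same `s3` client as in the question, with multer parsing the multipart body; the route and field names are placeholders:
// Front end: send the file itself as multipart form data
const formData = new FormData();
formData.append('file', event.target.files[0]);
fetch('/api/image/profileUpload', { method: 'PUT', body: formData });

// Back end: multer puts the raw bytes on req.file
const multer = require('multer');
const upload = multer({ storage: multer.memoryStorage() });

app.put('/api/image/profileUpload', upload.single('file'), (req, res) => {
  const uploadParams = {
    Bucket: process.env.SPACES_BUCKET,
    Key: req.file.originalname,       // or a generated key
    Body: req.file.buffer,            // raw bytes, not JSON.stringify(...)
    ContentType: req.file.mimetype,   // e.g. image/jpeg
    ACL: 'public-read',
  };
  s3.upload(uploadParams, (err, data) => {
    if (err) return res.status(500).json(err);
    res.json({ url: data.Location });
  });
});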

Saving an S3 buffer to file through AWS Javascript SDK

I'm using the AWS Javascript SDK to download a file from S3
var s3 = new AWS.S3();
var params = {
  Bucket: "MYBUCKET",
  Key: file
};
s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else {
    // code to save the file from data's byte array here
  }
});
This feels like it should be easier than I'm making it out to be. Basically I want to trigger the native file download for the browser. Every resource I've found on the internet is for node's file system. I can't just use the file's URL to download as it is stored encrypted via KMS, so that is why I am going about it this way.
Thanks for the help!
I ended up changing how I was storing files. Instead of encrypting them with KMS, I moved them to a private bucket and based retrieval on the logged-in Cognito user's ID. Then I switched to using getSignedUrl so the Cognito user ID could be passed in appropriately.
var s3 = new AWS.S3();
var params = {
  Bucket: "MYBUCKET",
  Key: cognitoUser.username + "/" + file
};
var url = s3.getSignedUrl('getObject', params);
window.open(url);
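If you do need to trigger a native browser download from the getObject byte array itself (the part the original question asked about), one common pattern is to wrap the bytes in a Blob and click a temporary link. A sketch, assuming it runs in the browser and reuses `params` and `file` from the question's snippet, where data.Body is a typed array:
s3.getObject(params, function (err, data) {
  if (err) return console.error(err);
  // Wrap the bytes in a Blob and trigger the browser's native download
  var blob = new Blob([data.Body], { type: data.ContentType });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = file; // suggested filename; `file` comes from the original params
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
  URL.revokeObjectURL(link.href);
});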

Getting 403 (Forbidden) when uploading to S3 with a signed URL

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:
let s3 = new aws.S3({
  // for dev purposes
  accessKeyId: 'MY-ACCESS-KEY-ID',
  secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});

let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName,
  Expires: 60,
  ContentType: req.body.fileType,
  ACL: 'public-read'
};

s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});
This part seems to work fine: I can see the URL if I log it, and it's passed to the front end. Then, on the front end, I'm trying to upload the file with axios and the signed URL:
.then(res => {
  var options = { headers: { 'Content-Type': fileType } };
  return axios.put(res.data.url, fileFromFileInput, options);
}).then(res => {
  console.log(res);
}).catch(err => {
  console.log(err);
});
With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your key and signing method.
  </Message>
  ...etc
Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
Receiving a 403 Forbidden error for a pre-signed s3 put upload can also happen for a couple of reasons that are not immediately obvious:
It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.
It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.
For example, generating a pre-signed url from your api:
const AWS = require('aws-sdk')
const uuid = require('uuid/v4')

async function getSignedUrl(contentType) {
  const s3 = new AWS.S3({
    accessKeyId: process.env.AWS_KEY,
    secretAccessKey: process.env.AWS_SECRET_KEY
  })
  const signedUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'mybucket',
    Key: `uploads/${uuid()}`,
    ContentType: contentType
  })
  return signedUrl
}
And then sending an upload request from the browser:
import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'

this.uppy = Uppy({
  restrictions: {
    allowedFileTypes: ['image/*'],
    maxFileSize: 5242880, // 5 megabytes
    maxNumberOfFiles: 5
  }
}).use(AwsS3, {
  getUploadParameters(file) {
    async function _getUploadParameters() {
      let signedUrl = await getSignedUrl(file.type)
      return {
        method: 'PUT',
        url: signedUrl
      }
    }
    return _getUploadParameters()
  }
})
For further reference also see these two stack overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type
If you're trying to use an ACL, make sure that your Lambda's IAM role has s3:PutObjectAcl for the given bucket, and also that your bucket policy allows s3:PutObjectAcl for the uploading principal (the user/IAM role/account that's uploading).
This is what fixed it for me after double-checking all my headers and everything else.
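As a rough sketch, the statement on the Lambda's role might look something like the following (the bucket name is a placeholder, not from the original answer):
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::your-upload-bucket/*"
}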
Inspired by this answer https://stackoverflow.com/a/53542531/2759427
1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:
var s3 = new AWS.S3({
  signatureVersion: 'v4'
});
2) Do not add new headers or modify existing headers. The request must be exactly as signed.
3) Make sure that the url generated matches what is being sent to AWS.
4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:
ContentType: req.body.fileType,
ACL: 'public-read'
Had the same issue; here is how you need to solve it:
Extract the filename portion of the signed URL.
Print it out to verify that you are extracting the filename portion, with its query string parameters, correctly. This is critical.
URI-encode the filename portion (keeping the query string parameters intact).
Return the URL with the encoded filename, along with the rest of the path, from your Lambda or your Node service.
Now PUT from axios with that URL and it will work (see the sketch below).
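For instance, a minimal sketch of the encoding step; the filename and the signed query string variable are hypothetical:
// Hypothetical example: URI-encode only the object key before rebuilding the URL
var rawKey = 'user avatar (1).jpg';             // filename extracted from the signed URL
var encodedKey = encodeURIComponent(rawKey);    // 'user%20avatar%20(1).jpg'
// signedQueryString: the query string from the original pre-signed URL (hypothetical variable)
var uploadUrl = 'https://my-bucket.s3.amazonaws.com/' + encodedKey + '?' + signedQueryString;
// pass uploadUrl to axios.put(...)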
EDIT1:
Your signature will also be invalid if you pass in the wrong content type.
Please ensure that the content type you use when creating the pre-signed URL is the same one you use for the PUT.
Hope it helps.
As others have pointed out, the solution is to add the signatureVersion.
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4'
});
There is a very detailed discussion around this; take a look at https://github.com/aws/aws-sdk-js/issues/468
This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:
const s3 = new AWS.S3({
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
The fix was simply to add signatureVersion: 'v4'.
const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
Why? I don't know. (Most likely because newer S3 regions and buckets only accept Signature Version 4, while older ones also accepted v2.)
TLDR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.
All of the answers are very good and most likely cover the real solution, but my issue actually stemmed from S3 returning a signed URL to a bucket that didn't exist.
Because the server didn't throw any errors, I had assumed that the upload must be causing the problem, without realizing that my local server had an old bucket name in its .env file that used to be correct but has since been moved.
Side note: this link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
It was while checking the uploading user's IAM policies that I discovered that the user had access to multiple buckets, but only one of those still existed.
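A quick way to sanity-check this, using the same credentials that sign the URL, is a headBucket call; a small sketch (the bucket environment variable is a placeholder):
// Quick sanity check: does the bucket exist and can these credentials reach it?
s3.headBucket({ Bucket: process.env.S3_BUCKET }, function (err, data) {
  if (err) {
    // 404 = bucket doesn't exist, 403 = exists but these credentials can't access it
    console.error('Bucket check failed:', err.statusCode, err.code);
  } else {
    console.log('Bucket is reachable');
  }
});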
Did you add the CORS policy to the S3 bucket? This fixed the problem for me.
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
I encountered the same error twice, with different root causes / solutions:
1. I was using generate_presigned_url. The solution for me was switching to generate_presigned_post (see the boto3 docs), which returns a set of essential fields such as:
"url":"https://xyz.s3.amazonaws.com/",
"fields":{
"key":"filename.ext",
"AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
"x-amz-security-token":"some-really-long-string",
"policy":"another-long-string",
"signature":"the-signature"
}
Add these fields to the multipart form data of the POST you send to the returned url, and don't forget to keep the file field last!
2. Another time, I forgot to give proper permissions to the Lambda. Interestingly, Lambda can create valid-looking signed upload URLs which you won't have permission to use. The solution is to enrich the Lambda role's policy with S3 actions:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::my-own-bucket/*"
  ]
}
Using Python boto3, when you upload a file the permissions are private by default. You can make the object public using ACL='public-read':
s3.put_object_acl(
    Bucket='gid-requests', Key='potholes.csv', ACL='public-read')
I did all that's mentioned here and allowed these permissions for it to work. (The specific permissions were shown in a screenshot that is not reproduced here.)

AWS S3 Bucket, can't get head-object from file

I was faced with the following problem:
When I upload a new file using the signed URL and then try to get the head-object of the uploaded file from S3 using aws-sdk, I get a Forbidden error; but if I upload a new file using the AWS console, I can get the head-object. Does anyone know what the problem is?
Make sure you specify the correct ACL in the presigned URL.
For example, set bucket-owner-full-control:
var s3 = new AWS.S3();
var params = {
  Bucket: req.body.bucketname,
  ACL: 'bucket-owner-full-control',
  Key: req.body.name,
  ContentType: req.body.type
};
s3.getSignedUrl('putObject', params, function (err, url) ....
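Because the ACL is now part of the signature, the browser's PUT has to send the matching header (the same idea as the x-amz-acl fix earlier in this thread). A sketch, assuming axios, that `url` is the value from the getSignedUrl callback above, and that `file` is the File object being uploaded:
axios.put(url, file, {
  headers: {
    'Content-Type': file.type,                 // must match the ContentType used when signing
    'x-amz-acl': 'bucket-owner-full-control'   // must match the ACL used when signing
  }
});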

Creating signed S3 and Cloudfront URLs via the AWS SDK

Has anyone successfully used the AWS SDK to generate signed URLs to objects in an S3 bucket which also work over CloudFront? I'm using the JavaScript AWS SDK and it's really simple to generate signed URLs via the S3 links. I just created a private bucket and use the following code to generate the URL:
var AWS = require('aws-sdk')
  , s3 = new AWS.S3()
  , params = {Bucket: 'my-bucket', Key: 'path/to/key', Expiration: 20}

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url)
})
This works great, but I also want to expose a CloudFront URL to my users so they can get the increased download speeds of using the CDN. I set up a CloudFront distribution, which modified the bucket policy to allow access. However, after doing this any file could be accessed via the CloudFront URL, and Amazon appeared to ignore the signature in my link. After reading more on this, I've seen that people generate a .pem file to get signed URLs working with CloudFront, but why is this not necessary for S3? It seems like the getSignedUrl method simply does the signing with the AWS secret key and access key. Has anyone gotten a setup like this working before?
Update:
After further research it appears that CloudFront handles URL signatures completely different from S3 [link]. However, I'm still unclear as to how to create a signed CloudFront URL using Javascript.
Update: I moved the signing functionality from the example code below into the aws-cloudfront-sign package on NPM. That way you can just require this package and call getSignedUrl().
After some further investigation I found a solution which is sort of a combo between this answer and a method I found in the Boto library. It is true that S3 URL signatures are handled differently than CloudFront URL signatures. If you just need to sign an S3 link then the example code in my initial question will work just fine for you. However, it gets a little more complicated if you want to generate signed URLs which utilize your CloudFront distribution. This is because CloudFront URL signatures are not currently supported in the AWS SDK so you have to create the signature on your own. In case you also need to do this, here are basic steps. I'll assume you already have an S3 bucket setup:
Configure CloudFront
Create a CloudFront distribution
Configure your origin with the following settings
Origin Domain Name: {your-s3-bucket}
Restrict Bucket Access: Yes
Grant Read Permissions on Bucket: Yes, Update Bucket Policy
Create CloudFront Key Pair. Should be able to do this here.
Create Signed CloudFront URL
To create a signed CloudFront URL you just need to sign your policy using RSA-SHA1 and include it as a query param. You can find more on custom policies here, but I've included a basic one in the sample code below that should get you up and running. The sample code is for Node.js, but the process could be applied to any language.
var crypto = require('crypto')
  , fs = require('fs')
  , util = require('util')
  , moment = require('moment')
  , urlParse = require('url')
  , cloudfrontAccessKey = '<your-cloudfront-public-key>'
  , expiration = moment().add(30, 'seconds').unix() // epoch expiration time

// Define your policy.
var policy = {
  'Statement': [{
    'Resource': 'http://<your-cloudfront-domain-name>/path/to/object',
    'Condition': {
      'DateLessThan': {'AWS:EpochTime': '<epoch-expiration-time>'}
    }
  }]
}

// Now that you have your policy defined you can sign it like this:
var sign = crypto.createSign('RSA-SHA1')
  , pem = fs.readFileSync('<path-to-cloudfront-private-key>')
  , key = pem.toString('ascii')

sign.update(JSON.stringify(policy))
var signature = sign.sign(key, 'base64')

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature
]
var signedUrl = util.format('%s?%s', urlParse.format(url), params.join('&'))

return signedUrl
For my code to work with Jason Sims's code, I also had to convert policy to base64 and add it to the final signedUrl, like this:
sign.update(JSON.stringify(policy))
var signature = sign.sign(key, 'base64')
var policy_64 = Buffer.from(JSON.stringify(policy)).toString('base64'); // ADDED

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature,
  'Policy=' + policy_64 // ADDED
]
AWS includes some built-in classes and structures to assist in the creation of signed URLs and cookies for CloudFront. I utilized these alongside the excellent answer by Jason Sims to get it working in a slightly different pattern (which appears to be very similar to the NPM package he created).
Namely, the AWS.CloudFront.Signer type, whose type definition below abstracts the process of creating signed URLs and cookies.
export class Signer {
  /**
   * A signer object can be used to generate signed URLs and cookies for granting access to content on restricted CloudFront distributions.
   *
   * @param {string} keyPairId - The ID of the CloudFront key pair being used.
   * @param {string} privateKey - A private key in RSA format.
   */
  constructor(keyPairId: string, privateKey: string);
  ....
}
The options either include a policy JSON string, or, without a policy, a url and an expiration time.
export interface SignerOptionsWithPolicy {
  /**
   * A CloudFront JSON policy. Required unless you pass in a url and an expiry time.
   */
  policy: string;
}

export interface SignerOptionsWithoutPolicy {
  /**
   * The URL to which the signature will grant access. Required unless you pass in a full policy.
   */
  url: string
  /**
   * A Unix UTC timestamp indicating when the signature should expire. Required unless you pass in a full policy.
   */
  expires: number
}
Sample implementation:
import aws, { CloudFront } from 'aws-sdk';

export async function getSignedUrl() {
  // https://abc.cloudfront.net/my-resource.jpg
  const url = <cloud front url/resource>;

  // Create signer object - requires a public key id and private key value
  const signer = new CloudFront.Signer(<public-key-id>, <private-key>);

  // Setup expiration time (one hour in the future, in this case) as a Unix timestamp in seconds
  const expiration = new Date();
  expiration.setTime(expiration.getTime() + 1000 * 60 * 60);
  const expirationEpoch = Math.floor(expiration.getTime() / 1000);

  // Set options (without a policy in this example, but a JSON policy string can be substituted)
  const options = {
    url: url,
    expires: expirationEpoch
  };

  return new Promise((resolve, reject) => {
    // Call getSignedUrl passing in options, to be handled either by callback or synchronously without callback
    signer.getSignedUrl(options, (err, url) => {
      if (err) {
        console.error(err.stack);
        return reject(err);
      }
      resolve(url);
    });
  });
}
