Is Transfer Acceleration working? The response Location and the object URL are different - JavaScript

I'm uploading a video to S3 using aws-sdk in a React environment, and I'm using an accelerated endpoint for faster data transfers.
The endpoint is bucket-name.s3-accelerate.amazonaws.com, and I set the Transfer Acceleration option to 'Enabled' in the bucket properties.
Below is my code for uploading file objects to S3.
import AWS from "aws-sdk";
require("dotenv").config();

const AWS_ACCESS_KEY = process.env.REACT_APP_AWS_ACCESS_KEY;
const AWS_SECRET_KEY = process.env.REACT_APP_AWS_SECRET_KEY;
const BUCKET = process.env.REACT_APP_BUCKET_NAME;
const REGION = process.env.REACT_APP_REGION;

AWS.config.update({
  accessKeyId: AWS_ACCESS_KEY,
  secretAccessKey: AWS_SECRET_KEY,
  region: REGION,
  useAccelerateEndpoint: true, //----> the options is here.
});

async function uploadFileToS3(file) {
  const params = {
    Bucket: BUCKET,
    Key: file.name,
    ContentType: 'multipart/form-data',
    Body: file
  };
  const accelerateOption = {
    Bucket: BUCKET,
    AccelerateConfiguration: { Status: 'Enabled' },
    ExpectedBucketOwner: process.env.REACT_APP_BUCKET_OWNER_ID,
  };
  const s3 = new AWS.S3();
  try {
    s3.putBucketAccelerateConfiguration(accelerateOption, (err, data) => {
      if (err) console.log(err)
      else console.log(data, 'data put accelerate') //----> this is just object {}
    });
    s3.upload(params)
      .on("httpUploadProgress", progress => {
        const { loaded, total } = progress;
        const progressPercentage = parseInt((loaded / total) * 100);
        console.log(progressPercentage);
      })
      .send((err, data) => {
        console.log(data, 'data data');
      });
  } catch (err) {
    console.log(err);
  }
}
There is definitely s3-accelerate in the URL in the Location property of the data object (from the console.log):
{
Bucket: "newvideouploada2f16b1e1d6a4671947657067c79824b121029-dev"
ETag: "\"d0b40e4afca23137bee89b54dd2ebcf3-8\""
Key: "Raw Run __ Race Against the Storm.mp4"
Location: "https://newvideouploada2f16b1e1d6a4671947657067c79824b121029-dev.s3-accelerate.amazonaws.com/Raw+Run+__+Race+Aga
}
However, that accelerated URL is not what appears as the Object URL in the properties of the video I uploaded; the two are different.
Is this how it's supposed to be?
Am I using Transfer Acceleration the wrong way?
I saw in the documentation that AWS says to use putBucketAccelerateConfiguration.
But when I console.log the response, nothing meaningful comes back (just an empty object).
Please let me know how to use Transfer Acceleration with the JavaScript aws-sdk.

If you are running this code on some AWS compute (EC2, ECS, EKS, Lambda), and the bucket is in the same region as your compute, then consider using VPC Gateway endpoints for S3. More information here. If the compute and the bucket are in different regions, consider using VPC Endpoints for S3 with inter-region VPC peering. Note: VPC Gateway endpoints are free while VPC Endpoints are not.
After you enable BucketAccelerate it takes at least half an hour to take effect. You don't need to call this every time you upload a file unless you are also suspending bucket acceleration after you are done.
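As a hedged sketch (reusing the BUCKET constant from the question's code): enable acceleration once, out of band or at deploy time, and just verify its status instead of calling putBucketAccelerateConfiguration on every upload.
const s3 = new AWS.S3();

// One-time check. Note that putBucketAccelerateConfiguration itself returns an empty
// object on success, so the {} in your console.log does not mean it failed.
s3.getBucketAccelerateConfiguration({ Bucket: BUCKET }, (err, data) => {
  if (err) console.log(err);
  else console.log(data.Status); // 'Enabled' once acceleration is in effect
});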
Bucket acceleration helps when you want to use the AWS backbone network to upload data faster (maybe the user is in region 'A' and the bucket is in region 'B', or you want to upload bigger files, in which case the upload goes to the nearest edge location and then travels over the AWS backbone network). You can use this tool to check the potential speed improvement for various regions.
Also there is additional cost when you use this feature. Check the Data Transfer section on the S3 pricing page.

Related

AWS SDK JS S3 getObject Stream Metadata

I have code similar to the following to pipe an S3 object back to the client as the response using Express, which is working perfectly.
const s3 = new AWS.S3();
const params = {
  Bucket: 'myBucket',
  Key: 'myImageFile.jpg'
};
s3.getObject(params).createReadStream().pipe(res);
Problem is, I want to be able to access some of the properties in the response I get back from S3, such as LastModified, ContentLength, ETag, and more. I want to use those properties to send as headers in the response to the client, as well as for logging information.
Because it is creating a stream, I can't figure out how to get those properties.
I have thought about doing a separate s3.headObject call to get the data I need, but that seems really wasteful, and will end up adding a large amount of cost at scale.
I have also considered ditching the entire stream idea and just having it return a buffer, but again, that seems really wasteful to be using extra memory when I don't need it, and will probably add more cost at scale due to the extra servers needed to handle requests and the extra memory needed to process each request.
How can I get back a stream using s3.getObject along with all the metadata and other properties that s3.getObject normally gives you back?
Something like the following would be amazing:
const s3 = new AWS.S3();
const params = {
  Bucket: 'myBucket',
  Key: 'myImageFile.jpg',
  ReturnType: 'stream'
};
const s3Response = await s3.getObject(params).promise();
s3Response.Body.pipe(res); // where `s3Response.Body` is a stream
res.set("ETag", s3Response.ETag);
res.set("Content-Type", s3Response.ContentType);
console.log("Metadata: ", s3Response.Metadata);
But according to the S3 documentation it doesn't look like that is possible. Is there another way to achieve something like that?
I've found and tested the following, and it works.
As per: Getting s3 object metadata then creating stream
function downloadfile (key, res) {
  let stream;
  const params = { Bucket: 'xxxxxx', Key: key }
  const request = s3.getObject(params);
  request.on('httpHeaders', (statusCode, httpHeaders) => {
    console.log(httpHeaders);
    stream.pipe(res)
    stream.on('end', () => {
      console.log("we're done")
    })
  })
  stream = request.createReadStream()
}

AWS rekognition and s3 bucket region

I am getting the following error when trying to access my S3 bucket with AWS Rekognition:
message: 'Unable to get object metadata from S3. Check object key, region and/or access permissions.',
My hunch is it has something to do with the region.
Here is the code:
const config = require('./config.json');
const AWS = require('aws-sdk');
AWS.config.update({ region: config.awsRegion });
const rekognition = new AWS.Rekognition();

var params = {
  "CollectionId": config.awsFaceCollection
}
rekognition.createCollection(params, function(err, data) {
  if (err) {
    console.log(err, err.stack);
  }
  else {
    console.log('Collection created'); // successful response
  }
});
And here is my config file:
{
  "awsRegion": "us-east-1",
  "s3Bucket": "daveyman123",
  "lexVoiceId": "justin",
  "awsFaceCollection": "raspifacecollection6"
}
I have given the user almost all the permissions I can think of. Also, the S3 bucket's region appears to be one that works with Rekognition. What can I do?
Was having the same issue, and the solution was to use the same region for the Rekognition API and the S3 bucket, and, if using a role, to make sure that it has proper permissions to access both S3 and Rekognition.
I had the same problem; it was resolved by choosing one of the AWS-recommended regions for Rekognition.
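As a hedged sketch using the names from the config above (the image key 'face.jpg' is a placeholder, not from the question): create the Rekognition client in the same region as the S3 bucket and reference the image via S3Object.
const AWS = require('aws-sdk');

// Same region for both the bucket and the Rekognition client
const rekognition = new AWS.Rekognition({ region: 'us-east-1' });

rekognition.indexFaces({
  CollectionId: 'raspifacecollection6',
  Image: {
    S3Object: { Bucket: 'daveyman123', Name: 'face.jpg' } // placeholder object key
  }
}, (err, data) => {
  // 'Unable to get object metadata...' usually means wrong region or missing s3:GetObject
  if (err) console.log(err, err.stack);
  else console.log('Indexed faces:', data.FaceRecords.length);
});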

Getting 403 (Forbidden) when uploading to S3 with a signed URL

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:
let s3 = new aws.S3({
  // for dev purposes
  accessKeyId: 'MY-ACCESS-KEY-ID',
  secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});

let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName,
  Expires: 60,
  ContentType: req.body.fileType,
  ACL: 'public-read'
};

s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});
This part seems to work fine. I can see the URL if I log it, and it gets passed to the front end. Then on the front end, I'm trying to upload the file with axios and the signed URL:
  .then(res => {
    var options = { headers: { 'Content-Type': fileType } };
    return axios.put(res.data.url, fileFromFileInput, options);
  }).then(res => {
    console.log(res);
  }).catch(err => {
    console.log(err);
  });
}
With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your key and signing method.
  </Message>
  ...etc
Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
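For completeness, a hedged sketch of the front-end call with both headers in place (fileType, fileFromFileInput, and res.data.url are the same names used in the question):
.then(res => {
  // Both Content-Type and x-amz-acl must match exactly what was included when the URL was signed
  const options = {
    headers: {
      'Content-Type': fileType,
      'x-amz-acl': 'public-read'
    }
  };
  return axios.put(res.data.url, fileFromFileInput, options);
})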
Receiving a 403 Forbidden error for a pre-signed s3 put upload can also happen for a couple of reasons that are not immediately obvious:
It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.
It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.
For example, generating a pre-signed url from your api:
const AWS = require('aws-sdk')
const uuid = require('uuid/v4')

async function getSignedUrl(contentType) {
  const s3 = new AWS.S3({
    accessKeyId: process.env.AWS_KEY,
    secretAccessKey: process.env.AWS_SECRET_KEY
  })
  const signedUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'mybucket',
    Key: `uploads/${uuid()}`,
    ContentType: contentType
  })
  return signedUrl
}
And then sending an upload request from the browser:
import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'

this.uppy = Uppy({
  restrictions: {
    allowedFileTypes: ['image/*'],
    maxFileSize: 5242880, // 5 Megabytes
    maxNumberOfFiles: 5
  }
}).use(AwsS3, {
  getUploadParameters(file) {
    async function _getUploadParameters() {
      let signedUrl = await getSignedUrl(file.type)
      return {
        method: 'PUT',
        url: signedUrl
      }
    }
    return _getUploadParameters()
  }
})
For further reference also see these two stack overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type
If you're trying to use an ACL, make sure that your Lambda IAM role has the s3:PutObjectAcl for the given Bucket and also that your bucket allows for the s3:PutObjectAcl for the uploading Principal (user/iam/account that's uploading).
This is what fixed it for me after double checking all my headers and everything else.
Inspired by this answer https://stackoverflow.com/a/53542531/2759427
1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:
var s3 = new AWS.S3({
  signatureVersion: 'v4'
});
2) Do not add new headers or modify existing headers. The request must be exactly as signed.
3) Make sure that the url generated matches what is being sent to AWS.
4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:
ContentType: req.body.fileType,
ACL: 'public-read'
Had the same issue; here is how you need to solve it:
Extract the filename portion of the signed URL.
Print it to verify that you are extracting the filename portion, including its query string parameters, correctly. This is critical.
URI-encode the filename together with its query string parameters.
Return the URL with the encoded filename (along with any other path segments) from your Lambda or from your Node service.
Now send the request from axios with that URL; it will work.
EDIT1:
Your signature will also be invalid if you pass in the wrong content type.
Please ensure that the content type you use when you create the pre-signed URL is the same as the one you use for the PUT.
Hope it helps.
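A minimal sketch of that rule (assuming the same bucket, s3 client, file object, and axios usage as elsewhere in this thread, and the synchronous form of getSignedUrl): the ContentType used at signing time must be exactly the Content-Type sent with the PUT.
const contentType = file.type; // e.g. 'image/png'
const url = s3.getSignedUrl('putObject', {
  Bucket: 'reqlist-user-storage', // bucket name taken from the question
  Key: file.name,
  Expires: 60,
  ContentType: contentType        // signed with this exact value...
});
axios.put(url, file, {
  headers: { 'Content-Type': contentType } // ...and the PUT sends exactly the same value
}).then(() => console.log('uploaded'));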
As others have pointed out, the solution is to add the signatureVersion.
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4'
});
There is a very detailed discussion around this; take a look: https://github.com/aws/aws-sdk-js/issues/468
This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:
const s3 = new AWS.S3({
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
The fix was simply to add signatureVersion: 'v4'.
const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})
Why? I don't know.
TLDR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.
All of the answers are very good and most likely cover the real solution, but my issue actually stemmed from generating a signed URL for a bucket that didn't exist.
Because the server didn't throw any errors, I had assumed it must be the upload that was causing the problems, without realizing that my local server had an old bucket name in its .env file that used to be the correct one but has since been moved.
Side note: This link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
It was while checking the uploading user's IAM policies that I discovered the user had access to multiple buckets, but only one of those still existed.
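A small sketch of that sanity check (the environment variable name is hypothetical): s3.headBucket fails fast if the bucket doesn't exist or isn't visible to the credentials that will sign the URL.
const s3 = new AWS.S3();
s3.headBucket({ Bucket: process.env.S3_BUCKET_NAME /* hypothetical env var */ }, (err) => {
  // 'NotFound' => bucket doesn't exist, 'Forbidden' => these credentials can't see it
  if (err) console.error('Bucket check failed:', err.code);
});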
Did you add the CORS policy to the S3 bucket? This fixed the problem for me.
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
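If you prefer to apply that CORS policy from code rather than the console, here is a hedged sketch using the SDK's putBucketCors (the bucket name is a placeholder):
const s3 = new AWS.S3();
s3.putBucketCors({
  Bucket: 'reqlist-user-storage', // placeholder bucket name
  CORSConfiguration: {
    CORSRules: [
      {
        AllowedHeaders: ['*'],
        AllowedMethods: ['PUT'],
        AllowedOrigins: ['*'],
        ExposeHeaders: []
      }
    ]
  }
}, (err) => {
  if (err) console.error(err);
  else console.log('CORS policy applied');
});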
I encountered the same error twice with different root causes / solutions:
I was using generate_presigned_url.
The solution for me was switching to generate_presigned_post (doc) which returns a host of essential information such as
"url":"https://xyz.s3.amazonaws.com/",
"fields":{
"key":"filename.ext",
"AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
"x-amz-security-token":"some-really-long-string",
"policy":"another-long-string",
"signature":"the-signature"
}
Add these fields to your POST request as form fields, and don't forget to keep the file field last!
That time I forgot to give proper permissions to the Lambda. Interestingly, Lambda can create good looking signed upload URLs which you won't have permission to use. The solution is to enrich the policy with S3 actions:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::my-own-bucket/*"
  ]
}
Using Python boto3, when you upload a file the permissions are private by default. You can make the object public using ACL='public-read':
s3.put_object_acl(
    Bucket='gid-requests', Key='potholes.csv', ACL='public-read')
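For reference, a hedged sketch of the equivalent with the JavaScript SDK (same placeholder bucket and key as the boto3 example) is putObjectAcl, or passing ACL directly to the upload call as other answers show:
const s3 = new AWS.S3();
s3.putObjectAcl({
  Bucket: 'gid-requests',  // placeholder bucket from the boto3 example
  Key: 'potholes.csv',
  ACL: 'public-read'
}, (err) => {
  if (err) console.error(err);
});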
I did all that's mentioned here and additionally had to allow the relevant S3 permissions for it to work.

JavaScript : Hosting User Images in S3 Bucket For Application

The "Background" outlines the problem in depth- I added it to make this question a good guide for using S3 to host images(like a profile image)
You can skip right to "HERE I'M HAVING TROUBLE" to help directly.
-----------------------------------------------Background------------------------------------------------------------------
Note: Feel Free to Critique My Assumptions on how to Host Images Properly for future readers.
So for a quick prototype- I'm hosting user avatar images in an AWS S3 Bucket,but I want to model roughly how it is done in production.
Here are my 3 assumptions on how to model industry standard image hosting.(based off sites I've studied):
For Reading - you can use public endpoints (no tokens needed).
To Secure Reading Access - use hashing to store the resources (images). The application will give the hashed URL to users with access.
For example, 2 hashes (1 for the file path and 1 for the image):
https://myapp.com/images/1024523452345233232000/f567457645674242455c32cbc7.jpg
^The above can be done with S3 using a hashed "Prefix" and a hashed file name.
Write Permissions - the user should be logged into the app and be given temporary credentials to write to the storage (i.e. add an image).
So with those 3 assumptions this is what I'm doing and the problem:
(A Simple Write - Use Credentials)
<script type="text/javascript">
  AWS.config.credentials = ...;
  AWS.config.region = 'us-west-2';
  var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
  ...
  var params = {Key: file.name, ContentType: file.type, Body: file};
  bucket.upload(params, function (err, data) {
    //Do Something Amazing!!
  });
</script>
----- HERE I'M HAVING TROUBLE -----
(A Simple Read - Give The User a Signed URL) Error 403 Permissions
<script type="text/javascript">
  AWS.config.credentials..
  AWS.config.region = 'us-west-2';
  var s3 = new AWS.S3();
  var params = {Bucket: 'myBucket', Key: 'myKey.jpg'};
  document.addEventListener('DOMContentLoaded', function () {
    s3.getSignedUrl('getObject', params, function (err, url) {
      // Getting a 403 Permissions Error!!!
    });
  });
</script>
I figure the signed URL isn't needed, but I thought it would get me around the permission error; instead I have to manually set the object's permission to public in order to read the image.
QUESTION:
So how should I make the endpoints completely public (readable) for those who have gained access to the URL, but only writable when the user has credentials?
To make an object publicly-downloadable when you are uploading it, apply the canned (predefined) ACL called "public-read" with the putObject request.
var params = {
  Key: file.name,
  ContentType: file.type,
  Body: file,
  ACL: 'public-read'
};
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
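A hedged usage sketch tying it together (reusing the bucket client from the question's write example): upload with the canned ACL, then hand out data.Location, which the SDK returns as the object's URL and which is publicly readable with this ACL.
bucket.upload({
  Key: file.name,
  ContentType: file.type,
  Body: file,
  ACL: 'public-read'
}, function (err, data) {
  if (err) return console.error(err);
  // data.Location is the object URL; with 'public-read' anyone can GET it without credentials
  console.log('Public URL:', data.Location);
});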

Creating signed S3 and Cloudfront URLs via the AWS SDK

Has anyone successfully used the AWS SDK to generate signed URLs to objects in an S3 bucket which also work over CloudFront? I'm using the JavaScript AWS SDK and it's really simple to generate signed URLs via the S3 links. I just created a private bucket and use the following code to generate the URL:
var AWS = require('aws-sdk')
  , s3 = new AWS.S3()
  , params = {Bucket: 'my-bucket', Key: 'path/to/key', Expires: 20}

s3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url)
})
This works great, but I also want to expose a CloudFront URL to my users so they can get the increased download speeds of using the CDN. I set up a CloudFront distribution, which modified the bucket policy to allow access. However, after doing this any file could be accessed via the CloudFront URL and Amazon appeared to ignore the signature in my link. After reading some more on this I've seen that people generate a .pem file to get signed URLs working with CloudFront, but why is this not necessary for S3? It seems like the getSignedUrl method simply does the signing with the AWS Secret Key and AWS Access Key. Has anyone gotten a setup like this working before?
Update:
After further research it appears that CloudFront handles URL signatures completely different from S3 [link]. However, I'm still unclear as to how to create a signed CloudFront URL using Javascript.
Update: I moved the signing functionality from the example code below into the aws-cloudfront-sign package on NPM. That way you can just require this package and call getSignedUrl().
After some further investigation I found a solution which is sort of a combo between this answer and a method I found in the Boto library. It is true that S3 URL signatures are handled differently than CloudFront URL signatures. If you just need to sign an S3 link then the example code in my initial question will work just fine for you. However, it gets a little more complicated if you want to generate signed URLs which utilize your CloudFront distribution. This is because CloudFront URL signatures are not currently supported in the AWS SDK so you have to create the signature on your own. In case you also need to do this, here are basic steps. I'll assume you already have an S3 bucket setup:
Configure CloudFront
Create a CloudFront distribution
Configure your origin with the following settings
Origin Domain Name: {your-s3-bucket}
Restrict Bucket Access: Yes
Grant Read Permissions on Bucket: Yes, Update Bucket Policy
Create CloudFront Key Pair. Should be able to do this here.
Create Signed CloudFront URL
To create a signed CloudFront URL you just need to sign your policy using RSA-SHA1 and include the signature as a query param. You can find more on custom policies here, but I've included a basic one in the sample code below that should get you up and running. The sample code is for Node.js but the process could be applied to any language.
var crypto = require('crypto')
  , fs = require('fs')
  , util = require('util')
  , moment = require('moment')
  , urlParse = require('url')
  , cloudfrontAccessKey = '<your-cloudfront-public-key>'
  , expiration = moment().add(30, 'seconds').unix() // epoch-expiration-time

// Define your policy.
var policy = {
  'Statement': [{
    'Resource': 'http://<your-cloudfront-domain-name>/path/to/object',
    'Condition': {
      'DateLessThan': {'AWS:EpochTime': '<epoch-expiration-time>'}
    }
  }]
}

// Now that you have your policy defined you can sign it like this:
var sign = crypto.createSign('RSA-SHA1')
  , pem = fs.readFileSync('<path-to-cloudfront-private-key>')
  , key = pem.toString('ascii')

sign.update(JSON.stringify(policy))
var signature = sign.sign(key, 'base64')

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature
]
var signedUrl = util.format('%s?%s', urlParse.format(url), params.join('&'))
return signedUrl
For my code to work with Jason Sims's code, I also had to convert policy to base64 and add it to the final signedUrl, like this:
sign.update(JSON.stringify(policy))
var signature = sign.sign(key, 'base64')
var policy_64 = Buffer.from(JSON.stringify(policy)).toString('base64'); // ADDED

// Finally, you build the URL with all of the required query params:
var url = {
  host: '<your-cloudfront-domain-name>',
  protocol: 'http',
  pathname: '<path-to-s3-object>'
}
var params = [
  'Key-Pair-Id=' + cloudfrontAccessKey,
  'Expires=' + expiration,
  'Signature=' + signature,
  'Policy=' + policy_64 // ADDED
]
AWS includes some built in classes and structures to assist in the creation of signed URLs and Cookies for CloudFront. I utilized these alongside the excellent answer by Jason Sims to get it working in a slightly different pattern (which appears to be very similar to the NPM package he created).
Namely, the AWS.CloudFront.Signer type, which abstracts the process of creating signed URLs and cookies.
export class Signer {
  /**
   * A signer object can be used to generate signed URLs and cookies for granting access to content on restricted CloudFront distributions.
   *
   * @param {string} keyPairId - The ID of the CloudFront key pair being used.
   * @param {string} privateKey - A private key in RSA format.
   */
  constructor(keyPairId: string, privateKey: string);
  ....
}
It takes an options object: either one with a policy JSON string, or one without a policy that has a url and an expiration time.
export interface SignerOptionsWithPolicy {
  /**
   * A CloudFront JSON policy. Required unless you pass in a url and an expiry time.
   */
  policy: string;
}

export interface SignerOptionsWithoutPolicy {
  /**
   * The URL to which the signature will grant access. Required unless you pass in a full policy.
   */
  url: string
  /**
   * A Unix UTC timestamp indicating when the signature should expire. Required unless you pass in a full policy.
   */
  expires: number
}
Sample implementation:
import aws, { CloudFront } from 'aws-sdk';

export async function getSignedUrl() {
  // https://abc.cloudfront.net/my-resource.jpg
  const url = <cloud front url/resource>;

  // Create signer object - requires a public key id and private key value
  const signer = new CloudFront.Signer(<public-key-id>, <private-key>);

  // Setup expiration time (one hour in the future, in this case)
  const expiration = new Date();
  expiration.setTime(expiration.getTime() + 1000 * 60 * 60);
  const expirationEpoch = Math.floor(expiration.getTime() / 1000); // Signer expects epoch seconds

  // Set options (without policy in this example, but a JSON policy string can be substituted)
  const options = {
    url: url,
    expires: expirationEpoch
  };

  return new Promise((resolve, reject) => {
    // Call getSignedUrl passing in options, to be handled either by callback or synchronously without callback
    signer.getSignedUrl(options, (err, url) => {
      if (err) {
        console.error(err.stack);
        reject(err);
        return; // don't fall through to resolve after rejecting
      }
      resolve(url);
    });
  });
}
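For context, a hedged usage sketch (Express and the /download route are assumptions, not part of the original answer) that redirects the caller to the short-lived CloudFront URL produced by getSignedUrl() above:
import express from 'express';

const app = express();

// Hypothetical route: hand back the signed CloudFront URL as a redirect
app.get('/download', async (req, res) => {
  try {
    const url = await getSignedUrl();
    res.redirect(url);
  } catch (err) {
    res.status(500).send('Could not sign URL');
  }
});

app.listen(3000);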
