I'm using the AWS JavaScript SDK to download a file from S3:
var s3 = new AWS.S3();
var params = {
  Bucket: "MYBUCKET",
  Key: file
};
s3.getObject(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else {
    // code to save the file from data's byte array here
  }
});
This feels like it should be easier than I'm making it. Basically, I want to trigger the browser's native file download. Every resource I've found online is for Node's file system. I can't just download from the file's URL, because the object is stored encrypted via KMS; that is why I am going about it this way.
Thanks for the help!
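For future readers: one way to trigger the native download from the getObject result is to wrap data.Body in a Blob and click a temporary link. This is a sketch, assuming browser execution (the v2 browser SDK returns data.Body as a byte array); fileNameFromKey is a hypothetical helper:

```javascript
// Sketch: turn the byte array from s3.getObject into a native file download.
// Assumes a browser environment (window, document).
function fileNameFromKey(key) {
  // "avatars/user1/pic.jpg" -> "pic.jpg"
  return key.split('/').pop();
}

function saveToDisk(data, key) {
  var blob = new Blob([data.Body], {
    type: data.ContentType || 'application/octet-stream'
  });
  var url = window.URL.createObjectURL(blob);
  var a = document.createElement('a');
  a.href = url;
  a.download = fileNameFromKey(key); // suggested name for the save dialog
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  window.URL.revokeObjectURL(url); // release the blob URL
}
```

It would be called from the success branch above, e.g. saveToDisk(data, params.Key).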
I ended up changing how I was storing files. Instead of encrypting them with KMS, I moved them to a private bucket and based retrieval on the logged-in Cognito user's ID. Then I switched to using getSignedUrl to pass in the Cognito user ID appropriately.
var s3 = new AWS.S3();
var params = {
  Bucket: "MYBUCKET",
  Key: cognitoUser.username + "/" + file
};
var url = s3.getSignedUrl('getObject', params);
window.open(url);
Related
I am trying to generate pre-signed URLs for files in my S3 bucket so that users of my site will not have links to the actual files. I have been trying to use the code below:
const AWS = require('aws-sdk');
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'AKIAVXSBEXAMPLE', secretAccessKey: 'EXAMPLE5ig8MDGZD8p8iTj7t3KEXAMPLE'})
// Tried with and without this. Since s3 is not region-specific, I don't
// think it should be necessary.
AWS.config.update({region: 'eu-west-2'})
const myBucket = 'bucketexample'
const myKey = 'example.png'
const signedUrlExpireSeconds = 60 * 5
const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
setTimeout(function(){ console.log("url", url); }, 3000);
console.log("url:", url)
However all it returns is this: "https://s3.amazonaws.com/"
I have also tried using this code:
const AWS = require('aws-sdk');
var s3 = new AWS.S3();
var params = {Bucket: 'bucketexample', Key: 'example.png'};
s3.getSignedUrl('putObject', params, function (err, url) {
  console.log('The URL is', url);
});
which does not log anything. Does anyone know why these are not returning working URLs?
I've had a similar issue. When the AWS SDK returns https://s3.amazonaws.com/, it is because the machine does not have proper credentials. This can be frustrating; AWS should arguably return a descriptive error message instead of a broken URL.
I would recommend configuring your machine's AWS credentials or giving it a role in AWS. You should be able to pass credentials in code as you did in your snippet, but for whatever reason that didn't work for me either.
What worked for me was updating my machine's default AWS credentials or attaching proper roles to the deployed servers.
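One more thing worth checking, based only on the snippet in the question (so treat this as an assumption about the cause): with the v2 SDK, AWS.config.update() only affects clients created afterwards, and above the S3 client is constructed before the credentials and region are set. A sketch of the corrected ordering:

```javascript
const AWS = require('aws-sdk');

// Configure first (credentials shown via environment variables here)...
AWS.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: 'eu-west-2'
});

// ...then create the client, so it picks those settings up.
const s3 = new AWS.S3();

const url = s3.getSignedUrl('getObject', {
  Bucket: 'bucketexample', // names taken from the question
  Key: 'example.png',
  Expires: 60 * 5
});
console.log('url:', url);
```

Producing a real URL requires valid credentials, so this is only illustrative of the ordering.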
I had a similar issue where I was getting an incomplete signed URL when I used:
let url = await s3.getSignedUrl("getObject", {
  Bucket: bucket,
  Key: s3FileLocation,
  Expires: ttlInSeconds,
});
Then I used the promise-based method instead:
// used inside an async function
let url = await s3.getSignedUrlPromise("getObject", {
  Bucket: bucket,
  Key: s3FileLocation,
  Expires: ttlInSeconds,
});
and it returned the complete signed URL.
Note: in case you receive this message when trying to access the object with the signed URL:
<Error>
  <Code>InvalidArgument</Code>
  <Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message>
</Error>
then use this while initializing S3:
const s3 = new AWS.S3({ signatureVersion: 'v4' });
I am getting the following error when trying to access my S3 bucket with AWS Rekognition:
message: 'Unable to get object metadata from S3. Check object key, region and/or access permissions.',
My hunch is it has something to do with the region.
Here is the code:
const config = require('./config.json');
const AWS = require('aws-sdk');
AWS.config.update({region:config.awsRegion});
const rekognition = new AWS.Rekognition();
var params = {
  "CollectionId": config.awsFaceCollection
};
rekognition.createCollection(params, function (err, data) {
  if (err) {
    console.log(err, err.stack);
  }
  else {
    console.log('Collection created'); // successful response
  }
});
And here is my config file:
{
  "awsRegion": "us-east-1",
  "s3Bucket": "daveyman123",
  "lexVoiceId": "justin",
  "awsFaceCollection": "raspifacecollection6"
}
I have given the user almost every permission I can think of. Also, the S3 bucket's region appears to be one that works with Rekognition. What can I do?
I was having the same issue; the solution was to use the same region for the Rekognition API and the S3 bucket and, if using a role, to make sure it has proper permissions to access both S3 and Rekognition.
I had the same problem; it was resolved by choosing one of the specific regions AWS recommends for Rekognition.
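As a sketch of the advice in these answers (the bucket and collection names come from the question; 'face.jpg' is a hypothetical object key), pin the Rekognition client to the same region as the bucket and reference the object through S3Object:

```javascript
const AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' }); // must match the S3 bucket's region

const rekognition = new AWS.Rekognition();

rekognition.indexFaces({
  CollectionId: 'raspifacecollection6',
  Image: { S3Object: { Bucket: 'daveyman123', Name: 'face.jpg' } }
}, function (err, data) {
  if (err) console.log(err, err.stack); // region/permission mismatches surface here
  else console.log('Indexed faces:', data.FaceRecords.length);
});
```

Running it requires real AWS credentials and the bucket/collection to exist, so take it as illustrative only.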
In my Node.js project I am using aws-sdk to download all the images from my S3 bucket, but I get this error: NoSuchKey: The specified key does not exist. Yet the keys are correct, and I can upload images with these keys.
My code is:
var AWS = require('aws-sdk');
s3 = new AWS.S3();
var params = {
  Bucket: config.get("aws.s3.bucket"),
  Key: config.get("aws.credentials.secretAccessKey")
};
s3.getObject(params, function (err, data) {
  console.log("data");
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);
});
Can anyone please tell me where I am doing wrong?
There are problems with how aws-sdk is being used; it should be as in the following example:
var aws = require('aws-sdk');
aws.config.update({
  accessKeyId: {{AWS_ACCESS_KEY}},
  secretAccessKey: {{AWS_SECRET_KEY}}
});
var s3 = new aws.S3();
var s3Params = {
  Bucket: {{bucket name}},
  Key: {{path to the dedicated S3 object (folder name + file/object name)}}
};
s3.getObject(s3Params, function (err, data) {
  // Continue handling the returned results.
});
Replace the placeholders inside {{}} with the correct data and it should work well.
This is because the key you request in getObject does not match the key that was actually uploaded for that user. For example, if you uploaded alone.jpeg (as shown when uploading the image to AWS: https://i.stack.imgur.com/WLh5v.png) but then call getObject with image.jpg for the same user (whose token you are using), you will get this error. The image you request must be the same one that was uploaded, for the same user.
So use the same image key: use the key that comes back in AWS's response, not one derived from the URL.
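A sketch of that point (the bucket and variable names here are hypothetical): persist the Key that S3 echoes back in the upload response, and pass exactly that Key to getObject later, instead of reconstructing it from a URL:

```javascript
var savedKey; // persist this (e.g. in your database), not a guessed file name

s3.upload({ Bucket: 'my-bucket', Key: 'alone.jpeg', Body: file }, function (err, data) {
  if (err) return console.log(err);
  savedKey = data.Key; // the exact key S3 stored
});

// Later, for the same user:
s3.getObject({ Bucket: 'my-bucket', Key: savedKey }, function (err, data) {
  if (err) console.log(err, err.stack); // NoSuchKey if the key doesn't match
  else console.log('got', data.ContentLength, 'bytes');
});
```

This assumes an existing `s3` client and `file` body; it is not runnable without real credentials.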
I was faced with the following problem:
When I upload a new file using a signed URL and then try to get the head-object of the uploaded file from S3 using aws-sdk, I get a Forbidden error; but if I upload a new file via the AWS console, I can get the head-object. Does anyone know what the problem is?
Make sure you specify the correct ACL in the presigned URL.
For example, set bucket-owner-full-control:
var s3 = new AWS.S3();
var params = {
  Bucket: req.body.bucketname,
  ACL: 'bucket-owner-full-control',
  Key: req.body.name,
  ContentType: req.body.type
};
s3.getSignedUrl('putObject', params, function (err, url) ....
The "Background" outlines the problem in depth; I added it to make this question a good guide for using S3 to host images (like a profile image).
You can skip right to "HERE I'M HAVING TROUBLE" for the direct question.
-----------------------------------------------Background------------------------------------------------------------------
Note: feel free to critique my assumptions on how to host images properly, for future readers.
So, for a quick prototype, I'm hosting user avatar images in an AWS S3 bucket, but I want to model roughly how it is done in production.
Here are my 3 assumptions on how to model industry-standard image hosting (based on sites I've studied):
For Reading - you can use public endpoints(no tokens needed)
To Secure Reading Access - use hashing to store the resources (images). The application will give the hashed URL to users with access.
For Example 2 hashes (1 for file path and 1 for image):
https://myapp.com/images/1024523452345233232000/f567457645674242455c32cbc7.jpg
^The above can be done with S3 using a hashed "Prefix" and a hashed file name.
Write Permissions - The user should be logged into the app and be given temporary credentials to write to storage (i.e., add an image).
So with those 3 assumptions this is what I'm doing and the problem:
(A Simple Write - Use Credentials)
<script type="text/javascript">
  AWS.config.credentials = ...;
  AWS.config.region = 'us-west-2';
  var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
  ...
  var params = {Key: file.name, ContentType: file.type, Body: file};
  bucket.upload(params, function (err, data) {
    // Do Something Amazing!!
  });
</script>
---------------------------------HERE I'M HAVING TROUBLE----------------------------------
(A Simple Read - Give The User a Signed URL) Error 403 Permissions
<script type="text/javascript">
  AWS.config.credentials..
  AWS.config.region = 'us-west-2';
  var s3 = new AWS.S3();
  var params = {Bucket: 'myBucket', Key: 'myKey.jpg'};
  document.addEventListener('DOMContentLoaded', function () {
    s3.getSignedUrl('getObject', params, function (err, url) {
      // Getting a 403 Permissions Error!!!
    });
  });
</script>
I figured a signed URL shouldn't be needed, but I thought it would get me around the permission error; instead, I have to manually set the object's permission to public before I can read the image.
QUESTION:
So how should I make the endpoints completely public (readable) for anyone who has obtained the URL, but writable only when the user has credentials?
To make an object publicly-downloadable when you are uploading it, apply the canned (predefined) ACL called "public-read" with the putObject request.
var params = {
  Key: file.name,
  ContentType: file.type,
  Body: file,
  ACL: 'public-read'
};
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
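Once an object is uploaded with public-read, it can then be fetched with a plain, unsigned URL: no getSignedUrl call and no credentials for reads. A sketch of building the virtual-hosted-style URL (the bucket, region, and key here are placeholders):

```javascript
// Public URL for an object uploaded with ACL 'public-read'.
// No signing involved; anyone with the URL can read it.
function publicUrl(bucket, region, key) {
  return 'https://' + bucket + '.s3.' + region + '.amazonaws.com/' +
    key.split('/').map(encodeURIComponent).join('/');
}

console.log(publicUrl('mybucket', 'us-west-2', 'avatars/pic.jpg'));
// https://mybucket.s3.us-west-2.amazonaws.com/avatars/pic.jpg
```

Writes still go through the credentialed upload shown earlier, which gives the public-read/credentialed-write split the question asks for.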