I am getting the following error when trying to access my S3 bucket with AWS Rekognition:
message: 'Unable to get object metadata from S3. Check object key, region and/or access permissions.',
My hunch is it has something to do with the region.
Here is the code:
const config = require('./config.json');
const AWS = require('aws-sdk');
AWS.config.update({region:config.awsRegion});
const rekognition = new AWS.Rekognition();
var params = {
"CollectionId": config.awsFaceCollection
}
rekognition.createCollection(params, function(err, data) {
if (err) {
console.log(err, err.stack);
}
else {
console.log('Collection created'); // successful response
}
});
And here is my config file:
{
"awsRegion":"us-east-1",
"s3Bucket":"daveyman123",
"lexVoiceId":"justin",
"awsFaceCollection":"raspifacecollection6"
}
I have given almost every permission I can think of to the user. Also, the S3 bucket's region appears to be one that works with Rekognition. What can I do?
I was having the same issue, and the solution was to use the same region for the Rekognition API and the S3 bucket, and, if using a role, to make sure it has the proper permissions to access both S3 and Rekognition.
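For illustration, a minimal sketch of that idea, assuming the bucket from the question lives in us-east-1 and using a hypothetical object key (some-photo.jpg):
const AWS = require('aws-sdk');
// Pin BOTH Rekognition and S3 access to the region the bucket actually lives in
AWS.config.update({ region: 'us-east-1' });
const rekognition = new AWS.Rekognition();
rekognition.indexFaces({
    CollectionId: 'raspifacecollection6',
    Image: {
        S3Object: { Bucket: 'daveyman123', Name: 'some-photo.jpg' } // hypothetical key
    }
}, function(err, data) {
    if (err) console.log(err, err.stack); // fails with the metadata error if regions/permissions are wrong
    else console.log('Indexed faces:', data.FaceRecords.length);
});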
I had the same problem; it was resolved by choosing one of the specific AWS-recommended regions for Rekognition.
I'm using the AWS JavaScript SDK to download a file from S3:
var s3 = new AWS.S3();
var params = {
Bucket: "MYBUCKET",
Key: file
};
s3.getObject(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else {
//code to save file from data's byte array here
}
});
This feels like it should be easier than I'm making it out to be. Basically I want to trigger the native file download for the browser. Every resource I've found on the internet is for node's file system. I can't just use the file's URL to download as it is stored encrypted via KMS, so that is why I am going about it this way.
Thanks for the help!
I ended up changing how I was storing files. Instead of encrypting them with KMS, I moved them to a private bucket and then based the retrieval on the logged-in Cognito user's ID. Then I switched to using getSignedUrl to appropriately pass in the Cognito user ID.
var s3 = new AWS.S3();
var params = {
Bucket: "MYBUCKET",
Key: cognitoUser.username + "/" + file
};
var url = s3.getSignedUrl('getObject', params);
window.open(url);
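One hedged note on top of this: getSignedUrl defaults to a 900-second (15-minute) expiry, and you can pass Expires (in seconds) in the params if you need a different lifetime, for example:
var params = {
    Bucket: "MYBUCKET",
    Key: cognitoUser.username + "/" + file,
    Expires: 60 // link stays valid for 60 seconds; defaults to 900 if omitted
};
var url = s3.getSignedUrl('getObject', params);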
I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:
let s3 = new aws.S3({
// for dev purposes
accessKeyId: 'MY-ACCESS-KEY-ID',
secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});
let params = {
Bucket: 'reqlist-user-storage',
Key: req.body.fileName,
Expires: 60,
ContentType: req.body.fileType,
ACL: 'public-read'
};
s3.getSignedUrl('putObject', params, (err, url) => {
if (err) return console.log(err);
res.json({ url: url });
});
This part seems to work fine. I can see the URL if I log it, and it's passed to the front end. Then on the front end, I'm trying to upload the file with axios and the signed URL:
.then(res => {
var options = { headers: { 'Content-Type': fileType } };
return axios.put(res.data.url, fileFromFileInput, options);
}).then(res => {
console.log(res);
}).catch(err => {
console.log(err);
});
}
With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
...etc
Your request needs to match the signature exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
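For context, a hedged sketch of where that header lands in the front-end call from the question (fileType, fileFromFileInput, and res.data.url come from the question code):
var options = {
    headers: {
        'Content-Type': fileType,   // must match the ContentType used when signing
        'x-amz-acl': 'public-read'  // must match the ACL used when signing
    }
};
return axios.put(res.data.url, fileFromFileInput, options);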
Receiving a 403 Forbidden error for a pre-signed s3 put upload can also happen for a couple of reasons that are not immediately obvious:
It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.
It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.
For example, generating a pre-signed url from your api:
const AWS = require('aws-sdk')
const uuid = require('uuid/v4')
async function getSignedUrl(contentType) {
const s3 = new AWS.S3({
accessKeyId: process.env.AWS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY
})
const signedUrl = await s3.getSignedUrlPromise('putObject', {
Bucket: 'mybucket',
Key: `uploads/${uuid()}`,
ContentType: contentType
})
return signedUrl
}
And then sending an upload request from the browser:
import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'
this.uppy = Uppy({
restrictions: {
allowedFileTypes: ['image/*'],
maxFileSize: 5242880, // 5 Megabytes
maxNumberOfFiles: 5
}
}).use(AwsS3, {
getUploadParameters(file) {
async function _getUploadParameters() {
let signedUrl = await getSignedUrl(file.type)
return {
method: 'PUT',
url: signedUrl
}
}
return _getUploadParameters()
}
})
For further reference, also see these two Stack Overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type.
If you're trying to use an ACL, make sure that your Lambda IAM role has the s3:PutObjectAcl permission for the given bucket, and also that your bucket policy allows s3:PutObjectAcl for the uploading principal (the user/role/account that's uploading).
This is what fixed it for me after double checking all my headers and everything else.
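A hedged sketch of what that role policy statement might look like (the bucket name is a placeholder):
{
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::your-upload-bucket/*"
}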
Inspired by this answer https://stackoverflow.com/a/53542531/2759427
1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:
var s3 = new AWS.S3({
signatureVersion: 'v4'
});
2) Do not add new headers or modify existing headers. The request must be exactly as signed.
3) Make sure that the url generated matches what is being sent to AWS.
4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:
ContentType: req.body.fileType,
ACL: 'public-read'
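For reference, a sketch of what the trimmed signing params from step 4 would look like (names taken from the question above):
let params = {
    Bucket: 'reqlist-user-storage',
    Key: req.body.fileName,
    Expires: 60
    // ContentType and ACL removed for the test request
};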
Had the same issue; here is how to solve it:
Extract the filename portion of the signed URL.
Print it to verify that you are extracting the filename portion, including its query string parameters, correctly. This is critical.
URI-encode the filename together with its query string parameters (see the sketch below).
Return the URL with the encoded filename, along with the rest of the path, from your Lambda or Node service.
Now send the request from axios with that URL; it will work.
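A minimal sketch of the encoding step, using a hypothetical key that contains spaces:
// Hypothetical key; encode each path segment so spaces and special characters survive the round trip
const file = 'uploads/My File 1.png';
const encodedKey = file.split('/').map(encodeURIComponent).join('/');
console.log(encodedKey); // uploads/My%20File%201.png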
EDIT 1:
Your signature will also be invalid if you pass in the wrong content type.
Please ensure that the content type you use when creating the pre-signed URL is the same one you use for the PUT.
Hope it helps.
As others have pointed out, the solution is to add the signatureVersion:
const s3 = new AWS.S3(
{
apiVersion: '2006-03-01',
signatureVersion: 'v4'
}
);
There is a very detailed discussion around the same issue; take a look at https://github.com/aws/aws-sdk-js/issues/468
This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:
const s3 = new AWS.S3({
region: region,
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY,
})
The fix was simply to add signatureVersion: 'v4'.
const s3 = new AWS.S3({
signatureVersion: 'v4',
region: region,
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY,
})
Why? I don't know for certain, but newer regions and SSE-KMS encrypted objects only accept Signature Version 4, while older regions also accept the legacy signature the SDK may fall back to for S3.
TL;DR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.
All of the answers are very good and most likely are the real solution, but my issue actually stemmed from S3 returning a signed URL to a bucket that didn't exist.
Because the server didn't throw any errors, I had assumed that it must be the upload that was causing the problems, without realizing that my local server had an old bucket name in its .env file that used to be correct, but has since been moved.
Side note: This link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
It was while checking the uploading user's IAM policies that I discovered that the user had access to multiple buckets, but only one of those still existed.
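As a quick sanity check, a hedged sketch (bucket name is a placeholder) that verifies the signing credentials can actually see the bucket before generating URLs against it:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
// headBucket succeeds if the bucket exists and the caller can access it;
// it errors with 403 if access is denied and 404 if the bucket does not exist
s3.headBucket({ Bucket: 'your-bucket-name' }, function(err, data) {
    if (err) console.log('Bucket check failed:', err.statusCode, err.code);
    else console.log('Bucket exists and is accessible');
});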
Did you add the CORS policy to the S3 bucket? This fixed the problem for me.
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
}
]
I encountered the same error twice with different root causes / solutions:
The first time, I was using generate_presigned_url.
The solution for me was switching to generate_presigned_post (doc), which returns a host of essential information, such as:
"url":"https://xyz.s3.amazonaws.com/",
"fields":{
"key":"filename.ext",
"AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
"x-amz-security-token":"some-really-long-string",
"policy":"another-long-string",
"signature":"the-signature"
}
Add these fields to your multipart form data (not the request headers), and don't forget to keep the file field last!
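A hedged sketch of how the browser side might consume that response (the response and file variables are hypothetical; the field names come from the example above):
// response is the JSON returned by generate_presigned_post
const formData = new FormData();
Object.entries(response.fields).forEach(function([name, value]) {
    formData.append(name, value); // key, AWSAccessKeyId, policy, signature, etc.
});
formData.append('file', file);    // the file must be the last field
fetch(response.url, { method: 'POST', body: formData })
    .then(function(res) {
        if (!res.ok) throw new Error('Upload failed: ' + res.status);
        console.log('Upload succeeded');
    });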
The second time, I had forgotten to give proper permissions to the Lambda. Interestingly, Lambda can create good-looking signed upload URLs which you won't have permission to use. The solution is to enrich the policy with S3 actions:
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::my-own-bucket/*"
]
}
Using Python boto3, when you upload a file the permissions are private by default. You can make the object public using ACL='public-read':
s3.put_object_acl(Bucket='gid-requests', Key='potholes.csv', ACL='public-read')
I did all that's mentioned here and also allowed the necessary permissions for it to work.
I want to load a file from S3 with line-separated values and push them into an array.
The following code works on my local machine, but does not work when executed as a Lambda function. The Lambda function times out (even if I bump the timeout up to 15 seconds).
Are the SDKs different? What am I missing here, since I get no error message at all besides the timeout?
Lambda Env: Node 6.10
Permission to access S3 is set like this
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::mybucket",
"arn:aws:s3:::mybucket/*"
]
}]
The code looks like this:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({region:'eu-central-1'});
exports.index = function(event, context, callback){
var params = {
Bucket: 'mybucket',
Key: 'file.txt'
}
urls=[];
var stream = s3.getObject(params);
stream.on('httpError',function(err){
console.log(err);
throw err;
});
stream.on('httpData', function(chunk) {
urls.push(chunk.toString());
});
stream.on('httpDone', function() {
urls2 = urls.join('\n\r');
callback(urls2);
});
stream.send();
}
I got the following error executing the Lambda via the AWS console:
{
"errorMessage": "2017-07-04T18:25:20.271Z 19ab7138-60e6-11e7-9e1e-c318d929bc39 Task timed out after 15.00 seconds"
}
Thanks for any help!
A handler is required to invoke the Lambda function. You also need to specify the handler name in the Lambda function configuration.
exports.handler = (event, context, callback) => {
const bucket = event.Records[0].s3.bucket.name;
const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
const params = {
Bucket: bucket,
Key: key,
};
var stream = s3.getObject(params);
....
stream.send();
}
exports.handler is invoked when the Lambda function is triggered. Make sure you define the handler name (filename.handler) in the Lambda function configuration.
If you trigger this code on an S3 file upload, it will read the uploaded S3 file. You can change the bucket and key name to read any file that exists.
Follow the documentation: http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html
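For completeness, a hedged sketch of a minimal handler for the original question (bucket and key come from the question; note that the callback's first argument is reserved for an error, so the result goes second):
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'eu-central-1' });
exports.handler = function(event, context, callback) {
    const params = { Bucket: 'mybucket', Key: 'file.txt' };
    s3.getObject(params, function(err, data) {
        if (err) return callback(err);
        // Split the line-separated file into an array of values
        const urls = data.Body.toString('utf8').split('\n');
        callback(null, urls);
    });
};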
This code works as expected. Thanks to @anand for verifying it.
The issue was related to VPC settings: a Lambda inside a VPC has no route to S3 unless the VPC has a NAT gateway or an S3 VPC endpoint, so the getObject call simply hangs until it times out.
Unfortunately, a proper error message would have helped, but at the end of the day, lesson learned.
If you are running in a VPC and your Lambda code should work but you get a timeout, better check your security and network settings =)
In my Node.js project, I am using aws-sdk to download all the images from my S3 bucket, but I get this error: NoSuchKey: The specified key does not exist. The keys are correct, and I can upload images with these keys.
My code is:
var AWS = require('aws-sdk');
s3 = new AWS.S3();
var params = {
Bucket: config.get("aws.s3.bucket"),
Key: config.get("aws.credentials.secretAccessKey")
};
s3.getObject(params, function (err, data) {
console.log("data");
if (err) console.log(err, err.stack); // an error occurred
else console.log(data);
});
}
Can anyone please tell me where I am doing wrong?
There is a problem with how you are using the aws-sdk; it should be like the following example:
var aws = require('aws-sdk');
aws.config.update({
accessKeyId: {{AWS_ACCESS_KEY}},
secretAccessKey: {{AWS_SECRET_KEY}}
});
var s3 = new aws.S3();
var s3Params = {
Bucket: {{bucket name}},
Key: {{path to dedicated S3 Object (folder name + file/object name)}}
};
s3.getObject(s3Params, function (err, data) {
//Continue handling the returned results.
});
Replace the placeholders inside {{}} with the correct values and it should work.
This happens because that image key does not exist for the same user. In other words, you uploaded an image named alone.jpeg via Postman, but in getObject you are requesting image.jpg for the same user. The conclusion is that the key you request must be exactly the key you uploaded for that user.
When you are uploading an image to AWS:
[1]: https://i.stack.imgur.com/WLh5v.png
But when you call getObject with a different image key for the same user (the user whose token you are using), it will give this error, for example when you use image.jpg instead of [1]: https://i.stack.imgur.com/WLh5v.png
So use the same image key.
Use the image key that comes back in AWS's response, instead of one derived from the URL.
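A hedged sketch of that idea (the bucket name, key, and fileBuffer are placeholders): use the Key from the upload response when fetching the object back, instead of deriving it from a URL:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
s3.upload({ Bucket: 'my-bucket', Key: 'alone.jpeg', Body: fileBuffer }, function(err, uploadData) {
    if (err) return console.log(err, err.stack);
    // uploadData.Key is the exact key S3 stored, so use it for getObject
    s3.getObject({ Bucket: 'my-bucket', Key: uploadData.Key }, function(err, data) {
        if (err) console.log(err, err.stack);
        else console.log('Downloaded object of length', data.Body.length);
    });
});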