In my Node.js project, I am using aws-sdk to download all the images from my S3 bucket, but I get this error: NoSuchKey: The specified key does not exist. The keys are correct, though, and I can upload images with these keys.
My code is:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {
    Bucket: config.get("aws.s3.bucket"),
    Key: config.get("aws.credentials.secretAccessKey")
};

s3.getObject(params, function (err, data) {
    console.log("data");
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful response
});
Can anyone please tell me where I am going wrong?
The problem is in how the params are built: Key must be the object's key (its path inside the bucket), not your secret access key. Usage of aws-sdk should look like the following example:
var aws = require('aws-sdk');

aws.config.update({
    accessKeyId: {{AWS_ACCESS_KEY}},
    secretAccessKey: {{AWS_SECRET_KEY}}
});

var s3 = new aws.S3();
var s3Params = {
    Bucket: {{bucket name}},
    Key: {{path to the dedicated S3 object (folder name + file/object name)}}
};

s3.getObject(s3Params, function (err, data) {
    // Continue handling the returned results.
});
Replace the strings inside {{}} with the correct values and it should work.
This happens when the key you pass to getObject does not exactly match the key the object was uploaded with by the same user. For example, if you uploaded alone.jpeg through Postman but then call getObject with image.jpg, S3 reports this error: the key used to retrieve an object must be identical to the key it was uploaded under.
When you upload the image to AWS:
[1]: https://i.stack.imgur.com/WLh5v.png
But when you call getObject with a different key for the same user (whose token you are using), it gives the same error.
So use the same image key, and take the key from AWS's upload response instead of reconstructing it from the URL.
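The advice to use the key from AWS's response rather than the URL matters because S3's Location URLs encode the key (spaces become `+`, other characters are percent-encoded). A minimal sketch of what recovering a key from a Location URL involves; `keyFromLocation` is a hypothetical helper, and simply storing `data.Key` from the upload response avoids the problem entirely:

```javascript
// Hypothetical helper: recover an S3 object key from a Location URL.
// In upload responses, spaces in the key show up as "+" in the URL path,
// so a plain decodeURIComponent on the path is not enough.
function keyFromLocation(location) {
  const path = new URL(location).pathname.slice(1); // drop the leading "/"
  return decodeURIComponent(path.replace(/\+/g, " "));
}
```

This is exactly why two keys that "look the same" in a URL and in the console can still mismatch; comparing the decoded key against what you pass to getObject makes the difference visible.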
Related
I'm uploading a video to S3 using aws-sdk in a React environment, and I want to use an accelerated endpoint for faster data transfers.
The endpoint is bucket-name.s3-accelerate.amazonaws.com, and I changed the option to 'Enabled' for Transfer Acceleration in the bucket properties.
Below is my code for uploading file objects to S3.
import AWS from "aws-sdk";
require("dotenv").config();

const AWS_ACCESS_KEY = process.env.REACT_APP_AWS_ACCESS_KEY;
const AWS_SECRET_KEY = process.env.REACT_APP_AWS_SECRET_KEY;
const BUCKET = process.env.REACT_APP_BUCKET_NAME;
const REGION = process.env.REACT_APP_REGION;

AWS.config.update({
    accessKeyId: AWS_ACCESS_KEY,
    secretAccessKey: AWS_SECRET_KEY,
    region: REGION,
    useAccelerateEndpoint: true, // ----> the option is here
});

async function uploadFileToS3(file) {
    const params = {
        Bucket: BUCKET,
        Key: file.name,
        ContentType: "multipart/form-data",
        Body: file,
    };
    const accelerateOption = {
        Bucket: BUCKET,
        AccelerateConfiguration: { Status: "Enabled" },
        ExpectedBucketOwner: process.env.REACT_APP_BUCKET_OWNER_ID,
    };
    const s3 = new AWS.S3();
    try {
        s3.putBucketAccelerateConfiguration(accelerateOption, (err, data) => {
            if (err) console.log(err);
            else console.log(data, "data put accelerate"); // ----> this is just an empty object {}
        });
        s3.upload(params)
            .on("httpUploadProgress", (progress) => {
                const { loaded, total } = progress;
                const progressPercentage = parseInt((loaded / total) * 100);
                console.log(progressPercentage);
            })
            .send((err, data) => {
                console.log(data, "data data");
            });
    } catch (err) {
        console.log(err);
    }
}
There is definitely s3-accelerate in the URL in the Location property of the data object (in the console.log):
{
Bucket: "newvideouploada2f16b1e1d6a4671947657067c79824b121029-dev"
ETag: "\"d0b40e4afca23137bee89b54dd2ebcf3-8\""
Key: "Raw Run __ Race Against the Storm.mp4"
Location: "https://newvideouploada2f16b1e1d6a4671947657067c79824b121029-dev.s3-accelerate.amazonaws.com/Raw+Run+__+Race+Aga
}
However, the object URL shown in the properties of the video I uploaded does not exist.
Is this how it's supposed to be?
Am I using Transfer Acceleration the wrong way?
I saw the documentation, and AWS says to use putBucketAccelerateConfiguration, but when I console.log the result, nothing is returned.
Please let me know how to use Transfer Acceleration in the JavaScript aws-sdk.
If you are running this code on some AWS compute (EC2, ECS, EKS, Lambda), and the bucket is in the same region as your compute, then consider using VPC Gateway endpoints for S3. More information here. If the compute and the bucket are in different regions, consider using VPC Endpoints for S3 with inter region VPC peering. Note: VPC Gateway endpoints are free while VPC Endpoints are not.
After you enable BucketAccelerate it takes at least half an hour to take effect. You don't need to call this every time you upload a file unless you are also suspending bucket acceleration after you are done.
Bucket acceleration helps when you want to use the AWS backbone network to upload data faster (for example, the user is in region 'A' and the bucket is in region 'B', or you are uploading bigger files, in which case the data goes to the nearest edge location and then travels over the AWS backbone network). You can use this tool to check the potential speed improvement for various regions.
Also there is additional cost when you use this feature. Check the Data Transfer section on the S3 pricing page.
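To make the endpoint change concrete: once acceleration is enabled on the bucket, the accelerated URL differs from the regular one only in its hostname. `buildAccelerateUrl` below is a hypothetical helper purely to illustrate the pattern; with the SDK itself you just set `useAccelerateEndpoint: true` (as in the question's code) and it rewrites the hostname for you:

```javascript
// Hypothetical helper showing the shape of an accelerated S3 URL.
// Each path segment of the key is percent-encoded, but "/" separators are kept.
function buildAccelerateUrl(bucket, key) {
  const encodedKey = key.split("/").map(encodeURIComponent).join("/");
  return `https://${bucket}.s3-accelerate.amazonaws.com/${encodedKey}`;
}
```

Seeing this hostname (`s3-accelerate`) in the Location property of the upload response, as in the question, is the sign that the accelerated endpoint was actually used.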
I am trying to generate pre-signed URLs for files in my S3 bucket so that users of my site will not have links to the actual files. I have been trying to use the code below:
const AWS = require('aws-sdk');
const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'AKIAVXSBEXAMPLE', secretAccessKey: 'EXAMPLE5ig8MDGZD8p8iTj7t3KEXAMPLE'})
// Tried with and without this. Since s3 is not region-specific, I don't
// think it should be necessary.
AWS.config.update({region: 'eu-west-2'})
const myBucket = 'bucketexample'
const myKey = 'example.png'
const signedUrlExpireSeconds = 60 * 5
const url = s3.getSignedUrl('getObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds
})
setTimeout(function(){ console.log("url", url); }, 3000);
console.log("url:", url)
However all it returns is this: "https://s3.amazonaws.com/"
I have also tried using this code:
const AWS = require('aws-sdk');
var s3 = new AWS.S3();
var params = { Bucket: 'bucketexample', Key: 'example.png' };

s3.getSignedUrl('putObject', params, function (err, url) {
    console.log('The URL is', url);
});
Which does not return anything. Does anyone know why they are not returning working URLs?
I've had a similar issue. When the AWS SDK returns https://s3.amazonaws.com/, it is because the machine does not have the proper permissions configured. This can be frustrating, and I think AWS should return a descriptive error message instead of just returning the wrong URL.
I would recommend configuring your AWS credentials on the machine or giving it a role in AWS. Although you should be able to supply credentials in code as you did in your snippet, that did not work for me either, for whatever reason.
What worked for me was updating my machine's default AWS credentials or adding the proper roles for the deployed servers.
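For reference, "updating the machine's default AWS credentials" means filling in the standard shared credentials and config files that the SDK picks up automatically. A sketch of the two files, using AWS's own documented placeholder key values (replace them with yours):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = eu-west-2
```

With these in place, `new AWS.S3()` resolves credentials and region without any `AWS.config.update` call in code.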
I had a similar issue where I was getting an incomplete signed URL when I used:
let url = await s3.getSignedUrl("getObject", {
    Bucket: bucket,
    Key: s3FileLocation,
    Expires: ttlInSeconds,
});
Then I used the Promise-returning function instead:
// I used it inside an async function
let url = await s3.getSignedUrlPromise("getObject", {
    Bucket: bucket,
    Key: s3FileLocation,
    Expires: ttlInSeconds,
});
And it returned complete signed URL.
Note:
In case you receive this message when trying to access the object using the signed URL:
<Error>
  <Code>InvalidArgument</Code>
  <Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message>
</Error>
Then use this when initializing s3:
const s3 = new AWS.S3({ signatureVersion: 'v4' });
I am getting the following error when trying to access my s3 bucket with aws rekognition:
message: 'Unable to get object metadata from S3. Check object key, region and/or access permissions.',
My hunch is it has something to do with the region.
Here is the code:
const config = require('./config.json');
const AWS = require('aws-sdk');

AWS.config.update({ region: config.awsRegion });
const rekognition = new AWS.Rekognition();

var params = {
    CollectionId: config.awsFaceCollection
};

rekognition.createCollection(params, function (err, data) {
    if (err) {
        console.log(err, err.stack);
    } else {
        console.log('Collection created'); // successful response
    }
});
And here is my config file:
{
    "awsRegion": "us-east-1",
    "s3Bucket": "daveyman123",
    "lexVoiceId": "justin",
    "awsFaceCollection": "raspifacecollection6"
}
I have given almost all the permissions to the user I can think of. Also the region for the s3 bucket appears to be in a place that can work with rekognition. What can I do?
I was having the same issue, and the solution was to use the same region for the Rekognition API and the S3 bucket and, if using a role, to make sure it has the proper permissions to access both S3 and Rekognition.
I had the same problem; it was resolved by choosing one of the specific regions AWS recommends for Rekognition.
I'm using the AWS Javascript SDK to download a file from S3
var s3 = new AWS.S3();
var params = {
    Bucket: "MYBUCKET",
    Key: file
};

s3.getObject(params, function (err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else {
        // code to save file from data's byte array here
    }
});
This feels like it should be easier than I'm making it out to be. Basically I want to trigger the native file download for the browser. Every resource I've found on the internet is for node's file system. I can't just use the file's URL to download as it is stored encrypted via KMS, so that is why I am going about it this way.
Thanks for the help!
I ended up changing how I was storing files. Instead of encrypting them with KMS, I moved them to a private bucket and then based the retrieval off of the logged in cognito user's ID. Then, I switched to using getSignedURL to appropriately pass in the cognito user ID.
var s3 = new AWS.S3();
var params = {
    Bucket: "MYBUCKET",
    Key: cognitoUser.username + "/" + file
};

var url = s3.getSignedUrl('getObject', params);
window.open(url);
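For anyone who still wants to save the getObject byte array directly from the browser, as the original question asked, here is a minimal sketch. `bodyToBlob` and `triggerBrowserDownload` are hypothetical helpers, assuming `data.Body` is a Buffer/Uint8Array as the SDK returns in the browser:

```javascript
// Hypothetical helper: wrap the SDK's Body bytes in a Blob, falling back to a
// generic content type when the response carries none.
function bodyToBlob(body, contentType) {
  return new Blob([body], { type: contentType || "application/octet-stream" });
}

// Hypothetical helper: trigger the browser's native "save file" behavior by
// clicking a temporary anchor that points at an object URL for the blob.
function triggerBrowserDownload(blob, filename) {
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename; // hint the browser to save rather than navigate
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url);
}

// Usage inside the getObject callback:
// triggerBrowserDownload(bodyToBlob(data.Body, data.ContentType), file);
```

This keeps the KMS-encrypted object accessible only through the authenticated getObject call, since no direct URL to the object is ever exposed.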
I was faced with the following problem: when I upload a new file using a signed URL and then try to get the head-object of the uploaded file from S3 using aws-sdk, I get a Forbidden error; but if I upload a new file using the AWS console, I can get the head-object. Does anyone know what the problem is?
Make sure you specify the correct ACL in the presigned URL.
For example, set bucket-owner-full-control:
var s3 = new AWS.S3();
var params = {
    Bucket: req.body.bucketname,
    ACL: 'bucket-owner-full-control',
    Key: req.body.name,
    ContentType: req.body.type
};

s3.getSignedUrl('putObject', params, function (err, url) { /* ... */ });