Actually, this is the first time I'm using S3 for uploading files. I have heard about pre-signed URLs, but apparently you can't set a file-size limit with them, so I found "pre-signed POST URLs", which are a little bit weird! Surprisingly, I didn't find any example, so maybe it's not what I want.
I'm getting pre-signed post url from the server:
const { S3 } = require("aws-sdk");
const s3 = new S3({
accessKeyId: accessKey,
secretAccessKey: secretKey,
endpoint: api,
s3ForcePathStyle: true,
signatureVersion: "v4",
});
const { v4: uuidv4 } = require("uuid");

app.post("/get-url", (req, res) => {
  const key = `user/${uuidv4()}.png`;
  const params = {
    Bucket: "bucketName",
    Fields: {
      key,
      "Content-Type": "image/png",
    },
  };
  s3.createPresignedPost(params, function (err, data) {
    if (err) {
      console.error("Presigning post data encountered an error", err);
      res.status(500).send(err.toString());
    } else {
      res.json({ url: data.url });
    }
  });
});
The weird thing is that the URL I get is not like a pre-signed URL. It's just the endpoint followed by the bucket name: no query parameters, no options.
As you might guess, I can't use this URL:
await axios.put(url, file, {
headers: {
"Content-Type": "image/png",
},
});
I don't even know whether I should use POST or two requests.
I tried both; nothing happens. Maybe a pre-signed POST URL is not like a pre-signed URL!
At least show me an example! I can't find any.
You are on the right track, but you need to change the method you are invoking. The AWS S3 API documentation for the createPresignedPost() method that you are currently using states:
Get a pre-signed POST policy to support uploading to S3 directly from an HTML form.
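For completeness, since a POST policy is also the only way to enforce a file-size limit at signing time (via a content-length-range condition), here is a minimal sketch of how it would be used, reusing the s3 client, key, and res from your snippet; the 5 MB limit and the 5-minute expiry are assumed values. Note that createPresignedPost() returns both a url and a fields object, and the client must send every returned field together with the file as multipart/form-data:

const params = {
  Bucket: "bucketName",
  Fields: {
    key,                          // the object key the upload must use
    "Content-Type": "image/png",
  },
  Conditions: [
    ["content-length-range", 0, 5 * 1024 * 1024], // reject uploads over 5 MB (assumed limit)
  ],
  Expires: 300, // the policy stays valid for 5 minutes
};
s3.createPresignedPost(params, function (err, data) {
  if (err) return res.status(500).send(err.toString());
  res.json({ url: data.url, fields: data.fields }); // the client needs both parts
});

And on the client, append the returned fields before the file and POST the form:

const { url, fields } = (await axios.post("/get-url")).data;
const form = new FormData();
Object.entries(fields).forEach(([name, value]) => form.append(name, value));
form.append("file", file); // the file field must come last
await axios.post(url, form);

That said, if you don't actually need the size limit, a plain pre-signed URL is simpler.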
Try changing this method to either getSignedUrl():
Get a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };
s3.getSignedUrl('putObject', params, function (err, url) {
  if (err) {
    console.error("Presigning post data encountered an error", err);
    res.status(500).send(err.toString());
  } else {
    res.json({ url });
  }
});
or synchronously:
const params = { Bucket: 'bucket', Key: 'key' };
const url = s3.getSignedUrl('putObject', params);
res.json({ url });
Alternatively, use a promise by executing getSignedUrlPromise():
Returns a 'thenable' promise that will be resolved with a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };
s3.getSignedUrlPromise('putObject', params)
  .then(url => {
    res.json({ url });
  }, err => {
    console.error("Presigning post data encountered an error", err);
    res.status(500).send(err.toString());
  });
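Whichever variant you choose, note that any parameters you include when signing (ContentType, Expires, and so on) become part of the signature and constrain the eventual request: if you sign with a ContentType, the client's PUT must send the same Content-Type header. A short sketch with illustrative values:

const params = {
  Bucket: 'bucket',
  Key: 'key',
  Expires: 300,             // URL valid for 5 minutes
  ContentType: 'image/png'  // must match the PUT's Content-Type header
};
const url = s3.getSignedUrl('putObject', params);

// Client side: upload the raw file body with the matching header
await axios.put(url, file, { headers: { 'Content-Type': 'image/png' } });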
Please also read the notes section of the API documentation to make sure that you understand the limitations of each method.
I have this code in my Express server that generates an S3 pre-signed URL using the Node.js AWS SDK:
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
app.get('/getS3PresignedUrl', async (req, res) => {
const s3 = new S3Client({
region: 'ap-southeast-2',
credentials: {
accessKeyId: accessKey.data.access_key,
secretAccessKey: accessKey.data.secret_key
}
});
const presignedS3Url = await getSignedUrl(s3, new PutObjectCommand({
Bucket: 'xxx',
Key: 'test.txt',
})
);
res.send({
status: true,
message: 'success',
data: {
s3Url: presignedS3Url
}
});
})
Once I get the URL, I put it into both curl and Postman, but both get a 403 InvalidAccessKeyId.
For example, the same 403 response comes back from both curl and Postman (screenshots omitted).
I don't understand why it fails with 403 when the S3 URL is generated successfully. Note that I use the URL right away, well within the 900-second expiry limit.
Any ideas what is the issue here?
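One detail that explains the confusion: getSignedUrl() signs the request locally and never calls AWS, so a URL is generated "successfully" even when the key pair is invalid; the 403 only surfaces when the URL is used. A quick debugging sketch (reusing presignedS3Url from above) to check which access key actually went into the signature:

// Pre-signing is a purely local computation, so a bad key still yields a URL.
// The X-Amz-Credential query parameter shows the access key that was used:
const { searchParams } = new URL(presignedS3Url);
console.log('Signed with:', searchParams.get('X-Amz-Credential'));
// Compare the key ID at the start of this value against the IAM console.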
Anyone, please help me: using the express-fileupload library, I want to upload files and images to an AWS S3 bucket through a RESTful API.
Hope you found the answer. In case you haven't, here is what I just found out:
const uploadSingleImage = async (file, s3, fileName) => {
  const bucketName = process.env.S3_BUCKET_NAME;
  // Reject anything that isn't an image (express-fileupload exposes the MIME type)
  if (!file.mimetype.startsWith('image')) {
    return { status: false, message: 'File uploaded is not an image' };
  }
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Body: file.data, // the raw file buffer from express-fileupload
    ACL: 'public-read',
    ContentType: file.mimetype,
  };
  return s3.upload(params).promise();
};
s3.upload(params).promise() returns an object containing the Location of the file you uploaded. You could construct that URL yourself, but that would not cover the case where an error occurs during the upload, so I think what I posted here is the better solution.
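For context, here is a sketch of how this helper might be wired into a route with express-fileupload; the route path, the 'image' field name, and the key format are assumptions:

const fileUpload = require('express-fileupload');
app.use(fileUpload());

app.post('/upload', async (req, res) => {
  // express-fileupload exposes uploaded files on req.files, keyed by form field name
  if (!req.files || !req.files.image) {
    return res.status(400).json({ status: false, message: 'No file was uploaded' });
  }
  const file = req.files.image;
  const result = await uploadSingleImage(file, s3, `images/${Date.now()}-${file.name}`);
  if (result.status === false) return res.status(400).json(result);
  res.json({ status: true, location: result.Location }); // Location is the object's URL
});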
I am generating preauthorized links for files from an S3 bucket, but would like to pass in the file name to download as a parameter.
This is what my API looks like:
reports.get('/xxx', async (req, res) => {
  var AWS = require('aws-sdk');
  var s3 = new AWS.S3();
  var params = {
    Bucket: config.xxx,
    Key: 'xxx/xxx.json',
    Expires: 60 * 5
  };
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) return res.status(500).send(err.toString());
    console.log(url);
    res.json(url);
  });
});
And this is how I call it from the front end:
getPreauthorizedLink(e){
fetch(config.api.urlFor('xxx'))
.then((response) => response.json())
.then((url) => {
console.log(url);
});
}
How can I add a parameter to the API call and corresponding API method to pass the filename?
Looks like you are using Express on your server side, so you can simply add parameters to the request URL and then read them on the server side.
On the frontend (your client side) you will call the API like:
fetch('/xxx/FileName')
And on the backend you will modify your route like:
reports.get('/xxx/:fileName', async (req, res) => {
  const fileName = req.params.fileName;
  // ...
});
Also, you don't want to require the SDK every time you receive a request, so you should move var AWS = require('aws-sdk'); outside your request handler, as in the sketch below.
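Putting both pieces together, a sketch of the full route (whether fileName maps directly onto the S3 key is an assumption about your bucket layout):

var AWS = require('aws-sdk'); // required once, outside the handler
var s3 = new AWS.S3();

reports.get('/xxx/:fileName', (req, res) => {
  var params = {
    Bucket: config.xxx,
    Key: 'xxx/' + req.params.fileName, // key derived from the route parameter
    Expires: 60 * 5
  };
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) return res.status(500).send(err.toString());
    res.json(url);
  });
});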
I have a JPEG buffer which uploads to and downloads from S3 successfully. However, I'm trying to send it over the Messenger API, and when it's accessed programmatically Messenger throws errors, because according to the S3 console the actual Content-Type of the image is application/octet-stream.
My manually entered metadata appears under x-amz-meta-content-type. According to the AWS documentation, this is the default behavior. How might I override it to get image/jpeg under Content-Type?
My code:
var s3 = new AWS.S3();
var params = {
Body: buffer,
Bucket: <bucket>,
Key: <key>,
Metadata: {
'Content-Type': 'image/jpeg'
}
};
s3.putObject(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else {
console.log(data);
}
})
Don't set it in the Metadata section, that's only for properties that will be prefixed with x-amz-meta. There is a ContentType parameter at the main level, like so:
var s3 = new AWS.S3();
var params = {
Body: buffer,
Bucket: <bucket>,
Key: <key>,
ContentType: 'image/jpeg'
};
s3.putObject(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else {
console.log(data);
}
})
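To verify the fix, a quick sketch using headObject, which should now report the new content type (same placeholder bucket and key as above):

s3.headObject({ Bucket: <bucket>, Key: <key> }, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data.ContentType); // should now print 'image/jpeg'
});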
Good afternoon,
I'm trying to set up a connection to my AWS product API; however, I keep getting a 301 PermanentRedirect error, as follows:
{ [PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.]
message: 'The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.',
code: 'PermanentRedirect',
name: 'PermanentRedirect',
statusCode: 301,
retryable: false }
The code I am using to connect to the API is as follows:
var aws = require('aws-sdk');
//Setting up the AWS API
aws.config.update({
accessKeyId: 'KEY',
secretAccessKey: 'SECRET',
region: 'eu-west-1'
})
var s3 = new aws.S3();
s3.createBucket({Bucket: 'myBucket'}, function() {
var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};
s3.putObject(params, function(err, data) {
if (err)
console.log(err)
else
console.log("Successfully uploaded data to myBucket/myKey");
});
});
If I try using different regions, like us-west-1, I just get the same error.
What am I doing wrong?
Thank you very much in advance!
I have fixed this issue:
You have to make sure that you have already created a bucket with the same name; in this case, the name of the bucket would be 'myBucket'.
s3.createBucket({Bucket: 'myBucket'}, function() {
var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};
Once you have created the bucket, go to its Properties and see which region it is using, then add that region to your config:
aws.config.update({
accessKeyId: 'KEY',
secretAccessKey: 'SECRET',
region: 'eu-west-1'
})
Now it should work! Best wishes
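If you'd rather look the region up in code than in the console, a sketch using getBucketLocation:

s3.getBucketLocation({ Bucket: 'myBucket' }, function (err, data) {
  if (err) console.log(err);
  // an empty LocationConstraint means the bucket lives in us-east-1
  else console.log(data.LocationConstraint || 'us-east-1');
});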
I've just come across this issue and I believe the example at http://aws.amazon.com/sdkfornodejs/ is incorrect.
The issue with the demo code is that Bucket names should be in lowercase - http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Had the demo actually output the err argument in the s3.createBucket callback, this would have been immediately obvious.
Also, bucket names need to be globally unique, not merely unique within the region you're creating them in.
s3.createBucket({ Bucket: 'mylowercaseuniquelynamedtestbucket' }, function (err) {
  if (err) {
    console.info(err);
  } else {
    var params = { Bucket: 'mylowercaseuniquelynamedtestbucket', Key: 'testKey', Body: 'Hello!' };
    s3.putObject(params, function (err, data) {
      if (err)
        console.log(err);
      else
        console.log("Successfully uploaded data to mylowercaseuniquelynamedtestbucket/testKey");
    });
  }
});
Was stuck on this for a while...
Neither setting the region name nor leaving it blank in my configuration file fixed the issue. Then I came across a gist discussing the solution, and it was quite simple:
Given:
var AWS = require('aws-sdk');
Prior to instantiating your S3 object, do the following:
AWS.config.update({ region: "us-west-2" }); // replace with your region
I was then able to call the following with no problem:
var storage = new AWS.S3();
var params = {
  Bucket: config.aws.s3_bucket,
  Key: name,
  ACL: 'authenticated-read',
  Body: data
};
storage.putObject(params, function (storage_err, data) {...});