AWS SDK Presigned URL + Multipart upload - javascript

Is there a way to do a multipart upload via the browser using a generated presigned URL?

Angular - Multipart Aws Pre-signed URL
Example
https://multipart-aws-presigned.stackblitz.io/
https://stackblitz.com/edit/multipart-aws-presigned?file=src/app/app.component.html
Download Backend:
https://www.dropbox.com/s/9tm8w3ujaqbo017/serverless-multipart-aws-presigned.tar.gz?dl=0
To upload large files to an S3 bucket using pre-signed URLs, it is necessary to use multipart upload: the file is split into many parts, which can then be uploaded in parallel. Here is a basic example of the backend and frontend.
Backend (Serverless, TypeScript)
import { APIGatewayProxyHandler } from 'aws-lambda';
import * as AWS from 'aws-sdk';

const AWSData = {
  accessKeyId: 'Access Key',
  secretAccessKey: 'Secret Access Key'
};
There are 3 endpoints
Endpoint 1: /start-upload
Asks S3 to start the multipart upload; the response contains an UploadId that must accompany each part that will be uploaded.
export const start: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName   /* File name */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.createMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({
      data: {
        uploadId: res.UploadId
      }
    })
  };
};
Endpoint 2: /get-upload-url
Creates a pre-signed URL for each part into which the file was split.
export const uploadUrl: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket,  /* Bucket name */
    Key: event.queryStringParameters.fileName,   /* File name */
    PartNumber: parseInt(event.queryStringParameters.partNumber, 10), /* Part to presign */
    UploadId: event.queryStringParameters.uploadId /* UploadId from Endpoint 1 response */
  };
  const s3 = new AWS.S3(AWSData);
  // getSignedUrl() is synchronous when called without a callback, so no await is needed
  const res = s3.getSignedUrl('uploadPart', params);
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(res)
  };
};
Endpoint 3: /complete-upload
After all the parts of the file have been uploaded, S3 must be informed that the upload is finished; this makes S3 assemble the object correctly.
export const completeUpload: APIGatewayProxyHandler = async (event, _context) => {
  // Parse the POST body
  const bodyData = JSON.parse(event.body);
  const s3 = new AWS.S3(AWSData);
  const params: any = {
    Bucket: bodyData.bucket,    /* Bucket name */
    Key: bodyData.fileName,     /* File name */
    MultipartUpload: {
      Parts: bodyData.parts     /* Parts uploaded */
    },
    UploadId: bodyData.uploadId /* UploadId from Endpoint 1 response */
  };
  const data = await s3.completeMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
      // 'Access-Control-Allow-Methods': 'OPTIONS,POST',
      // 'Access-Control-Allow-Headers': 'Content-Type',
    },
    body: JSON.stringify(data)
  };
};
Frontend (Angular 9)
The file is divided into 10 MB parts.
Once the file is selected, the multipart upload is started through Endpoint 1.
With the UploadId, the file is split into 10 MB parts, and for each one a pre-signed upload URL is obtained from Endpoint 2.
Each part, converted to a Blob, is sent with a PUT to its pre-signed URL from Endpoint 2.
After all the parts have been uploaded, a final request is made to Endpoint 3.
In the example, all of this is done by the uploadMultipartFile function.
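The steps above can be sketched as follows. This is a minimal sketch, not the exact StackBlitz code: the endpoint paths and response shapes follow the example backend, and the helper names are assumptions.

```javascript
const CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB parts

// Pure helper: compute the [start, end) byte range of each part.
function partRanges(fileSize, chunkSize = CHUNK_SIZE) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

async function uploadMultipartFile(file, bucket) {
  // 1. Start the multipart upload (Endpoint 1).
  const startRes = await fetch(
    `/start-upload?bucket=${bucket}&fileName=${file.name}`);
  const { data: { uploadId } } = await startRes.json();

  // 2. For each 10 MB part: get a pre-signed URL (Endpoint 2), then PUT the blob.
  const parts = [];
  const ranges = partRanges(file.size);
  for (let i = 0; i < ranges.length; i++) {
    const partNumber = i + 1; // S3 part numbers are 1-based
    const urlRes = await fetch(
      `/get-upload-url?bucket=${bucket}&fileName=${file.name}` +
      `&partNumber=${partNumber}&uploadId=${uploadId}`);
    const presignedUrl = await urlRes.json();
    const blob = file.slice(ranges[i][0], ranges[i][1]);
    const putRes = await fetch(presignedUrl, { method: 'PUT', body: blob });
    // S3 returns each part's ETag; it is required to complete the upload.
    parts.push({ PartNumber: partNumber, ETag: putRes.headers.get('ETag') });
  }

  // 3. Complete the upload (Endpoint 3).
  await fetch('/complete-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket, fileName: file.name, uploadId, parts }),
  });
}
```

Note that the parts can also be uploaded in parallel (e.g. with Promise.all) as long as the PartNumber/ETag pairs are collected for the final request.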

I managed to achieve this in a serverless architecture by creating a Canonical Request for each part upload using Signature Version 4. You will find it documented here: AWS Multipart Upload Via Presign Url

From the AWS documentation:
For request signing, multipart upload is just a series of regular requests, you initiate multipart upload, send one or more requests to upload parts, and finally complete multipart upload. You sign each request individually, there is nothing special about signing multipart upload request
So I think you have to generate a presigned URL for each part of the multipart upload :(
What is your use case? Can't you execute a script from your server, and give S3 access to that server?

Related

Uploading file to S3 using a custom API and AWS lambda

I am trying to send a file through Postman using the form-data type. The request is sent to an AWS Lambda through an API. By the time it reaches the Lambda, the request content is corrupted, with a lot of question marks in it.
I would like to convert the request content back into the file and store the file in S3.
Existing code -
const res = multipart.parse(event, false);
var file = res['File'];
var encodedFile = Buffer.from(file["content"], 'binary');
var encodedFilebs64 = Buffer.from(file["content"], 'binary').toString('base64');

const s3 = new AWS.S3();
const params = {
  Bucket: config.s3Bucket,
  Key: "asset_" + known_asset_id + '.bin',
  Body: encodedFile
};

await s3.upload(params).promise().then(function(data) {
  console.log(`File uploaded successfully. ${data.Location}`);
}, function(err) {
  console.error("Upload failed", err);
});
Response content from CloudWatch logs -
https://i.stack.imgur.com/SvBfF.png
When converting this to binary and comparing, the file is not the same as the original.
It would be helpful if someone could help me construct the file from the request content and store it in S3.
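One common cause of this (an assumption, since the API Gateway configuration isn't shown): when binary media types are not enabled, API Gateway re-encodes the multipart body as text, which mangles binary bytes into question marks. With binary support enabled, the body arrives base64-encoded and must be decoded before being handed to a multipart parser. A sketch:

```javascript
// Sketch: recover the raw multipart body from a Lambda proxy event.
// API Gateway sets isBase64Encoded when it delivered the body as base64.
function getRawBody(event) {
  return event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')
    : Buffer.from(event.body, 'binary');
}
// The resulting Buffer can then be parsed and uploaded to S3 unchanged.
```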

AWS Image upload data to S3 from form is corrupt

I am using multer in a Lambda function to upload an image through an API POST request that I am building; this is part of a form on my website.
This is the console log from CloudWatch:
{
  fieldname: 'logo_image',
  originalname: '8824.png',
  encoding: '7bit',
  mimetype: 'image/png',
  destination: '/tmp/upload',
  filename: 'f7f44f5c39304937d10e90ceb7e9ddbb',
  path: '/tmp/upload/f7f44f5c39304937d10e90ceb7e9ddbb',
  size: 1376654
}
This is my Express (JS) function that accepts the image using multer:
routes.post('/', upload.single('logo_image'), async (req, res) => {
  const file = req.file
  // here I am just going into another function to check the form data is valid
  await checkTicketId(req, res)
});
I then upload the data to my S3 bucket:
const result = await uploadFile(req.file)

function uploadFile(file, policy_id) {
  const fileStream = fs.createReadStream(file.path)
  const uploadParams = {
    Bucket: bucketName,
    Body: fileStream,
    Key: file.filename,
  }
  return s3.putObject(uploadParams).promise()
}
The data is uploaded fine; however, it's just some binary nonsense. If I return the object, I get data like this:
IHDR � �2�p IDATx���i�$I�%��Ǣj�WVVf]���9zw�X�h"\D � ��7�# -�h���S�]gfU��n�����EDE��##"�ȡݠ���ws3UQ���
[raw PNG bytes, truncated]
Ideally I want it stored as .webp. I have also tried using upload instead of putObject and changing the file extension. There doesn't seem to be any buffer data I can use on the req.file property either.
Any insight into this would be helpful; I've been stuck on it for a while now, and I've already enabled binary data through the API on AWS as well. Thanks!

How to use an S3 pre-signed POST url?

Actually, this is the first time I'm using S3 for uploading files. I have heard about pre-signed URLs, but apparently I can't set a limit on file size with them, so I found "pre-signed POST URLs" instead, but they are a little bit weird! Surprisingly, I didn't find any example; maybe it's not what I want.
I'm getting the pre-signed POST URL from the server:
const { S3 } = require("aws-sdk");

const s3 = new S3({
  accessKeyId: accessKey,
  secretAccessKey: secretKey,
  endpoint: api,
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});

app.post("/get-url", (req, res) => {
  const key = `user/${uuri.v4()}.png`;
  const params = {
    Bucket: "bucketName",
    Fields: {
      Key: key,
      ContentType: "image/png",
    },
  };
  s3.createPresignedPost(params, function (err, data) {
    if (err) {
      console.error("Presigning post data encountered an error", err);
    } else {
      res.json({ url: data.url });
    }
  });
});
The weird thing is that the URL I get does not look like a pre-signed URL: it's just the endpoint followed by the bucket name, with no query parameters and no options.
As you might guess, I can't use this URL:
await axios.put(url, file, {
  headers: {
    "Content-Type": "image/png",
  },
});
I don't even know whether I should use POST or two requests. I tried both; nothing happens. Maybe a pre-signed POST URL is not like a pre-signed URL!
At least show me an example; I can't find any.
You are on the right track, but you need to change the method you are invoking. The AWS S3 API docs for the createPresignedPost() method that you are currently using state:
Get a pre-signed POST policy to support uploading to S3 directly from an HTML form.
Try changing this method to getSignedUrl():
Get a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };

s3.getSignedUrl('putObject', params, function (err, url) {
  if (err) {
    console.error("Presigning post data encountered an error", err);
  } else {
    res.json({ url });
  }
});
or synchronously:
const params = { Bucket: 'bucket', Key: 'key' };
const url = s3.getSignedUrl('putObject', params);

res.json({ url });
Alternatively, use a promise by executing getSignedUrlPromise():
Returns a 'thenable' promise that will be resolved with a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };

s3.getSignedUrlPromise('putObject', params)
  .then(url => {
    res.json({ url });
  }, err => {
    console.error("Presigning post data encountered an error", err);
  });
Please also read the notes in the API documentation to make sure you understand the limitations of each method.
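For completeness, since the original goal was a file-size limit: that is exactly what a pre-signed POST supports, via a content-length-range entry in the Conditions array passed to createPresignedPost(). The reason the returned URL "has no query parameters" is that createPresignedPost() returns { url, fields }, and it is the fields, not the URL, that carry the policy and signature; the question's server only forwarded data.url. A sketch of how the client side is meant to use the full response (variable names are assumptions):

```javascript
// Sketch: a pre-signed POST is used as a multipart/form-data POST, not a
// PUT. Every field returned by createPresignedPost() must be forwarded
// unchanged, and the file must be the last field in the form.
function buildPresignedPostForm(fields, file) {
  const form = new FormData();
  for (const [name, value] of Object.entries(fields)) {
    form.append(name, value);
  }
  form.append('file', file); // file goes last, after the policy fields
  return form;
}

// Usage (browser or Node 18+), assuming /get-url returns the full
// createPresignedPost() result rather than just data.url:
// const { url, fields } = await (await fetch('/get-url', { method: 'POST' })).json();
// await fetch(url, { method: 'POST', body: buildPresignedPostForm(fields, file) });
```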

Upload an image to an AWS S3 bucket using the express-fileupload library

Can anyone please help me? Using the express-fileupload library, I want to upload files and images to an AWS S3 bucket through a RESTful API.
Hope you found the answer.
In case you haven't, here is what I just found out:
const uploadSingleImage = async (file, s3, fileName) => {
  const bucketName = process.env.S3_BUCKET_NAME;
  if (!file.mimetype.startsWith('image')) {
    return { status: false, message: 'File uploaded is not an image' };
  }
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Body: file.data,
    ACL: 'public-read',
    ContentType: file.mimetype,
  };
  return s3.upload(params).promise();
};
s3.upload(params).promise() will return an object that contains the Location of the uploaded file. You could in fact construct the URL yourself, but that would not cover the case where an error occurs, so I think what I posted here is the better solution.

AWS: Access Denied when trying to load presigned url (direct fileupload browser)

I'm attempting to use a presigned URL, but I keep getting a 403 Forbidden / Access Denied despite setting everything up as I believe I'm supposed to. I want to upload a file directly from the browser to Amazon S3.
First of all, I'm enabling the root AWS account to use putObject. I don't have any additional accounts; I just want it to work for my root account to begin with. Here is the bucket policy:
{
  "Version": "2012-10-17",
  "Id": "XXXX",
  "Statement": [
    {
      "Sid": "XXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXX:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::XXXXX/*"
    }
  ]
}
This is my Node.js backend, where I just generate the URL and send it to the frontend. Some code from the backend:
const aws = require('aws-sdk');

aws.config.update({
  region: "eu-north-1",
  accessKeyId: "XXX",
  secretAccessKey: "YYY"
});

const s3 = new aws.S3({ apiVersion: "2006-03-01" });

app.get('/geturl', (req, res) => {
  const s3Params = {
    Bucket: 'XXXXXXXXXXXXX',
    Key: req.query.filename,
    Expires: 500,
    ContentType: req.query.type,
    ACL: "public-read"
  };
  s3.getSignedUrl("putObject", s3Params, (err, data) => {
    res.send(data);
  });
});
In the frontend, I make a simple call using the URL with the file I wish to upload. The error is generated by the second fetch call:
async function handleUpload(e) {
  const file = e.target.files[0];
  const res = await fetch('http://localhost:3001/geturl');
  const url = await res.text();
  const resUpload = await fetch(url, { method: 'PUT', body: file });
}
Any ideas what I did wrong?
Edit: it seems to work if I uncheck the first checkbox. Is this a big deal, or should this always be blocked in a production environment?
In your backend, try changing ACL: "public-read" to ACL: "private". You should then be able to block all public access and still successfully complete presigned PUTs/POSTs.
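A sketch of the suggested change, keeping the rest of the question's s3Params intact (the helper name is an assumption):

```javascript
// Same s3Params shape as the question's /geturl route, with the ACL
// switched to "private" so S3 Block Public Access can stay enabled.
function buildPutParams(bucket, filename, type) {
  return {
    Bucket: bucket,
    Key: filename,
    Expires: 500,        // URL validity in seconds
    ContentType: type,
    ACL: 'private',      // was "public-read"
  };
}
```

The object can then be read back through its own presigned GET URL, so nothing needs to be publicly readable.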