I'm currently developing a React web app that lets users upload images. I have it working with AWS S3, but I would like to have everything on DigitalOcean for simplicity's sake.
The flow goes like this:
1. Send the image info to the server and receive a signed URL.
2. PUT the image to the signed URL.
When I try this with Spaces I constantly get a "SignatureDoesNotMatch" error.
Please help, I'm going batty!
My Node server code generating the signed URL:
const aws = require('aws-sdk');

const spacesEndpoint = new aws.Endpoint(`${DO_REGION}.digitaloceanspaces.com`);
const s3 = new aws.S3({
  endpoint: spacesEndpoint,
  accessKeyId: DO_ACCESS_KEY_ID,
  secretAccessKey: DO_SECRET_ACCESS_KEY,
  region: DO_REGION,
  signatureVersion: 'v4',
});
const s3Params = {
  Bucket: DO_SPACE,
  Expires: 60,
  Key: filePath,
  ContentType: fileType, // "image/jpeg"
  ACL: 'public-read',
};
const promise = new Promise((resolve, reject) => {
  s3.getSignedUrl('putObject', s3Params, (err, url) => {
    if (err) {
      reject(err);
      return; // avoid also resolving after a rejection
    }
    resolve(url);
  });
});
My client-side JS, PUTting the file after retrieving the signed URL:
const xhr = new XMLHttpRequest();
xhr.withCredentials = true;
xhr.open('PUT', payload.signedUrl);
xhr.setRequestHeader('Host', `${DO_SPACE}.${DO_REGION}.digitaloceanspaces.com`);
xhr.setRequestHeader('x-amz-acl', 'public-read');
xhr.setRequestHeader('Content-Type', payload.file.type);
xhr.setRequestHeader('Content-Length', payload.file.size);
xhr.send(payload.file);
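For reference, a hedged sketch of the same upload sending only the headers the signature actually covers. Host and Content-Length are forbidden header names that the browser always sets itself, so assigning them manually is ignored, and any header you do send must match what was signed (the ContentType and ACL params above):

// A sketch, not a confirmed fix: PUT with only the signed headers.
// The browser supplies Host and Content-Length on its own.
fetch(payload.signedUrl, {
  method: 'PUT',
  headers: {
    'Content-Type': payload.file.type, // must equal the signed ContentType
    'x-amz-acl': 'public-read',        // must equal the signed ACL
  },
  body: payload.file,
}).then((res) => {
  if (!res.ok) throw new Error('Upload failed: ' + res.status);
});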
Related
I have this code in my Express server that generates an S3 presigned URL using the Node.js AWS SDK:
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
app.get('/getS3PresignedUrl', async (req, res) => {
  const s3 = new S3Client({
    region: 'ap-southeast-2',
    credentials: {
      accessKeyId: accessKey.data.access_key,
      secretAccessKey: accessKey.data.secret_key
    }
  });
  const presignedS3Url = await getSignedUrl(s3, new PutObjectCommand({
    Bucket: 'xxx',
    Key: 'test.txt',
  }));
  res.send({
    status: true,
    message: 'success',
    data: {
      s3Url: presignedS3Url
    }
  });
});
Once I get the URL, I put it in either curl or Postman, but both get a 403 InvalidAccessKeyId.
I don't understand why it fails with 403 when the S3 URL is generated successfully. Note that I use the URL right away, well within the 900-second expiry limit.
Any ideas what the issue is here?
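Worth knowing when debugging this: presigning is a purely local HMAC computation, so getSignedUrl succeeds even if the credentials are wrong; S3 only checks the embedded access key when the URL is actually used. A hedged sketch for verifying the credentials independently (it reuses the accessKey object from the route and assumes @aws-sdk/client-sts is installed):

// Sketch: ask STS who these credentials belong to. If this fails with
// InvalidClientTokenId, the presigned URL's 403 has the same root cause.
const { STSClient, GetCallerIdentityCommand } = require("@aws-sdk/client-sts");

const sts = new STSClient({
  region: 'ap-southeast-2',
  credentials: {
    accessKeyId: accessKey.data.access_key,   // same values the presigner used
    secretAccessKey: accessKey.data.secret_key
  }
});
sts.send(new GetCallerIdentityCommand({}))
  .then(id => console.log('Signing as', id.Arn))
  .catch(err => console.error('Credentials rejected:', err.name));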
Actually, this is the first time I'm using S3 for uploading files. I have heard about pre-signed URLs, but apparently I can't set a limit on the file size with them, so I found "pre-signed POST URLs", but they're a little weird! Surprisingly, I didn't find any example. Maybe it's not what I want.
I'm getting the pre-signed POST URL from the server:
const { S3 } = require("aws-sdk");
const uuid = require("uuid");

const s3 = new S3({
  accessKeyId: accessKey,
  secretAccessKey: secretKey,
  endpoint: api,
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});
app.post("/get-url", (req, res) => {
const key = `user/${uuri.v4()}.png`;
const params = {
Bucket: "bucketName",
Fields: {
Key: key,
ContentType: "image/png",
},
};
s3.createPresignedPost(params, function (err, data) {
if (err) {
console.error("Presigning post data encountered an error", err);
} else {
res.json({ url: data.url });
}
});
});
The weird thing is that the URL I get is not like a pre-signed URL; it's just the endpoint followed by the bucket name. No query parameters, no options.
As you might guess, I can't use this URL:
await axios.put(url, file, {
  headers: {
    "Content-Type": "image/png",
  },
});
I don't even know if I should use POST or PUT for the upload.
I tried both, and nothing happens. Maybe a pre-signed POST URL is not like a pre-signed URL!
At least show me an example! I can't find any.
You are on the right track, but you need to change the method you are invoking. The AWS S3 API docs for createPresignedPost(), which you are currently using, state:
Get a pre-signed POST policy to support uploading to S3 directly from an HTML form.
Try changing this method to either getSignedUrl():
Get a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };
s3.getSignedUrl('putObject', params, function (err, url) {
  if (err) {
    console.error("Presigning post data encountered an error", err);
  } else {
    res.json({ url });
  }
});
or synchronously:
const params = { Bucket: 'bucket', Key: 'key' };
const url = s3.getSignedUrl('putObject', params);
res.json({ url });
Alternatively, use a promise by executing getSignedUrlPromise():
Returns a 'thenable' promise that will be resolved with a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };
s3.getSignedUrlPromise('putObject', params)
  .then(url => {
    res.json({ url });
  }, err => {
    console.error("Presigning post data encountered an error", err);
  });
Please also read the notes sections of the API documentation to make sure you understand the limitations of each method.
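That said, since the original motivation was limiting the upload size, which only a POST policy can enforce, here is a hedged sketch of how createPresignedPost() is meant to be used. The response carries both a url and a fields object, and every field must be sent as multipart form data. The content-length-range condition and the lowercase key field name follow the SDK docs; the server half reuses the key variable from the question, and the client half assumes the { url, fields } response has been fetched into data and that file is a browser File object:

// Server: presign a POST policy that caps uploads at 5 MB.
const params = {
  Bucket: "bucketName",
  Conditions: [["content-length-range", 0, 5 * 1024 * 1024]],
  Fields: { key: key, "Content-Type": "image/png" },
  Expires: 300,
};
s3.createPresignedPost(params, (err, data) => {
  if (err) return res.status(500).send(err.message);
  res.json(data); // { url, fields } - the client needs BOTH parts
});

// Client: append every presigned field, then the file LAST.
const form = new FormData();
Object.entries(data.fields).forEach(([k, v]) => form.append(k, v));
form.append("file", file);
await axios.post(data.url, form);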
I am trying to upload the cropped results of croppie to an S3 bucket. I am currently getting a blank error when I successfully crop and then try to upload the cropped results.
I have followed Amazon docs including setting up the S3 bucket, identity pools, and configuring my CORS.
I believe the error has something to do with how croppie is packaging the cropped results. I have included my app.js file (where I handle the upload) and the code where the addPhoto function is called. resp is the response from croppie.
The expected outcome is that I can successfully crop a photo and then upload it to my S3 bucket.
$('.crop').on('click', function (ev) {
  $uploadCrop.croppie('result', {
    type: 'canvas',
    size: 'original'
  }).then(function (resp) {
    Swal.fire({
      imageUrl: resp,
      showCancelButton: true,
      confirmButtonText: "Upload",
      reverseButtons: true,
      showCloseButton: true
    }).then((result) => {
      if (result.value) {
        addPhoto(resp);
      }
    });
  });
});
app.js
var albumBucketName = "colorsort";
var bucketRegion = "xxx";
var IdentityPoolId = "xxx";

AWS.config.update({
  region: bucketRegion,
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: IdentityPoolId
  })
});

var s3 = new AWS.S3({
  apiVersion: "2006-03-01",
  params: { Bucket: albumBucketName }
});
function addPhoto(resp) {
  var file = resp;
  var fileName = file.name;
  console.log(resp.type);
  var photoKey = fileName;
  // Use S3 ManagedUpload class as it supports multipart uploads
  var upload = new AWS.S3.ManagedUpload({
    params: {
      Bucket: albumBucketName,
      Key: photoKey,
      Body: file,
      ACL: "public-read"
    }
  });
  var promise = upload.promise();
  promise.then(
    function (data) {
      alert("Successfully uploaded photo.");
    },
    function (err) {
      return alert("There was an error uploading your photo: " + err.message);
    }
  );
}
The solution I found involved adding the following snippet to my CORS config, as well as changing the croppie result type from 'canvas' to 'base64'.
<AllowedHeader>*</AllowedHeader>
Useful resources: Upload missing ETag, Uploading base64 image to Amazon with Node.js
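For anyone hitting the same wall: with type: 'base64' croppie resolves with a data URL rather than a file, so the upload body has to be converted first. A hedged sketch (the dataURLtoBlob helper below is an illustration, not part of croppie or the AWS SDK):

// Sketch: turn croppie's base64 data URL into a Blob that ManagedUpload
// can use as Body. A Blob has no name, so generate the Key yourself.
function dataURLtoBlob(dataURL) {
  var parts = dataURL.split(',');
  var mime = parts[0].match(/:(.*?);/)[1]; // e.g. "image/png"
  var binary = atob(parts[1]);             // decode the base64 payload
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mime });
}

The resulting Blob can then go straight into the Body parameter of the upload, with a key you generate yourself.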
I have looked at all the tutorials on how to download files from S3 to local disk. I have followed their solutions, and what they do is download the file to the server, not to the client. The code I currently have is:
app.get('/download_file', function (req, res) {
  var file = fs.createWriteStream('/Users/arthurlecalvez/Downloads/file.csv');
  file.on('close', function () { console.log('done'); });
  s3.getObject({ Bucket: 'data.pool.al14835', Key: req.query.filename }).on('error', function (err) {
    console.log(err);
  }).on('httpData', function (chunk) {
    file.write(chunk);
  }).on('httpDone', function () {
    file.end();
  }).send();
  res.send('success');
});
How do I then send this to the client so that it is downloaded onto their device?
You can use a signed URL, like this:
var params = {
  Bucket: bucketname,
  Key: keyfile,
  Expires: 3600,
  ResponseContentDisposition: `attachment; filename="filename.ext"`
};
var url = s3.getSignedUrl('getObject', params);
The generated link will force a download with the name filename.ext.
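Tying that to the route from the question, a hedged sketch (bucket name and query parameter copied from the question's code) is to redirect the client to the signed URL rather than proxying the bytes through the server:

// Sketch: hand the browser a signed, expiring URL and let it download
// directly from S3; nothing is written to the server's disk.
app.get('/download_file', function (req, res) {
  var params = {
    Bucket: 'data.pool.al14835',
    Key: req.query.filename,
    Expires: 3600,
    ResponseContentDisposition: 'attachment; filename="' + req.query.filename + '"'
  };
  res.redirect(s3.getSignedUrl('getObject', params));
});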
S3 supports the ability to generate a pre-signed URL via the AWS JavaScript API. Users can then GET this URL to download the S3 object to their local device.
See this question for a Node.js code sample.
If you have the file URL, you can do it like this:
new Observable((observer) => {
  var xhr = new XMLHttpRequest();
  xhr.open("get", fileURL, true);
  xhr.responseType = "blob";
  xhr.onload = function () {
    if (xhr.readyState === 4) {
      observer.next(xhr.response);
      observer.complete();
    }
  };
  xhr.send();
}).subscribe((blob: any) => {
  let link = document.createElement("a");
  link.href = window.URL.createObjectURL(blob);
  link.download = elem.material.driverUrl;
  link.click();
});
The file URL is:
https://BucketName.s3.Region.amazonaws.com/Key
Just replace BucketName, Region, and Key with your own values.
You can use the res.download method: http://expressjs.com/en/api.html#res.download
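Note that res.download expects a file that already exists on the server, so on its own it recreates the problem from the question. A hedged sketch of an alternative (same bucket and route as the question) streams the S3 object straight through Express to the client instead:

// Sketch: pipe the object to the response; res.attachment() sets the
// Content-Disposition header so the browser saves it as a download.
app.get('/download_file', function (req, res) {
  res.attachment(req.query.filename);
  s3.getObject({ Bucket: 'data.pool.al14835', Key: req.query.filename })
    .createReadStream()
    .on('error', function (err) { res.status(500).send(err.message); })
    .pipe(res);
});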
I am looking to use DigitalOcean Spaces (which seems to have an API identical to S3's), and would like to try it by uploading a sample file. I am having lots of difficulty. Here's what I've done so far.
{'hi' : 'world'}
is the contents of a file, hiworld.json, that I would like to upload. I understand that I need to create an AWS v4 signature before I can make this request.
var aws4 = require('aws4');
var request = require('request');

var opts = {
  json: true,
  body: "{'hi':'world'}",
  host: '${myspace}.nyc3.digitaloceanspaces.com',
  path: '/hiworld.json'
};
aws4.sign(opts, { accessKeyId: '${SECRET}', secretAccessKey: '${SECRET}' });
Then I send the request
request.put(opts, function (error, response) {
  if (error) {
    console.log(error);
  }
  console.log(response.body);
});
However, when I check my DigitalOcean Space, I see that my file was not created. I have noticed that if I change my PUT to a GET and try to access an existing file, I have no issues.
Here's what my headers look like
headers:
{ Host: '${myspace}.nyc3.digitaloceanspaces.com',
'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8',
'Content-Length': 14,
'X-Amz-Date': '20171008T175325Z',
Authorization: 'AWS4-HMAC-SHA256 Credential=${mykey}/20171008/us-east-1//aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=475e691d4ddb81cca28eb0dcdc7c926359797d5e383e7bef70989656822accc0' },
method: 'POST' }
As an alternative, using aws-sdk:
// 1. Importing the SDK
import AWS from 'aws-sdk';

// 2. Configuring the S3 instance for Digital Ocean Spaces
const spacesEndpoint = new AWS.Endpoint(`${REGION}.digitaloceanspaces.com`);
const url = `https://${BUCKET}.${REGION}.digitaloceanspaces.com/${file.path}`;
const S3 = new AWS.S3({
  endpoint: spacesEndpoint,
  accessKeyId: ACCESS_KEY_ID,
  secretAccessKey: SECRET_ACCESS_KEY
});

// 3. Using .putObject() to make the PUT request, S3 signs the request
const params = { Body: file.stream, Bucket: BUCKET, Key: file.path };
S3.putObject(params)
  .on('build', request => {
    request.httpRequest.headers.Host = `https://${BUCKET}.${REGION}.digitaloceanspaces.com`;
    // Note: I am assigning the size to the file Stream manually
    request.httpRequest.headers['Content-Length'] = file.size;
    request.httpRequest.headers['Content-Type'] = file.mimetype;
    request.httpRequest.headers['x-amz-acl'] = 'public-read';
  })
  .send((err, data) => {
    if (err) logger(err, err.stack);
    else logger(JSON.stringify(data, '', 2));
  });
var str = {
  'hi': 'world'
};
var c = JSON.stringify(str);

request(aws4.sign({
  uri: 'https://${space}.nyc3.digitaloceanspaces.com/newworlds.json',
  method: 'PUT',
  path: '/newworlds.json',
  headers: {
    "Cache-Control": "no-cache",
    "Content-Type": "application/x-www-form-urlencoded",
    "accept": "*/*",
    "host": "${space}.nyc3.digitaloceanspaces.com",
    "accept-encoding": "gzip, deflate",
    "content-length": c.length
  },
  body: c
}, { accessKeyId: '${secret}', secretAccessKey: '${secret}' }), function (err, res) {
  if (err) {
    console.log(err);
  } else {
    console.log(res);
  }
});
This gave me a successful PUT.
It can be done using multer and the AWS SDK. It worked for me.
const aws = require('aws-sdk');
const multer = require('multer');
const express = require('express');
const multerS3 = require('multer-s3');

const app = express();

const spacesEndpoint = new aws.Endpoint('sgp1.digitaloceanspaces.com');
const spaces = new aws.S3({
  endpoint: spacesEndpoint,
  accessKeyId: 'your_access_key_from_API',
  secretAccessKey: 'your_secret_key'
});

const upload = multer({
  storage: multerS3({
    s3: spaces,
    bucket: 'bucket-name',
    acl: 'public-read',
    key: function (request, file, cb) {
      console.log(file);
      cb(null, file.originalname);
    }
  })
}).array('upload', 1);
Now you can call this from a route like this:
app.post('/upload', function (request, response, next) {
  upload(request, response, function (error) {
    if (error) {
      console.log(error);
      return response.status(400).send(error.message);
    }
    console.log('File uploaded successfully.');
    response.send('File uploaded successfully.');
  });
});
The HTML would look like this (note that the input's name="upload" must match the field name passed to .array()):
<form method="post" enctype="multipart/form-data" action="/upload">
  <label for="file">Upload a file</label>
  <input type="file" name="upload">
  <input type="submit" class="button">
</form>