SignatureDoesNotMatch error from S3 putObject - javascript

I'm using the AWS JavaScript SDK to put files into my S3 bucket. The following code is intended to upload a user avatar to S3. I'm hardcoding the accessKeyId and secretAccessKey for now, and taking the file and key for the upload from a web form.
document.getElementById("upload-button").onclick = function() {
const key = document.getElementById("key-text").value;
var file = document.getElementById("file-chooser").files[0];
const S3 = new AWS.S3({
signatureVersion: "v4",
apiVersion: '2006-03-01',
accessKeyId: 'ACCESS_KEY_ID',
secretAccessKey: 'SECRET_ACCESS_KEY',
region: 'us-west-2'
})
S3.putObject({
Key: key,
Bucket: 'my-bucket-name',
Body: file,
}, (err, data) => {
if (err) {
alert("Error: " + err);
} else {
alert("Upload successful: " + data);
}
})
}
document.getElementById("upload-button").onclick = function() {
const key = document.getElementById("key-text").value;
var file = document.getElementById("file-chooser").files[0];
const S3 = new AWS.S3({`
signatureVersion: "v4",
apiVersion: '2006-03-01',
accessKeyId: 'ACCESS_KEY_ID',
secretAccessKey: 'SECRET_ACCESS_KEY',
region: 'us-west-2'
})
S3.putObject({
Key: key,
Bucket: 'ilarp-data-prod-1',
Body: file,
}, (err, data) => {
if (err) {
alert("Error: " + err);
} else {
alert("Upload successful: " + data);
}
})
}
The code above gives me an error return of SignatureDoesNotMatch. I'm mystified by that, since I thought I was letting the SDK do the signing, and earlier versions of this (which I unfortunately cannot reproduce) did not give me this error.

It turns out that this was pilot error. I was mismatching the ACCESS_KEY_ID and SECRET_ACCESS_KEY. Even though I checked this very thing a dozen times, I still got it wrong. Sorry about that. If you come here wondering about this, know that every programmer makes a dumb mistake every once in a while.
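One way to avoid this class of mistake is to not paste the key pair into the code at all. In the browser that generally means signing uploads server-side (see the pre-signed URL question below); in Node, the v2 SDK resolves credentials on its own from the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables or the shared ~/.aws/credentials file. A minimal sketch, assuming credentials are configured in the environment:
// Hedged sketch: no hardcoded keys. The SDK reads credentials from the
// environment or ~/.aws/credentials, so they can't be pasted in wrong.
const AWS = require('aws-sdk');
const S3 = new AWS.S3({
  signatureVersion: 'v4',
  apiVersion: '2006-03-01',
  region: 'us-west-2'
});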

Related

How to use an S3 pre-signed POST url?

Actually, this is the first time I'm using S3 for uploading files. I have heard about pre-signed URLs, but apparently I can't set a file-size limit with them, so I found "pre-signed POST URLs" instead, which are a little bit weird! Surprisingly, I didn't find any example; maybe it's not what I want.
I'm getting the pre-signed POST URL from the server:
const express = require("express");
const { S3 } = require("aws-sdk");
const uuid = require("uuid"); // was `uuri` -- assuming the uuid package was meant

const app = express();
const s3 = new S3({
  accessKeyId: accessKey,
  secretAccessKey: secretKey,
  endpoint: api,
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});

app.post("/get-url", (req, res) => {
  const key = `user/${uuid.v4()}.png`;
  const params = {
    Bucket: "bucketName",
    Fields: {
      Key: key,
      ContentType: "image/png",
    },
  };
  s3.createPresignedPost(params, function (err, data) {
    if (err) {
      console.error("Presigning post data encountered an error", err);
    } else {
      res.json({ url: data.url });
    }
  });
});
The weird thing is that the URL I get back doesn't look like a pre-signed URL. It's just the endpoint followed by the bucket name: no query parameters, no options.
As you might guess, I can't use this URL:
await axios.put(url, file, {
  headers: {
    "Content-Type": "image/png",
  },
});
I don't even know if I should use POST or two requests.
I tried both; nothing happens. Maybe a pre-signed POST URL is not like a pre-signed URL!
At least show me an example! I can't find any.
You are on the right track, but you need to change the method you are invoking. The AWS S3 API docs for createPresignedPost(), which you are currently using, state that it will:
Get a pre-signed POST policy to support uploading to S3 directly from an HTML form.
Try changing this method to getSignedUrl():
Get a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };
s3.getSignedUrl('putObject', params, function (err, url) {
  if (err) {
    console.error("Presigning post data encountered an error", err);
  } else {
    res.json({ url });
  }
});
or synchronously:
const params = { Bucket: 'bucket', Key: 'key' };
const url = s3.getSignedUrl('putObject', params);
res.json({ url });
Alternatively, use a promise by executing getSignedUrlPromise():
Returns a 'thenable' promise that will be resolved with a pre-signed URL for a given operation name.
const params = { Bucket: 'bucket', Key: 'key' };
s3.getSignedUrlPromise('putObject', params)
  .then(url => {
    res.json({ url });
  }, err => {
    console.error("Presigning post data encountered an error", err);
  });
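On the client, the URL returned for 'putObject' can then be used with a plain HTTP PUT, mirroring the axios call from the question (a sketch, assuming the /get-url route above returns { url }):
// Sketch: fetch a signed URL from the server, then PUT the file to it.
// The Content-Type should match whatever was included when signing.
const { data } = await axios.post("/get-url");
await axios.put(data.url, file, {
  headers: { "Content-Type": "image/png" },
});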
Please also read the notes sections of the API documentation to make sure you understand the limitations of each method.
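That said, if the original goal was a file-size limit, createPresignedPost() remains the right tool; the catch the question ran into is that the response carries both url and fields, and every field must be submitted along with the file in a multipart form. A sketch under that assumption, with a content-length-range condition enforcing the limit:
// Server side: return BOTH data.url and data.fields to the client.
const params = {
  Bucket: "bucketName",
  Fields: { key, "Content-Type": "image/png" },
  Conditions: [
    ["content-length-range", 0, 1024 * 1024] // reject uploads over 1 MB
  ],
  Expires: 60 // seconds the policy stays valid
};
s3.createPresignedPost(params, (err, data) => {
  if (err) return console.error(err);
  res.json(data); // { url, fields }
});
On the client, append every entry of fields to a FormData, add the file entry last, and POST the whole form to url:
// Client side: the file must come after the policy fields.
const form = new FormData();
Object.entries(data.fields).forEach(([k, v]) => form.append(k, v));
form.append("file", file);
await axios.post(data.url, form);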

How do I transfer data from one method to another in Node.js?

I'm using the Telegram bot API and AWS S3 to read data from a bucket. I need to use the data from the s3 method in the Telegraf method, but I don't know how:
'use strict'
const Telegraf = require('telegraf');
const bot = new Telegraf('TOKEN')
var AWS = require('aws-sdk')
var s3 = new AWS.S3({
  accessKeyId: 'key',
  secretAccessKey: 'secret'
})
var params = {Bucket: 'myBucket', Key: "ipsum.txt"};
var s3Promise = s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else {
    var words = data.Body.toString(); // WHAT I WANT IN THE COMMAND METHOD
    console.log('\n' + words + '\n'); // Returns ipsum.txt as a string on the console
  }
})
bot.command('s', (ctx) => { // Bot command
  s3Promise; // Returns ipsum.txt as a string on the console
  ctx.reply('Check console') // Message in Telegram
  // ctx.reply(<I WANT data.Body.toString() HERE>)
});
const { PORT = 3000 } = process.env
bot.startWebhook('/', null, PORT)
How do I use the data from the s3.getObject method in ctx.reply()?
If you want to send the file as an attachment, you have to use ctx.replyWithDocument. Aside from that, your problem is the classic: How do I return the response from an asynchronous call?
In this particular case you can use s3.getObject(params).promise() in order to avoid the callback API and use it easily inside your bot.command listener.
Using async/await (Node >= 7.6), your code can be written like this:
'use strict';
const Telegraf = require('telegraf');
const bot = new Telegraf('TOKEN');
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
  accessKeyId: 'key',
  secretAccessKey: 'secret'
});
const params = {
  Bucket: 'myBucket',
  Key: 'ipsum.txt'
};

bot.command('s', async ctx => { // Bot command
  try {
    // If you're always sending the same file and it won't change
    // much, you can cache it to avoid the external call every time
    const data = await s3.getObject(params).promise();
    ctx.reply('Check console'); // Message in Telegram
    // This will send the file as an attachment
    ctx.replyWithDocument({
      source: data.Body,
      filename: params.Key
    });
    // or just as text
    ctx.reply(data.Body.toString());
  } catch (e) {
    // S3 failed
    ctx.reply('Oops');
    console.log(e);
  }
});

const { PORT = 3000 } = process.env;
bot.startWebhook('/', null, PORT);
More info on how to work with files can be found in the telegraf docs.
PS: I tested the code and it's fully working.
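As a side note on the caching comment in the code above, a minimal in-memory cache could look like this (a hypothetical helper, not part of the original answer):
// Hypothetical caching helper: fetch the object once, reuse it afterwards.
let cachedBody = null;
async function getIpsum() {
  if (cachedBody === null) {
    const data = await s3.getObject(params).promise();
    cachedBody = data.Body.toString();
  }
  return cachedBody;
}

bot.command('s', async ctx => {
  ctx.reply(await getIpsum());
});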
While I haven't used S3, I do know that AWS services added support for Promises to their implementations to avoid using callbacks. Personally, I much prefer the use of promises as I think they lead to more readable code.
I think the following should handle the issue you're having.
'use strict'
const Telegraf = require('telegraf');
const bot = new Telegraf('TOKEN')
var AWS = require('aws-sdk')
var s3 = new AWS.S3({
  accessKeyId: 'key',
  secretAccessKey: 'secret'
})
var params = {Bucket: 'myBucket', Key: "ipsum.txt"};

bot.command('s', (ctx) => {
  s3.getObject(params).promise()
    .then(data => {
      ctx.reply('Check console');
      ctx.reply(data.Body.toString());
    }, err => console.log(err, err.stack));
})

const { PORT = 3000 } = process.env
bot.startWebhook('/', null, PORT)
As suggested by Luca, I called bot.command inside of s3.getObject and it works!
s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else
    bot.command('s', (ctx) => {
      ctx.reply('Successfully read from S3:\n\n' + data.Body.toString())
    });
})

How to upload file to Digital Ocean Spaces using Javascript

I am looking to use Digital Ocean Spaces (which seems to have an API identical to S3's), and would like to try it by uploading a sample file. I am having lots of difficulty. Here's what I've done so far.
{'hi' : 'world'}
That is the contents of a file hiworld.json that I would like to upload. I understand that I need to create an AWS v4 signature before I can make this request.
var aws4 = require('aws4')
var request = require('request')

var opts = {
  json: true,
  body: "{'hi':'world'}",
  host: '${myspace}.nyc3.digitaloceanspaces.com',
  path: '/hiworld.json'
}
aws4.sign(opts, {accessKeyId: '${SECRET}', secretAccessKey: '${SECRET}'})
Then I send the request
request.put(opts, function(error, response) {
  if (error) {
    console.log(error);
  }
  console.log(response.body);
});
However, when I check my Digital Ocean space, I see that my file was not created. I have noticed that if I change the PUT to a GET and try to access an existing file, I have no issues.
Here's what my headers look like
headers:
{ Host: '${myspace}.nyc3.digitaloceanspaces.com',
'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8',
'Content-Length': 14,
'X-Amz-Date': '20171008T175325Z',
Authorization: 'AWS4-HMAC-SHA256 Credential=${mykey}/20171008/us-east-1//aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=475e691d4ddb81cca28eb0dcdc7c926359797d5e383e7bef70989656822accc0' },
method: 'POST' }
As an alternative, using aws-sdk:
// 1. Importing the SDK
import AWS from 'aws-sdk';

// 2. Configuring the S3 instance for Digital Ocean Spaces
const spacesEndpoint = new AWS.Endpoint(
  `${REGION}.digitaloceanspaces.com`
);
const S3 = new AWS.S3({
  endpoint: spacesEndpoint,
  accessKeyId: ACCESS_KEY_ID,
  secretAccessKey: SECRET_ACCESS_KEY
});

// 3. Using .putObject() to make the PUT request, S3 signs the request
const params = { Body: file.stream, Bucket: BUCKET, Key: file.path };
S3.putObject(params)
  .on('build', request => {
    // The Host header is the bare hostname, without the scheme
    request.httpRequest.headers.Host = `${BUCKET}.${REGION}.digitaloceanspaces.com`;
    // Note: I am assigning the size to the file Stream manually
    request.httpRequest.headers['Content-Length'] = file.size;
    request.httpRequest.headers['Content-Type'] = file.mimetype;
    request.httpRequest.headers['x-amz-acl'] = 'public-read';
  })
  .send((err, data) => {
    if (err) logger(err, err.stack);
    else logger(JSON.stringify(data, null, 2));
  });
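If the stream's length isn't known up front, a sketch using s3.upload() may be simpler, since it streams multipart uploads without needing a Content-Length (same hypothetical BUCKET and file variables as above):
// Sketch: s3.upload() accepts a stream directly and handles chunking.
S3.upload({
  Bucket: BUCKET,
  Key: file.path,
  Body: file.stream,
  ContentType: file.mimetype,
  ACL: 'public-read'
}, (err, data) => {
  if (err) return logger(err, err.stack);
  logger(data.Location); // URL of the uploaded object
});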
var aws4 = require('aws4')
var request = require('request')

var str = {
  'hi': 'world'
}
var c = JSON.stringify(str);

request(aws4.sign({
  'uri': 'https://${space}.nyc3.digitaloceanspaces.com/newworlds.json',
  'method': 'PUT',
  'path': '/newworlds.json',
  'headers': {
    "Cache-Control": "no-cache",
    "Content-Type": "application/x-www-form-urlencoded",
    "accept": "*/*",
    "host": "${space}.nyc3.digitaloceanspaces.com",
    "accept-encoding": "gzip, deflate",
    "content-length": c.length
  },
  body: c
}, {accessKeyId: '${secret}', secretAccessKey: '${secret}'}), function(err, res) {
  if (err) {
    console.log(err);
  } else {
    console.log(res);
  }
})
This gave me a successful PUT.
It can be done using multer and aws sdk. It worked for me.
const aws = require('aws-sdk');
const multer = require('multer');
const express = require('express');
const multerS3 = require('multer-s3');

const app = express();
const spacesEndpoint = new aws.Endpoint('sgp1.digitaloceanspaces.com');
const spaces = new aws.S3({
  endpoint: spacesEndpoint,
  accessKeyId: 'your_access_key_from_API',
  secretAccessKey: 'your_secret_key'
});

const upload = multer({
  storage: multerS3({
    s3: spaces,
    bucket: 'bucket-name',
    acl: 'public-read',
    key: function (request, file, cb) {
      console.log(file);
      cb(null, file.originalname);
    }
  })
}).array('upload', 1);
Now you can expose this through a route like this:
app.post('/upload', function (request, response, next) {
  upload(request, response, function (error) {
    if (error) {
      console.log(error);
      return response.sendStatus(500);
    }
    console.log('File uploaded successfully.');
    response.sendStatus(200);
  });
});

app.listen(3000); // don't forget to start the server
The HTML would look like this:
<form method="post" enctype="multipart/form-data" action="/upload">
  <label for="file">Upload a file</label>
  <input type="file" name="upload">
  <input type="submit" class="button">
</form>

How do I delete an object on AWS S3 using JavaScript?

I want to delete a file from Amazon S3 using JavaScript. I have already uploaded the file using JavaScript. How can I delete it?
You can use the deleteObject method from the S3 SDK:
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./credentials-ehl.json');

var s3 = new AWS.S3();
var params = { Bucket: 'your bucket', Key: 'your object' };

s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // error
  else console.log('deleted');          // deleted
});
Be aware that S3 never returns the object once it has been deleted.
You have to check for it before or after the call, with getObject, headObject, waitFor, etc.
You can use a construction like this:
var params = {
  Bucket: 'yourBucketName',
  Key: 'fileName'
  /*
    where the value for 'Key' equals 'pathName1/pathName2/.../pathNameN/fileName.ext'
    - the full path name to your file, without '/' at the beginning
  */
};

s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
And don't forget to wrap it in a Promise.
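With the v2 SDK that wrapping is already built in via .promise(); a minimal sketch:
// .promise() turns the AWS request into a native Promise.
async function removeObject(bucket, key) {
  await s3.deleteObject({ Bucket: bucket, Key: key }).promise();
}

removeObject('yourBucketName', 'fileName').catch(console.error);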
Before deleting the file you have to check two things: 1) whether the file is actually in the bucket, because if it is not, calling the deleteObject API doesn't throw any error; and 2) the CORS configuration of the bucket. Using the headObject API gives you the file's status in the bucket.
AWS.config.update({
  accessKeyId: "*****",
  secretAccessKey: "****",
  region: region,
  version: "****"
});

const s3 = new AWS.S3();
const params = {
  Bucket: s3BucketName,
  Key: "filename" // if in any sub folder -> path/of/the/folder.ext
}

// (this must run inside an async function for await to work)
try {
  await s3.headObject(params).promise()
  console.log("File Found in S3")
  try {
    await s3.deleteObject(params).promise()
    console.log("file deleted Successfully")
  } catch (err) {
    console.log("ERROR in file Deleting : " + JSON.stringify(err))
  }
} catch (err) {
  console.log("File not Found ERROR : " + err.code)
}
As params is constant, it's best declared with const. If the file is not found in S3, headObject throws a NotFound error.
If you want to apply any operations on the bucket from the browser, you have to change the CORS configuration of the respective bucket in AWS. Go to Bucket -> Permissions -> CORS Configuration and add this code:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
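Note that the newer S3 console expects CORS rules as JSON rather than XML; to the best of my knowledge, the equivalent configuration would be:
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["PUT", "POST", "DELETE", "GET", "HEAD"],
    "AllowedHeaders": ["*"]
  }
]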
For more information about CORS configuration: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
You can use the deleteObjects API to delete multiple objects at once instead of calling the API once per key. That saves time and network bandwidth.
You can do the following:
var deleteParam = {
  Bucket: 'bucket-name',
  Delete: {
    Objects: [
      {Key: 'a.txt'},
      {Key: 'b.txt'},
      {Key: 'c.txt'}
    ]
  }
};

s3.deleteObjects(deleteParam, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log('delete', data);
});
For reference see - https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObjects-property
You can follow this GitHub gist link https://gist.github.com/jeonghwan-kim/9597478.
delete-aws-s3.js:
var aws = require('aws-sdk');
var BUCKET = 'node-sdk-sample-7271';

aws.config.loadFromPath(require('path').join(__dirname, './aws-config.json'));
var s3 = new aws.S3();

var params = {
  Bucket: BUCKET,
  Delete: { // required
    Objects: [ // required
      {
        Key: 'foo.jpg' // required
      },
      {
        Key: 'sample-image--10.jpg'
      }
    ],
  },
};

s3.deleteObjects(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Very straightforward.
First, create an instance of S3 and configure it with credentials:
const S3 = require('aws-sdk').S3;
const s3 = new S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION
});
Afterward, follow the docs:
var params = {
  Bucket: "ExampleBucket",
  Key: "HappyFace.jpg"
};

s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  /*
    data = {
    }
  */
});

Node.js 301 PermanentRedirect when connecting to AWS-SDK

Good afternoon,
I'm trying to set up a connection to my AWS product API; however, I keep getting a 301 PermanentRedirect error as follows:
{ [PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.]
  message: 'The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.',
  code: 'PermanentRedirect',
  name: 'PermanentRedirect',
  statusCode: 301,
  retryable: false }
The code I am using to connect to the API is as follows:
var aws = require('aws-sdk');

// Setting up the AWS API
aws.config.update({
  accessKeyId: 'KEY',
  secretAccessKey: 'SECRET',
  region: 'eu-west-1'
})

var s3 = new aws.S3();
s3.createBucket({Bucket: 'myBucket'}, function() {
  var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};
  s3.putObject(params, function(err, data) {
    if (err)
      console.log(err)
    else
      console.log("Successfully uploaded data to myBucket/myKey");
  });
});
If I try using different regions, like us-west-1, I just get the same error.
What am I doing wrong?
Thank you very much in advance!
I have fixed this issue:
You have to make sure that you have already created a bucket with the same name; in this case, the bucket's name would be 'myBucket'.
s3.createBucket({Bucket: 'myBucket'}, function() {
  var params = {Bucket: 'myBucket', Key: 'myKey', Body: 'Hello!'};
Once you have created the bucket, go to its properties and see what region it is using; add that region here:
aws.config.update({
  accessKeyId: 'KEY',
  secretAccessKey: 'SECRET',
  region: 'eu-west-1'
})
Now it should work! Best wishes
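If you're unsure which region the bucket lives in, a quick sketch with getBucketLocation can tell you (assuming the same s3 client as above):
// Ask S3 where the bucket actually lives; an empty LocationConstraint
// means the classic us-east-1 region.
s3.getBucketLocation({Bucket: 'myBucket'}, function(err, data) {
  if (err) console.log(err);
  else console.log(data.LocationConstraint || 'us-east-1');
});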
I've just come across this issue and I believe the example at http://aws.amazon.com/sdkfornodejs/ is incorrect.
The issue with the demo code is that bucket names must be lowercase - http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Had the demo actually output the err argument in the s3.createBucket callback, this would have been immediately obvious.
Also, bucket names need to be globally unique across S3, not just within your account.
s3.createBucket({Bucket: 'mylowercaseuniquelynamedtestbucket'}, function(err) {
  if (err) {
    console.info(err);
  } else {
    var params = {Bucket: 'mylowercaseuniquelynamedtestbucket', Key: 'testKey', Body: 'Hello!'};
    s3.putObject(params, function(err, data) {
      if (err)
        console.log(err)
      else
        console.log("Successfully uploaded data to testBucket/testKey");
    });
  }
})
Was stuck on this for a while...
Neither setting the region name nor leaving it blank in my configuration file fixed the issue. I came across a gist discussing the solution, and it was quite simple:
Given:
var AWS = require('aws-sdk');
Prior to instantiating your S3 object, do the following:
AWS.config.update({"region": "us-west-2"}) // replace with your region
I was then able to call the following with no problem:
var storage = new AWS.S3();
var params = {
  Bucket: config.aws.s3_bucket,
  Key: name,
  ACL: 'authenticated-read',
  Body: data
}
storage.putObject(params, function(storage_err, data){...})
