I have an application using Node and the AWS-SDK package. I am copying objects from one bucket to another using the copyObject method. I'm getting an error that says SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I've been able to successfully run the code on my local machine and it copies the files from one bucket to another. The error occurs on our AWS server, which I deployed the application to. The full error is:
{ [SignatureDoesNotMatch: The request signature we calculated does not
match the signature you provided. Check your key and signing method.]
message: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.',
code: 'SignatureDoesNotMatch',
region: null,
time: Mon Jul 11 2016 12:11:36 GMT-0400 (EDT),
requestId: <requestId>,
extendedRequestId: <extendedRequestId>,
cfId: undefined,
statusCode: 403,
retryable: false,
retryDelay: 66.48076744750142 }
Also, I'm able to perform the listObjects command. The error is only happening on copyObject.
So far, I've tried:
setting correctClockSkew to true
checking the server's time (it matches my local machine)
checking the key/secret (they're loaded from a config file and work locally)
checking the file names (there are no strange characters; only alphanumerics, '.', '-', and '/')
Here is the code causing the problem:
AWS.config.update({
  accessKeyId: <accessKeyId>,
  secretAccessKey: <secretAccessKey>,
  correctClockSkew: true
});
var s3 = new AWS.S3();
var params = {
  Bucket: <bucket>,
  Prefix: <prefix>
};
s3.listObjects(params, function(err, data) {
  if (data.Contents.length) {
    async.each(data.Contents, function(file, cb) {
      var file_name = file.Key.substr(file.Key.indexOf('/')+1);
      var copy_params = {
        Bucket: <bucket2>,
        CopySource: <bucket> + '/' + file.Key,
        Key: file_name,
        ACL: 'public-read'
      };
      s3.copyObject(copy_params, function(copyErr, copyData){
        if (copyErr) {
          console.log('Error:', copyErr);
        }
        else {
          cb();
        }
      });
    }, function(err){
      ...
    });
  } else {
    ...
  }
});
Not sure if you've found a solution to this or not, but this was an issue raised on GitHub, and the solution seems to be to simply URL-encode your CopySource parameter with encodeURI():
https://github.com/aws/aws-sdk-js/issues/1949
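Applied to the copy loop above, the change would look something like this (a minimal sketch reusing the question's placeholders):
var copy_params = {
  Bucket: <bucket2>,
  // URL-encode CopySource so keys containing special characters
  // don't invalidate the request signature
  CopySource: encodeURI(<bucket> + '/' + file.Key),
  Key: file_name,
  ACL: 'public-read'
};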
I'm not sure if this is the best place to post this question; please redirect me if it isn't, and I'll remove the post and post it in the correct location.
I know that Amazon S3 has recently changed the URL format for accessing files.
It used to be something like http://s3.amazonaws.com/<bucket> or http://s3.<region>.amazonaws.com/<bucket>
but it has changed to http://<bucket>.s3-<aws-region>.amazonaws.com or http://<bucket>.s3.amazonaws.com, according to this documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
Supposedly http://<bucket>.s3.amazonaws.com would not be reachable after March 20, 2019, BUT when I use the aws-sdk in JavaScript to do a file upload with skipper-better-s3, the URL AWS returns is http://<bucket>.s3.amazonaws.com/<Key>.
If that URL is not supposed to be reachable, why would AWS return such a URL? (I can still access the file using the URL.)
If that URL is not supposed to be reachable in the near future, am I supposed to add the region myself or modify the URL myself, instead of using the URL returned by AWS?
Or might it be a problem with my code?
Below is my code for the upload
const awsOptions = { // these fields are different because this uses skipper
adapter: require('skipper-better-s3'),
key: aws_access_key,
secret: aws_secret_key,
saveAs: PATH,
bucket: BUCKET,
s3params: {
ACL: 'public-read'
},
}
const fieldName = req._fileparser.upstreams[0].fieldName;
req.file(fieldName).upload(awsOptions, (err, filesUploaded) => {
if (err) reject(err);
const filesUploadedF = filesUploaded[0]; // F = first file
const url = filesUploadedF.extra.Location; // image url -> https://<bucket>.s3.amazonaws.com/<Key>
console.log(url, 'urlurlurl');
});
filesUploadedF would return
UploadedFileMetadata {
fd: '<Key>',
size: 4337,
type: 'image/png',
filename: 'filename.png',
status: 'bufferingOrWriting',
field: 'image',
extra:
{ ETag: '111111111111111111111',
Location: 'https://<bucket>.s3.amazonaws.com/<Key>',
key: '<key>',
Key: '<Key>',
Bucket: '<Bucket>',
md5: '32890jf32890jf0892j3f',
fd: '<Key>',
ContentType: 'image/png' }
}
The documentation you linked to about the http://<bucket>.s3.amazonaws.com naming style says this:
Note
Buckets created in Regions launched after March 20, 2019 are not reachable via the https://bucket.s3.amazonaws.com naming scheme.
The wording there is important. They're only talking about new regions brought online after March 20, 2019.
To date, that's only buckets created in the Middle East (Bahrain) and Asia Pacific (Hong Kong) Regions.
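If you want to be defensive and build the regional, virtual-hosted-style URL yourself instead of relying on the Location value returned by the SDK, a minimal sketch would be something like this (the bucket, region, and key here are hypothetical placeholders):
// Hypothetical values; substitute your own bucket, region, and object key
const bucket = 'my-bucket';
const region = 'ap-east-1';
const key = 'uploads/image.png';

// Virtual-hosted-style URL with an explicit region; this form works for
// buckets in every region, including Regions launched after March 20, 2019.
// encodeURI keeps the '/' separators in the key intact.
const url = `https://${bucket}.s3.${region}.amazonaws.com/${encodeURI(key)}`;
console.log(url); // https://my-bucket.s3.ap-east-1.amazonaws.com/uploads/image.png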
My goal is to make sure that all videos being uploaded to my application are in the right format and that they are encoded to fit a minimum size.
I did this before using ffmpeg; however, I have recently moved my application to an Amazon server.
This gives me the option to use Amazon Elastic Transcoder.
However, by the looks of the interface, I am unable to set up automatic jobs that look for video or audio files and convert them.
For this I have been looking at their SDK / API references, but I am not quite sure how to use them in my application.
My question is: has anyone successfully started transcoding jobs in Node.js, and do you know how to convert videos from one format to another and/or lower the bitrate? I would really appreciate it if someone could point me in the right direction with some examples of how this might work.
However, by the looks of the interface, I am unable to set up automatic jobs that look for video or audio files and convert them.
The Node.js SDK doesn't support this directly, but you can do the following: if you store the videos in S3 (if not, move them there, because Elastic Transcoder works on S3 objects), you can run a Lambda function triggered by S3 putObject events:
http://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
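As a rough sketch, a Lambda handler along these lines could submit each uploaded video to an Elastic Transcoder pipeline; the region, pipeline ID, and preset ID below are placeholders you would replace with your own:
// Hypothetical Lambda handler triggered by S3 ObjectCreated events
var AWS = require('aws-sdk');
var transcoder = new AWS.ElasticTranscoder({region: 'us-east-1'}); // assumed region

exports.handler = function(event, context, callback) {
  // S3 event keys are URL-encoded, so decode before using
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  var params = {
    PipelineId: '1234567890123-abcdef', // placeholder pipeline ID
    Input: {Key: key},
    Outputs: [
      // assumed to be the generic 720p system preset; pick your own
      {Key: key + '-720p.mp4', PresetId: '1351620000001-000010'}
    ]
  };

  transcoder.createJob(params, function(err, data) {
    if (err) return callback(err);
    callback(null, 'Created transcoder job ' + data.Job.Id);
  });
};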
My question is has anyone successfully started transcoding jobs in
node.js and know how to convert videos from one format to another and
/ or down set the bitrate? I would really appreciate it if someone
could point me in the right direction with some examples of how this
might work.
We used AWS for video transcoding with Node without any problem. It was time consuming to work out every parameter, but I hope these few lines can help you:
const aws = require('aws-sdk');
aws.config.update({
accessKeyId: config.AWS.accessKeyId,
secretAccessKey: config.AWS.secretAccessKey,
region: config.AWS.region
});
var transcoder = new aws.ElasticTranscoder();
let transcodeVideo = function (key, callback) {
// presets: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/system-presets.html
let params = {
PipelineId: config.AWS.transcode.video.pipelineId, // specifies output/input buckets in S3
Input: {
Key: key,
},
OutputKeyPrefix: config.AWS.transcode.video.outputKeyPrefix,
Outputs: config.AWS.transcode.video.presets.map(p => {
return {Key: `${key}${p.suffix}`, PresetId: p.presetId};
})
};
params.Outputs[0].ThumbnailPattern = `${key}-{count}`;
transcoder.createJob(params, function (err, data) {
if (!!err) {
logger.err(err);
return;
}
let jobId = data.Job.Id;
logger.info('AWS transcoder job created (' + jobId + ')');
transcoder.waitFor('jobComplete', {Id: jobId}, callback);
});
};
An example configuration file:
let config = {
  AWS: { // nested under AWS to match the config.AWS.* references used above
    accessKeyId: '',
    secretAccessKey: '',
    region: '',
    videoBucket: 'blabla-media',
    transcode: {
      video: {
        pipelineId: '1450364128039-xcv57g',
        outputKeyPrefix: 'transcoded/', // put the video into the transcoded folder
        presets: [ // Comes from the AWS console
          {presetId: '1351620000001-000040', suffix: '_360'},
          {presetId: '1351620000001-000020', suffix: '_480'}
        ]
      }
    }
  }
};
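With that configuration in place, a call to the helper might look like this (the key is a hypothetical object in the pipeline's input bucket):
transcodeVideo('uploads/my-video.mp4', function(err, data) {
  if (err) {
    return console.error('Transcoding failed:', err);
  }
  // the callback fires once the SDK's waitFor polling sees the job finish
  console.log('Transcoding finished with status:', data.Job.Status);
});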
If you want to generate a master playlist, you can do it like this. Note that the ".ts" segment files are not playable on their own in HLS players; you need to generate the ".m3u8" playlist file:
async function transcodeVideo(mp4Location, outputLocation) {
let params = {
PipelineId: elasticTranscoderPipelineId,
Input: {
Key: mp4Location,
AspectRatio: 'auto',
FrameRate: 'auto',
Resolution: 'auto',
Container: 'auto',
Interlaced: 'auto'
},
OutputKeyPrefix: outputLocation + "/",
Outputs: [
{
Key: "hls2000",
PresetId: "1351620000001-200010",
SegmentDuration: "10"
},
{
Key: "hls1500",
PresetId: "1351620000001-200020",
SegmentDuration: "10"
}
],
Playlists: [
{
Format: 'HLSv3',
Name: 'hls',
OutputKeys: [
"hls2000",
"hls1500"
]
},
],
};
let jobData = await createJob(params);
return jobData.Job.Id;
}
async function createJob(params) {
return new Promise((resolve, reject) => {
transcoder.createJob(params, function (err, data) {
if(err) return reject("err: " + err);
if(data) {
return resolve(data);
}
});
});
}
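For completeness, the snippet assumes transcoder and elasticTranscoderPipelineId are defined elsewhere; a minimal setup and call might look like this (the region, pipeline ID, and keys are placeholders):
const AWS = require('aws-sdk');
const transcoder = new AWS.ElasticTranscoder({region: 'us-east-1'}); // assumed region
const elasticTranscoderPipelineId = '1234567890123-abcdef'; // placeholder

// Transcode a hypothetical source object into HLS renditions under videos/source-hls/
transcodeVideo('videos/source.mp4', 'videos/source-hls')
  .then(jobId => console.log('Created transcoder job', jobId))
  .catch(err => console.error(err));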
I'm trying to create a user in an AWS User Pool from an AWS Lambda function.
I tried this script, taken from what seems to be the official JavaScript SDK documentation for AWS, but I can't get it working: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#adminCreateUser-property
I keep getting this error: TypeError: cognitoidentityserviceprovider.adminCreateUser is not a function
var cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider({apiVersion: '2016-04-18'});
var params = {
UserPoolId: 'eu-west-1_XXXXXXXX', /* required */
Username: 'me#example.com', /* required */
DesiredDeliveryMediums: [
'EMAIL'
],
ForceAliasCreation: false,
MessageAction: 'SUPPRESS',
TemporaryPassword: 'tempPassword1',
UserAttributes: [
{
Name: 'email', /* required */
Value: 'me#example.com'
},
{
Name: 'name', /* required */
Value: 'Me'
},
{
Name: 'last_name', /* required */
Value: 'lastme'
}
/* more items */
]
};
cognitoidentityserviceprovider.adminCreateUser(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
callback(null, data);
});
For adminCreateUser, you basically require the AWS SDK, configure credentials, instantiate the client, and call the specific operation:
var aws = require('aws-sdk');
aws.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});
var CognitoIdentityServiceProvider = aws.CognitoIdentityServiceProvider;
var client = new CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });
//your code goes here
Note that there might be different ways in which you can configure the AWS credentials for the call; you do need credentials, as this is an authenticated operation. Other admin operations are similar; you just need to pass the appropriate parameters as JSON in the call.
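To make that concrete, the call itself could look something like this (the pool ID and attribute values are placeholders, mirroring the params from the question):
var params = {
  UserPoolId: 'eu-west-1_XXXXXXXX', // placeholder pool ID
  Username: 'someone@example.com',
  DesiredDeliveryMediums: ['EMAIL'],
  TemporaryPassword: 'tempPassword1',
  UserAttributes: [
    {Name: 'email', Value: 'someone@example.com'}
  ]
};

client.adminCreateUser(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log('Created user:', data.User.Username);
});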
According to this, the AWS SDK for JavaScript version 2.7.25, which contains the adminCreateUser operation, should already be available in the Lambda runtime:
http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
Try bundling the latest aws-sdk into your uploaded package instead of relying on the one made available by default.
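A quick way to check which SDK build your Lambda actually loads is to log the version constant from the handler:
// Logs the aws-sdk version at runtime; adminCreateUser needs 2.7.25 or later
var AWS = require('aws-sdk');
console.log('aws-sdk version:', AWS.VERSION);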
Source: AWS Cognito adminCreateUser from Lambda
I am using a node s3 client (https://github.com/andrewrk/node-s3-client#clientdownloaddirparams) to sync an entire directory from S3 to a local directory.
As per the documentation, my code is as follows:
var s3 = require('s3');
var client = s3.createClient({
s3Options: {
accessKeyId: Config.accessKeyId,
secretAccessKey: Config.secretAccessKey
}
});
var downloader = client.downloadDir({
localDir: 'images/assets',
deleteRemoved: true,
s3Params: {
Bucket: Config.bucket,
Prefix: Config.bucketFolder
}
});
downloader.on('error', function(err) {
console.error("unable to download: ", err);
});
downloader.on('progress', function() {
console.log("progress", downloader.progressMd5Amount, downloader.progressAmount, downloader.progressTotal);
});
downloader.on('end', function(data) {
console.log("done downloading", data);
});
This begins syncing and the folder begins downloading, but eventually returns this:
progress 0 0 0
...
progress 1740297 225583 5150000
unable to download: { Error: EISDIR: illegal operation on a directory, open 'images/assets'
at Error (native)
errno: -21,
code: 'EISDIR',
syscall: 'open',
path: 'images/assets' }
The directory does indeed exist. I've tried moving the directory, changing the path, etc., but nothing seems to do the trick. I have researched this error and found that it occurs when you try to open a path as a file but that path is actually a directory. I'm not sure why this s3 client is trying to open the directory as a file. Any help or advice would be awesome. Thanks!
I just determined that download speeds were causing this issue. Unfortunately, I was on a network with 0.5 up and down. I just switched over to 25/10 and it's working fine.
Remember that in S3, you can create a directory that has the same name as a file. Based on the error you're getting, I would say that in S3 you have a file named images and a folder named images. This would be illegal on the file system but not in S3.
Use the getS3Params option to resolve this:
getS3Params: function getS3Params(localFile, s3Object, callback) {
  // Keys with no extension are treated as folder placeholders and skipped
  if (path.extname(localFile) === '') { callback(null, null); } // null skips the object
  else { callback(null, {}); } // an empty object downloads it with default params
}
https://github.com/andrewrk/node-s3-client/issues/80
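Plugged into the downloader from the question, that option would look roughly like this (a sketch based on the linked issue; it skips extension-less S3 keys, which are usually folder placeholders):
var path = require('path');

var downloader = client.downloadDir({
  localDir: 'images/assets',
  deleteRemoved: true,
  s3Params: {
    Bucket: Config.bucket,
    Prefix: Config.bucketFolder
  },
  getS3Params: function(localFile, s3Object, callback) {
    if (path.extname(localFile) === '') {
      callback(null, null); // null skips this object
    } else {
      callback(null, {});   // {} downloads it with default params
    }
  }
});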
I'm trying to upload files to my Amazon S3 bucket. S3 and Amazon are set up.
This is the error message from Amazon:
Conflicting query string parameters: acl, policy
The policy and signature are encoded with the crypto module for Node.js:
var crypto=Npm.require("crypto");
I'm trying to build the POST request with the Meteor HTTP.post method. This could be wrong as well.
var BucketName="mybucket";
var AWSAccessKeyId="MY_ACCES_KEY";
var AWSSecretKey="MY_SECRET_KEY";
//create policy
var POLICY_JSON={
"expiration": "2009-01-01T00:00:00Z",
"conditions": [
{"bucket": BucketName},
["starts-with", "$key", "uploads/"],
{"acl": 'public-read'},
["starts-with", "$Content-Type", ""],
["content-length-range", 0, 1048576],
]
}
var policyBase64=encodePolicy(POLICY_JSON);
//create signature
var SIGNATURE = encodeSignature(policyBase64,AWSSecretKey);
console.log('signature: ', SIGNATURE);
This is the POST request I'm using with Meteor:
//Send data----------
var options={
"params":{
"key":file.name,
'AWSAccessKeyId':AWSAccessKeyId,
'acl':'public-read',
'policy':policyBase64,
'signature':SIGNATURE,
'Content-Type':file.type,
'file':file,
"enctype":"multipart/form-data",
}
}
HTTP.call('POST','https://'+BucketName+'.s3.amazonaws.com/',options,function(error,result){
if(error){
console.log("and HTTP ERROR:",error);
}else{
console.log("result:",result);
}
});
and here I'm encoding the policy and the signature:
encodePolicy=function(jsonPolicy){
// stringify the policy, store it in a NodeJS Buffer object
var buffer=new Buffer(JSON.stringify(jsonPolicy));
// convert it to base64
var policy=buffer.toString("base64");
// replace "/" and "+" so that it is URL-safe.
return policy.replace(/\//g,"_").replace(/\+/g,"-");
}
encodeSignature=function(policy,secret){
var hmac=crypto.createHmac("sha256",secret);
hmac.update(policy);
return hmac.digest("hex");
}
I can't figure out what's going on. There might already be a problem with the POST method or the encoding, because I don't know these methods too well. If someone could point me in the right direction on how to encode or send the POST request to Amazon S3 properly, it would help a lot.
(I'd rather not use filepicker.io, because I don't want to force the client to sign up there as well.)
Thanks in advance!!!
For direct uploads to S3 you can use the Slingshot package:
meteor add edgee:slingshot
On the server side declare your directive:
Slingshot.createDirective("myFileUploads", Slingshot.S3Storage, {
bucket: "mybucket",
allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
acl: "public-read",
authorize: function () {
//You can add user restrictions here
return true;
},
key: function (file) {
return file.name;
}
});
This directive will generate the policy and signature automatically.
And then just upload it like this:
var uploader = new Slingshot.Upload("myFileUploads");
uploader.send(document.getElementById('input').files[0], function (error, url) {
Meteor.users.update(Meteor.userId(), {$push: {"profile.files": url}});
});
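One caveat, if I remember the Slingshot README correctly: Slingshot.S3Storage picks up the AWS credentials from Meteor.settings, so you would run the app with meteor --settings settings.json, where settings.json contains something like this (placeholder values):
{
  "AWSAccessKeyId": "<accessKeyId>",
  "AWSSecretAccessKey": "<secretAccessKey>"
}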
Why don't you use the aws-sdk package? It packs all the needed methods for you. For example, here's the simple call for adding a file to a bucket:
s3.putObject({
Bucket: ...,
ACL: ...,
Key: ...,
Metadata: ...,
ContentType: ...,
Body: ...,
}, function(err, data) {
...
});
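For instance, a filled-in call might look like this (the bucket name and key are placeholders, and file / fileData stand in for whatever Buffer, string, or stream you are uploading):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.putObject({
  Bucket: 'my-bucket',           // placeholder bucket
  ACL: 'public-read',
  Key: 'uploads/' + file.name,   // placeholder key
  Metadata: {uploadedBy: 'example'},
  ContentType: file.type,
  Body: fileData                 // Buffer, string, or stream
}, function(err, data) {
  if (err) console.error(err);
  else console.log('Uploaded, ETag:', data.ETag);
});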
Check out the S3 Meteor package. The README has a very comprehensive walkthrough of how to get started.
The first thing is to add the package for S3 file upload.
For installation, add the AWS SDK smart package:
$ meteor add peerlibrary:aws-sdk
1. Create a directive, upload.js, and paste in this code:
angular.module('techno')
.directive("fileupload", [function () {
return {
scope: {
fileupload: "="
},
link: function(scope,element, attributes){
$('.button-collapse').sideNav();
element.bind("change", function (event) {
scope.$apply(function () {
scope.fileupload = event.target.files[0];
});
})
}};
}]);
2. Get your access keys and paste them into your fileUpload.js file:
AWS.config.update({
accessKeyId: '<accessKeyId>',
secretAccessKey: '<secretAccessKey>'
});
AWS.config.region = 'us-east-1';
let bucket = new AWS.S3();
3. Now put this upload code in your fileUpload.js directive:
vm.upload = (Obj) =>{
vm.loadingButton = true;
let name = Obj.name;
let params = {
Bucket: 'technodheeraj',
Key: name,
ContentType: 'application/pdf',
Body: Obj,
ServerSideEncryption: 'AES256'
};
bucket.putObject(params, (err, data) => {
if (err) {
console.log('---err------->', err);
}
else {
vm.fileObject = {
userId: Meteor.userId(),
eventId: id,
fileName: name,
fileSize: Obj.size,
};
vm.call("saveFile", vm.fileObject, (error, result) => {
if (!error){
console.log('File saved successfully');
}
})
}
})
};
4. Now, in the “saveFile” Meteor method, paste this code:
Meteor.methods({
  saveFile: function(file) {
    if (file) {
      return Files.insert(file);
    }
  }
});
5. In your HTML, paste this code:
<input type="file" name="file" fileupload="file">
<button type="button" class="btn btn-info " ng-click="vm.upload(file)"> Upload File</button>