Upload to S3 with knox and node.js (signature doesn't match) - javascript

I've been trying for many days now to upload a file (message.txt) to AWS S3 using knox and node.js.
I keep getting a "signature doesn't match" error.
My code in node.js (the upload wasn't working, so I'm just trying a GET for now):
var client = knox.createClient({
    key: 'myAWSkey'
  , secret: 'mySecretKey'
  , bucket: 'mybucket'
  , endpoint: 'mybucket.s3-eu-west-1.amazonaws.com'
});

client.get('/').on('response', function(res){
  console.log(res.statusCode);
  console.log(res.headers);
  res.setEncoding('utf8');
  res.on('data', function(chunk){
    console.log(chunk);
  });
}).end();
I also tried comparing the test signature from Amazon with several different methods, like this one: the HTML and Python versions.
Nothing has worked for me; I'm probably a bit lost in the process...
If someone could give me a broad outline to guide me and/or a script that correctly generates a signature in JavaScript/Node.js, I would be very grateful.
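From what I understand, the signature knox computes is the legacy S3 one (Signature Version 2): an HMAC-SHA1 over a canonical string-to-sign, base64-encoded. A rough sketch of that calculation (the bucket name, date, and values are placeholders, and I may be misreading the spec):
var crypto = require('crypto');

var secret = 'mySecretKey';
var date = new Date().toUTCString();

// StringToSign for a GET on the bucket root (no Content-MD5 / Content-Type)
var stringToSign = [
  'GET',        // HTTP verb
  '',           // Content-MD5
  '',           // Content-Type
  date,         // Date header sent with the request
  '/mybucket/'  // CanonicalizedResource: "/" + bucket + key
].join('\n');

var signature = crypto.createHmac('sha1', secret)
  .update(stringToSign, 'utf8')
  .digest('base64');

// Sent as: Authorization: AWS myAWSkey:<signature>
console.log(signature);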

You might want to try the AwsSum library. It's actively maintained and also comes with a load of examples and another repo with more fully featured scripts.
https://github.com/appsattic/node-awssum/
And for your needs, there is an example upload script in the scripts repo (separate GitHub project):
https://github.com/appsattic/node-awssum-scripts/blob/master/bin/amazon-s3-upload.js
Let me know if you need any help or if you get on ok. Disclaimer: I'm the author of AwsSum. :)

I just struggled with this issue for a few days. Assuming you're on Windows, it seems like it's an issue on Knox's end. Apparently the problem has been solved, but the solution has not yet been pulled into the main project.
See this thread: https://github.com/LearnBoost/knox/issues/56
My workaround was to just remove the knox library and clone this repository into my node_modules folder: https://github.com/domenic/knox.git
Hope that helps!
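Once the patched knox is in place, a minimal upload of your message.txt should look roughly like this (an untested sketch, reusing the client config from your question):
client.putFile('message.txt', '/message.txt', function(err, res){
  if (err) return console.error(err);
  console.log(res.statusCode); // 200 on success
  res.resume();                // drain the response
});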

For Node.js, there is a package that helps generate the signature. It is available on the npm registry under the name aws4.
See: https://www.npmjs.com/package/aws4
To download and install it, use the following command:
npm i aws4
You can also add it to your package.json file:
{
  "dependencies": {
    "aws4": "^1.11.0",
    "https": "^1.0.0",
    "crypto": "^1.0.1"
  }
}
The following parameters are used to generate the signature:
host: host of the service, mandatory
path: path of the file being uploaded, mandatory
service: service name, e.g. s3
region: region name, e.g. us-east-1
method: HTTP method, e.g. GET, PUT
accessKeyId: access key ID for the service, mandatory
secretAccessKey: secret for the access key ID, mandatory
sessionToken: temporary session token, optional
Use the code below to upload a file:
var https = require('https');
var aws4 = require('aws4');
var crypto = require('crypto');
var fs = require('fs');

var fileBuffer = fs.readFileSync('1.jpg'); // local file that needs to be uploaded
var hashSum = crypto.createHash('sha256');
hashSum.update(fileBuffer);
var hex = hashSum.digest('hex'); // SHA-256 of the file

var opts = aws4.sign({
  host: '<host name for the s3 service>',
  path: '<bucket and file path in s3>',
  service: 's3',
  region: 'us-east-1',
  method: 'PUT',
  headers: {
    'X-Amz-Content-Sha256': hex
  },
  body: undefined
}, {
  accessKeyId: '<access key>',
  secretAccessKey: '<secret key>',
  sessionToken: '<session token>'
});

opts.path = '<complete path: https://host+bucket+filepath>';
opts.headers['Content-Type'] = 'image/jpeg'; // content type of the file
opts.headers['User-Agent'] = 'Custom Agent - v0.0.1'; // agent name, optional
opts.headers['Agent-Token'] = '47a8e1a0-87df-40a1-a021-f9010e3f6690'; // agent unique token, optional
opts.headers['Content-Length'] = fileBuffer.length; // content length of the file being uploaded

console.log(opts); // prints the generated signature

var req = https.request(opts, function(res) {
  console.log('STATUS: ' + res.statusCode);
  console.log('HEADERS: ' + JSON.stringify(res.headers));
  res.setEncoding('utf8');
  res.on('data', function (chunk) {
    console.log('BODY: ' + chunk);
  });
});

req.on('error', function(e) {
  console.log('problem with request: ' + e.message);
});

req.write(fileBuffer);
req.end();
Note: the SHA-256 hash is generated manually and passed in as a header before signing, and body is set to undefined in the aws4.sign() call. This is important when uploading a file as binary data, which is why the SHA-256 is generated and set before aws4.sign() is called.
The same API can be used for other calls as well, e.g. a GET call to download a file.
The sessionToken is optional; it is only needed when a temporary session token has been generated for accessing the S3 service.
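For example, a signed GET to download an object might look roughly like this (a minimal sketch; the host, path, and local file name are placeholders):
var https = require('https');
var aws4 = require('aws4');
var fs = require('fs');

var opts = aws4.sign({
  host: '<host name for the s3 service>',
  path: '<bucket and file path in s3>',
  service: 's3',
  region: 'us-east-1',
  method: 'GET'
}, {
  accessKeyId: '<access key>',
  secretAccessKey: '<secret key>'
});

https.request(opts, function(res) {
  console.log('STATUS: ' + res.statusCode);
  res.pipe(fs.createWriteStream('downloaded.jpg')); // save the object locally
}).end();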

Related

How to read a zip file from an S3 public bucket without an AWS key and secret

I have a use case where I have to read a ZIP file and pass it to the creation of a Lambda as a template.
Now I want to read the ZIP file from a public S3 bucket. How can I read the file from the public bucket?
The S3 ZIP file I am reading is https://lambda-template-code.s3.amazonaws.com/LambdaTemplate.zip
const zipContents = 'https://lambda-template-code.s3.amazonaws.com/LambdaTemplate.zip';

var params = {
  Code: {
    // here below I have to pass the zip file, reading it from the S3 public bucket
    ZipFile: zipContents,
  },
  FunctionName: 'testFunction', /* required */
  Role: 'arn:aws:iam::149727569662:role/ROLE', /* required */
  Description: 'Created with template',
  Handler: 'index.handler',
  MemorySize: 256,
  Publish: true,
  Runtime: 'nodejs12.x',
  Timeout: 15,
  TracingConfig: {
    Mode: "Active"
  }
};

lambda.createFunction(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
The above code gives the error: Could not unzip uploaded file. Please check your file, then try to upload again.
How can I read the file from the URL and pass it in the params?
Can anyone help me here?
Using the createFunction docs and specifically the Code docs, you can see that ZipFile expects
The base64-encoded contents of the deployment package. AWS SDK and AWS CLI clients handle the encoding for you.
and not a URL of where it is. Instead, you need to use S3Bucket and S3Key.
It is not clear from the docs whether public buckets are allowed for this purpose, but the docs do say:
An Amazon S3 bucket in the same AWS Region as your function. The bucket can be in a different AWS account.
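So the Code parameter should reference the object's location instead of embedding a URL, roughly like this (a sketch; 'my-own-bucket' is a hypothetical bucket in the same region as the function, holding a copy of LambdaTemplate.zip):
var params = {
  Code: {
    S3Bucket: 'my-own-bucket',   // bucket containing the deployment package
    S3Key: 'LambdaTemplate.zip'  // object key of the zip
  },
  FunctionName: 'testFunction',
  Role: 'arn:aws:iam::149727569662:role/ROLE',
  Handler: 'index.handler',
  Runtime: 'nodejs12.x'
};

lambda.createFunction(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});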

Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK

This was working a couple of months ago without any code changes in my websocket server, but using it today it seems the Google Speech-to-Text API no longer allows authentication using access tokens.
This was my previously working method, until I hit this error today:
const client = new speech.SpeechClient({
  access_token: ACCESS_TOKEN,
  projectId: 'project-name'
});
That nets me the above error in the title.
I also tried switching to a service account (which I used in the past) by setting up the environment as follows
export GOOGLE_APPLICATION_CREDENTIALS="path-to-key.json"
I then run the client without the above code and instead run:
const client = new speech.SpeechClient();
and that nets me this beautiful error instead, even though the environment is set at this point with the project ID:
Error: Unable to detect a Project Id in the current environment.
Any help in resolving this would be much appreciated!
I resolved the environment problem and subsequent error by doing the following:
const options = {
  keyFilename: 'path-to-key.json',
  projectId: 'project-name',
};

const client = new speech.SpeechClient(options);
I was able to follow the Official Quickstart and got it working by using Client Libraries with no issues. I will explain what I did right below.
From Cloud Speech-to-Text - Quickstart:
Create or select a project:
gcloud config set project YOUR_PROJECT_NAME
Enable the Cloud Speech-to-Text API for the current project:
gcloud services enable speech.googleapis.com
Create a service account:
gcloud iam service-accounts create [SA-NAME] \
    --description "[SA-DESCRIPTION]" \
    --display-name "[SA-DISPLAY-NAME]"
Download a private key as JSON:
gcloud iam service-accounts keys create ~/key.json \
    --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"
Install the Client Library
npm install --save @google-cloud/speech
Create a quickstart.js file and put the following code sample inside:
'use strict';

// [START speech_quickstart]
async function main() {
  // Imports the Google Cloud client library
  const speech = require('@google-cloud/speech');
  const fs = require('fs');

  // Creates a client
  const client = new speech.SpeechClient();

  // The name of the audio file to transcribe
  const fileName = './resources/audio.raw';

  // Reads a local audio file and converts it to base64
  const file = fs.readFileSync(fileName);
  const audioBytes = file.toString('base64');

  // The audio file's encoding, sample rate in hertz, and BCP-47 language code
  const audio = {
    content: audioBytes,
  };
  const config = {
    encoding: 'LINEAR16',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
  };
  const request = {
    audio: audio,
    config: config,
  };

  // Detects speech in the audio file
  const [response] = await client.recognize(request);
  const transcription = response.results
    .map(result => result.alternatives[0].transcript)
    .join('\n');
  console.log(`Transcription: ${transcription}`);
}

main().catch(console.error);
where const fileName = './resources/audio.raw' is the path where your test audio file is located.

Access key issue with aws-sdk S3 in production environment (Javascript)

I am using the AWS SDK to store images from my web app. I am also using JavaScript and multer to manage the upload process. When I call the API from my localhost (my local computer) for testing, everything works fine. But when I deploy my project to a DigitalOcean droplet, it always fails with the following error:
The AWS Access Key Id you provided does not exist in our records.
I have set my access key and secret key as environment variables in the file /etc/environment, and when I execute the command printenv, I can see the variables listed.
I am using the same variables both on my local computer and on the server. Why does AWS reject my request?
Thanks in advance. Here is my code:
var s3 = new aws.S3({
  region: 'us-west-2',
  accessKeyId: config.aws.accessKey,
  secretAccessKey: config.aws.secretKey
})

var storage = multerS3({
  s3: s3,
  bucket: 'pos-lisa',
  acl: 'public-read',
  metadata: function (req, file, cb) {
    cb(null, {fieldName: file.fieldname})
  },
  key: function (req, file, cb) {
    cb(null, file.fieldname + '-' + Date.now() + '.' + fileExt(file.originalname))
  }
})

Is Dotenv still a necessary package with 6.x.x Node?

I recently set up a simple project with a .env file and read the env variables in my code with process.env.[variable name], and it worked without adding the dotenv package to my project.
Has Node incorporated this natively? I tried googling but it didn't turn up any useful information, so I am a bit confused. I thought it would be easy to confirm or deny.
Here is my 'app':
// Load the SDK and UUID
var AWS = require('aws-sdk');
var uuid = require('node-uuid');

// Create an S3 client
var s3 = new AWS.S3({
  region: 'us-east-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

// Create a bucket and upload something into it
var bucketName = 'node-sdk-sample-' + uuid.v4();
var keyName = 'hello_colorado.txt';

s3.createBucket({Bucket: bucketName}, function() {
  var params = {Bucket: bucketName, Key: keyName, Body: 'Coloradoical!'};
  s3.putObject(params, function(err, data) {
    if (err)
      console.log(err)
    else
      console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
  });
});
And my package.json (without dotenv):
{
  "dependencies": {
    "aws-sdk": ">= 2.0.9",
    "node-uuid": ">= 1.4.1"
  }
}
Just a thought, could it be related to the fact that I am running my application from the command line with node simple.js? If so, can you explain why?
No, Node.js does not read .env files automatically.
Possible explanations for what's going on:
Perhaps the environment variable you are using is already set in your shell before you run the program.
Perhaps your AWS credentials are being stored/utilized some other way by your machine. (Based on our comment conversation, this looks like the case for you, but I'm including other things to generically help others who might be seeing something similar.)
Perhaps one of the modules you are loading is reading the .env file.
Suggested additional info from @maxwell:
AWS CLI help pages indicate that the precedence for configuration values is:
command line options
environment variables
configuration file
So it sounds like the information was coming out of a configuration file in @maxwell's case.
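A quick way to see which source is supplying a value is to log it before and after loading dotenv explicitly (a minimal sketch; it assumes a .env file next to the script and that dotenv has been installed):
// check-env.js
console.log('before dotenv:', process.env.AWS_ACCESS_KEY_ID);

require('dotenv').config(); // explicitly load the .env file

console.log('after dotenv: ', process.env.AWS_ACCESS_KEY_ID);
// If the value is already present before dotenv runs, it is coming from the shell,
// a shell plugin, or another ambient source rather than from the .env file.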
I also hit the case where .env was read into process.env "automatically", which puzzled me a lot. I then found out it was because I had installed a zsh plugin called dotenv, which automatically loads env variables from a .env file when you cd into the directory.

simple node.js example in aws lambda

I am trying to send a simple request with aws lambda.
My module structure is as follows:
mylambda
|-- index.js
|-- node_modules
| |-- request
I zip it up and upload it to Lambda.
Then I invoke it, and it returns the following error. "errorMessage": "Cannot find module 'index'"
Here is the contents of the index.js file
var request = require('request');
exports.handler = function(event, context) {
var headers = { 'User-Agent': 'Super Agent/0.0.1', 'Content-Type': 'application/x-www-form-urlencoded' }
// Configure the request
var options = {
url: 'https://myendpoint',
method: 'POST',
headers: headers,
form: {'payload': {"text":""} }
}
// Start the request
request(options, function (error, response, body) {
if (!error && response.statusCode == 200) {
console.log(body)
}
})
console.log('value1 =', event.key1);
context.succeed(event.key1); // Echo back the first key value
};
Any help is appreciated, Thanks
All working now. I had to increase the Timeout(s) setting in advanced settings, as the request was taking longer than 3 seconds.
I also had to ensure my node modules were correctly installed; I had messed up the request module while trying to figure out what was wrong.
To reinstall the module, I deleted and then re-installed request:
deleted node_modules
npm init
added the dependency "request": "*" to package.json
npm install
Then I compressed the zip, uploaded it, and it's all working now. :)
You have to zip and upload the contents of the folder, not the root folder itself. Per your example, zip the following and then upload:
|-- index.js
|-- node_modules
|   |-- request
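For example, from inside the mylambda folder you could run something like this (the archive name is arbitrary):
cd mylambda
zip -r function.zip index.js node_modules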
Task: write an AWS Lambda function.
How we typically end up doing it:
1. Write the code in the AWS editor and run it.
2. It doesn't run as expected, so we add lots of console logs (because we can't debug the code).
3. Wait a few seconds, then check the logs in another window, switching back and forth between windows until the problem is resolved.
4. Switching windows takes a lot of time and effort.
Why can't we write the code on our own server (not in the AWS editor) and then send that code to AWS?
Yes, we can, using the new Function concept (https://davidwalsh.name/new-function), a blessing in disguise.
Sample code:
let fs = require('fs');
const aws = require("aws-sdk");
const s3 = new aws.S3(),
  async = require('async');

aws.config = {
  "accessKeyId": "xyz",
  "secretAccessKey": "xyz",
  "region": "us-east-1"
};

// assumes this runs inside an Express route handler, so req and res are available
fs.readFile('path to your code file', 'utf-8', async (err, code) => {
  if (err) return res.status(500).send({ err });

  // only this function has to go into the aws editor
  async function uploadToS3(docs) {
    let func = new Function('docs', "aws", "s3", 'async', `${code}`);
    return func(docs, aws, s3, async);
  }

  // this line will call the aws lambda function from our server
  let resp = await uploadToS3(req.files.docs);
  return res.send({ resp });
});
The code I have written in my file:
docs = Array.isArray(docs) ? docs : [docs];

let funArray = [];
docs.forEach((value) => {
  funArray.push(function (callback) {
    s3.upload({
      Bucket: "xxx",
      Body: value.data,
      Key: "anurag" + "/" + new Date(),
      ContentType: value.mimetype
    }, function (err, res) {
      if (err) {
        return callback(err, null);
      }
      return callback(null, res);
    });
  });
});

return new Promise((resolve, reject) => {
  async.parallel(funArray, (err, data) => {
    resolve(data);
  });
});
Benefits:
As the whole code is written in our familiar IDE, it is easy to debug.
Only two lines have to go into the AWS editor (isn't that easy?).
It is quite easy to modify/update the code (since the code lives in our repo, we may not even have to open the AWS editor).
Yes, we can use other third-party libraries, but the above is written in pure JavaScript; no third-party library is used.
Also, you don't have to deploy your code here.
Sometimes the total size of our libraries grows past 5 MB and the AWS Lambda editor stops supporting it.
This approach resolves that problem as well, since we now send only the required function from a library, not the whole library.
On average an async library contains hundreds of functions, but we use only one or two, so now we send only the functions we are actually going to use.
Note:
I searched for this a lot but could not find this kind of approach anywhere.
The above piece of code uploads docs to the S3 bucket.
