I want to load a file from S3 containing line-separated values and push its contents into an array.
The following code works on my local machine, but not when executed as a Lambda function. The Lambda function times out (even if I bump the timeout up to 15 seconds).
Are the SDKs different? What am I missing here, given that I get no error message at all besides the timeout?
Lambda Env: Node 6.10
Permissions to access S3 are set like this:
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::mybucket",
"arn:aws:s3:::mybucket/*"
]
}]
The code looks like this:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'eu-central-1' });

exports.index = function (event, context, callback) {
    var params = {
        Bucket: 'mybucket',
        Key: 'file.txt'
    };
    var urls = [];
    var stream = s3.getObject(params);
    stream.on('httpError', function (err) {
        console.log(err);
        throw err;
    });
    stream.on('httpData', function (chunk) {
        urls.push(chunk.toString());
    });
    stream.on('httpDone', function () {
        var urls2 = urls.join('\n\r');
        callback(urls2);
    });
    stream.send();
};
I got the following error when executing the Lambda via the AWS console:
{
    "errorMessage": "2017-07-04T18:25:20.271Z 19ab7138-60e6-11e7-9e1e-c318d929bc39 Task timed out after 15.00 seconds"
}
Thanks for any help!
A handler is required to invoke the Lambda function. You also need to specify the handler name in the Lambda function's configuration.
exports.handler = (event, context, callback) => {
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    const params = {
        Bucket: bucket,
        Key: key,
    };
    var stream = s3.getObject(params);
    ....
    stream.send();
}
exports.handler is invoked when the Lambda function is triggered. Make sure you define the handler name (filename.handler) in the Lambda function configuration.
If you trigger this code on an S3 file upload, it will read the uploaded S3 file. You can change the bucket and key names to read any file (that exists).
Follow the documentation: http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html
This code works as expected. Thanks to @anand for verifying it.
The issue was related to VPC settings.
Unfortunately, a proper error message would have helped, but at the end of the day: lesson learned.
If you are running in a VPC and your Lambda code should work but you get a timeout, check your security and network settings =)
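For completeness, a corrected version of the original handler might look something like this (a sketch only, using exports.handler as the answer suggests and the bucket/key from the question; note that callback's first argument is reserved for errors):

var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'eu-central-1' });

exports.handler = (event, context, callback) => {
    var params = { Bucket: 'mybucket', Key: 'file.txt' };
    s3.getObject(params, function (err, data) {
        // The first callback argument is the error; pass null on success
        if (err) return callback(err);
        var urls = data.Body.toString().split('\n'); // one array entry per line
        callback(null, urls);
    });
};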
How do I add my own Lambda without going to the AWS console manually and, most importantly, how do I call it from my React app?
Currently, if I want to execute my own Lambda function, I go to the AWS console and create it there manually. Clearly, there must be a way to do this locally in VS Code, since serverless-framework has already created its functions using the fullstack-app deployment.
This is my current Lambda function (it sends an email from a contact form using Amazon SES), created in AWS using the console:
var aws = require('aws-sdk');
var ses = new aws.SES({ region: 'us-east-1' });

exports.handler = (event, context, callback) => {
    var params = {
        Destination: {
            ToAddresses: [`${event.toEmail}`]
        },
        Message: {
            Body: {
                Html: { Data: `${event.body}` }
            },
            Subject: { Data: `${event.subject}` }
        },
        Source: `${event.fromEmail}`
    };
    ses.sendEmail(params, function (err, data) {
        callback(null, { err: err, data: data });
        if (err) {
            console.log(err);
            context.fail(err);
        } else {
            console.log(data);
            context.succeed(event);
        }
    });
};
I have created a REST API for it in AWS and I call it from my React app with axios:
axios.post(`https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/default/contactFormSend`, email)
.then(res => {console.log(res)})
My goal is not to create the Lambda function manually in the AWS console, but to write it locally using the serverless-framework architecture and find a way to call it.
I have looked everywhere, but I feel like I have missed something very important while learning about the serverless-framework architecture.
How do I add my own lambda without going to the AWS console manually?
I hope you have a serverless.yml file with your function config. Here is a template with the possible configs: https://www.serverless.com/framework/docs/providers/aws/guide/serverless.yml/
If everything is set up, deploying is super easy: just run serverless deploy.
https://www.serverless.com/framework/docs/providers/aws/guide/deploying/
Here is a very simple example from the serverless examples repo: https://github.com/serverless/examples/tree/master/aws-node-rest-api
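To give a flavour, a minimal serverless.yml for a function like yours might look roughly like this (the service name, handler path, and runtime are illustrative placeholders, not taken from your project):

service: contact-form

provider:
  name: aws
  runtime: nodejs12.x    # hypothetical runtime; use whatever you target
  region: us-east-1

functions:
  contactFormSend:
    handler: handler.sendEmail   # assumes a handler.js exporting sendEmail
    events:
      - http:
          path: contact
          method: post

Running serverless deploy then creates the function and prints the generated API Gateway endpoint.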
How do I call it from my React app?
You need an exposed public endpoint: either use the one generated by API Gateway directly, or create a custom domain and map it to your existing domain.
I am trying to pull messages from an SQS queue (not FIFO), but it doesn't work. I have a Lambda writing a value to DynamoDB, which triggers another Lambda that creates an SQS message. I then created a Lambda function with the rights ReceiveMessage, GetQueueAttributes and DeleteMessage, and added a "Trigger for Lambda Functions" in SQS giving permissions to my new Lambda function. Whenever an entry is added to the database, my last Lambda function is triggered, so the trigger seems to work. When I try to get the entries using sqs.receiveMessage(...), it waits until the WaitTimeSeconds period has passed and then I receive an answer like this:
{
    "ResponseMetadata": {
        "RequestId": "b1e34ce3-9003-5c00-a3d8-1ab75ba132b8"
    }
}
Unfortunately, no Message object is included. I also tried wrapping this call in a loop to try it several times in a row, as suggested on other pages, but it just runs until the timeout is reached and I don't get the message. While the Lambda is running, I can see the message in the "Messages in flight" list.
I tried modifying the timeouts and the way I call the function, and googled a lot, but I still receive no messages.
Any ideas?
Here is my code:
'use strict';

var AWS = require("aws-sdk");
AWS.config.update({ region: 'eu-central-1' });
var sqs = new AWS.SQS({ apiVersion: '2012-11-05' });

exports.handler = (event, context, callback) => {
    var params = {
        QueueUrl: 'https://sqs.eu-central-1.amazonaws.com/476423085846/email',
        AttributeNames: ['All'],
        MaxNumberOfMessages: 5,
        MessageAttributeNames: [
            'email',
            'hash'
        ],
        WaitTimeSeconds: 10
    };
    sqs.receiveMessage(params, function (err, data) {
        if (err) {
            console.log(err, err.stack); // an error occurred
        } else {
            console.info('DATA\n' + JSON.stringify(data));
        }
    });
};
My SQS settings:
Default Visibility Timeout: 5 min
Message Retention Period: 4 days
Maximum Message Size: 256 KB (my messages should be way smaller)
Delivery Delay: 0 sec
Receive Message Wait Time: 10 sec
Thanks in advance :)
Regards Christian
With the AWS Lambda SQS integration, AWS handles polling the SQS queue for you. The SQS message(s) will be in the event object that is passed to the Lambda invocation. You should not be calling receiveMessage yourself in that scenario.
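For illustration, a handler behind an SQS trigger can read the messages straight off the event, roughly like this (a sketch; the attribute names are just the ones from your question):

exports.handler = (event, context, callback) => {
    // Each message polled by the SQS trigger arrives as an entry in event.Records
    event.Records.forEach(function (record) {
        console.log('Body:', record.body);
        console.log('Attributes:', JSON.stringify(record.messageAttributes)); // e.g. 'email' and 'hash'
    });
    // Completing successfully lets Lambda delete the messages from the queue
    callback(null, {});
};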
I'm using WebdriverIO v4 to create HTML reports of various tests passing/failing. I gather the results of the tests via multiple event listeners (e.g. this.on('test:start'), this.on('suite:end'), etc.). The final event is this.on('end'), which is called when all of the tests have completed execution. It is here where the test results are sorted based on which operating system they were run on, which browser, etc.
It seems that the problem is that my program terminates before allowing the function to complete or receiving a callback from the function.
Here is the gist of my code:
let aws = require('aws-sdk');
aws.config.update({
    // Censored keys for security
    accessKeyId: '*****',
    secretAccessKey: '*****',
    region: 'us-west-2'
});

let s3 = new aws.S3({
    apiVersion: "2006-03-01"
});
/*
*
* Here is where the program generates HTML files
*
*/
//The key and data fields are not actually asterisks, this is just to show that key and data fields are initialized at this point
let key = "****"
let data = "****"
s3.upload({
Bucket: 'html',
Key: key,
Body: data
}, function (err, data) {
if (err) {
console.log("Error: ", err);
}
if (data) {
console.log("Success: ", data.Location);
}
}).on('httpUploadProgress', event => {
console.log(`Uploaded ${event.loaded} out of ${event.total}`);
});
When the tests are run, the results are displayed, and then the program terminates. The program does not wait for the callback to return, so no error or success message is displayed. The files are not uploaded to S3.
I know that the s3.upload() works because if I run the code earlier in the project with dummy data, the upload succeeds. But at the end of my project, the program terminates before the upload finishes.
How can I fix this issue? How can I guarantee that the files are uploaded to S3 before the program terminates? How can I see the callback before termination? Thank you!
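One approach that might help (a sketch, assuming the reporter's end hook can wait on a promise) is to use the SDK's .promise() on the upload and keep the process alive until it resolves:

// Using the same s3, key and data as above
const upload = s3.upload({
    Bucket: 'html',
    Key: key,
    Body: data
}).promise();

upload
    .then(result => console.log("Success: ", result.Location))
    .catch(err => console.log("Error: ", err));

// If the 'end' hook supports asynchronous completion, returning `upload`
// (or awaiting it) makes the runner wait for S3 before exiting.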
In client-side JavaScript, I set:
AWS.config.credentials = {
"accessKeyId": ak, // starts with "AKIA..."
"secretAccessKey": sk // something long and cryptic
};
Then I eventually call:
var lambda = new AWS.Lambda({apiVersion: '2015-03-31'});
var params = {
    FunctionName: 'my-function-name',
    InvokeArgs: my_data
};
lambda.invokeAsync(params, function(err, data) {
...
The HTTP request seems to contain the correct access key:
authorization:AWS4-HMAC-SHA256 Credential=AKIA...
And in the server-side Node.js code, I don't manually set any AWS credentials, on the understanding that setting them on the client side is sufficient:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
...
Following the request, the server's upload handler gets called as expected, but within that handler, s3.putObject() fails with an Access Denied error. Trying to debug this, I added console.log(AWS.config.credentials) to the upload handler, and CloudWatch shows:
accessKeyId: 'ASIA...
I don't recognize the accessKeyId that is shown, and it certainly doesn't match the one provided in the request header. Am I doing something wrong here, or is this expected behavior?
The Lambda function does not use the AWS credentials you used in your client-side JavaScript code. The credentials in your client-side code were used to issue a Lambda.invoke() command to the AWS API. In this context, the credentials you are using on the client-side only need the Lambda invoke permission.
Your Lambda function is then invoked by AWS Lambda service. The Lambda service will attach the IAM Execution Role to the invocation that you specified when you created/configured the Lambda function. That IAM Execution Role is what needs to have the appropriate S3 access.
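For example, the execution role attached to the Lambda would need a policy along these lines (the bucket name is a placeholder):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-bucket/*"
        }
    ]
}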
I'm trying to delete a 'folder' called js from an Amazon S3 bucket. I've followed numerous tutorials, and the identity I'm using has AmazonS3FullAccess permissions.
By all I've gathered, the following should work, but it doesn't. I get no errors; I merely get console output of {}.
I have a method that can upload to Amazon S3 using the same credentials, so I know they check out. This is how the permissions are configured on my IAM identity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
and this is the actual code I'm trying to run to delete the content:
var AWS = require('aws-sdk');
AWS.config = {
    'accessKeyId': `{HIDDEN}`,
    'secretAccessKey': `{HIDDEN}`,
    'region': `us-east-1`
};

var rmAWS = function () {
    var BUCKET_NAME = `{HIDDEN}`;
    var s3 = new AWS.S3();
    var params = {
        Bucket: BUCKET_NAME,
        Key: 'js'
    };
    s3.deleteObject(params, function (err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else console.log(data); // successful response
    });
};

rmAWS();
Amazon S3 doesn't have a real concept of directories. So when you try to delete js, it looks for an object whose key is js. In reality there are several objects whose keys are of the form js/foo. You should delete all of those objects, and then the "folder" will be "deleted".
To do so, you should use the deleteObjects method (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObjects-property).
You can't delete objects using a wildcard such as js/*, but you can list the objects in the bucket and then create an array with only the ones you want to delete.
Try the listObjects method (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property). Set the Prefix to js/ so that it only returns the objects you'd like to delete, then use that result to build the array for the deleteObjects call.
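Putting the two together, a sketch might look like this (untested, and it assumes fewer than 1000 objects under the prefix, since listObjects paginates):

s3.listObjects({ Bucket: BUCKET_NAME, Prefix: 'js/' }, function (err, data) {
    if (err) return console.log(err, err.stack);
    if (data.Contents.length === 0) return console.log('Nothing to delete.');
    var params = {
        Bucket: BUCKET_NAME,
        Delete: {
            // Build the delete list from the keys that were just listed
            Objects: data.Contents.map(function (obj) {
                return { Key: obj.Key };
            })
        }
    };
    s3.deleteObjects(params, function (err, data) {
        if (err) console.log(err, err.stack);
        else console.log('Deleted ' + data.Deleted.length + ' objects');
    });
});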
Are there multiple "js" "folders" that need to be deleted? Or just one?
var params = {
    Bucket: BUCKET_NAME,
    Delimiter: '/',
    Prefix: 'FOLDERNAME/FOLDERNAME/js/'
};
Make sure to try it out by listing the objects first, before actually deleting them.