I'm trying to pull messages from an SQS queue (not FIFO), but it doesn't work. I have a Lambda writing a value to DynamoDB, which triggers another Lambda that creates an SQS message. Then I created a Lambda function with the ReceiveMessage, GetQueueAttributes and DeleteMessage permissions and added a "Trigger for Lambda Functions" in SQS granting access to my new Lambda function. Whenever an entry is added to the database my last Lambda function is triggered, so the trigger seems to work. But when I try to get the entries using sqs.receiveMessage(...), it waits until the WaitTimeSeconds period has passed and I receive an answer like this:
{
"ResponseMetadata": {
"RequestId": "b1e34ce3-9003-5c00-a3d8-1ab75ba132b8"
}
}
Unfortunately no Messages object is included. I also tried wrapping this call in a loop to retry several times in a row, as suggested on other pages, but it just runs until the timeout is reached and I never get the message. While the Lambda is running I can see the message in the "Messages in flight" count.
I tried modifying the timeouts, the way I call the function and googled a lot, but unfortunately I still receive no messages.
Any ideas?
Here is my code:
'use strict';
var AWS = require("aws-sdk");
AWS.config.update({region: 'eu-central-1'});
var sqs = new AWS.SQS({apiVersion: '2012-11-05'});

exports.handler = (event, context, callback) => {
    var params = {
        QueueUrl: 'https://sqs.eu-central-1.amazonaws.com/476423085846/email',
        AttributeNames: ['All'],
        MaxNumberOfMessages: 5,
        MessageAttributeNames: [
            'email',
            'hash'
        ],
        WaitTimeSeconds: 10
    };
    sqs.receiveMessage(params, function(err, data) {
        if (err) {
            console.log(err, err.stack); // an error occurred
            callback(err);
        } else {
            console.info('DATA\n' + JSON.stringify(data));
            callback(null, data);
        }
    });
};
My SQS settings:
Default Visibility Timeout: 5 min,
Message Retention Period: 4 days,
Maximum Message Size: 256 KB (my messages should be way smaller),
Delivery Delay: 0 sec,
Receive Message Wait Time: 10 sec
Thanks in advance :)
Regards Christian
With AWS Lambda SQS integration, AWS handles polling the SQS queue for you. The SQS message(s) will be in the event object that is passed to the Lambda function invocation. You should not be calling receiveMessage directly in that scenario.
I have a Google Cloud Function that sends notifications to a Firebase topic.
The function was working fine until suddenly it started to send more than one notification, 2 or 3 at the same time. After contacting the Firebase support team, they told me I should make the function idempotent, but I don't know how, since it's a callable function.
For more details, this is a reference question containing more detail about the case.
below is the function's code.
UPDATE 2
It was a bug in the Admin SDK and they resolved it in the latest release.
UPDATE
The function is already idempotent because it is an event-driven function.
The link above contains the function's logs as proof that it runs only once.
After two months of going back and forth, it appears the problem is with the Firebase Admin SDK.
The function call getMessaging().sendToTopic() retries 4 times on top of the original request, so it makes 5 attempts by default before throwing an error and terminating the function. So the reason for the duplicate notifications is that the Admin SDK from time to time can't reach the FCM server for some reason. It tries to send the notification to all subscribers, but partway through (or before it has sent them all) it gets an error, so it retries again from the beginning; as a result some users receive one notification and some get 2, 3 or 4.
And now the question is how to prevent these default retries, or how to make the retry continue from where it got the error. I'll probably ask a separate question.
For now I have a naive workaround that prevents the duplicate notifications on the receiver (mobile client): if it gets more than one notification with the same content within a minute, it shows only one.
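The client-side workaround described above can be sketched as a small helper (illustrative JavaScript only; the real client would be mobile platform code, and the names are mine):

```javascript
// Naive client-side de-duplication: remember when each notification body was
// last shown and suppress repeats that arrive within the window.
function makeDeduper(windowMs = 60 * 1000) {
  const lastShown = new Map(); // content -> timestamp of last display
  return function shouldShow(content, now = Date.now()) {
    const prev = lastShown.get(content);
    if (prev !== undefined && now - prev < windowMs) {
      return false; // duplicate within the window: drop it
    }
    lastShown.set(content, now);
    return true;
  };
}
```

For example, a second notification with the same content 30 seconds after the first would be suppressed, while one arriving after the minute has passed would be shown.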
const functions = require("firebase-functions");
// The Firebase Admin SDK to access Firestore.
const admin = require("firebase-admin");
const {getMessaging} = require("firebase-admin/messaging");
const serviceAccount = require("./serviceAccountKey.json");
admin.initializeApp({
credential: admin.credential.cert(serviceAccount),
databaseURL: "https://mylinktodatabase.firebaseio.com",
});
exports.callNotification = functions.https.onCall( (data) => {
// Grab the text parameter.
const indicator = data.indicator;
const mTitle = data.title;
const mBody = data.body;
// topic to send to
const topic = "mytopic";
const options = {
"priority": "high",
"timeToLive": 3600,
};
let message;
if (indicator != null ) {
message = {
data: {
ind: indicator,
},
};
} else {
message = {
data: {
title: mTitle,
body: mBody,
},
};
}
// Send a message to devices subscribed to the provided topic.
return getMessaging().sendToTopic(topic, message, options)
.then(() => {
if (indicator != null ) {
console.log("Successfully sent message");
return {
result: "Successfully sent message", status: 200};
} else {
console.log("Successfully sent custom");
return {
result: "Successfully sent custom", status: 200};
}
})
.catch((error) => {
if (indicator != null ) {
console.log("Error sending message:", error);
return {result: `Error sending message: ${error}`, status: 500};
} else {
console.log("Error sending custom:", error);
return {result: `Error sending custom: ${error}`, status: 500};
}
});
});
The blog post Cloud Functions pro tips: Building idempotent functions shows how to make a function idempotent using two approaches:
Use your event IDs
One way to fix this is to use the event ID, a number that uniquely identifies an event that triggers a background function, and— this is important—remains unchanged across function retries for the same event.
To use an event ID to solve the duplicates problem, the first thing is to extract it from the event context that is accessed through function parameters. Then, we utilize the event ID as a document ID and write the document contents to Cloud Firestore. This way, a retried function execution doesn’t create a new document, just overrides the existing one with the same content. Similarly, some external APIs (e.g., Stripe) accept an idempotency key to prevent data or work duplication. If you depend on such an API, simply provide the event ID as your idempotency key.
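A minimal sketch of that event-ID approach (the collection name and payload are mine; `db` would be `admin.firestore()` in a real function):

```javascript
// Use the event ID as the Firestore document ID: a retried execution
// overwrites the same document instead of creating a duplicate.
async function recordEventOnce(db, eventId, payload) {
  // set() with a fixed document ID is idempotent across retries.
  await db.collection('handled-events').doc(eventId).set(payload);
  return eventId;
}
```
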
A new lease on retries
While this approach eliminates the vast majority of duplicated calls on function retries, there’s a small chance that two retried executions running in parallel could execute the critical section more than once. To all but eliminate this problem, you can use a lease mechanism, which lets you exclusively execute the non-idempotent section of the function for a specific amount of time. In this example, the first execution attempt gets the lease, but the second attempt is rejected because the lease is still held by the first attempt. Finally, a third attempt after the first one fails re-takes the lease and successfully processes the event.
To apply this approach to your code, simply run a Cloud Firestore transaction before you send your email, checking to see if the event has been handled, but also storing the time until which the current execution attempt has exclusive rights to sending the email. Other concurrent execution attempts will be rejected until the lease expires, eliminating all duplicates for all intents and purposes.
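The lease idea can be sketched like this (collection and field names are mine; `db` would be `admin.firestore()`, whose `runTransaction`, `get` and `set` are the real Firestore transaction API):

```javascript
// Take a time-limited lease on the event inside a transaction; a concurrent
// attempt finds the lease still held and is rejected, so only one attempt
// reaches the non-idempotent work (e.g. sending the notification).
async function withLease(db, eventId, leaseMs, work) {
  const ref = db.collection('leases').doc(eventId);
  await db.runTransaction(async (t) => {
    const snap = await t.get(ref);
    const now = Date.now();
    if (snap.exists && snap.data().leaseUntil > now) {
      throw new Error('lease held by a concurrent attempt');
    }
    t.set(ref, { leaseUntil: now + leaseMs });
  });
  // only the attempt that won the lease gets here
  return work();
}
```
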
Also, as stated in this other question:
Q: Is there a need to make these onCall functions idempotent or will they never perform retries?
A: Calls to onCall functions are not automatically retried. It's up to your application's client-side and server-side code, to agree on a retry strategy.
See also:
Retrying Event-Driven Functions - Best practices
I am trying to handle a modal submission in Slack, but there are some database operations in between which take a few seconds, and due to this delay I am getting a "We had some trouble connecting" error when submitting the Slack dialog (Slack API).
I know in node.js we can do something like this:
app.post('/', async (req, res) => {
res.status(200).send({text: 'Acknowledgement received !'});
// handle other task
return res.json({done: 'Yipee !'})
})
But in an AWS Lambda function, I have no idea how I will handle this acknowledgement response within 3 seconds.
module.exports.events = async (event, context, callback) => {
??? -> How to handle acknowledgement here, it must be handled at top.
// handle task
return {
statusCode: 200,
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({text: 'Done !'})
}
}
If all you want is to be notified of a successful invocation and then have the Lambda keep doing its own thing, you can invoke the Lambda asynchronously by setting the InvocationType parameter to Event. https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax
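For the Slack case, that usually means the first Lambda acknowledges immediately and hands the slow work to a second Lambda invoked with InvocationType 'Event' (which queues the invocation and returns without waiting). A sketch, with the worker name 'slack-worker' being hypothetical and the client injected so it is testable (`lambdaClient` would be `new AWS.Lambda()` from the aws-sdk in a real handler):

```javascript
// Acknowledge Slack within its 3-second window, then fire off the slow
// database work as an asynchronous invocation of a second Lambda.
function makeHandler(lambdaClient) {
  return async (event) => {
    await lambdaClient.invoke({
      FunctionName: 'slack-worker',     // hypothetical worker function name
      InvocationType: 'Event',          // async: returns without waiting
      Payload: JSON.stringify({ body: event.body }),
    }).promise();
    // this response goes back to Slack right away
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: 'Acknowledgement received!' }),
    };
  };
}
```

The worker Lambda then performs the database operations and, if needed, posts the final result back to Slack via the response_url from the payload.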
Slack's API can be difficult to handle with a serverless architecture, as most serverless implementations like the response to be the last thing they do, rather than the first. One approach would be to wrap any required behaviour inside a promise, and only resolve that promise once you have handled the task. See here for an example of this.
How do I add my own lambda without going to AWS console manually and, most importantly, how do I call it from my React app?
Currently, if I want to execute my own lambda function, I go to AWS console and create it there manually. Clearly, there is a way to do this locally in VS Code, since serverless-framework has created its functions already using the fullstack-app deployment.
This is my current Lambda function (send an email from contact form using Amazon SES) created in AWS using console.
var aws = require('aws-sdk');
var ses = new aws.SES({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    var params = {
        Destination: {
            ToAddresses: [`${event.toEmail}`]
        },
        Message: {
            Body: {
                Html: { Data: `${event.body}` }
            },
            Subject: { Data: `${event.subject}` }
        },
        Source: `${event.fromEmail}`
    };
    ses.sendEmail(params, function (err, data) {
        if (err) {
            console.log(err);
            callback(err);
        } else {
            console.log(data);
            callback(null, data);
        }
    });
};
I have created a REST API for it in AWS and I call it from my React app with axios:
axios.post(`https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/default/contactFormSend`, email)
.then(res => {console.log(res)})
My goal is to not create the lambda function manually in AWS console, but write it locally using serverless-framework architecture and find a way to call it.
I have looked everywhere, but I feel like I have missed something very important during my learning about serverless-framework architecture.
How do I add my own lambda without going to the AWS console manually?
I hope you have a serverless.yml file with your function config. Here is a template with the possible configs: https://www.serverless.com/framework/docs/providers/aws/guide/serverless.yml/
If everything is set up, deploying is super easy: just run serverless deploy
https://www.serverless.com/framework/docs/providers/aws/guide/deploying/
Here is a very simple one from serverless examples - https://github.com/serverless/examples/tree/master/aws-node-rest-api
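For illustration, a minimal serverless.yml along those lines might look like this (the service name, file name and runtime are assumptions, not the asker's real values):

```yaml
service: contact-form

provider:
  name: aws
  runtime: nodejs14.x
  region: us-east-1

functions:
  contactFormSend:
    handler: handler.handler   # exports.handler in handler.js
    events:
      - http:                  # API Gateway endpoint for the React app to call
          path: contactFormSend
          method: post
          cors: true
```

Running serverless deploy then creates the function and prints the generated endpoint URL.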
How do I call it from my React app?
You need an exposed public endpoint: either you use the one generated by API Gateway directly, or you create a custom domain and map it to your existing domain.
I have been using PubNub for realtime communication between mobile devices.
The scenario I am creating is as follows:
The sender mobile device will publish a message to PubNub.
The message triggers an On Before Publish PubNub Function, where the original message is sent to a Laravel endpoint and persisted; the database record id is added to the message, which is then published to the subscriber. (The Laravel endpoint is called using the xhr module from the PubNub Function.)
The issue I am facing is that my Laravel endpoint is called approximately 7 to 12 times for each message I publish.
Below is my onBefore PubNub function.
export default (request) => {
console.log('intial message: ',request.message);
const kvstore = require('kvstore');
const xhr = require('xhr');
const http_options = {
"timeout": 5000, // 5 second timeout.
"method": "POST",
"body": "foo=bar&baz=faz"
};
const url = "redacted_backend_url";
return xhr.fetch(url,http_options).then((x) => {
console.log('Messages after calling ajax: ',request.message.content);
// here i am changing the message
request.message.content = 'hello world';
return request.ok();
}).catch(err=>console.log(err)) ;
}
Can you please identify what exactly is wrong?
PubNub Function Wildcard Channel Binding
Your Function is bound to channel '*' which of course means capture all publishes to all channels. What you don't know is that console.log in a Function publishes the message to a channel that looks like this: blocks-output-kpElEbxa9VOgYMJQ.77042688368579w9
And the Functions output window is subscribed to that channel to display your console.log's. So when the Function is invoked, it publishes a message to the console.logs channel which is captured by your Function, which calls console.log and eventually, a configured recursion limit is hit to protect you from getting into an infinite loop.
So if you were to change your channel binding to something like foo.* and publish to a channel like foo.bar, this undesired recursion would be avoided. In production the console.logs should be removed, too, and then this would not happen.
Additionally, you could implement a channel filter condition at the top of your Function to prevent it from executing the rest of your Function code:
if (request.channels[0].startsWith("blocks-output")) {
    return request.ok();
}
I want to load a file from S3 with line-separated values and push them into an array.
The following code works on my local machine, but does not work when executed as a Lambda function. The Lambda function times out (even if I bump the timeout up to 15 seconds).
Are the SDKs different? What am I missing here, since I get no error message at all besides the timeout?
Lambda Env: Node 6.10
Permission to access S3 is set like this
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::mybucket",
"arn:aws:s3:::mybucket/*"
]
}]
Code looks like this
var AWS = require('aws-sdk');
var s3 = new AWS.S3({region: 'eu-central-1'});

exports.index = function(event, context, callback){
    var params = {
        Bucket: 'mybucket',
        Key: 'file.txt'
    };
    var urls = [];
    var stream = s3.getObject(params);
    stream.on('httpError', function(err){
        console.log(err);
        throw err;
    });
    stream.on('httpData', function(chunk) {
        urls.push(chunk.toString());
    });
    stream.on('httpDone', function() {
        var urls2 = urls.join('\n\r');
        callback(null, urls2); // the first callback argument is the error
    });
    stream.send();
};
I got the following error executing the Lambda via the AWS console:
{
"errorMessage": "2017-07-04T18:25:20.271Z 19ab7138-60e6-11e7-9e1e-c318d929bc39 Task timed out after 15.00 seconds"
}
Thanks for any help!
A handler is required to invoke the Lambda function. Also, you need to set the handler name in the Lambda function configuration.
exports.handler = (event, context, callback) => {
const bucket = event.Records[0].s3.bucket.name;
const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
const params = {
Bucket: bucket,
Key: key,
};
var stream = s3.getObject(params);
....
stream.send();
}
exports.handler is invoked when the Lambda function triggers. Make sure you define the handler name (filename.handler) in the Lambda function configuration.
If you trigger this code on an S3 file upload, it will read the uploaded S3 file. Change the bucket and key name to read any file (that exists).
Follow the documentation: http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html
This code works as expected. Thanks to @anand for verifying it.
The issue was related to VPC settings.
Unfortunately, a proper error message would have helped. But at the end of the day, lesson learned.
If you are running in a VPC and your Lambda code should run but you get a timeout, better check your security and network settings =)