Cannot get the contact attributes of Amazon Connect using the AWS SDK for JavaScript

I am using call recording on Amazon Connect.
I am trying to get the contact attributes of an Amazon Connect contact by using the metadata of the .wav file on S3 where the conversation was recorded.
This is my Lambda function.
Object.defineProperty(exports, "__esModule", { value: true });
const AWS = require("aws-sdk");
const connect = new AWS.Connect();
const s3 = new AWS.S3();

exports.handler = async (event, context) => {
  await Promise.all(event.Records.map(async (record) => {
    // use the current record, not event.Records[0], so every record is handled
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const headParams = {
      Bucket: bucket,
      Key: key,
    };
    const metadata = await s3.headObject(headParams).promise();
    console.log(metadata);
    const contactid = metadata.Metadata['contact-id'];
    const instanceid = metadata.Metadata['organization-id'];
    const params = {
      InitialContactId: contactid,
      InstanceId: instanceid,
    };
    console.log(params);
    const connectdata = await connect.getContactAttributes(params).promise();
    console.log(connectdata);
  }));
};
This is the logged metadata of the .wav file (personal information redacted).
{
  AcceptRanges: 'bytes',
  LastModified: 2021-09-01TXX:XX:XX.000Z,
  ContentLength: 809644,
  ETag: '"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"',
  ContentType: 'audio/wav',
  ServerSideEncryption: 'aws:kms',
  Metadata: {
    'contact-id': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
    'aws-account-id': 'XXXXXXXXXXXX',
    'organization-id': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
  },
  SSEKMSKeyId: 'arn:aws:kms:ap-northeast-1:XXXXXXXXXXXX:key/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
}
However, when I call Connect's getContactAttributes method through the aws-sdk, the response contains no attributes, even though the request parameters are definitely populated.
console.log(params)
{
  InitialContactId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
  InstanceId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
}
console.log(connectdata)
{ Attributes: {} }
I want to know what { Attributes: {} } means.
Is there something wrong with the arguments to getContactAttributes, or with how I am logging the output?
Or is it simply not possible to get contact attributes from the metadata of the .wav file in the first place?
I am a beginner, so there may be many mistakes, but I would appreciate any advice.
Thanks.

This problem has been self-solved.
The connect.getContactAttributes method returns only the user-defined Attributes of the contact, not the whole JSON sent from the contact flow, which is what I had misunderstood it to return.
I found that these Attributes are populated by setting key-value pairs in the "Set contact attributes" block of the Amazon Connect contact flow.
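For illustration, a minimal sketch of what the same call returns once the flow has stored an attribute (the greeting key is a hypothetical example, not from the original flow):

// Assuming the contact flow ran a "Set contact attributes" block that
// stored the hypothetical user-defined attribute 'greeting'.
const { Attributes } = await connect.getContactAttributes({
  InitialContactId: contactid, // from the .wav metadata, as above
  InstanceId: instanceid,
}).promise();
console.log(Attributes); // e.g. { greeting: 'hello' } instead of {}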

Related

API Gateway doesn't receive headers from AWS Lambda

I am using the AWS Lambda service to create a serverless function (triggered by API Gateway) that has to do the following:
Get the data from the DynamoDB table
Create a .docx based on that data
Return the .docx document (so that it automatically downloads when the function is triggered).
I managed to successfully accomplish the first 2 tasks, but no matter what I do it returns a base64 string instead of a document. When I check the Network tab, I always get content-type: application/json in the response, despite the fact that I specify the headers in the return of my Lambda function. Is there something I need to configure in my API Gateway to make it work? Or is there an issue with my code?
Update: the headers now arrive and the document download is triggered successfully. But when I try to open it, I get an error: Word found unreadable content. I opened the file in a text editor and its content is the base64 string instead of what I am passing to it. What can be causing this issue?
const AWS = require('aws-sdk');
const { encode } = require("js-base64");
const { Document, Packer, Paragraph, TextRun } = require("docx");
const dynamoDb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const params = {
    TableName: 'table-name',
    Key: {
      'id': 'item-key',
    },
  };
  const dynamoDbResult = await dynamoDb.get(params).promise();
  const data = dynamoDbResult.Item;
  const doc = new Document({
    sections: [
      {
        properties: {},
        children: [
          new Paragraph({
            children: [
              new TextRun(data.projectName),
              new TextRun({
                text: data.clientEmail1,
                bold: true
              }),
            ]
          })
        ]
      }
    ]
  });
  const buffer = await Packer.toBuffer(doc);
  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
      'Content-Disposition': 'attachment; filename="your_file_name.docx"'
    },
    body: encode(buffer),
    isBase64Encoded: true
  };
}
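Two likely culprits, offered as assumptions rather than a confirmed diagnosis: js-base64's encode expects a string, so passing the binary Buffer from Packer.toBuffer can corrupt the data; and an API Gateway REST API only decodes an isBase64Encoded body when its Binary Media Types setting covers the response content type (or the catch-all */*). A minimal sketch of the return statement using Node's built-in base64 conversion instead of js-base64:

// Sketch: let Node's Buffer do the base64 conversion; js-base64's
// encode() expects a string, not binary data.
const buffer = await Packer.toBuffer(doc);
return {
  statusCode: 200,
  headers: {
    'Content-Type': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
    'Content-Disposition': 'attachment; filename="your_file_name.docx"'
  },
  body: buffer.toString('base64'),
  isBase64Encoded: true
};

For the decoding to actually happen, the REST API's Binary Media Types must also include the docx content type or */*.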

amazon s3.upload is taking time

I am trying to upload files to S3; before that, I am altering the file names. I accept 2 files from the request's form-data object, rename each file, and upload it to S3. At the end of the task I need to return the list of renamed files that were uploaded successfully.
I am using the S3.upload() function. The problem is that the variable initialized as an empty array, which should collect the renamed file names, is returned empty because s3.upload() takes time to complete. Is there a way to store each file name once its upload succeeds and return those names in the response?
Please help me fix this. The code looks like this:
if (formObject.files.document && formObject.files.document.length > 0) {
  const circleCode = formObject.fields.circleCode[0];
  let collectedKeysFromAwsResponse = [];
  formObject.files.document.forEach(e => {
    const extractFileExtension = ".pdf";
    if (_.has(FILE_EXTENSIONS_INCLUDED, _.lowerCase(extractFileExtension))) {
      console.log(e);
      // change the filename
      const originalFileNameCleaned = "cleaning name logic";
      const _id = mongoose.Types.ObjectId();
      const s3FileName = "s3-filename-convention";
      console.log(e.path, "", s3FileName);
      const awsResponse = new File().uploadFileOnS3(e.path, s3FileName);
      if (e.hasOwnProperty('ETag')) {
        collectedKeysFromAwsResponse.push(awsResponse.key.split("/")[1])
      }
    }
  });
}
Using await s3.upload(params).promise(); is the solution.
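A sketch of how that could look in the loop above, assuming uploadFileOnS3 returns the promise from s3.upload(params).promise() (whose managed-upload result exposes Key and ETag):

// Sketch: await every upload and collect the renamed keys before responding.
const collectedKeysFromAwsResponse = await Promise.all(
  formObject.files.document.map(async (e) => {
    const s3FileName = "s3-filename-convention"; // renaming logic as in the question
    const awsResponse = await new File().uploadFileOnS3(e.path, s3FileName);
    return awsResponse.Key.split("/")[1];
  })
);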
Use the latest code, which is the AWS SDK for JavaScript V3. Here is the code you should be using:
// Import required AWS SDK clients and commands for Node.js.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper that creates the Amazon S3 service client module.
import path from "path";
import fs from "fs";

const file = "OBJECT_PATH_AND_NAME"; // Path to and name of object. For example '../myFiles/index.js'.
const fileStream = fs.createReadStream(file);

// Set the parameters.
export const uploadParams = {
  Bucket: "BUCKET_NAME",
  // Add the required 'Key' parameter using the 'path' module.
  Key: path.basename(file),
  // Add the required 'Body' parameter.
  Body: fileStream,
};

// Upload file to specified bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new PutObjectCommand(uploadParams));
    console.log("Success", data);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};
run();
More details can be found in the AWS JavaScript V3 DEV Guide.

Upload image to cloud storage from firebase cloud functions

I am trying hard to upload an image from Cloud Functions.
I am sending an image from the web to the Cloud Function using onRequest, as a base64 string plus the fileName. I have been following different tutorials on the internet and couldn't seem to solve my problem.
Here is my code. I think I am doing something wrong with the service account JSON: although I generated the JSON file and used it, it still didn't work.
I get the error The caller does not have permission at Gaxios._request when I don't use the service account JSON.
And when I do use serviceAccount.json, I get the error The "path" argument must be of type string. Received an instance of Object, which comes from file.createWriteStream(), I think.
Anyway, here is the code. Can anyone please help me with this?
The projectId that I am using is shown in the picture below.
const functions = require("firebase-functions");
const admin = require("firebase-admin");
const projectId = functions.config().apikeys.projectid; // In the picture below
const stream = require("stream");
const cors = require("cors")({ origin: true });
const { Storage } = require("@google-cloud/storage");

// Enable Storage
const storage = new Storage({
  projectId: projectId, // I did use the serviceAccount json here but that wasn't working
});

// With serviceAccount.json code
// const storage = new Storage({
//   projectId: projectId,
//   keyFilename: serviceAccount,
// });
// This is giving the error of: The "path" argument must be of type string. Received an instance of Object

exports.storeUserProfileImage = functions.https.onRequest((req, res) => {
  cors(req, res, async () => {
    try {
      const bucket = storage.bucket(`gs://${projectId}.appspot.com`);
      let pictureURL;
      const image = req.body.image;
      const userId = req.body.userId;
      const fileName = req.body.fileName;
      const mimeType = image.match(
        /data:([a-zA-Z0-9]+\/[a-zA-Z0-9-.+]+).*,.*/
      )[1];
      // trim off the part of the payload that is not part of the base64 string
      const base64EncodedImageString = image.replace(
        /^data:image\/\w+;base64,/,
        ""
      );
      const imageBuffer = Buffer.from(base64EncodedImageString, "base64");
      const bufferStream = new stream.PassThrough();
      bufferStream.end(imageBuffer);
      // Define file and fileName
      const file = bucket.file("images/" + fileName);
      bufferStream
        .pipe(
          file.createWriteStream({
            metadata: {
              contentType: mimeType,
            },
            public: true,
            validation: "md5",
          })
        )
        .on("error", function (err) {
          console.log("error from image upload", err.message);
        })
        .on("finish", function () {
          // The file upload is complete.
          console.log("Image uploaded");
          file
            .getSignedUrl({
              action: "read",
              expires: "03-09-2491",
            })
            .then((signedUrls) => {
              // signedUrls[0] contains the file's public URL
              console.log("Signed urls", signedUrls[0]);
              pictureURL = signedUrls[0];
            });
        });
      console.log("image url", pictureURL);
      res.status(200).send(pictureURL);
    } catch (e) {
      console.log(e);
      return { success: false, error: e };
    }
  });
});
const storage = new Storage({
  projectId: projectId,
  keyFilename: "" // <-- Path to a .json, .pem, or .p12 key file
});
keyFilename accepts the path to the file where your service account credentials are stored.
folder
|- index.js
|- credentials
   |- serviceAccountKey.json
If your directory structure looks like the above, then the path should be like this:
const storage = new Storage({
  projectId: projectId,
  keyFilename: "./credentials/serviceAccountKey.json"
});
Do note that if you are using Cloud Functions, the SDK will use Application Default Credentials, so you don't have to pass those params. Simply initialize as shown below:
const storage = new Storage()
So first of all, I didn't pass any service account because I am using Firebase Cloud Functions, as @Dharmaraj said in his answer.
Secondly, this was a permission problem in Google Cloud Platform, which can be solved with the following steps:
Go to your project's Cloud Console (https://console.cloud.google.com/) > IAM & admin > IAM, find the App Engine default service account, then click the pencil at the far left > click Add role > in the filter field enter Service Account Token Creator, click it, save, and you are good to go.
Found this solution from here
https://github.com/firebase/functions-samples/issues/782
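Separately, note that the original snippet calls res.status(200).send(pictureURL) before the write stream emits finish, so pictureURL is still undefined when the response is sent. A minimal sketch (reusing the names from the question's code) of responding only after the upload completes:

// Sketch: send the response from the 'finish' handler, after the
// signed URL has actually been resolved.
bufferStream
  .pipe(file.createWriteStream({ metadata: { contentType: mimeType } }))
  .on("error", (err) => res.status(500).send(err.message))
  .on("finish", async () => {
    const [signedUrl] = await file.getSignedUrl({
      action: "read",
      expires: "03-09-2491",
    });
    res.status(200).send(signedUrl);
  });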

Sometimes data is not getting written in the AWS S3 bucket by Lambda

I'm facing a frustrating issue with my Lambda code, written in TypeScript, which creates an S3 bucket and writes an array of JSON data into it. The Lambda is triggered by messages arriving in an SQS queue; sometimes many messages arrive all of a sudden.
I suspect that when there is just 1 message my Lambda works fine, first creating the S3 bucket and then writing the array of JSON into it. However, when the messages grow, say 10 messages at a time, the Lambda only creates the bucket and fails to write the contents into it; as a result I just get an empty JSON in it, like {}.
I am not sure whether it is due to the number of messages, since all messages do the same task (creating the same bucket if it does not already exist and writing similar contents into it), or whether it has something to do with the CacheControl property described below.
Below is my code snippet:
exports.createBucketAndUploadToS3 = async (s3Client, bucket, prefix, contents) => {
  const params = {
    Bucket: bucket,
    Key: `${prefix}/data.json`,
    Body: JSON.stringify(contents),
    ContentType: 'application/json; charset=utf-8',
    CacheControl: 'max-age=60'
  };
  await s3Client.createBucket({ Bucket: bucket }).promise();
  return await s3Client.putObject(params).promise();
};
I would propose checking whether the bucket exists before calling the createBucket command; trying to create an already-existing bucket often raises an exception. You can do this using the following code:
const checkBucketExists = async bucket => {
  const s3 = new AWS.S3();
  const options = {
    Bucket: bucket,
  };
  try {
    await s3.headBucket(options).promise();
    return true;
  } catch (error) {
    if (error.statusCode === 404) {
      return false;
    }
    throw error;
  }
};

// in your code: only create the bucket when it does not exist yet
let isBucketExisting = await checkBucketExists(bucket);
if (!isBucketExisting) {
  await s3Client.createBucket({ Bucket: bucket }).promise();
}
return await s3Client.putObject(params).promise();

How to upload a file into Firebase Storage from a callable https cloud function

I have been trying to upload a file to Firebase Storage using a callable Firebase Cloud Function.
All I am doing is fetching an image from a URL using axios and trying to upload it to Storage.
The problem I am facing is that I don't know how to save the response from axios and upload it to Storage.
First, how do I save the received file in the temp directory that os.tmpdir() creates? Then, how do I upload it into Storage?
Here I am receiving the data as an arraybuffer, converting it to a Blob, and trying to upload that.
Here is my code. I think I am missing a major part.
If there is a better way, please recommend it. I've been looking through a lot of documentation and ended up with no clear solution. Please guide me. Thanks in advance.
const bucket = admin.storage().bucket();
const path = require('path');
const os = require('os');
const fs = require('fs');

module.exports = functions.https.onCall((data, context) => {
  try {
    return new Promise((resolve, reject) => {
      const {
        imageFiles,
        companyPIN,
        projectId
      } = data;
      const filename = imageFiles[0].replace(/^.*[\\\/]/, '');
      const filePath = `ProjectPlans/${companyPIN}/${projectId}/images/${filename}`; // Path I am trying to upload to in Firebase Storage
      const tempFilePath = path.join(os.tmpdir(), filename);
      const metadata = {
        contentType: 'application/image'
      };
      axios
        .get(imageFiles[0], { // URL for the image
          responseType: 'arraybuffer',
          headers: {
            accept: 'application/image'
          }
        })
        .then(response => {
          console.log(response);
          const blobObj = new Blob([response.data], {
            type: 'application/image'
          });
          return blobObj;
        })
        .then(async blobObj => {
          return bucket.upload(blobObj, {
            destination: tempFilePath // Here I am wrong... how to set the path of the downloaded blob file?
          });
        }).then(buffer => {
          resolve({ result: 'success' });
        })
        .catch(ex => {
          console.error(ex);
        });
    });
  } catch (error) {
    // unknown: 500 Internal Server Error
    throw new functions.https.HttpsError('unknown', 'Unknown error occurred. Contact the administrator.');
  }
});
I'd take a slightly different approach and avoid using the local filesystem at all: it's just tmpfs and will cost you memory that your function is already using to hold the buffer/blob, so it's simpler to skip it and write directly from that buffer to GCS using the save method on the GCS file object.
Here's an example. I've simplified out a lot of your setup, and I am using an HTTP function instead of a callable. Likewise, I'm using a public Stack Overflow image and not your original URLs. In any case, you should be able to use this template and modify it back to what you need (e.g. change the prototype, and remove the HTTP response and replace it with the return value you need):
const functions = require('firebase-functions');
const axios = require('axios');
const admin = require('firebase-admin');
admin.initializeApp();

exports.doIt = functions.https.onRequest((request, response) => {
  const bucket = admin.storage().bucket();
  const IMAGE_URL = 'https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.svg';
  const MIME_TYPE = 'image/svg+xml';
  return axios.get(IMAGE_URL, { // URL for the image
    responseType: 'arraybuffer',
    headers: {
      accept: MIME_TYPE
    }
  }).then(response => {
    console.log(response); // only to show we got the data, for debugging
    const destinationFile = bucket.file('my-stackoverflow-logo.svg');
    return destinationFile.save(response.data).then(() => { // note: defaults to resumable upload
      return destinationFile.setMetadata({ contentType: MIME_TYPE });
    });
  }).then(() => { response.send('ok'); })
    .catch((err) => { console.log(err); });
});
As a commenter noted, in the above example the axios request itself makes an external network access, so you will need to be on the Blaze or Flame plan for that. However, that alone doesn't appear to be your current problem.
Likewise, this also defaults to using a resumable upload, which the documentation does not recommend when you are doing large numbers of small (<10MB) files, as there is some overhead.
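For that case, a sketch of the non-resumable variant (resumable: false as an option to file.save is an assumption to verify against your library version):

// Sketch: skip the resumable-upload protocol for many small files.
return destinationFile.save(response.data, { resumable: false }).then(() => {
  return destinationFile.setMetadata({ contentType: MIME_TYPE });
});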
You asked how this might be used to download multiple files. Here is one approach. First, let's assume you have a function that returns a promise that downloads a single file given its filename (I've abridged this from the above, but it's basically identical except for the change of IMAGE_URL to filename; note that it does not return a final result such as response.send(), and there's an implicit assumption that all the files share the same MIME_TYPE):
function downloadOneFile(filename) {
  const bucket = admin.storage().bucket();
  const MIME_TYPE = 'image/svg+xml';
  return axios.get(filename, ...)
    .then(response => {
      const destinationFile = ...
    });
}
Then, you just need to iteratively build a promise chain from the list of files. Let's say they are in imageUrls. Once built, return the entire chain:
let finalPromise = Promise.resolve();
imageUrls.forEach((item) => { finalPromise = finalPromise.then(() => downloadOneFile(item)); });
// if needed, add a final .then() section for the actual function result
return finalPromise.catch((err) => { console.log(err) });
Note that you could also build an array of the promises and pass them to Promise.all(); that would likely be faster since you would get some parallelism, but I wouldn't recommend it unless you are very sure all of the data will fit inside your function's memory at once. Even with this approach, you need to make sure the downloads can all complete within your function's timeout.
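For completeness, a minimal sketch of that Promise.all() variant, using the same downloadOneFile helper and imageUrls list:

// Sketch: kick off all downloads in parallel and wait for all of them;
// rejects as soon as any single download fails.
return Promise.all(imageUrls.map((item) => downloadOneFile(item)))
  .catch((err) => { console.log(err); });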
