I am using AWS Lambda service to create a serverless function (triggered by API Gateway) that has to do the following:
Get the data from the DynamoDB table
Create a .docx based on that data
Return a .docx document (so that it automatically downloads when the function is triggered).
I managed to accomplish the first two tasks, but no matter what I do, the function returns a base64 string instead of a document. When I check the Network tab, the response always has content-type: application/json, despite the fact that I specify the headers in the return value of my Lambda function. Is there something I need to configure in API Gateway to make it work, or is there an issue with my code?
Update: the headers now come through and the document download is triggered successfully. But when I try to open it, I get an error: "Word found unreadable content." I opened the file in a text editor and its content is the base64 string rather than the document I am generating. What could be causing this issue?
const AWS = require('aws-sdk');
const { encode } = require("js-base64");
const { Document, Packer, Paragraph, TextRun } = require("docx");
const dynamoDb = new AWS.DynamoDB.DocumentClient();
exports.handler = async (event) => {
const params = {
TableName: 'table-name',
Key: {
'id': 'item-key',
},
};
const dynamoDbResult = await dynamoDb.get(params).promise();
const data = dynamoDbResult.Item;
const doc = new Document({
sections: [
{
properties: {},
children: [
new Paragraph({
children: [
new TextRun(data.projectName),
new TextRun({
text: data.clientEmail1,
bold: true
}),
]
})
]
}
]
});
const buffer = await Packer.toBuffer(doc);
return {
statusCode: 200,
headers: {
'Content-Type': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
'Content-Disposition': 'attachment; filename="your_file_name.docx"'
},
body: encode(buffer),
isBase64Encoded: true
};
}
I'm using the Google API and I want to upload files from the UI to my Google Drive.
As I found in the Google Drive API documentation here, I want to use a simple upload.
For the moment I have the following code for my onChange input event.
const onLoadFile = async (e: { target: { files: any } }) => {
const fileData = e.target.files[0];
//gapi request
uploadFile(body);
return null;
};
uploadFile:
const uploadFile = async (body: string) => {
const result = await gapiRequest({
path: `${ENDPOINT}/upload/drive/v3/files`,
method: 'POST',
body,
});
setUploadFileData(result);
};
gapiRequest:
const gapiRequest = async (options: gapi.client.RequestOptions): Promise<any> =>
new Promise<any>((resolve, reject) =>
gapi.client.request(options).execute((res) => {
resolve(res);
if (!res) {
reject(res);
}
})
);
I need to know what request body to create for such a request.
The request body should consist of a form that contains both metadata and the file, like so:
const metadata = {
"name": "yourFilename",
"mimeType": "text/plain", // whatever is appropriate in your case
"parents": ["folder id or 'root'"], // Google Drive folder id
};
const form = new FormData();
form.append('metadata', new Blob([JSON.stringify(metadata)], { type: 'application/json' }));
form.append('file', file); // file could be a blob or similar
You might also need to add an uploadType parameter to your path property. The multipart value works even for simple uploads.
See also here: https://stackoverflow.com/a/68595887/7821823
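For completeness, here is a minimal sketch of sending that form from the browser with fetch, assuming you already hold an OAuth 2.0 access token with a suitable Drive scope (the accessToken variable is a placeholder; adapt it to however gapi provides your token):
// Sketch: multipart upload to Drive with fetch. `accessToken` is a hypothetical
// variable holding a valid OAuth 2.0 token.
const uploadToDrive = async (file, accessToken) => {
  const metadata = {
    name: file.name,
    mimeType: file.type || 'application/octet-stream', // whatever fits your file
    parents: ['root'], // or a Google Drive folder id
  };

  const form = new FormData();
  form.append('metadata', new Blob([JSON.stringify(metadata)], { type: 'application/json' }));
  form.append('file', file);

  // uploadType=multipart sends metadata and file content in a single request;
  // the browser sets the multipart boundary on the Content-Type itself.
  const res = await fetch('https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart', {
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}` },
    body: form,
  });
  return res.json();
};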
I am using call recording on Amazon Connect.
I am trying to get the Amazon Connect contact attributes by using the metadata of the .wav file on S3 where the conversation was recorded.
This is my Lambda function.
Object.defineProperty(exports, "__esModule", { value: true });
const AWS = require("aws-sdk");
const connect = new AWS.Connect();
const s3 = new AWS.S3();
exports.handler = async (event, context) => {
await Promise.all(event.Records.map(async (record) => {
const bucket = record.s3.bucket.name;
const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
var params = {
Bucket: bucket,
Key: key,
};
const metadata = await s3.headObject(params).promise();
console.log(metadata);
const contactid = metadata.Metadata['contact-id'];
const instanceid = metadata.Metadata['organization-id'];
var params = {
InitialContactId: contactid,
InstanceId: instanceid,
};
console.log(params);
const connectdata = await connect.getContactAttributes(params).promise();
console.log(connectdata);
}));
};
This is the JSON metadata of the .wav file (I have hidden my personal information).
{
AcceptRanges: 'bytes',
LastModified: 2021-09-01TXX:XX:XX.000Z,
ContentLength: 809644,
ETag: '"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"',
ContentType: 'audio/wav',
ServerSideEncryption: 'aws:kms',
Metadata: {
'contact-id': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
'aws-account-id': 'XXXXXXXXXXXX',
'organization-id': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
},
SSEKMSKeyId: 'arn:aws:kms:ap-northeast-1:XXXXXXXXXXXX:key/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
}
However, when I call Connect's getContactAttributes method via the aws-sdk, the returned attributes are empty, even though the parameter values are definitely set.
console.log(params)
{
InitialContactId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
InstanceId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
}
console.log(connectdata)
{ Attributes: {} }
I want to know what { Attributes: {} } means.
Is there something wrong with the arguments to the getContactAttributes method, or with how I am logging the output?
Or is it simply not possible to get the contact attributes from the metadata of the .wav file in the first place?
I am a beginner, so there may be many mistakes, but I would appreciate any advice.
Thanks.
I solved this problem myself.
The connect.getContactAttributes method only returns the Attributes set in the contact flow; I had misunderstood it to return the JSON sent from the contact flow itself.
The Attributes values are populated by setting key-value pairs in the "Set contact attributes" block of the Amazon Connect contact flow.
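As a sanity check, here is a hedged sketch of what the call returns once the flow has set an attribute; the customerName key is purely hypothetical and stands in for whatever key-value pair your "Set contact attributes" block defines.
// Hypothetical example, placed inside the async handler above once contactid
// and instanceid have been read from the S3 metadata. If the contact flow's
// "Set contact attributes" block sets customerName = "Jane Doe", you get:
const connectdata = await connect.getContactAttributes({
  InitialContactId: contactid,
  InstanceId: instanceid,
}).promise();
console.log(connectdata);
// => { Attributes: { customerName: 'Jane Doe' } }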
I am trying to consume an external API with Firebase Functions, but it gives a timeout error when I use OPTIONS; when I use GET it works normally.
I don't want to use const request = require('request'); or var rp = require('request-promise');, as they are deprecated.
What could be wrong? I would appreciate any help.
const express = require('express');
const cors = require('cors');
const app = express();
// Automatically allow cross-origin requests
app.use(cors({
origin: true
}));
//app.use(cors());
app.get('/criarcliente', (req, res) => {
let uri = "https://api.iugu.com/v1/customers?api_token=<mytoken>";
let headers = {
'Content-Type': 'application/json'
}
let body = {
custom_variables: [{
name: 'fantasia',
value: 'Dolci Technology'
}, {
name: 'vendedor',
value: ''
}],
email: 'teste1#teste1.com',
name: 'John Dolci',
phone: 9999999,
phone_prefix: 66,
cpf_cnpj: '00000000000',
cc_emails: 'test#test.com',
zip_code: '78520000',
number: '49',
street: 'Name Street',
city: 'Guarantã do Norte',
state: 'MT',
district: 'Jardim Araguaia'
}
var options = {
method: 'POST',
uri: uri,
body: body,
headers: headers,
json: true
};
const https = require('https');
var req = https.request(options, (resp) => {
let data = '';
resp.on('data', (chunk) => {
data += chunk;
});
resp.on('end', () => {
var result = JSON.parse(data);
res.send(result);
});
console.log("aqui");
}).on("error", (err) => {
console.log("Error: " + err.message);
});
});
There are two points I would raise for your case. First, you need to confirm that your project is on the Blaze billing plan. As clarified in the official pricing documentation, if it is not and you try to reach a non-Google-owned service, the outbound request is not allowed and this will cause an error.
The second point is to use the axios library in your application. It is probably the most widely used HTTP client with Node.js for reaching third-party services from Cloud Functions, and it is pretty easy to use as well. You should find a complete example of using it with a third-party API here.
To summarize, check your billing plan, as that is the most common cause of this issue, and give axios a try to confirm that it avoids the error.
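For illustration, a minimal sketch of the same route rewritten with axios, assuming the project is on the Blaze plan; the token placeholder and sample fields are carried over from the question rather than real values:
const axios = require('axios');

app.get('/criarcliente', async (req, res) => {
  try {
    const uri = 'https://api.iugu.com/v1/customers?api_token=<mytoken>';
    const body = {
      email: 'teste1#teste1.com',
      name: 'John Dolci',
      // ...remaining customer fields from the original request body
    };
    // axios serializes the body as JSON and parses the JSON response.
    const result = await axios.post(uri, body, {
      headers: { 'Content-Type': 'application/json' },
    });
    res.send(result.data);
  } catch (err) {
    console.log('Error: ' + err.message);
    res.status(500).send({ error: err.message });
  }
});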
The function simply inserts a new item into a DynamoDB table. I can verify it works by clicking the "Test" button in the Lambda function tab (it responds with a 200), but it returns an error when I attach it to an API Gateway POST request and use the "Test" button of the API Gateway method test tab.
These are the errors:
Response Body
{
"message": "Internal server error"
}
Response Headers
{"x-amzn-ErrorType":"InternalServerErrorException"}
Logs
Lambda execution failed with status 200 due to customer function error: One or more parameter values were invalid: Missing the key site in the item.
Here's the code of the Lambda function:
function response( message) {
return message
}
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'eu-west-2'});
exports.handler = function(event, context) {
let scanningParameters = {
Item: {
"site":event.site
},
TableName: 'Galleries'
}
return docClient
.put(scanningParameters)
.promise()
.then(() => {
return {
"statusCode": 200,
'headers': { 'Content-Type': 'application/json' }
}
})
}
However, I don't get why it is asking for a key, since from the Lambda tab it does insert the item into the table correctly. Here is another function which does work through API Gateway too, and the schema is absolutely the same:
function response( message) {
return message
}
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'eu-west-2'});
exports.handler = function(event, context) {
let scanningParameters = {
Item: {
"email":event.email
},
TableName: 'Users'
}
return docClient
.put(scanningParameters)
.promise()
.then(() => {
return {
"statusCode": 200,
'headers': { 'Content-Type': 'application/json' }
}
})
}
Edit: I just solved it by unchecking "Use Lambda Proxy integration" in the Integration Request tab.
Unchecking the "Use Lambda Proxy integration" check box fixed the issue. With proxy integration enabled, API Gateway passes the whole request to Lambda as a proxy event, with the JSON payload as a string in event.body, so event.site is undefined; with a non-proxy integration the mapped request body becomes the event itself, which is what this handler expects.
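For reference, if you ever want to keep proxy integration enabled instead, a rough sketch of how the handler would need to change; the {"site": ...} payload shape is an assumption based on the error message:
// Sketch for the proxy-integration case (aws-sdk v2, same docClient as above).
// The client is assumed to POST JSON such as {"site": "example.com"}.
exports.handler = async (event) => {
  const payload = JSON.parse(event.body || '{}');
  await docClient.put({
    TableName: 'Galleries',
    Item: { site: payload.site },
  }).promise();
  // In proxy mode the response must be a proxy-shaped object with a string body.
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ok: true }),
  };
};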
I'm trying a multipart S3 upload from the browser with the JS SDK. I have no trouble with createMultipartUpload, but I get no data back from uploadPart. I can't call completeMultipartUpload because I don't get any ETags back. I only get the $response part of the object, which indicates a 200 status and that all the parameters I passed were defined and of the proper data types. I can't see any of the parts in my bucket, although I don't know if they're going to a special "parts" place that I can't access.
Here's my code:
const createParams = {
Bucket,
Key: `${uuid()}.${getExtension(file.type)}`,
ContentType: file.type,
ContentDisposition: 'attachment'
}
return s3.createMultipartUpload(createParams).promise()
.then(result => {
console.log(result);
console.log('chunking...')
let chunkArr = chunker(file);
let chunkMap = Promise.map(chunkArr, (chunk, index) => {
const chunkParams = {
Body: chunk,
Bucket: result.Bucket,
Key: result.Key,
PartNumber: index + 1,
UploadId: result.UploadId,
ContentLength: chunk.size
}
console.log(chunkParams)
return s3.uploadPart(chunkParams).promise();
});
return Promise.all(chunkMap);
})
.then(result => {
console.log(result);
return Promise.resolve(true)
// let stopParams = {
//
// }
// return s3.completeMultipartUpload(stopParams).promise();
})
.catch(err => {
throw err;
return Promise.reject(err);
});
s3 instance looks like this:
import AWS from 'aws-sdk';
AWS.config.setPromisesDependency(Promise);
const s3 = new AWS.S3({
apiVersion: '2006-03-01',
accessKeyId: credentials.credentials.AccessKeyId,
secretAccessKey: credentials.credentials.SecretAccessKey,
sessionToken: credentials.credentials.SessionToken,
sslEnabled: true,
s3ForcePathStyle: true,
httpOptions: {
xhrAsync: true,
xhrWithCredentials: true
}
})
chunker function looks like this:
const chunkFile = (file) => {
console.log(typeof(file));
const fileSize = file.size;
const chunkSize = 5242881; // bytes
let offset = 0;
let chunkArr = [];
const chunkReaderBlock = (_offset, _file) => {
console.log(_offset);
if (_offset >= fileSize) {
console.log("Done reading file");
return chunkArr;
}
let blob = _file.slice(_offset, chunkSize + _offset);
console.log(blob);
console.log(typeof(blob));
chunkArr.push(blob);
return chunkReaderBlock(chunkSize + _offset, _file);
}
return chunkReaderBlock(offset, file);
}
The response object I'm getting back looks like this:
(2)[{…}, {…}]
0: {
$response: Response
}
1: $response: Response
cfId: undefined
data: {
$response: Response
}
error: null
extendedRequestId: undefined
httpResponse: HttpResponse
body: Uint8Array[]
headers: {}
statusCode: 200
statusMessage: "OK"
stream: EventEmitter {
_events: {…},
_maxListeners: undefined,
statusCode: 200,
headers: {…}
}
streaming: false
_abortCallback: ƒ callNextListener(err) __proto__: Object
maxRedirects: 10
maxRetries: 3
redirectCount: 0
request: Request {
domain: undefined,
service: f… s.constructor,
operation: "uploadPart",
params: {…},
httpRequest: HttpRequest,
…
}
retryCount: 0
__proto__: Object
__proto__: Object
length: 2
__proto__: Array(0)
Any ideas? This is in React and my test file is 9.xx MB. I also tried with callbacks, and uploading one part at a time, and got the same thing.
In a cross-origin context, you'd need this in your bucket's CORS configuration:
<ExposeHeader>ETag</ExposeHeader>
"ExposeHeader — Identifies the response headers ... that customers will be able to access from their applications (for example, from a JavaScript XMLHttpRequest object)."
https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
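If you manage the bucket from code rather than the console, here is a hedged sketch of setting an equivalent rule with the SDK's putBucketCors; the origin and method list are placeholders, and it assumes credentials that are allowed to modify the bucket configuration (usually you would set this once in the console or your infrastructure templates instead).
// Sketch: apply a CORS rule that exposes ETag, using the same aws-sdk v2 client.
const exposeEtag = () =>
  s3.putBucketCors({
    Bucket, // the same Bucket variable used for createMultipartUpload
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ['https://example.com'], // placeholder: your app's origin
          AllowedMethods: ['GET', 'PUT', 'POST'],
          AllowedHeaders: ['*'],
          ExposeHeaders: ['ETag'],
        },
      ],
    },
  }).promise();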
To clarify what's going on here, CORS isn't an access restriction mechanism -- it's a mechanism for giving the browser permission to do something that it otherwise assumes the user might not want to happen. It tells the browser to give JavaScript permission to do and see things that would not otherwise be allowed.
From the Mozilla CORS documentation:
By default, only the 6 simple response headers are exposed:
Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, Pragma
If you want clients to be able to access other headers, you have to list them using the Access-Control-Expose-Headers header.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers
In S3, the way you set the Access-Control-Expose-Headers response header is by configuring <ExposeHeader> (above). Otherwise, JavaScript can't see those headers.
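Once the ETag header is exposed, each uploadPart result will include an ETag, which is what completeMultipartUpload needs. A rough sketch, reusing result, chunkArr, and s3 from the question and assuming it runs inside an async context:
// Upload every chunk and keep { ETag, PartNumber } for each one.
const parts = await Promise.all(
  chunkArr.map((chunk, index) =>
    s3.uploadPart({
      Body: chunk,
      Bucket: result.Bucket,
      Key: result.Key,
      PartNumber: index + 1,
      UploadId: result.UploadId,
    }).promise()
      .then(({ ETag }) => ({ ETag, PartNumber: index + 1 }))
  )
);

// Then stitch the parts together into the final object.
await s3.completeMultipartUpload({
  Bucket: result.Bucket,
  Key: result.Key,
  UploadId: result.UploadId,
  MultipartUpload: { Parts: parts },
}).promise();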
I can't see any of the parts in my bucket, although I don't know if they're going to a special "parts" place that I can't access.
They are. Use listMultipartUploads to find abandoned uploads, and abortMultipartUpload to delete the partial uploads and free the storage allocated for the parts you uploaded. Otherwise, uploads you don't complete will linger indefinitely and you'll be billed for storage of the parts. You can also create a bucket lifecycle policy to dispose of incomplete uploads automatically after a set number of days -- almost always a good idea.
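A hedged sketch of that cleanup, assuming the same s3 client and Bucket as above and credentials that allow listing and aborting uploads:
// List in-progress (abandoned) multipart uploads and abort each one,
// which frees the storage held by their parts.
const cleanUpAbandonedUploads = async () => {
  const { Uploads = [] } = await s3.listMultipartUploads({ Bucket }).promise();
  for (const upload of Uploads) {
    await s3.abortMultipartUpload({
      Bucket,
      Key: upload.Key,
      UploadId: upload.UploadId,
    }).promise();
  }
};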