Node.js binary to PDF - javascript

I have an Express server which creates a PDF file.
I am trying to send this file to the client:
const fs = require('fs');

function download(req, res) {
  var filePath = '/../../myPdf.pdf';
  fs.readFile(__dirname + filePath, function (err, data) {
    if (err) throw new Error(err);
    console.log('yeyy, no errors :)');
    if (!data) throw new Error('Expected data, but got', data);
    console.log('got data', data);
    res.contentType('application/pdf');
    res.send(data);
  });
}
On the client I want to download it:
_handleDownloadAll = async () => {
  console.log('handle download all');
  const response = await request.get(
    `http://localhost:3000/download?accessToken=${localStorage.getItem(
      'accessToken'
    )}`
  );
  console.log(response);
};
I receive a body.text like
%PDF-1.4↵1 0 obj↵<<↵/Title (��)↵/Creator (��)↵/Producer (��Qt 5.5.1)↵
but I can't get the browser to download it.
How can I create a PDF from the data, or download it directly from the server?
I've got it working. The answer was pretty simple: I just let the browser handle the download with an HTML anchor tag.
server:
function download(req, res) {
  const { creditor } = req.query;
  const filePath = `/../../${creditor}.pdf`;
  res.download(__dirname + filePath);
}
client:
<a href={`${BASE_URL}?accessToken=${accessToken}&creditor=${creditorId}`} download>Download</a>

The result is the binary content as a string. Use base64 to decode it from the string back into a PDF:
const fs = require('fs');

const buffer = Buffer.from(result['textBinary'], 'base64');
fs.writeFileSync('/path/to/my/file.pdf', buffer);

You can prompt the browser to download the file by setting the correct Content-Disposition header:
res.setHeader('Content-disposition', 'attachment; filename=myfile.pdf');
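For instance, a minimal Express route sketch (the path and file name are assumptions carried over from the question) that sets the headers and then streams the PDF to the response:

const fs = require('fs');
const path = require('path');

function download(req, res) {
  // Resolve the PDF relative to this module; adjust the path for your layout.
  const filePath = path.join(__dirname, '../../myPdf.pdf');
  res.setHeader('Content-Type', 'application/pdf');
  res.setHeader('Content-Disposition', 'attachment; filename=myPdf.pdf');
  // Stream the file instead of buffering it in memory.
  fs.createReadStream(filePath).pipe(res);
}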

readFile returns a Buffer, which is a wrapper around raw bytes. You're sending that Buffer back to the client, which is logging it to the console, so the body.text you see is expected.
You will need to write those bytes to a file using fs.writeFile or similar. Here's an example:
_handleDownloadAll = async () => {
  console.log('handle download all');
  const response = await request.get(
    `http://localhost:3000/download?accessToken=${localStorage.getItem(
      'accessToken'
    )}`
  );
  // load your response data into a Buffer
  let buffer = Buffer.from(response.body.text);
  // open the file in writing mode
  fs.open('/path/to/my/file.pdf', 'w', function (err, fd) {
    if (err) {
      throw 'could not open file: ' + err;
    }
    // write the contents of the buffer
    fs.write(fd, buffer, 0, buffer.length, null, function (err) {
      if (err) {
        throw 'error writing file: ' + err;
      }
      fs.close(fd, function () {
        console.log('file written successfully');
      });
    });
  });
};
You may need to experiment with the buffer encoding; it defaults to utf8.
The other option you may want to consider is generating the PDF on the server and simply sending the client a link from which it can be downloaded, as sketched below.
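A rough sketch of that approach (the route, folder, and file names are my own placeholders, and it assumes something like express.static('public') serves the folder): generate the PDF into a statically served directory and return only its URL:

app.get('/generate', (req, res) => {
  const fileName = 'myPdf.pdf';
  // ...generate the PDF into ./public/pdfs/myPdf.pdf here...
  res.json({ url: `/pdfs/${fileName}` });
});

The client can then point a plain <a href ... download> at that URL, exactly as in the accepted answer above.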

Related

File content missing when I download from S3

I am using the Node.js AWS SDK for S3-related methods. I have a method to download a file from an S3 bucket.
I am downloading the file using the code below.
const downloadFileBase64 = async (payload) => {
  let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  try {
    const response = await s3
      .getObject(params, (err) => {
        if (err) {
          return err;
        }
      })
      .promise();
    return {
      data: response.Body.toString('base64'),
      fileName: payload.fileName
    };
  } catch (error) {
    return Boom.badRequest(error.message);
  }
};
Once I get the base64 content I send it over email using SendGrid.
Issue: when I download small files everything works fine, but when I download large files, parts of the content are missing across multiple pages. I copy-pasted the base64 into a few online converters and downloaded the file from there, and those downloads have the same problem, so I concluded the issue is in the response returned from S3 itself. When I check the file in the S3 folder, it displays correctly.
(The screenshot, not reproduced here, showed the PDF with a random grey background on a few pages and some text missing.)
I tried another method that just downloads the Buffer, without the base64 conversion, as shown below.
const downloadFileBuffer = async (payload) => {
  let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  try {
    const response = await s3
      .getObject(params, (err) => {
        if (err) {
          return err;
        }
      })
      .promise();
    return {
      data: response.Body,
      fileName: payload.fileName
    };
  } catch (error) {
    return Boom.badRequest(error.message);
  }
};
Once I get the file content in that response, I store it temporarily in a folder on the server, then read it again and send it over email. But I still have the same issue.
const fileContent = await docs.downloadFileBuffer({ payload: req.payload.action.dire });
await fs.writeFileSync(`${temp}testinggg.pdf`, fileContent?.data);
const fileData = await fs.readFileSync(`${temp}testinggg.pdf`, { encoding: 'base64' });
Any help on this issue is really appreciated.
After days of research and trying different approaches, I found the issue. It was the .promise() used in s3.getObject(params, (err) => {}).promise();. Instead of that, I wrapped the callback in a Promise as shown below. Now the file shows the full content without any missing data.
const downloadFileBuffer = async (payload) => {
  let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  try {
    return new Promise((resolve, reject) => {
      s3.getObject(params, (err, response) => {
        if (err) {
          return reject(err); // return so we don't fall through to resolve
        }
        resolve({
          data: response.Body,
          fileName: payload.fileName
        });
      });
    });
  } catch (error) {
    return Boom.badRequest(error.message);
  }
};
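For completeness, a small usage sketch (the payload values are placeholders): the resolved object carries the raw Buffer in data, which can be written straight to disk:

// Inside an async function; folder and file names are placeholders.
const { data, fileName } = await downloadFileBuffer({ folderName: 'invoices', fileName: 'report.pdf' });
fs.writeFileSync(`/tmp/${fileName}`, data); // data is the untouched Buffer from response.Body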

Upload large JavaScript object as JSON file on AWS S3 (efficient way)

Hi, I have a JSON response that is probably 150-200 MB in size. Because of its size, I want to save it on AWS S3 as a JSON file instead of returning it to the client.
Below is the code I am currently using.
async function uploadFileOnS3(fileData, s3Detail) {
  const params = {
    Bucket: s3Detail.Bucket,
    Key: s3Detail.Key_response,
    Body: JSON.stringify(fileData), // big fat js object
  };
  try {
    const stored = await S3.upload(params).promise();
    console.log("file uploaded Sucessfully ", stored);
  } catch (err) {
    console.log(err);
  }
  console.log("upload exit");
}
I'm concerned about the JSON.stringify(fileData) operation. Assuming this function will be part of an AWS Lambda, won't it take huge resources to serialize the object to a string?
Is there any other efficient way to save a JavaScript object as a JSON file in an AWS S3 bucket?
You don't really have to stringify the object. You can pass a stream as the body:
const fs = require("fs"); // needed for createReadStream below

async function uploadFileOnS3(fileData, s3Detail) {
  const params = {
    Bucket: s3Detail.Bucket,
    Key: s3Detail.Key_response,
    Body: fileData, // remove stringify from here
  };
  try {
    const stored = await S3.upload(params).promise();
    console.log("file uploaded successfully ", stored);
  } catch (err) {
    console.log(err);
  }
  console.log("upload exit");
}

exports.handler = async (event) => {
  // We create a stream
  const stream = fs.createReadStream("/tmp/upload.json");
  // Pass the stream to the upload function
  await uploadFileOnS3(stream, {
    Bucket: "bucket_name",
    Key_response: "upload.json"
  });
};
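One gap in the sketch above is where /tmp/upload.json comes from. A hypothetical helper (my assumption, not part of the answer) could persist the in-memory object to /tmp first; note that JSON.stringify still runs once here, so the gain is mainly that S3.upload works from a stream instead of holding the whole string as the request body:

const fs = require("fs");

// Hypothetical helper: write the object to /tmp so it can be re-read as a stream.
function writeTempJson(fileData, path = "/tmp/upload.json") {
  return new Promise((resolve, reject) => {
    const out = fs.createWriteStream(path);
    out.on("error", reject);
    out.on("finish", () => resolve(path));
    out.end(JSON.stringify(fileData)); // serialization still happens once, in memory
  });
}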

How to make HTTP requests from different URLs stored in a text file in node

I'm new to Node. I have a txt file with three different URLs, one on each line. I want to read each line, axios.get that URL, and then write the content of each response to a different txt file and name that file the name of the website.
For example, urls.txt currently has:
https://google.com
https://facebook.com
https://twitter.com
I want to be able to run node app.js urls.txt and have three files created for me with the names google.txt, facebook.txt, and twitter.txt, and inside each file is the response.data for each call.
Right now I have it working, but only if there is a single URL in the txt file. It also writes the response to a file I already created, temp.txt.
app.js
const fs = require('fs');
const process = require('process');
const axios = require('axios');

async function webCat(url) {
  let resp = await axios.get(url);
  fs.writeFile('temp.txt', resp.data, 'utf8', function (err) {
    if (err) {
      console.log(`An error occured. , ${err}`);
      process.exit(1);
    }
  });
}

fs.readFile('urls.txt', 'utf8', function (err, url) {
  if (err) {
    console.error(err);
    process.exit(1);
  }
  webCat(url);
});
How can I change this so that I can handle different URLs?
You only need to split the URLs in the file:
const fs = require('fs');
const process = require('process');
const axios = require('axios');

async function webCat(url) {
  let resp = await axios.get(url);
  const domain = url.split(/[/.]/)[2];
  fs.writeFile(`${domain}.txt`, resp.data, 'utf8', function (err) {
    if (err) {
      console.log(`An error occured. , ${err}`);
      process.exit(1);
    }
  });
}

fs.readFile('urls.txt', 'utf8', function (err, data) {
  if (err) {
    console.error(err);
    process.exit(1);
  }
  const urls = data.split('\n');
  urls.forEach(url => webCat(url));
});
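As a side note, the split(/[/.]/)[2] trick assumes URLs shaped exactly like https://google.com. A variation using the built-in URL class (my own sketch, not part of the answer) is a bit more tolerant of www. prefixes and stray whitespace:

// Derive the output file name from a URL's hostname.
function fileNameFor(rawUrl) {
  const { hostname } = new URL(rawUrl.trim()); // e.g. "www.google.com" or "google.com"
  const parts = hostname.split('.');
  const domain = parts.length > 2 ? parts[parts.length - 2] : parts[0];
  return `${domain}.txt`; // e.g. "google.txt"
}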

AWS S3 file upload with Node.js: Unsupported body payload error

I am trying to get my Node.js backend to upload a file to AWS S3, which it received in a POST request from my frontend. This is what my function looks like:
async function uploadFile(file) {
  var uploadParams = { Bucket: '<bucket-name>', Key: file.name, Body: file };
  s3.upload(uploadParams, function (err, data) {
    if (err) {
      console.log("Error", err);
    }
    if (data) {
      console.log("Upload Success", data.Location);
    }
  });
}
When I try uploading the file this way, I get an Unsupported Body Payload error.
I used fileStream.createReadStream() in the past to upload files saved in a directory on the server, but creating a file stream did not work for me here, since there is no path parameter to pass.
EDIT:
The file object is created in the Angular frontend of my web application. This is the relevant HTML code where the file is uploaded by a user:
<div class="form-group">
<label for="file">Choose File</label>
<input type="file" id="file"(change)="handleFileInput($event.target.files)">
</div>
If the event occurs, the handleFileInput(files: FileList) method in the corresponding component is called:
handleFileInput(files: FileList) {
  // should result in array in case multiple files are uploaded
  this.fileToUpload = files.item(0);
  // actually upload the file
  this.uploadFileToActivity();
  // used to check whether we really received the file
  console.log(this.fileToUpload);
  console.log(typeof this.fileToUpload);
}

uploadFileToActivity() {
  this.fileUploadService.postFile(this.fileToUpload).subscribe(data => {
    // do something, if upload success
  }, error => {
    console.log(error);
  });
}
The postFile(fileToUpload: File) method of the file-upload service is used to make the POST request:
postFile(fileToUpload: File): Observable<Boolean> {
  console.log(fileToUpload.name);
  const endpoint = '/api/fileupload/single';
  const formData: FormData = new FormData();
  formData.append('fileKey', fileToUpload, fileToUpload.name);
  return this.httpClient
    .post(endpoint, formData/*, { headers: yourHeadersConfig }*/)
    .pipe(
      map(() => { return true; }),
      catchError((e) => this.handleError(e)),
    );
}
Here is the server-side code that receives the file and then calls the uploadFile(file) function:
app.post('/api/fileupload/single', async (req, res) => {
  try {
    if (!req.files) {
      res.send({
        status: false,
        message: 'No file uploaded'
      });
    } else {
      let file = req.files.fileKey;
      uploadFile(file);
      // send response
      res.send({
        status: true,
        message: 'File is uploaded',
        data: {
          name: file.name,
          mimetype: file.mimetype,
          size: file.size
        }
      });
    }
  } catch (err) {
    res.status(500).send(err);
  }
});
Thank you very much for your help in solving this!
Best regards, Samuel
The best way is to stream the file. Assuming you are reading it from disk, you could do this:
const fs = require("fs");
const aws = require("aws-sdk");
const s3Client = new aws.S3();
const Bucket = 'somebucket';
const stream = fs.createReadStream("file.pdf");
const Key = stream.path;
const response = await s3Client.upload({Bucket, Key, Body: stream}).promise();
console.log(response);
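The sketch above assumes the file already sits on disk. In the question the file comes from req.files, which suggests express-fileupload; assuming that is the case, the Unsupported Body Payload error usually means the wrapper object was passed instead of its Buffer, so something along these lines may be closer to the original setup:

// Hedged sketch, assuming express-fileupload: pass the Buffer in file.data, not the wrapper object.
async function uploadFile(file) {
  const uploadParams = {
    Bucket: '<bucket-name>',
    Key: file.name,
    Body: file.data,            // Buffer provided by express-fileupload
    ContentType: file.mimetype, // keep the original MIME type
  };
  return s3.upload(uploadParams).promise();
}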

How to return a zip file from a Node.js API and handle it on the client side?

I have an API that creates a zip file using the archiver module. I would like to pass the zip back as a response and download it on the client side.
This is what my API that creates the zip looks like:
reports.get('/xxx/:fileName', async (req, res) => {
  var s3 = new AWS.S3();
  var archiver = require('archiver');
  var filenames = "xxx"
  var str_array = filenames.split(',');
  for (var i = 0; i < str_array.length; i++) {
    var filename = str_array[i].trim();
    localFileName = './temp/' + filename.substring(filename.indexOf("/") + 1);
    file = fs.createWriteStream(localFileName, { flags: 'a', encoding: 'utf-8', mode: 0666 });
    file.on('error', function (e) { console.error(e); });
    s3.getObject({
      Bucket: config.xxx,
      Key: filename
    })
      .on('error', function (err) {
        console.log(err);
      })
      .on('httpData', function (chunk) {
        file.on('open', function () {
          file.write(chunk);
        });
      })
      .on('httpDone', function () {
        file.end();
      })
      .send();
  }
  res.end("Files have been downloaded successfully")
  // create a file to stream archive data to.
  var output = fs.createWriteStream('example.zip');
  var archive = archiver('zip', {
    zlib: { level: 9 } // Sets the compression level.
  });
  // listen for all archive data to be written
  // 'close' event is fired only when a file descriptor is involved
  output.on('close', function () {
    console.log(archive.pointer() + ' total bytes');
    console.log('archiver has been finalized and the output file descriptor has closed.');
  });
  // This event is fired when the data source is drained no matter what was the data source.
  // It is not part of this library but rather from the NodeJS Stream API.
  // @see: https://nodejs.org/api/stream.html#stream_event_end
  output.on('end', function () {
    console.log('Data has been drained');
  });
  // good practice to catch warnings (ie stat failures and other non-blocking errors)
  archive.on('warning', function (err) {
    if (err.code === 'ENOENT') {
      // log warning
    } else {
      // throw error
      throw err;
    }
  });
  // good practice to catch this error explicitly
  archive.on('error', function (err) {
    throw err;
  });
  // pipe archive data to the file
  archive.pipe(output);
  // append files from a sub-directory, putting its contents at the root of archive
  archive.directory('./temp', false);
  // finalize the archive (ie we are done appending files but streams have to finish yet)
  // 'close', 'end' or 'finish' may be fired right after calling this method so register to them beforehand
  archive.finalize();
});
Also for reference here is another one of my APIs to show how I am accustomed to sending data back to the client:
reports.get('/xxx/:fileName', async (req, res) => {
  var s3 = new AWS.S3();
  var params = {
    Bucket: config.reportBucket,
    Key: req.params.fileName,
    Expires: 60 * 5
  };
  try {
    s3.getSignedUrl('getObject', params, function (err, url) {
      if (err) throw err;
      res.json(url);
    });
  } catch (err) {
    res.status(500).send(err.toString());
  }
});
How can I send the zip back as a response and download it on the client side to disk?
Since archive is streaming, I would assume it can be pipe(lined) directly to the response (res):
// Node.js v10+, if res is a proper stream
const { pipeline } = require('stream');
pipeline(archive, res, (err) => {
  if (err) console.error(err); // pipeline requires a completion callback
});
// Alternatively (search for caveats of pipe vs. pipeline)
archive.pipe(res);
You should probably set some HTTP headers on res to tell the browser the MIME type and possibly a filename:
res.set({
  'Content-Type': 'application/zip',
  'Content-Disposition': 'attachment; filename="zip"'
})
Okay, so once you have written your file, example.zip, you can easily follow the example mentioned in another answer and do:
var stat = fileSystem.statSync('example.zip');
res.writeHead(200, {
  'Content-Type': 'application/zip',
  'Content-Length': stat.size
});
var readStream = fileSystem.createReadStream('example.zip');
// We replaced all the event handlers with a simple call to readStream.pipe()
readStream.pipe(res);
This should work perfectly. Credits to OP
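On the client side, a minimal browser sketch (the endpoint path and file name are assumptions): fetch the zip, turn the response into a Blob, and trigger the download through a temporary anchor:

// Browser-side sketch; '/xxx/report' and 'report.zip' are placeholder names.
async function downloadZip() {
  const response = await fetch('/xxx/report');
  const blob = await response.blob();
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'report.zip';
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url);
}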
