I'm trying to upload files to an S3 bucket using TypeScript and the aws-sdk package. The ideal outcome is:
Upload to S3 is successful THEN => send SQS message.
Here is the function that reads the received file and executes the S3 upload:
// on each byte of uploading
file.on("data", function (data) {
  console.log("File [" + fieldname + "] got " + data.length + " bytes");
});

// whenever the upload is finished into this microservice
file.on("end", function () {
  console.log("File [" + fieldname + "] Finished uploading");
});

const s3BucketLink = await saveToS3(filename, file);
// THIS IS THE IMPORTANT PART: I WANT TO CONFIRM THAT THE STATUS IS 200, AND ONLY THEN SEND TO SQS
if (s3BucketLink.status === 200) {
  console.log("its done");
}
console.log("here");
console.log(s3BucketLink);
});
and in my saveToS3 I have this:
interface S3Response {
  status: number;
  filepath: string;
}

export const saveToS3 = async (filename: string, file: any): Promise<S3Response> => {
  let status = {
    status: 400,
    filepath: ""
  };
  s3.listBuckets(function (err, data) {
    if (err) {
      console.log("Error", err);
    } else {
      console.log("Success", data.Buckets);
    }
  });
  const params = {
    Bucket: bucketName,
    Key: filename, // File name you want to save as in S3
    Body: file
  };
  s3.upload(params, function (err: Error, data: any) {
    if (err) {
      throw err;
    }
    console.log(`File uploaded successfully. ${data.Location}`);
    status = {
      status: 200,
      filepath: data.Location
    };
  });
  return status;
};
I'm basically trying to set the status to 200 and, if that's the case, go ahead and send to SQS.
The current console output looks like this:
[Function]
File [File] got 65002 bytes
here
{ status: 400 }
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 65536 bytes
File [File] got 43814 bytes
File [File] Finished uploading
Success [ { Name: 'name', CreationDate: 2020-05-31T06:50:22.000Z } ]
File uploaded successfully. https://name.s3.amazonaws.com/file_example_MP3_700KB.mp3
Would really appreciate any help on how to achieve the desired outcome :)
It looks like you've got a couple of issues here.
You're executing a couple of asynchronous methods (s3.listBuckets and s3.upload) without waiting for them to complete. So you are starting those actions and then calling return status;, which is why you're getting that value back early before the file has finished uploading. You'll need to wait for those things to complete before returning.
But both of these methods use callbacks, not promises, and you want your saveToS3 method to return a Promise. So you'll need to wrap both of those calls. Here's a simplified example of what that looks like. The method returns a Promise which is only resolved when the callback of s3.upload fires, meaning the operation has completed or returned an error.
export const saveToS3 = (filename: string, file: any): Promise<S3Response> => {
  const params = {
    Bucket: bucketName,
    Key: filename,
    Body: file
  };
  return new Promise((resolve, reject) => {
    s3.upload(params, function (err: Error, data: any) {
      if (err) {
        // Reject so the caller's try/catch (or .catch) sees the failure
        return reject(err);
      }
      resolve({ status: 200, filepath: data.Location });
    });
  });
};
This will cause any await saveToS3() statement to wait for the operation to complete.
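On the calling side you can then await the result and only enqueue the SQS message once the upload has resolved. A rough sketch, assuming an AWS.SQS client and a queue URL (neither appears in your code):
// Hypothetical SQS client and queue URL -- adjust to your setup
const sqs = new AWS.SQS();
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";

const s3BucketLink = await saveToS3(filename, file);
if (s3BucketLink.status === 200) {
  // The upload has finished successfully, so it is safe to enqueue now
  await sqs.sendMessage({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify({ filepath: s3BucketLink.filepath })
  }).promise();
  console.log("SQS message sent");
}
Because the rewritten saveToS3 rejects on failure, a try/catch around the call works just as well as checking status.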
I am using the Node.js AWS SDK for S3-related methods. I have a method to download a file from an S3 bucket.
I am downloading the file using the code below:
const downloadFileBase64 = async (payload) => {
  let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  try {
    const response = await s3
      .getObject(params, (err) => {
        if (err) {
          return err;
        }
      })
      .promise();
    return {
      data: response.Body.toString('base64'),
      fileName: payload.fileName
    };
  } catch (error) {
    return Boom.badRequest(error.message);
  }
};
Once I get the base64 content, I send it over email using SendGrid.
Issue: when I download small files, everything works fine. But when I download large files, parts of the file are missing across multiple pages. I copy-pasted the base64 into a few online decoders and downloaded the file from there, and it showed the same issue, so I concluded that something goes wrong while returning the response from S3 itself. When I check the file directly in the S3 folder, it is intact.
In the resulting PDF, a few pages have a random grey background and some of the text is missing.
I tried another method that returns the raw buffer, skipping the base64 conversion, as shown below.
const downloadFileBuffer = async (payload) => {
  let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  try {
    const response = await s3
      .getObject(params, (err) => {
        if (err) {
          return err;
        }
      })
      .promise();
    return {
      data: response.Body,
      fileName: payload.fileName
    };
  } catch (error) {
    return Boom.badRequest(error.message);
  }
};
Once I get the file content in the response above, I temporarily store it in a folder on the server, then read it again and send it over email. But I still have the same issue.
const fileContent = await docs.downloadFileBuffer({ payload: req.payload.action.dire });
fs.writeFileSync(`${temp}testinggg.pdf`, fileContent?.data);
const fileData = fs.readFileSync(`${temp}testinggg.pdf`, { encoding: 'base64' });
Any help on this issue is really appreciated.
After days of research and trying different approaches, I found the issue. It was the .promise() used on s3.getObject(params, (err) => {}), i.e. mixing a callback with .promise() on the same request. Instead of that, I wrapped the callback in a Promise as shown below. Now the file arrives with its full content and no missing data.
const downloadFileBuffer = async (payload) => {
  const params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  try {
    return await new Promise((resolve, reject) => {
      s3.getObject(params, (err, response) => {
        if (err) {
          // Bail out here so we don't touch response.Body on error
          return reject(err);
        }
        resolve({
          data: response.Body,
          fileName: payload.fileName
        });
      });
    });
  } catch (error) {
    return Boom.badRequest(error.message);
  }
};
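For what it's worth, the callback can also be dropped entirely so that .promise() alone drives the request, which avoids mixing the two styles in the first place. A minimal sketch of the same download:
const downloadFileBuffer = async (payload) => {
  const params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
  // No callback passed to getObject -- .promise() alone sends the request
  const response = await s3.getObject(params).promise();
  return {
    data: response.Body,
    fileName: payload.fileName
  };
};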
I have a JSON response of roughly 150-200 MB. Because of its size, I want to save it to AWS S3 as a JSON file instead of returning it to the client.
Below is the code I'm currently using:
async function uploadFileOnS3(fileData, s3Detail) {
  const params = {
    Bucket: s3Detail.Bucket,
    Key: s3Detail.Key_response,
    Body: JSON.stringify(fileData), // big fat js object
  };
  try {
    const stored = await S3.upload(params).promise();
    console.log("file uploaded successfully ", stored);
  } catch (err) {
    console.log(err);
  }
  console.log("upload exit");
}
I'm concerned about the JSON.stringify(fileData) operation. Assuming this function runs inside an AWS Lambda, won't it take a lot of memory and CPU to serialize the object to a string?
Is there any other efficient way to save a JavaScript object as JSON in an AWS S3 bucket?
You don't really have to stringify the data yourself. You can pass a stream as the body:
async function uploadFileOnS3(fileData, s3Detail) {
  const params = {
    Bucket: s3Detail.Bucket,
    Key: s3Detail.Key_response,
    Body: fileData, // remove stringify from here
  };
  try {
    const stored = await S3.upload(params).promise();
    console.log("file uploaded successfully ", stored);
  } catch (err) {
    console.log(err);
  }
  console.log("upload exit");
}

const fs = require("fs");

exports.handler = async (event) => {
  // We create a stream
  const stream = fs.createReadStream("/tmp/upload.json");
  // Pass the stream to the upload function
  await uploadFileOnS3(stream, {
    Bucket: "bucket_name",
    Key_response: "upload.json"
  });
};
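If the data isn't already sitting in a file, a PassThrough stream from Node's stream module can bridge an in-memory producer and the upload; S3.upload accepts a stream and handles the chunked transfer. A rough sketch (uploadJsonStream and the chunk contents are placeholders; S3 and s3Detail are as in the answer above):
const { PassThrough } = require("stream");

async function uploadJsonStream(s3Detail) {
  const pass = new PassThrough();
  // Start the upload first; S3.upload consumes the stream as data is written
  const uploadPromise = S3.upload({
    Bucket: s3Detail.Bucket,
    Key: s3Detail.Key_response,
    Body: pass,
  }).promise();
  // Write the JSON in chunks as it is produced, then close the stream
  pass.write('{"example":');
  pass.write('"chunked json"}');
  pass.end();
  return uploadPromise;
}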
I am trying to upload a file from mobile to a Google Cloud Storage bucket using Ionic 4. I can upload a file to the cloud, but I am struggling to get the file properties out of the file object.
Here is my method:
async selectAFile() {
  const uploadFileDetails = {
    name: '',
    contentLength: '',
    size: '',
    type: '',
    path: '',
  };
  this.fileChooser.open().then(uri => {
    this.file.resolveLocalFilesystemUrl(uri).then(newUrl => {
      let dirPath = newUrl.nativeURL;
      const dirPathSegments = dirPath.split('/');
      dirPathSegments.pop();
      dirPath = dirPathSegments.join('/');
      (<any>window).resolveLocalFileSystemURL(
        newUrl.nativeURL,
        function(fileEntry) {
          uploadFileDetails.path = newUrl.nativeURL;
          const file: any = getFileFromFileEntry(fileEntry);
          // log 01
          console.log({ file });
          uploadFileDetails.size = file.size;
          uploadFileDetails.name = `${newUrl.name
            .split(':')
            .pop()}.${file.type.split('/').pop()}`;
          uploadFileDetails.type = file.type;
          async function getFileFromFileEntry(fileEntry) {
            try {
              return await new Promise((resolve, reject) =>
                fileEntry.file(resolve, reject)
              );
            } catch (err) {
              console.log(err);
            }
          }
        },
        function(e) {
          console.error(e);
        }
      );
    });
  });
  // here uploadFileDetails is similar to what I declared at the top ;)
  // I want this to be populated with file properties
  // console.log(uploadFileDetails.name) --> ''
  const uploadUrl = await this.getUploadUrl(uploadFileDetails);
  const response: any = this.uploadFile(
    uploadFileDetails,
    uploadUrl
  );
  response
    .then(function(success) {
      console.log({ success });
      this.presentToast('File uploaded successfully.');
      this.loadFiles();
    })
    .catch(function(error) {
      console.log({ error });
    });
}
Even though I can console.log the file at log 01, I am unable to get file properties like size, name, and type out of the resolveLocalFileSystemURL callback; basically, I am unable to populate the uploadFileDetails object. What am I doing wrong? Thank you in advance.
You actually need four Ionic Cordova plugins to upload a file after getting all of its metadata:
FileChooser
Opens the file picker on Android for the user to select a file; returns a file URI.
FilePath
This plugin lets you resolve the native filesystem path for Android content URIs, and is based on code in the aFileChooser library.
File
This plugin implements a File API allowing read/write access to files residing on the device.
File Transfer
This plugin allows you to upload and download files.
Getting the file's metadata
file.resolveLocalFilesystemUrl with fileEntry.file gives you all the metadata you need, except the file name. There is a property called name in the metadata, but it always contains the value content.
To get the human-readable file name, you need FilePath. But remember: you can't use the returned file path to retrieve the metadata. For that, you need the original URI from FileChooser.
filePathUrl.substring(filePathUrl.lastIndexOf('/') + 1) is used to extract just the file name from the FilePath result.
You need the file's nativeURL in order to upload it. Using the path returned by FilePath is not going to work.
getFileInfo(): Promise<any> {
  return this.fileChooser.open().then(fileURI => {
    return this.filePath.resolveNativePath(fileURI).then(filePathUrl => {
      return this.file
        .resolveLocalFilesystemUrl(fileURI)
        .then((fileEntry: any) => {
          return new Promise((resolve, reject) => {
            fileEntry.file(
              meta =>
                resolve({
                  nativeURL: fileEntry.nativeURL,
                  fileNameFromPath: filePathUrl.substring(filePathUrl.lastIndexOf('/') + 1),
                  ...meta,
                }),
              error => reject(error)
            );
          });
        });
    });
  });
}
Selecting a file from the mobile file system:
async selectAFile() {
  this.getFileInfo()
    .then(async fileMeta => {
      // get the upload URL
      const uploadUrl = await this.getUploadUrl(fileMeta);
      const response: Promise<any> = this.uploadFile(
        fileMeta,
        uploadUrl
      );
      response
        .then(function(success) {
          // upload success message
        })
        .catch(function(error) {
          // upload error message
        });
    })
    .catch(error => {
      // something went wrong getting the file information
    });
}
Uploading the selected file:
This depends on your backend implementation. Here is how to use File Transfer to upload a file:
uploadFile(fileMeta, uploadUrl) {
  const options: FileUploadOptions = {
    fileKey: 'file',
    fileName: fileMeta.fileNameFromPath,
    headers: {
      'Content-Length': fileMeta.size,
      'Content-Type': fileMeta.type,
    },
    httpMethod: 'PUT',
    mimeType: fileMeta.type,
  };
  const fileTransfer: FileTransferObject = this.transfer.create();
  // Upload using the file's nativeURL (the FilePath result won't work here)
  return fileTransfer.upload(fileMeta.nativeURL, uploadUrl, options);
}
Hope it helps. :)
I have an Express server which creates a PDF file.
I am trying to send this file to the client:
const fs = require('fs');

function download(req, res) {
  var filePath = '/../../myPdf.pdf';
  fs.readFile(__dirname + filePath, function(err, data) {
    if (err) throw new Error(err);
    console.log('yeyy, no errors :)');
    if (!data) throw new Error('Expected data, but got ' + data);
    console.log('got data', data);
    res.contentType('application/pdf');
    res.send(data);
  });
}
On the client I want to download it:
_handleDownloadAll = async () => {
  console.log('handle download all');
  const response = await request.get(
    `http://localhost:3000/download?accessToken=${localStorage.getItem(
      'accessToken'
    )}`
  );
  console.log(response);
};
I receive a body.text like
%PDF-1.4↵1 0 obj↵<<↵/Title (��)↵/Creator (��)↵/Producer (��Qt 5.5.1)↵
but I can't achieve a download.
How can I create a PDF from the data OR directly download it from the server?
I've got it working:
The answer was pretty simple. I just let the browser handle the download with an html anchor tag:
server:
function download(req, res) {
  const { creditor } = req.query;
  const filePath = `/../../${creditor}.pdf`;
  res.download(__dirname + filePath);
}
client:
<a href={`${BASE_URL}?accessToken=${accessToken}&creditor=${creditorId}`} download>Download</a>
The result is the binary content as a string. We can use base64 to convert it from the string back to a PDF:
var buffer = Buffer.from(result['textBinary'], 'base64');
fs.writeFileSync('/path/to/my/file.pdf', buffer);
You can prompt the browser to download the file by setting the correct content-disposition header:
res.setHeader('Content-disposition', 'attachment; filename=myfile.pdf');
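Applied to the download handler from the question, that could look like the following sketch (reusing the readFile approach from above):
const fs = require('fs');

function download(req, res) {
  fs.readFile(__dirname + '/../../myPdf.pdf', function(err, data) {
    if (err) throw new Error(err);
    // Ask the browser to download the response instead of rendering it
    res.setHeader('Content-disposition', 'attachment; filename=myPdf.pdf');
    res.contentType('application/pdf');
    res.send(data);
  });
}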
readFile returns a Buffer, which is a wrapper around bytes. You're sending that Buffer back to the client, which is logging it to the console.
The body.text you see is to be expected.
You will need to write these bytes to a file using fs.writeFile or similar. Here's an example:
const fs = require('fs');

_handleDownloadAll = async () => {
  console.log('handle download all');
  const response = await request.get(
    `http://localhost:3000/download?accessToken=${localStorage.getItem(
      'accessToken'
    )}`
  );
  // load your response data into a Buffer
  let buffer = Buffer.from(response.body.text);
  // open the file in writing mode
  fs.open('/path/to/my/file.pdf', 'w', function(err, fd) {
    if (err) {
      throw 'could not open file: ' + err;
    }
    // write the contents of the buffer
    fs.write(fd, buffer, 0, buffer.length, null, function(err) {
      if (err) {
        throw 'error writing file: ' + err;
      }
      fs.close(fd, function() {
        console.log('file written successfully');
      });
    });
  });
};
You may need to experiment with the buffer encoding; it defaults to utf8.
The other option you may want to consider is generating the PDF on the server and simply sending the client a link from which it can be downloaded.
I wrote a Hapi.js route to receive an uploaded file and have called it successfully using Postman. Now I want to save the file.
How do I
get the file extension?
save the file to disk?
Here's my route:
{
  method: 'POST',
  path: this.config.apiPrefix + 'uploadprofilephoto',
  config: { payload: { maxBytes: 10485760, /* 10 MB */ output: 'stream', parse: true } },
  handler: (request: hapi.Request, reply: hapi.IReply) => {
    const result = new Promise<string>(async (resolve, reject) => {
      try {
        await this.profilePhotoRouteHelper.savePhotoAndUploadToAws(jwtData.userId, request.payload['image']);
        resolve(responseHelper.getSuccessResponse<string>(null, newJwt));
      }
      catch (error) {
        log.error(error);
        resolve(responseHelper.getErrorResponse(ResponseErrorCode.unknownError));
      }
    });
    reply(result);
  }
}
and an idea of how to save:
fs.writeFile(filename, data, [encoding], () => { } );
but I'd rather use promises and await if possible.
(The uploaded file arrives as a stream object; its original name is available at image.hapi.filename and its contents at image._data.)
I found fs-promise, which works well.
const photoId = uuid.v4();
await fsp.writeFile(photoId + '__' + image.hapi.filename, image._data, 'utf8');
And here's how to get file extensions: Node.js get file extension
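For the extension itself, Node's built-in path module is enough; for example:
const path = require('path');

// e.g. returns '.png' for an uploaded photo named 'avatar.png'
const extension = path.extname(image.hapi.filename);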