Error while saving data downloaded from S3 - javascript

I am trying to save a file from my S3 bucket to a local directory. When I run the code everything seems to work, because no errors appear in the console, but when I open the directory the file size is just 15 bytes, and it's the same story with any file I try to download.
I tried to download a text file and inside I found the text [object Object]. Can anyone help me? This is the function code:
var s3 = new AWS.S3();
s3.getObject(
    { Bucket: "chat-mp-files", Key: conf[1] },
    function (error, data) {
        if (error != null) {
            console.log(error);
        } else {
            fs.closeSync(fs.openSync(pathstr + '/r/' + conf[1], 'w'));
            fs.writeFile(pathstr + '/r/' + conf[1], data, function (err) {
                if (err) {
                    console.log(err);
                } else {
                    console.log("ok");
                }
            });
        }
    });

I have just solved my issue using the official docs section provided by Amazon here
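For anyone landing here: the key detail (covered in those docs) is that the getObject callback receives a response object, and the file's bytes live in its Body property; writing data itself stringifies the whole object, which is where the 15-byte [object Object] file comes from. A minimal sketch of the corrected write, assuming the same conf and pathstr variables as above:
var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();

s3.getObject({ Bucket: "chat-mp-files", Key: conf[1] }, function (error, data) {
    if (error) {
        console.log(error);
    } else {
        // data.Body is a Buffer holding the object's bytes; writing `data`
        // itself coerces the whole response object to "[object Object]"
        fs.writeFile(pathstr + '/r/' + conf[1], data.Body, function (err) {
            if (err) console.log(err);
            else console.log("ok");
        });
    }
});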

Related

File content missing when I download from S3

I am using the Node.js AWS SDK for S3-related methods and have a method to download a file from an S3 bucket.
I am downloading the file using the code below.
const downloadFileBase64 = async (payload) => {
    let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
    try {
        const response = await s3
            .getObject(params, (err) => {
                if (err) {
                    return err;
                }
            })
            .promise();
        return {
            data: response.Body.toString('base64'),
            fileName: payload.fileName
        };
    } catch (error) {
        return Boom.badRequest(error.message);
    }
};
Once I get the base64 content I am sending it over email using SendGrid.
Issue: when I download small files everything works fine, but when I download large files, parts of the file are missing across multiple pages. I copy-pasted the base64 into a few online decoders and downloaded the file from there, and those show the same problem, so I concluded that something goes wrong in the response returned from S3 itself. When I open the file in the S3 console, it displays properly.
In the screenshot (not shown here), the PDF has a random grey background on a few pages and some text is missing.
I tried another method which just downloads the buffer, skipping the base64 conversion, as shown below.
const downloadFileBuffer = async (payload) => {
    let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
    try {
        const response = await s3
            .getObject(params, (err) => {
                if (err) {
                    return err;
                }
            })
            .promise();
        return {
            data: response.Body,
            fileName: payload.fileName
        };
    } catch (error) {
        return Boom.badRequest(error.message);
    }
};
Once I get the file content from that response, I store it temporarily in a folder on the server, then read it back and send it over email. But I am still having the same issue.
const fileContent = await docs.downloadFileBuffer({ payload: req.payload.action.dire });
fs.writeFileSync(`${temp}testinggg.pdf`, fileContent?.data);
const fileData = fs.readFileSync(`${temp}testinggg.pdf`, { encoding: 'base64' });
Any help on this issue is really appreciated.
After days of research and trying different approaches, I found the issue: it was the .promise() chained onto s3.getObject(params, (err) => {}).promise(). Mixing a callback with .promise() on the same request was what corrupted the download. Instead, I wrapped the plain callback version in a Promise, as shown below, and the file now arrives with its full content, nothing missing.
const downloadFileBuffer = async (payload) => {
    let params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
    try {
        return new Promise((resolve, reject) => {
            s3.getObject(params, (err, response) => {
                if (err) {
                    return reject(err);
                }
                resolve({
                    data: response.Body,
                    fileName: payload.fileName
                });
            });
        });
    } catch (error) {
        return Boom.badRequest(error.message);
    }
};
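For what it's worth, the SDK's documented pattern is to skip the callback entirely and let .promise() be the only consumer of the request, so it is dispatched exactly one way; a minimal sketch, assuming the same s3, s3BucketName, and Boom as above:
const downloadFileBuffer = async (payload) => {
    const params = { Bucket: s3BucketName, Key: `${payload.folderName}/${payload.fileName}` };
    try {
        // no callback here: .promise() alone resolves with the response
        const response = await s3.getObject(params).promise();
        return { data: response.Body, fileName: payload.fileName };
    } catch (error) {
        return Boom.badRequest(error.message);
    }
};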

Image upload functionality not working on deployed Heroku app but working on Localhost?

So I created my first big project: https://rate-n-write.herokuapp.com/
In brief, this is a blog app where the user can write reviews and publish them along with pictures.
I have used Firebase as the database to store the articles. The app works fine on localhost, but whenever I try to upload an image on Heroku I get an error (screenshot not shown).
The error shows up at the fetch call (line 8) in the following code (editor.js):
uploadInput.addEventListener('change', () => {
    uploadImage(uploadInput, "image");
})

const uploadImage = (uploadFile, uploadType) => {
    const [file] = uploadFile.files;
    if (file && file.type.includes("image")) {
        const formdata = new FormData();
        formdata.append('image', file);
        // Error shows up here in the fetch line
        fetch('/upload', {
            method: 'post',
            body: formdata
        }).then(res => res.json())
            .then(data => {
                if (uploadType == "image") {
                    addImage(data, file.name);
                } else {
                    bannerPath = `${location.origin}/${data}`;
                    banner.style.backgroundImage = `url("${bannerPath}")`;
                }
            })
        const change_text = document.getElementById("uploadban");
        change_text.innerHTML = " ";
    } else {
        alert("upload Image only");
    }
}
This is just a snippet of the whole editor.js file.
Is it because I am trying to upload the file to the project directory? (server.js snippet below):
app.post('/upload', (req, res) => {
    let file = req.files.image;
    let date = new Date();
    // image name
    let imagename = date.getDate() + date.getTime() + file.name;
    // image upload path
    let path = 'public/uploads/' + imagename;
    // create upload
    file.mv(path, (err, result) => {
        if (err) {
            throw err;
        } else {
            // our image upload path
            res.json(`uploads/${imagename}`)
        }
    })
})
Do I need to use an online storage service like AWS S3?
Heroku is not suitable for persistent storage: its filesystem is ephemeral, so uploaded pictures are deleted whenever the dyno restarts (read this).
I would suggest using a third-party object storage service such as Cloudinary or AWS S3, for example as sketched below.
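A minimal sketch of the same /upload route writing to S3 instead of the local filesystem; it assumes aws-sdk v2, that req.files.image comes from express-fileupload (so file.data is a Buffer), and a hypothetical S3_BUCKET environment variable:
const AWS = require('aws-sdk');
const s3 = new AWS.S3(); // credentials read from environment variables

app.post('/upload', (req, res) => {
    const file = req.files.image;
    const imagename = new Date().getDate() + Date.now() + file.name;
    s3.upload({
        Bucket: process.env.S3_BUCKET,  // hypothetical bucket name
        Key: `uploads/${imagename}`,
        Body: file.data,                // Buffer supplied by express-fileupload
        ContentType: file.mimetype
    }, (err, result) => {
        if (err) return res.status(500).json(err.message);
        res.json(result.Location);      // URL of the stored object
    });
});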

AWS S3 Upload after GET Request to Image, Not Uploading Correctly

I'm trying to upload an image to my AWS S3 bucket after downloading the image from another URL using Node (using request-promise-native & aws-sdk):
'use strict';
const config = require('../../../configs');
const AWS = require('aws-sdk');
const request = require('request-promise-native');

AWS.config.update(config.aws);
let s3 = new AWS.S3();

function uploadFile(req, res) {
    function getContentTypeByFile(fileName) {
        var rc = 'application/octet-stream';
        var fn = fileName.toLowerCase();
        if (fn.indexOf('.png') >= 0) rc = 'image/png';
        else if (fn.indexOf('.jpg') >= 0) rc = 'image/jpg';
        return rc;
    }

    let body = req.body,
        params = {
            "ACL": "bucket-owner-full-control",
            "Bucket": 'testing-bucket',
            "ContentType": null,
            "Key": null, // Name of the file
            "Body": null // File body
        };

    // Grabs the filename from a URL
    params.Key = body.url.substring(body.url.lastIndexOf('/') + 1);
    // Setting the content type
    params.ContentType = getContentTypeByFile(params.Key);

    request.get(body.url)
        .then(response => {
            params.Body = response;
            s3.putObject(params, (err, data) => {
                if (err) { console.log(`Error uploading to S3 - ${err}`); }
                if (data) { console.log("Success - Uploaded to S3: " + data.toString()); }
            });
        })
        .catch(err => { console.log(`Error encountered: ${err}`); });
}
The upload succeeds when I test it, however after re-downloading the file from my bucket the image won't display. I also notice that the file listed in the bucket after my function runs is much larger than the original image. I'm trying to figure out where I've gone wrong but cannot find it. Any help is appreciated.
Try opening the faulty file with a text editor; you will see some errors written in it.
You can also try s3.upload instead of putObject; it works better with streams.
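One likely culprit (an assumption, not confirmed by the question): request-promise-native decodes the response body as a UTF-8 string by default, which corrupts binary data and inflates the byte count once re-encoded. Passing encoding: null yields a raw Buffer instead; a sketch against the same params as above:
request.get({ url: body.url, encoding: null }) // encoding: null -> Buffer, not a UTF-8 string
    .then(imageBuffer => {
        params.Body = imageBuffer;
        // s3.upload accepts Buffers and streams, and handles multipart uploads for big files
        return s3.upload(params).promise();
    })
    .then(data => console.log("Success - Uploaded to S3: " + data.Location))
    .catch(err => console.log(`Error encountered: ${err}`));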

DataURL via AJAX to sails endpoint

I have a client-side component that produces a DataURL (i.e. a user uploads or snaps a picture and then crops it). I need to post it via an AJAX call to a Sails endpoint. From the Sails docs, the endpoint is supposed to read files like so:
req.file('file_name');
I am stuck on how to go from a DataURI to an AJAX call formatted so that the endpoint can read the file from req.file. I just need to see the call being set up in any JavaScript framework or library so that I can implement it.
Thanks a lot.
On the client side you'll need to convert the DataURL into form data (there are a couple of examples here and here) and send it to the route in your controller, along the lines of the sketch below.
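A minimal client-side sketch: decode the DataURL into a Blob and post it as multipart form data. The 'avatar' field name and the /user/upload-avatar route are hypothetical; match whatever your controller reads with req.file(...):
function dataURLToBlob(dataURL) {
    const [header, base64] = dataURL.split(',');
    const mime = header.match(/:(.*?);/)[1]; // e.g. "image/png"
    const binary = atob(base64);
    const bytes = new Uint8Array(binary.length);
    for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
    return new Blob([bytes], { type: mime });
}

const formData = new FormData();
// 'avatar' must match the field name the server reads via req.file('avatar')
formData.append('avatar', dataURLToBlob(dataURL), 'avatar.png');
fetch('/user/upload-avatar', { method: 'POST', body: formData });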
Your endpoint will be a route that looks a bit like this:
var _ = require('lodash');

var uploadHandler = function(req, res)
{
    req.file('avatar').upload(
    {
        // don't allow the total upload size to exceed ~4MB
        maxBytes: 4000000,
        dirname: '/tmp' // some temp directory
    }, function whenDone(error, uploadedFiles)
    {
        if (error)
        {
            if (error.code === 'E_EXCEEDS_UPLOAD_LIMIT')
            {
                return res.badRequest(
                {
                    msg: error.message
                });
            }
            return res.serverError(error);
        }
        if (_.isEmpty(uploadedFiles))
        {
            res.badRequest(
            {
                msg: "No file was uploaded."
            });
            return;
        }
        var filePath = uploadedFiles[0].fd;
        var fileType = uploadedFiles[0].type;
        if (!_.includes(['image/jpeg', 'image/png', 'image/gif'], fileType))
        {
            res.badRequest(
            {
                msg: "Invalid file type."
            });
            return;
        }
        // do your thing...
    });
};

How to retrieve the Metadata from nodejs aws s3 getObject callback data?

I am trying to upload/download an audio chunk file to/from S3 using the AWS Node SDK. I have tried the base64 approach and it works fine, but I am not able to get back the Metadata that I bundled in the upload params.
Below is the code snippet for upload along with meta info:
var myMetaInfo = "AdditionalInfo",
    dataToUpload = {
        Bucket: bucketName,
        Key: storageFolderFullPath,
        Body: myAudioFile.toString('base64'),
        Metadata: { metaInfo: myMetaInfo }
    };
s3.client.putObject(dataToUpload, function(err, data) {
    if (!err) {
        console.log("Successfully uploaded the file to ::" + dataToUpload.Bucket);
    } else {
        console.log(" **** ERROR while uploading ::" + err);
    }
});
And this is the snippet for downloading the file; Metadata is not part of the callback data.
I printed the callback data to the console and noticed that only the following params are available:
LastModified, ContentType, ContentLength, ETag, Body, RequestId
var dataToDownload = { Bucket: bucketName, Key: storageFolderFullPath },
    originalFile, myMetaInfo;
s3.client.getObject(dataToDownload, function(err, data) {
    if (!err) {
        originalFile = new Buffer(data.Body, 'base64');
        myMetaInfo = data.Metadata.metaInfo;
        console.log(" Meta info:: " + myMetaInfo);
        fs.writeFile(fileStoragePath, originalFile, function(err) {
            if (!err) {
                console.log(" File written!! ");
            } else {
                console.log(" Error while writing the file !!" + err);
            }
        });
    } else {
        console.log(" **** ERROR while downloading ::" + err);
    }
});
Any pointers on what is wrong with my implementation? I have followed the documentation mentioned here.
Any help is appreciated.
Is your metaInfo value a string?
Referencing the SDK API docs, Metadata is a string map (i.e. Metadata: {metaInfo: "myMetaInfoString"}). I've tested your code using a string as the value for metaInfo and it does come back correctly under the data.Metadata.metaInfo reference.
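For reference, a minimal round-trip sketch under assumed names (the aws-sdk v2 interface, a hypothetical my-bucket/audio/chunk-001 object, and an audioBuffer already in scope); a lower-case metadata key is used to sidestep header-casing surprises, since user metadata travels as x-amz-meta-* HTTP headers:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var upload = {
    Bucket: 'my-bucket',        // hypothetical bucket
    Key: 'audio/chunk-001',     // hypothetical key
    Body: audioBuffer,
    Metadata: { metainfo: 'AdditionalInfo' } // values must be strings
};
s3.putObject(upload, function(err) {
    if (err) return console.log(err);
    s3.getObject({ Bucket: upload.Bucket, Key: upload.Key }, function(err, data) {
        if (err) return console.log(err);
        console.log(data.Metadata.metainfo); // "AdditionalInfo"
    });
});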
