I am trying to send a file through Postman using the form-data type. The request is sent to an AWS Lambda function through an API. The request content that reaches the Lambda is corrupted: it contains a lot of question marks where the binary data should be.
I would like to convert the request content back into the file and store the file in S3.
Existing code -
const res = multipart.parse(event, false);
var file = res['File'];
// Interpret the multipart field content as raw bytes
var encodedFile = Buffer.from(file["content"], 'binary');
var encodedFilebs64 = Buffer.from(file["content"], 'binary').toString('base64'); // currently unused

const s3 = new AWS.S3();
const params = {
    Bucket: config.s3Bucket,
    Key: "asset_" + known_asset_id + '.bin',
    Body: encodedFile
};
await s3.upload(params).promise().then(function(data) {
    console.log(`File uploaded successfully. ${data.Location}`);
}, function(err) {
    console.error("Upload failed", err);
});
Response content from the CloudWatch logs (screenshot): https://i.stack.imgur.com/SvBfF.png
When I convert this content back to binary and compare it, the file is not the same as the original. It would be helpful if someone could help me reconstruct the file from the request content and store it in S3.
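For what it's worth, a common cause of question marks like these is API Gateway decoding a binary request body as UTF-8 text before it reaches the Lambda; every byte that isn't valid UTF-8 becomes a replacement character, and the original bytes are unrecoverable at that point. If the API has multipart/form-data (or */*) registered as a binary media type, the Lambda instead receives the body base64-encoded with event.isBase64Encoded set to true, and it can be decoded losslessly. A minimal sketch, assuming that binary media type configuration is in place:

// Sketch: assumes API Gateway has "multipart/form-data" (or "*/*")
// registered as a binary media type, so the payload arrives intact.
const bodyBuffer = event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')  // intact bytes from API Gateway
    : Buffer.from(event.body, 'binary'); // may already be mangled

With an intact buffer, the multipart content can be parsed and uploaded to S3 as in the existing code.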
I am trying to upload files from a particular folder location to a sample S3 bucket. I am using the standard Node.js aws-sdk for this. The files are Deep Zoom image (.dzi) files.
The files are getting uploaded to my S3 bucket, but their contents are not uploaded properly. For example, I upload an image of 800 B, but after uploading, the object is only 7 B. I downloaded it to inspect its contents, and the file doesn't contain the image, just the file name. This is the code I am running for uploading files:
function read(file, numFiles) {
    fs.readFile(file, function (err, data) {
        if (err) console.log(err);
        const fileContent = Buffer.from(file, "binary");
        s3.putObject(
            {
                Bucket: "sample-bucket",
                Key: file,
                Body: fileContent,
            },
            function (resp) {
                console.log(arguments);
                console.log("Successfully uploaded, ", file);
                uploadCount++;
                console.log("uploadcount is:", uploadCount);
                if (uploadCount == numFiles) {
                    res.send("All files uploaded");
                }
            }
        ).on("httpUploadProgress", (evt) => {
            console.log(`Uploaded ${evt.loaded} out of ${evt.total}`);
        });
    });
}
I am passing files to this read function from another function. I am not sure why this is happening. Any help would be appreciated.
[Screenshots: image properties before upload, and properties of the image uploaded to the S3 bucket]
Buffer.from(file) does not return the contents of the file; it returns a buffer of the argument itself, which here is the string file, i.e. the file name. So the file uploaded to S3 has the file name as its contents.
Try changing this line
const fileContent = Buffer.from(file, "binary");
to this:
const fileContent = fs.readFileSync(file);
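Note also that the data argument fs.readFile passes to its callback is already a Buffer with the file's bytes, so it can be used as the Body directly. As a minimal sketch of the corrected function, assuming s3, uploadCount, numFiles, and res are in scope as in the original:

function read(file, numFiles) {
    fs.readFile(file, function (err, data) {
        if (err) return console.log(err);
        // 'data' already holds the file's actual bytes
        s3.putObject(
            { Bucket: "sample-bucket", Key: file, Body: data },
            function (err, resp) {
                if (err) return console.log(err);
                console.log("Successfully uploaded, ", file);
                uploadCount++;
                if (uploadCount == numFiles) res.send("All files uploaded");
            }
        );
    });
}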
I am downloading a web page and writing it to a file named thisArticle.html, using the code below.
var file = fs.createWriteStream("thisArticle.html");
var request = http.get(req.body.url, response => response.pipe(file) );
After that I am trying to read file and uploading to S3, here is the code that I wrote:
fs.readFile('thisArticle.html', 'utf8', function(err, html){
    if (err) {
        console.log(err + "");
        throw err;
    }
    var pathToSave = 'articles/ ' + req.body.title +'.html';
    var s3bucket = new AWS.S3({ params: { Bucket: 'all-articles' } });
    s3bucket.createBucket(function () {
        var params = {
            Key: pathToSave,
            Body: html,
            ACL: 'public-read'
        };
        s3bucket.upload(params, function (err, data) {
            fs.unlink("thisArticle.html", function (err) {
                console.error(err);
            });
            if (err) {
                console.log('ERROR MSG: ', err);
                res.status(500).send(err);
            } else {
                console.log(data.Location);
            }
            // ..., more code below
        });
    });
});
Now, I am facing two issues:
The file is uploaded, but with 0 bytes (empty).
When I upload the file manually via the S3 dashboard, it uploads successfully, but when I load the URL in the browser it downloads the HTML file instead of displaying it.
Any guidance on what I am missing?
Set the ContentType to "text/html".
s3 = boto3.client("s3")
s3.put_object(
Bucket=s3_bucket,
Key=s3_key,
Body=html_string,
CacheControl="max-age=0,no-cache,no-store,must-revalidate",
ContentType="text/html",
ACL="public-read"
)
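The snippet above is Python (boto3), but the fix is the same in Node.js: add ContentType to the upload parameters. A sketch against the code in the question:

var params = {
    Key: pathToSave,
    Body: html,
    ACL: 'public-read',
    ContentType: 'text/html' // tells browsers to render the page instead of downloading it
};
s3bucket.upload(params, function (err, data) { /* ... */ });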
It also looks like thisArticle.html is being read before the http.get download stream has finished writing it, so an empty file gets uploaded; that's why it's going up as 0 bytes. The fs.unlink in the upload callback likewise runs before the upload result is checked, which is worth fixing too.
Also, to make the bucket serve the HTML, you need to turn on static website hosting as described in the AWS S3 docs. http://docs.aws.amazon.com/AmazonS3/latest/UG/ConfiguringBucketWebsite.html
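One way to sequence this correctly is to wait for the write stream's finish event before reading the file, and to delete the local copy only after the upload callback fires. A sketch reusing the names from the question (req, res, pathToSave, and s3bucket assumed to be in scope):

var file = fs.createWriteStream("thisArticle.html");
http.get(req.body.url, function (response) {
    response.pipe(file);
    // 'finish' fires once the downloaded page is fully written to disk
    file.on('finish', function () {
        fs.readFile('thisArticle.html', 'utf8', function (err, html) {
            if (err) throw err;
            var params = {
                Key: pathToSave,
                Body: html,
                ACL: 'public-read',
                ContentType: 'text/html'
            };
            s3bucket.upload(params, function (err, data) {
                // Delete the local copy only once the upload has settled
                fs.unlink("thisArticle.html", function (unlinkErr) {
                    if (unlinkErr) console.error(unlinkErr);
                });
                if (err) return res.status(500).send(err);
                console.log(data.Location);
            });
        });
    });
});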
Currently, I am using the @google-cloud/storage NPM package to upload a file directly to a Google Cloud Storage bucket. This requires some trickery, as I only have the image's base64-encoded string. I have to:
1. Decode the string
2. Save it as a file
3. Send the file path to the below script to upload to Google Cloud Storage
4. Delete the local file
I'd like to avoid storing the file in the filesystem altogether since I am using Google App Engine and I don't want to overload the filesystem / leave junk files there if the delete operation doesn't work for whatever reason. This is what my upload script looks like right now:
// Convert the base64 string back to an image to upload into the Google Cloud Storage bucket
var base64Img = require('base64-img');
var filePath = base64Img.imgSync(req.body.base64Image, 'user-uploads', 'image-name');

// Instantiate the GCP Storage instance
var gcs = require('@google-cloud/storage')(),
    bucket = gcs.bucket('google-cloud-storage-bucket-name');

// Upload the image to the bucket
bucket.upload(__dirname.slice(0, -15) + filePath, {
    destination: 'profile-images/576dba00c1346abe12fb502a-original.jpg',
    public: true,
    validation: 'md5'
}, function(error, file) {
    if (error) {
        sails.log.error(error);
    }
    return res.ok('Image uploaded');
});
Is there any way to upload the base64-encoded string of the image directly, instead of having to convert it to a file and then upload using the path?
The solution, I believe, is to use the file.createWriteStream functionality that the bucket.upload function wraps in the Google Cloud Node SDK.
I've got very little experience with streams, so try to bear with me if this doesn't work right off.
First of all, we need to take the base64 data and drop it into a stream. For that, we're going to include the stream library, create a buffer from the base64 data, and add the buffer to the end of the stream.
var stream = require('stream');
var bufferStream = new stream.PassThrough();
bufferStream.end(Buffer.from(req.body.base64Image, 'base64'));
More on decoding base64 and creating the stream.
We're then going to pipe the stream into a write stream created by the file.createWriteStream function.
var gcs = require('@google-cloud/storage')({
    projectId: 'grape-spaceship-123',
    keyFilename: '/path/to/keyfile.json'
});

// Define bucket.
var myBucket = gcs.bucket('my-bucket');
// Define file & file name.
var file = myBucket.file('my-file.jpg');
// Pipe the 'bufferStream' into a 'file.createWriteStream' method.
bufferStream.pipe(file.createWriteStream({
    metadata: {
        contentType: 'image/jpeg',
        metadata: {
            custom: 'metadata'
        }
    },
    public: true,
    validation: "md5"
}))
.on('error', function(err) {})
.on('finish', function() {
    // The file upload is complete.
});
Info on file.createWriteStream, File docs, bucket.upload, and the bucket.upload method code in the Node SDK.
So the way the above code works is to define the bucket you want to put the file in, then define the file and the file name; we don't set upload options here. We then pipe the bufferStream variable we just created into the file.createWriteStream method we discussed before, and in its options we define the metadata and anything else you want to set. It was very helpful to look directly at the Node code on GitHub to figure out how they break down the bucket.upload function, and I recommend you do so as well. Finally, we attach a couple of events for when the upload finishes and when it errors out.
Posting my version of the answer in response to @krlozadan's request above:
// Convert the base64 string back to an image to upload into the Google Cloud Storage bucket
var mimeTypes = require('mimetypes');
var image = req.body.profile.image,
    mimeType = image.match(/data:([a-zA-Z0-9]+\/[a-zA-Z0-9-.+]+).*,.*/)[1],
    fileName = req.profile.id + '-original.' + mimeTypes.detectExtension(mimeType),
    base64EncodedImageString = image.replace(/^data:image\/\w+;base64,/, ''),
    imageBuffer = Buffer.from(base64EncodedImageString, 'base64'); // Buffer.from replaces the deprecated new Buffer()

// Instantiate the GCP Storage instance
var gcs = require('@google-cloud/storage')(),
    bucket = gcs.bucket('my-bucket');

// Upload the image to the bucket
var file = bucket.file('profile-images/' + fileName);
file.save(imageBuffer, {
    metadata: { contentType: mimeType },
    public: true,
    validation: 'md5'
}, function(error) {
    if (error) {
        return res.serverError('Unable to upload the image.');
    }
    return res.ok('Uploaded');
});
This worked just fine for me. Ignore some of the additional logic in the first few lines as they are only relevant to the application I am building.
If you want to save a string as a file in Google Cloud Storage, you can do it easily using the file.save method:
const {Storage} = require('#google-cloud/storage');
const storage = new Storage();
const myBucket = storage.bucket('my-bucket');
const file = myBucket.file('my-file.txt');
const contents = 'This is the contents of the file.';
file.save(contents).then(() => console.log('done'));
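If you also need a content type on the saved file, file.save accepts the same options as createWriteStream, so it can be set the same way as in the earlier examples; a small sketch:

file.save(contents, { metadata: { contentType: 'text/plain' } })
    .then(() => console.log('done'));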
What an issue! I ran into it too: the image uploaded to Firebase Storage fine, but it would not download, and the loader just kept spinning. After spending some time on it, I got uploads working with downloads as well. The problem was a missing access token.
In the Firebase console, the file's location section at the bottom right shows a "create access token" option when no access token exists. If you create the token manually there and refresh the page, the image shows up. So the question becomes how to create the token in code.
Use the code below to create the access token:
const uuidv4 = require('uuid/v4');
const uuid = uuidv4();
metadata: { firebaseStorageDownloadTokens: uuid }
The full code for uploading an image to Firebase Storage is given below:
const functions = require('firebase-functions')
var firebase = require('firebase');
var express = require('express');
var bodyParser = require("body-parser");
const uuidv4 = require('uuid/v4');
const uuid = uuidv4();
const os = require('os')
const path = require('path')
const cors = require('cors')({ origin: true })
const Busboy = require('busboy')
const fs = require('fs')
var admin = require("firebase-admin");

var serviceAccount = {
    "type": "service_account",
    "project_id": "xxxxxx",
    "private_key_id": "xxxxxx",
    "private_key": "-----BEGIN PRIVATE KEY-----\jr5x+4AvctKLonBafg\nElTg3Cj7pAEbUfIO9I44zZ8=\n-----END PRIVATE KEY-----\n",
    "client_email": "xxxx#xxxx.iam.gserviceaccount.com",
    "client_id": "xxxxxxxx",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-5rmdm%40xxxxx.iam.gserviceaccount.com"
}

admin.initializeApp({
    credential: admin.credential.cert(serviceAccount),
    storageBucket: "xxxxx-xxxx" // use your storage bucket name
});

const app = express();
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

app.post('/uploadFile', (req, response) => {
    response.set('Access-Control-Allow-Origin', '*');
    const busboy = new Busboy({ headers: req.headers })
    let uploadData = null
    busboy.on('file', (fieldname, file, filename, encoding, mimetype) => {
        // Stream the incoming file to a temp path so it can be uploaded afterwards
        const filepath = path.join(os.tmpdir(), filename)
        uploadData = { file: filepath, type: mimetype }
        console.log("-------------->>", filepath)
        file.pipe(fs.createWriteStream(filepath))
    })
    busboy.on('finish', () => {
        const bucket = admin.storage().bucket();
        bucket.upload(uploadData.file, {
            uploadType: 'media',
            metadata: {
                contentType: uploadData.type,
                // The download token is what makes the file retrievable via its download URL
                metadata: { firebaseStorageDownloadTokens: uuid },
            },
        })
        .catch(err => {
            response.status(500).json({
                error: err,
            })
        })
    })
    busboy.end(req.rawBody)
});
exports.widgets = functions.https.onRequest(app);
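Once the firebaseStorageDownloadTokens metadata is set, the file can be fetched through Firebase's usual download URL pattern. A sketch of constructing it, assuming filename holds the object's path inside the bucket:

// Standard Firebase Storage download URL pattern, using the token from above
const downloadUrl = 'https://firebasestorage.googleapis.com/v0/b/' +
    bucket.name + '/o/' + encodeURIComponent(filename) +
    '?alt=media&token=' + uuid;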
You have to convert the base64 string to an image buffer and then upload it as below; you need to provide the image_data_from_html variable as the data you extracted from the HTML event.
const base64Text = image_data_from_html.split(';base64,').pop();
const imageBuffer = Buffer.from(base64Text, 'base64');
const contentType = image_data_from_html.split(';base64,')[0].split(':')[1];
const fileName = 'myimage.png';
const imageUrl = 'https://storage.googleapis.com/bucket-url/some_path/' + fileName;

await admin.storage().bucket().file('some_path/' + fileName).save(imageBuffer, {
    public: true,
    gzip: true,
    metadata: {
        contentType,
        cacheControl: 'public, max-age=31536000',
    }
});
console.log(imageUrl);
I was able to get the base64 string over to my Cloud Storage bucket with just one line of code.
var decodedImage = Buffer.from(poster64, 'base64'); // Buffer.from replaces the deprecated new Buffer()
// Store Poster to storage
let posterFile = await client.file(decodedImage, `poster_${path}.jpeg`, { path: 'submissions/dev/', isBuffer: true, raw: true });
let posterUpload = await client.upload(posterFile, { metadata: { cacheControl: 'max-age=604800' }, public: true, overwrite: true });
let permalink = posterUpload.permalink
Something to be aware of: if you are inside a Node.js environment, you won't be able to use atob().
The top answer of this post showed me the error of my ways: NodeJS base64 image encoding/decoding not quite working
I'm using Sails.js 0.12.1 and Node.js 4.2.6.
I want to upload a file from the front end (Angular.js) through an API, and from the back end I want to upload that file to an AWS S3 bucket.
The front end sends the file to the API. In the back end I receive the file with the correct name, but while uploading it to S3 I get the error:
Cannot determine length of [object Object]
I googled the error and found many links, but had no luck.
Back-end
uploadPersonAvtar: function(req, res) {
    var zlib = require('zlib');
    var file = req.file('image');
    var mime = require('mime');
    var data = {
        Bucket: 'bucket',
        Key: file.name,
        Body: file,
        ContentType: mime.lookup(file.name)
    };
    // Upload the stream
    var s3obj = new AWS.S3(S3options);
    s3obj.upload(data, function(err, data) {
        if (err) console.log("An error occurred", err);
        console.log("Uploaded the file at", data);
    })
}
Is my approach correct? If yes, what am I doing wrong?
I want to know how to use the file object to upload the file.
I can create a read stream, but I don't have the file path. When I create the file object, I get the error: path must be a string.
You could look at the following example about streaming data: Amazon S3: Uploading an arbitrarily sized stream (upload)
var fs = require('fs');
var zlib = require('zlib');
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({ params: { Bucket: 'myBucket', Key: 'myKey' } });
s3obj.upload({ Body: body })
    .on('httpUploadProgress', function(evt) { console.log(evt); })
    .send(function(err, data) { console.log(err, data); });
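In Sails, req.file('image') returns a Skipper upstream rather than an object with a path, which is why the SDK can't determine a length and why there is no file path for createReadStream. One way around it is to let Skipper write the upload to disk first and then stream that file to S3. A sketch based on Skipper's upload callback, which reports the on-disk path as fd (fs, mime, AWS, and S3options assumed to be in scope as in the question):

req.file('image').upload(function (err, uploadedFiles) {
    if (err) return res.serverError(err);
    // Skipper stores the upload in a temp location and reports it as `fd`
    var uploaded = uploadedFiles[0];
    var s3obj = new AWS.S3(S3options);
    s3obj.upload({
        Bucket: 'bucket',
        Key: uploaded.filename,
        Body: fs.createReadStream(uploaded.fd),
        ContentType: mime.lookup(uploaded.filename)
    }, function (err, data) {
        if (err) return res.serverError(err);
        res.ok(data.Location);
    });
});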