This is my code for uploading images to Cloudinary.
let imageArr = [];
let imageUrls = [];
if (media) {
  imageArr.push(media.thumbUrl);
}
let resultPromise = imageArr.map(async (img) => {
  await cloudinary.uploader.upload(
    img,
    { folder: process.env.STORE_NAME }, // uploads to the store-specific folder
    (err, result) => {
      imageUrls.push(result.secure_url);
    }
  );
});
But the images are automatically being resized to 200 x 200 during upload, even though the original size is 1000 x 1000.
https://res.cloudinary.com/sreeragcodes/image/upload/v1626531856/igowvkzckmlafwpipc22.png
Is this an error on my side, or do I need to configure something in Cloudinary? I checked my Cloudinary settings but couldn't find any option for this.
In the code you shared, you aren't applying any incoming transformation which would be one reason why your images may be resized. If you log into your Cloudinary account and go to your Settings -> Upload tab and scroll down to the Upload Presets section, do you have any default upload preset selected? And if so, does that upload preset have any incoming transformations?
Unless an incoming transformation is applied as part of a default upload preset, this resizing wouldn't be happening on the Cloudinary side.
If no default upload preset is applied, I would check the input images themselves: for example, store them locally before uploading them to Cloudinary and check their dimensions.
Note that you are uploading the thumbUrl of each media item to Cloudinary. Could it be that this thumbnail URL has been configured to be 200x200px, which would explain why your images end up that size when uploaded to Cloudinary?
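If that is the case, one option is to upload the original source rather than the thumbnail. A minimal sketch, assuming the media object also exposes a full-resolution URL (the url property here is hypothetical):
let imageArr = [];
if (media) {
  // hypothetical: prefer a full-resolution URL over the generated thumbnail
  imageArr.push(media.url || media.thumbUrl);
}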
Related
I am trying to upload files directly to Cloudinary using Node.js. I have gotten it to work successfully, but ONLY when I manually set the path of the image I am going to upload, like so:
cloudinary.uploader.upload('./public/css/img/' + data.image)
but when I do this:
cloudinary.uploader.upload(data.image)
it does not work. A sample of my function is below. I need it to work this way because when I allow others (not on my local machine) to upload images, the hard-coded local path will not work.
// collected image from a user
const data = {
  image: req.query.image,
}
console.log(data)

// upload image here
cloudinary.uploader.upload(data.image)
  .then((result) => {
    // response.status(200).send({
    //   message: "success",
    //   result,
    // });
    console.log(result.secure_url)
  })
When you run the upload call from the root folder, it searches for the image relative to that folder. You can either run the upload from the img folder (e.g. cd ./public/css/img/ and then call the upload) or use the full path as you did.
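Another option is to resolve the path in code so it doesn't depend on the working directory. A minimal sketch, assuming the images live in ./public/css/img relative to this file and data.image is just the file name:
const path = require('path');

// build an absolute path to the image regardless of where the process was started
const imagePath = path.join(__dirname, 'public', 'css', 'img', data.image);

cloudinary.uploader.upload(imagePath)
  .then((result) => console.log(result.secure_url))
  .catch((err) => console.log(err));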
I'm trying to download an image using Node.js and Puppeteer, but I'm running into some issues. I'm using a web scraper to gather the image links from the site and then the https/http package to download the images.
This works for the images using http and https sources but some images have links that look like this (the whole link is very long so I cut the rest):
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP........
I'm not sure how to handle these links or how to download the image. Any help would be appreciated.
You need to decode the base64 payload of the data URL using the Node.js Buffer API.
// the "data:image/png;base64," prefix has to be removed first, leaving only the payload
const data = 'iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP';
const buffer = Buffer.from(data, 'base64');
// buffer now holds the raw PNG bytes; there is nothing left to fetch, you can write it straight to disk
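Putting it together, a short usage sketch that strips the prefix from a full data URL and saves the decoded image (the output file name is just an example):
const fs = require('fs');

const dataUrl = 'data:image/png;base64,iVBORw0KGgo...'; // the scraped link
// everything after the first comma is the base64 payload
const payload = dataUrl.split(',')[1];
fs.writeFileSync('image.png', Buffer.from(payload, 'base64'));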
These are base64-encoded images (mostly used for icons and small images).
You can simply skip them:
if (url.startsWith('data:')) {
  // base64-encoded image embedded in the page
} else {
  // a regular image URL
}
If you really want to handle the base64 links as well, here is a workaround.
import { parseDataURI } from 'dauria';
import mimeTypes from 'mime-types';
import fs from 'fs';

// parse the data URI into its MIME type and a Buffer of raw bytes
const fileContent = parseDataURI(file);

// you probably need an extension for that image
let ext = mimeTypes.extension(fileContent.MIME) || 'bin';

fs.writeFile('a random file' + '.' + ext, fileContent.buffer, function (err) {
  if (err) console.log(err);
});
I am using express-fileupload and it's working fine for uploading images. I can't find a way to limit the file size, or to write a check that makes sure no file larger than 1 MB gets uploaded.
app.post('/upload', function(req, res) {
  if (!req.files)
    return res.status(400).send('No files were uploaded.');

  // The name of the input field (i.e. "sampleFile") is used to retrieve the uploaded file
  let sampleFile = req.files.sampleFile;

  // Use the mv() method to place the file somewhere on your server
  sampleFile.mv('/somewhere/on/your/server/filename.jpg', function(err) {
    if (err)
      return res.status(500).send(err);

    res.send('File uploaded!');
  });
});
I tried something like this, but it actually truncates the part of the image beyond the limit:
app.use(fileUpload({
  limits: {
    fileSize: 1000000 // 1 MB
  },
}));
I could do this in JavaScript by checking the size of every file, but isn't there a built-in feature? And what about multiple file uploads in a single request? For multiple files it would have to loop over each one, check its size, exclude the files larger than 1 MB, and upload only those that meet the requirement. So, apart from writing my own code, is there really no built-in feature for this?
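For reference, the manual per-file check described above could look roughly like this. This is only a sketch, assuming the input field is named "sampleFile" and that express-fileupload delivers multiple files as an array:
app.post('/upload-many', function (req, res) {
  if (!req.files) return res.status(400).send('No files were uploaded.');

  // a single file arrives as an object, multiple files as an array
  let files = req.files.sampleFile;
  if (!Array.isArray(files)) files = [files];

  // keep only files of 1 MB or less
  const accepted = files.filter((f) => f.size <= 1000000);
  accepted.forEach((f) => f.mv('/somewhere/on/your/server/' + f.name));

  res.send('Uploaded ' + accepted.length + ' of ' + files.length + ' files.');
});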
It's truncating the image because the size limit was reached and you didn't explicitly tell the middleware to abort the upload in that case.
So, try setting the option "abortOnLimit" to true, like this:
app.use(fileUpload({
  limits: {
    fileSize: 1000000 // 1 MB
  },
  abortOnLimit: true
}));
For more information, this is what the documentation says about the 'abortOnLimit' option:
Returns a HTTP 413 when the file is bigger than the size limit if true. Otherwise, it will add a truncate = true to the resulting file structure.
Source link: https://www.npmjs.com/package/express-fileupload
In express-fileupload, I can see the busboy options. Have you tried that?
https://github.com/richardgirges/express-fileupload#using-busboy-options
I wanted to write this in comments but I don't have enough reputation points. :(
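A hedged sketch of that route, assuming express-fileupload forwards the busboy limits it receives (the specific values here are just examples):
// pass busboy limits through express-fileupload: cap individual file size and the number of files
app.use(fileUpload({
  limits: {
    fileSize: 1000000, // 1 MB per file
    files: 5           // at most 5 files per request
  },
  abortOnLimit: true
}));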
So I have followed Google's official sample for creating a Cloud Storage-triggered Firebase Function that creates resized thumbnails from uploaded images and uploads them to Storage as well. Here it is, simplified:
exports.generateThumbnail = functions.storage.object().onChange(event => {
  // get the uploaded file data (bucket, name, type...)
  // return if the file is not an image or its name begins with "thumb_"
  // download the uploaded image to a temporary local file,
  // resize it using ImageMagick,
  // upload it to Storage with the name "thumb_<filename>"
})
However, when the new thumbnail uploads, the function gets triggered again and so on in a loop. They avoided that by returning if the uploaded file has a "thumb_" prefix.
You then end up with two images (the original and the thumbnail), but I want to overwrite the existing image with the thumbnail so I only have one image at the original path.
I don't know how to go about this because I don't know how to avoid the re-upload loop without a name change. I could delete the original image after uploading the thumbnail, but the link pointing to the original image has already been returned and saved in the Realtime Database (these images are profile pictures for users).
After looking at the bucket.js documentation in the @google-cloud/storage npm module, I have finally managed to overwrite the original file/path with the thumbnail image and also avoid the loop.
It can be done by attaching custom metadata when uploading the thumbnail, and testing for that metadata when the function triggers the next time.
I will just post the changes I made, the rest is the same as in the linked sample.
Here's the testing:
const filePath = event.data.name
const metadata = event.data.metadata || {} // custom metadata may be absent on the original upload
if (metadata.isThumb) {
  console.log('Exiting: Already a thumbnail')
  return
}
And here's the part where the spawn promise resolves:
return spawn(/* ... */)
}).then(_ => {
  console.log('Thumbnail created locally.')
  metadata.isThumb = true // We add custom metadata
  const options = {
    destination: filePath, // Destination is the same as original
    metadata: { metadata: metadata }
  }
  // We overwrite the (bigger) original image but keep the path
  return bucket.upload(/* localThumb */, options)
})
I would like to resize and compress images after users upload them to my site. It seems like there are quite a few options for resizing images with node (e.g. https://github.com/lovell/sharp), but I would like to compress the images as well in order to save disk space and allow for faster serving times. Is there a library or something else that makes this possible?
Here is a simplified version of my current (functioning) route as it stands today:
var router = require('express').Router();
var bucket = require('../modules/google-storage-bucket');
var Multer = require('multer');

var multer = Multer({
  storage: Multer.memoryStorage(),
  limits: {
    fileSize: 5 * 1024 * 1024 // no larger than 5 MB
  }
});

// Process the file upload and upload to Google Cloud Storage.
router.post('/', multer.single('file'), (req, res) => {
  // reject if no file was uploaded
  if (!req.file) {
    res.status(400).send('No file uploaded.');
    return;
  }

  // Create a new blob in the bucket and upload the file data.
  var blob = bucket.file('fileName');
  var blobStream = blob.createWriteStream();

  blobStream.on('error', (err) => {
    console.log('error', err);
    res.sendStatus(500);
  });

  blobStream.on('finish', () => {
    console.log('everything worked!');
    res.sendStatus(200); // respond so the request doesn't hang
  });

  blobStream.end(req.file.buffer);
});

module.exports = router;
Sharp has an API for adjusting image compression. For example:
https://sharp.pixelplumbing.com/api-output#jpeg
Lists the options for JPEG write. You can adjust a range of knobs on the JPEG compressor to tune it for the performance/compression/quality tradeoff you want.
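For instance, a minimal sketch of re-encoding the uploaded buffer from your route with a lower JPEG quality (the width and quality values here are just examples):
const sharp = require('sharp');

// resize down (never up) and re-encode as JPEG with a lower quality setting
sharp(req.file.buffer)
  .resize({ width: 1200, withoutEnlargement: true })
  .jpeg({ quality: 80 })
  .toBuffer()
  .then((compressed) => {
    // write the compressed buffer instead of req.file.buffer
    blobStream.end(compressed);
  });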
You can improve compression by up to about 30% in the best case, which honestly feels a bit marginal to me. It's unlikely to make a significant difference to your page size or your load times.
In terms of storage savings, the best solution is not to store resized images at all: sharp is fast enough that you can simply generate all of your images on demand (most dynamic image-resizing web proxies use sharp as the engine). That's a change that really will save you money and make your pages look better.
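As an illustration of that on-demand approach, here is a hedged sketch of a resize-on-request route. It reuses the bucket from your code, and the route path, query parameter, and quality value are all made up for the example:
const sharp = require('sharp');

// GET /img/photo.jpg?w=400 -> download the original and resize it on the fly
router.get('/img/:name', async (req, res) => {
  try {
    const width = parseInt(req.query.w, 10) || 800;
    const [original] = await bucket.file(req.params.name).download();
    const resized = await sharp(original)
      .resize({ width })
      .jpeg({ quality: 80 })
      .toBuffer();
    res.type('image/jpeg').send(resized);
  } catch (err) {
    console.log('error', err);
    res.sendStatus(500);
  }
});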
From my experience I would recommend imagemin. I've used it as a Gulp plugin, but you can certainly use it directly in your project. You also have to install the third-party plugin modules imagemin-pngquant and imagemin-jpegtran.
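A minimal sketch of running it standalone (the folder names are placeholders; this follows the plugin-based API of recent imagemin versions):
const imagemin = require('imagemin');
const imageminJpegtran = require('imagemin-jpegtran');
const imageminPngquant = require('imagemin-pngquant');

(async () => {
  // compress every JPEG/PNG in uploads/ and write the results to compressed/
  const files = await imagemin(['uploads/*.{jpg,png}'], {
    destination: 'compressed',
    plugins: [
      imageminJpegtran(),
      imageminPngquant({ quality: [0.6, 0.8] })
    ]
  });
  console.log(files.map((f) => f.destinationPath));
})();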
Hope it helps :)