I would like to resize and compress images after users upload them to my site. It seems like there are quite a few options for resizing images with node (e.g. https://github.com/lovell/sharp), but I would like to compress the images as well in order to save disk space and allow for faster serving times. Is there a library or something else that makes this possible?
Here is a simplified version of my current (functioning) route as it stands today:
var router = require('express').Router();
var bucket = require('../modules/google-storage-bucket');
var Multer = require('multer');
var multer = Multer({
  storage: Multer.memoryStorage(),
  limits: {
    fileSize: 5 * 1024 * 1024 // no larger than 5 MB
  }
});
// Process the file upload and upload to Google Cloud Storage.
// Process the file upload and upload to Google Cloud Storage.
router.post('/', multer.single('file'), (req, res) => {
  // rejecting if no file is uploaded
  if (!req.file) {
    res.status(400).send('No file uploaded.');
    return;
  }
  // Create a new blob in the bucket and upload the file data.
  var blob = bucket.file('fileName');
  var blobStream = blob.createWriteStream();
  blobStream.on('error', (err) => {
    console.log('error', err);
    res.sendStatus(500);
  });
  blobStream.on('finish', () => {
    console.log('everything worked!');
    res.status(200).send('Upload complete.');
  });
  blobStream.end(req.file.buffer);
});
module.exports = router;
Sharp has an API for adjusting image compression. For example:
https://sharp.pixelplumbing.com/api-output#jpeg
Lists the options for JPEG write. You can adjust a range of knobs on the JPEG compressor to tune it for the performance/compression/quality tradeoff you want.
You can improve compression by up to about 30% in the best case, which honestly feels a bit marginal to me. It's unlikely to make a significant difference to your page size or your load times.
In terms of storage savings, the best solution is not to store images at all: sharp is fast enough that you can simply generate all of your images on demand (most dynamic image resizing web proxies use sharp as the engine). That's a change that really will save you money and make your pages look better.
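As a rough sketch of what the upload route could do before writing to the bucket, here is a helper that recompresses the buffer with sharp. The width and quality values are illustrative, not recommendations; tune them for your own size/quality tradeoff:

```javascript
// Pure helper: pick a sharp output format from the upload's MIME type.
function outputFormat(mimeType) {
  return mimeType === 'image/png' ? 'png' : 'jpeg';
}

// Resize and recompress an uploaded buffer. Requires `npm install sharp`.
async function compressImage(buffer, mimeType) {
  const sharp = require('sharp');
  const pipeline = sharp(buffer).resize({ width: 1600, withoutEnlargement: true });
  return outputFormat(mimeType) === 'png'
    ? pipeline.png({ compressionLevel: 9 }).toBuffer()
    : pipeline.jpeg({ quality: 80, mozjpeg: true }).toBuffer();
}
```

In the route above you would then call `const compressed = await compressImage(req.file.buffer, req.file.mimetype);` and pass `compressed` to `blobStream.end(...)` instead of the raw buffer.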
From my experience, I would recommend imagemin. I've used it as a Gulp plugin, so you can definitely use it in your project. Note that you also have to install the third-party plugins imagemin-pngquant and imagemin-jpegtran.
Hope you appreciate it :)
This is my code for uploading images to cloudinary.
let imageArr = [];
if (media) {
  imageArr.push(media.thumbUrl);
}
let resultPromise = imageArr.map(async (img) => {
  await cloudinary.uploader.upload(
    img,
    { folder: process.env.STORE_NAME }, // uploads to a specific store folder
    async (err, result) => {
      imageUrls.push(result.secure_url);
    }
  );
});
But the images are automatically being resized to 200 x 200 during upload; the actual size is 1000 x 1000.
https://res.cloudinary.com/sreeragcodes/image/upload/v1626531856/igowvkzckmlafwpipc22.png
Is this an error on the coding side, or should I configure something in Cloudinary? I checked Cloudinary, but there's no configuration for this.
In the code you shared, you aren't applying any incoming transformation which would be one reason why your images may be resized. If you log into your Cloudinary account and go to your Settings -> Upload tab and scroll down to the Upload Presets section, do you have any default upload preset selected? And if so, does that upload preset have any incoming transformations?
Unless you are applying an incoming transformation as part of a default upload preset, then this transformation wouldn't be applied on the Cloudinary level.
If you're not applying a default upload preset, then I would check the input images and perhaps store the images locally before uploading them to Cloudinary and seeing the size of those.
Note that you are uploading to Cloudinary the thumbUrl of each media. Could it be that this thumbnail URL has been configured to be 200x200px, which would explain why your images are that size when uploaded to Cloudinary?
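If the thumbnail is indeed the culprit, a minimal sketch of the fix is to prefer the original image URL over the thumbnail when building the upload list. Note that `media.url` is an assumed field name here; check what your media object actually exposes:

```javascript
// Prefer the full-size image over the 200x200 thumbnail.
// `media.url` is a hypothetical field; adjust to your media object's shape.
function pickUploadSource(media) {
  return media.url || media.thumbUrl;
}

// Usage inside the existing code:
// if (media) {
//   imageArr.push(pickUploadSource(media));
// }
```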
I want to implement a big file downloading (approx. 10-1024 Mb) from the same server (without external cloud file storage, aka on-premises) where my app runs using Node.js and Express.js.
I figured out how to do that by converting the entire file into a Blob, transferring it over the network, and then generating a download link with window.URL.createObjectURL(…) for the Blob. This approach works perfectly as long as the files are small; otherwise it is impossible to keep the entire Blob in the RAM of either the server or the client.
I've tried to implement several other approaches with File API and AJAX, but it looks like Chrome loads the entire file into RAM and only then dumps it to the disk. Again, it might be OK for small files, but for big ones it's not an option.
My last attempt was to send a basic GET request:
const aTag = document.createElement("a");
aTag.href = `/downloadDocument?fileUUID=${fileName}`;
aTag.download = fileName;
aTag.click();
On the server-side:
app.mjs
app.get("/downloadDocument", async (req, res) => {
  req.headers.range = "bytes=0";
  const [urlPrefix, fileUUID] = req.url.split("/downloadDocument?fileUUID=");
  const downloadResult = await StorageDriver.fileDownload(fileUUID, req, res);
});
StorageDriver.mjs
export const fileDownload = async function fileDownload(fileUUID, req, res) {
  // e.g. C:\Users\User\Projects\POC\assets\wanted_file.pdf
  const assetsPath = _resolveAbsoluteAssetsPath(fileUUID);
  const options = {
    dotfiles: "deny",
    headers: {
      "Content-Disposition": "form-data; name=\"files\"",
      "Content-Type": "application/pdf",
      "x-sent": true,
      "x-timestamp": Date.now()
    }
  };
  res.sendFile(assetsPath, options, (err) => {
    if (err) {
      console.log(err);
    } else {
      console.log("Sent");
    }
  });
};
When I click on the link, Chrome shows the file in Downloads but with a status Failed - No file. No file appears in the download destination.
My questions:
Why do I get Failed - No file when sending a GET request?
As far as I understand, res.sendFile can be the right choice for small files, but for big ones it's better to use res.write, which can split the response into chunks. Is it possible to use res.write with a GET request?
P.S. I've elaborated this question to make it more narrow and clear. Previously this question was focused on downloading a big file from Dropbox without storing it in the RAM, the answer can be found:
How to download a big file from Dropbox with Node.js?
Chrome can't show a nice download progress because the file is downloaded in the background. Only after downloading is a link to the file created and "clicked" to force Chrome to show the dialog for the already downloaded file.
It can be done more easily. You need to create a GET request and let the browser download the file, without ajax.
app.get("/download", async (req, res, next) => {
  const { fileName } = req.query;
  const downloadResult = await StorageDriver.fileDownload(fileName);
  res.set('Content-Type', 'application/pdf');
  res.send(downloadResult.fileBinary);
});
function fileDownload(fileName) {
  const a = document.createElement("a");
  a.href = `/download?fileName=${fileName}`;
  a.download = fileName;
  a.click();
}
I have followed a tutorial to produce a Twitter bot using node.js, github and Heroku. Everything works great, the bot pulls a random image from a folder at timed intervals and tweets the image.
I'm trying to change the process so that instead of pulling images from a local folder (called 'images'), it pulls them from a web hosted folder. For example, rather than get the images from the local /images folder, I'd like it to pull the image from http://mysite/images. I have tried changing what I think are the relevant bits of code below, but am having no luck. Could anybody offer some advice please?
The whole code is below, but for reference, the bits I have tried changing are:
var image_path = path.join(__dirname, '/images/' + random_from_array(images))
and
fs.readdir(__dirname + '/images', function(err, files) {
In both cases above I tried changing the /images folder to http://mysite/images but it doesn't work. I get an error stating that no such folder can be found. I have tried changing/deleting the __dirname part too but to no avail.
Any help appreciated!
Full code below:
const http = require('http');
const port = process.env.PORT || 3000;
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/html');
  res.end('<h1>Hello World</h1>');
});
server.listen(port, () => {
  console.log(`Server running at port ` + port);
});
var fs = require('fs'),
    path = require('path'),
    Twit = require('twit'),
    config = require(path.join(__dirname, 'config.js'));
var T = new Twit(config);
function random_from_array(images){
  return images[Math.floor(Math.random() * images.length)];
}
function upload_random_image(images){
  console.log('Opening an image...');
  var image_path = path.join(__dirname, '/images/' + random_from_array(images)),
      b64content = fs.readFileSync(image_path, { encoding: 'base64' });
  console.log('Uploading an image...');
  T.post('media/upload', { media_data: b64content }, function (err, data, response) {
    if (err){
      console.log('ERROR:');
      console.log(err);
    }
    else{
      console.log('Image uploaded!');
      console.log('Now tweeting it...');
      T.post('statuses/update', {
        /* You can include text with your image as well. */
        // status: 'New picture!',
        /* Or you can pick random text from an array. */
        status: random_from_array([
          'New picture!',
          'Check this out!'
        ]),
        media_ids: new Array(data.media_id_string)
      },
      function(err, data, response) {
        if (err){
          console.log('ERROR:');
          console.log(err);
        }
        else{
          console.log('Posted an image!');
        }
      });
    }
  });
}
fs.readdir(__dirname + '/images', function(err, files) {
  if (err){
    console.log(err);
  }
  else{
    var images = [];
    files.forEach(function(f) {
      images.push(f);
    });
    /*
    You have two options here. Either you keep your bot running and
    upload images using setInterval (see below; 30000 means 30,000
    milliseconds, or 30 seconds), --
    */
    setInterval(function(){
      upload_random_image(images);
    }, 30000);
    /*
    Or you could use cron (code.tutsplus.com/tutorials/scheduling-tasks-with-cron-jobs--net-8800),
    in which case you just need:
    */
    // upload_random_image(images);
  }
});
Well, my first answer to a question about building a twitter bot would probably be: "Don't do that!" (Because the world doesn't need more twitter bots.) But, putting that aside...
Your code is using the "fs" library, which is exactly what you need for grabbing stuff from the local file system. That was fine. But now you want to grab stuff from web servers, which "fs" is not able to do. Instead, you need a library that can make an HTTP or HTTPS request across the web and bring back some data. There are different libraries that do this. It looks like you are already bringing in the "http" library, so you are on the right track there, but you seem to be using it to set up a server, and I don't think that's what you want. Rather, you need to use http as a client, and replace your fs.readFileSync() calls with the appropriate calls from the http library (if that's the one you choose to use) to pull in the data you want from whatever server has it.
Hope that helps. And I hope your twitter bot is going to be a good little bot, not an evil one!
I am using express-fileupload and it's working fine and uploading images. I can't find a way to limit the file size, or to write a check that ensures no uploaded file is larger than 1 MB.
app.post('/upload', function(req, res) {
  if (!req.files)
    return res.status(400).send('No files were uploaded.');
  // The name of the input field (i.e. "sampleFile") is used to retrieve the uploaded file
  let sampleFile = req.files.sampleFile;
  // Use the mv() method to place the file somewhere on your server
  sampleFile.mv('/somewhere/on/your/server/filename.jpg', function(err) {
    if (err)
      return res.status(500).send(err);
    res.send('File uploaded!');
  });
});
I tried something like this, which actually truncates the remaining part of the image:
app.use(fileUpload({
  limits: {
    fileSize: 1000000 // 1 MB
  },
}));
I can do this in JavaScript by checking the file size of every file, but isn't there any built-in feature? And what about multiple file uploads in a single click? For multiple files, it would loop through each file, check its size, exclude those larger than 1 MB, and upload only those that meet the requirement. So, apart from writing my own code, is there any built-in feature for this?
It's truncating the remaining image because the size limit was reached and you didn't explicitly set the middleware to abort the upload in this case.
So, try setting the option "abortOnLimit" to true, like this:
app.use(fileUpload({
  limits: {
    fileSize: 1000000 // 1 MB
  },
  abortOnLimit: true
}));
For more information, this is what the documentation tells about using the option 'abortOnLimit':
Returns a HTTP 413 when the file is bigger than the size limit if true. Otherwise, it will add a truncated = true to the resulting file structure.
Source link: https://www.npmjs.com/package/express-fileupload
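As for the multi-file part of the question, express-fileupload has no built-in per-file filtering, but a route handler can loop over the array itself. This is a sketch only; the input name "photos" and the destination path are assumptions:

```javascript
const MAX_BYTES = 1000000; // 1 MB

// Pure helper: keep only files at or under maxBytes.
function filterBySize(files, maxBytes) {
  return files.filter((f) => f.size <= maxBytes);
}

// Route handler sketch; register with app.post('/upload-multiple', handleMultiUpload).
function handleMultiUpload(req, res) {
  if (!req.files || !req.files.photos) {
    return res.status(400).send('No files were uploaded.');
  }
  // express-fileupload gives an array for multi-file inputs, a single object otherwise
  const photos = Array.isArray(req.files.photos) ? req.files.photos : [req.files.photos];
  const accepted = filterBySize(photos, MAX_BYTES);
  accepted.forEach((file) => {
    file.mv('/somewhere/on/your/server/' + file.name, (err) => {
      if (err) console.log(err);
    });
  });
  res.send('Uploaded ' + accepted.length + ' of ' + photos.length + ' files.');
}
```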
In express-fileupload, I can see the busboy options. Have you tried that?
https://github.com/richardgirges/express-fileupload#using-busboy-options
I wanted to write this in comments but I don't have enough reputation points. :(
I'm building a Meteor app that communicates with a desktop client via HTTP requests with https://github.com/crazytoad/meteor-collectionapi
The desktop client generates images at irregular time intervals, and I want the Meteor site to only display the most recently generated image (ideally in real time). My initial idea was to use a PUT request to a singleton collection with the base64 imagedata, but I don't know how to turn that data into an image in the web browser. Note: the images are all pretty small (much less than 1 MB) so using gridFS should be unnecessary.
I realize this idea could be completely wrong, so if I'm completely on the wrong track, please suggest a better course of action.
You'll need to write a middleware to serve your images with proper MIME type. Example:
WebApp.connectHandlers.stack.splice(0, 0, {
  route: '/imageserver',
  handle: function(req, res, next) {
    // Assuming the path is /imageserver/:id, here you get the :id
    var iid = req.url.split('/')[1];
    var item = Images.findOne(iid);
    if (!item) {
      // Image not found
      res.writeHead(404);
      res.end('File not found');
      return;
    }
    // Image found
    res.writeHead(200, {
      'Content-Type': item.type
    });
    res.write(Buffer.from(item.data, 'base64'));
    res.end();
  },
});
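Since the images are small and already stored as base64, an alternative worth considering is to skip the middleware entirely and render a data URI straight into the template. This sketch reuses the `item.type` / `item.data` field names from the middleware above; the `createdAt` sort field is an assumption:

```javascript
// Build a data URI from the stored document so the template can use it
// as <img src="..."> directly, with no server-side route.
function dataUri(item) {
  return 'data:' + item.type + ';base64,' + item.data;
}

// e.g. in a Blaze template helper (field names are illustrative):
// Template.latestImage.helpers({
//   src: function () {
//     var item = Images.findOne({}, { sort: { createdAt: -1 } });
//     return item && dataUri(item);
//   }
// });
```

Because the collection is reactive, the `<img>` updates automatically whenever the desktop client PUTs a new image, which gives you the "most recent image in real time" behavior for free.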