I've been looking for a solution to optimize images for better performance.
After trying a few libraries (like sharp and imagemin, which produced files that were never smaller and sometimes bigger than the originals), I am now looking to experiment with Google's PageSpeed. Unfortunately, PageSpeed seems to support only Apache and Nginx, which I would rather not add to my stack just for image optimization (also, I would prefer to run the optimization once on upload, not on every server request, even if cached).
I will be very grateful for any information that might help me implement this in native node.js, and for any other (working!) image optimization recommendations.
I suggest you only use node for dynamic content (e.g: your application server logic).
For static content, such as images, stylesheets and others... just serve them with a regular web server like nginx. There you can use ngx_pagespeed.
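If you go down that path, enabling the image filters is only a few directives. A minimal sketch, assuming ngx_pagespeed is already compiled into your nginx build (the paths and server block are placeholders):

    server {
        listen 80;
        root /var/www/static;

        pagespeed on;
        # cache folder must exist and be writable by the nginx worker
        pagespeed FileCachePath /var/ngx_pagespeed_cache;
        # recompresses and resizes images on the fly
        pagespeed EnableFilters rewrite_images;
    }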
Try npm module lwip.
It's a standalone library (rebuilt when installing) with no runtime dependencies.
I use its resize and scale operations to create thumb images in my file service project.
Check if this is what you need. :)
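For reference, here is a minimal sketch of that kind of thumbnail step, assuming lwip's open/scale/writeFile API (file names and the scale factor are placeholders):

    var lwip = require('lwip');

    lwip.open('uploads/photo.jpg', function (err, image) {
      if (err) return console.error(err);
      // scale to half size, then write the thumb next to the original
      image.scale(0.5, function (err, scaled) {
        if (err) return console.error(err);
        scaled.writeFile('uploads/photo-thumb.jpg', function (err) {
          if (err) return console.error(err);
          console.log('thumbnail written');
        });
      });
    });

You would hook this into your upload handler so the optimization runs once per file, as you wanted.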
The goal is to have a .html file that can be opened directly (e.g. run as file://path-to-file.../file.html in Chrome browser).
There is a /user-json folder where users can drop in their own .json files, and the .html file loads up the JSON data.
I am currently using axios to a) find all files in that /user-json directory, and then b) load them in for use in the JavaScript.
The issue is that browsers block these axios requests to local files due to CORS security policies.
A workaround is to use --allow-file-access-from-files, but I want to provide an easy way for anyone to just drop their own "user-json" data into that /user-json folder and have the .html detect and load it, without needing to fiddle with their Chrome settings or run a web server.
Any ideas for getting this to work, or alternative methods? Please don't suggest running the files on a server, as I'm aware of that option; I'm looking for a way users can provide their own files that the locally opened .html file can detect and load.
There are many ways you can go about this. It's mostly a matter of picking your favourite abstraction.
Easiest would be to leverage a service like CORS Anywhere. Though of course that isn't the most robust solution, and if you're concerned with the aesthetics of your code, it likely won't appeal (note: there is an NPM package).
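To give an idea of the mechanics, a rough sketch with axios (the herokuapp instance is the public demo, which is rate limited; the target URL is a placeholder):

    // prefix the target URL with the proxy so the response carries CORS headers
    var proxy = 'https://cors-anywhere.herokuapp.com/';
    var target = 'https://example.com/user-json/data.json'; // placeholder

    axios.get(proxy + target)
      .then(function (response) { console.log(response.data); })
      .catch(function (error) { console.error(error); });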
Alternatively, if you want to host your own solution, you can spin up a free instance, like a Cloudflare Worker; they even have an example of how to accomplish what you're after here.
Passing your requests through nginx is also possible, and arguably much more succinct and performant.
Any way you go about it, you'll have to route your requests through some kind of proxy, because a stock Chrome browser is not gonna cut it.
Bon voyage.
I am working on a website based on AngularJS with a Rails backend.
The site is currently live in production.
The issue I am having is that after the assets have been precompiled with rake assets:precompile, the overall JS file size goes above 1 MB, so the site takes time to load.
This is a major issue, and since the site is fully AJAX-based, I cannot implement page caching.
I have also tried gzip on my nginx server, but it is not helping.
This is hampering the performance of the site, and I would welcome any help or suggestions.
Thanks
I don't know about RoR or the rake assets you mentioned, but here are a few leads based on how I proceed (lately I've started using Grunt); a Gruntfile sketch follows these tips:
Concatenate your JS files into one file. It's easier for the browser to process one request than many little ones.
Minify your JS files and make sure to use the minified versions of your libraries.
Try to adopt a smart approach to loading your libraries and your own files. For instance, if you only need graphics in your admin dashboard, make sure not to load d3.js on your front page. I know the jQuery ecosystem is full of useful plugins, but I've seen way too many developers take shortcuts and claim they need jQuery when other viable alternatives exist.
Serving files with gzip is a good idea. This should reduce the size of your files significantly.
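A minimal Gruntfile covering the first two tips, assuming the grunt-contrib-concat and grunt-contrib-uglify plugins are installed (paths are placeholders):

    module.exports = function (grunt) {
      grunt.initConfig({
        // 1. concatenate all application JS into a single file
        concat: {
          dist: { src: ['app/js/**/*.js'], dest: 'dist/app.js' }
        },
        // 2. minify the concatenated file
        uglify: {
          dist: { files: { 'dist/app.min.js': ['dist/app.js'] } }
        }
      });
      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.registerTask('default', ['concat', 'uglify']);
    };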
Also, could you provide a link to your website?
First, my HTML references things with relative URLs on the server. Do I need to change these to full URLs so the app knows I don't mean localhost?
Where is the line between what you package in the app and what you grab from the server? Do you package all of your JS and CSS? Do you also grab scripts from the server sometimes? Do you package images and also grab some images dynamically?
Any other things to keep in mind when starting a port of a web app to PhoneGap? I'm just having a hard time figuring out exactly where the lines are usually drawn as far as assets.
Thanks very much.
Do you care what happens when your user loses his connection to the internet from his phone? If not, then you can reference CSS and images from the server. However, if you use the min version of jQuery, you can just embed it all in your application, and then you won't have to worry as much about lack of connectivity.
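As an illustrative sketch of the split (paths and the domain are placeholders):

    <!-- packaged with the app: core assets, relative paths, work offline -->
    <link rel="stylesheet" href="css/app.css">
    <script src="js/jquery.min.js"></script>

    <!-- fetched from the server: full URLs required, need connectivity -->
    <script src="https://example.com/js/optional-feature.js"></script>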
Multiple sites reference combining JavaScript and CSS files to improve web page performance, including examples of using ANT build scripts to concatenate the files prior to deployment.
I've searched and haven't found any information on how to automate updating references to those files in HTML and other documents. I want to avoid hacking together something error-prone and would rather learn from others who have automated builds in the past.
Are there automated tools in the wild to complete this task that I'm not seeing? Are there recommended processes to update the script and link tags in HTML? Can these solutions be integrated with ANT or similar build tools?
There sure is, and it's a smart thing to do.
I found a PHP solution; I don't know if that's okay for you, but even if it isn't, you can still read its source (it's not difficult) and learn a lot. The solution works like this:
Rewrite your requests like this: from css/main.css and css/skin.css to css/main.css,skin.css (of course you can put many more).
Use Apache's mod_rewrite to redirect this request to a script (in our case combine.php) that will combine all the files into a single one (a sketch of the rewrite rule follows these steps).
The script combines all the files and sends the combined file. Then it saves it to a cache folder.
Next time around it checks if there is an up-to-date version of the cache and serves that one. If the latest file modification time has changed, it discards the cache.
The solution works great, and it even makes use of HTTP cache headers and emits an ETag, which you should do anyway.
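A hypothetical .htaccess rule matching that setup (combine.php and the query parameter names are illustrative, not the article's exact code):

    RewriteEngine On
    # css/main.css,skin.css -> combine.php?type=css&files=main.css,skin.css
    RewriteRule ^css/([^/]+\.css)$ combine.php?type=css&files=$1 [L]
    RewriteRule ^js/([^/]+\.js)$ combine.php?type=js&files=$1 [L]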
You are correct: this is a great way to speed up page loading. It will even work in conjunction with a CDN, which another poster recommended.
Here is a small script that will pack multiple files into one for deployment. It supports both JS and CSS, and will even "minify" them by removing whitespace, etc. Just hook it into your build and deploy scripts.
juicer: http://cjohansen.no/en/ruby/juicer_a_css_and_javascript_packaging_tool
What's even better, it will follow JS and CSS import statements, so you only need to point your HTML files to the loader file, and it will work in both development and production. (Assuming you replace the loader file with the combined file on deployment.)
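If memory serves, basic usage looks roughly like this (command names are from juicer's docs; verify against the link above):

    # install the gem and its YUI Compressor dependency
    gem install juicer
    juicer install yui_compressor
    # merge main.js with everything it depends on, writing main.min.js
    juicer merge public/javascripts/main.js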
There are others, including some run-time solutions. But it sounds like you have a build process in place anyway.
As far as HTML updating goes, if you still need it: automated deployments are very popular in the Ruby world, so you may find some standalone utilities that help even for non-Ruby projects. Methinks this would be best handled by your own project's template language, though (with a static resource revision ID, or such).
Good luck, and let us know what you find.
I think what you really want is a CDN (Content Delivery Network).
Read about it here
http://developer.yahoo.com/performance/rules.html
http://en.wikipedia.org/wiki/Content_delivery_network
I've been using yuicompressor.jar on my test server for on-the-fly minimisation of changed JavaScript files. Now that I have deployed the website to the public server, I noticed that the server's policies forbid the use of exec() or its equivalents, so no more Java execution for me.
Is there a decent on-the-fly JS compressor implemented in PHP? The only thing resembling this that I was able to find was Minify, but it's more of a full-blown compression solution with cache and everything. I want to keep the files separate and have the minimised files follow my own naming conventions, so Minify is a bit too complex for this purpose.
The tool, like yuicompressor, should be able to take either a filename or JavaScript as input and should either write to a file or output the compressed JavaScript.
EDIT: To clarify, I'm looking for something that does not have to be used as a standalone (i.e. it can be called from a function, rather than sniffing my GET variables). If I just wanted a compressor, Minify would obviously be a good choice.
EDIT2: A lot has changed in the five years since I asked this question. Today I would strongly recommend separating the front-end workflow from the server code. There are plenty of good tools for JS development around and except for the most trivial jQuery enhancements it's a better idea to have a full workflow with automated bundling, testing and linting in place and just deploy the minified bundles rather than the raw files.
Yes there is; it's called Minify.
The only complexity to worry about is setting up a group, and there's really nothing to it. Edit the groupsConfig.php file if you want multiple JS/CSS files in one <script> or <link> statement:
    return array(
        'js-common' => array(
            '//js/jquery/jquery-1.3.2.min.js',
            '//js/common.js',
            '//js/visuals.js',
            '//js/jquery/facebox.js'
        ),
        'css-common' => array(
            '//css/main.css',
            '//css/layout.css',
            '//css/facebox.css'
        )
    );
To include the above 'js-common' group, do this:
    <script type="text/javascript" src="/min/g=js-common"></script>
(I know, I was looking for the exact same thing, not knowing how to deal directly with the jar file using PHP; that's how I ended up here, so I'm sharing what I found.)
Minify is a huge library with tons of functionality. However, the minifying part is a very tiny class: http://code.google.com/p/minify/source/browse/trunk/min/lib/Minify/YUICompressor.php
It's very easy to use:
    // set the path to the jar file
    Minify_YUIcompressor::$jarFile = _ROOT . 'libs/java/yuicompressor.jar';
    // set the path to a writable temp folder
    Minify_YUIcompressor::$tempDir = _ROOT . 'temp/';
    // minify
    $yourcssminified = Minify_YUIcompressor::minifyCss($yourcssstringnotminified, $youroptions);
The same process works for JS. If you need more functionality, just pick from the library and read the source to see how to make direct calls from your app.
I didn't read the question well: since this class works by calling the jar file, the OP can't use it anyway with his server config.
Minify also includes minifying methods other than YUI, for example:
http://code.google.com/p/minify/source/browse/trunk/min/lib/JSMinPlus.php?r=443&spec=svn468
Try Lissa:
Lissa is a generic CSS and JavaScript loading utility. Lissa is an extension of the YUI PHP Loader aimed at solving one of the current loader limitations: combo loading. YUI PHP Loader ships with a combo loader that is capable of reducing HTTP requests and increasing performance by outputting all the YUI JavaScript and/or CSS requirements as a single request per resource type. Meaning even if you needed 8 YUI components, which ultimately boil down to say 13 files, you would still only make 2 HTTP requests: one for the CSS and another for the JavaScript. That's great, but what about custom non-YUI resources? YUI PHP Loader will load them, but it loads them as separate includes, so they miss out on the benefits of the combo service and the number of HTTP requests for the page increases. Lissa works around this limitation by using the YUI PHP Loader to handle the loading and sorting of YUI and/or custom resource dependencies, and pairs that functionality with Minify.