We'd like to experiment with CoffeeScript and eventually convert all of our JS code to coffee. As we are using require.js, I assume the simplest approach for the loading part in local development is to use the require.js CoffeeScript plugin and adjust the module loading accordingly, e.g.
var myModule = require('cs!myModule');
If my understanding is correct, this procedure implies that all .coffee files are compiled on the fly. Does it run the risk of quickly becoming a performance issue and therefore slowing development down significantly?
If so, what alternative do you suggest?
I guess whether or not this will become a performance issue depends largely on the size and structure of your application. From my experience, compiling coffeescript doesn't take a great deal of time, but I've only ever used it with fairly small projects (5-10 files, ~50 lines each).
Since Require.js allows you to split up your code nicely into modules that are only loaded when they're required, it should be possible to structure your app in such a way that only a few CoffeeScript files need to be loaded and compiled for each page load.
The only alternative I have tried is to run the CoffeeScript compiler from the command line in watch mode. In this mode it will watch your CoffeeScript files and compile them into JavaScript whenever it detects changes. (As an aside, I've found that this isn't perfect either: the compiler sometimes seems to stop watching my folder, leaving me scratching my head for a few minutes over why my changes have had no effect.)
Personally, I'd recommend just using the require.js CoffeeScript plugin for development. If it becomes too much of a performance problem, you could easily switch to the command-line compiler in watch mode; converting your require calls should just be a matter of a simple search and replace, I'd imagine.
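For reference, a typical watch-mode invocation looks something like this (the directory names are just an example):
coffee --watch --compile --output js/ src/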
Related
Not specifically a Rails question, but a question within a Rails app.
In my app I am using the jsbundling-rails gem configured with esbuild.
This gem adds a build line to my package.json file. It works and compiles all my JS and runs fine. However, I found that the generated file is rather large so I started looking at ways to optimise it.
My esbuild statement at this point looks like:
"build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds"
Firstly I thought I could try making my imports conditional. Eg, only import them when they are actually required. I asked another question on how to do that here.
I learned quite a lot digging into that, but at the end of the day it oddly made no difference to my JS output.
Chrome currently reports that 91% of the code in my main JS file is unused. It looks like all the imports are still being compiled together, whether they are statically or dynamically imported. Why could this be?
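For clarity, here is the kind of change I tried (module and element names hypothetical):
// what I had originally: a static import, always compiled into the bundle
// import { renderChart } from "./chart"

// what I changed to: a dynamic import, in theory fetched only when needed
document.getElementById("show-chart").addEventListener("click", async () => {
  const { renderChart } = await import("./chart")
  renderChart()
})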
I then looked further into esbuild, I spotted the --splitting flag. It sounded reasonably correct so I updated my build script to be:
"build": "esbuild app/javascript/*.* --bundle --splitting --format=esm --sourcemap --outdir=app/assets/builds"
This caused a huge number of JS files to be output (I think they are referred to as "chunks").
I ran my app, and the JS failed to load. The console stated:
Uncaught SyntaxError: Cannot use import statement outside a module
I wasn't 100% sure why this was the case, but I guessed that I needed to add type: "module" to my javascript_include_tag in my Rails application layout view.
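For the record, the change was roughly this one line in the layout (the extra option is passed through as an HTML attribute; exact arguments will depend on your app):
<%= javascript_include_tag "application", type: "module" %>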
This made the JS load (which is good :-) )
BUT... The percentage of unused JS code is still 84% of my application.js
So..... my questions are as follows:
Are my dynamic imports of modules working?
Why does static or dynamic importing appear to make no difference?
How can I effectively reduce the size of the output code and reduce the unused percentage of JS on my home page?
This all started because I ran Google's Lighthouse test on my site, and it reported my Structure and Accessibility as practically perfect, but Performance was < 40. I am aiming to fix this.
I look forward to hearing from you with ideas on how I can try to fix this and improve my Lighthouse Performance score.
Are my dynamic imports of modules working?
The paradigm you want is called Propshaft. Try looking at the Sprockets -> Propshaft upgrade guide:
https://github.com/rails/propshaft/blob/main/UPGRADING.md
Why does static or dynamic importing appear to make no difference?
Because even though esbuild allows ES module syntax (import statements), it still bundles everything into one big file (one big file for each javascript_include_tag you have, of course).
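If you want genuinely separate bundles per area of the site, one option (entry file names hypothetical) is to give esbuild one entry point per section and include only the relevant bundle in each layout:
"build": "esbuild app/javascript/home.js app/javascript/admin.js --bundle --sourcemap --outdir=app/assets/builds"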
How can I effectively reduce the size of the output code and reduce the unused percentage of JS on my home page?
The Sprockets paradigm was built for the HTTP/1 web, when keeping connections open and progressive downloads weren't realistic. HTTP/2 changed all that, and now it's more efficient to do code splitting as you want to do. But the Rails world is still very behind, and most apps still use Sprockets and try to optimize/minify as much as possible.
I'd recommend you take this course of action:
(1) Try the old-style way first. Remove anything unnecessary, split your app into different sections, and load different manifest files for different sections. Use minification. See how far that gets you.
(2) Start experimenting with the new Propshaft for a few weeks until you fully understand it. If you feel it is solid, migrate to that.
I am developing a pretty big SPA (~30 MB final bundle), and unfortunately one of the requirements is that the app has to be released as one big HTML file. I use webpack to connect all the pieces together.
Currently I am facing a problem with performance (some of the libraries are quite big). They "eat" a lot of RAM and affect loading time due to code evaluation in the browser. I would like to postpone that evaluation and evaluate only the modules necessary for the app's main screen.
My idea is to use the same mechanism webpack uses for sourcemaps:
https://webpack.js.org/configuration/devtool/ (eval-source-map)
Webpack simply puts the code of each module inside eval("code of module"), which prevents automatic evaluation by the JavaScript engine. Of course this code can't be minified, and there is also a sourcemap attached as base64 at the end. I would like to do the same without sourcemaps and with uglification included. Moreover, I have an idea to reduce the size of the application by compressing the sources, so eventually it would be eval(gz.decompress("code of module")).
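To make the idea concrete, here is a minimal sketch of the deferred-evaluation part, assuming the module source ships as a plain string (the decompression step is left out):
// module code shipped as a string, so the engine does not evaluate it at load time
var moduleSource = "exports.add = function (a, b) { return a + b; };";
var cachedExports = null;

function loadDeferredModule() {
  if (!cachedExports) {
    var exports = {};    // the object the module source populates
    eval(moduleSource);  // direct eval sees the local `exports`
    cachedExports = exports;
  }
  return cachedExports;
}

// evaluated only when the main screen actually needs it
console.log(loadDeferredModule().add(2, 3)); // 5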
This will be a huge change to the application, so before I go reinventing the wheel, I would like to ask you:
Does this make sense as an approach to the problem?
Do you know of any existing solutions?
Would you suggest using any existing webpack components, like:
https://webpack.github.io/docs/code-splitting.html
or writing my own solution from scratch (a loader/plugin)?
Don't do what you're trying to do!
If you do want a weird trick to get what you want, try including your big JS file dynamically (see here, or google jquery getscript). No additional webpack actions required.
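A minimal version of that trick, assuming jQuery is already on the page (file and function names hypothetical):
// fetch and execute the big bundle only at the moment you choose
$.getScript('/assets/big-bundle.js', function () {
  startHeavyFeature(); // defined by the bundle once it has executed
});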
If not, please, continue reading.
You're dealing with the problem from the wrong perspective.
First, make sure you are doing all the obvious HTML/HTTP stuff:
You're downloading the gzipped version of the file (if not, google http script gzip).
You're including the <script> tag at the end of the body, so the JS starts downloading and parsing only after the HTML has been rendered.
Then, most importantly, try to figure out where the 30MB is coming from. It's unlikely to be a fair sum of many dependencies; usually it's one particular bloated library (or two). For instance, make sure you use got instead of request, because the latter is bloated. Find alternatives for the oversized dependencies.
No SPA in the world should have a 30MB JS bundle. I'm assuming your project isn't very large, because otherwise it would be business critical and you would invest in a decent build and delivery strategy (e.g. code splitting, dead code elimination, etc.).
1) A similar problem can be solved with webpack's code-splitting functionality.
The idea is that you don't load route-specific code and libraries until the user accesses the specific page.
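With webpack 2+ the split point is a dynamic import(); with webpack 1 the equivalent is require.ensure. A sketch with hypothetical module names:
// webpack emits './reports' and its dependencies as a separate chunk,
// fetched over the network only when the user opens the reports route
function showReports() {
  import('./reports').then(function (reports) {
    reports.render(document.getElementById('main'));
  });
}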
2) Take a look at script-ext-html-webpack-plugin; it looks very promising for doing these kinds of things. For example, the defer option is for scripts whose execution you want to delay, while async is for scripts you want to execute as the HTML is parsed. Be careful, though, about race conditions.
3) You mentioned that some of the libraries you use are very big; make sure you use webpack with tree shaking. If you use the old webpack (version 1.x), which does not have tree shaking, you should optimize imports manually. For example, instead of import _ from 'lodash', use import map from 'lodash/map'.
4) You also mentioned that RAM is the problem, so how would compression help with RAM? Compression only helps the browser retrieve the code faster.
5) The other idea, sketched after this list, would be:
Load the scripts that you need for the home page.
Execute them; at this point, the user sees a functioning page.
Then, behind the scenes, slowly load the other scripts without the user noticing.
Evaluate the loaded code as it becomes needed.
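A rough sketch of that flow (the URL is hypothetical):
// after the home page is interactive, quietly fetch the remaining code as text
var pendingSource = null;
window.addEventListener('load', function () {
  fetch('/assets/secondary-bundle.js')
    .then(function (res) { return res.text(); })
    .then(function (src) { pendingSource = src; });
});

// evaluate it only when the user first needs that part of the app
function ensureSecondaryCode() {
  if (pendingSource) {
    (0, eval)(pendingSource); // indirect eval runs in global scope
    pendingSource = null;
  }
}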
I'm developing a modular framework in JavaScript and am looking for a way to automatically optimize/combine a set of JavaScript files as a precompile step.
I'm already using grunt so a grunt-task would probably make sense.
The framework consists of modules in their own files (as in the rectangular 'widgets' we're all used to) that in turn may require other JavaScript files.
All this is wired using Require.js which works great.
However, I came across the following constraint when trying to use r.js, which comes with require.js:
The optimizer will only combine modules that are specified in arrays of string literals that are passed to top-level require and define calls, or the require('name') string literal calls in a simplified CommonJS wrapping. So, it will not find modules that are loaded via a variable name.
The thing is: modules may inherit from each other, and even composition of other modules is possible through configuration (with the technical need to load the referenced modules sitting in their own js files).
This doesn't work given the constraint mentioned above.
I'm sure I could cook up something myself given enough time, but perhaps someone has already done something like this (r.js, but more flexible).
A doable solution, imho, would be to:
let the precompile task run each page whose JS needs to be optimized once (but on the server in Node instead of on the client; the framework is able to do this)
and somehow track all the libraries loaded in by require.js
read these out of require.js somehow, and voilà, there's your list of scripts to load
hand this list to r.js through the include option it provides, and r.js handles it from there.
There are more page types, by the way, but in r.js it seems possible to define common libraries so they don't get included in the per-page optimized file.
Does this sound plausible? Anyone ever tried something like this?
This seems overly complicated. The r.js build config has an option, onBuildRead, where you can modify the source so that it becomes acceptable to the optimizer. You may also look into the internal API onResourceLoad, where you can capture all loaded dependencies and then make a call to do a custom build.
To load your page you would have to use PhantomJS, so that it acts as a browser and executes the JS, and then signal Node to produce a custom build for that page. But then you need to switch the resources on that page to use the custom build. I guess you can make it configurable and do that only in production.
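A sketch of the capture step, using the internal API mentioned above (the build-config hand-off is indicative only):
// run inside the page (e.g. under PhantomJS): record every module RequireJS loads
var loadedModules = [];
requirejs.onResourceLoad = function (context, map) {
  loadedModules.push(map.name);
};

// later, feed the recorded list to r.js as its include array, e.g.:
// ({ baseUrl: 'js', name: 'main', include: loadedModules, out: 'page-built.js' })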
It does sound possible; I'm just not sure it's feasible.
Our file structure is pretty good, organizing functionality into separate folders. My question is how others work on applications that involve upwards of 500 JavaScript files.
We have written a Maven plugin to concatenate these files together (it also runs YUI Compressor). However, this involves 3-10 seconds of compiling for every change.
Is this step necessary for the organization of a large application? I feel like a well-structured HTML file pulling in all these resources would save me 45 minutes every day.
For my own framework projects (typically monitoring, testing, or in-page services to orchestrate other toolkits, though not with as high a file count as yours), my approach has been to target the individual, dynamically loaded files during development. For testing, I'll run one build to compress and version the files, then test the individual files again, because depending on the concatenation order, compression technique, and browser, I may wind up with a script error that's a pain to dig out of one monster file. Third, I'll concatenate everything together and test once more.
In the HTML reference, I'll target either the uncompressed file, which loads the specified dependencies, or the compound file. A separate bootstrap file names the dependencies, which are either included in the compound file or loaded dynamically as needed.
This way I can add or change a file, and start developing and testing without rebuilding.
The solution is likely to concatenate and compress for user testing and production only.
For development, it's probably best to simply import them all into the HTML file. It speeds up the dev process, and also simplifies debugging. It also allows the browser to cache some of those files.
When you can't rely on cached copies (which, with 500 files, I don't think will be very often), it will slow down load times.
You can likely save a lot of time by only running the compressor in production. The YUI Compressor is notoriously slow, because it uses the Java Rhino interpreter to actually parse and analyze the JavaScript.
I'm currently developing an application that will be run on local network in B2B environment. So I can almost forget about micro(mini?) optimizations in terms of saving bandwidth because Hardware Is Cheap, Programmers Are Expensive.
We have well-structured, object-oriented JS code in the project, and obviously lots of JS classes. If all the classes are stored in separate files, it is quite easy to navigate through the code and hence maintain it.
But this causes the browser to generate a couple dozen HTTP requests to get all the js files/classes needed on the page. Even in a local environment this is not super fast on first load (with an empty cache), or later whenever you modify a file and the cache has to be invalidated.
Possible solutions:
violate the "one class per file" rule
use YUI Compressor all the time (in development & production) to generate one big js file
But if we choose YUI Compressor for this (skipping minification in the dev environment and minifying for production), then we need to regenerate this big js file on every modification to any js file.
What would you recommend for solving this problem?
Keep all the .js files separate. Keep your "one class per file" rule.
Then, use a server-side technology to aggregate the script into one request.
Options:
Use an ASPX or PHP or whatever server-side scripting thing you have to aggregate all the JS into one request. The request for the .js is no longer for a static file, but with caching on the server it should be relatively cheap to serve. (A sketch follows this list.)
Use Server Side Includes in a consolidated .js file.
<!--#include virtual="/class1.js"-->
<!--#include virtual="/class2.js"-->
Your approach of having separate files for each class is good - practices that make development easier are always good.
Here are some tips for making the loading faster:
Compress your code. As you say, you could use YUICompressor, or the newly released Google Closure Compiler.
When concatenating multiple files into one, think of what you need and when: If you only need files A, B and C when the app starts, but not Z and X, put only A, B and C into a single file. Load another file with Z and X concurrently after A/B/C.
You can use the Firefox plugins YSlow and Page Speed to test for load-performance bottlenecks.
As you mention, you would need to rerun the compressor each time you make a change. I don't think this is a big problem: on a decent machine it should run pretty fast, even with a lot of files. Alternatively, you could set up a daily build process with some tool that builds the latest revision from your source control (you do use SCM, right?), runs unit tests, and deploys if everything goes OK.
I would recommend using Ant or some other automation tool to create a build script. This will make it as simple as running one command to build your compressed script, reducing the repetitive work you would otherwise need to do. You could even have Ant deploy your code to the server.
You may have the best of both worlds: a development environment with one class per js file, without the need to compile/deploy for every iteration, AND one (or several) concatenated, larger js files (minified if desired) in production.
Depending on your build environment, this may be set up in a number of different ways, but using Ant may be the easiest. With Ant you can run tasks for both concatenation and minification (running YUICompressor through the Java task), producing the concatenated and minified large js file.
However, to maintain productivity you want to avoid doing this for every code iteration. Changing the script tags from one to several (one for every class file) is out of the question.
So, you load your big js file as expected:
<script src="application.js"></script>
When deploying to production this file is the concatenated/minified version of all your js files.
However, during development this file is a bootstrap/loader file that simply loads all your individual js files (illustrative example using jQuery):
$.getScript('/class1.js');
$.getScript('/class2.js');
$.getScript('/class3.js');
$.getScript('/class4.js');
$.getScript('/classn.js');
....
If you are using YUI 3, look into the module behavior and how to specify dependencies.
Using different Ant targets the generation and copying of these files to the correct location may easily be managed.
And now you may simply reload your browser whenever you need to test a change in a file, yet still get the performance benefit in production, all without sacrificing productivity or maintainability.