I have a couple of Angular UI libraries used in many applications. These libraries have more than 50 modules in them but the applications usually do a barrel import. I want to analyze their production bundle and find out which of the modules are used in the build.
When I use source-map-explorer or webpack-bundle-analyzer I only get as far as the library's root module. Is there any way to drill down further from there? Alternatively, I am thinking of creating a custom decorator in the library that prints a message to the console when a module is used, but the drawback I see is that the message would appear only at runtime, not at build time.
Is there a good way to derive this statistic?
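For reference, this is the kind of decorator I have in mind (a rough TypeScript sketch; the names are made up, and as noted it only reports usage at runtime, not at build time):

// hypothetical class decorator that logs when a decorated class is constructed
export function TrackUsage(moduleName: string) {
  return function <T extends { new (...args: any[]): {} }>(ctor: T) {
    return class extends ctor {
      constructor(...args: any[]) {
        super(...args);
        console.log('[usage] ' + moduleName + ' constructed'); // fires at runtime only
      }
    };
  };
}

@TrackUsage('DatePickerModule')
export class DatePickerModule {}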
I've been tasked with the development of an NPM package containing a custom component (in this case a React component) that makes use of other dependencies such as plate, slate, etc.
I'm in the process of preparing the output dist, but it's not clear to me what the best practices are when doing so:
Should all dependencies be resolved and bundled into one big .js file, or can that step be skipped? (I'm using rollup's resolve plugin here.) I'm afraid this would produce a huge file including the source of all the dependencies, but as I said, I'm really not familiar with the process...
On the other hand, is it common NOT to resolve such dependencies and to let the final consumer of the component do so? (I'm only assuming here.)
It's all about pros and cons... and what is possible. For example React itself can only exist in one version in an entire project so you should never include that.
Dependencies that are needed but not included should be added as peerDependencies in your package.json; it is then the responsibility of the consumer to install them. The downside of including dependencies (as dependencies, so that they are downloaded automatically for the consumer) is that the consumer's bundle might be bigger than it needs to be.
Here you should take into account who will consume the package: is it for internal use in your organization or for public use? Do you know anything about the context it will be used in? It's best not to include dependencies, since that contributes to a smaller resulting bundle for the consumer, but if the dependencies are unlikely to be present in the consumer's build environment, you might as well add them to your package.
The situation you want to avoid is your package including a different version of a package the consumer is already using; the resulting bundle may then contain two versions of a lot of code that could have been reduced to one (if the version used by the consumer and the one used by your package are compatible). All of this becomes worse, and more likely, with large common dependencies than with small uncommon ones.
An example: in my organization we use Material-UI. We have a package with React components using Material-UI which we consume in other projects. Since Material-UI will always be present in the projects, it's bad practice to include it in the package, even though it will place a higher responsibility on the consumers (us) to align different versions of the package with whatever version of Material-UI that we're using in the applicable project. Given another consumption context, including it in the package might have made more sense.
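As a rough illustration (package names and version ranges here are made up), the package.json of such a component package might declare:

{
  "name": "@myorg/ui-components",
  "version": "1.0.0",
  "peerDependencies": {
    "react": "^17.0.0",
    "@material-ui/core": "^4.0.0"
  },
  "devDependencies": {
    "react": "^17.0.0",
    "@material-ui/core": "^4.0.0"
  }
}

Listing the same packages under devDependencies is a common companion move, so the library can still be built and tested on its own without shipping them to consumers.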
In my opinion, you should never bundle your package, since bundling makes tree shaking more complicated for the consumer. This applies to esm packages (cjs is not tree-shakeable anyway). For cjs packages, on the other hand, bundling is devastating because it prevents the consumer from making more specific imports to avoid pulling in a lot of unused code, e.g.
import Comp from "package/Component"
instead of
import { Comp } from "package"
There's almost never a good reason to embed dependencies inside library bundles. That's how you end up with multiple copies of dependencies in web apps. At best, web apps' bundles end up being unnecessarily bloated. At worst, dependencies that break when they're duplicated (e.g. React) yield all sorts of unexpected behaviour.
The surefire way to prevent dependency duplication is to avoid bundling libraries at all. If you inspect your node_modules folder in any of your web app projects, you'll likely find numerous third-party dependencies that aren't bundled. And their lack of bundling doesn't have an impact on your web app. That proves that library bundling is pointless.
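If you use rollup, as the question mentions, the way to express this is the external option. A minimal sketch of a rollup.config.js, with illustrative package names (adapt the list to your actual peer dependencies):

// rollup.config.js - everything listed in external stays out of the bundle;
// the consumer's bundler resolves those imports instead
import resolve from 'rollup-plugin-node-resolve';

export default {
  input: 'src/index.js',
  output: { file: 'dist/index.esm.js', format: 'esm' },
  external: ['react', 'react-dom', 'slate', 'slate-react'],
  plugins: [resolve()]
};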
Existing App:
My application is using require.js.
A benefit of my application is that other people can extend it by writing third-party plugins using require.js.
For example: (a 3rd party plugin registration)
// registering a new plugin
{
  url: '#new-page-url',
  js: 'plugin-folder/new-page-url.js'
}
So when #new-page-url is hit anywhere, RequireJS loads the JS file from plugin-folder/new-page-url.js.
Please note that after I compile my application, it does not include the third-party source, as it can be fetched on the fly with RequireJS.
Question:
I have been looking into webpack, and since it compiles everything before distribution (with bundle.js as the starting file), how can a third-party plugin work on the fly as in the example above?
Please note that after I compile my application, it does not include the third-party source, as it can be fetched on the fly with RequireJS.
Yes, that's the problem.
I'm in a situation similar to yours where a large application of mine written as a collection of AMD modules can load editing modes at runtime, which are also AMD modules. The modes are not generally bundled with the application.
AFAIK it is not possible, with Webpack alone, to do the equivalent of RequireJS' require([a]), where a is a variable whose value cannot be known at build time but is determined at run time. (And for the benefit of readers who may not be familiar with RequireJS, I'll add that yes, I do mean the first argument to be [a] and not just a. RequireJS makes a distinction between the two.)
Webpack needs to know at build time which modules it is going to bundle together. ("Needs to know" means it needs to know the module's name and find its source.) So it does not support dynamically loading modules that only become known at runtime.
If you read the documentation or tutorials, you'll run into discussions of dynamic loading with Webpack, but these do not cover the require([a]) case. Webpack can split a bundle into chunks and load chunks as needed, but for this to work, Webpack still needs to know ahead of time the whole set of modules it is ever going to need; it cannot load, at runtime, a module that was unknown at build time.
There's also require.context, but while it allows you to determine at run time which specific module you want, the set from which the module comes is determined at build time. If at build time you know you're going to be using one of A, B, or C, that's fine. But if you don't know the name of your module at build time, or cannot feed its source to Webpack, you're out of luck.
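To make the distinction concrete, here's a sketch (directory and module names are made up):

// Webpack can handle this: the candidate set (every .js file in ./modes)
// is fixed at build time, even though the specific choice happens at run time.
var modes = require.context('./modes', false, /\.js$/);
function loadMode(name) {
  return modes('./' + name + '.js');
}

// Webpack cannot handle this: `path` is discovered at run time, so there is
// nothing for Webpack to bundle. Only a runtime loader (RequireJS, SystemJS)
// can do it:
// require([path], function (plugin) { plugin.init(); });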
This being said, it should be possible to build your application's core with Webpack and have this core perform direct calls to a module loader like RequireJS or SystemJS. This is the direction I'm moving into with my own application but I've not crossed that bridge yet.
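Something along these lines, assuming RequireJS is loaded globally via a script tag outside the Webpack bundle (an untested sketch of the idea, not working code from my application):

// inside the Webpack-built core: delegate plugin loading to the runtime loader.
// Going through window.requirejs keeps Webpack from trying to resolve the call.
function loadPlugin(url, done) {
  window.requirejs([url], done);
}

loadPlugin('plugin-folder/new-page-url.js', function (plugin) {
  plugin.init(); // hypothetical plugin API
});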
If you code an application using ES6 modules and classes, is there any need to use a module loader framework, or is the best practice to just use a build tool to concatenate all the code into a file (or files) and include those using a normal script tag?
Yes. Somebody, somewhere along the line has to load the module.
I think you're conflating compiling modules ahead of time vs loading them individually. Webpack is a module loader that outputs a single file for the browsers to use later, while the System API and requirejs et al load a number of individual files.
There are performance factors on both sides, particularly longer build time (when precompiling) vs longer load time (with multiple files).
Webpack, Browserify, and most other module loaders (with the notable exception of the System API) allow you to define some loaders for certain file types and automagically compile your (S)CSS or templates on the way through, as well as running other tools to uglify or obfuscate your code. The ES6 System API does not provide these features, but is a more robust runtime loader than most.
This boils down to two trade-offs:
support for non-JS modules (styles, templates) vs build time
single request and longer build vs many requests and short/no build
Evaluate them for your users (high-bandwidth vs mobile), environment (if you have two dozen CI agents, who cares if the build takes an extra 3s?), and stack (if you have a lot of template files, compiling them AOT could be important).
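To make the loader point concrete, here's a minimal sketch of a Webpack config of that vintage (webpack 1.x-style syntax; loader names are illustrative, check the versions you actually use):

// webpack.config.js
module.exports = {
  entry: './src/main.js',
  output: { path: __dirname + '/dist', filename: 'bundle.js' },
  module: {
    loaders: [
      // compile SCSS and inject it via <style> tags
      { test: /\.scss$/, loaders: ['style', 'css', 'sass'] },
      // precompile HTML templates into require-able strings
      { test: /\.html$/, loader: 'html' }
    ]
  }
};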
I have a question regarding the following TypeScript plugin for SystemJS:
https://github.com/frankwallis/plugin-typescript/
Here is its description:
A plugin for SystemJS which enables you to System.import TypeScript files directly. The files are compiled in the browser and compilation errors written to the console.
I wonder what would be the use cases of such plugin.
Why would developers import ts files directly and compile them in the browser, instead of compiling them during development and importing js files?
Won't compiling in the browser hurt performance and increase load time?
Is it supposed to be used only in a development environment?
plugin-typescript author here. In-browser compilation is strictly a development tool; in production you would use systemjs-builder (in combination with plugin-typescript) to create a single file containing all of the transpiled JavaScript.
Since the plugin was originally developed, a number of new workflows have become available when using typescript & systemjs (typescript single-file transpilation, vscode, systemjs hot-reloading, typescript system.register output, to name a few...) - Which one is right for you will depend on the size of your application, the platform/server you are using, and your own personal preferences.
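For reference, the production build step might look roughly like this (paths here are hypothetical; see the systemjs-builder docs for the exact options):

// build.js - bundle ahead of time instead of compiling in the browser
var Builder = require('systemjs-builder');

// baseURL and the project's SystemJS config file
var builder = new Builder('./src', './config.js');

builder.buildStatic('app/main.ts', 'dist/app.js')
  .then(function () { console.log('build complete'); })
  .catch(function (err) { console.error(err); });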
No one in their right mind would compile/transpile in the browser for production; it's the equivalent of sending a turtle to get your mail because you don't like walking.
This is strictly a development tool for helping TypeScript devs avoid having to constantly compile after every change, with the added benefit of providing features like hot reloading.
I want to ask if it is possible (and generally a good idea) to use npm to handle front-end dependencies (Backbone, jQuery).
I have found that Backbone, jQuery and so on are all available through npm but I would have to set another extraction point (the default is node_modules) or symlink or something else...
Has somebody done this before?
Is it possible?
What do I have to change in package.json?
+1 for using Browserify. We use it here at diy.org and love it. The best introduction and reasoning behind Browserify can be found in the Browserify Handbook. Topics like CommonJS & AMD solutions, build pipelines and testing are covered there.
The main reason Browserify works so well is that it works transparently with npm. As long as a module can be require'd, it can be Browserified (though not all modules are made to work in the browser).
Basics:
npm install jquery-browserify
main.js
var $ = require('jquery-browserify');
$("img[attr$='png']").hide();
Then run:
browserify main.js > bundle.js
Then include bundle.js in your HTML doc and the code in main.js will execute.
Short answer: sort of.
It is largely up to the module author to support this, but it isn't common. Socket.io is an example of a module that does, as demonstrated on its landing page. There are other solutions, however. These are the two I actually know anything about:
http://ender.no.de/ - Ender JS, self-described NPM analogue for client modules. A bit too involved for my tastes.
https://github.com/substack/node-browserify - Browserify, a utility that will walk your dependencies and allow you to output a single script by emulating the node.js module pattern. You can use a jake|cake|rake|make build script to spit out your application.js, and even automate it if you want to get fancy. I used this briefly, but decided it was a bit clunky, and became annoying to debug. Also, not all dual-environment npm modules like to be run through browserify.
Personally, I am currently opting for using RequireJS ( http://requirejs.org/ ) and manually managing my modules, similar to how Mozilla does with their BrowserQuest sample application ( https://github.com/mozilla/BrowserQuest ). Note that this comes with the challenge of having to potentially shim modules like backbone or underscore which removed support for AMD style module loaders. You can find an example of what is involved in shimming here: http://tbranyen.com/post/amdrequirejs-shim-plugin-for-loading-incompatible-javascript
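For illustration, the shim config involved looks roughly like this (paths are hypothetical):

// RequireJS config for libraries that don't register themselves as AMD modules
requirejs.config({
  paths: {
    jquery: 'vendor/jquery',
    underscore: 'vendor/underscore',
    backbone: 'vendor/backbone'
  },
  shim: {
    underscore: { exports: '_' },
    backbone: {
      deps: ['underscore', 'jquery'],
      exports: 'Backbone'
    }
  }
});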
Really it seems like it is going to hurt no matter what, which is why native module support is such a hot topic.
Our team maintains a tool called Lineman for building front-end projects. The tool is node-based, so a project relies on a lot of npm modules that operate server-side to build your assets, but out of the box it expects to find your client-side dependencies copied and committed to vendor/js.
However, a bunch of folks (myself included) have tried integrating with browserify, and we've run into a lot of complexity and problems, ranging from (a) npm modules being maintained by a third party which are either out of date or add unwanted changes, to (b) actual libraries that start failing when loaded traditionally whenever a top-level function named require is even defined, due to AMD/Require.js baggage.
My short-term recommendation is to hold off and stick with good ol' fashioned script concatenation until the dust settles. Until you have problems big enough or complex enough to warrant it, I suspect you'll spend more time debugging and remediating your build than you otherwise would. And I think most of us agree the best use of your time is focusing on your application code, not its build tools.
You might want to take a look at http://jspm.io/ which is a browser package manager. Has nice ES6 support too.
I personally use webmake for my small projects. It is an alternative to browserify in the way it brings npm dependencies into your browser, and it's apparently lighter.
I didn't have the opportunity to compare in details browserify and webmake, but I noticed webmake doesn't work well with modules internally using global variables such as socket.io (which is full of bloat anyway IMO).
I would be cautious about RequireJS, which has been recommended above. Because it is an AMD loader, your browser will load your JS files asynchronously. That induces more round trips between your client and server and may degrade the UX for people browsing on mobile networks or under bad WiFi. Moreover, if you manage to keep your JS code simple and tiny, asynchronous loading is not needed at all!