I'm creating an app using Parcel and @material-ui/styles. My app has @material-ui/styles as a dependency. I'm also importing my own npm package that I store locally. That package also depends on @material-ui/styles, but as a peer dependency. I would assume that the package would use the @material-ui/styles instance from my app, but instead there are two different instances of the same package, which causes this error: "It looks like there are several instances of "@material-ui/styles" initialized in this application. This may cause theme propagation issues, broken class names and makes your application bigger without a good reason."
I described it here: https://github.com/mui-org/material-ui/issues/15745 but so far no one has tried to help me. It's probably not directly related to the package I'm using, but rather to the way bundlers work. I don't know why Parcel bundles this package twice instead of just once.
The same problem appears when I try to use webpack. I always thought that peer dependencies worked the way I described.
Here is a reproduction repository: https://github.com/lukejagodzinski/mui-styles-reproduction
Does anyone know how to solve this problem?
I ran into the same issue and this helped me: https://github.com/parcel-bundler/parcel/issues/1838#issuecomment-492369750
That basically removes the duplicated dependency at build time.
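One way to deduplicate at build time is to alias the package so the bundler always resolves the single copy that lives in your app's node_modules. Parcel reads an alias field from the app's package.json; a minimal sketch (the path is an assumption and depends on your setup):

{
  "alias": {
    "@material-ui/styles": "./node_modules/@material-ui/styles"
  }
}

With webpack, the equivalent would be a resolve.alias entry in webpack.config.js pointing at the same directory.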
Also be aware that you are using TS, so there is an additional layer of complexity to this issue.
I recently bought an HTML template which contains many plugins placed inside a bower_components directory, along with a package.js file. I wanted to install another package I liked, but decided to use npm for this purpose.
When I typed:
npm install pnotify
the node_modules directory was created and contained about 900 directories with other packages.
What are those? Why did they get installed along with my package? I did some research and it turned out that those were needed, but do I really need to deliver my template in production with hundreds of unnecessary packages?
This is a very good question. There are a few things I want to point out.
The V8 engine, Node Modules (dependencies) and "requiring" them:
Node.js is built on the V8 engine, which is written in C++. This means that Node.js' functionality is ultimately backed by C++ code.
Now, when you require a dependency, you are really requiring code/functions from a C++ program or a JS library, because that's how new libraries/dependencies are made.
Libraries contain many functions that you will never use
For example, take a look at the express-validator module, which contains a large number of functions. When you require the module, do you use all the functions it provides? The answer is no. People most often require packages like this just to use one or two of their features, yet all of the functions end up getting downloaded, which takes up unnecessary space.
Think of node dependencies that are built from other node dependencies like interpreted languages
For example, JavaScript engines are written in C/C++, whose compilers are in turn ultimately written in assembly. Think of it like a tree: you create new branches each time for more convenient usage and, most importantly, to save time. It makes things faster. Similarly, when people create new dependencies, they use/require ones that already exist instead of rewriting a whole C++ program or JS script, because that makes everything easier.
The problem arises when authors require other npm packages to create a new one
When the authors of a dependency require other dependencies from here and there just to use a small number of features from them, they end up pulling in all of those packages (which they often don't mind, because they mostly do not worry about size, or they would rather do this than explicitly write a new dependency or a C++ addon), and this takes extra space. For example, you can see the dependencies that the express-validator module uses by following this link.
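If you want to see this for yourself, npm can print a package's declared dependencies as well as the full tree that actually ended up in node_modules (illustrative commands, using express-validator as the example):

npm view express-validator dependencies   # direct dependencies declared by the package
npm ls --all                              # full dependency tree installed locally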
So, when you have big projects that use lots of dependencies, you end up spending a lot of space on them.
Ways to solve this
Number 1
This requires some Node.js expertise. To reduce the amount of downloaded code, an experienced Node.js developer could go into the directories where the modules are saved, open the JavaScript files, look at their source code, and delete the functions they will not use, without changing the structure of the package.
Number 2 (Most likely not worth your time)
You could also write your own personal dependencies in C++ or, preferably, JS, which would take up the least space possible (depending on the programmer), but would also take the most time: you would be spending effort on reducing size instead of doing actual work. (Note: most dependencies are written in JS.)
Number 3 (Common)
Instead of using option number 2, you could use webpack.
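A minimal sketch of that option: with webpack in production mode, only the code reachable from your entry point ends up in the bundle, and unused exports can be dropped (file names below are placeholders):

// webpack.config.js - minimal production build
// "mode: 'production'" enables minification and tree shaking (dead-code elimination),
// so unused exports are not shipped to the browser.
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  }
};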
Conclusion & Note
So, basically, there is no running away from downloading all the node packages, but you could use solution number 1 if you believe you can pull it off, keeping in mind that it may break the whole intent of a dependency (so keep such a trimmed copy personal and use it for specific purposes). Or just make use of a tool like webpack.
Also, ask yourself this question: do those packages really cause you a problem?
No, there is no point in adding about 900 package dependencies to your project just because you want to add some template. But it is up to you!
The heaviness of a template is not a challenge for the Node.js ecosystem nor for its main package system, npm.
It is a fact that the JavaScript community tends to make the smallest possible modules, each responsible for one task, and just one.
That is not a bad thing, I guess. But it can result in a situation where you have a lot of dependencies in your project.
Nowadays disk space is cheap and hardly anybody cares any more about making efficient/small apps.
As always, it's only a matter of choice.
What is the point of delivering hundreds of packages weighing hundreds of MB for a few-kB project?
There isn't.
If you intend to provide it to other developers, just gitignore (or remove from the shared package) the node_modules or bower_components directories. Developers will simply install the dependencies again as required ;)
If it is something as simple as an HTML template or similar, Node is most likely there just to make your life as a developer easier by providing live reload, compiling/transpiling TypeScript/Babel/SCSS/Sass/Less/CoffeeScript... (the list goes on ;P) etc.
And in that case the dependencies would most likely only be devDependencies and won't be required at all in a production environment ;)
Also, many packages come with separate production and dev dependencies, so you just need to install the production dependencies...
npm install --only=prod
If your project does need many packages in production, and you really, really want to avoid that, just spend some time and include only the CSS/JS files your project needs (this can be a laborious task).
Update
Production vs default install
Most projects have different dev and production dependencies.
Dev dependencies may include things like Sass and TypeScript compilers, uglifiers (minification), maybe things like live reload, etc.
The production install will not include those things, which reduces the size of the node_modules directory.
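As a sketch, a package.json typically separates the two like this (package names and versions below are only examples); npm install --only=prod then skips everything under devDependencies:

{
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "sass": "^1.32.0",
    "typescript": "^4.1.3",
    "livereload": "^0.9.1"
  }
}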
**No node_modules**
In some HTML-template kinds of projects, you may not need any node_modules in production, so you can skip doing an npm install altogether.
No access to node_modules
Or in some cases, when the server that serves the app lives inside node_modules itself, access to it may be blocked (because there is no need to access these files from the frontend).
What are those? Why did they get installed along with my package?
Dependencies exist to facilitate code reuse through modularity.
... do I need to deliver my template in production with hundreds of unnecessary packages?
One shouldn't be so quick to dismiss this modularity. If you inline your requires and eliminate dead code, you'll lose the benefit of maintenance patches for the dependencies automatically being applied to your code. You should see this as a form of compilation, because... well... it is compilation.
Nonetheless, if you're licensed to redistribute all of your dependencies in this compiled form, you'll be happy to learn that those optimisations are performed by compilers which compile JavaScript to JavaScript. The Closure Compiler, as the first example I stumbled across, appears to perform advanced compilation, which means you get dead code removal and function inlining... That seems promising!
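As an illustration, with the npm distribution of the Closure Compiler an advanced build looks roughly like this (file names are placeholders, and advanced mode has caveats such as aggressive property renaming):

npx google-closure-compiler \
  --compilation_level ADVANCED_OPTIMIZATIONS \
  --js bundle.js \
  --js_output_file bundle.min.js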
This does, however, have another side effect when you are required to justify the licensing of all npm modules: when you have hundreds of npm modules pulled in as dependencies, this effort becomes a much more cumbersome task.
Very old question, but I happened to come across a very similar situation, just as RA pointed out.
I tried to work with a Node.js framework using VS Code, and the moment I initialized npm using npm init -y, it generated many different dependencies. In my case, it was the VS Code extension ESLint that I had added prior to running npm init -y. What solved it for me:
Uninstalled ESLint
Restarted VS Code to apply the uninstallation
Removed the previously generated package.json and node_modules folder
Ran npm init -y again
This solved my problem of starting out with so many dependencies.
I am thinking of extending the format of package.json to include dynamic package (plugin) loading on the client side, and I would like to understand whether this idea contradicts the npm vision or not. In other words, I want to load a bunch of modules that share common metadata in the browser at runtime. Solutions like SystemJS and jspm are good for module management, but what I am looking for is dynamic package management on the client side.
Speaking in detail, I would like to add a property like "myapp-clientRuntimeDependencies" that would allow me to specify dependencies that would be loaded by the browser instead of through standard prepackaging (npm install -> a browserify-like solution).
package.json example:
{
  "name": "myapp-package",
  "version": "",
  "myapp-clientRuntimeDependencies": {
    "myapp-plugin": "file:myapp-plugin",
    "myapp-anotherplugin": "file:myapp-anotherplugin"
  },
  "peerDependencies": {
    "myapp-core": "1.0.0"
  }
}
The question:
Does this idea contradict the "npm" and "package.json" vision? If yes, why?
Any feedback from the npm community is very much appreciated.
References:
Extending package.json: http://blog.npmjs.org/post/101775448305/npm-and-front-end-packaging
EDIT:
The question was not formulated well; a better way to ask it is:
What is the most standard way (e.g. handled by some existing tools, likely to be supported by npm) to specify run-time dependencies between 2 dynamically loaded front-end packages in package.json?
What is the most standard way to attach metadata in JSON format to front-end packages, that are loaded dynamically?
I wouldn't say that it conflicts with the vision of package.json; however, it does seem to conflict a bit with how it's typically used. As you seem to be aware, package.json is normally used pre-runtime. In order to load something from package.json into your runtime, you'd have to load the package.json into your frontend code. If you're storing configuration that you don't want visible to the frontend via a simple view-source, this could definitely present a problem.
One thing that didn't quite click with me on this: you said that system.js and jspm are good for module management but that you were looking for package management. In the end, packages and modules tend to be synonymous, as a package becomes a module.
I may be misunderstanding what it is that you're looking for, but from what I can gather, I recommend you take a look at code splitting, which is essentially creating separate JS files that will be loaded dynamically based on what is needed, instead of bundling all your JavaScript into a single file. Here are some docs on how to do this with webpack (I'm sure Browserify does it as well).
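A minimal sketch of that idea using a dynamic import(), which webpack turns into a separate chunk that the browser only fetches when needed (the module path and render function are placeholders):

// Only this bootstrap code is loaded up front.
async function loadWidget(name) {
  // webpack splits everything under ./widgets into separate chunks
  // and downloads the right one on demand.
  const widget = await import(`./widgets/${name}.js`);
  widget.render(document.getElementById('app'));
}

loadWidget('chart');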
If I understand correctly, your question is about using the package.json file to include your own app configuration. In the example you describe, you would use such configuration to let your app know which dependencies can be loaded at runtime.
There is really nothing preventing you from inserting your own fields in the package.json file, except for the risk of conflict with names that are used by npm for other meanings. But if you use a very specific name (like in your example), you should be safe enough. Actually, many lint and build tools have done so already. It is even explicitly written in the post you refer to:
If your tool needs metadata to make it work, put it in package.json. It would be rude to do this without asking, but we’re inviting you to do it, so go ahead. The registry is a schemaless store, so every field you add is on an equal footing to all the others, and we won’t strip out or complain about new fields (as long as they don’t conflict with existing ones).
But if you want to be even safer, you could resort to using a different file (as Bower did, for example).
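As a sketch of that, the custom field simply sits next to the standard ones and your own tooling reads it; nothing else needs to know about it (the field name is taken from the question, the script is hypothetical):

// read-runtime-deps.js - hypothetical bootstrap/build step
// Reads the custom "myapp-clientRuntimeDependencies" field from package.json.
const pkg = require('./package.json');

const runtimeDeps = pkg['myapp-clientRuntimeDependencies'] || {};
for (const [name, source] of Object.entries(runtimeDeps)) {
  console.log(`will load ${name} from ${source} at runtime`);
}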
I've got a complex architectural problem with Bower. I'm building an online platform where users create pages using dynamic widgets which contain JS code. Those widgets have a predefined format, description, icons, etc., and they will be packaged into archives (like APKs, WARs, JARs, EARs, but with front-end code). Users will be able to dynamically add widgets after the website is already deployed.
We're using Bower, and the problem is the following: widgets should also be able to specify their own Bower dependencies.
Simplified directory layout is the following:
bower.json
gulpfile.js (used for website building)
bower_components/        # our own deps + deps from all the widgets
widgets/
    widget1/             # any name is possible here
    widget2/
    widget-random/
    another-widget/      # for each of the widgets above the layout is the same
        bower.json (or simplified version like dependencies.json which contains only dependencies list)
        many other files
After a widget is uploaded, its bower.json should be merged with all the deps from the other widgets, then gulp build will run and rebuild the whole thing.
How do I merge all the bower.json files into a single one? Especially when the same dependency appears twice, e.g. one widget depends on "jquery": "<=2.1.0" and another widget depends on "jquery": "^2.1.0". They are compatible with each other, but what string do I write in bower.json? If I write both, Bower uses only the second one and installs the latest jQuery - 2.1.1 - which is no longer compatible with the first widget. And that's one of the simpler cases.
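To make the conflict concrete, here is how the two ranges behave when checked with the node semver package (purely an illustration of the overlap problem):

const semver = require('semver');

// 2.1.0 is the only version that satisfies both widgets
semver.satisfies('2.1.0', '<=2.1.0'); // true
semver.satisfies('2.1.0', '^2.1.0');  // true

// the latest 2.x release breaks the first widget's constraint
semver.satisfies('2.1.1', '<=2.1.0'); // false
semver.satisfies('2.1.1', '^2.1.0');  // true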
We can actually assume that not every possible semver spec variation will appear, <= for example. I could also force widget authors to use my own dependency specification, but I can't figure out how to design it.
Any help is appreciated!
Other approaches to the widget dependency problem are welcome, but note: widgets cannot have their own versions of libs, because multiple widgets are loaded at runtime. I can't have two jQueries at once, for example, just because two widgets use specs like in the example above.
UPD: I know about RequireJS and I'm actually using it. But first I need to download the dependency itself so that I can use it with RequireJS later on.
The solution was the following:
Each widget is itself a Bower package with its own bower.json. The project's bower.json is renamed to bower-base.json, and bower.json is generated from bower-base.json with all the widgets added to the dependencies property. Then Bower automatically handles the widgets' dependencies.
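A rough sketch of that generation step (the file names follow the description above, everything else is illustrative; a real implementation should also check that the merged version ranges are compatible, e.g. with the semver package):

// generate-bower-json.js - run before `gulp build`
// Takes bower-base.json and adds every widget directory as a local dependency,
// so `bower install` resolves the widgets' own dependencies transitively.
const fs = require('fs');

const base = JSON.parse(fs.readFileSync('bower-base.json', 'utf8'));
base.dependencies = base.dependencies || {};

for (const widget of fs.readdirSync('widgets')) {
  // each widget is itself a bower package with its own bower.json
  base.dependencies[widget] = `./widgets/${widget}`;
}

fs.writeFileSync('bower.json', JSON.stringify(base, null, 2));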
Is there a reason this has to be done with bower? Bower isn't really intended for runtime dependencies. For that you want to use something like RequireJS.
Trying to port Crowducate from Meteor 0.8 to 1.0. I ran "meteor update". Results can be seen in this branch: https://github.com/Crowducate/crowducate.me/commit/bc1c8fa81a23fda586980d4803803ef701c762c5
So my questions:
Why was a versions file created (instead of updating the packages file)?
Does the versions file somehow override the packages file? Do I need both files?
More info can be found in these GitHub issues:
Porting to Meteor 1.0 and flatten external packages into repo
Any help appreciated.
Meteor determines what functionality should be added to a project using the packages file. It contains the names of packages such as email or iron:router. It is agnostic of the version of Meteor you are running, which will eventually lead to serious issues unless you have a mapping of which versions of the packages are good to go (i.e. known to work well together).
The versions file further specifies which actual versions of the packages you are using in a project. You may specify a version using meteor add package:name@x.y.z.
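For illustration, the two files might look roughly like this (package names and versions are made up; note that versions also pins the transitive dependencies that packages never mentions):

# .meteor/packages - only what you explicitly added
email
iron:router

# .meteor/versions - every package, pinned to an exact version
email@1.0.4
iron:router@1.0.7
iron:core@1.0.7
...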
There is a third (hidden) piece of version information inside each package, which defines what other packages it plays well with (see for example this). Packages define which minimum versions they require; something newer will probably also work. This is where a smart versioning scheme comes into play.
Meteor packages use semantic versioning, so you can better tell whether things will break with an upgrade. Semantic versioning means each release consists of major.minor.patch, e.g. x.y.z or 1.1.0. Patches don't change functionality, so any change to z will be harmless. Changes to the minor or y should not break the existing API. New functionality may be added or existing APIs may be changed/deprecated. Changes to major/x are likely to introduce breaking changes and also break dependent packages.
You can find some more info on Arunoda's page: https://meteorhacks.com/meteor-packaging-system-understanding-versioning.html
Technically you are right: why have redundant information in both files? packages seems superfluous when all the required info is already inside versions. However, you will notice that packages only lists the packages you explicitly added to your project, while versions includes all of their dependencies as well. Meteor is smart enough to know that when you remove a package, the now-unneeded package dependencies should not be bundled any more. You need both files to differentiate between what was added by the user and what was added automatically by the package manager.