Yarn 2: Difference between Zero Installs and normal install?

Regarding Zero Installs, the Yarn 2 documentation says:
While not a feature in itself, the term "Zero Install" encompasses a lot of Yarn features tailored around one specific goal - to make your projects as stable and fast as possible by removing the main source of entropy from the equation: Yarn itself. [...]
I read the whole story, but didn't really understand that fully.
What is the difference between Yarn 2 Zero Installs and Yarn 2 normal install?

The difference is that with a normal Yarn install you don't commit your dependencies: they are fetched from the registry when you run yarn install. With Zero-Installs, you commit the package archives (the .yarn/cache directory) along with your project, so a fresh clone can run without any install step.
This reduces your dependence on remote registries, but it also demands more responsibility, as the docs note:
Note that, by design, this setup requires that you trust people modifying your repository. In particular, projects accepting PRs from external users will have to be careful that the PRs affecting the package archives are legit (since it would otherwise be possible for a malicious user to send a PR for a new dependency after having altered its archive content).
All in all, Zero-Installs is a great feature. It solves the "I cloned/switched branch and now a dependency is missing" problem, it speeds up CI significantly, and it lowers our dependence on our on-prem npm registry.
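In practice, the difference shows up in what you commit. Here is a sketch of the .gitignore the Yarn docs suggest for a Zero-Install setup (the package archives in .yarn/cache and the .pnp.cjs loader get committed; the rest of .yarn does not):
# Zero-Installs: commit the cache, ignore Yarn's transient state
.yarn/*
!.yarn/cache
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/sdks
!.yarn/versions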

mat stepper bar is not in the npm registry

Our app depends on this npm package. All of a sudden it stopped working, and we are not able to install it again through npm i. How can I fix this?
Error while installing it: mat stepper bar is not in the npm registry
Below is the message from their official webpage
Mat-stepper-bar is a JavaScript package in the npm registry that has been compromised. Our team is working on it :(
If you have any questions, please email us at ngmicroapp@gmail.com
Package status: deleted?
It looks like the package was deleted from NPM recently, since it is still in Google's cache.
Recourse
Unfortunately, there is not much you can do. There is no repository listed in the README or in the published package details - if you knew that the source code lived on a repo in GitHub, you could install the package from there, instead.
Public Mirror?
Your best bet is to try to find a public NPM mirror that still has the package. Aliyun seems to have one, although I am not very familiar with that site and do not know if that is a reliable source.
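If you do locate a mirror that still hosts the package, you can point npm at it for a one-off install; the flag is standard npm, though the registry URL below (the public npmmirror endpoint) is only an example:
npm install mat-stepper-bar --registry=https://registry.npmmirror.com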
To prevent this in the future
Use a local NPM mirror
Going forward, you should keep a local NPM mirror if you use volatile packages.
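As one concrete option (Verdaccio is a popular self-hosted registry proxy; naming it is my suggestion, not something this answer prescribes), a caching proxy keeps a copy of every package you have ever installed, even if it later vanishes upstream:
npm install -g verdaccio                  # install the local registry proxy
verdaccio                                 # serves on http://localhost:4873 by default
npm set registry http://localhost:4873/   # route installs through the proxy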
Vet your packages
A better rule of thumb, however, is to avoid relying on relatively unused and undeveloped packages.
Before using a package, ensure that it:
meets basic package cleanliness requirements, such as listing a repository
is well-documented
is well-tested
has a consistent development history, which makes it less likely to be abandoned
has a minimum number of stars on GitHub, forks, or npm downloads
Using a package that fails to meet these requirements adds technical debt to your product, as you are more likely to encounter bugs, take longer to understand undocumented functions, or may discover that the package is renamed or deleted.
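Several of these checks can be run with stock npm commands before adopting a package (some-package is a placeholder):
npm view some-package repository.url   # does it list a source repository?
npm view some-package time.modified    # when was the last publish?
npm view some-package maintainers      # who is behind it?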

What's the point of having a "compatible version" (^version) declared in package.json if package-lock.json locks it?

I know the main advantages of package-lock.json and I agree with them. It locks not only the version downloaded in the last install, but also the URI, and in most cases that's required to replicate the project as closely as possible.
But one thing that seems weird to me is that package.json lets you declare a dependency like dependency: ^1.0.0, which should make npm download the most recent compatible version of that package on each installation.
I'm working on a project that actually needs this. Otherwise, every time one of my dependencies releases a patch, I'd have to make a new commit updating package.json just to change the version, so that my pipeline can also regenerate package-lock.json.
In short, it seems that while package.json offers this feature, package-lock.json prevents it.
Am I missing something?
The point of package-lock.json is to accurately represent the tree as it actually exists at a point in time, so that someone cloning the project gets exactly the same tree you had.
If you want to upgrade that dependency to a newer version, just use npm update and then commit the updated package-lock.json. Other members of your team will get that update as part of the normal process of picking up the latest.
More in the npmjs.com page on package locks.
Let's consider a scenario where you and I are on a team, our project uses nifty-lib with package.json saying "nifty-lib": "^0.4.0", and we don't share package-lock.json. Perhaps I've been working on the project a couple of months longer than you have, and I got nifty-lib v0.4.0 when I installed it. But when you picked it up and installed, you got v0.4.1 (a bugfix update which, sadly, introduced a new bug). At some point, you notice what seems like a bug in our project, but I can't replicate it. We spin in place for a while trying to figure out why it happens to you and not to me. In the end, we realize it's actually a bug in nifty-lib that they introduced in v0.4.1. Hopefully we then get 0.4.2 or something (or, if there isn't one, we fix the bug and send a PR, meanwhile rolling back to 0.4.0 across the project).
If we'd been sharing package-lock.json, we wouldn't have spun in place wondering why the problem happened to you and not to me, because you would have had the same version of nifty-lib as me. As part of our normal cycle, we'd do npm update periodically, and if a new bug showed up in our tests, we'd know from the commit history that it was because of a bug in a dependency.
Now, for "me" and "you" read "dev" and "production". :-)
Which is why package-lock.json locks the version, but package.json lets you say "this or better". package-lock.json keeps your team unified on versions, but you can intentionally update with npm update, which shows up in the commit history so you can track down regressions to it.
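That intentional update amounts to a couple of commands (nifty-lib being the hypothetical package from the scenario above):
npm update nifty-lib               # moves to the newest version matching ^0.4.0
git add package-lock.json
git commit -m "Update nifty-lib"   # regressions are now traceable to this commit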
As I mentioned in a comment above, the short answer is that it makes updating your dependencies easier.
However, another way I like to think about the two files is: package.json is the file the human reads, while package-lock.json is the file the computer reads.
npm is a package/dependency manager. So, in your package.json file, you write out "these libraries are needed for my library to work." As a feature, you can list a dependency with a range of acceptable versions. This helps when you run npm update on a specific package: it looks for the latest version that matches the range in your package.json, and updates your lockfile.
The package-lock.json lockfile is useful because it verbosely describes what your node_modules/ folder looks like so it can be accurately recreated when someone else installs your library. Additionally, since this file is generated automatically, you don't have to worry about maintaining it.
Of course, all of this just happens to be how npm (and, similarly, most package managers) handles this. That is, there isn't a technical reason why we couldn't have one file describing both the version range allowed when running updates and a verbose lockfile portion that pins versions to allow for a reproducible dependency tree.
Basically, it is just a convenience. You have one file to succinctly list what dependencies your projects needs. It is readable and easily updatable. The other file, the lockfile, is automatically generated and ensures each npm install gives you the exact same node_modules/ folder as before.
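To make the two roles concrete, here is a minimal sketch (nifty-lib and its versions are made up). package.json records the human-readable intent, "any 1.x release at or above 1.2.0":
{
  "dependencies": {
    "nifty-lib": "^1.2.0"
  }
}
while the corresponding package-lock.json excerpt pins the exact resolution, "exactly 1.2.3, fetched from this URL":
{
  "packages": {
    "node_modules/nifty-lib": {
      "version": "1.2.3",
      "resolved": "https://registry.npmjs.org/nifty-lib/-/nifty-lib-1.2.3.tgz"
    }
  }
}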

npm installs many dependencies

I bought an HTML template recently, which contains many plugins placed inside a bower_components directory and a package.json file inside. I wanted to install another package I liked, but decided to use npm for this purpose.
When I typed:
npm install pnotify
the node_modules directory was created and contained about 900 directories with other packages.
What are those? Why did they get installed along with my package? I did some research and it turned out that those were needed, but do I really need to deliver my template in production with hundreds of unnecessary packages?
This is a very good question. There are a few things I want to point out.
The V8 engine, Node Modules (dependencies) and "requiring" them:
Node.js is built on the V8 engine, which is written in C++. However, most npm dependencies are themselves written in JavaScript; only some are native addons backed by C++.
When you require a dependency, you are really requiring code/functions from another JS library (or, occasionally, from a compiled C++ addon), because that is how new libraries/dependencies are built on top of existing ones.
Libraries have so many functions that you will not use
For example, take a look at the express-validator module, which contains a great many functions. When you require the module, do you use all the functions it provides? The answer is no. People most often require packages like this just to use a single feature, yet all of the functions end up getting downloaded, which takes up unnecessary space.
Think of the node dependencies that are made from other node dependencies as Interpreted Languages
For example, JavaScript engines are written in C/C++, whose compilers are in turn bootstrapped from lower-level languages like assembly. Think of it like a tree: you create new branches each time for more convenient usage and, most importantly, to save time. It makes things faster. Similarly, when people create new dependencies, they use/require ones that already exist instead of rewriting a whole C++ program or JS script, because that makes everything easier.
Problem arises when requiring other NPMs for creating a new one
When the authors of a dependency require other dependencies from here and there just to use a few of their features, everything those dependencies pull in ends up being downloaded too (which the authors rarely worry about, since they would rather do this than write a new dependency or a C++ addon from scratch), and this takes extra space. For example, you can see the dependencies that the express-validator module pulls in on its npm page.
So, when you have big projects that use lots of dependencies you end up taking so much space for them.
Ways to solve this
Number 1
This requires real Node.js expertise. To reduce the volume of downloaded packages, a professional Node.js developer could go into the directories where modules are saved, open the JavaScript files, study the source code, and delete the functions that will never be used, without changing the structure of the package.
Number 2 (Most likely not worth your time)
You could also create your own dependencies, written in C++ or, preferably, JS, which would take up the least space possible; but this takes the most time, trading development effort for size. (Note: most dependencies are written in JS.)
Number 3 (Common)
Instead of using option number 2, you could use a bundler like webpack, as sketched below.
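A minimal sketch of such a webpack setup (the entry and output paths are assumptions about your project layout):
// webpack.config.js
const path = require('path');

module.exports = {
  mode: 'production',        // enables minification and dead-code elimination
  entry: './src/index.js',   // your application's entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',   // one shippable file instead of node_modules
  },
};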
Conclusion & Note
So, basically, there is no running away from downloading all the node packages, but you could use solution number 1 if you believe you can pull it off, with the risk of breaking the dependency's intended behavior (so keep such a modified copy private and use it for specific purposes), or just make use of a bundler like webpack.
Also, ask this question to yourself: Do those packages really cause you a problem?
No, there is no point in adding about 900 package dependencies to your project just because you want to add some template. But it is up to you!
The heaviness of a template is no challenge to the Node.js ecosystem or its main package system, npm.
It is a fact that the JavaScript community tends to make the smallest possible modules, each responsible for one task, and just one.
That is not a bad thing, I guess, but it can result in a situation where you have a lot of dependencies in your project.
Nowadays disk space is cheap and few people care any more about making efficient/small apps.
As always, it's only a matter of choice.
What is the point of delivering hundreds of packages weighing hundreds of MB for a few-kB project? There isn't.
If you intend to provide it to other developers, just gitignore (or remove from shared package) node_modules or bower_components directories. Developers simply install dependencies again as required ;)
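That is, a two-line .gitignore covers it:
node_modules/
bower_components/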
If it is something as simple as an HTML template or similar, Node would most likely be there just to make your life as a developer easier, providing live reload, compiling/transpiling TypeScript/Babel/SCSS/Sass/Less/CoffeeScript... (the list goes on ;P) and so on.
In that case, the dependencies would most likely be devDependencies only and won't be required at all in a production environment ;)
Also, many packages declare separate production and dev dependencies, so you just need to install the production dependencies:
npm install --only=prod
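On npm 8 and later, --only=prod has been superseded; the equivalent is:
npm install --omit=dev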
If your project does need many packages in production, and you really, really want to avoid that, just spend some time and include the CSS/JS files your project needs directly (this can be a laborious task).
Update
Production vs default install
Most projects have different dev and production dependencies.
Dev dependencies may include things like Sass or TypeScript compilers, uglifiers (minification), live reload, and so on.
The production install will not include those, reducing the size of the node_modules directory.
No node_modules
In some HTML-template kinds of projects, you may not need any node_modules in production at all, so you can skip npm install entirely.
No access to node_modules
Or, in some cases, when the server that serves the app itself lives inside node_modules, direct access to it may be blocked (since there is no need to reach those files from the frontend).
What are those? Why did they get installed along with my package?
Dependencies exist to facilitate code reuse through modularity.
... do I need to deliver my template in production with hundreds of unnecessary packages?
One shouldn't be so quick to dismiss this modularity. If you inline your requires and eliminate dead code, you'll lose the benefit of maintenance patches for the dependencies automatically being applied to your code. You should see this as a form of compilation, because... well... it is compilation.
Nonetheless, if you're licensed to redistribute all of your dependencies in this compiled form, you'll be happy to learn that such optimisations are performed by compilers which compile JavaScript to JavaScript. The Closure Compiler, as the first example I stumbled across, appears to perform advanced compilation, which means you get dead-code removal and function inlining... That seems promising!
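A sketch of what invoking it through its npm wrapper might look like (the file names are placeholders):
npx google-closure-compiler \
  --compilation_level ADVANCED \
  --js bundle.js \
  --js_output_file bundle.min.js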
This does, however, have another side effect: when you are required to justify the licensing of all npm modules, having hundreds of them pulled in as dependencies makes that effort far more cumbersome.
Very old question, but I happened to come across a very similar situation, just as RA pointed out.
I tried to set up a Node.js project in VS Code, and the moment I initialised it with npm init -y, it generated many different dependencies. In my case, the culprit was the VS Code extension ESLint, which I had added prior to running npm init -y. I fixed it as follows:
Uninstalled ESLint
Restarted VS Code to apply the uninstallation
Removed the previously generated package.json and node_modules folder
Ran npm init -y again
This solved my problem of starting out with so many dependencies.

Why does npm succeed when submodules fail to build?

Often while using npm I've come across errors that appear to mean nothing - Visual Studio projects failing to build, build tools (eg: python.exe / CL.exe) not being available on the command line etc.
Some examples of packages I've seen fail to build many times:
kerberos
node-gyp
bcrypt
These throw big error messages with stack traces etc to the console during npm install, clearly having failed completely; however, NPM carries along happy as Larry and 9 times out of 10 my Javascript application and all its dependencies work fine.
Does npm install re-build every single dependency recursively, using whatever compilers are available on the local machine?
If so, and considering the huge number of dependencies even simple packages can have, how am I able to do ANYTHING without a full suite of programming languages and compilers installed?
Why is it that these dependencies failing doesn't necessarily mean my final project will be unusable?
If a dependency failing to build is "ok", why bother having the dependency at all?
I haven't been able to find clear answers on any of this, due to the overwhelming number of resources found when searching for terms like "npm build fail".
npm will succeed if those dependencies are actually marked as optional. The ws module is an example of this where they have optional dependencies on two compilable addons. If they fail to build, then ws just uses pure js fallback implementations.
The reason that addons are sometimes added as optional dependencies is that they (more often than not) perform faster than the pure-JS alternatives, even for something as "simple" as UTF-8 validation or XOR'ing the contents of a Buffer.
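A sketch of the pattern (the module names follow the ws example, where bufferutil is the native addon; the fallback path is hypothetical). In package.json:
{
  "optionalDependencies": {
    "bufferutil": "^4.0.0"
  }
}
And at require time:
let mask;
try {
  // Prefer the compiled addon when it built successfully
  ({ mask } = require('bufferutil'));
} catch (err) {
  // Otherwise fall back to a pure-JS implementation
  ({ mask } = require('./fallback/buffer-util.js'));
}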

What is the perfect workflow to work on A, B & C in parallel where A depends on B and B on C?

I wonder what is the perfect workflow if one needs to work on project A, B and C in parallel where A depends on B and B depends on C.
Currently I have everything in one repository, which sped up the early development. So my working directory looks like this:
/A/
/A/B
/A/B/C
So A is the project that is driving the development but it also means B and C are in parallel evolvement.
However, I want to release the projects B and C individually as well as they are quite useful for others.
However, I'm torn on how to do this without ruining my development speed. npm is great for distribution and dependency management, but during development you definitely don't want to push temporary versions across the internet just to get the files to update in a different folder on your machine :)
On the other hand, you also don't want to copy them over manually. Heck, all this "I have to switch directories to work on B and then copy it over to /A/B" business is scary and seems error-prone.
So, git submodule seemed like the answer as it's essentially enabling exactly that: You would keep your directory layout just as it is. Then when you make changes to files in B you can directly test them out in A without having to copy something over. And when you think it's ready you can just commit and push from the three different folders. Everything goes into three different repositories automatically. Seems like heaven yet everybody hates git submodule for various reasons.
I have pretty much the same problem when working on grunt plugins. I have the grunt plugin in its own repository, and when I'm working on it I have to copy it over into one of the projects where I use it, to drive the development. Then at the end of the day I copy it back to the grunt plugin's working directory, make commits and push them. Thank god I'm not the author of thousands of grunt plugins, so I can deal with that; but for the project I'm currently working on, I would definitely like to find a better solution.
So I wonder, what is the answer?
Note that Git submodules point to a specific version of the external repository, i.e. to a specific commit.
To update all Git submodules to the latest version, you’d still have to run a command:
git submodule foreach git pull origin master
Depending on your situation, you could possibly use npm instead of Git submodules. In that case, you simply list your dependencies in package.json, then run npm install in the root of the repository to fetch them. If a dependency is updated and a new version is published, you just run npm update again and it will match the version requirements set in the package.json file. You could also use npm to point to a specific commit, much like how Git submodules work:
{
  "devDependencies": {
    "dependency-a": "git://github.com/the-user-name/the-project-name.git#b3c24169432a924d05020c85f852d4a1736e66d4"
  }
}
Or, if you want to use a bleeding edge version of a dependency, like the master branch of a given Git repository, you could use:
{
  "devDependencies": {
    "dependency-a": "git://github.com/the-user-name/the-project-name.git#master"
  }
}
I've tried any number of these workflows and have decided that git submodules are not the answer. The main reason, for me, is that they couple projects together at the SCM level, and it is not intuitive where exactly in the revision history this coupling happens, especially in a detached-HEAD scenario.
I rely on npm for dependency management, with a combination of npm link and git URLs. npm link is great when I want to test something locally; git URLs are perfect for resolving dependencies during pull requests. Eventually I always want a published module with a version number I can match to a git tag for future reference and issue tracking. I use npm version to do this in one step. This allows projects to depend on each other at varying levels of coupling during my development cycle.
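The npm link step in that workflow is only a few commands (the directory and package names follow the A/B layout from the question):
cd ~/projects/B   # the dependency you are actively editing
npm link          # registers a global symlink for package B
cd ~/projects/A   # the project that consumes B
npm link B        # points A/node_modules/B at your local working copy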
Would using git subtree be the answer here? This article Alternatives To Git Submodule: Git Subtree did a good job explaining the pros and cons of using git subtree versus git submodules.
