Jam vs Bower, what's the difference?

There are two package managers for client-side JavaScript, but how do they compare? Could someone explain which one excels at what?
Jam
Bower

As others have already mentioned in the comments, there are a few alternatives in this space beyond just Jam and Bower:
Component
Ender
Volo
Both aim to provide a way to package up your assets and manage the dependencies between them for the client. Both Bower and Jam made their debut in 2012, in September and May respectively.
Both are available through node/npm, and if all you want to do is resolve dependencies between public libraries like Backbone, Underscore, or jQuery for your application, then either solution will work, giving you some basic options to control versions, where packages go in your project, and checking for updates.
As for what's different: Bower is a bit lower level than Jam, which makes it usable by a wider audience; you can create Bower components for more than just JavaScript libraries. Jam focuses solely on AMD-style JavaScript libraries. With Jam, you specify your dependencies in the same package.json file you would already use with npm, whereas Bower has chosen component.json by convention. The limitation of Bower is that it only fetches your dependencies; you still need a build system if you want to use RequireJS or similar solutions, whereas Jam has standardized on RequireJS, so you get that for free. Bower is getting support from Twitter and a few other projects (Ender, Yeoman).
Apologies if this is incorrect, but one additional limitation of Jam is that it does not allow you to create your own components for distribution in a private repository. This is something Bower lets you configure as an endpoint in .bowerrc, but I have not yet found a way to do that in Jam. Perhaps I haven't searched well enough, but it appears there is at least one fork, private-jam.
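For illustration, a minimal .bowerrc sketch pointing Bower at a private endpoint (the URL is hypothetical):
{
  "registry": "https://bower.registry.mycompany.internal"
}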
A few other good reads:
http://yeoman.io/packagemanager.html
http://dailyjs.com/2013/01/28/components/

The following breakdown of some of the popular package managers can help you decide what to use in your development. It compares factors such as:
whether the manager uses package.json or another form of descriptor
what features it supports (scaffolding, compilation, a central registry)
speed
form of packages supported (JS only; JS and CSS; or JS, HTML and CSS)
module types supported
and of course some notes based on a personal point of view
https://github.com/wilmoore/frontend-packagers


Alternatives to structure and use JavaScript libraries without npm

We have been working on our projects all tied to a single lerna repository, as:
lerna
|----- app1 - application 1
|----- app2 - application 2
|----- appN - application N
|----- commondb (common database libraries for app1, app2 to appN)
|----- commonux (common ux libraries for app1, app2 to appN)
|----- commonauth (common authentication libraries for app1, app2 to appN)
As the codebase grew a lot, the lerna repository is now really full of packages (40+) and too much code.
We're now trying to split it into smaller pieces, and we're looking for alternatives. Doing that, applications would still need a way to import the common libraries as they do today.
NPM certainly seems to be a solution (making each common package independent and publishing it on npm), but we want to keep our code in our own environment, without third-party services or clouds (we have our own git server instance).
What are the current options to manage JavaScript libraries that we can make use of? What would be the recommended one in such a scenario?
Your decision can be greatly affected by answering the following: do you want your apps to be running the same version of the shared libraries? Or do you want autonomy within the libraries, to be able to publish and manage different versions of them, where it is the responsibility of the consumer app to manage which version of each library it uses?
If it is the former, my suggestion would be to stick with a mono-repo approach. Maybe consider something like Nx, which has some nice tooling for linting, testing, building and deploying only the affected modules, while sharing a single common package.json and therefore common libraries across multiple apps and libs.
Otherwise you are looking at potentially managing multiple repos, multiple versions of each library, multiple pipelines, and multiple workspace configs.
You could simply use the pnpm workspace: protocol, which gives you a monorepo structure without even requiring you to publish, when a dependency is declared as "foo": "workspace:*". With this approach you can keep packages local; pnpm will use the local code without downloading anything. I used this concept in a Vue pnpm Workspace demo. You could stay with local packages and/or go with Lerna to publish if you also want to (I prefer Lerna-Lite, which is what I use), since both now support the pnpm/yarn workspace: protocol. This approach is flexible and allows you to switch to publishing (to npm or a private registry) at any point in the future; in other words, there is no major code refactoring needed to add Lerna after the fact when you start from a pnpm workspace structure.
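For illustration, a minimal sketch of such a setup (all names are hypothetical). First, pnpm-workspace.yaml declares where the local packages live:
packages:
  - "apps/*"
  - "packages/*"
Then an app consumes a local package without publishing it, in apps/app1/package.json:
{
  "name": "app1",
  "dependencies": {
    "commondb": "workspace:*"
  }
}
Running pnpm install then links packages/commondb into app1's node_modules instead of fetching anything from a registry.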

Why publish the TypeScript declaration file on DefinitelyTyped for a JavaScript library?

I published two JavaScript libraries on npm, and users have asked for TypeScript type definitions for both of them. I don't use TypeScript myself and I have no plans to rewrite those libraries in TypeScript, but I'd still like to add the type definition files, if only for better IntelliSense code completion. I'm looking for some advice with this.
I started with reading the docs of the DefinitelyTyped project and the documentation on publishing a declaration file for an npm package. Both sources state that "publishing to the @types organization on npm" is the preferred approach for projects not written in TypeScript.
Why is that preferred over publishing type definitions alongside the library itself via the types field in package.json? I really don't see the point in involving a third party in this. It seems like updating the type definitions and versioning them is just more complicated this way.
Quotes from the documentation referenced above (emphasis mine)
From DefinitelyTyped:
If you are the library author and your package is written in TypeScript, bundle the autogenerated declaration files in your package instead of publishing to Definitely Typed.
From typescriptlang.org:
Now that you have authored a declaration file following the steps of this guide, it is time to publish it to npm. There are two main ways you can publish your declaration files to npm:
bundling with your npm package, or
publishing to the @types organization on npm.
If your package is written in TypeScript then the first approach is favored. Use the --declaration flag to generate declaration files. This way, your declarations and JavaScript will always be in sync.
If your package is not written in TypeScript then the second is the preferred approach.
Both seem to say:
if (isAuthor && lang === "typescript")
bundle();
else
publishOnDefinitelyTyped();
Type declaration publishing guides seem a bit outdated and sparse in several areas.
I'll try to compare both scenarios in detail.
1. Types bundled together with the npm package
1.1. From package consumer perspective
1.1.1. Pros
In general, package-bundled types are more convenient thanks to streamlined dependency management.
no need for an additional @types dependency (when adding the package dependency)
no need to synchronize versions between the package and its types (when upgrading the package dependency)
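For reference, bundling boils down to shipping a declaration file and pointing the types field at it; a minimal package.json sketch (name and paths are illustrative):
{
  "name": "my-lib",
  "version": "1.0.0",
  "main": "dist/index.js",
  "types": "dist/index.d.ts"
}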
1.1.2. Cons
limited ways to opt out of using package-bundled types
This involves cases when the consumer needs to modify or substitute type declarations. The process can be considerably problematic in projects with opinionated build setups, due to the already limited configuration options.
1.2. From package author perspective
1.2.1. Pros
the library owner can release patches and updates of type declarations at will, at any frequency or schedule
no restrictions on third party or external type dependencies
providing concurrent support for multiple versions is carried out in the same manner as for the actual code
1.2.2. Cons
Having types bundled with the package means that there are in fact two API contracts published each time a version is released.
Example:
Let's assume a library that aims to conform to semantic versioning.
latest release -> 1.0.0
major version is bumped due to a breaking change -> 2.0.0
a critical bug in type declarations is reported, the release is broken for a group of users with typescript projects
a fix in types is a breaking change
The options for next version are:
A. 2.X.X -> violates semver rules for type declarations
B. 3.0.0 -> violates semver rules for the actual code
It's likely that there are numerous variations of such a scenario.
2. Publishing to the DefinitelyTyped repository
2.1. From package consumer perspective
2.1.1. Pros
simple opt-out via removal of the @types dependency
2.1.2. Cons
the package consumer is responsible for keeping the package and related types versions in sync
2.2. From package author perspective
2.2.1. Pros
types have no impact on the package's release cycle
the DT repo comes with two extra traits:
the dtslint library for type assertions and type testing
an in-depth performance and compiler footprint analysis, including a diff between the latest package version and the package after PR modifications.
The first tool can be incorporated into another package repo with minor effort.
I'm not sure if the analysis can be replicated in one's own repository, but it contains a lot of valuable data.
2.2.2. Cons
non-standard way of supporting past releases
type release schedule constrained by DT review and release cycles
Assuming that the DefinitelyTyped PR creator is the @types package owner, it usually takes between one and two days before the PR is merged. Additionally, there's a minor delay before types-publisher updates the PR-related @types npm package.
An additional review process is involved when the PR is the author's first contribution to a given package.
using external dependencies
TypeScript handbook says:
If your type definitions depend on another package:
Don’t combine it with yours, keep each in their own file.
Don’t copy the declarations in your package either.
Do depend on the npm type declaration package if it doesn’t package its declaration files.
Judging by the amount of redundant utility types, these rules are hardly respected (a sketch of such a declared dependency follows at the end of this item).
The author of type declarations is allowed to use adjacent DT repository types.
Depending on packages beyond this list requires them to be on the types-publisher whitelist.
New packages can be whitelisted by submitting a PR to types-publisher.
It took over two weeks for my PR to be merged. I don't know if that's usual since I had submitted a single PR.
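To illustrate the handbook's "Do depend" rule quoted above, a hypothetical excerpt from a declaration package's package.json, depending on another package's published type declarations (the dependency shown is only an example):
{
  "dependencies": {
    "@types/node": "*"
  }
}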
DT repo volume
I have no cross-IDE comparison or experience, but as far as JetBrains IDEs are concerned, the memory footprint of a fully indexed DT repo project made the IDE unusable.
Disabling recompilation on changes helps to some extent. The frustrating IDE experience can be worked around by removing DT repo contents that are not relevant to the package of interest.

npm installs many dependencies

I bought an HTML template recently, which contains many plugins placed inside a bower_components directory, along with a package.js file. I wanted to install another package I liked, but decided to use npm for this purpose.
When I typed:
npm install pnotify
the node_modules directory was created and contained about 900 directories with other packages.
What are those? Why did they get installed along with my package? I did some research and it turned out that those were needed, but do I really need to deliver my template in production with hundreds of unnecessary packages?
This is a very good question. There are a few things I want to point out.
The V8 engine, Node Modules (dependencies) and "requiring" them:
Node.js is built on the V8 engine, which is written in C++. Node's core and native addons are written in C++, but most npm dependencies are plain JavaScript.
When you require a dependency, you are really requiring code/functions from either a native (C++) addon or a JS library, because that's how new libraries/dependencies are made.
Libraries have so many functions that you will not use
For example, take a look at the express-validator module, which contains many functions. When you require the module, do you use all the functions it provides? The answer is no. People most often require packages like this just to use a single feature, yet all of the functions end up getting downloaded, taking up unnecessary space.
Think of node dependencies that are built from other node dependencies as being like interpreted languages
For example, JavaScript engines are written in C/C++, whose compilers are in turn originally written in assembly. Think of it like a tree: you create new branches each time for more convenient usage and, most importantly, to save time. It makes things faster. Similarly, when people create new dependencies, they use/require ones that already exist instead of rewriting a whole C++ program or JS script, because that makes everything easier.
The problem arises when authors require other npm packages to create a new one
When the authors of dependencies require other dependencies from here and there just to use a few of their features, they end up pulling them all in (which they don't really worry about, since they mostly do not care about size, or they'd rather do this than explicitly write a new dependency or a C++ addon), and this takes extra space. For example, you can see the dependencies that the express-validator module uses by accessing this link.
So, when you have big projects that use lots of dependencies, you end up spending a lot of space on them.
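You can inspect this yourself; depending on your npm version, one of the following prints the full resolved dependency tree of your project:
npm ls
npm ls --all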
Ways to solve this
Number 1
This requires some Node.js expertise. To reduce the number of downloaded packages, a professional Node.js developer could go into the directories the modules are saved in, open the JavaScript files, take a look at their source code, and delete the functions they will not use, without changing the structure of the package.
Number 2 (Most likely not worth your time)
You could also create your own personal dependencies, written in C++ or, more preferably, JS, which would take up the least space possible, depending on the programmer, but would also take the most time. (Note: most dependencies are written in JS.)
Number 3 (Common)
Instead of using option number 2, you could use a bundler like webpack.
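A minimal webpack configuration sketch (paths illustrative); in production mode webpack minifies and tree-shakes, so only the code your entry point actually reaches ends up in the bundle:
// webpack.config.js
module.exports = {
  mode: 'production',
  entry: './src/index.js',
  output: { filename: 'bundle.js' }
};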
Conclusion & Note
So, basically, there is no running away from downloading all the node packages, but you could use solution number 1 if you believe you can do it, which also carries the risk of breaking the whole intention of a dependency. (So make it personal and use it for specific purposes.) Or just make use of a bundler like webpack.
Also, ask yourself this question: do those packages really cause you a problem?
No, there is no point in adding about 900 package dependencies to your project just because you want to add some template. But it is up to you!
The heaviness of a template is not a challenge to the Node.js ecosystem or its main package system, npm.
It is a fact that the JavaScript community tends to make the smallest possible modules, each responsible for one task, and just one.
That is not a bad thing, I guess. But it can result in a situation where you have a lot of dependencies in your project.
Nowadays hard drive space is cheap and nobody cares much any more about making efficient/small apps.
As always, it's only a matter of choice.
What is the point of delivering hundreds of packages weighing hundreds of MB for a few-kB project?
There isn't one.
If you intend to provide it to other developers, just gitignore (or remove from the shared package) the node_modules or bower_components directories. Developers simply install the dependencies again as required ;)
If it is something as simple as an HTML template or similar, node would most likely be there just to make your life as a developer easier, providing live reload, compiling/transpiling TypeScript/Babel/SCSS/Sass/Less/CoffeeScript... (the list goes on ;P) etc.
And in that case the dependencies would most likely only be devDependencies and won't be required at all in a production environment ;)
Also, many packages separate production and dev dependencies, so you just need to install the production dependencies:
npm install --only=prod
If your project does need many packages in production, and you really really want to avoid shipping all that, just spend some time and include only the CSS/JS files your project needs (this can be a laborious task).
Update
Production vs default install
Most projects have different dev and production dependencies.
Dev dependencies may include things like Sass or TypeScript compilers, uglifiers (minification), and tooling like live reload.
The production install will not include those, reducing the size of the node_modules directory.
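For illustration, a package.json sketch with the two groups separated (package names and versions are made up):
{
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "sass": "^1.60.0",
    "typescript": "^5.0.0"
  }
}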
No node_modules
In some HTML-template kinds of projects, you may not need any node_modules in production at all, so you can skip running npm install.
No access to node_modules
Or in some cases, when the server that serves the app lives inside node_modules itself, access to node_modules may be blocked (because there is no need to access it from the frontend).
What are those? Why did they get installed along with my package?
Dependencies exist to facilitate code reuse through modularity.
... do I need to deliver my template in production with hundreds of unnecessary packages?
One shouldn't be so quick to dismiss this modularity. If you inline your requires and eliminate dead code, you'll lose the benefit of maintenance patches for the dependencies being applied to your code automatically. You should see this as a form of compilation, because... well... it is compilation.
Nonetheless, if you're licensed to redistribute all of your dependencies in this compiled form, you'll be happy to learn such optimisations are performed by compilers which compile JavaScript to JavaScript. The Closure Compiler, as the first example I stumbled across, appears to perform advanced compilation, which means you get dead code removal and function inlining... That seems promising!
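A hypothetical invocation via the compiler's npm distribution (file names are made up):
npx google-closure-compiler --compilation_level ADVANCED_OPTIMIZATIONS --js bundle.js --js_output_file bundle.min.js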
This does, however, have another side effect when you are required to justify the licensing of all npm modules: when hundreds of modules are pulled in as dependencies, that effort becomes considerably more cumbersome.
Very old question, but I happened to come across a very similar situation, just as RA pointed out.
I tried to work with a Node.js framework using VS Code, and the moment I initialized npm using npm init -y, it generated many different dependencies. In my case, the cause was the VS Code extension ESLint that I had added prior to running npm init -y.
Uninstalled ESLint
Restarted VS Code to apply that uninstallation
Removed the previously generated package.json and node_modules folder
Ran npm init -y again
This solved my problem of starting out with so many dependencies.

Using package.json for client-side packages that could be loaded dynamically in the browser

I am thinking of extending the format of package.json to include dynamic package (plugin) loading on the client side, and I would like to understand whether this idea contradicts the npm vision or not. In other words, I want to load a bunch of modules that share common metadata in the browser at runtime. Solutions like SystemJS and jspm are good for module management, but what I seek is dynamic package management on the client side.
Speaking in detail, I would like to add a property like "myapp-clientRuntimeDependencies" that would allow specifying dependencies that get loaded by the browser instead of through standard prepackaging (npm install -> a browserify-like solution).
package.json example:
{
  "name": "myapp-package",
  "version": "",
  "myapp-clientRuntimeDependencies": {
    "myapp-plugin": "file:myapp-plugin",
    "myapp-anotherplugin": "file:myapp-anotherplugin"
  },
  "peerDependencies": {
    "myapp-core": "1.0.0"
  }
}
The question:
Does this idea contradict the vision of npm and package.json? If yes, then why?
Any feedback from the npm community is very much appreciated.
References:
Extending package.json: http://blog.npmjs.org/post/101775448305/npm-and-front-end-packaging
EDIT:
The question was not formulated well; a better way to ask it is:
What is the most standard way (e.g. handled by some existing tools, likely to be supported by npm) to specify run-time dependencies between 2 dynamically loaded front-end packages in package.json?
What is the most standard way to attach metadata in JSON format to front-end packages, that are loaded dynamically?
I wouldn't say that it conflicts with the vision of package.json, however it does seem to conflict a bit with how it's typically used. As you seem to be aware, package.json is normally used pre-runtime. In order to load something from package.json into your runtime, you'd have to load the package.json into your frontend code. If you're storing configuration that you don't want visible to the frontend via a simple view-source, this could definitely present a problem.
One thing that didn't quite click with me on this: you said that SystemJS and jspm are good for module management but that you were looking for package management. In the end, packages and modules tend to be synonymous, as a package becomes a module.
I may be misunderstanding what it is that you're looking for, but from what I can gather, I recommend you take a look at code splitting, which is essentially creating separate JS files that are loaded dynamically based on what is needed, instead of bundling all your JavaScript into a single file. Here are some docs on how to do this with webpack (I'm sure browserify does it as well).
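A minimal sketch of that idea with a dynamic import (module path and plugin API are hypothetical); webpack splits the dynamically imported module into its own chunk and fetches it on demand:
// somewhere in your client code
import("./plugins/myapp-plugin.js").then((plugin) => {
  plugin.init(); // hypothetical plugin entry point
});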
If I understand correctly, your question is about using the package.json file to include your own app configuration. In the example you describe, you would use such configuration to let your app know which dependencies can be loaded at runtime.
There is really nothing preventing you from inserting your own fields into the package.json file, except for the risk of conflict with names that are used by npm for other meanings. But if you use a very specific name (as in your example), you should be safe enough. Actually, many lint and build tools have done so already. It is even explicitly written in the post you refer to:
If your tool needs metadata to make it work, put it in package.json. It would be rude to do this without asking, but we’re inviting you to do it, so go ahead. The registry is a schemaless store, so every field you add is on an equal footing to all the others, and we won’t strip out or complain about new fields (as long as they don’t conflict with existing ones).
But if you want to be even safer, you could resort to using a different file (as Bower did, for example).
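For what it's worth, consuming such a custom field at runtime could look something like this rough browser-side sketch (the URL, plugin layout, and field handling are all hypothetical):
fetch("/package.json")
  .then((res) => res.json())
  .then((pkg) => {
    const deps = pkg["myapp-clientRuntimeDependencies"] || {};
    // dynamically import each declared plugin (native ESM)
    return Promise.all(
      Object.keys(deps).map((name) => import("/plugins/" + name + ".js"))
    );
  });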

Meteor.js 1.0: what is the difference between the packages and versions files?

Trying to port Crowducate from Meteor 0.8 to 1.0. I ran "meteor update". Results can be seen in this branch: https://github.com/Crowducate/crowducate.me/commit/bc1c8fa81a23fda586980d4803803ef701c762c5
So my questions:
Why was a versions file created (instead of updating the packages file)?
Does the versions file somehow override the packages file? Do I need both files?
More info can be found in these GitHub issues:
Porting to Meteor 1.0 and flatten external packages into repo
Any help appreciated.
Meteor determines what functionality should be added to a project using the packages file. It contains the names of packages such as email or iron:router. It is agnostic of the version of Meteor you are running, which will eventually lead to serious issues unless you have a mapping of which versions of the packages are good to go (i.e. known to work well together).
The versions file further specifies which actual versions of the packages you are using in a project. You may specify a version using meteor add package:name@x.y.z.
There is a third (hidden) piece of version information inside each package that defines which other packages it plays well with (see for example this). Packages define the minimum version they require; something newer will probably also work. This is where a smart versioning scheme comes into play.
Meteor packages use semantic versioning, so you can better tell whether things will break with an upgrade. Semantic versioning means each release is numbered major.minor.patch, e.g. x.y.z or 1.1.0. Patches don't change functionality, so any change to z is harmless. Changes to the minor version y should not break the existing API; new functionality may be added or existing APIs may be changed/deprecated. Changes to the major version x are likely to introduce breaking changes and also break dependent packages.
You can find some more info on Arunoda's page: https://meteorhacks.com/meteor-packaging-system-understanding-versioning.html
Technically you are right to ask why there is redundant information in both files; packages seems superfluous when all the required info is inside versions already. But you will notice that packages only lists the packages you explicitly added to your project, while versions includes all their dependencies as well. Meteor is smart enough to know, when you remove a package, not to bundle its now-unneeded dependencies any more. You need both files to differentiate between what was added by the user and what was added automatically by the package manager.
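To make the relationship concrete, a sketch of how the two files might look for a small project (names and versions are illustrative):
.meteor/packages (only what you explicitly added):
email
iron:router
.meteor/versions (every resolved package, pinned, including transitive dependencies):
email@1.0.4
iron:core@1.0.8
iron:router@1.0.12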
