I want to write a Node CLI application and I'm wondering how I should structure it. I'm fairly new to Node and I'm confused by all the design patterns used when building such an application.
I want to be able to call the application from the command line, but also use it as a Node module for better testing.
Currently I have one file with lots of functions that get called directly from the CLI, but I feel this is rather difficult to maintain.
Is there any good writing on how to do such things? I looked at rimraf but it confused me even more. Thanks for your time.
I don't know if there is a "right" way to do it, but I can tell you how I have dealt with a problem similar to yours. I wanted to create a CLI and a Visual Studio Code plugin so people would be able to use the functionality both from VS Code and from the CLI (for those who don't use VS Code). The approach I took was to put all the logic in its own package and then create two other packages that included the first one: one for the CLI and one for the VS Code plugin, both requiring the "logic" package.
In the CLI package you would only have code strictly related to command handling; the real meat happens in the logic package. In my case the VS Code plugin package had very few lines of code, just configuration and the calls to the needed functions.
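A minimal sketch of that split, assuming a hypothetical run() export in the logic package and placeholder package names (none of these come from the original code):

#!/usr/bin/env node
// bin/cli.js in the CLI package: only argument handling lives here
const { run } = require('my-tool-core'); // the separate "logic" package; run() is assumed to return a promise

run(process.argv.slice(2)).catch((err) => {
  console.error(err.message);
  process.exit(1);
});

The CLI package's package.json then points the command at that file via the "bin" field, e.g. "bin": { "my-tool": "./bin/cli.js" }. Because the logic package is a normal module, your tests can require it directly without going through the CLI at all.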
Then, regarding the structure of the code, some recommendations:
expose only what is strictly necessary
isolate your code in different files/classes based on common functionality (and go to point 1)
test your code
lint your code
But those are common-sense, language-independent recommendations.
There's no one "standard" way to structure Node.js apps, but you will notice that many authors follow similar patterns. Instead of having one file containing all the code, it should be split into modules, grouped by function. Have a look at this repo on GitHub; it has some very good suggestions about Node.js best practices: https://github.com/i0natan/nodebestpractices#1-project-structure-practices
A couple more pointers I would add: ensure you're logging any errors; consider using something like Winston.js for this purpose. Also have some mechanism in place to restart the service if a critical error occurs, e.g. Forever.js.
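For instance, a bare-bones Winston setup (winston 3.x API; the file name is just an example) looks roughly like this:

const winston = require('winston');

// log "info" and above to the console, and errors to a file as well
const logger = winston.createLogger({
  level: 'info',
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'error.log', level: 'error' })
  ]
});

logger.error('something went wrong');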
Likewise, ensure you're unit testing; there are some good test frameworks: Jasmine, Mocha, Cucumber.js.
In order to refactor a client-side project, I'm looking for a safe way to find (and delete) unused code.
What tools do you use to find unused/dead code in large react projects? Our product has been in development for some years, and it is getting very hard to manually detect code that is no longer in use. We do however try to delete as much unused code as possible.
Suggestions for general strategies/techniques (other than specific tools) are also appreciated.
Thank you
Solution:
For node projects, run the following command in your project root:
npx unimported
If you're using flow type annotations, you need to add the --flow flag:
npx unimported --flow
Source & docs: https://github.com/smeijer/unimported
Background
Just like the other answers, I've tried a lot of different libraries but never had real success.
I needed to find entire files that aren't being used. Not just functions or variables. For that, I already have my linter.
I've tried deadfile, unrequired, trucker, but all without success.
After searching for over a year, there was one thing left to do. Write something myself.
unimported starts at your entry point and follows all your import/require statements. All code files that exist in your source folder but aren't imported are reported.
Note, at this moment it only scans for source files, not for images or other assets, as those are often "imported" in other ways (through HTML tags or via CSS).
Also, it will have false positives. For example, sometimes we write scripts that are meant to simplify our development process, such as build steps. Those aren't directly imported.
Also, sometimes we install peer dependencies and our code doesn't directly import those. Those will be reported.
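To silence those false positives, unimported can read a config file from the project root; the sketch below is written from memory, so double-check the exact key names against the docs linked above (the paths and package name are placeholders):

{
  "entry": ["src/index.js"],
  "ignoreUnimported": ["scripts/**"],
  "ignoreUnused": ["some-peer-dependency"]
}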
But for me, unimported is already very useful. I've removed a dozen files from my projects, so it's definitely worth a shot.
If you have any trouble with it, please let me know, through GitHub issues, or contact me on Twitter: https://twitter.com/meijer_s
Solution for Webpack: UnusedWebpackPlugin
I work on a big front-end React project (1100+ JS files) and stumbled upon the same problem: how do you find out which files are no longer used?
I've tested the following tools so far:
findead
deadfile
unrequired
None of them really worked. One of the reasons is that we use "non-standard" imports. In addition to the regular relative paths in our imports, we also use paths resolved by the webpack resolve feature, which basically allows us to use a neat import 'pages/something' rather than a cumbersome import '../../../pages/something'.
UnusedWebpackPlugin
So here is the solution I finally came across, thanks to Liam O'Boyle (elyobo) on GitHub:
https://github.com/MatthieuLemoine/unused-webpack-plugin
It's a webpack plugin, so it's going to work only if your bundler is webpack.
I personally find it good that you don't need to run it separately; instead, it's built into your build process, throwing warnings when something is not OK.
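For reference, wiring it up looks roughly like this (the directories/exclude values are just examples; check the plugin's README for the exact options):

// webpack.config.js
const path = require('path');
const UnusedWebpackPlugin = require('unused-webpack-plugin');

module.exports = {
  // ...the rest of your webpack config
  plugins: [
    new UnusedWebpackPlugin({
      directories: [path.join(__dirname, 'src')], // where to look for unused files
      exclude: ['*.test.js'],                     // patterns to skip
      root: __dirname                             // used to shorten the reported paths
    })
  ]
};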
Our research topic: https://github.com/spencermountain/unrequired/issues/6
Libraries such as unrequired and deadcode only support legacy code.
To find unused assets to remove manually, you can use the deadfile library: https://m-izadmehr.github.io/deadfile/
Out-of-the-box support for ES5, ES6, React, Vue, ESM, CommonJS.
It supports import/require and even dynamic import.
It can simply find unused files in any JS project.
Without any config, it supports ES6, React, JSX, and Vue files.
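Typical usage is a single command pointed at your entry file (the path below is just an example):
npx deadfile ./src/index.js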
First of all, very good question. In large projects, coders usually write a lot of code over time, and in the end it is hard to find the unused code.
There are two possible approaches that should work for you; I usually use them whenever I need to remove and reduce the unused code in my project.
1st way WebStorm IDE:
If you're using the WebStorm IDE for JS development, or React JS / React Native or Vue.js etc., it indicates unused code inside the editor with a different color or a red warning.
But if that doesn't work in your particular scenario, there is another way to remove the unused code.
2nd Way unrequired Library: (JSX is not supported)
The second way to remove unused code in the project is the unrequired library; you can visit it here: unrequired github
Another library called depcheck is available under npm & GitHub here.
Just follow their respective instructions on how to use them and you will fix this unused-code issue easily. For example, depcheck can be run with no setup at all, as shown below.
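Run it from the project root (npx ships with npm 5.2+, so no global install is needed):
npx depcheck
It lists dependencies declared in package.json that are never imported, as well as packages that are used but missing.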
Hopefully that helps you
I think the easiest solution for a create-react-app-bootstrapped application is to use ESLint. I tried using various webpack plugins, but ran into out-of-memory issues with each plugin.
Use the no-unused-modules rule, which is now a part of eslint-plugin-import.
After setting up ESLint and installing eslint-plugin-import, add the following to the rules:
"rules: {
...otherRules,
"import/no-unused-modules": [1, {"unusedExports": true}]
}
My approach is intensive use of ESLint, making it run both in the IDE and before every push.
It points out unused variables, methods, imports and so on.
Webpack (which also has nice plugins for dead-code detection) takes care of not bundling unimported code.
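One way to wire up the "before every push" part is a git hook; husky is my own suggestion here, not something mentioned above, and the snippet uses the older husky v4 package.json config style (newer husky versions use a .husky/ directory instead):

"scripts": { "lint": "eslint ." },
"husky": {
  "hooks": { "pre-push": "npm run lint" }
}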
findead
With findead you can find all unused components in your project. Just install and run:
Install
npm i -g findead
Usage
findead /path/to/search
This question reminds me that React (create-react-app) by default removes dead code from the production bundle when you run the build command.
Notes:
You need to run the build command only when you want to ship your app to production.
TL;DR; Is it possible to implement package inheritance in Node? Or is there a recommended tool to help with this?
I am working in an enterprise environment containing 60+ (and growing) web apps. We’ve also modularized components such as header/footer/sign-in/etc. so the web apps don’t need to repeat the code, and they can just pull them in as dependencies. We also have library modules to provide things like common logging, modeling, and error handling. We also have one main library that provides a foundation for maintenance like testing, and linting.
What I am trying to do is get access to the dependencies this main library uses in upper level modules.
lib-a
|
—> lib-b
|
—> babel, chai, mocha, etc.
I would like to have lib-a “inherit” babel, chai, mocha, etc. from lib-b rather than having to specifically add the dependencies. That way all my libraries, and eventually web apps will have the same version, and I won’t have to repeat the same dependencies in every package.json. Nor will I need to go through the headache of having N number of teams update the 60-100 apps/libs/whatnot, and having to deal with them complaining about maintenance.
I do understand this goes against the core of npm, but at the level we are using this it's becoming a maintenance headache. Going more DRY would certainly have its benefits at this point.
So to repeat the original question at the top - Is it possible to implement package inheritance in Node? Or are there any recommended tools to help with this?
I have seen the following tools. Has anyone ever used them? or have thoughts on them. Are there others?
https://github.com/FormidableLabs/builder
https://github.com/Cosium/dry-dry
It's a bad idea. You should assume that you don't have control over the dependencies. How else would anybody be able to make changes to the dependencies?
Suppose lib a from your example uses mocha. Since it depends on lib b, which also depends on mocha, you could decide not to list mocha in lib a's package.json.
If somebody then refactors lib b to no longer use mocha, lib a will fail all of a sudden. That's not good.
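In other words, the safer sketch is for lib a to keep listing mocha itself, even though lib b already brings it in (the version numbers below are arbitrary):

{
  "name": "lib-a",
  "devDependencies": {
    "lib-b": "^1.0.0",
    "mocha": "^10.0.0"
  }
}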
We work with equally many projects. We use Greenkeeper, RenovateBot, and some tools that apply changes to all our repos at once. In the long run that's probably a better strategy for you than going against Node's dependency model.
I wrote a ClojureScript project. It is a Reagent component. Now I want to use this component in another ClojureScript project. This is what I do: I compile my cljs project and then I put the resulting compiled file into the js folder of the other project. Then I require that file from index.html. Finally, I invoke my component from a cljs file
(.slider-view (.-views js/swipe) (clj->js [[:p "1"]
[:p "2"]
[:p "3"]]))
and it works. But I have a question. My project and the project where I connect my component have common requirements, for example React and ReactDOM. How can I exclude these two references from my project and then connect them from the other project? Are there alternative approaches, for example requiring a cljs namespace from another cljs project directly?
Even if you could prevent JavaScript libraries from being included twice, you will still have huge chunks of the ClojureScript standard library compiled in twice. As you suggested, you will want to put your first project on the classpath of your second project (by adding it to your project.clj or equivalent) and then include that namespace directly.
Building on Justin's quite correct answer...
What you describe is exactly the normal ClojureScript dependency relationship. In this case, the dependency is something you have created, rather than someone else's existing library, but it is still the same situation.
There is one important subtlety to note. You may want to develop both of your projects simultaneously. This may seem tricky, since dependencies normally require a version number, deploying, and all the rest of the heavy lifting that is just too painful for quick development.
Fortunately, the standard ClojureScript tooling offers a solution to this, called the Leiningen checkouts directory, which works by letting you put a symlink to your dependency inside your main project's directory.
Details are in the checkout-dependencies section of the Leiningen documentation; or just web search for "Clojure checkouts" if that link ever decays.
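In practice it is just a symlink. Assuming the two projects sit side by side on disk, the consuming project is my-app and the component project is my-reagent-component (both placeholder names), it looks something like:
cd my-app
mkdir checkouts
ln -s ../../my-reagent-component checkouts/my-reagent-component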
As you get further into these techniques, you may hit some more subtle problems involving the Figwheel tooling. When you reach that stage, this article may also be useful.
I bought an HTML template recently, which contains many plugins placed inside a bower_components directory and a package.js file inside. I wanted to install another package I liked, but decided to use npm for this purpose.
When I typed:
npm install pnotify
the node_modules directory was created and contained about 900 directories with other packages.
What are those? Why did they get installed along with my package? I did some research and it turned out that those were needed, but do I really need to deliver my template in production with hundreds of unnecessary packages?
This is a very good question. There are a few things I want to point out.
The V8 engine, Node Modules (dependencies) and "requiring" them:
Node.js is built on the V8 engine, which is written in C++. Native Node.js add-ons are written in C++, but most npm dependencies are plain JavaScript.
Now when you require a dependency, you are really pulling in code/functions from another package, whether that is a native (C++) add-on or a JS library, because that's how new libraries/dependencies are made.
Libraries have so many functions that you will not use
For example, take a look at the express-validator module, which contains so many functions. When you require the module, do you use all the functions it provides? The answer is no. People most often require packages like this just to use one single benefit of it, although all of the functions end up getting downloaded, which takes up unnecessary space.
Think of the node dependencies that are made from other node dependencies as Interpreted Languages
For example, JavaScript engines are written in C/C++, whose compilers were in turn originally written in assembly. Think of it like a tree: you create new branches each time for more convenient usage and, most importantly, to save time. It makes things faster. Similarly, when people create new dependencies, they use/require ones that already exist, instead of rewriting a whole C++ program or JS script, because that makes everything easier.
Problem arises when requiring other NPMs for creating a new one
When the authors of a dependency require other dependencies here and there just to use a few of their features, they end up pulling all of them in (they mostly don't worry about the size, or they'd rather do this than explicitly write a new dependency or a C++ addon), and this takes extra space. For example, you can see the dependencies that the express-validator module uses by accessing this link.
So, when you have big projects that use lots of dependencies, you end up using a lot of space for them.
Ways to solve this
Number 1
This requires Node.js expertise. To reduce the size of the downloaded packages, an experienced Node.js developer could go into the directories where the modules are saved, open the JavaScript files, look at their source code, and delete the functions they will not use, without changing the structure of the package.
Number 2 (Most likely not worth your time)
You could also create your own personal dependencies, written in C++ or, preferably, JS, which would take up the least space possible (depending on the programmer), but would also take the most time: you trade development effort for size. (Note: most dependencies are written in JS.)
Number 3 (Common)
Instead of using option number 2, you could use webpack.
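As a rough sketch, webpack in production mode already drops unused exports (tree shaking) and minifies the bundle, so only the code you actually import ends up shipped (the entry path below is just an example):

// webpack.config.js
module.exports = {
  mode: 'production',      // enables minification and tree shaking of unused exports
  entry: './src/index.js'  // your app's entry point
};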
Conclusion & Note
So, basically, there is no running away from downloading all the node packages, but you could use solution number 1 if you believe you can do it, although it risks breaking the whole intention of a dependency (so keep such edits personal and use them for specific purposes), or just make use of a tool like webpack.
Also, ask this question to yourself: Do those packages really cause you a problem?
No, there is no point in adding about 900 package dependencies to your project just because you want to add some template. But it is up to you!
The heaviness of a template does not call into question the Node.js ecosystem or its main package system, npm.
It is a fact that the JavaScript community tends to make the smallest possible modules, each responsible for one task, and just one.
That is not a bad thing, I guess. But it can result in a situation where you have a lot of dependencies in your project.
Nowadays hard drive space is cheap and nobody cares much any more about making efficient/small apps.
As always, it's only a matter of choice.
What is the point of delivering hundreds of packages weighing hundreds of MB for a few-kB project?
There isn't one.
If you intend to provide it to other developers, just gitignore (or remove from the shared package) the node_modules or bower_components directories. Developers simply install the dependencies again as required ;)
If it is something as simple as an HTML template or similar, Node would most likely be there just to make your life as a developer easier, providing live reload, compiling/transpiling TypeScript/Babel/SCSS/SASS/LESS/Coffee... (the list goes on ;P) etc.
And in that case the dependencies would most likely only be dev dependencies and won't be required at all in the production environment ;)
Also, many packages declare separate production and dev dependencies, so you just need to install the production dependencies...
npm install --only=prod
If your project does need many packages in production, and you really, really want to avoid that, just spend some time and include only the css/js files your project needs (this can be a laborious task).
Update
Production vs default install
Most projects have different dev and production dependencies.
Dev dependencies may include things like SASS or TypeScript compilers, uglifiers (minification), and maybe tooling like live reload.
Whereas the production install will not have those, reducing the size of the node_modules directory.
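A package.json sketch of that split (the package names are just examples):

{
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "sass": "^1.60.0",
    "typescript": "^5.0.0"
  }
}

A production install such as the npm install --only=prod shown above skips everything under devDependencies.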
No node_modules
In some HTML-template kinds of projects, you may not need any node_modules in production at all, so you can skip doing an npm install.
No access to node_modules
Or, in some cases, when the server doing the serving itself lives in node_modules, access to node_modules may be blocked (because there is no need to access those files from the frontend).
What are those? Why did they get installed along with my package?
Dependencies exist to facilitate code reuse through modularity.
... do I need to deliver my template in production with hundreds of unnecessary packages?
One shouldn't be so quick to dismiss this modularity. If you inline your requires and eliminate dead code, you'll lose the benefit of maintenance patches for the dependencies automatically being applied to your code. You should see this as a form of compilation, because... well... it is compilation.
Nonetheless, if you're licensed to redistribute all of your dependencies in this compiled form, you'll be happy to learn that such optimisations are performed by compilers which compile JavaScript to JavaScript. The Closure Compiler, as the first example I stumbled across, can perform advanced compilation, which means you get dead-code removal and function inlining... That seems promising!
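As an illustration, the npm distribution of the Closure Compiler can be invoked roughly like this (the paths are examples; check the compiler docs for the exact flags and levels):
npx google-closure-compiler --compilation_level ADVANCED --js src/app.js --js_output_file dist/app.min.js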
This does, however, have another side effect: when you are required to justify the licensing of all npm modules, having hundreds of them pulled in as dependencies makes that effort a much more cumbersome task.
Very old question, but I happened to come across a very similar situation, just as RA pointed out.
I tried to work with a Node.js framework using VS Code, and the moment I initialised npm using npm init -y, it generated many different dependencies. In my case, it was the VS Code extension ESLint that I had added prior to running npm init -y. What fixed it for me:
Uninstalled ESLint
Restarted VS Code to apply the uninstallation
Removed the previously generated package.json and node_modules folder
Ran npm init -y again
This solved my problem of starting out with so many dependencies.
I'm trying to get into Grunt, which I am new to, but I do not understand its utility.
I understand that it is a task runner. I understand that it can be used to do things like bundle, uglify, jshint, minify, etc., anything that can be turned into a scripted task.
But I don't see what advantage this gives. Nearly all of these can be run from the command line anyway, which is to say you could just combine them using a simple shell script. It seems to me that setting up grunt + gruntfiles and writing tasks is more work than writing a shell script, rather than less.
What am I missing about this?
Grunt is basically a build / task manager written on top of NodeJS. I would call it the NodeJS-stack equivalent of Ant for Java. Here are some common scenarios under which you would want to use Grunt:
You have a project with JavaScript files requiring minification, and you generally generate a front-end build separately (in case you're using, say, Java for your backend). (grunt-contrib-uglify)
When you save code on your machine during development, you want the browser to reload your page automatically (might seem like a small thing, but believe me this has saved me lots of time). (Live reload)
When a developer saves code on his machine, he wants a comprehensive list of JS errors / general best practice violations to be shown. (grunt-contrib-jshint)
You have a project with SASS/LESS files which need to be compiled to CSS on the developer's machine during development. For example, whenever he saves a SASS file, you want it to be compiled to a CSS file automatically, for inclusion in your page. (grunt-contrib-sass)
You have a team of front-end developers working on the UI and a team of backend developers working on the backend, and you want the front-end devs to use the backend REST APIs without having to compile and deploy code every time on their own machines. In case you were wondering, this isn't possible with a typical web-server setup because XHR isn't allowed to be cross-domain by the browser. Grunt can set up a proxy for you, redirecting XHR requests made to the grunt connect server on your own system to another system! (grunt-contrib-proxy, grunt-contrib-connect)
I do not think your shell script can do ALL of these. To summarize: yes, setting up a Gruntfile.js is tedious for someone who has had little exposure to JavaScript or is new to NodeJS. I went through the same pains as a learner, but Grunt is an amazing piece of software. Do invest the time to set up a proper Gruntfile.js for your front-end project, and you'll thank god for making your life a lot easier :)
The advantage vs a shell script:
If you write a shell script for every one of these tasks, it is tedious to maintain and then customize for every one of your needs. Gruntfile.js is actually pretty easy: there is a config that you init it with, specifying what tasks you want to perform and the sources and targets for each.
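A minimal sketch of such a Gruntfile.js, using the grunt-contrib-uglify plugin mentioned above (the file paths are just examples):

// Gruntfile.js
module.exports = function (grunt) {
  // Task configuration: sources and targets for each task
  grunt.initConfig({
    uglify: {
      build: {
        src: 'src/app.js',       // example input
        dest: 'dist/app.min.js'  // example output
      }
    }
  });

  // Load the plugin that provides the "uglify" task
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Running `grunt` with no arguments runs this task list
  grunt.registerTask('default', ['uglify']);
};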
The integration with project seed generators such as Yeoman is another major factor to consider. Yeoman generators come with Gruntfile.js (or Gulpfile) scaffolding with intelligent defaults. For someone who is the sole UI contributor on their team, this is priceless to me!
For someone working on front-end technologies, if you have more than one person working with you, it's rather easier for them to get to know Grunt, which is already well documented with a lot of answers on SO, than to get to know your shell scripts. This might be a factor in large teams.
The numerous plugins for Grunt extend its base functionality. Unless your shell script is VERY popular and VERY modular, I don't see plugins being built for it. This also extends to the inclusion of new front-end technologies in your project. Say you want to use TypeScript in your project tomorrow: your shell script will need to incorporate this and account for it with your own effort. With Grunt, it's as simple as npm install <plugin> and adding a config.
Even though I agree with most of the advantages pointed out in the accepted answer, I still have to consider the disadvantages highlighted by Keith Cirkel in Why we should stop using Grunt & Gulp.
Thus, some advantages are rebutted by Grunt's overheads, and at the very least you should weigh all of this in your final decision to use Grunt or not.