I'm trying to get into Grunt, which I am new to, but I do not understand its utility.
I understand that it is a task runner. I understand that it can be used to do things like bundle, uglify, jshint, minify, etc., anything that can be turned into a scripted task.
But I don't see what advantage this gives. Nearly all of these can be run from the command line anyway, which is to say you could just combine them using a simple shell script. It seems to me that setting up grunt + gruntfiles and writing tasks is more work than writing a shell script, rather than less.
What am I missing about this?
Grunt is basically a build / task manager written on top of NodeJS. I would call it the NodeJS stack's equivalent of Ant for Java. Here are some common scenarios where you would want to use Grunt:
You have a project with JavaScript files requiring minification, and you generally generate a front-end build separately (in case you're using, say, Java for your backend). (grunt-contrib-uglify)
When you save code on your machine during development, you want the browser to reload your page automatically (might seem like a small thing, but believe me this has saved me lots of time). (Live reload)
When a developer saves code on their machine, they want a comprehensive list of JS errors / general best-practice violations to be shown. (grunt-contrib-jshint)
You have a project with SASS/LESS files which need to be compiled to CSS files on the developer's machine during development. For example, whenever a SASS file is saved, you want it compiled to a CSS file automatically, for inclusion in your page. (grunt-contrib-sass)
You have a team of front-end developers working on the UI and a team of backend developers working on the backend, and you want the front-end devs to use the backend REST APIs without having to compile and deploy code every time on their own machines. In case you were wondering, this isn't possible with a typical web server setup because browsers don't allow cross-domain XHR. Grunt can set up a proxy for you, redirecting XHR requests made against the Grunt connect server on your own system to another system! (grunt-contrib-proxy, grunt-contrib-connect)
I do not think your shell script can do ALL of these. To summarize: yes, setting up a Gruntfile.js is tedious for someone who has had little exposure to JavaScript or is new to NodeJS. I went through the same pains as a learner, but Grunt is an amazing piece of software. DO invest the time to set up a proper Gruntfile.js for your front-end project, and you'll thank god for making your life a lot easier :)
The Advantage vs shell script:
If you write a shell script for every one of these tasks, it is tedious to maintain and then customize for every one of your needs. Gruntfile.js is actually pretty easy: there is a config that you initialize it with, specifying which tasks you want to perform and the sources and targets for each. A minimal example is sketched below.
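As a minimal sketch (paths and task choices are my own assumptions, using the grunt-contrib-uglify and grunt-contrib-watch plugins):

    // Gruntfile.js
    module.exports = function (grunt) {
      grunt.initConfig({
        uglify: {
          build: {
            src: 'src/js/app.js',       // source to minify
            dest: 'dist/app.min.js'     // minified output
          }
        },
        watch: {
          scripts: {
            files: ['src/js/*.js'],
            tasks: ['uglify']           // re-minify on every save
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.loadNpmTasks('grunt-contrib-watch');

      grunt.registerTask('default', ['uglify']);
    };

Running grunt runs the default task once; grunt watch keeps rebuilding on every save.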
The integration with project seed generators such as Yeoman, and with tools like Gulp, is another major factor to consider. Yeoman-generated projects come with a Gruntfile.js with intelligent defaults. For someone who is the sole UI contributor on a team, this is priceless!
If you have more than one person working with you on frontend technologies, it's much easier for them to get to know Grunt, which is already well documented with a lot of answers on SO, than to get to know your shell scripts. This can be a factor in large teams.
The numerous plugins for Grunt extend its base functionality. Unless your shell script is VERY popular and VERY modular, I don't see plugins being built for it. This also extends to the inclusion of new front-end technologies in your project. Say you want to use TypeScript in your project tomorrow: your shell script will need to incorporate this and account for it through your own effort. With Grunt, it's as simple as an npm install and adding a config.
Even though I agree with most of the advantages pointed out in the accepted answer, I still have to consider the disadvantages that are highlighted by Keith Cirkel in Why we should stop using Grunt & Gulp.
Thus, some advantages are rebutted by Grunt's overheads, and you should at least consider all of this in your final decision to use Grunt or not.
Related
What are the reasons that people install Node.js on a PC, and can a Node.js website be developed without installing Node.js on the PC? If yes, what are the disadvantages?
Thanks
There can be many reasons, but some common ones:
When developing for any language, you'll need to be able to run and test your code, and running it locally makes it easier to do so.
Additionally, although you can use remote debugging, debugging is faster and easier to set up locally.
Many of the tools used by web developers are also developed in JavaScript and to run those locally, an engine capable of doing so is required. It makes sense that these tools are developed in JavaScript, not just because their primary user group will be able to understand and extend their code, but also because many of these tools need to be able to perform tasks and integrate with components that require something like a JavaScript engine to begin with.
In many situations, running a local environment on your development system will be cheaper and easier to maintain than having a separate test server; and you don't want to run your untested code on a production server.
Not directly related, but npm and Node.js have many uses beyond serving as a back-end for JavaScript-driven websites. Many people who have Node.js installed have nothing to do with web development; they have it for one of the many other reasons.
To answer your second question: "can a Node.js website be developed without installing Node.js on a PC?" Yes, but I can see very little reason to want to do so, unless you must. The advantage might be that you avoid having a complicated piece of software with a large footprint, and possibly some security concerns, on your development machine. But the disadvantages for the average developer likely far outweigh that; even more so if you just sandbox the entire development environment in case security is your main concern.
If you are writing everything from scratch, you don't need a package manager, but there is so much great stuff out there that you can use rather than writing it yourself! If you want to use it, you need a package manager that lets you download specific packages (optionally at specific versions), which may themselves use other packages. Your source code repository doesn't need to store a copy of every package you use, or of every package used by the packages you use: as long as your source code specifies which packages it uses in a way your package manager understands, all you need to do is provide a manifest of the specific packages (optionally at specific versions) your code uses directly.
"NPM" is one such package manager. "bower" is another but that uses NPM under the hood. Maven is a package manager I've seen in Java projects, and NuGet for MS projects, but for JavaScript projects it's usually NPM. And NPM uses node.
We are planning to switch to new technologies like React for my CMS project, which has been under development for 10 years.
Until now everything was simple and plain on the front end.
First include jquery.js, then if necessary include the components and third-party scripts, then code and dance with the DOM.
But now while trying to jump into a higher level of technology and different approach, things can easily get very complicated for me.
After spending more than 10 hours with the React documents and tutorials, I have a very good understanding of what it is and how it works.
But I realized that I am very unfamiliar with some popular concepts. I have never used Node.js, npm, Babel, or webpack, and there are many other "new" things I have seen everywhere. I am face to face with these tools because of React, and I am convinced that they are inevitable for modern front-end development.
Now the question
Our CMS runs on PHP and depends heavily on MooTools at the front end. Instead of a complete rewrite of a 10-year-old CMS, I just want to try new technologies partially for some cases. I decided to start with React.
For this case, I want to integrate ag-Grid with React as well.
What I have not understood is how to bring all these tools together.
I won't be able to use the simple "include a JS file" way of using React because of ag-Grid.
In the examples, the code is written with some JSX, which means that we write JSX and run a translated version in the browser to test whether it is OK.
Each time before testing do I need to translate these files?
And moreover, if the files are translated, does debugging become very complicated?
Can Babel do it at runtime? If yes, is it a good practice?
There are lots of files in the node_modules folder. Which of them should I include for production?
All sources on the net are very theoretical and assume prior knowledge. I need some guidance on best practices.
There are lots of questions and not a single step-by-step guide from beginning to production.
JSX is an extension over spec-compliant JavaScript. It is syntactic sugar for React.createElement(...) and is optional in React development.
React can be written in plain ES5:
React.createElement("div", { foo: "foo" });
Instead of JSX:
<div foo="foo" />
Or with helper functions like h that achieve the same goal, e.g. react-hyperscript.
The fact that there is a PHP backend application doesn't prevent you from developing a React frontend application with JSX. This may require configuring the React project to not use the built-in Express web server and to build the client-side application to a custom location, i.e. the existing app's public folder. (In case create-react-app is used, this may require ejecting the project.)
Each time before testing do I need to translate these files?
They should be transpiled to plain JavaScript (ES5 if the app targets older browsers). They can be translated on every change to the source files when the client-side project runs in watch mode (conventionally npm start).
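For instance, in a create-react-app project the conventional scripts in package.json look like this (react-scripts is the toolchain create-react-app installs):

    {
      "scripts": {
        "start": "react-scripts start",
        "build": "react-scripts build"
      }
    }

npm start runs the watch-mode development server that re-transpiles on every save; npm run build produces the production bundle.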
And moreover, if the files are translated, does debugging become very complicated?
This is what source maps are for.
Can Babel do it at runtime? If yes, is it a good practice?
It's possible to use Babel at runtime, but this isn't a good practice, even in a development environment.
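For completeness, runtime translation with @babel/standalone looks like the sketch below; the browser re-compiles the JSX on every page load, which is why it's only suited to quick experiments:

    <script src="https://unpkg.com/react@17/umd/react.development.js"></script>
    <script src="https://unpkg.com/react-dom@17/umd/react-dom.development.js"></script>
    <script src="https://unpkg.com/@babel/standalone/babel.min.js"></script>

    <div id="root"></div>

    <!-- type="text/babel" tells @babel/standalone to compile this block in the browser -->
    <script type="text/babel">
      ReactDOM.render(<h1>Hello</h1>, document.getElementById('root'));
    </script>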
There are lots of files in the node_modules folder. Which of them should I include for production?
The contents of node_modules don't matter for production; almost all of them are development dependencies that are needed to build the client-side app. Producing the deployable files is the task of a bundler, which is webpack in the create-react-app template. It builds the project and its dependencies into plain JS bundles in an output folder (build in the case of create-react-app).
I bought an HTML template recently, which contains many plugins placed inside a bower_components directory and a package.js file inside. I wanted to install another package I liked, but decided to use npm for this purpose.
When I typed:
npm install pnotify
the node_modules directory was created and contained about 900 directories with other packages.
What are those? Why did they get installed along with my package? I did some research and it turned out that those were needed, but do I really need to deliver my template in production with hundreds of unnecessary packages?
This is a very good question. There are a few things I want to point out.
The V8 engine, Node Modules (dependencies) and "requiring" them:
Node.js is built on the V8 engine, which is written in C++. Some Node.js dependencies are native addons that are fundamentally C++, though most are plain JavaScript.
Now when you require a dependency, you are really requiring code/functions from a C++ addon or a JS library, because that's how new libraries/dependencies are made.
Libraries have so many functions that you will not use
For example, take a look at the express-validator module, which contains many functions. When you require the module, do you use all the functions it provides? The answer is no. People most often require packages like this to use just a single piece of their functionality, although all of the functions end up getting downloaded, which takes up unnecessary space. A sketch of that pattern follows.
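Something like this (route and field names are hypothetical, assuming Express and express-validator are installed):

    const express = require('express');
    const { body, validationResult } = require('express-validator');

    const app = express();
    app.use(express.json());

    // Only two named exports of express-validator are used here,
    // yet npm downloaded the whole package plus its dependency tree.
    app.post('/user', body('email').isEmail(), (req, res) => {
      const errors = validationResult(req);
      if (!errors.isEmpty()) {
        return res.status(400).json({ errors: errors.array() });
      }
      res.json({ ok: true });
    });

    app.listen(3000);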
Think of the node dependencies that are made from other node dependencies as Interpreted Languages
For example, JavaScript engines are written in C/C++, whose compilers are in turn built on assembly. Think of it like a tree: you create new branches each time for more convenient usage and, most importantly, to save time. It makes things faster. Similarly, when people create new dependencies, they use/require ones that already exist, instead of rewriting a whole C++ program or JS script, because that makes everything easier.
A problem arises when other npm packages are required to create a new one
When the authors of a dependency require other dependencies from here and there just to use a few small benefits from them, they end up pulling them all in (which they don't really worry about, because they mostly don't care about size, or they'd rather do this than explicitly write a new dependency or a C++ addon), and this takes extra space. For example, you can see the dependencies that the express-validator module uses by accessing this link.
So, when you have big projects that use lots of dependencies, you end up using a lot of space for them.
Ways to solve this
Number 1
This requires some expertise with Node.js. To reduce the number of downloaded packages, a professional Node.js developer could go into the directories where modules are saved, open the JavaScript files, look at their source code, and delete the functions they will not use, without changing the structure of the package.
Number 2 (Most likely not worth your time)
You could also create your own dependencies, written in C++ or, preferably, JS, which would take up the least space possible, depending on the programmer, but would take the most time in order to reduce size. (Note: most dependencies are written in JS.)
Number 3 (Common)
Instead of using option number 2, you could use webpack, which bundles only the code your app actually uses; a sketch follows.
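A minimal webpack configuration might look like this (the entry and output paths are assumptions):

    // webpack.config.js
    const path = require('path');

    module.exports = {
      mode: 'production',          // enables minification and tree-shaking
      entry: './src/index.js',     // your code, which require()s its deps
      output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'      // the single file to ship, instead of node_modules
      }
    };

Only code reachable from the entry point ends up in dist/bundle.js; the hundreds of directories in node_modules stay on the development machine.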
Conclusion & Note
So, basically, there is no running away from downloading all the node packages, but you could use solution number 1 if you believe you can do it, although it risks breaking the whole intention of a dependency (so keep such changes personal and use them for specific purposes), or just make use of a bundler like webpack.
Also, ask this question to yourself: Do those packages really cause you a problem?
No, there is no point in adding about 900 package dependencies to your project just because you want to add some template. But it is up to you!
The heaviness of a template is not challenging the Node.js ecosystem nor its main package system, npm.
It is a fact that the JavaScript community tends to make the smallest possible modules, each responsible for one task, and just one.
It is not a bad thing, I guess. But it can result in a situation where you have a lot of dependencies in your project.
Nowadays hard drive space is cheap and nobody cares any more about making efficient/small apps.
As always, it's only a matter of choice.
What is the point of delivering hundreds of packages weighing hundreds of MB for a few-kB project?
There isn't one.
If you intend to provide it to other developers, just gitignore (or remove from the shared package) the node_modules or bower_components directories. Developers can simply install the dependencies again as required ;)
If it is something as simple as an HTML template or similar, Node would most likely be there just to make your life as a developer easier, providing live reload, compiling/transpiling TypeScript/Babel/SCSS/SASS/LESS/Coffee... (the list goes on ;P) etc.
And in that case the dependencies would most likely only be devDependencies and won't be required at all in a production environment ;)
Also, many packages declare separate production and dev dependencies, so you just need to install the production dependencies:
npm install --only=prod
If your project does need many packages in production, and you really, really want to avoid shipping all of that, just spend some time and include only the CSS/JS files your project needs (this can be a laborious task).
Update
Production vs default install
Most projects have different dev and production dependencies.
Dev dependencies may include things like SASS or TypeScript compilers, uglifiers (minification), and live-reload tooling.
The production install will not include those, reducing the size of the node_modules directory.
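As a hypothetical illustration of that split, a package.json might separate the two like this (names and versions made up):

    {
      "dependencies": {
        "express": "^4.18.0"
      },
      "devDependencies": {
        "sass": "^1.60.0",
        "uglify-js": "^3.17.0"
      }
    }

The npm install --only=prod shown above then installs only the dependencies block, leaving the build tooling out of the production node_modules.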
No node_modules
In some HTML-template kinds of projects, you may not need any node_modules in production, so you skip doing an npm install altogether.
No access to node_modules
Or in some cases, when the server that serves the app lives inside node_modules itself, access to node_modules may be blocked (because there is no need to access it from the frontend).
What are those? Why did they get installed along with my package?
Dependencies exist to facilitate code reuse through modularity.
... do I need to deliver my template in production with hundreds of unnecessary packages?
One shouldn't be so quick to dismiss this modularity. If you inline your requires and eliminate dead code, you'll lose the benefit of maintenance patches for the dependencies automatically being applied to your code. You should see this as a form of compilation, because... well... it is compilation.
Nonetheless, if you're licensed to redistribute all of your dependencies in this compiled form, you'll be happy to learn that those optimisations are performed by compilers that compile JavaScript to JavaScript. The Closure Compiler, as the first example I stumbled across, appears to perform advanced compilation, which means you get dead-code removal and function inlining... That seems promising!
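As a sketch (file names hypothetical), the npm distribution of the Closure Compiler can be run like this:

    npx google-closure-compiler \
      --compilation_level ADVANCED_OPTIMIZATIONS \
      --js app.js \
      --js_output_file app.min.js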
This does, however, have a side effect when you are required to justify the licensing of all npm modules: when you have hundreds of npm modules pulled in as dependencies, that effort becomes a more cumbersome task.
Very old question, but I happened to come across a very similar situation, just as RA pointed out.
I tried to work with a Node.js framework using VS Code, and the moment I initialized npm using npm init -y, it generated so many different dependencies. In my case, it was the VS Code extension ESLint that I had added prior to running npm init -y. What fixed it for me:
Uninstall ESLint
Restart VS Code to apply that uninstallation
Remove the previously generated package.json and node_modules folder
Run npm init -y again
This solved my problem of starting out with so many dependencies.
I am writing an application with a Node.js backend and a single-page web app front-end.
I am keeping the client and server logic in the same project for simplicity and speed of development.
I am considering how best to organise the artifacts.
The Node.js part is straightforward because it doesn't need to go through a battery of pre-processors (transpilation, minification, concatenation, etc.).
The front end needs to be transformed per the above, and I guess placed in a dist folder.
The current hierarchy of files is like so:
my-app
- src
  - client
  - server
Should I put the dist folder for the client artifacts under src/client?
Has anyone tried this and found problems with this approach?
I am using Heroku (a deployment system that uses git).
Committing the built artifacts for the client feels wrong, but if I want to deploy by pushing to Heroku, I think I need to commit them. Is this correct?
This question, as is, invites opinionated answers, so I'll start by saying this is by no means the only way to go, but in my opinion, it is the easiest to work with and makes the most sense.
The production client code, after pre-processing, should be located in my-app/dist or my-app/dst, which could mean either distribution or destination, depending on how you look at it. Either way, my recommendation is to commit this folder, as it saves you a lot of hassle debugging remotely.
For example, if your code works locally but not remotely, using something like the postinstall hook to generate your dist folder adds yet another suspect to check when trying to determine what the issue is with your program.
Another advantage of committing the dist folder is that it allows you to specify all the packages you use for your build process as devDependencies rather than dependencies. This is a huge plus and makes deployment a lot faster, with less memory usage in your Heroku process.
That being said, I still recommend (as you probably already plan to do) using an automated watch task to build your dist folder for ease of development, even if you decide you don't want to use that same build process remotely and opt for committing the dist directory instead. You could add that as a custom npm command, e.g. npm run build, and have that invoke your gulp task, as sketched below.
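A minimal sketch of that setup with gulp 4 (the task names, paths, and choice of gulp-uglify are assumptions):

    // gulpfile.js -- assumes gulp and gulp-uglify are installed as devDependencies
    const { src, dest, watch } = require('gulp');
    const uglify = require('gulp-uglify');

    // Build the client: minify everything under src/client into dist/
    function build() {
      return src('src/client/**/*.js')
        .pipe(uglify())
        .pipe(dest('dist'));
    }

    exports.build = build;
    // Rebuild automatically whenever a client source file changes
    exports.watch = () => watch('src/client/**/*.js', build);

With "build": "gulp build" and "watch": "gulp watch" in your package.json scripts, npm run build produces the dist folder once and npm run watch keeps it up to date while you develop.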
One last thing. For those of you using templating languages like pug or dust or ejs instead of a framework like react or angular, I recommend determining whether you can run any of your templates to build static HTML files that will be served in production.
If not, you should at least compile your templates (not to be confused with running them) by following the recommendations provided by your particular templating language. Typically, they'll suggest using their command line utility to generate the compiled templates, so that they don't have to be compiled every time they're invoked in production. This will make your node.js server respond faster to requests at the expense of using more memory to cache the compiled templates.
If you're planning to go this route, I would edit the Node.js app.js/index.js to serve static files and point the directory to dist/.
Also, you would need to tell Express to forward all non-API requests to the frontend, as in the sketch below.
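A minimal sketch of that Express setup (the dist/ location and the /api prefix are assumptions):

    // app.js
    const path = require('path');
    const express = require('express');

    const app = express();

    // Serve the built client files from dist/
    app.use(express.static(path.join(__dirname, 'dist')));

    // API routes go first...
    app.get('/api/health', (req, res) => res.json({ ok: true }));

    // ...then forward all non-API requests to the frontend's index.html,
    // so client-side routing keeps working on refresh
    app.get('*', (req, res) => {
      res.sendFile(path.join(__dirname, 'dist', 'index.html'));
    });

    app.listen(3000, () => console.log('Listening on port 3000'));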
I currently have ReactJS + NodeJS/ExpressJS + Webpack on EC2, Amazon Web Services (AWS), under one project, and would like to deploy it all at once, as one project.
What are some suggestions on how to go about doing so? I've done the research, and I've only seen tutorials on deploying each one individually, whether it be just ReactJS or just NodeJS. Any insights or leads would be greatly appreciated.
Will accept/upvote answer. Thank you in advance
You don't "deploy" ReactJS, it's just a static file or files like any other JS libraries in your applications. You also don't deploy Webpack. Webpack should run on a developer machine (or in CI/CD stack or build system).
As for the NodeJS part just use Elastic Beanstalk.
I do not commit builds to source control. I see that a lot, and it can make things easier, but you can also forget to rebuild, since you have to do it manually, and it adds a lot of bloat to your repo.
I believe builds should be run as part of the deployment process. Assuming you are using git, you can add a hooks/post-receive script in a remote repo there. When you push to that remote, the script will run. This is where I do my webpack build.
You may want to look into https://github.com/git-deploy/git-deploy for context, but I do this manually.
In my projects, on the deployment machine I run git init --bare /var/git/myproject.git, then add the script at /var/git/myproject.git/hooks/post-receive. The hook checks out the code into /var/www/myproject and runs the build, which fills in /var/www/myproject/build. Then it removes the old /var/www/myproject/public and renames build to public. And done. A rough sketch of such a hook follows.
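Something along these lines, under the layout described above (the branch name and build command are assumptions):

    #!/bin/sh
    # post-receive: check the pushed code out into the working directory
    GIT_WORK_TREE=/var/www/myproject git checkout -f master
    cd /var/www/myproject || exit 1
    # Install dependencies and run the webpack build (fills ./build)
    npm install
    npm run build
    # Swap the fresh build in as the served directory
    rm -rf public
    mv build public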
I'm coming from more of an operations background and would say that if your goal includes keeping that site up as much as possible, then use Packer to generate AMIs and CloudFormation to build an Application Load Balancer (the newer, cheaper brother of the ELB) in front of an Auto Scaling group, which keeps the EC2 instances up and running.
I'm currently working on a large scale project doing exactly what you describe. First off, there are so many different ways to do this, so what you really need is some general guidelines to get started, then we can dig a little deeper into details when some initial decisions are made, if you'd like. If you've already got the app deploying and running in two separate steps, but are just looking to combine those, I can definitely help. I'd just need to know how you're currently building/deploying. If you're just getting started on building your pipeline and need to set up the process from scratch, then read on:
First off, you'll want to set up some kind of build server that will install your npm dependencies and run your webpack build. Most likely you'll want a separate webpack config that's just for your build server; this will give you a build optimized for production or QA/staging environments. This config should split out vendor files that you won't update all the time, pull out separate CSS files with the extract-text plugin, and uglify the files. If you have an isomorphic React app, or are using ES6 features not supported in your version of Node, then you'll need a webpack build for your server code as well. This is really different from the hot-reloading build you'll want to have on your local machine while you're actually coding the app. I'll be happy to show some examples of our webpack config files for both local development and our CI build, if you'd like. You may also need a build.sh or makefile to do something with the compiled .js files that your webpack build creates, but that'll depend on your deployment, which I'll cover later.

You can run your production build locally as you're getting your config just right and fire up the app from those files to test it's all working. Additionally, since you'll likely want to be able to automate all of this, you probably want to run your tests and linting right before you build your app; we run ESLint and mocha/jsdom to run our enzyme/expect specs as part of our build.

Once that's all working nicely, you'll most likely want to set up a build server that can run your builds automatically. My team is using Jenkins for this, which is a little more work to set up, but it's free (aside from the EC2 box we run it on). There are also a ton of subscription-based build/continuous-integration servers, such as Travis and CodeShip. There are plenty of articles on the pros and cons of these different products and how to set them up. The bottom line is you'll want to have a build server that can pull down your code from source control, install npm deps, lint, test, and build your app. If anything fails, it should fail your build; if your build succeeds, you'll have some sort of archive that you'll later deploy to an EC2 instance. In our shop we use a build.sh file to tarball up our build archive (basically a folder with our Node server files as well as our minified client files, CSS files, and any fonts or images needed to run the app) and upload it to an S3 bucket that we deploy from. We like this fairly old-school method because the tarball will never change, so we have ultra-reliable rollbacks.
What you do with your build archive will depend on how you want to do deployments. We have a custom deployment system using Puppet, but there are plenty of products that do this, such as Elastic Beanstalk, that would be much easier to set up. You'll want some kind of process supervisor to actually run your Node app, so unless you have a dev-ops team that wants to build custom pipelines, using AWS's built-in features will probably be the easiest way to get started. As usual, there are so many ways to do this, but the basic principle is that you need something to download your build archive and run/supervise your Node process. You may also want to be able to create and configure EC2 boxes on the fly (Puppet, Chef, etc.), or even use containers (Docker), which allow you to move complete stacks around as single units. Using automation to create and configure servers is crucial if you need to scale your app, but it is complicated and may not be necessary for smaller projects. This is definitely an area where you can start simply and add complexity later on, as long as you have good long-term goals and make sure to take the necessary steps to prepare for future complexity.
All of this can get you pretty far into the weeds, so it's best to find the simplest thing that will serve your needs as you get started and then add complexity as real-life situations demand. I'll be happy to elaborate on any of these details if you provide a little more context about how big and well-funded a project you're working on. If it's a little side project to learn the tech, I'd have very different advice than if you're trying to build an app that'll have a lot of traffic and/or complex features.
This could get 100 different answers, and they could all end up being good ideas. First, you mention React + Node.js; keep in mind that these solve different problems. React is going to be the frontend, served out via static files. Node.js is focused more on the server side and would be the code that serves data. They can easily work together. You might use Express as the web server (Node.js) to serve the HTML/React pages.
Unfortunately, I saw that you mentioned webpack, so you are going to have to 'build' your application with something: webpack, gulp, grunt, etc. This is where source control and build servers are great, but if you're new to it, it might be more complex than you need.
If you have just basic EC2 images as web servers, and only one or two, then the biggest hurdle is just pushing up your code. Something like https://deploybot.com/ could work, as it can push your git repo down to multiple hosts via FTP, etc. If you wanted to get a bit fancier, you could look at something like Jenkins or some of the other options.
Docker is a great choice, and if you are going to be dealing with multiple developers, server environments, and deployments, it's worth the time. Otherwise, keep it simple and just get your code onto the EC2 instance ;)