What is "npm run build" in create-react-app?

I could not find any explanation of what "npm run build" actually does.
It is simple and easy to use, and I get a "build" folder that works great.
But in create-react-app, what exactly happens behind the scenes?
Is it a completely different build tool of its own?
If not, which existing build tools does it use?

Developers often break JavaScript and CSS out into separate files. Separate files let you focus on writing more modular chunks of code that each do one single thing. Files that do one thing decrease your cognitive load, since each one is a smaller, more manageable unit to maintain.
What exactly happens behind the scenes?
When it’s time to move your app to production, having multiple JavaScript or CSS files isn’t ideal. When a user visits your site, each of your files will require an additional HTTP request, making your site slower to load.
So to remedy this, you can create a “build” of your app, which merges all your CSS files into one file and does the same with your JavaScript. This way, you minimize the number and size of files the user gets. To create this “build”, you use a “build tool”, hence the use of npm run build.
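In a project generated by create-react-app, that script is just a thin alias in package.json (a trimmed sketch; the generated file contains a few more scripts and the exact arguments vary by CRA version):
{
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  }
}
So npm run build simply hands off to react-scripts build, which holds the actual build configuration.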
As you rightly mentioned, running the command (npm run build) creates a build directory for you. Now suppose you have a bunch of CSS and JS files in your app:
css/
  mpp.css
  design.css
  visuals.css
  ...
js/
  service.js
  validator.js
  container.js
  ...
After you run npm run build your build directory will be:
build/
  static/
    css/
      main.css
    js/
      main.js
Now your app has very few files. The app is still the same, but it has been compacted into a small package called build.
Final Verdict:
You might wonder why a build is even worth it, if all it does is save your users a few milliseconds of load time. Well, if you’re making a site just for yourself or a few other people, you don’t have to bother with this. Generating a build of your project is only necessary for high traffic sites (or sites that you hope will be high traffic soon).
If you’re just learning development, or only making sites with very low traffic, generating a build might not be worth your time.

It's briefly explained here: https://github.com/facebookincubator/create-react-app#npm-run-build-or-yarn-build.
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.
Behind the scenes, it uses Babel to transpile your code and webpack as the build tool to bundle up your application.
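For a rough idea of what that looks like, here is a heavily simplified sketch of such a webpack setup (the real configuration lives inside react-scripts and is much larger; the loaders and file names below are illustrative, not the actual CRA config):
// webpack.config.js (simplified sketch)
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  mode: 'production',                                 // production React build + minification
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: 'static/js/[name].[contenthash:8].js',  // hashed filenames for caching
  },
  module: {
    rules: [
      { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },    // Babel transpiles JS/JSX
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }, // CSS merged into one file
    ],
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: 'static/css/[name].[contenthash:8].css' }),
  ],
};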

Related

ng build -prod vs ng build --prod --build-optimizer=true

My Angular project is on Angular 4.3.3.
ng build -prod
Takes 77 seconds to make a build
ng build --prod --build-optimizer=true
Takes 190 seconds to make a build; there is no vendor chunk and the output is smaller (though not by much).
Chunk differences in the console output (screenshot omitted).
I read about Bundling & Tree-Shaking but still don't get the clear difference between the builds created by those commands.
Why are there two different ways, and what is the difference in performance or anything else?
--build-optimizer and --vendor-chunk
From Angular CLI Docs:
When using Build Optimizer the vendor chunk will be disabled by default. You can override this with --vendor-chunk=true.
Total bundle sizes with Build Optimizer are smaller if there is no separate vendor chunk, because having vendor code in the same chunk as app code makes it possible for Uglify to remove more unused code.
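For example, using the same flag style as in the question, you can keep the vendor chunk while still enabling the Build Optimizer:
ng build --prod --build-optimizer=true --vendor-chunk=true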
First of all, why is a vendor chunk useful?
vendor.js is most useful during development because you're updating your code far more frequently than you're downloading a new framework or updating npm packages.
Therefore compile time is faster during development with vendor chunk enabled.
As for why --vendor-chunk is even an option, this is off the top of my head, but:
If your app has a lot of users on a slow connection and you update it frequently, it may be advantageous to have a larger vendor chunk that remains unchanged for longer. When you update your app, the changed chunks will then be smaller. This will not give you a fully optimized (tree-shaken) app, but in very specific circumstances it could be useful. [This assumes you're using fingerprinting, where the filename is literally a checksum/hash of the contents of the file, so if the file doesn't change it can be cached.]
Very occasionally there can be subtle bugs in your code that only become apparent when minifying code in a certain way. This may be due to relying on a method/class name that gets 'optimized out'. So you may have to enable the vendor chunk in production if you have such a bug (while you fix it).
Enable vendor chunk deliberately to make your app slower to load - then tell your boss you're working late to optimize it - and disable it ;-)
Why not? People like to play!

Version control strategy for webpack-encore project

I'm learning to use webpack-encore and noticed it is installed only as a dev dependency. Does that mean I should compile my js and css files on development and push them to the repository, and then to production?
That seems to be what the docs are implying, but wouldn't that mean merge-conflict hell? Compiled files would be impossible to merge.
Also, wouldn't that be contrary to version-control philosophy? As far as I know, you don't commit binaries in compiled languages (e.g. C/C++); you push the code and expect the server to compile it. I know this isn't the same type of "compilation" in JavaScript, but what is the expected behavior of the production server in this case? To receive the files ready to serve, or to compile them at the time of release?
Thanks in advance
Does that mean I should compile my js and css files on development and push them to the repository, and then to production?
Not exactly - it depends on how you deploy.
When you deploy, you need to run ./node_modules/.bin/encore production to build your assets. Once you've done this, only your built assets (e.g. web/build) need to be transferred to production.
You could run this command locally (or on some "build" server) and then transfer all the files to production. Or, you could use a git pull on production and then run this command on production (the downside being that you would need Node.js installed on production).
You shouldn't / don't need to commit your built files to your repository. But... if it simplifies your deploy (i.e. you want to do a git pull and be done), there's no real problem with that.
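For reference on where web/build and its contents come from: a minimal Encore webpack.config.js looks roughly like this (a sketch; the entry name and paths are illustrative):
// webpack.config.js
var Encore = require('@symfony/webpack-encore');

Encore
    .setOutputPath('web/build/')           // where compiled assets are written
    .setPublicPath('/build')               // public URL prefix recorded in manifest.json
    .addEntry('app', './assets/js/app.js')
    .enableVersioning()                    // adds content hashes to filenames
    .cleanupOutputBeforeBuild();           // empties web/build at the start of each build

module.exports = Encore.getWebpackConfig();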
I just added a PR to answer these in the FAQ (http://symfony.com/doc/current/frontend/encore/faq.html) - here's the PR until it's deployed: https://github.com/symfony/symfony-docs/pull/8109
Cheers!
Solution 1:
Run yarn run encore production locally
Check out which files have been created / modified
Add them to VCS
Commit
Push / deploy
Solution 2:
Push / deploy
Run yarn run encore production remotely during deployment
To my eyes the 2nd solution is way better, because you don't need an extra human check before deployment; everything is automated.
But this has a strong drawback: building assets can be a slow process, and when I deploy, my production site is down for 5 to 20 seconds until the assets are built.
Here's the HTTP 500 error:
An exception has been thrown during the rendering of a template ("Asset manifest file "[...]/web/build/manifest.json" does not exist.").
It looks like the manifest.json file is deleted at the beginning of the process, and created from scratch later on.
Is this something that should be improved?
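One way around that window (a sketch, not an Encore feature; the releases/ layout, repository URL and file names here are assumptions) is to build into a fresh release directory and only switch a symlink once the build has finished, so the old manifest.json keeps being served until the new one exists:
// deploy.js (hypothetical)
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const release = path.resolve('releases', String(Date.now()));

// 1. Check out a fresh copy of the code and build it while the current
//    release keeps serving requests.
execSync(`git clone --depth 1 git@example.com:me/myproject.git ${release}`, { stdio: 'inherit' });
execSync('yarn install && yarn run encore production', { cwd: release, stdio: 'inherit' });

// 2. Flip the "current" symlink the web server points at. Renaming a symlink
//    is effectively atomic, so there is no moment without a manifest.json.
fs.symlinkSync(release, 'current_tmp');
fs.renameSync('current_tmp', 'current');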

How make deployment onto Amazon Web Services (AWS) with ReactJS/NodeJS together?

I currently have ReactJS + NodeJS/ExpressJS + Webpack on EC2, Amazon Web Services (AWS), under one project and would like to get it deployed together at once, in one project.
What are some suggestions on how to go about doing so? I've done the research, and I've only seen tutorials on deploying one of them specifically, whether it be just ReactJS or just NodeJS. Any insights or leads would be greatly appreciated.
Will accept/upvote answer. Thank you in advance
You don't "deploy" ReactJS; it's just a static file (or files) like any other JS library in your application. You also don't deploy webpack. Webpack should run on a developer machine (or in a CI/CD stack or build system).
As for the NodeJS part just use Elastic Beanstalk.
I do not commit builds to source control. I see that a lot and it can make things easier, but you can also forget to rebuild as you have to do it manually, and it adds a lot of bloat to your repo.
I believe builds should be run as part of the deployment process. Assuming you are using git, you can add a hooks/post-receive script to a remote repo there. When you push to that remote, the script will run. This is where I do my webpack build.
You may want to look into https://github.com/git-deploy/git-deploy for context, but I do this manually.
In my projects, on the deployment machine I do git init --bare /var/git/myproject.git, then add the script at /var/git/myproject.git/hooks/post-receive. The hook checks out the code into /var/www/myproject and runs the build, which fills /var/www/myproject/build. Then it removes the old /var/www/myproject/public and renames build to public. And done.
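A rough sketch of such a hook, written here as a Node script since the answer doesn't show the exact script (paths follow the answer's example and the commands are assumptions to adapt to your setup):
#!/usr/bin/env node
// /var/git/myproject.git/hooks/post-receive (hypothetical sketch)
const { execSync } = require('child_process');
const run = (cmd, opts = {}) => execSync(cmd, { stdio: 'inherit', ...opts });

const WORKTREE = '/var/www/myproject';

// Check the pushed code out into the work tree.
run(`git --work-tree=${WORKTREE} --git-dir=/var/git/myproject.git checkout -f`);

// Install dependencies and run the webpack build (fills WORKTREE/build).
run('npm install && npm run build', { cwd: WORKTREE });

// Swap the freshly built files into place.
run(`rm -rf ${WORKTREE}/public && mv ${WORKTREE}/build ${WORKTREE}/public`);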
I'm coming from more of an operations background and would say that if your goal includes keeping that site up as much as possible, then use Packer to generate AMIs and CloudFormation to build an Application Load Balancer (the newer, cheaper sibling of the ELB) in front of an Auto Scaling group, which keeps the EC2 instances up and running.
I'm currently working on a large scale project doing exactly what you describe. First off, there are so many different ways to do this, so what you really need is some general guidelines to get started, then we can dig a little deeper into details when some initial decisions are made, if you'd like. If you've already got the app deploying and running in two separate steps, but are just looking to combine those, I can definitely help. I'd just need to know how you're currently building/deploying. If you're just getting started on building your pipeline and need to set up the process from scratch, then read on:
First off you'll want to set up some kind of build server that will install your npm dependencies and run your webpack build. Most likely you'll want a separate webpack config that's just for your build server; this will give you a build optimized for production or QA/staging environments. This config should split out vendor files that you won't update all the time, pull out separate CSS files with the extract-text plugin, and uglify the files. If you have an isomorphic React app, or are using ES6 features not supported in your version of Node, then you'll need a webpack build for your server code as well. This is really different from the hot-reloading build you'll want to have on your local machine while you're actually coding the app. I'll be happy to show some examples, if you'd like, of our webpack config files for both local development and our CI build. You may also need a build.sh or Makefile to do something with the compiled .js files that your webpack build creates, but that'll depend on your deployment, which I'll cover later.
You can run your production build locally as you're getting your config just right and fire up the app from those files to test that it's all working. Additionally, since you'll likely want to be able to automate all of this, you probably want to run your tests and linting right before you build your app; we run ESLint and mocha/jsdom to run our enzyme/expect specs as part of our build.
Once that's all working nicely, you'll most likely want to set up a build server that can run your builds automatically. My team is using Jenkins for this, which is a little more work to set up, but it's free (aside from the EC2 box we run it on). There are also a ton of subscription-based build/continuous-integration servers, such as Travis and CodeShip. There are plenty of articles on the pros and cons of these different products and how to set them up. The bottom line is you'll want a build server that can pull down your code from source control, install npm deps, lint, test and build your app. If anything fails it should fail your build, and if your build succeeds you'll have some sort of archive that you'll later deploy to an EC2 instance.
In our shop we use a build.sh file to tarball up our build archive (basically a folder with our Node server files as well as our minified client files, CSS files and any fonts or images needed to run the app) and upload it to an S3 bucket that we deploy from. We like this fairly old-school method because the tarball will never change, so we have ultra-reliable rollbacks.
What you do with your build archive will depend on how you want to do deployments. We have a custom deployment system using Puppet, but there are plenty of products that do this, such as Elastic Beanstalk, that would be much easier to set up. You'll want some kind of process supervisor to actually run your Node app, so unless you have a dev-ops team that wants to build custom pipelines, using AWS's built-in features will probably be the easiest way to get started. As usual, there are so many ways to do this, but the basic principle is that you need something to download your build archive and run/supervise your Node process. You also may want to be able to create and configure EC2 boxes on the fly (Puppet, Chef, etc.), or even use containers (Docker), which allow you to move complete stacks around as single units. Using automation to create and configure servers is crucial if you need to scale your app, but it is complicated and may not be necessary for smaller projects. This is definitely an area where you can start simply and add complexity later on, as long as you have good long-term goals and make sure to take the necessary steps to prepare for future complexity.
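As a concrete example of such a process supervisor (pm2 is one common choice, though the answer doesn't name any particular tool), a minimal ecosystem file might look like this; the app name and entry script are illustrative:
// ecosystem.config.js (hypothetical pm2 configuration)
module.exports = {
  apps: [{
    name: 'my-node-app',
    script: './server.js',      // the built Node server entry point
    instances: 'max',           // one process per CPU core
    exec_mode: 'cluster',
    env: { NODE_ENV: 'production', PORT: 3000 }
  }]
};
// Started on the server with: pm2 start ecosystem.config.js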
All of this can get you pretty far into the weeds, so it's best to find the simplest thing that will serve your needs as you get started and then add complexity as real-life situations demand. I'll be happy to elaborate on any of these details if you provide a little more context about how big and well-funded a project you're working on. If it's a little side project to learn the tech, I'd have very different advice than if you're trying to build an app that'll have a lot of traffic and/or complex features.
This could get 100 different answers and they could all end up being good ideas. First, you mention React + Node.js; keep in mind that these solve different tasks. React is going to be the frontend, served out via static files. Node.js is focused more on the server side and would be the code that serves data. They can easily work together. You might use Express (Node.js) as the web server to serve the HTML/React pages.
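For instance, a minimal sketch of an Express server that serves the built React files alongside an API (the file names and example route are illustrative):
// server.js (sketch)
const express = require('express');
const path = require('path');

const app = express();

// Serve the static React bundle produced by the build step.
app.use(express.static(path.join(__dirname, 'build')));

// Example API route handled by Node.
app.get('/api/health', (req, res) => res.json({ ok: true }));

// Fall back to index.html so client-side routing keeps working.
app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'build', 'index.html')));

app.listen(process.env.PORT || 3000);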
Unfortunately, I saw that you mentioned webpack, so you are going to have to 'build' your application with something - either via webpack, gulp, grunt, etc. This is where source control and build servers are great - but if you're new to it, it might be more complex than you need.
If you just have basic EC2 instances as web servers, and only one or two of them, then the biggest hurdle is just pushing up your code. Something like https://deploybot.com/ could work, as it can push your git repo down to multiple hosts via FTP, etc. If you want to get a bit fancier, you could look at something like Jenkins or some of the other options.
Docker is a great choice and if you are going to be dealing with multiple developers, server environments, deployments - it's worth the time. Otherwise, keep it simple and just get your code on the EC2 instance ;).

webpack build to multiple files

I have a webpack application and want the build to output multiple files, not a single JavaScript file. For example, if I have this folder structure:
components
  1.js
  2.js
actions
  1.js
  2.js
then my webpack build has to compile the files into the same folder structure. How can I achieve that?
I tried the Babel CLI:
babel ./src --out-dir ./lib --source-maps --presets es2015,react --plugins babel-plugin-add-module-exports,babel-plugin-transform-decorators-legacy,babel-plugin-transform-class-properties --watch
It outputs the files as I wanted, but I get an error:
Cannot resolve module
because Babel does not know anything about webpack's resolve configuration.
Any suggestions?
Getting webpack to output multiple files is possible, but it does have limitations. First it's important to understand how this works. Webpack actually provides a runtime script that can load code "chunks" as they are needed; this is important for large applications so the user doesn't have to download the JavaScript for the entire app just to see the homepage. But webpack needs to keep track of these chunks and what they're named at build time to be able to load them correctly at run time. For that reason it has its own file-naming conventions to enable this functionality. See more here: https://webpack.js.org/guides/code-splitting/. So you can require.ensure all of your deps, but they won't be named and foldered the way you describe, as webpack's runtime wouldn't be able to find them in that case.
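To illustrate the chunk mechanism described above, here is a small sketch using a dynamic import (webpack 2+; require.ensure works similarly, and the module path and render call are just examples):
// Webpack emits ./components/Settings as its own chunk, and the webpack
// runtime loads that chunk over HTTP the first time showSettings() is called.
function showSettings() {
  import(/* webpackChunkName: "settings" */ './components/Settings')
    .then(function (module) {
      var Settings = module.default;
      Settings.render(document.getElementById('app'));   // hypothetical component API
    });
}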
The other thing that's important to consider is that webpack is a bundler, so it's meant to bundle files. Essentially you're saying you don't want to bundle your files. If that's the case, you should probably look into using require.js. Many people have moved from require.js to bundlers such as webpack and Browserify, as typically it's not efficient to download every little module separately. But every app is different, so you may in fact have a good reason not to bundle.
If you do in fact want to do some bundling but want to optimize how it's done, then I can help with that as well, and webpack is certainly a great tool for that. But I'll need to understand your use case a little more to give the best advice.

How do front end devs bundle and minify files?

What's the best practice for minifying and bundling js/css in a pure front end app, and how do the tools work?
I know how this can be done with server side apps like .NET/Java/LAMP/etc. But what about pure front end projects, SPA projects or backendless projects that are built with say, ember or angular these days? Say your entire project consists of HTML/css/js, which interfaces with a RESTful service elsewhere.
What kind of process or tool do you use to minify and bundle the resources for that?
I've seen grunt plugins that exist for this, but I find the documentation to be pretty magical and it's still unclear to me how they work.
Specifically, does the tool:
1) Replace src="/js/a.js", src="/js/b.js" with src="/js/bundle-a+b.min.js" (and likewise with CSS) in the source HTML files?
2) have different modes for dev and release, or is the tool only run when the project is released?
Or are the resource requests entirely managed by a JS tool, so that JS/CSS files have to be requested via a library function? Wouldn't the lag be noticeable in that case?
Thanks.
Through the use of build tools, front-end devs can have their JavaScript, CSS, and even images and HTML files automatically minified as they develop. The most common is Grunt, with Gulp close behind.
You configure Grunt tasks, like grunt-contrib-uglify and grunt-contrib-copy, and put those tasks under a grunt-contrib-watch task. Have the Grunt watch task watch the files you modify, and every time a change is detected the .min files are regenerated automatically.
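A minimal sketch of such a Gruntfile (the task options and file paths here are illustrative):
// Gruntfile.js (sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      build: {
        files: { 'dist/js/bundle.min.js': ['js/a.js', 'js/b.js'] }   // concatenate + minify
      }
    },
    watch: {
      scripts: {
        files: ['js/*.js'],
        tasks: ['uglify']   // regenerate the .min file whenever a source file changes
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['uglify', 'watch']);
};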
These build tools have no impact on your application; they are run before the files are served. You were correct to assume there is an easy way to do this. I suggest you look at the Grunt getting-started guide, a sample Gruntfile, or a project that uses Grunt - here's mine; it does minification like you requested. Clone my repo, run sudo npm install, then sudo grunt. I don't have watch set up in my project, but Grunt is very well documented.
