.env vs config.json - javascript

I just started learning JavaScript/Node.js (a gentle introduction to back-end web dev), and thus I am completely green in the subject.
A few days ago I was reading a tutorial from which I learned to keep my confidential data (like passwords) in a config.json file.
Today I discovered (by chance) the .env file, and the more I learn about it, the more it seems people actually use it to store passwords.
So when should I use .env and when should I use config.json?

The Question
When should .env be used over config.json and for what?
The answer
This is a rather difficult question to answer. On the one hand, you should really only use either of these tools while in development. This means that in a production or prod-like environment, you would add these variables directly to the environment:
NODE_ENV=development node ./sample.js --mongodb:host "dharma.mongohq.com" --mongodb:port 10065
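As a sketch, here is how ./sample.js might pick those values up, assuming the nconf package (whose --mongodb:host style of argv keys matches the command above):
// sample.js - a sketch assuming nconf; not named in the original answer
const nconf = require('nconf');
// Precedence: command-line arguments, then environment variables,
// then (in development) a config file.
nconf.argv().env().file({ file: 'config.json' });
const host = nconf.get('mongodb:host');
const port = nconf.get('mongodb:port');
console.log(`Connecting to ${host}:${port} in ${nconf.get('NODE_ENV')} mode`);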
There is no real clear winner per se, as both are helpful in different ways. You can have nested data with config.json; on the other hand, you get a flatter, cleaner data structure with .env.
Something else to note is that you never want to commit these files to source control (Git, SVN, etc.).
That said, these tools make it very easy for beginners to get started quickly without having to worry about how to set environment variables, or about the differences between a Windows environment and a Linux one.
All in all, I'd say it's really up to the developer.

.env files are generally used to store information related to the particular deployment environment, while config.json files might be used to store data particular to the application as a whole.
Either approach works, and whether or not your config files are stored in your repository is more a function of whether the data needs to be confidential.

This largely comes down to personal preference and the conventions of the frameworks you're using. They're just different formats for storing the same kind of information.
Some example config file formats:
.env
*.yml (YAML files)
*.ini (normally Windows-only)
*.json
At the end of the day, they all accomplish the same purpose: providing your application with environment-specific information (credentials, file paths, connection strings, etc.). Choose the format which best fits your choice of framework.

I think it's really up to you, the important thing to remember is why you're using this approach. The idea is to save your sensitive data in a file that doesn't get pushed to source control or any other place other than your local environment - this keeps the data safer. Then when you're ready to deploy to a remote server somewhere, you need to manually insert those values into that environment.
I generally use .env because the syntax for getting data from a .env file is supported in many remote environments, like Heroku. When I deploy an app to Heroku, I can go into the settings of the app and enter the environment variables using the Heroku dashboard UI; I don't have to figure out how to manually create a JSON file on the server (though there may be other workarounds). After the variables are in place, I just use process.env.variableName to access the data.
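For example, a minimal sketch of that dotenv flow (the DB_PASSWORD name is hypothetical):
// Load variables from a local .env file into process.env.
// In production (e.g. on Heroku), there is no .env file; the same
// variables are set through the platform's dashboard or CLI instead.
require('dotenv').config();
const dbPassword = process.env.DB_PASSWORD;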

Statistically comparing the two NPM packages (along with other similar solutions) might be the best way to decide for yourself.
At the time of this writing, dotenv is a much smaller package with greater support (actual contributor counts aside, this is deduced from the number of open issues and its immense popularity). It's also newer by 2.5 years and, if fanfare is important to you, has twice as many stars.

If you are targeting deployment of your application in Docker, then .env is 100% the way to go. The ability to use nested data in config.json is great, but you'll be looking at some PITA when you need to migrate that data to a .env to make deployment with Docker work. docker build and docker-compose both are designed to use .env natively, so if you start with that then it will facilitate a smooth path to "Dockerizing" your application.
I am currently porting an application to run in Docker which was written with no such forethought, and it is quite painful... lots of refactoring, and only for the nested stuff. The basic key:value properties are easy to migrate to a .env:
~$ cat config.json
{
  "PROTOCOL": "https",
  "HOST": "example.com",
  "PORT": 443
}
...
~$ cat .env
PROTOCOL="https"
HOST="example.com"
PORT=443
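The nested values are the painful part, since .env is flat. A minimal sketch of the before/after in code (the MONGODB_* names are hypothetical flattened keys):
// Before: nested lookups straight out of config.json
// const config = require('./config.json');
// const host = config.mongodb.host;

// After: flattened keys in .env (e.g. MONGODB_HOST=dharma.mongohq.com)
require('dotenv').config();
const host = process.env.MONGODB_HOST;
const port = Number(process.env.MONGODB_PORT); // .env values are always strings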

I prefer JSON because TypeScript can infer types from it; this is not possible with a .env file.

Related

Why do people install Node.js on their PC, if it's a back-end runtime environment?

What are the reasons people install Node.js on a PC, and can a Node.js website be developed without installing Node.js on a PC? If yes, what are the disadvantages?
Thanks
There can be many reasons, but some common ones:
When developing for any language, you'll need to be able to run and test your code, and running it locally makes it easier to do so.
Additionally, although you can use remote debugging, debugging is faster and easier to set up locally.
Many of the tools used by web developers are also developed in JavaScript and to run those locally, an engine capable of doing so is required. It makes sense that these tools are developed in JavaScript, not just because their primary user group will be able to understand and extend their code, but also because many of these tools need to be able to perform tasks and integrate with components that require something like a JavaScript engine to begin with.
In many situations, running a local environment on your development system will be cheaper and easier to maintain than having a separate test server; and you don't want to run your untested code on a production server.
Not directly related, but npm and Node.js have many uses beyond serving as a back-end to JavaScript-driven websites. Many people that have Node.js installed have nothing to do with web development, but have it for one of the many other reasons.
To answer your second question: "can a Node.js website be developed without installing Node.js on a PC?" Yes, but I can see very little reason to want to do so, unless you must. The advantage might be that you avoid having a complicated piece of software with a large footprint, and possibly some security concerns, on your development machine. But the disadvantages for the average developer likely far outweigh that; more so since you can just sandbox the entire development environment if security is your main concern.
If you are writing everything from scratch, you don't need a package manager, but there is so much great stuff out there that you can use rather than writing it yourself! To use it, you need a package manager that can download specific packages (optionally at specific versions), and those packages may use packages themselves. Your source code repository doesn't need to store a copy of every package you use, nor of every package used by the packages you use: as long as your source code specifies which packages it uses in a way your package manager understands, all you need to commit is a manifest of the packages your code uses directly (optionally pinned to specific versions).
"NPM" is one such package manager. "bower" is another but that uses NPM under the hood. Maven is a package manager I've seen in Java projects, and NuGet for MS projects, but for JavaScript projects it's usually NPM. And NPM uses node.

Is it best practice to hide API keys in .env file in React

I am new to React and was learning how to hide API keys obtained from the GitHub API and other APIs. I found out that it is possible to hide keys in .env files, access them via variables prefixed with REACT_APP, and make sure the .env file is added to the .gitignore file so it is not committed. The question is: is the way of hiding keys described above considered best practice? Secondly, is the .env file added to a server even if we add the .env file to .gitignore?
If you are going to be passing the contents of the environment data file to React, which is client-side code, then it isn't likely to be very useful for keeping things secret.
Mostly this will be useful for keeping your various environments separate (e.g. so you don't accidentally use the URL for your test API server in the production deployment of your app).
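For example, with create-react-app's REACT_APP_ convention (the variable name here is hypothetical):
// REACT_APP_* variables are inlined into the bundle at build time,
// so anyone can read them with "view source" - use them to select
// an environment, not to hide secrets.
const apiUrl = process.env.REACT_APP_API_URL;
fetch(`${apiUrl}/user/repos`)
  .then((res) => res.json())
  .then((repos) => console.log(repos));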
If you were using this for server-side code, then it would be useful to keep your secrets secret and not publishing them in a git repository (that you might want to allow other people access to).
Whether or not the environment data file would be deployed to your server would depend on your deployment process. If your deployment process consisted of nothing more than checking out your git repository to the live server, then no, it wouldn't be deployed.

Organising a client & server SPA/Node.js application

I am writing an application with a Node.js backend and a single-page web app front-end.
I am keeping the client and server logic in the same project for simplicity and speed of development.
I am considering how best to organise the artifacts.
The Node.js part is straightforward because it doesn't need to go through a battery of pre-processors (transpilation, minification, concatenation etc).
The front-end needs to be transformed per the above, and I guess placed in a dist folder.
The current hierarchy of files is like so:
my-app
- src
- client
- server
Should I put the dist folder for the client artifacts under src/client?
Has anyone tried this and found problems with this approach?
I am using Heroku (a deployment system that uses git).
Committing the built artifacts for the client feels wrong, but if I want to deploy it by pushing to Heroku I think I need to commit them. Is this correct?
This question, as is, invites opinionated answers, so I'll start by saying this is by no means the only way to go, but in my opinion, it is the easiest to work with and makes the most sense.
The production client code, after pre-processing, should be located in my-app/dist or my-app/dst, which could either mean distribution or destination, depending on how you look at it. Either way, my recommendation is to commit this folder, as it saves you a lot of hassle debugging remotely.
For example, if your code works locally but not remotely, using something like the postinstall hook to generate your dist folder adds yet another suspect to check when trying to determine what the issue is with your program.
Another advantage of committing the dist folder is that it allows you to specify all the packages you use for your build process as devDependencies rather than dependencies. This is a huge plus: it makes deployment a lot faster and reduces memory usage on your Heroku process.
That being said, I still recommend (as you already probably plan to do) using an automated watch task to build your dist folder for ease of development, even if you decide you don't want to use that same build process remotely and opt for committing the dist directory instead. You could add that as a custom npm command, e.g. npm run build and have that invoke your gulp task.
One last thing. For those of you using templating languages like Pug or Dust or EJS, instead of a framework like React or Angular, I recommend determining whether you can run any of your templates ahead of time to build static HTML files that will be served in production.
If not, you should at least compile your templates (not to be confused with running them) by following the recommendations provided by your particular templating language. Typically, they'll suggest using their command line utility to generate the compiled templates, so that they don't have to be compiled every time they're invoked in production. This will make your node.js server respond faster to requests at the expense of using more memory to cache the compiled templates.
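As a minimal sketch of that compile-once idea, assuming Pug (the file name and data are hypothetical):
const pug = require('pug');
// Compile the template once at startup and cache the resulting
// render function, rather than recompiling it on every request.
const renderProfile = pug.compileFile('views/profile.pug');
// Per request: just run the cached function with fresh data.
const html = renderProfile({ name: 'Ada', repoCount: 42 });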
If you're planning to go this route, I would edit the Node.js app.js/index.js to serve static files and point the directory to dist/.
Also, you would need to tell Express to forward all non-API requests to the frontend.
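A minimal sketch of that Express setup (the paths and routes are hypothetical):
const path = require('path');
const express = require('express');
const app = express();
// Serve the built client artifacts out of dist/.
app.use(express.static(path.join(__dirname, 'dist')));
// API routes are registered first...
app.get('/api/health', (req, res) => res.json({ ok: true }));
// ...then every non-API request falls through to the SPA's
// index.html so client-side routing can take over.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});
app.listen(3000, () => console.log('Listening on port 3000'));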

How to make a deployment onto Amazon Web Services (AWS) with ReactJS/NodeJS together?

I currently have ReactJS + NodeJS/ExpressJS + Webpack on EC2, Amazon Web Services (AWS), under one project and would like to get it deployed together at once, in one project.
What are some suggestions on how to go about doing so? I've done the research, and I've only seen tutorials on deploying one of them specifically, whether it be just ReactJS or just NodeJS. Any insights or leads would be greatly appreciated.
Will accept/upvote answer. Thank you in advance
You don't "deploy" ReactJS, it's just a static file or files like any other JS libraries in your applications. You also don't deploy Webpack. Webpack should run on a developer machine (or in CI/CD stack or build system).
As for the NodeJS part just use Elastic Beanstalk.
I do not commit builds to source control. I see that a lot and it can make things easier, but you can also forget to rebuild as you have to do it manually, and it adds a lot of bloat to your repo.
I believe builds should be run as part of the deployment process. Assuming you are using git, you can add a post-receive script hook in a remote repo there. When you push to that remote, the script will run. This is where I do my webpack build.
You may want to look into https://github.com/git-deploy/git-deploy for context, but I do this manually.
In my projects, on the deployment machine I do git init --bare /var/git/myproject.git, then add the script at /var/git/myproject.git/hooks/post-receive. The hook checks out the code into /var/www/myproject and runs the build, which fills in /var/www/myproject/build. Then it removes the old /var/www/myproject/public and renames build to public. And done.
I'm coming from more of an operations background and would say that if your goal includes keeping that site up as much as possible, then use Packer to generate AMIs and CloudFormation to build an Application Load Balancer (the newer, cheaper sibling of the ELB) in front of an Auto Scaling Group, which keeps the EC2 instances up and running.
I'm currently working on a large scale project doing exactly what you describe. First off, there are so many different ways to do this, so what you really need is some general guidelines to get started, then we can dig a little deeper into details when some initial decisions are made, if you'd like. If you've already got the app deploying and running in two separate steps, but are just looking to combine those, I can definitely help. I'd just need to know how you're currently building/deploying. If you're just getting started on building your pipeline and need to set up the process from scratch, then read on:
First off, you'll want to set up some kind of build server that will install your npm dependencies and run your webpack build. Most likely you'll want a separate webpack config that's just for your build server; this'll give you a build optimized for production or QA/staging environments. This config should split out vendor files that you won't update all the time, pull out separate CSS files with the extract-text plugin, and uglify the files. If you have an isomorphic React app, or are using ES6 features not supported in your version of Node, then you'll need a webpack build for your server code as well. This is really different from the hot-reloading build you'll want to have on your local machine while you're actually coding the app. I'll be happy to show some examples, if you'd like, of our webpack config files for both local development and our CI build.

You may also need a build.sh or makefile to do something with the compiled .js files that your webpack build creates, but that'll depend on your deployment, which I'll cover later. You can run your production build locally as you're getting your config just right, and fire up the app from those files to test that it's all working.

Additionally, since you'll likely want to be able to automate all of this, you probably want to run your tests and linting right before you build your app; we run ESLint and Mocha/jsdom to run our Enzyme/expect specs as part of our build.

Once that's all working nicely, you'll most likely want to set up a build server that can run your builds automatically. My team is using Jenkins for this, which is a little more work to set up, but it's free (aside from the EC2 box we run it on). There are also a ton of subscription-based build/continuous-integration servers, such as Travis and CodeShip. There are plenty of articles on the pros and cons of these different products and how to set them up.

The bottom line is you'll want a build server that can pull down your code from source control, install npm deps, lint, test, and build your app. If anything fails, it should fail your build, and if your build succeeds you'll have some sort of archive that you'll later deploy to an EC2 instance. In our shop we use a build.sh file to tarball up our build archive (basically a folder with our Node server files as well as our minified client files, CSS files, and any fonts or images needed to run the app) and upload it to an S3 bucket that we deploy from. We like this fairly old-school method because the tarball will never change, so we have ultra-reliable rollbacks.
What you do with your build archive will depend on how you want to do deployments. We have a custom deployment system using Puppet, but there are plenty of products that do this, such as Elastic Beanstalk, that would be much easier to set up. You'll want some kind of process supervisor to actually run your Node app, so unless you have a dev-ops team that wants to build custom pipelines, using AWS's built-in features will probably be the easiest way to get started.

As usual, there are many ways to do this, but the basic principle is that you need something to download your build archive and run/supervise your Node process. You also may want to be able to create and configure EC2 boxes on the fly (Puppet, Chef, etc.), or even use containers (Docker), which allow you to move complete stacks around as single units. Using automation to create and configure servers is crucial if you need to scale your app, but it is complicated and may not be necessary for smaller projects. This is definitely an area where you can start simply and add complexity later on, as long as you have good long-term goals and make sure to take the necessary steps to prepare for future complexity.
All of this can get you pretty far into the weeds, so it's best to find the simplest thing that will serve your needs as you get started, and then add complexity as real-life situations demand. I'll be happy to elaborate on any of these details if you provide a little more context about how big and well-funded a project you're working on. If it's a little side project to learn the tech, I'd have very different advice than if you're trying to build an app that'll have a lot of traffic and/or complex features.
This could get 100 different answers and they could all end up being good ideas. First, you mention react + nodejs - keep in mind that these solve different tasks. React is going to be frontend and served out via static files. Nodejs is focused more around the server-side and would be the code that serves data. They can easily work together. You might use Express for the webserver (nodejs) to serve the HTML/React pages.
Unfortunately, I saw that you mentioned webpack, so you are going to have to 'build' your application with something - either via webpack, gulp, grunt, etc. This is where source control and build servers are great - but if you're new to it, it might be more complex than you need.
If you have just basic EC2 images as webservers and only 1-2, then the biggest hurdle is just pushing up your code. Something like https://deploybot.com/ could work as it can push your git repo down to multiple hosts via ftp, etc. If you wanted to get a bit fancier, you could look at something like Jenkins or some of the other items.
Docker is a great choice and if you are going to be dealing with multiple developers, server environments, deployments - it's worth the time. Otherwise, keep it simple and just get your code on the EC2 instance ;).

Using package.json for client side packages, that could be loaded dynamically in browser

I am thinking of extending the format of package.json to include dynamic package (plugin) loading on the client side, and I would like to understand whether this idea contradicts the npm vision or not. In other words, I want to load a bunch of modules that share common metadata in the browser at runtime. Solutions like system.js and jspm are good for module management, but what I seek is dynamic package management on the client side.
Speaking in detail, I would like to add a property like "myapp-clientRuntimeDependencies" that would allow me to specify dependencies that would be loaded by the browser instead of being prepackaged in the standard way (npm install followed by a browserify-like solution).
package.json example:
{
  "name": "myapp-package",
  "version": "",
  "myapp-clientRuntimeDependencies": {
    "myapp-plugin": "file:myapp-plugin",
    "myapp-anotherplugin": "file:myapp-anotherplugin"
  },
  "peerDependencies": {
    "myapp-core": "1.0.0"
  }
}
The question:
Does this idea contradict the vision of "npm" and "package.json"? If yes, then why?
Any feedback from npm community is very much appreciated.
References:
Extending package.json: http://blog.npmjs.org/post/101775448305/npm-and-front-end-packaging
EDIT:
The question was not formulated well; a better way to ask it is:
What is the most standard way (e.g. handled by some existing tools, likely to be supported by npm) to specify run-time dependencies between 2 dynamically loaded front-end packages in package.json?
What is the most standard way to attach metadata in JSON format to front-end packages, that are loaded dynamically?
I wouldn't say that it conflicts with the vision of package.json; however, it does seem to conflict a bit with how it's typically used. As you seem to be aware, package.json is normally used pre-runtime. In order to load something from package.json into your runtime, you'd have to load the package.json into your frontend code. If you're storing configurations that you don't want visible to the frontend via a simple view-source, this could definitely present a problem.
One thing that didn't quite click with me on this: you said that system.js and jspm are good for module management but that you were looking for package management. In the end, packages and modules tend to be synonymous, as a package becomes a module.
I may be misunderstanding what it is that you're looking for, but from what I can gather, I recommend you take a look at code-splitting, which is essentially creating separate JS files that will be loaded dynamically based on what is needed, instead of bundling all your JavaScript into a single file. There are docs on how to do this with webpack (I'm sure browserify does it as well).
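A minimal sketch of code-splitting with webpack's dynamic import() (the plugin module is hypothetical):
// webpack emits ./plugins/myapp-plugin.js as a separate chunk and
// only fetches it over the network the first time this runs.
async function loadPlugin() {
  const plugin = await import('./plugins/myapp-plugin.js');
  plugin.init();
}
// Load the plugin on demand, e.g. after a user action.
document.getElementById('enable-plugin').addEventListener('click', loadPlugin);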
If I understand correctly, your question is about using the package.json file to include your own app configuration. In the example you describe, you would use such configuration to let your app know which dependencies can be loaded at runtime.
There is really nothing preventing you from inserting your own fields in the package.json file, except for the risk of conflict with names that are used by npm for other meanings. But if you use a very specific name (like in your example), you should be safe enough. Actually, many lint and build tools have done so already. It is even explicitly written in the post you refer to:
If your tool needs metadata to make it work, put it in package.json. It would be rude to do this without asking, but we’re inviting you to do it, so go ahead. The registry is a schemaless store, so every field you add is on an equal footing to all the others, and we won’t strip out or complain about new fields (as long as they don’t conflict with existing ones).
But if you want to be even safer, you could resort to using a different file (as Bower did, for example).
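As a minimal sketch, a build-time script could read the custom field from the example above and emit a manifest for the client to fetch (the output path is hypothetical):
const fs = require('fs');
// Read the custom field out of package.json at build time.
const pkg = require('./package.json');
const runtimeDeps = pkg['myapp-clientRuntimeDependencies'] || {};
// Emit a manifest the browser can fetch to know which plugin
// bundles to load dynamically at runtime.
fs.writeFileSync('runtime-deps.json', JSON.stringify(Object.keys(runtimeDeps), null, 2));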
