I've successfully built my Electron application, and it appears to be working fine.
I use the Node module fs to access files in my application, which is standard fare for Electron. This works exactly as expected in the development environment, and even when I build my app with the asar in C:\Users\myApp\ I can access the files from the built Electron application.
However, once I've created an installer and placed the application in C:\Program Files (x86)\myApp\,
fs.readFileSync
which previously worked fine in both the development version and the built version when it was located in C:\Users\myApp\, now requires Run as Administrator privileges to read files, otherwise it throws an error.
Any explanation?
Electron Version: 1.8.4
Platform: Windows 7
I believe you may want to deploy some of your application data into %APPDATA%, i.e. C:\Users\yourusername\AppData\Roaming, or, in the case of Electron, into the per-app userData folder. app.getPath('appData') returns the per-user application data directory (C:\Users\yourusername\AppData\Roaming on Windows), and app.getPath('userData') returns the folder for storing your app's data, which by default is the appData folder plus your app's name. See https://github.com/electron/electron/blob/master/docs/api/app.md#appgetpathname
In that directory, you can change the files without elevated privileges.
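For example, a minimal sketch (the file name settings.json is just a placeholder) showing reads and writes under userData, which work without elevation:

// In the Electron main process.
const { app } = require('electron');
const fs = require('fs');
const path = require('path');

app.on('ready', () => {
  // On Windows this resolves to something like C:\Users\<you>\AppData\Roaming\<your app name>
  const settingsPath = path.join(app.getPath('userData'), 'settings.json');

  // Both calls succeed for a normal user, unlike writes under C:\Program Files (x86)
  fs.writeFileSync(settingsPath, JSON.stringify({ theme: 'dark' }));
  const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf8'));
  console.log(settings);
});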
If you look at the properties of your Electron application folder under Program Files and go to the Security tab, you will notice that the permissions for Users are Read & execute, List folder contents, and Read; Administrators, however, have Full control (modify, read & execute, list folder contents, read, and write).
However, if you really do need to create/edit/delete files in Program Files or ProgramData, that will require elevated privileges; to get around that you may want to install the npm package electron-sudo (https://www.npmjs.com/package/electron-sudo) or sudo-prompt (https://www.npmjs.com/package/sudo-prompt).
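If you go the sudo-prompt route, the usage is roughly as follows (a sketch only; the copy command and file names are placeholders for whatever elevated operation you actually need):

const sudo = require('sudo-prompt');

// Prompts the user for elevation (UAC on Windows) and runs the command with admin rights.
sudo.exec('copy settings.json "C:\\Program Files (x86)\\myApp\\settings.json"',
  { name: 'myApp' },
  (error, stdout, stderr) => {
    if (error) throw error;
    console.log('file copied with elevated privileges');
  });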
My team has a project and 4-5 microservices on Node.js. We have an important JavaScript file which we use in the application, but now we need to start using this file in the services as well.
I want to create a private npm package which will contain this file.
I'll add this new package to the package.json of the application and the services using a link to the repo.
I have doubts about local development. Let's imagine that a developer needs to make a change to this file in its separate repository and a corresponding change in the application. I see this workflow:
Commit the changes to the file and push them to the branch "my-new-feature-001".
Change the package.json of my application to use this branch of the common file (see the package.json sketch after this list).
Develop the feature in the application.
Merge "my-new-feature-001" into main branch of file
input "git checkout package.json" in application
Commit and merge the changes in the application.
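For reference, step 2 in the list above usually means pointing the dependency at a git URL with a branch fragment; the package name and repository URL below are placeholders, not real ones:

{
  "dependencies": {
    "common-utils": "git+ssh://git@github.com/your-org/common-utils.git#my-new-feature-001"
  }
}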
This is quite cumbersome. Is there a simpler solution? I mean using the remote repository for the deploy servers and a local copy of the common file for local development?
I want to set up a second node.js server to run an express.js application which is an exact and independent copy of my current html (client-side) and js (server-side) files.
The reason is that I want to deploy my current code in a production environment that can be used by the team that will not be shut down, while I work on my current code in a development environment.
My worry is that I have added my current node.js installation to my PATH, and I am not sure whether, after installing a second node.js server, the command to start the second server will interfere with the node.js server already referenced in my PATH variable.
Here are a couple of things to know before I ask my questions:
I am working on a machine with a Linux distribution.
I am using Express.js routing
I am using the instructions to install another instance of node.js and express.js at:
www.vultr.com/docs/installing-node-js-and-express
My questions are as follows:
Is this as simple as installing node and express as per the instructions in the link above into a new directory and running from the new path without storing it in my path variable?
Is there a better and more effective way to create a production and a development environment so that my team can use the app I have built without interfering with my current instance of node while ensuring 100% up time for the app deployed in production?
Once the 2nd server is instantiated, how do I make the call from my terminal so that it does not turn on/off the original node server I have running from my path variable?
Considering that the link above is a how to on how to install node and express from scratch and in Ubuntu (I am on CentOS - Gnome), is there a better "how to" that I should use to complete the second node and express install?
When creating the new port for the second node/express server to listen on, can I just pick any 4-digit number, or is there a particular range that would be better to use? I am already using port 3000 for my first instance in my development environment.
Thank you for your guidance.
Developing and serving from the same PC is not ideal; however, if you must, this is what you can do.
First, there is no need to install a second copy of node on to your machine - you can run multiple processes of node on the same machine without any problem.
What I suggest you do is this:
If you haven't already, commit your project into a git repository
Create separate branches for development and production, as shown here: http://nvie.com/posts/a-successful-git-branching-model/#the-main-branches
Every time you are ready to publish a new piece of code, push it to the master branch
Move all configuration parameters to a config file and create a separate one for dev/production; you can do this easily with the config package (https://www.npmjs.com/package/config), as shown in the sketch after this list
Clone your repo to a separate folder which would always remain on the master (production) branch
Run your server from that folder - your team can then connect to it
All development would be done in the original folder. Once you are ready, push to master, and pull on the production folder.
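To make the config-file point concrete, here is a minimal sketch using the config package (file names and keys are just examples): config/default.json holds the development values and config/production.json the overrides, which are picked up automatically when NODE_ENV=production.

// config/default.json    -> { "server": { "port": 3000 } }
// config/production.json -> { "server": { "port": 8080 } }
const config = require('config');
const express = require('express');

const app = express();
const port = config.get('server.port');

app.listen(port, () => console.log(`listening on ${port}`));

You would then start the production clone with NODE_ENV=production node server.js and the development copy with a plain node server.js; the two processes only need to listen on different ports.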
Regarding the port numbers, you can use anything from 1024 up to 65535 (ports below 1024 require root privileges)
I currently have ReactJS + NodeJS/ExpressJS + Webpack in one project and would like to deploy it all together at once onto EC2 on Amazon Web Services (AWS).
What are some suggestions on how to go about doing this? I've done the research, but I've only seen tutorials on deploying one of them at a time, whether it be just ReactJS or just NodeJS. Any insights or leads would be greatly appreciated.
Will accept/upvote answer. Thank you in advance
You don't "deploy" ReactJS, it's just a static file or files like any other JS libraries in your applications. You also don't deploy Webpack. Webpack should run on a developer machine (or in CI/CD stack or build system).
As for the NodeJS part, just use Elastic Beanstalk.
I do not commit builds to source control. I see that a lot and it can make things easier, but you can also forget to rebuild as you have to do it manually, and it adds a lot of bloat to your repo.
I believe builds should be run as part of the deployment process. Assuming you are using git, you can add a hooks/post-receive script in a remote repo. When you push to that remote, the script will run. This is where I do my webpack build.
You may want to look into https://github.com/git-deploy/git-deploy for context, but I do this manually.
In my projects, on the deployment machine I do git init --bare /var/git/myproject.git and then add the script at /var/git/myproject.git/hooks/post-receive. The hook checks out the code into /var/www/myproject and runs the build, which fills /var/www/myproject/build. Then it removes the old /var/www/myproject/public and renames build to public. And done.
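For illustration, the hook is just an executable file; here is a minimal sketch of that checkout/build/swap sequence written as a Node script (the paths match the example above, and npm run build stands in for whatever command runs your webpack build):

#!/usr/bin/env node
// /var/git/myproject.git/hooks/post-receive (must be executable)
const { execSync } = require('child_process');
const run = (cmd) => execSync(cmd, { stdio: 'inherit' });

// Check the pushed master branch out into the working directory.
run('git --work-tree=/var/www/myproject --git-dir=/var/git/myproject.git checkout -f master');

// Install dependencies and run the build, which fills /var/www/myproject/build.
run('cd /var/www/myproject && npm install && npm run build');

// Swap the fresh build in for the old public folder.
run('rm -rf /var/www/myproject/public && mv /var/www/myproject/build /var/www/myproject/public');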
I'm coming from more of an operations background and would say that if your goal includes keeping that site up as much as possible, then use Packer to generate AMIs and CloudFormation to build an Application Load Balancer (the newer, cheaper brother of the ELB) in front of an Auto Scaling group, which keeps the EC2 instances up and running.
I'm currently working on a large scale project doing exactly what you describe. First off, there are so many different ways to do this, so what you really need is some general guidelines to get started, then we can dig a little deeper into details when some initial decisions are made, if you'd like. If you've already got the app deploying and running in two separate steps, but are just looking to combine those, I can definitely help. I'd just need to know how you're currently building/deploying. If you're just getting started on building your pipeline and need to set up the process from scratch, then read on:
First off, you'll want to set up some kind of build server that will install your npm dependencies and run your webpack build. Most likely you'll want a separate webpack config that's just for your build server; this will give you a build optimized for production or qa/staging environments. This config should split out vendor files that you won't update all the time, pull out separate css files with the extract-text plugin, and uglify the files. If you have an isomorphic React app, or are using es6 features not supported in your version of node, then you'll need a webpack build for your server code as well. This is really different from the hot-reloading build you'll want to have on your local machine while you're actually coding the app. I'll be happy to show some examples of our webpack config files for both local development and our CI build if you'd like.
You may also need a build.sh or makefile to do something with the compiled .js files that your webpack build creates, but that'll depend on your deployment, which I'll cover later. You can run your production build locally as you're getting your config just right and fire up the app from those files to test that it's all working. Additionally, since you'll likely want to be able to automate all of this, you probably want to run your tests and linting right before you build your app; we run eslint and mocha/jsdom to run our enzyme/expect specs as part of our build.
Once that's all working nicely, you'll most likely want to set up a build server that can run your builds automatically. My team is using Jenkins for this, which is a little more work to set up, but it's free (aside from the ec2 box we run it on). There are also a ton of subscription-based build/continuous integration servers, such as Travis and CodeShip. There are plenty of articles on the pros and cons of these different products and how to set them up. The bottom line is you'll want a build server that can pull down your code from source control, install npm deps, lint, test and build your app. If anything fails it should fail your build, and if your build succeeds you'll have some sort of archive that you'll later deploy to an ec2 instance. In our shop we use a build.sh file to tarball up our build archive (basically a folder with our node server files as well as our minified client files, css files and any fonts or images needed to run the app) and upload it to an S3 bucket that we deploy from. We like this fairly old-school method because the tarball will never change, so we have ultra reliable rollbacks.
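To make the "separate production config" idea concrete, here is a minimal sketch of such a file. It is not the poster's actual config, and it assumes the webpack 2/3-era plugins mentioned above (CommonsChunkPlugin, extract-text-webpack-plugin, UglifyJsPlugin); newer webpack versions replace these with optimization.splitChunks and mini-css-extract-plugin.

// webpack.config.prod.js
const path = require('path');
const webpack = require('webpack');
const ExtractTextPlugin = require('extract-text-webpack-plugin');

module.exports = {
  entry: {
    app: './src/index.js',
    vendor: ['react', 'react-dom']   // libraries that rarely change
  },
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: '[name].[chunkhash].js'
  },
  module: {
    rules: [
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
      { test: /\.css$/, use: ExtractTextPlugin.extract({ fallback: 'style-loader', use: 'css-loader' }) }
    ]
  },
  plugins: [
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' }),  // split vendor code into its own bundle
    new ExtractTextPlugin('[name].[contenthash].css'),            // pull css out into separate files
    new webpack.optimize.UglifyJsPlugin()                         // minify for production
  ]
};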
What you do with your build archive will depend on how you want to do deployments. We have a custom deployment system using Puppet, but there are plenty of products that do this, such as Elastic Beanstalk, that would be much easier to set up. You'll want some kind of process supervisor to actually run your node app, so unless you have a dev ops team that wants to build custom pipelines, using AWS built-in features will probably be the easiest way to get started. As usual, there are so many ways to do this, but the basic principle is that you need something to download your build archive and run/supervise your node process. You may also want to be able to create and configure ec2 boxes on the fly (Puppet, Chef, etc.), or even use containers (Docker), which allow you to move complete stacks around as single units. Using automation to create and configure servers is crucial if you need to scale your app, but it is complicated and may not be necessary for smaller projects. This is definitely an area where you can start simply and add complexity later on, as long as you have good long-term goals and make sure to take the necessary steps to prepare for future complexity.
All of this can get you pretty far into the weeds, so it's best to find the simplest thing that will serve your needs as you get started and then add complexity as real-life situations demand it. I'll be happy to elaborate on any of these details if you provide a little more context about how big and well-funded a project you're working on. If it's a little side project to learn the tech, I'd have very different advice than if you're trying to build an app that will have a lot of traffic and/or complex features.
This could get 100 different answers and they could all end up being good ideas. First, you mention react + nodejs - keep in mind that these solve different tasks. React is going to be frontend and served out via static files. Nodejs is focused more around the server-side and would be the code that serves data. They can easily work together. You might use Express for the webserver (nodejs) to serve the HTML/React pages.
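As a rough sketch of that last sentence (the build folder name is an assumption; adjust it to wherever webpack writes your bundles):

const express = require('express');
const path = require('path');

const app = express();

// Serve the compiled React bundle and other static assets.
app.use(express.static(path.join(__dirname, 'build')));

// API routes served by the same node process.
app.get('/api/health', (req, res) => res.json({ ok: true }));

// Fall back to index.html so client-side routing keeps working.
app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'build', 'index.html')));

app.listen(process.env.PORT || 3000);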
Unfortunately, I saw that you mentioned webpack, so you are going to have to 'build' your application with something - either via webpack, gulp, grunt, etc. This is where source control and build servers are great - but if you're new to it, it might be more complex than you need.
If you have just basic EC2 images as webservers and only 1-2, then the biggest hurdle is just pushing up your code. Something like https://deploybot.com/ could work as it can push your git repo down to multiple hosts via ftp, etc. If you wanted to get a bit fancier, you could look at something like Jenkins or some of the other items.
Docker is a great choice and if you are going to be dealing with multiple developers, server environments, deployments - it's worth the time. Otherwise, keep it simple and just get your code on the EC2 instance ;).
I am very new to Node.js (which I'm assuming this is; I'm so new that I'm not really understanding what's going on here). I'm working with a client library for a system called RJ Metrics. I'm basically tying their API in with a Volusion API in order to import data into their system from the Volusion site. The code for that all makes sense but I'm not understanding how to install it and use it.
Their documentation is here:
And I need to use the JavaScript library because I'm working with Volusion, which is on a Windows server, and there is no ASP/C# option here. It says "The RJMetrics Javascript client library is available via npm:" followed by terminal commands. After some research, it appeared that this uses Node.js, so I installed that on my computer and ran npm install rjmetrics in the Terminal, which succeeded. I was assuming, though, that I must have to log into their server and run the code in order to get it to work.
Does this require me to SSH into the server? Am I way off base and is there a way I can just include some JS files in my page? I looked at their GitHub too and all of the main files use the require() function in them which I'm gathering is a Node.js function?
Apologies if I'm way off, I'm into this up to my neck and just trying to sort it all out now.
This part of the documentation (to which you referred) is just plain ol' JavaScript, though npm is the Node package manager. So if you want, it looks like you can just run this .js script in a web browser like any other.
var rjmetrics = require("rjmetrics");
var client = rjmetrics.Client(api_key, client_id);
// do stuff with client
If you wanted to do it in Node, you would create a .js file on your machine with their API code inside it, doing whatever you want. Then in the terminal you run the script with "node myfile.js". A local webserver setup is all you need to create and test this.
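In other words, a complete myfile.js would look roughly like this (api_key and client_id are placeholders; the actual client methods to call are the ones described in the RJMetrics docs):

// myfile.js - run with: node myfile.js
var rjmetrics = require("rjmetrics");

var api_key = "your-api-key-here";   // placeholder
var client_id = 12345;               // placeholder

var client = rjmetrics.Client(api_key, client_id);

// do stuff with client here, using the methods from the RJMetrics docs
console.log("client created", client);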