I am not sure how you would go about elevating Node.js' permissions on the fly; the only way seems to be to run the whole program as root/admin.
Is there a way to elevate permissions on the fly (preferably dependency-free and cross-platform)?
I've tried running a child process with sudo prefixed to the Linux equivalent of the Node action (writeFile()), but even that didn't work out.
The concrete problem:
I am trying to write to the hosts file, but I can't without admin privileges, which is why I tried elevating them.
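Roughly what I tried, for reference (simplified; /etc/hosts is the Linux path, and the entry string is just an example — this only works if sudo can prompt on a real terminal or is configured passwordless):
const { execFile } = require('child_process');

const entry = '127.0.0.1 myapp.local'; // example entry

// Run the append as root; sudo applies to the whole sh -c command.
execFile('sudo', ['sh', '-c', `echo "${entry}" >> /etc/hosts`], (err, stdout, stderr) => {
  if (err) {
    // This is where it fails for me when sudo cannot prompt for a password.
    console.error('could not write hosts file:', stderr || err.message);
    return;
  }
  console.log('hosts file updated');
});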
I created a Discord bot and am now attempting to run it on an Ubuntu machine.
I copied the bot's folders over and installed Node.js; here is what I used to install Node.js:
sudo apt-get install -y nodejs
Then I used cd to go to the directory, and started my bot using node index.js
The bot started; however, when I closed PuTTY, expecting the bot to keep running on the VPS, it shut down.
I think the problem is that when you start the app in the PuTTY window, that process is tied to the window and gets terminated when the window is closed.
To avoid that you can use a tool like screen, tmux, nohup, bg and so on...
If you want to know which is best, try looking at this question on the Ask Ubuntu Stack Exchange.
The key concept is that you open a new window using the tmux command (or screen, ...), then run your bot like you always do. When you want to leave but keep the process running, you detach the session with a key combination, which varies from tool to tool.
If you want to access that window again, you can run a command that will "restore" your session, like
tmux list-sessions
tmux attach-session -t 0
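For example, with tmux the whole cycle looks roughly like this (the session name is just an example):
tmux new -s discordbot        # open a new session
node index.js                 # start the bot inside it
# press Ctrl+b then d to detach; the bot keeps running on the VPS
tmux attach -t discordbot     # reattach later from a new PuTTY window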
The Node.js instance is terminated when PuTTY is closed. You need something to keep the instance alive. Try:
PM2: http://pm2.keymetrics.io/
or,
Forever: https://github.com/foreverjs/forever#readme
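For example, getting started with PM2 looks roughly like this (the app name is just a label you pick):
npm install -g pm2
pm2 start index.js --name discord-bot
pm2 startup                   # prints a command to run so PM2 itself starts on boot
pm2 save                      # remember the current process list across reboots
pm2 logs discord-bot          # tail the bot's output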
The recommended approach, though, is to run the Node instance as a service that starts again on boot. Try looking at this:
https://stackoverflow.com/a/29042953/7739392
The shell runs in the foreground, which means any scripts you start there will end once you end your session. A simple workaround is to run your script in the background by adding & after the call, ideally with nohup in front so the process ignores the hangup signal when the connection closes:
nohup node index.js &
A better solution would be to create a service and let the service daemon run it for you; a sketch of that is below, but the line above should get you what you want for now.
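If you do want the service route, a minimal systemd unit looks roughly like this (the description, paths, user and file names are placeholders you would adapt):
[Unit]
Description=My Node bot
After=network.target

[Service]
ExecStart=/usr/bin/node /home/youruser/bot/index.js
WorkingDirectory=/home/youruser/bot
Restart=on-failure
User=youruser

[Install]
WantedBy=multi-user.target
Save it as something like /etc/systemd/system/mybot.service, then run sudo systemctl enable --now mybot.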
I recommend using one of these two node modules - ForeverJS or PM2. I'll show you how to quickly get started with ForeverJS but PM2 would be very similar.
You can easily install ForeverJS by typing the following in your terminal:
$ npm install forever -g
You may need to prefix that with sudo, depending on your user's privileges, although using sudo this way in production is not recommended due to the security risks.
Once it is installed, cd to your project's directory; then, just like you typed node index.js, you do something very similar with ForeverJS:
$ forever start index.js
Now when you exit the terminal your NodeJS application will remain as a running process.
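Once it's running, a few forever commands you'll probably want (these should all be part of the standard CLI, as far as I remember):
forever list              # show the scripts forever is managing
forever logs index.js     # view the captured output for that script
forever stop index.js     # stop it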
I have built a GUI with Node.js that allows a user to run custom binaries designed for the CLI. These binaries are spawned with the child_process module. A user is able to write to stdin and read from stdout of those child processes.
Yet, when a binary uses the sudo command, the password prompt bypasses stdout and goes straight to the (pseudo-)terminal device, so I can't catch it programmatically and ask the user for a password with a fancy GUI notification. I am aware that the -S flag makes sudo read the password from standard input, but I have no control over the scripts a user runs.
I have found out that the stream object representing a process's stdin or stdout may have an isTTY property indicating that the process is running in a TTY context. I have tried passing streams created with the tty module to the stdio option of child_process.spawn, without any luck.
I tried the pty.js project (https://github.com/chjj/pty.js/), which actually solves the problem, but for various reasons I can't rely on third-party libraries. I tried to figure out how pty.js redirects sudo prompts to stdin, and it seems they use custom C code to emulate a PTY (https://github.com/chjj/pty.js/blob/master/src/unix/pty.cc). I am concerned by a comment in one of the source files stating that ‘<...> with vanilla node, it's impossible to spawn a child process that has the stdin and stdout fd's be isatty <...>’. Is that true?
Right now I use a workaround that requires the user to run my GUI process with elevated privileges so it can spawn children that may use sudo, but I'm not satisfied with this hack. According to the manual (http://linux.die.net/man/8/sudo), there is a way to specify a helper program to handle the password prompt by setting the SUDO_ASKPASS environment variable, but I wasn't able to find an example of such an application.
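From what I can tell, the helper just has to be an executable that prints the password to stdout when sudo invokes it with the prompt as its argument; I imagine something as small as this would qualify (zenity is only my guess at a dialog tool here):
#!/bin/sh
# sudo -A runs this program and reads the password from its stdout;
# "$1" is the prompt string sudo passes in.
exec zenity --entry --hide-text --title "sudo" --text "$1"
Presumably I would then export SUDO_ASKPASS pointing at that script, make it executable, and the spawned scripts would have to call sudo -A — but since I don't control those scripts, I'm not sure this helps me.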
So I'd like to know if there is a common practice to overcome my problem, or should I give up and keep acquiring root privileges up front.
I'm using Sails.js 0.10.5 on Node 0.10.33 on Ubuntu Trusty. I'd like to execute the node process as a non-root user with the least possible privileges in the production environment. I'm comfortable with the various options for binding to ports below 1024 but I'm more concerned with directory permissions.
Ideally, I'd prefer the node process only have write access to its log files and nothing else. It should only have read access to the directory containing app.js and below.
At the moment I have needed to grant write access to the ./.tmp directory and also to the ./views directory due to the grunt tasks that run at startup. I'd rather perform the grunt tasks at deploy time as a different user instead of at run-time. The sails www command appeared promising but I couldn't get the desired outcome.
Can someone please point me in the right direction for running Sails.js with zero write access to its assets, views, etc?
Use sails www to build static assets
chmod -R ug=rX,o= the app directory (that gives files 440 and directories 550 — directories need the execute bit to be traversable), so that your user and the webserver (group) can read the files.
Use nginx/apache to host a webserver on port 80/443 and proxy requests to sails (running on its own port or over a unix socket); a rough nginx sketch follows after this list.
Run sails using PM2 to keep it running and have it manage/collect logs.
Sails will lift, but will be unable to write its .tmp directory, which shouldn't even be necessary since all your static files will be routed to the www directory through nginx/apache.
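For the proxy part, a minimal nginx server block might look something like this (port 1337 is Sails' default; the server name and paths are placeholders):
server {
    listen 80;
    server_name example.com;

    # serve the assets generated by `sails www` directly
    root /var/www/myapp/www;

    location / {
        try_files $uri @sails;
    }

    location @sails {
        proxy_pass http://127.0.0.1:1337;
        proxy_set_header Host $host;
    }
}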
The simplest solution to me seems to be to split the grunt tasks that need the elevated privileges into a separate file that you can run as a different user on deploy. Then sails won't need to run anything at lift and its directories can stay read-only.
I use PM2 with Apache as a proxy (with the WebSocket proxy module).
You can use a single proxy like Apache to route traffic from port 80 to the apps' internal ports based on the host name.
That way you can run multiple apps on the same server.
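A rough sketch of that host-based routing with Apache (host names and ports are placeholders; mod_proxy and mod_proxy_http, plus mod_proxy_wstunnel for WebSockets, need to be enabled):
<VirtualHost *:80>
    ServerName app1.example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3001/
    ProxyPassReverse / http://127.0.0.1:3001/
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3002/
    ProxyPassReverse / http://127.0.0.1:3002/
</VirtualHost>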
PM2 has a lot of useful features: viewing the logs of the various apps in the terminal, restarting and logging crashed apps, running an app as a given user, showing app status, etc.
Pm2 link: https://github.com/Unitech/pm2
PM2 configs: https://github.com/Unitech/PM2/blob/development/ADVANCED_README.md#options
I inherited a project where the previous person wrote tools to test our site's UI using jQuery and JS.
I don't know too much about it other than that it requires a browser to be spawned, and I think the tool uses JS to interact with iframes to check for the expected values.
My job is to get this tool to run on a remote server and post the results to Jenkins.
The remote test server and staging server are Linux. From our staging server, I want to write a script to spawn a browser and run commands from the tool to test our UI. I ran the following manually:
ssh -X user@remote_test_server /usr/bin/firefox
However, the remote server says:
Error: no display specified
Is there a way to spawn a browser for automated testing from one headless server to another? Thanks in advance for your help.
I faced a similar problem when I tried to automate a GUI installation program. While there are quite a few different possibilities to choose from (e.g. Xnest, Xephyr), I ended up using vncserver, because it's relatively easy to debug the GUI session this way.
You need to create a vncpassword file, I think:
mkdir -p $HOME/.vnc
chmod 0700 $HOME/.vnc
echo MyLittlePassword | vncpasswd -f > $HOME/.vnc/passwd
chmod 0600 $HOME/.vnc/passwd
Starting the server is then quite straightforward
vncserver
export DISPLAY=:1
/usr/bin/firefox&
...
Now it is possible to connect to the VNC server with a VNC viewer of your choice. But beware there may be no window manager, depending on the X startup scripts of your environment.
Shutting the server down
vncserver -kill :1
In the configuration of the Jenkins project, under Build Environment, enable "Start Xvfb before the build, and shut it down after."
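If you'd rather do the same thing by hand in a build step, the equivalent is roughly this (display number and resolution are arbitrary choices):
Xvfb :99 -screen 0 1280x1024x24 &
XVFB_PID=$!
export DISPLAY=:99
/usr/bin/firefox &
# ... run the UI test commands here ...
kill $XVFB_PID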
Ok so I have read through the Socket.IO docs and I am still a little unsure of a couple of points:
The documentation says...
To run the demo, execute the following:
git clone git://github.com/LearnBoost/Socket.IO-node.git socket.io
cd socket.io/example/
sudo node server.js
Now I don't know what this means at all! I think these are command-line commands. I of course have access to a command line on my localhost, but my online hosting package is a shared LAMP setup, meaning I don't have access to the root command line (I think).
How do I actually set up Socket.IO? Is it impossible on my shared hosting package?
Appreciate any help...
W.
If you aren't familiar with node.js or with basic command line usage then I would suggest that you use a hosted WebSockets solution like pusherapp. Trying to learn WebSockets, and Node.js, and the Linux command line all at once is going to lead to a lot of frustration. Take a look at pusherapp's quick start guide; it's very easy to get started. You can have 5 simultaneous connections with a single application for free (I'm not affiliated with pusherapp).
Updated (with inline answers to questions):
If you are going to go the direction of running a Socket.IO application:
You don't technically need git since you can download node.js and Socket.IO from their respective download links on github.
You don't actually need a LAMP server to use Socket.IO. By default Socket.IO functions as a simple webserver in addition to a WebSockets server (there's a small example sketch below, after these points). If you want server-side scripting then you might want Apache with mod_php, mod_python, etc.
You don't technically need a dedicated server or even root access. You do need a system where you can run a long-lived process. And if you want the service to start automatically when the system is rebooted, you will probably want to add a startup file to /etc/init.d or /etc/rc.d, which will require root access. Both node.js and Socket.IO can be installed and run from a normal home directory. If you want to run Socket.IO on a standard port like 80 or 443, then you will need to run it with root privileges.
Node.JS scales quite well so Socket.IO will probably scale pretty well too.
It's not a simple matter to get everything set up and working, but if your goal is a free solution for web serving plus WebSockets then Socket.IO is probably a good route to at least explore if you are brave.
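To give an idea of the kind of thing you'd be running, a minimal Socket.IO server looks roughly like this (the exact API differs a bit between versions, so treat it as a sketch):
var http = require('http');
var io = require('socket.io');

// Plain Node HTTP server; Socket.IO attaches to it.
var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>hello</h1>');
});
server.listen(8080); // an unprivileged port, so no root needed

var socket = io.listen(server);
socket.on('connection', function (client) {
  client.send('welcome');
  client.on('message', function (msg) { console.log(msg); });
  client.on('disconnect', function () { console.log('client left'); });
});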
First you'll have to determine if your host supports SSH. Sometimes they don't by default on shared hosting, but if you ask they can turn it on. If it does, you'll use some sort of SSH client to connect; PuTTY for Windows is the most common. Then you'll use git, which is a source control program and which you'll probably have to install on your host, which may or may not be allowed. If you can, this can be accomplished a number of ways; you'll want to read the git documentation, as it will depend largely on what Linux distribution you're running. cd is change directory, basic command-line stuff. sudo on the last line tells the system to run the command as root, for which it will ask you the password, which you may not have on your host. Sounds like you're gonna have an uphill battle on shared hosting. You may want to opt for a VPS instead.
If your shared host is a LAMP system with no command line access you're not going to get very far with Socket.IO. The instructions you posted assume you have command line access and that you've installed the node.js runtime on your system.
If you really want to try this I recommend you get a VPS of your own (I use prgmr.com) to test it out. For what it's worth I found the Socket.IO platform pretty nice to use once I got it up and running.