Login credentials for Puppeteer running on a Docker container - javascript

I am running puppeteer on a docker container in headless mode to test our website. The first page is the login page. The puppeteer script and docker files are stored in an internal git repo. What is a good way of securely storing the login credentials?
Obviously not as a file in the git repo. Docker secrets is an option; what are some other options? I need Puppeteer to read them without any user intervention.

You can pass the credentials as environment variables to your Docker container. The following command starts the container and passes the variables LOGIN_USER and LOGIN_PASSWORD from your host into the Docker environment. That way, you specify them as environment variables on your host system, but you do not put them in your code or repository.
Starting Docker
docker run -e LOGIN_USER -e LOGIN_PASSWORD [...]
Inside the container
Inside your container, you then use the variables by accessing process.env.LOGIN_USER and process.env.LOGIN_PASSWORD like this (example using page.type):
page.type('#input-field', process.env.LOGIN_USER);
Setting the environment variables
There are multiple options to set the environment variables. You can either set them permanently (in case you want to run multiple docker containers) or only for a single command. Check out this answer on askubuntu for more information.
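Putting it together, here is a minimal sketch of the container-side script that reads both variables and fails fast when they are missing. The URL and the #username, #password, and #login-button selectors are hypothetical placeholders, not taken from the question:

// login.js -- a sketch; adapt the URL and selectors to your login page
const puppeteer = require('puppeteer');

(async () => {
  const { LOGIN_USER, LOGIN_PASSWORD } = process.env;
  if (!LOGIN_USER || !LOGIN_PASSWORD) {
    throw new Error('LOGIN_USER and LOGIN_PASSWORD must be set');
  }

  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com/login'); // placeholder URL

  await page.type('#username', LOGIN_USER);     // placeholder selectors
  await page.type('#password', LOGIN_PASSWORD);
  await Promise.all([
    page.waitForNavigation(),
    page.click('#login-button'),
  ]);

  await browser.close();
})();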

How to properly set environment variables in Next.js app deployed to Vercel?

I am building my web app in Next.js and have been doing some tests. I push my code to GitHub and from there deploy the project to Vercel.
I am using Google API dependencies that require a Client ID and Client Secret so that I can send emails with nodemailer from my client side to an inbox (I'm doing this via a contact form).
However, everything works fine on localhost; when I deploy to Vercel, my contact form cannot send mails (an issue that has to do with environment variables).
I tried Options A and B
Option A
Created a .env.local, added my variables there, then accessed them in next.config.js as shown in the code below (a console log shows that I can access the variables anywhere in my app).
.env.local
env:{
  CLIENT_URL:'vxcxsfddfdgd',
  MAILING_SERVICE_CLIENT_ID:'1245785165455ghdgfhasbddahhhhhhhhm',
  MAILING_SERVICE_CLIENT_SECRET:'Rdfvcnsf4263543624362536',
  MAILING_SERVICE_REFRESH_TOKEN:'000000',
  USER_EMAIL_ADDRESS:'yesyesyesyesyesyes@gmail.com',
}
next.config.js
module.exports = {
  env:{
    CLIENT_URL: process.env.CLIENT_URL,
    MAILING_SERVICE_CLIENT_ID: process.env.MAILING_SERVICE_CLIENT_ID,
    MAILING_SERVICE_CLIENT_SECRET: process.env.MAILING_SERVICE_CLIENT_SECRET,
    MAILING_SERVICE_REFRESH_TOKEN: process.env.MAILING_SERVICE_REFRESH_TOKEN,
    USER_EMAIL_ADDRESS: process.env.USER_EMAIL_ADDRESS,
  }
}
With Option A as above, sending emails works neither on localhost nor on Vercel.
Option B
I put my variables in next.config.js as below, add next.config.js to .gitignore, then push to GitHub.
module.exports = {
  env:{
    CLIENT_URL:'http://localhost:3000',
    MAILING_SERVICE_CLIENT_ID:'7777777777777777777777',
    MAILING_SERVICE_CLIENT_SECRET:'R123456789',
    MAILING_SERVICE_REFRESH_TOKEN:'1123456789',
    USER_EMAIL_ADDRESS:'seiseibaba@gmail.com',
  }
}
Option B works on localhost, but if I add the environment variables on Vercel as shown here, then sending mail does not work.
How can I set this up to work properly?
Simply creating a .env.local (or .env) file with your environment variables should be enough for them to be picked up by Next.js on the server. There's no need to add anything to your next.config.js.
# .env.local
CLIENT_URL=vxcxsfddfdgd
MAILING_SERVICE_CLIENT_ID=1245785165455ghdgfhasbddahhhhhhhhm
MAILING_SERVICE_CLIENT_SECRET=Rdfvcnsf4263543624362536
MAILING_SERVICE_REFRESH_TOKEN=000000
USER_EMAIL_ADDRESS=yesyesyesyesyesyes@gmail.com
However, if you need to expose a variable to the browser, you have to prefix it with NEXT_PUBLIC_.
NEXT_PUBLIC_CLIENT_URL=vxcxsfddfdgd
This will then be available in the browser through:
process.env.NEXT_PUBLIC_CLIENT_URL
For more details about environment variables in Next.js refer to https://nextjs.org/docs/basic-features/environment-variables.
The same principle applies to environment variables you create in Vercel (or any other hosting service): adding the prefix makes them available to the browser.
You can add environment variables in Vercel through the Environment Variables page of your Project Settings, matching the variables in your .env.local.
For more details about environment variables in Vercel refer to https://vercel.com/docs/concepts/projects/environment-variables.
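For the contact-form use case, the server-side variables are typically consumed in an API route rather than in browser code. Below is a minimal sketch of a hypothetical pages/api/contact.js using nodemailer's OAuth2 transport; the env variable names follow the question, the route and request shape are assumptions, and error handling is trimmed:

// pages/api/contact.js -- a sketch, not the asker's actual code.
// Runs server-side only, so the un-prefixed env variables are available here.
import nodemailer from 'nodemailer';

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).end();
  }

  const transporter = nodemailer.createTransport({
    service: 'gmail',
    auth: {
      type: 'OAuth2',
      user: process.env.USER_EMAIL_ADDRESS,
      clientId: process.env.MAILING_SERVICE_CLIENT_ID,
      clientSecret: process.env.MAILING_SERVICE_CLIENT_SECRET,
      refreshToken: process.env.MAILING_SERVICE_REFRESH_TOKEN,
    },
  });

  await transporter.sendMail({
    from: process.env.USER_EMAIL_ADDRESS,
    to: process.env.USER_EMAIL_ADDRESS,
    subject: 'Contact form message',
    text: req.body.message, // assumes a JSON body with a "message" field
  });

  res.status(200).json({ ok: true });
}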

How to deploy node.js application in cyberpanel?

I have my application developed in Node.js, and I have CyberPanel installed on my server. I have seen many examples of how to deploy a Node application in CyberPanel, but I am unsure how to view it from the browser.
So far I have the following configuration in vHost:
context / {
  type appserver
  location /FOLDER/FOLDER/PROJECT_FOLDER/dist
  binPath /usr/bin/node
  startupFile index.js
  appType node
  maxConns 100
}
My application runs perfectly on port 3000 when I run it from the console, but I need it served on port 80 through CyberPanel.
Does anyone have an idea how to do it?
Try the following steps. Essentially, the issue lies in selecting the document root folder and allowing access to the application.
Create a website using the normal CyberPanel menu. [https://cyberpanel.net/docs/2-creating-website/]
Upload your Node.js files into the public_html folder of the website.
Enter the OpenLiteSpeed panel via port :7080 (you may need to enable the port on the firewall).
Navigate to Virtual Hosts > Your Domain > Context.
Select App Server; for the location, using $VH_ROOT instead of the hardcoded path worked.
Additionally, don't forget to enable access to the site by allowing all IPs (*) in access control.
context / {
  type appserver
  location $VH_ROOT/public_html/
  binPath /usr/bin/node
  appType node
  startupFile server.js  # the name of your startup file
  appserverEnv 1
  maxConns 100
  accessControl {
    allow *
  }
  rewrite {
  }
  addDefaultCharset off
}
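For reference, a minimal startup file for this setup could look like the sketch below. This is an assumption, not part of the original answer; it listens on whatever port the environment provides, falling back to 3000:

// server.js -- hypothetical minimal startup file
const http = require('http');

const port = process.env.PORT || 3000; // assume the port is supplied via the environment

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node behind OpenLiteSpeed\n');
}).listen(port, () => {
  console.log(`Server listening on port ${port}`);
});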
I am going to answer the question point by point.
First of all, CyberPanel by default only takes an app.js file as the core file to run the application.
Second, how do you change that default startup file? Use the startupFile directive:
context / {
  type appserver
  startupFile index.js  # NAME OF YOUR STARTUP FILE
  location /home/PROJECT_FOLDER/public_html/dist
  binPath /usr/bin/node
  appType node
  appserverEnv 1
  maxConns 100
  accessControl {
    allow *
  }
  rewrite {
  }
  addDefaultCharset off
}
location /FOLDER/FOLDER/PROJECT_FOLDER/dist
Note: this location parameter is the path to your startup file; you can find it via the file manager. You cannot run TypeScript code directly here, so you have to compile it to JavaScript with the tsc command and then point the location parameter in the vhost config at the resulting dist folder.
The next question is how to run the application outside the console:
Create a website to deploy the project (see the website-creation guide linked in the previous answer).
Issue SSL for the website.
Zip all your deployment files, upload the archive via CyberPanel's file manager, and extract it there. In my case the dist folder contains all the compiled JavaScript files, including index.js, the main startup file.
Click on "Fix Permissions" in the file manager.
Go to the web terminal and install the node modules: type cd .. and press Enter, then locate your project directory (use the ls command to list files and folders). Mine was home/FOLDERNAME/public_html, so: cd home/FOLDERNAME/public_html followed by npm install.
Run your project from the terminal to confirm that it works.
Configure your vhost config file with the context block shown above.
If your domain is set up correctly, you can reach the API on your domain; otherwise, click the preview button in CyberPanel.
Note: always run the code in the terminal first to check that it works.

How can I have a host and container read/write the same files with Docker?

I would like to volume mount a directory from a Docker container to my workstation, so that when I edit the content of the volume mount from my workstation, it updates in the container as well. This would be very useful for testing and developing web applications in general.
However, I get a permission denied error in the container, because the UIDs in the container and on the host aren't the same. Isn't the original purpose of Docker to make development faster and easier?
This answer works around the issue I am facing when volume mounting a Docker container to my workstation. But by doing this, I make changes to the container that I don't want in production, and that defeats the purpose of using Docker during development.
The container runs Alpine Linux, the workstation Fedora 29, and the editor is Atom.
Question
Is there another way, so both my work station and container can read/write the same files?
There are multiple ways to do this, but the central issue is that bind mounts do not include any UID mapping capability: the UID on the host is what appears inside the container and vice versa. If those two UIDs do not match, you will read/write files with different UIDs and likely experience permission issues.
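To see the mismatch concretely before choosing a fix, a quick Node one-off inside the container can compare the process UID with the owner of the mounted files. This is a diagnostic sketch; /code is a hypothetical stand-in for your mount point:

// check-uid.js -- diagnostic sketch; run inside the container (POSIX only).
const fs = require('fs');

const mountPoint = '/code'; // hypothetical bind-mount path
const stat = fs.statSync(mountPoint);

console.log(`process uid/gid: ${process.getuid()}/${process.getgid()}`);
console.log(`${mountPoint} owner uid/gid: ${stat.uid}/${stat.gid}`);

if (stat.uid !== process.getuid()) {
  console.warn('UID mismatch: expect permission errors on write');
}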
Option 1: get a Mac or deploy docker inside of VirtualBox. Both of these environments have a filesystem integration that dynamically updates the UIDs. For Mac, that is implemented with OSXFS. Be aware that this convenience comes with a performance penalty.
Option 2: Change your host. If the UID on the host matches the UID inside the container, you won't experience any issues. You'd just run a usermod on your user on the host to change your UID there, and things will happen to work, at least until you run a different image with a different UID inside the container.
Option 3: Change your image. Some will modify the image to a static UID that matches their environment, often to match a UID in production. Others will pass a build arg with something like --build-arg UID=$(id -u) as part of the build command, and then use it in the Dockerfile with something like:
FROM alpine
ARG UID=1000
RUN adduser -u ${UID} app
The downside of this is each developer may need a different image, so they are either building locally on each workstation, or you centrally build multiple images, one for each UID that exists among your developers. Neither of these are ideal.
Option 4: Change the container UID. This can be done in the compose file, or on a one-off container with something like docker run -u $(id -u) your_image. The container will now be running with the new UID, and files in the volume will be accessible. However, the username inside the container will not necessarily map to your UID, which may look strange to any commands you run inside the container. More importantly, any files owned by the user inside the container that you have not hidden with your volume will have the original UID and may not be accessible.
Option 5: Give up, run everything as root, or change permissions to 777 allowing everyone to access the directory with no restrictions. This won't map to how you should run things in production, and the container may still write new files with limited permissions making them inaccessible to you outside the container. This also creates security risks of running code as root or leaving filesystems open to both read and write from any user on the host.
Option 6: Setup an entrypoint that dynamically updates your container. Despite it meaning a change to your image, this is my preferred solution for completeness. Your container does need to start as root, but only in development, and the app will still be run as the user, matching the production environment. The first step of that entrypoint is to change the user's UID/GID inside the container to match your volume's UID/GID. This is similar to option 4, but now files inside the image that were not replaced by the volume have the right UIDs, and the user inside the container will show with the changed UID, so commands like ls display the username inside the container, not a UID that may map to another user or to no one at all. While this is a change to your image, the code only runs in development, and only as a brief entrypoint to set up the container for that developer, after which the process inside the container will look identical to that in a production environment.
To implement this I make the following changes. First the Dockerfile now includes a fix-perms script and gosu from a base image I've pushed to the hub (this is a Java example, but the changes are portable to other environments):
FROM openjdk:jdk as build
# add this copy to include fix-perms and gosu or install them directly
COPY --from=sudobmitch/base:scratch / /
RUN apt-get update \
 && apt-get install -y maven \
 && useradd -m app
COPY code /code
RUN mvn build
# add an entrypoint to call fix-perms
COPY entrypoint.sh /usr/bin/
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["java", "-jar", "/code/app.jar"]
USER app
The entrypoint.sh script calls fix-perms and then uses gosu with exec to drop from root to the app user:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  # running on a developer laptop as root
  fix-perms -r -u app -g app /code
  exec gosu app "$@"
else
  # running in production as a user
  exec "$@"
fi
The developer compose file mounts the volume and starts as root:
version: '3.7'

volumes:
  m2:

services:
  app:
    build:
      context: .
      target: build
    image: registry:5000/app/app:dev
    command: "/bin/sh -c 'mvn build && java -jar /code/app.jar'"
    user: "0:0"
    volumes:
      - m2:/home/app/.m2
      - ./code:/code
This example is taken from my presentation available here: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#fix-perms
Code for fix-perms and other examples are available in my base image repo: https://github.com/sudo-bmitch/docker-base
Since the UIDs in your containers are baked into the container definition, you can safely assume that they are relatively static. In this case, you can create a user on your host system with a matching UID and GID. Switch to the new account, and then make your edits to the files. Your host OS will not complain, since it thinks it's just the user accessing its own files, and your container OS will see the same.
Alternatively, you can consider editing these files as root.

What is the best practice for figuring out in which AWS Beanstalk Environment my Node.js application is currently running?

My Node.js Express application runs in AWS Beanstalk. I've created three Beanstalk Environments for my application, namely:
DEV (Development)
UAT (User Acceptance Testing)
PROD (Production)
Depending on the environment my application is running in, I would like to connect to different databases and use different cascading style sheets.
What is the best practice for figuring out in which AWS Beanstalk Environment my Node.js application is currently running?
I get the impression I should be using Beanstalk Environment Tags, but I've not been able to figure out how to access them via my Node.js application.
That's correct: use the environment variables you have configured in the Beanstalk console to let the instance of the application know which environment it is running in. You don't get that many options in a Node Beanstalk app, but if, say, you only want to pass a DB connection string and a CSS path, you could do that with PARAM1 and PARAM2, then access these from within your app with
process.env.PARAM1 and process.env.PARAM2
(I've usually pushed these into more appropriately named variables/places on application bootstrap.)
Your other option is just to pass some sort of 'env' variable in PARAM1, then have your app work out what to do with your various configurations (but this adds another hidden layer of config into your application).
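As a sketch of that second option (the environment names and settings below are hypothetical, not from the question), a small config module can map the environment name passed in PARAM1 to per-environment settings at bootstrap:

// config.js -- a sketch; connection strings and paths are placeholders.
const ENV = process.env.PARAM1 || 'DEV'; // e.g. set PARAM1=PROD in the Beanstalk console

const settings = {
  DEV:  { dbUrl: 'mongodb://localhost/dev', cssPath: '/css/dev.css' },
  UAT:  { dbUrl: 'mongodb://db-uat/uat',    cssPath: '/css/uat.css' },
  PROD: { dbUrl: 'mongodb://db-prod/prod',  cssPath: '/css/prod.css' },
};

if (!settings[ENV]) {
  throw new Error(`Unknown environment: ${ENV}`);
}

module.exports = { env: ENV, ...settings[ENV] };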
