React app cannot access the Google Cloud Run environment variables - javascript

I am using GitLab CI as my CI/CD tool. I am deploying a dockerized React app to Cloud Run, but I am not able to access the environment variables declared on Cloud Run. Thank you!
Dockerfile
# build environment
FROM node:8-alpine as react-build
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
COPY --from=react-build /app/build /usr/share/nginx/html
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
gitlab-ci.yml
default:
  image: google/cloud-sdk:alpine
  before_script:
    - gcloud config set project PROJECTID
    - gcloud auth activate-service-account --key-file $GCP_SERVICE_CREDS
stages:
  - staging
  - production
staging:
  stage: staging
  environment:
    name: staging
  only:
    variables:
      - $DEPLOY_ENV == "staging"
  script:
    - gcloud builds submit --tag gcr.io/PROJECTID/REPOSITORY
    - gcloud run deploy REPOSITORY --image gcr.io/PROJECTID/REPOSITORY --platform managed --region us-central1 --allow-unauthenticated
production:
  stage: production
  environment:
    name: production
  only:
    variables:
      - $DEPLOY_ENV == "production"
  script:
    - gcloud builds submit --tag gcr.io/PROJECTID/REPOSITORY
    - gcloud run deploy REPOSITORY --image gcr.io/PROJECTID/REPOSITORY --platform managed --region us-central1 --allow-unauthenticated --set-env-vars NODE_ENV=production

Credits to Guillaume Blaquiere because this answer is based on his post from this thread.
According to React documentation:
The environment variables are embedded during the build time. Since Create React App produces a static HTML/CSS/JS bundle, it can’t possibly read them at runtime. To read them at runtime, you would need to load HTML into memory on the server and replace placeholders in runtime.
Most likely what happens is that you serve static files to your users; their browsers download the files and execute the JavaScript, for example:
console.log("REDIRECT", process.env.REACT_APP_API_END_POINT)
This prints undefined because the JavaScript executes in the users' browsers and reads the env variable from the current environment: the users' browser, not Cloud Run. Environment variables set on Cloud Run are only visible to code that executes on Cloud Run; if the code runs on the user side (in their browser), the env vars will not appear.

As I understand it, you want to access environment variables declared on GCP, such as $GCP_SERVICE_CREDS, $PROJECTID, $REPOSITORY and others, in your GitLab pipeline.
To do this, go to your GitLab project's Settings, then click on CI/CD. Once there, click on the Expand button next to Variables and add your various GCP variables with their values.
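Once defined there, those variables behave like regular shell variables inside your jobs. As a minimal sketch (assuming you stored a variable named PROJECTID in the UI), you could then reference it in .gitlab-ci.yml like this:

```yaml
before_script:
  # $PROJECTID is resolved from the CI/CD variables defined in the GitLab UI
  - gcloud config set project $PROJECTID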

Related

AWS Amplify JavaScript Gitpod automatic setup

I'm trying to set up an AWS Amplify JavaScript project with Gitpod in a way that when I start a new Workspace I don't have to manually go through the amplify-cli steps (adding IAM user, generating aws-exports.js file, etc.).
I've managed to successfully install the aws-cli and amplify-cli on the machine so far (I'm adding this to my .gitpod.yml file on task init)
$ npm install @aws-amplify/cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
so I can add the
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
environment variables to gitpod variables, but when running for example amplify pull I don't see the [default] user as I normally would when running it with local setup.
I've got it working. First, I added these environment variables for the Amplify setup using the Gitpod account settings:
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-west-2
AWS_AMPLIFY_PROJECT_NAME=headlessProjectName
AWS_AMPLIFY_APP_ID=amplifyServiceProjectAppId
The first three are the IAM user credentials and config; the latter two are Amplify-specific and can be found inside the AWS console on the Amplify project.
After that, I created a Dockerfile for the Gitpod custom Docker image (as suggested by @Pauline) and a bash script that creates the ~/.aws config files and runs amplify pull in headless mode.
.gitpod.Dockerfile
FROM gitpod/workspace-full
# install aws-cli v2
RUN sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip \
&& sudo ./aws/install
# install amplify-cli
RUN sudo curl -sL https://aws-amplify.github.io/amplify-cli/install | bash && $SHELL
This preinstalls the aws-cli and amplify-cli on the Docker image so they are ready to use inside the workspace. Also, don't forget to add the Docker configuration to the top of the .gitpod.yml file:
.gitpod.yml
image:
  file: .gitpod.Dockerfile
At this point, I'm setting up Amplify in a way that I don't have to manually pick the amplify-cli options when a new workspace is started. The magic happens inside a custom bash script with the help of the environment variables specified at the start:
amplify-pull.bash
#!/bin/bash
set -e
IFS='|'
# Specify the headless amplify pull parameters
# https://docs.amplify.aws/cli/usage/headless/#amplify-pull-parameters
VUECONFIG="{\
\"SourceDir\":\"src\",\
\"DistributionDir\":\"dist\",\
\"BuildCommand\":\"npm run-script build\",\
\"StartCommand\":\"npm run-script serve\"\
}"
AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":false,\
\"profileName\":\"default\",\
\"accessKeyId\":\"$AWS_ACCESS_KEY_ID\",\
\"secretAccessKey\":\"$AWS_SECRET_ACCESS_KEY\",\
\"region\":\"$AWS_DEFAULT_REGION\"\
}"
AMPLIFY="{\
\"projectName\":\"$AWS_AMPLIFY_PROJECT_NAME\",\
\"appId\":\"$AWS_AMPLIFY_APP_ID\",\
\"envName\":\"dev\",\
\"defaultEditor\":\"code\"\
}"
FRONTEND="{\
\"frontend\":\"javascript\",\
\"framework\":\"vue\",\
\"config\":$VUECONFIG\
}"
PROVIDERS="{\
\"awscloudformation\":$AWSCLOUDFORMATIONCONFIG\
}"
# Create AWS credential file inside ~/.aws
mkdir -p ~/.aws \
&& echo -e "[default]\naws_access_key_id=$AWS_ACCESS_KEY_ID\naws_secret_access_key=$AWS_SECRET_ACCESS_KEY" \
>> ~/.aws/credentials
# Create AWS config file inside ~/.aws
echo -e "[default]\nregion=$AWS_DEFAULT_REGION" >> ~/.aws/config
# Run amplify pull in headless mode;
# this also generates the aws-exports.js file inside /src
amplify pull \
--amplify $AMPLIFY \
--frontend $FRONTEND \
--providers $PROVIDERS \
--yes
I'm using Vue for my frontend as an example, so those values need to be changed depending on the project type. The rest of the params are pretty straightforward; more info about the headless mode can be found here. I'm also creating the AWS config files before the amplify pull command, as mentioned before.
And this is what the final .gitpod.yml file looks like
.gitpod.yml
image:
  file: .gitpod.Dockerfile
tasks:
  - name: Amplify pull dev
    command: |
      bash amplify-pull.bash
      gp sync-done amplify
  - name: Install npm packages, run dev
    init: yarn install
    command: |
      gp sync-await amplify
      yarn serve
ports:
  - port: 8080
    onOpen: open-preview
I'm also waiting for the Amplify pull to finish before running the dev server using gp sync.

Docker: Why Do Variables Passed From .env Linger in the Container Even After They Are Removed From .env and Rebuilt

I'm trying to understand why env variables inside my Docker container keep appearing when I've clearly removed or commented them out of my .env file. I'm fairly new to Docker and don't know if this is expected behavior or an anomaly.
The way my system is set up, I spin up an instance of Azure's IoT Edge server locally (via deployment.template.json), which builds the Docker container and populates the environment variables using the associated .env file.
Now what's perplexing me is that if I were to completely stop the server (not pause), comment out/remove the variable from the .env file, restart the server, and inspect the container (docker container inspect), I still see the variable name and value. I've also used docker system prune -a --volumes after stopping the server to prune my system and volumes, then restarted the server only to see the variable still listed.
Just in case it helps, inside my deployment.template.json I'm passing my variables as MY_VAR=${MY_VAR}. Then in my .env file I have the variable as MY_VAR=abc123.
From my Dockerfile:
# -------------
# Build Sources
# -------------
FROM node:10-alpine as builder
# Install additional git and openssh dependencies and make sure GitLab domain is accepted by SSH
RUN apk add --no-cache openssh git curl \
&& mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan gitlab.com github.com >> /root/.ssh/known_hosts
WORKDIR /app
# Install app dependencies
RUN npm i -g typescript --no-cache --silent
COPY package*.json ./
RUN npm ci --only=production --silent
# Copy sources and build
COPY . .
RUN npm run build
# ----------------
# Production Image
# ----------------
FROM node:10-alpine
RUN apk add --no-cache curl
WORKDIR /app
COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/dist /app/dist
COPY . .
USER node
CMD ["node", "dist/index.js"]
You can run docker inspect on your container and see which environment variables are defined in its docker create options, for example:
docker inspect --format '{{json .Config.Env}}' <container-name>
You can also check the docker create options for the module in the Azure Portal.

Heroku: upload Node app and Java jar

I am developing a Node.js app at the moment. I plan to host it on Heroku.
The catch is that this app relies on a jar file, which I will obviously have to run.
Is it possible to run Java on Heroku?
You'll need to add the JVM buildpack to your app:
$ heroku buildpacks:add -i 1 heroku/jvm
Then redeploy with:
$ git commit -m "Add JVM" --allow-empty
$ git push heroku master
After this, the java command will be available at runtime.

Deploy Meteor to Google App Engine 2017

So I am trying to deploy a simple meteor app to Google App Engine. I've tried following this tutorial https://cloud.google.com/community/tutorials/run-meteor-on-google-app-engine
But it resulted in
error: Can't find npm module 'meteor-deque'. Did you forget to call 'Npm.depends' in package.js within the 'meteor' package?
Googling turned up a few more tutorials, but judging by their comments it seems they are outdated as well.
There is also this one https://medium.com/google-cloud/meteor-google-a-devops-post-b8a17f889f84
However this is about deploying to the compute engine, so this is a plan B.
So I wonder if any of you have successfully deployed Meteor to GAE recently, in 2017, with Meteor 1.4? Can you please share details?
Thanks to kiyohiko from the Meteor forums.
https://forums.meteor.com/t/deploy-meteor-to-google-app-engine-2017/36171/4
Here are the configs that worked for me
app.yaml
env: flex
runtime: custom
threadsafe: true
automatic_scaling:
  max_num_instances: 1
env_variables:
  ROOT_URL: https://<gae-app-name>.appspot.com
  MONGO_URL: mongodb://<mongodb-username>:<mongodb-password>@<gce-ip>:27017/<mongodb-name>
  DISABLE_WEBSOCKETS: "1"
skip_files:
  - ^(.*/)?\.dockerignore$
  - ^(.*/)?npm-debug\.log$
  - ^(.*/)?yarn-error\.log$
  - ^(.*/)?\.git$
  - ^(.*/)?\.hg$
  - ^(.*/)?\.svn$
Dockerfile
FROM launcher.gcr.io/google/nodejs
RUN install_node v4.6.2
COPY . /app/
RUN (cd programs/server && npm install --unsafe-perm)
CMD node main.js
Steps to deploy
$> meteor build ../ --directory --architecture os.linux.x86_64 --server-only
$> cp app.yaml ../bundle/ && cp Dockerfile ../bundle/
$> cd ../bundle && gcloud app deploy --verbosity=info -q

Error Building on Heroku - Isomorphic App

When trying to deploy my app on Heroku, I'm getting the following errors:
Cannot GET /
NOT FOUND - The server has not found anything matching the requested URI (Uniform Resource Identifier).
Server is being run outside of live development mode, meaning it will only serve the compiled application bundle in ~/dist. Generally you do not need an application server for this and can instead use a web server such as nginx to serve your static files. See the "deployment" section in the README for more information on deployment strategies.
I understand that I should be running it "live", rather than from the localhost, so I set the following settings via the CLI:
heroku config:set NODE_ENV=production
heroku config:set NODE_PATH=./src
heroku config:set NPM_CONFIG_PRODUCTION=false
What else should I check for? The app is a clone of this boilerplate: https://github.com/davezuko/react-redux-starter-kit
My Heroku link is: https://hidden-temple-43853.herokuapp.com/
Assuming you don't use a Procfile to tell Heroku how to launch your app, the npm start script from package.json will be used instead.
Are you running npm run deploy with NODE_ENV=production before deploying to Heroku?
Have a look at this issue, where a fix for deploying to Heroku was suggested (copied below).
// ...
// ------------------------------------
// Apply Webpack HMR Middleware
// ------------------------------------
if (config.env === 'development') {
  const webpackDevMiddleware = require('./middleware/webpack-dev').default
  const webpackHMRMiddleware = require('./middleware/webpack-hmr').default
  const compiler = webpack(webpackConfig)

  // Enable webpack-dev and webpack-hot middleware
  const { publicPath } = webpackConfig.output
  app.use(webpackDevMiddleware(compiler, publicPath))
  app.use(webpackHMRMiddleware(compiler))
  // ...
