AWS Amplify JavaScript Gitpod automatic setup - javascript

I'm trying to set up an AWS Amplify JavaScript project with Gitpod in a way that when I start a new Workspace I don't have to manually go through the amplify-cli steps (adding IAM user, generating aws-exports.js file, etc.).
I've managed to successfully install the aws-cli and amplify-cli on the machine so far (I'm adding this to the init task in my .gitpod.yml file):
$ npm install -g @aws-amplify/cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
so I can add the
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
environment variables as Gitpod variables, but when running, for example, amplify pull, I don't see the [default] profile as I normally would when running it with a local setup.

I've got it working. First, I added these environment variables for the Amplify setup using the Gitpod account settings:
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-west-2
AWS_AMPLIFY_PROJECT_NAME=headlessProjectName
AWS_AMPLIFY_APP_ID=amplifyServiceProjectAppId
The first three are the IAM user credentials and region config; the latter two are Amplify-specific and can be found in the AWS console on the Amplify project.
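If you prefer the terminal over the account settings page, Gitpod's gp CLI can persist the same variables (a minimal sketch; the values are the placeholders from above):
# Persist user-level environment variables for future workspaces
gp env AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
gp env AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
gp env AWS_DEFAULT_REGION=us-west-2
gp env AWS_AMPLIFY_PROJECT_NAME=headlessProjectName
gp env AWS_AMPLIFY_APP_ID=amplifyServiceProjectAppId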
After that, I created a Dockerfile for the Gitpod custom Docker image (as suggested by @Pauline) and a bash script that creates the ~/.aws config files and runs amplify pull in headless mode.
.gitpod.Dockerfile
FROM gitpod/workspace-full
# install aws-cli v2
RUN sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && sudo ./aws/install
# install amplify-cli
RUN sudo curl -sL https://aws-amplify.github.io/amplify-cli/install | bash
This will preinstall aws-cli and amplify-cli in the Docker image so they are ready to use inside the workspace. Also, don't forget to add the Docker configuration to the top of the .gitpod.yml file:
.gitpod.yml
image:
  file: .gitpod.Dockerfile
At this point, I'm setting up Amplify so that I don't have to manually pick the amplify-cli options when a new workspace is started. The magic happens inside a custom bash script, with the help of the environment variables specified at the start:
amplify-pull.bash
#!/bin/bash
set -e
IFS='|'
# Specify the headless amplify pull parameters
# https://docs.amplify.aws/cli/usage/headless/#amplify-pull-parameters
VUECONFIG="{\
\"SourceDir\":\"src\",\
\"DistributionDir\":\"dist\",\
\"BuildCommand\":\"npm run-script build\",\
\"StartCommand\":\"npm run-script serve\"\
}"
AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":false,\
\"profileName\":\"default\",\
\"accessKeyId\":\"$AWS_ACCESS_KEY_ID\",\
\"secretAccessKey\":\"$AWS_SECRET_ACCESS_KEY\",\
\"region\":\"$AWS_DEFAULT_REGION\"\
}"
AMPLIFY="{\
\"projectName\":\"$AWS_AMPLIFY_PROJECT_NAME\",\
\"appId\":\"$AWS_AMPLIFY_APP_ID\",\
\"envName\":\"dev\",\
\"defaultEditor\":\"code\"\
}"
FRONTEND="{\
\"frontend\":\"javascript\",\
\"framework\":\"vue\",\
\"config\":$VUECONFIG\
}"
PROVIDERS="{\
\"awscloudformation\":$AWSCLOUDFORMATIONCONFIG\
}"
# Create the AWS credentials file inside ~/.aws
# (overwrite rather than append, so restarts don't duplicate the [default] section)
mkdir -p ~/.aws \
&& echo -e "[default]\naws_access_key_id=$AWS_ACCESS_KEY_ID\naws_secret_access_key=$AWS_SECRET_ACCESS_KEY" \
> ~/.aws/credentials
# Create the AWS config file inside ~/.aws
echo -e "[default]\nregion=$AWS_DEFAULT_REGION" > ~/.aws/config
# Run amplify pull in headless mode;
# this also generates the aws-exports.js file inside src/
amplify pull \
--amplify $AMPLIFY \
--frontend $FRONTEND \
--providers $PROVIDERS \
--yes
I'm using Vue for my frontend as an example, so those values need to be changed depending on the project type. The rest of the params are pretty straightforward; more info about headless mode can be found in the Amplify docs linked at the top of the script. I'm also creating the AWS config files before the amplify pull command, as mentioned before.
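For instance, a React project would swap the framework and config blocks for something like this (a sketch following the same headless-parameter shape; the directory and command values are assumptions to adapt to your build setup):
# Hypothetical React variant of the frontend parameters
REACTCONFIG="{\
\"SourceDir\":\"src\",\
\"DistributionDir\":\"build\",\
\"BuildCommand\":\"npm run-script build\",\
\"StartCommand\":\"npm run-script start\"\
}"
FRONTEND="{\
\"frontend\":\"javascript\",\
\"framework\":\"react\",\
\"config\":$REACTCONFIG\
}"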
And this is what the final .gitpod.yml file looks like:
.gitpod.yml
image:
  file: .gitpod.Dockerfile
tasks:
  - name: Amplify pull dev
    command: |
      bash amplify-pull.bash
      gp sync-done amplify
  - name: Install npm packages, run dev
    init: yarn install
    command: |
      gp sync-await amplify
      yarn serve
ports:
  - port: 8080
    onOpen: open-preview
I'm also waiting for the Amplify pull to finish before running the dev server, using gp sync-await/sync-done.

Related

How to get hot reload to work with React, Dotnet and Docker

I'm trying to get hot reload to work with React, Docker and Dotnet. However, from what I found on the internet, only static rendering works with Docker, so I have to run
docker build -t <image_name> .
every time to see changes within React. I'm sure there's a way to do this; here's my Dockerfile.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN curl --silent --location https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install --yes nodejs
# Copy the source from your machine onto the container.
WORKDIR /src
COPY . .
RUN dotnet restore "./dotnet-test.csproj"
RUN dotnet publish "dotnet-test.csproj" -c Release -o /app/publish
FROM mcr.microsoft.com/dotnet/aspnet:5.0
# Expose port 80 to your local machine so you can access the app.
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "dotnet-test.dll"]
If anyone has a GitHub repo that does this, let me know (:
Future me here!! Here is a link to me doing this with HTTPS:
https://easyrun32.medium.com/net-5-react-docker-nginx-mysql-https-hotreload-50d87b32d492
Disclaimer: not sure if this solution will work for you, but give it a shot.
Summary:
Use only the SDK image (the runtime image won't work).
Use dotnet watch.
Update the .csproj file to include the files to be watched.
Long answer:
1. New Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN curl --silent --location https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install --yes nodejs
# Copy the source from your machine onto the container.
WORKDIR /src
COPY . .
RUN dotnet restore "./dotnet-test.csproj"
RUN dotnet publish "dotnet-test.csproj" -c Release -o /app/publish
# Expose port 80 to your local machine so you can access the app.
EXPOSE 80
EXPOSE 443
WORKDIR /app/publish
ENTRYPOINT ["dotnet", "watch", "dotnet-test.dll"]
2. Project File Changes
<ItemGroup>
<!-- extends watching group to include *.js files -->
<Watch Include="**\*.js" Exclude="node_modules\**\*;**\*.js.map;obj\**\*;bin\**\*" />
</ItemGroup>
This should work for the most part. Make quick updates to include/exclude more files.
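Note that dotnet watch can only react to file changes it sees inside the container, so for actual hot reload you would typically bind-mount your source into the directory the watcher observes. A hedged sketch (the image name and mount target are assumptions to adapt):
docker build -t dotnet-test .
# Bind-mount the local source over the watched directory so host edits
# are visible to dotnet watch inside the container
docker run -p 80:80 -v "$(pwd)":/src dotnet-test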

React app cannot access the Google Cloud Run environment variables

I am using GitLab CI as my CI/CD tool. I am deploying the Dockerized React app to Cloud Run, but I am not able to access the environment variables declared on Cloud Run. Thank you!
Dockerfile
# build environment
FROM node:8-alpine as react-build
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
COPY --from=react-build /app/build /usr/share/nginx/html
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
gitlab-ci.yml
default:
  image: google/cloud-sdk:alpine
  before_script:
    - gcloud config set project PROJECTID
    - gcloud auth activate-service-account --key-file $GCP_SERVICE_CREDS
stages:
  - staging
  - production
staging:
  stage: staging
  environment:
    name: staging
  only:
    variables:
      - $DEPLOY_ENV == "staging"
  script:
    - gcloud builds submit --tag gcr.io/PROJECTID/REPOSITORY
    - gcloud run deploy REPOSITORY --image gcr.io/PROJECTID/REPOSITORY --platform managed --region us-central1 --allow-unauthenticated
production:
  stage: production
  environment:
    name: production
  only:
    variables:
      - $DEPLOY_ENV == "production"
  script:
    - gcloud builds submit --tag gcr.io/PROJECTID/REPOSITORY
    - gcloud run deploy REPOSITORY --image gcr.io/PROJECTID/REPOSITORY --platform managed --region us-central1 --allow-unauthenticated --set-env-vars NODE_ENV=production
Credit to Guillaume Blaquiere; this answer is based on his post in this thread.
According to React documentation:
The environment variables are embedded during the build time. Since Create React App produces a static HTML/CSS/JS bundle, it can’t possibly read them at runtime. To read them at runtime, you would need to load HTML into memory on the server and replace placeholders in runtime.
Most likely what happens is that you expose static files to your users; the users fetch the files and their browser loads them, running code like this:
console.log("REDIRECT", process.env.REACT_APP_API_END_POINT)
This returns null because the user's browser executes the JavaScript and looks up the env variable in its own environment: the user's browser. You need code that runs on Cloud Run itself to use the env vars; if the code runs on the user's side (in their browser), the env vars will not be there.
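One common workaround (not part of the original answer) is to render the runtime values into a small JS file when the container starts and load that file from index.html before the bundle, so the browser reads values that were resolved on Cloud Run. A minimal sketch, assuming a hypothetical docker-entrypoint.sh in the nginx image:
#!/bin/sh
# Write selected Cloud Run env vars into a file served next to the bundle
cat <<EOF > /usr/share/nginx/html/env-config.js
window._env_ = { REACT_APP_API_END_POINT: "${REACT_APP_API_END_POINT}" };
EOF
exec nginx -g 'daemon off;'
The app then reads window._env_.REACT_APP_API_END_POINT instead of process.env.REACT_APP_API_END_POINT, and index.html loads /env-config.js with a plain script tag.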
As I understand it, you want to access environment variables declared on GCP, such as $GCP_SERVICE_CREDS, $PROJECTID and $REPOSITORY, in your GitLab pipeline.
To do this, go to your GitLab project's Settings, then click on CI/CD. Once there, click the Expand button next to Variables and add your various GCP variables with their values.

Docker: Why Do Variables Passed From .env Linger in the Container Even After They Are Removed From .env and Rebuilt

I'm trying to understand why env variables inside my Docker container keep appearing when I've clearly removed or commented them out of my .env file. I'm fairly new to Docker and don't know if this is expected behavior or an anomaly.
The way my system is set up, I spin up an instance of Azure's IoT Edge server locally (via deployment.template.json), which builds the Docker container and populates the environment variables using the associated .env file.
Now what's perplexing me is that if I were to completely stop the server (not pause), comment out/remove the variable from the .env file, restart the server, and inspect the container (docker container inspect), I still see the variable name and value. I've also used docker system prune -a --volumes after stopping the server to prune my system and volumes, then restarted the server only to see the variable still listed.
Just in case it helps, inside my deployment.template.json I'm passing my variables as MY_VAR=${MY_VAR}. Then in my .env file I have the variable as MY_VAR=abc123.
From my Dockerfile:
# -------------
# Build Sources
# -------------
FROM node:10-alpine as builder
# Install additional git and openssh dependencies and make sure GitLab domain is accepted by SSH
RUN apk add --no-cache openssh git curl \
&& mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan gitlab.com github.com >> /root/.ssh/known_hosts
WORKDIR /app
# Install app dependencies
RUN npm i -g typescript --no-cache --silent
COPY package*.json ./
RUN npm ci --only=production --silent
# Copy sources and build
COPY . .
RUN npm run build
# ----------------
# Production Image
# ----------------
FROM node:10-alpine
RUN apk add --no-cache curl
WORKDIR /app
COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/dist /app/dist
COPY . .
USER node
CMD ["node", "dist/index.js"]
You can run docker inspect on your container to see which environment variables are defined in its Docker create options.
You can also check the module's create options in the Azure Portal.
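For example, to list just the environment variables baked into a container's create options (the container name is a placeholder):
# Print one env var per line from the container's configuration
docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' my-edge-module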

Deploy Meteor to Google App Engine 2017

So I am trying to deploy a simple Meteor app to Google App Engine. I've tried following this tutorial: https://cloud.google.com/community/tutorials/run-meteor-on-google-app-engine
But it resulted in:
error: Can't find npm module 'meteor-deque'. Did you forget to call 'Npm.depends' in package.js within the 'meteor' package?
Googling turned up a few more tutorials, but judging by their comments they are outdated as well.
There is also this one: https://medium.com/google-cloud/meteor-google-a-devops-post-b8a17f889f84
However, it is about deploying to Compute Engine, so that's plan B.
So I wonder if any of you have successfully deployed Meteor to GAE recently, in 2017, with Meteor 1.4? Can you please share details?
Thanks to kiyohiko from the Meteor forums:
https://forums.meteor.com/t/deploy-meteor-to-google-app-engine-2017/36171/4
Here are the configs that worked for me
app.yaml
env: flex
runtime: custom
threadsafe: true
automatic_scaling:
  max_num_instances: 1
env_variables:
  ROOT_URL: https://<gae-app-name>.appspot.com
  MONGO_URL: mongodb://<mongodb-username>:<mongodb-password>@<gce-ip>:27017/<mongodb-name>
  DISABLE_WEBSOCKETS: "1"
skip_files:
  - ^(.*/)?\.dockerignore$
  - ^(.*/)?npm-debug\.log$
  - ^(.*/)?yarn-error\.log$
  - ^(.*/)?\.git$
  - ^(.*/)?\.hg$
  - ^(.*/)?\.svn$
Dockerfile
FROM launcher.gcr.io/google/nodejs
RUN install_node v4.6.2
COPY . /app/
RUN (cd programs/server && npm install --unsafe-perm)
CMD node main.js
Steps to deploy
$> meteor build ../ --directory --architecture os.linux.x86_64 --server-only
$> cp app.yaml ../bundle/ && cp Dockerfile ../bundle/
$> cd ../bundle && gcloud app deploy --verbosity=info -q
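Once the deploy finishes, something like the following should confirm the app is up (standard gcloud commands, not part of the original answer):
$> gcloud app browse
$> gcloud app logs tail -s default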

How can I create a "tmp" directory with Elastic Beanstalk?

I'm using Node.js and need to save files to a tmp directory within my app. The problem is that Elastic Beanstalk does not set the app directory to be writable by the app, so when I try to create the temp directory I get this error:
fs.js:653
return binding.mkdir(pathModule._makeLong(path),
^
Error: EACCES, permission denied '/var/app/tmp/'
at Object.fs.mkdirSync (fs.js:653:18)
at Promise.<anonymous> (/var/app/current/routes/auth.js:116:18)
at Promise.<anonymous> (/var/app/current/node_modules/mongoose/node_modules/mpromise/lib/promise.js:177:8)
at Promise.emit (events.js:95:17)
at Promise.emit (/var/app/current/node_modules/mongoose/node_modules/mpromise/lib/promise.js:84:38)
at Promise.fulfill (/var/app/current/node_modules/mongoose/node_modules/mpromise/lib/promise.js:97:20)
at /var/app/current/node_modules/mongoose/lib/query.js:1394:13
at model.Document.init (/var/app/current/node_modules/mongoose/lib/document.js:250:11)
at completeOne (/var/app/current/node_modules/mongoose/lib/query.js:1392:10)
at Object.cb (/var/app/current/node_modules/mongoose/lib/query.js:1151:11)
I've tried several things, such as an app-setup.sh script at .ebextensions/scripts/app-setup.sh that looks like this:
#!/bin/bash
# Check if this is the very first time that this script is running
if [ ! -f /root/.not-a-new-instance.txt ]; then
newEC2Instance=true
fi
# Get the directory of 'this' script
dirCurScript=$(dirname "${BASH_SOURCE[0]}")
# Fix the line endings of all files
find $dirCurScript/../../ -type f | xargs dos2unix -q -k
# Get the app configuration environment variables
source $dirCurScript/../../copy-to-slash/root/.elastic-beanstalk-app
export ELASTICBEANSTALK_APP_DIR="/$ELASTICBEANSTALK_APP_NAME"
appName="$ELASTICBEANSTALK_APP_NAME"
dirApp="$ELASTICBEANSTALK_APP_DIR"
dirAppExt="$ELASTICBEANSTALK_APP_DIR/.ebextensions"
dirAppTmp="$ELASTICBEANSTALK_APP_DIR/tmp"
dirAppData="$dirAppExt/data"
dirAppScript="$dirAppExt/scripts"
# Create tmp directory
mkdir -p $dirApp/tmp
# Set permissions
chmod 777 $dirApp
chmod 777 $dirApp/tmp
# Ensuring all the required environment settings after all the above setup
if [ -f ~/.bash_profile ]; then
source ~/.bash_profile
fi
# If new instance, now it is not new anymore
if [ "$newEC2Instance" ]; then
echo -n "" > /root/.not-a-new-instance.txt
fi
# Print the finish time of this script
echo $(date)
# Always successful exit so that beanstalk does not stop creating the environment
exit 0
As well as creating a file called 02_env.config within .ebextensions that looks like this:
# .ebextensions/02_env.config
container_commands:
  01mkdir:
    command: "mkdir /var/app/tmp"
  02chmod:
    command: "chmod 777 /var/app/tmp"
Neither seem to work. How can I create a tmp directory within my app that is writable?
I recently experienced the same issue with a .NET application where the application was failing because it couldn't write to a directory, even after I had set the permissions.
What I found was that after the whole .ebextensions process was completed, the final step was a web container permissions update which ended up overwriting my ebextensions permissions change.
To solve it I moved the directory outside of the web container and updated the application to write there instead.
In your case I would suggest /tmp
With the newer (current?) Amazon Linux 2 Elastic Beanstalk installs, setting up a post-deploy hook is the way to make this happen. The tmp folder needs to be created and made writeable AFTER Elastic Beanstalk has moved the newly deployed app bundle to /var/app. It's just a shell script placed in the following location relative to the root of your app:
.platform/hooks/postdeploy/10_create_tmp_and_make_writeable.sh
#!/bin/bash
# -p so the script doesn't fail if the directory already exists on redeploys
mkdir -p /var/app/current/tmp
chmod 777 /var/app/current/tmp
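One caveat worth adding: Elastic Beanstalk only runs platform hook files that are executable, so make sure the script carries the execute bit before committing it:
chmod +x .platform/hooks/postdeploy/10_create_tmp_and_make_writeable.sh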
