Gitlab CI - server gets 'killed' before Cypress tests can run - javascript

I am running a CI pipeline in GitLab which runs some Cypress integration tests as part of the testing stage. The tests work absolutely fine locally on my machine, but when I try to run them in GitLab CI it appears that the GitLab runner is killing my local server before the Cypress tests can run against it. Here is my GitLab config:
variables:
  API_BASE_URL: https://t.local.um.io/api
  CYPRESS_API_BASE_URL: https://t.local.um.io/api
  npm_config_cache: '$CI_PROJECT_DIR/.npm'
  CYPRESS_CACHE_FOLDER: '$CI_PROJECT_DIR/cache/Cypress'

cache:
  paths:
    - node_modules/
    - cache/Cypress

stages:
  - install
  - build
  - tests

install:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: install
  cache:
    key: 'e2eDeps'
    paths:
      - node_modules/
      - cache/Cypress/
  script:
    - npm ci

build:
  stage: build
  dependencies:
    - install
  script:
    - npm run build
  artifacts:
    expire_in: 1 days
    when: on_success

tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  script:
    - npm ci
    - npm run test:ci
And here are the relevant package.json scripts that the above config runs in CI:
"scripts": {
"build": "webpack --config webpack.prod.js",
"dev": "webpack serve --config webpack.dev.js",
"start:ci": "export NODE_OPTIONS=--max_old_space_size=4096 serve dist --no-clipboard --listen ${PORT:-3000}",
"test": "cross-env NODE_ENV=test && npm run test:cypress && npm run test:jest",
"test:ci": "cross-env NODE_ENV=test && start-server-and-test start:ci http-get://localhost:3000 test",
"test:cypress": "cypress run --headless --browser chrome",
"test:jest": "jest",
},
It is the final tests stage that is currently failing. Here is the console output from the GitLab runner; you can see where it says 'Killed' followed by 'errno 137'. It appears the runner just stops the start:ci process, which is what runs my local server so the integration tests can run against it.
Finally, here is a small snippet of my test. I use the cy.visit command, which never responds because the server has been killed:
describe('Code entry page - API responses are managed correctly', () => {
  beforeEach(() => {
    cy.visit(routes.APP.HOME); // this just times out
  });
  ...
EDIT
I have tried running the test:ci script locally (not in GitLab CI) inside the exact same Docker container that the job uses (cypress/browsers:node14.15.0-chrome86-ff82) and it works with no problem. Surely the issue must lie with GitLab?

Error 137 typically means that your Docker container was killed because it did not have sufficient resources. As I mentioned in the comments, your current container is running with 4 GB of memory. Since you are not defining any tag keys in your CI/CD config, you are most likely running on GitLab's shared Linux runner cloud, which uses an n1-standard-1 instance on GCP and is limited to 3.75 GB of RAM. Essentially, as soon as your test container starts, it consumes all the memory available on the runner and your container is killed.
To get around the memory limitation, you must run your own gitlab-runner. There is no way to get more memory on the shared runner cloud. You can test this fairly easily by spinning up a gitlab-runner on your local machine (see the instructions here for installing a GitLab runner). Once you've installed your runner, register it with a high-memory tag, then update the last job in your CI/CD config to use that tag, like so:
tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  tags:
    - high-memory
  script:
    - npm ci
    - npm run test:ci
Your jobs are allowed to use as much memory as your runner has available. If your machine has 8 GB of memory, the jobs will be allowed to use up to 8 GB.
If your machine doesn't have enough memory on its own, you could temporarily spin up a cloud instance with sufficient memory. For example, you could try a DigitalOcean droplet with 16 GB of memory for around $0.11 per hour. That would let you run an instance for a couple of hours to test the solution before you decide what's viable long-term.
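For reference, registering a self-hosted runner with that tag might look roughly like the sketch below; the URL and registration token are placeholders you would take from your project's runner settings, and the exact flags can vary by gitlab-runner version:

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<YOUR_PROJECT_RUNNER_TOKEN>" \
  --executor "docker" \
  --docker-image "cypress/browsers:node14.15.0-chrome86-ff82" \
  --description "high-memory runner" \
  --tag-list "high-memory"

Once the runner is registered and running, any job carrying the high-memory tag will be picked up by it instead of the shared runners.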

Related

How do I automatically run an NPM script after webpack dev server has been started?

I'm building a Vue project that runs on Electron. Since Vue uses webpack dev server to run the app in development mode, I need to launch Electron with the dev server URL right after compilation completes and the dev server has started.
I know I can run Electron manually after this, but I need the task to be automated. My only goal is to get Vue devtools running in Electron. Vue devtools won't work even if I set writeToDisk: true and open the built index.html in Electron; it only seems to work over the dev server (the issue seems to be the file:// protocol). I found out that it's possible to open a browser after the server has started, but that can't run any custom scripts.
So what I want is to automatically run cross-env NODE_ENV=development electron dist/main.js after I run the Vue serve task and the dev server has started. (I also know this feature is already implemented in vue-cli-plugin-electron-builder, but I'm avoiding those plugins for several reasons.)
You can prefix npm scripts with pre and post, and npm will figure out what you want. Because serve probably runs in the foreground, a postserve script would normally never run, but you can get around that by backgrounding the serve command with &. This won't wait for any build steps to complete, but if you need it to wait, you could add a short sleep as well.
"serve": "my serve command &",
"postserve": "cross-env NODE_ENV=development electron dist/main.js"
// or with a sleep
"postserve": "sleep 5 && cross-env NODE_ENV=development electron dist/main.js"
This is how I ended up doing it and managed to create a build tool called Vuelectro.
I had to do it programmatically using the @vue/cli-service module, manually starting the serve process so I could launch Electron once the webpack dev server had started.
const path = require('path');
const { spawn } = require('child_process');
const vueService = require('@vue/cli-service');

const service = new vueService(process.cwd());

function serveDev() {
    service.init('development');
    service.run('serve').then(({ server, url }) => {
        // dev server is up; launch the locally installed Electron binary with the app entry point
        let electron = spawn(
            path.join(process.cwd(), 'node_modules', '.bin', process.platform === 'win32' ? 'electron.cmd' : 'electron'),
            ['app/electron-main.js'],
            { stdio: 'inherit' }
        );

        electron.on('exit', function (code) {
            process.exit(0);
        });
    }).catch((err) => {
        console.error(err.stack);
    });
}
Complete source code can be found here.
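A minimal way to wire this up, assuming the snippet lives in a file such as vuelectro.js (a hypothetical name), is to call serveDev() at the bottom of that file and point an npm script at it:

// at the end of vuelectro.js (hypothetical file name)
serveDev();

"scripts": {
  "serve:electron": "node vuelectro.js"
}

Running npm run serve:electron then starts the webpack dev server and launches Electron once it is ready.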

npm stuck at "Starting the development server..."

I know this topic has come up before, but nothing I tried has fixed it. The problem is the following: I have a React app on an AWS EC2 server that I want to start automatically whenever the instance boots. For this purpose, the following script is executed when the server starts:
#!/usr/bin/python3
import time
import shlex, subprocess
args = shlex.split('sudo su ubuntu -c "/usr/bin/npm start --prefix /home/ubuntu/my-app > /home/ubuntu/output.txt 2>&1"')
subprocess.Popen(args)
When I run the script manually, everything works just fine. But whenever it is run during the server start, I get the following log:
> my-app@0.1.0 start /home/ubuntu/my-app
> react-scripts start

ℹ 「wds」: Project is running at http://172.31.14.57/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /home/ubuntu/my-app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
That's all; nothing else happens. Does anybody have an idea how to fix this? I thought it had something to do with the fact that it is started as root, so I tried to work around that by using sudo su ubuntu -c, but it doesn't help either.
My guess is that it's the same problem as this issue. When you call npm start, by default it runs the start script in package.json, which points to:
"start": "react-scripts start",
There was a change in react-scripts that checks for a non-interactive shell when the CI variable is not set (here).
When you start it using sudo su ubuntu -c, it runs in a non-interactive shell.
What could work is setting the CI variable to true, like this:
export CI=true
sudo su ubuntu -c "/usr/bin/npm start ....."
You can also create a new script inside package.json:
"scripts": {
  "start": "react-scripts start",
  "ec2-dev": "CI=true;export CI; react-scripts start",
  .....
}
and run:
/usr/bin/npm run ec2-dev
instead of npm start
Starting a development server is only useful if you mount the src folder to a local directory via NFS or another file-sharing mechanism, so you can use the live-reload capability of react-scripts to restart the server instantly on changes during development.
If it's for any other purpose, you need to build the app using:
npm run build
then provision the build directory artifacts and serve them with a web server.
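As a minimal sketch of that last step, assuming a standard Create React App build/ output directory and the widely used serve package:

npm run build
npx serve -s build -l 3000

Any other static file server (nginx, S3 + CloudFront, etc.) works equally well; the point is to serve the production build rather than run the development server on the instance.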

How to terminate CircleCI after running tasks?

My test passes as expected, but the process hangs in the running status. Is this a side effect of the background process? How can I solve this problem?
(I need to run Node.js as a background process in order to test my app.)
My config file:
version: 2.1
orbs:
  node: circleci/node@1.1.6
jobs:
  build-and-run:
    executor:
      name: node/default
    steps:
      - checkout
      - node/with-cache:
          steps:
            - run: npm install
            - run:
                name: initial run
                command: npm run start-server-for-test && sleep 5
                background: true
            - run: npm run ci-test
workflows:
  build-and-run:
    jobs:
      - build-and-run
This doesn't look like a CircleCI error. Run the same commands on your machine and check whether they work. You can also re-run the job with SSH enabled in CircleCI, log in to the machine, and execute the commands there; that way you can troubleshoot interactively.
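If the background step itself turns out to be the culprit, one variation worth trying (a sketch only, keeping the same npm scripts) is to split the server start and the wait into separate steps so that only the test command runs in the foreground:

            - run:
                name: start server in background
                command: npm run start-server-for-test
                background: true
            - run: sleep 5
            - run: npm run ci-test

These would replace the last three steps under node/with-cache in the config above.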

How to reduce memory usage of a docusaurus build (v2)?

I'm using Docusaurus v2.0.0-alpha.39 to generate documentation, and Bitbucket Pipelines to build everything and push it to AWS.
The problem is I'm hitting Container 'Build' exceeded memory limit when executing the yarn build command.
In bitbucket-pipelines.yml, I've already bumped the memory up to the maximum I can (7680 MB for the build and 512 MB for Docker):
deploy: &deploy
  size: 2x
  name: Deploy
  caches:
    - node
  script:
    - yarn
    - yarn build
I've also tried to limit Node's memory usage by setting --max-old-space-size=7168 in various ways:
{
  "scripts": {
    "build": "cross-env NODE_OPTIONS=--max-old-space-size=7168 node node_modules/@docusaurus/core/bin/docusaurus build"
  }
}
{
  "scripts": {
    "build": "NODE_OPTIONS=--max-old-space-size=7168 node node_modules/@docusaurus/core/bin/docusaurus build"
  }
}
{
  "scripts": {
    "build": "node --max-old-space-size=7168 node_modules/@docusaurus/core/bin/docusaurus build"
  }
}
But no matter what I set as the maximum, my Node process climbs and uses up to 14 GB of memory! And if I set a "small" --max-old-space-size, I get:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
I admit my docs are pretty huge (6.3 MB and 1,493 items per version), but I can't even version them using Docusaurus; each version adds 4-5 GB of memory usage...
Do you have any ideas for potential solutions to this?

Run many nodejs servers in one command

I have a few Node.js servers; they are small servers, and each one is stored in a separate folder, with all the folders inside one root folder. Every time I want to run the servers I have to go into each one and type
nodemon *name*
This is becoming tiresome, especially as the number of servers grows. Is there any tool or script I could use to run all the servers with one command?
Basically, how can I run all the servers with one command or script?
With npm, write this in package.json:
{
  "name": "project-name",
  "version": "1.0.0",
  "scripts": {
    "start": "nodemon server1.js | nodemon server2.js | nodemon server3.js"
  }
}
Then you only need to execute npm start.
Also see this post
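A hedged alternative to piping the processes together, assuming the concurrently package is installed as a dev dependency, is to let it manage the three processes explicitly:

"start": "concurrently \"nodemon server1.js\" \"nodemon server2.js\" \"nodemon server3.js\""

concurrently also prefixes each server's output and stops all of them together when you interrupt the command.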
PM2 is a great answer for this.
pm2 start app.js -i 4 (or -i max to use all available cores)
You also get great benefits such as automatic restarts, log aggregation and load balancing.
Use pm2.
If you use Linux, you can put the start commands in a small shell script:
#!/bin/bash
pm2 start <<Path to User Server>>
pm2 start <<Path to User Server>>
pm2 logs
You can then save and manage the process list:
pm2 save
pm2 list
pm2 stop <app name or id>
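For several servers living in separate folders, a pm2 ecosystem file is a common way to start them all with one command. The file name below follows pm2's convention, but the app names and paths are only illustrative:

// ecosystem.config.js
module.exports = {
  apps: [
    { name: 'server-one', script: './server-one/index.js', watch: true },
    { name: 'server-two', script: './server-two/index.js', watch: true }
  ]
};

A single pm2 start ecosystem.config.js (or an npm script wrapping it) then brings every server up at once.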
