How to terminate CircleCI after running tasks? - javascript

My test passes as expected, but the process hangs in the running status. Is this a side effect of the background process? How can I solve this problem?
(I need to run Node.js as a background process to test my app.)
My config file:
version: 2.1
orbs:
  node: circleci/node@1.1.6
jobs:
  build-and-run:
    executor:
      name: node/default
    steps:
      - checkout
      - node/with-cache:
          steps:
            - run: npm install
            - run:
                name: initial run
                command: npm run start-server-for-test && sleep 5
                background: true
            - run: npm run ci-test
workflows:
  build-and-run:
    jobs:
      - build-and-run

This doesn't look like a CircleCI error. Run the same commands on your machine and check whether they work. You can also rerun the job with SSH enabled in CircleCI, log in to the build machine, and execute the commands there; that is a good way to troubleshoot.
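For what it's worth, a common pattern for this kind of hang is to make the test step the last foreground step and to explicitly stop the server afterwards. A minimal sketch, assuming your npm script names stay the same (the final pkill cleanup step is an assumption, not something CircleCI requires):

      - node/with-cache:
          steps:
            - run: npm install
            - run:
                name: start server for tests
                command: npm run start-server-for-test
                background: true
            # Sleep in the foreground so the wait actually delays the tests.
            - run: sleep 5
            - run: npm run ci-test
            - run:
                name: stop leftover node processes
                command: pkill node || true
                when: always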

Related

Gitlab CI - server gets 'killed' before Cypress tests can run

I am running a CI pipeline in GitLab which runs some Cypress integration tests as part of the testing stage. The tests work absolutely fine on my machine locally, but when I try to run them in GitLab CI it appears that the GitLab runner is killing my local server before I can run my Cypress tests against it. Here is my GitLab config:
variables:
  API_BASE_URL: https://t.local.um.io/api
  CYPRESS_API_BASE_URL: https://t.local.um.io/api
  npm_config_cache: '$CI_PROJECT_DIR/.npm'
  CYPRESS_CACHE_FOLDER: '$CI_PROJECT_DIR/cache/Cypress'
cache:
  paths:
    - node_modules/
    - cache/Cypress
stages:
  - install
  - build
  - tests
install:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: install
  cache:
    key: 'e2eDeps'
    paths:
      - node_modules/
      - cache/Cypress/
  script:
    - npm ci
build:
  stage: build
  dependencies:
    - install
  script:
    - npm run build
  artifacts:
    expire_in: 1 days
    when: on_success
tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  script:
    - npm ci
    - npm run test:ci
And here are the relevant package.json scripts that the above config runs in CI:
"scripts": {
"build": "webpack --config webpack.prod.js",
"dev": "webpack serve --config webpack.dev.js",
"start:ci": "export NODE_OPTIONS=--max_old_space_size=4096 serve dist --no-clipboard --listen ${PORT:-3000}",
"test": "cross-env NODE_ENV=test && npm run test:cypress && npm run test:jest",
"test:ci": "cross-env NODE_ENV=test && start-server-and-test start:ci http-get://localhost:3000 test",
"test:cypress": "cypress run --headless --browser chrome",
"test:jest": "jest",
},
It is the final stage, tests, that is currently failing. In the console output from the GitLab runner you can see where it says 'killed', then 'err no 137': it appears that the runner just stops the start:ci process, which runs my local server so the integration tests can run against it.
Finally here is a small snippet of my test, I use the cy.visit command which never responds as the server is killed:
describe('Code entry page - API responses are managed correctly', () => {
  beforeEach(() => {
    cy.visit(routes.APP.HOME); // this just times out
  });
  ...
EDIT
I have tried running the test:ci script inside the exact same Docker container that CI uses (cypress/browsers:node14.15.0-chrome86-ff82) locally (not in GitLab CI), and it works with no problem. Surely the issue must lie with GitLab?
Error 137 typically means that your Docker container was killed because it did not have sufficient resources. As I mentioned in the comments, your current container is running with 4 GB of memory. Since you define no tag keys in your CI/CD config, you are likely running on GitLab's shared Linux runner cloud, which uses an n1-standard-1 instance on GCP, limited to 3.75 GB of RAM. Essentially, as soon as your test container starts, it instantly consumes all the memory available on the runner and your container is killed.
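One quick way to confirm this (a hedged suggestion on top of the original answer, not something GitLab requires) is to print the memory actually available to the job before the tests start:

tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  script:
    - free -m   # shows total/used/free memory on the runner
    - npm ci
    - npm run test:ci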
To get around the memory limitation, you must run your own gitlab-runner. There is no way to get more memory on the shared runner cloud. You can test this fairly easily by spinning up a gitlab-runner on your local machine (see the instructions here for installing a GitLab runner). Once you've installed your runner, register it with the high-memory tag, then update your CI/CD config to use that tag with the following syntax on that last job:
tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  tags:
    - high-memory
  script:
    - npm ci
    - npm run test:ci
Your jobs are allowed to use as much memory as your runner has allocated. If your machine has 8 GB of memory, the jobs will be allowed to use up to 8 GB of memory.
If your machine doesn't have enough memory by itself, you could always temporarily spin up a cloud instance with sufficient memory. You could try a DigitalOcean droplet with 16 GB of memory at about $0.11/hour, for example. That would let you run an instance for a couple of hours to test the solution before you decide what's viable long-term.
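For concreteness, here is a sketch of registering a self-hosted runner with that tag; the URL, token, and description are placeholders you would take from your project's Settings > CI/CD > Runners page, not values from the original answer:

# Register a self-hosted runner and attach the high-memory tag.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "cypress/browsers:node14.15.0-chrome86-ff82" \
  --tag-list "high-memory" \
  --description "high-memory e2e runner"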

Why would a full suite of Jasmine tests fail when running smaller suites individually works?

I have a protractor configuration file with the following suite glob patterns:
suites: {
  all: ['**/*.spec.js'],
  ui: ['ui/**/*.spec.js'],
  api: ['api/**/*.spec.js']
},
If I, on a Mac, run npm run protractor (with the default suite of all), the tests run fine.
If another person on the team, on a Mac, runs npm run protractor, the tests run fine.
If the other person on the team, on an Ubuntu VM on a Windows host:
- runs npm run protractor, the tests die. Specifically, the first line of onPrepare throws an error.
- runs npm run protractor --suite=ui, the tests run fine.
- runs npm run protractor --suite=api, the tests run fine.
- runs npm run protractor --suite=ui,api, onPrepare errors again.
At this point, I'm wondering if the issue is the VM's Node.js resources when Jasmine initially traverses the spec files. There are 15k+ it blocks in the full suite. The fact that onPrepare works fine outside of the all suite makes me think the actual thrown error was a red herring (it was a database call with the mysql package that threw a connection timeout).
My guess is that this is related to how different operating systems read your ** glob path. I'd recommend using the path module to check this (note that path is built into Node.js, so there is nothing extra to install):

let path = require('path');
let specs = path.resolve('**/*.spec.js'); // should be /your/working/directory/**/*.spec.js
console.log(specs); // to confirm your assumption

exports.config = {
  suites: {
    all: [specs],
    ui: ['ui/**/*.spec.js'],
    api: ['api/**/*.spec.js']
  },
};
But there might be something else that causes the problem.
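Another way to check the same hypothesis is to count how many files the pattern actually matches on each machine. A sketch, assuming you add the glob package as a dev dependency (it is not part of the original setup):

// Diagnostic: count how many spec files the glob pattern matches.
const glob = require('glob');
const files = glob.sync('**/*.spec.js', { ignore: 'node_modules/**' });
console.log(`matched ${files.length} spec files`);

If the count differs between the Mac and the Ubuntu VM, glob resolution is the problem; if it matches, loading 15k+ specs at once is the more likely culprit.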

Run selenium jar on travis CI from protractor node_modules folder

I am setting up Travis in order to execute e2e tests through protractor.
On my protractor.config.js I have the following:
seleniumServerJar: './node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.5.0.jar'
So it refers to the Selenium jar included by default inside the protractor package.
Then I use the gulp-protractor plugin to execute the tests, pointing at the right protractor.config.js.
Locally everything works like a charm.
But when trying to execute this on Travis, I am getting the following error:
[18:59:15] I/launcher - Running 1 instances of WebDriver
[18:59:15] E/local - Error code: 135
[18:59:15] E/local - Error message: No selenium server jar found at /home/travis/build/quirimmo/Qprotractor/node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.5.0.jar. Run 'webdriver-manager update' to download binaries.
Any idea why it cannot retrieve the jar from the node_modules subfolder?
Here is my .travis.yml configuration, which is actually pretty simple:
sudo: required
dist: trusty
addons:
  chrome: stable
language: node_js
node_js:
  - '6.11'
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - sleep 3
install:
  - npm install
script:
  - echo "Triggered!"
  - gulp protractor-test
Thanks a lot, any help would be really appreciated!
P.S. I have already done this on other projects with Travis by manually running webdriver-manager and then pointing to the Selenium address from protractor.config.js, but I don't want that solution. I want to go through the seleniumServerJar property, because that way everything runs on its own without needing to start webdriver-manager manually.
Fixed in your repo. You should change your before_script to the following:
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - sleep 3
  - npm install -g webdriver-manager
  - webdriver-manager update
  - webdriver-manager start &
  - sleep 3
And then in your protractor.config.js add the seleniumAddress:
exports.config = {
  seleniumAddress: 'http://127.0.0.1:4444/wd/hub/',
  specs: [
    './test/base-protractor.spec.js',
    './test/element-finder.spec.js',
    './test/element-array-finder.spec.js'
  ],
  onPrepare: function() {
    require('./index');
  }
};
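As a side note, the fixed sleep 3 after starting the server can be flaky. One option (a sketch, not part of the original fix) is to poll the hub's status endpoint, which the Selenium standalone server exposes, until it actually answers:

before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - sleep 3
  - npm install -g webdriver-manager
  - webdriver-manager update
  - webdriver-manager start &
  # Wait until the hub responds instead of sleeping a fixed time.
  - until curl -sf http://127.0.0.1:4444/wd/hub/status; do sleep 1; done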
Posting the answer here in case it is useful for someone else in the future.
As explained very well in this link:
https://github.com/angular/protractor/issues/3225
you need to manually trigger the installation of the Selenium server.
So in the install block of your Travis file, you can simply add this:
install:
  - npm install
  - node_modules/protractor/bin/webdriver-manager update
And then inside protractor.config.js, grab the current version of the installed Selenium server:
const SELENIUM_FOLDER = './node_modules/protractor/node_modules/webdriver-manager/selenium';
const fs = require('fs');
let res, seleniumVersion;
fs.readdirSync(SELENIUM_FOLDER).forEach(file => {
  // Escape the dots and allow multi-digit version parts.
  res = file.match(/selenium-server-standalone-(\d+\.\d+\.\d+)\.jar/i);
  if (res) {
    seleniumVersion = res[1];
  }
});
if (!seleniumVersion) {
  throw new Error('No selenium server jar found inside your protractor node_modules subfolder');
}
And then reference it in this way:
seleniumServerJar: `./node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-${seleniumVersion}.jar`
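In context, the relevant part of protractor.config.js would look roughly like this (a sketch; it assumes the version-detection snippet above runs at the top of the same file):

exports.config = {
  // Built from the seleniumVersion detected above.
  seleniumServerJar: `./node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-${seleniumVersion}.jar`,
  // ...the rest of your existing configuration
};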
I hope this helps someone else avoid losing a few hours on this issue!

kill node server after tests have run gitlab

I am trying to spin up a node server on my GitLab server. I have set up my .gitlab-ci.yml file and it all seems to work. However, I want to kill the node server after the tests have finished running.
The relevant section of my .gitlab-ci.yml file looks like this:
unit_tests:
  stage: test
  variables:
    GIT_STRATEGY: clone
  script:
    - npm install
    - npm run websocket-server
    - npm test
  after_script:
    # what should go here to kill the node server after the tests have run?
You can use pkill for that.
A simple usage could look like this:

after_script:
  - pkill node

Read pkill(1)'s manual for more info:
https://linux.die.net/man/1/pkill
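If other node processes may be running in the same job, a more targeted match avoids killing them too. A sketch; the "websocket-server" pattern is assumed from the npm script name above, so adjust it to whatever your server's command line actually contains:

after_script:
  # -f matches the full command line, so only the websocket server dies;
  # || true keeps after_script from failing when nothing matched.
  - pkill -f websocket-server || true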

Protractor + xvfb + selenium test Hangs on Jenkins

I'm having this issue sometimes on my automated tests running remotely on Jenkins, about 30% of the time.
The error:
Tests hang after this line:
"I/launcher - Running 1 instances of WebDriver"
It gets stuck at this step indefinitely, and I have to stop the execution manually or wait for the timeout to regain control.
I tried several known solutions that I found on this site, but nothing changed in my results.
My config:
- Ubuntu 14.04.5 on a headless cloud storage system
- xvfb
- Jenkins 1.656
- selenium-server-standalone-2.53.1
- chromedriver_2.24linux64
- geckodriver-v0.9.0
- nodejs v0.10.37
- node v6.3.1
- npm 3.10.3
- protractor v4.0.9
- grunt-cli v1.2.0
- grunt v0.4.5
- jasmine v2.5.2
- jasmine-core v2.5.2
Jenkins Job Config:
Jenkins downloads my test project from GitHub and runs the following command in the project's shell:
node node_modules/grunt-cli/bin/grunt testName:parameters
(I don't want to run grunt globally; that's why I execute it this way.)
Is there anything I can try in order to solve this hang issue?
Thanks in advance!
