This might be a case of 'you're using the wrong tool for the job', but I'm going to ask my question anyway, because this is what I have to work with for now.
So, here goes:
I have to make relatively small applications that run periodically as functions in an Azure environment. These applications perform tasks like fetching data from an API and storing that data on an SFTP server. When I create these applications I use a TDD approach with Jest.
I'd like to react to any problems proactively and solve them before the function runs are scheduled. If I ran Jest locally I would notice any of these problems, but I'd like to automate this process. Therefore I'd like to know if it's possible to run these tests from an Azure function and have Azure Warnings notify me when one of these runs fails.
What have I tried?
Created a new function folder "Jest_Function"
Added an always-failing test in a separate file.
/main_functions_folder
/jest_function
- index.js
- function.json
- failingTest.test.js
Added the following code to index.js:
const { exec } = require('child_process');

function checkTests() {
  return new Promise((resolve, reject) => {
    exec('npm run test failingTest.test.js', (error) => {
      if (error) reject(error);
      else resolve();
    });
  });
}

module.exports = async function (context) {
  try {
    await checkTests();
  } catch (err) {
    context.log('tests failed!');
    throw err;
  }
};
Transforming the function and running it in the terminal results in the expected behaviour:
const { exec } = require('child_process');

function checkTests() {
  return new Promise((resolve, reject) => {
    exec('npm run test failingTest.test.js', (error) => {
      if (error) reject(error);
      else resolve();
    });
  });
}

async function myTest() {
  try {
    await checkTests();
  } catch (err) {
    console.log('tests failed!');
    throw err;
  }
}

myTest();
tests failed!
node:child_process:399
ex = new Error('Command failed: ' + cmd + '\n' + stderr);
^
Error: Command failed: npm run test failingTest.test.js
FAIL jest_function/failingTest.test.js
✕ short test (3 ms)
● short test
expect(received).toBe(expected) // Object.is equality
Expected: 1
Received: 0
1 | test('short test', () => {
> 2 | expect(0).toBe(1);
| ^
3 | });
4 |
at Object.<anonymous> (jest_function/failingTest.test.js:2:13)
Test Suites: 1 failed, 1 total
Tests: 1 failed, 1 total
Snapshots: 0 total
Time: 0.227 s, estimated 1 s
Ran all test suites matching /failingTest.test.js/i.
at ChildProcess.exithandler (node:child_process:399:12)
at ChildProcess.emit (node:events:520:28)
at maybeClose (node:internal/child_process:1092:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5) {
killed: false,
code: 1,
signal: null,
cmd: 'npm run test failingTest.test.js'
}
Azure
I deployed the function in Azure and manually ran it. This resulted in a failing function, as I expected, but for the wrong reason. It displayed the following error message:
Result: Failure Exception: Error: Command failed: npm run test failingTest.test.js sh: 1: jest: Permission denied
I'm not really sure where to go from here; any help or advice will be appreciated!
Not sure if you can use Jest directly from within Functions, but I know you can run Puppeteer headless in Azure Functions:
https://anthonychu.ca/post/azure-functions-headless-chromium-puppeteer-playwright/
and there's also the jest-puppeteer package, but I'm not sure if there is a specific limitation on Jest in Functions if all of the deps are installed as runtime dependencies.
I was able to make this work using npx instead of npm:
const { exec } = require('child_process');

function checkTests() {
  return new Promise((resolve, reject) => {
    exec('npx jest jest_function/failingTest.test.js', (error) => {
      if (error) reject(error);
      else resolve();
    });
  });
}

module.exports = async function (context) {
  try {
    await checkTests();
  } catch (err) {
    context.log('tests failed!');
    throw err;
  }
};
Looking at the logs I'm not really sure what '330' exactly refers to, but I assume it means npx is installing Jest and its dependencies?
2022-04-19T09:54:06Z [Information] Error: Command failed: npx jest
npx: installed 330 in 32.481s
Anyway, I'm glad I got this working now :).
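If the per-invocation install ever becomes too slow, one alternative I considered (a sketch, not something I've verified on Functions): since the earlier 'jest: Permission denied' came from executing the node_modules/.bin shim, Jest can also be invoked through its programmatic API, which avoids spawning a shell binary altogether. This assumes jest is deployed as a regular runtime dependency:

const { runCLI } = require('jest');

async function checkTests() {
  // Run Jest in-process; the positional argument is a test path pattern.
  const { results } = await runCLI(
    { _: ['failingTest.test.js'], $0: 'jest' },
    [process.cwd()]
  );
  if (!results.success) {
    // Throwing here fails the Azure Function run, which triggers the alert.
    throw new Error('Jest reported failing tests');
  }
}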
Related
I needed to execute shell commands and keep the process alive to preserve the changes made to the environment variables, so I made a function that looks like this:
import { spawn } from 'child_process';

function exec(command) {
  const bash = spawn('/bin/bash', {
    detached: true
  });
  bash.stdout.setEncoding('utf-8');
  bash.stderr.setEncoding('utf-8');
  return new Promise(resolve => {
    const result = {
      stdout: '',
      stderr: '',
      code: 0
    };
    bash.stdout.on('data', data => {
      // Removes the last line break (i.e. \n) from the data.
      result.stdout += data.substring(0, data.length - 1);
      // Gets the code by taking the number before the `:` on the last line.
      result.code = Number(result.stdout.split('\n').pop().split(':')[0]);
      resolve(result);
    });
    bash.stderr.on('data', err => {
      result.stderr += err;
      resolve(result);
    });
    // Writes the passed command, followed by an echo of its exit code and the command.
    bash.stdin.write(command + `; echo "$?:${ command }"\n`);
  });
}
The test is simple:
it('should run a command that outputs an error', async () => {
  // Notice the typo in `echoo`
  const res = await exec('echoo test');
  expect(res.code).toBe(127);
});
When running this test, it sometimes fails with res.code being 0 instead of 127.
When testing without jest, it works flawlessly 100% of the time.
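For what it's worth, this looks like a race rather than a Jest problem: the stderr 'data' handler above also calls resolve, and for a failing command stderr usually arrives before the echo sentinel reaches stdout, so the promise can resolve while result.code is still 0. A minimal sketch of a more defensive version, which only resolves once the "<code>:<command>" sentinel line (the same format used above) has actually arrived:

import { spawn } from 'child_process';

function exec(command) {
  const bash = spawn('/bin/bash', { detached: true });
  bash.stdout.setEncoding('utf-8');
  bash.stderr.setEncoding('utf-8');
  return new Promise(resolve => {
    const result = { stdout: '', stderr: '', code: 0 };
    bash.stdout.on('data', data => {
      result.stdout += data;
      // stdout may arrive in several chunks; only resolve once the
      // sentinel line produced by the trailing echo is present.
      const lines = result.stdout.split('\n').filter(Boolean);
      const last = lines[lines.length - 1];
      if (last && last.endsWith(`:${command}`)) {
        result.code = Number(last.split(':')[0]);
        resolve(result);
      }
    });
    bash.stderr.on('data', err => {
      // Collect stderr but keep waiting for the sentinel on stdout.
      result.stderr += err;
    });
    bash.stdin.write(`${command}; echo "$?:${command}"\n`);
  });
}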
Jest offers a few solutions in that direction. I have tried out most of them, like --findRelatedTests, --onlyChanged and --changedSince, but there are shortcomings in every one. I thought --changedSince was the best match for me.
jest --changedSince=origin/master --coverage
It mostly covers the basic scenarios, like running the test files corresponding to the changed source files. But it does not handle a few scenarios: if a source file (say a.js) is deleted and it was imported in another file (say b.js), it does not run tests for either file. It does not seem to run tests for the parent files that imported the deleted one.
Is there a clean solution that can handle all of these scenarios - file renames and deletions, dynamic imports, running tests for the parent modules where a file was imported, or any other impact a change to a source file may have?
Quick answer: No.
Long answer: Yes, but it's not that clean or straightforward.
I have achieved this through three steps.
STEP 1:
You can achieve this with a bit of scripting. First, you'll want to get a list of all the changed files through Git.
This can be done with a function like the one below:
const util = require("util")
const exec = util.promisify(require("child_process").exec)

const detectChangedFiles = async () => {
  try {
    const { stdout, stderr } = await exec("git diff origin/master --name-only")
    if (stderr) {
      throw new Error(stderr)
    }
    // Join the paths onto one line and strip the project-specific
    // "client/" prefix so they are relative to where Jest runs.
    return stdout.replace(/\n/g, " ").replace(/client\//g, " ")
  } catch (error) {
    console.log(error)
  }
}
STEP 2:
Secondly, you'll want to get a list of the related tests for those files, like the below:
const findRelatedTests = async () => {
  const changedFiles = await detectChangedFiles()
  try {
    const { stdout, stderr } = await exec(`jest --listTests --findRelatedTests ${changedFiles}`)
    if (stderr) {
      throw new Error(stderr)
    }
    if (!stdout) {
      console.log('No tests found for the changed files :)')
    } else {
      return stdout.replace(/\n/g, " ")
    }
  } catch (error) {
    console.log(error)
  }
}
STEP 3:
And finally, you'll want to feed all of those tests to Jest to run:
const runRelatedTests = async () => {
  const relatedTests = await findRelatedTests()
  if (relatedTests) {
    try {
      const { stdout, stderr } = await exec(`jest --ci --coverage ${relatedTests}`)
      if (stderr) {
        throw new Error(stderr)
      }
    } catch (error) {
      console.log(error)
    }
  }
}
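To tie the three steps together, a minimal entry point might look like this sketch (assuming the three functions above live in the same script):

// Hypothetical runner wiring the three steps together.
runRelatedTests()
  .then(() => console.log("Done running related tests"))
  .catch((error) => {
    console.error(error)
    process.exit(1)
  })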
One of the limitations of this implementation is that I'm always diffing against master, and that's not a good assumption. In special cases, one may choose to merge against another branch.
This can be handled in a few ways (a sketch of both follows below):
If you're running a CLI, pass the target branch as an argument to the CLI and consume it in your script.
If you're running in a pipeline like GitLab, and assuming this is an MR/PR, consider using one of the available environment variables (in this case, CI_MERGE_REQUEST_TARGET_BRANCH_NAME).
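A minimal sketch of both options, assuming the diff command from STEP 1 (the argument position and the fallback branch are illustrative):

// Resolve the target branch: CLI argument first, then the GitLab CI
// variable, then fall back to master.
const targetBranch =
  process.argv[2] ||
  process.env.CI_MERGE_REQUEST_TARGET_BRANCH_NAME ||
  "master"

// Used in place of the hard-coded diff in detectChangedFiles().
const diffCommand = `git diff origin/${targetBranch} --name-only`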
I have made a function in node.js called return_match_uid, which I am exporting. I am importing the function in another Express routing file and am using async/await with try/catch to handle the error. But somehow the errors produced by return_match_uid always slip through and go unhandled, even though I am using the error handling for the realtime listener recommended by the Firestore docs.
Here is the function:
exports.return_match_uid = function return_match_uid() {
  return new Promise((resolve, reject) => {
    const unsub = db.collection('cities').onSnapshot(() => {
      throw ("matching algo error");
      resolve(); // (unreachable after the throw)
      unsub();
    }, err => {
      console.log(err);
    });
  });
};
In another Express router file, I am calling the function:
const Router = require('express').Router;
const router = new Router();
const { return_match_uid } = require("./match_algo");

router.get('/match', async (req, res) => {
  try {
    var match_user = await return_match_uid(req.query.data, req.query.rejected);
    res.send(match_user);
  } catch (error) {
    console.log("Matching algorithm return error: " + error);
  }
});
The error I am throwing inside the function (matching algo error) does not get caught by the err => { console.log(err); } handler in the function, nor by the try/catch block in the router. It slips through and crashes my app, showing the following error:
throw "matching algo error!";
^
matching algo error!
(Use `node --trace-uncaught ...` to show where the exception was thrown)
[nodemon] app crashed - waiting for file changes before starting...
I am throwing an error inside the matching algorithm because I have some other code in there, and there is a possibility that it produces an error. If it does, I would like to make sure that it gets handled properly.
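For context on why this slips through: the throw happens inside the onSnapshot callback, which Firestore invokes on a later tick, so neither the Promise executor nor the router's try/catch is on its call stack, and the exception surfaces as an uncaught exception. A minimal sketch of one way to restructure it, assuming the real matching logic lives inside the snapshot callback: catch errors there and reject the promise instead of throwing:

exports.return_match_uid = function return_match_uid() {
  return new Promise((resolve, reject) => {
    const unsub = db.collection('cities').onSnapshot(snapshot => {
      try {
        // ... matching logic goes here ...
        resolve(/* matched uid */);
      } catch (err) {
        // Reject instead of throwing, so the awaiting caller's
        // try/catch in the router can handle it.
        reject(err);
      } finally {
        unsub();
      }
    }, err => {
      // Listener-level errors should also reject the promise.
      reject(err);
    });
  });
};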
I am learning node.js and how to test functions. I have a problem when using Mocha: when functions pass their tests, everything is completely fine and I get a nice-looking message.
But if a function doesn't pass a test - for example, the result in the test is 0 but I intentionally wrote the assertion to expect 1 - it gives me a mile-long error message in the bash CLI console:
Async functions
(node:6001) UnhandledPromiseRejectionWarning: AssertionError [ERR_ASSERTION]: 0 == 1
at utils.requestWikiPage.then.resBody (/home/sandor/Documents/learning-curve-master/node-dev-course/testing-tut/utils/utils.test.js:10:20)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
(node:6001) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:6001) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
1) it should return a html page
0 passing (2s)
1 failing
1) Async functions
it should return a html page:
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/home/sandor/Documents/learning-curve-master/node-dev-course/testing-tut/utils/utils.test.js)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! dev-course#1.0.0 test: `mocha ./testing-tut/**/*.test.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the dev-course#1.0.0 test script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/sandor/.npm/_logs/2018-07-04T11_31_53_292Z-debug.log
[nodemon] app crashed - waiting for file changes before starting...
I don't know why I get this part: UnhandledPromiseRejectionWarning...
and why I get this part: npm ERR! code ELIFECYCLE
The function I am testing (it makes a request to Wikipedia for the wiki page of George Washington and collects the HTML page from the response; on 'end' of the response's read stream it resolves the HTML page; the function works just fine):
// utils.js
const https = require('https'); // import needed for the snippet to run on its own

function requestWikiPage() {
  const reqOpts = {
    hostname : 'en.wikipedia.org',
    port     : 443,
    path     : '/wiki/George_Washington',
    method   : "GET"
  }
  return new Promise(resolve => {
    let req = https.request(reqOpts, (res) => {
      let resBody = "";
      res.setEncoding('utf-8');
      res.on('data', (chunk) => {
        resBody += chunk;
      });
      res.on('end', () => {
        resolve(resBody);
      });
    });
    req.on('error', (err) => {
      console.log(err);
    });
    req.end();
  });
}

module.exports.requestWikiPage = requestWikiPage;
Mocha code: (The resBody variable is a string containing an HTML page, where '<!DOCTYPE html>' sits at index 0. In the assertion I test against index 1 to create an error message.)
const utils = require('./utils');
var assert = require('assert');

describe('Async functions', function() {
  it('it should return a html page', (done) => {
    utils.requestWikiPage().then(resBody => {
      assert.equal(resBody.indexOf('<!DOCTYPE html>'), 1);
      done();
    });
  });
});
So I don't understand why I get that long error message just because I expect the substring at index 1 instead of index 0. (Actually, I get that error message with every failing function, not just this one.)
How can I set up Mocha so that it gives me a more minimal and intuitive error message?
Thanks a million for your answers.
You need to properly reject the promise in #requestWikiPage if it doesn't resolve or there is an error, and then handle that rejection in your test. The following changes will likely solve the issue in your question (i.e. having mocha correctly handle a failed test without all the extra output), but the next step would obviously be getting your test to pass.
Notice we add the reject callback to our new Promise() and, instead of console.log(err); in your req.on('error', ...) callback, we now use reject as our error callback.
// utils.js
const https = require('https');

function requestWikiPage() {
  const reqOpts = {
    hostname : 'en.wikipedia.org',
    port     : 443,
    path     : '/wiki/George_Washington',
    method   : "GET"
  }
  return new Promise((resolve, reject) => {
    let req = https.request(reqOpts, (res) => {
      let resBody = "";
      res.setEncoding('utf-8');
      res.on('data', (chunk) => {
        resBody += chunk;
      });
      res.on('end', () => {
        resolve(resBody);
      });
    });
    req.on('error', reject);
    req.end();
  });
}

module.exports.requestWikiPage = requestWikiPage;
And now handle the case where the promise is rejected by using done as the catch callback (which effectively passes the error to done, as Mocha requires).
const utils = require('./utils');
var assert = require('assert');

describe('Async functions', function() {
  it('it should return a html page', (done) => {
    utils.requestWikiPage().then(resBody => {
      assert.equal(resBody.indexOf('<!DOCTYPE html>'), 1);
      done();
    }).catch(done);
  });
});
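As an aside, Mocha also accepts a returned promise in place of the done callback, which removes the .catch(done) boilerplate; the same test in that style would look like this sketch:

const utils = require('./utils');
var assert = require('assert');

describe('Async functions', function() {
  it('it should return a html page', function() {
    // Returning the promise lets Mocha report both rejections and
    // failed assertions as ordinary test failures.
    return utils.requestWikiPage().then(resBody => {
      assert.equal(resBody.indexOf('<!DOCTYPE html>'), 1);
    });
  });
});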
I'm running unit tests via Mocha / Chai on a Sequelize definition as shown:
Main tests.js that is run with mocha tests.js:
// Testing Dependencies
expect = require("chai").expect;
should = require("chai").should;
require('dotenv').load();

var Sequelize = require('sequelize');
var sequelize = new Sequelize(
  process.env.PG_DB_TEST,
  process.env.PG_USER,
  process.env.PG_PASSWORD, {
    dialect: "postgres",
    logging: false
  });

var models = require('./models/db')(sequelize);

var seq_test = function (next) {
  return function () {
    beforeEach(function (done) {
      sequelize.sync({ force: true }).then(function() {
        done();
      });
    });
    afterEach(function (done) {
      sequelize.drop().then(function() {
        done();
      });
    });
    next();
  };
}

describe("Model Unittests", seq_test(function () {
  require("./models/tests/test_user.js")(models);
  require("./models/tests/test_interest.js")(models);
}));
test_user.js
var mockedUser = require("./mocks/user");

module.exports = function (models) {
  var User = models.user;

  it("User should have the correct fields", function (done) {
    User.create(mockedUser).then(function (result) {
      expect(result.pack()).to.include.all.keys(
        ["id", "name", "email", "intro"]
      );
      done();
    });
  });

  it("User should require an email", function (done) {
    User.create({
      "name": mockedUser['name']
    }).then(function (result) {
      expect.fail();
      done();
    }).catch(function (err) {
      expect(err['name']).to.be.equal('SequelizeValidationError');
      done();
    });
  });

  it("User should require a name", function (done) {
    User.create({
      "email": mockedUser['email']
    }).then(function (result) {
      expect.fail();
      done();
    }).catch(function (err) {
      expect(err['name']).to.be.equal('SequelizeValidationError');
      done();
    });
  });
}
Sometimes (about 1 run in 15 on Codeship (CI)), it gives this error:
Model Unittests
Unhandled rejection SequelizeUniqueConstraintError: Validation error
at Query.formatError (/home/rof/src/github.com/podtogether/pod-test-prototype/node_modules/sequelize/lib/dialects/postgres/query.js:402:16)
at null.<anonymous> (/home/rof/src/github.com/podtogether/pod-test-prototype/node_modules/sequelize/lib/dialects/postgres/query.js:108:19)
at emitOne (events.js:77:13)
at emit (events.js:169:7)
at Query.handleError (/home/rof/src/github.com/podtogether/pod-test-prototype/node_modules/pg/lib/query.js:108:8)
at null.<anonymous> (/home/rof/src/github.com/podtogether/pod-test-prototype/node_modules/pg/lib/client.js:171:26)
at emitOne (events.js:77:13)
at emit (events.js:169:7)
at Socket.<anonymous> (/home/rof/src/github.com/podtogether/pod-test-prototype/node_modules/pg/lib/connection.js:109:12)
at emitOne (events.js:77:13)
at Socket.emit (events.js:169:7)
at readableAddChunk (_stream_readable.js:146:16)
at Socket.Readable.push (_stream_readable.js:110:10)
at TCP.onread (net.js:523:20)
1) "before each" hook for "User should have the correct fields"
Locally, these unit tests haven't failed (I've run them perhaps 60 times in a row). I saw similar issues earlier when I didn't use the done callback in the beforeEach and afterEach; both of those are async and needed to finish before continuing. After fixing that, I stopped seeing these issues locally.
Can anyone shed some light on this issue? (SSHing into Codeship and running the tests reproduced the roughly 1-in-15 failure.)
I had this issue with my QA database. Sometimes a new record would save to the database, and sometimes it would fail. When performing the same process on my dev workstation, it would succeed every time.
When I caught the error and printed the full results to the console, it confirmed that a unique constraint was being violated - specifically, on the primary key id column, which was set to default to an autoincremented value.
I had seeded my database with records, and even though the ids of those records were also set to autoincrement, the ids of the 200-some records were scattered between 1 and 2000, while the database's autoincrement sequence was set to start at 1. Usually the next id in the sequence was unused, but occasionally it was already occupied, and the database would return this error.
I used the answer here to reset the sequence to start after the last of my seeded records, and now it works every time.
If you are seeding records to run integration tests on, it is possible that the database's autoincrement sequence isn't set to follow them. Sequelize doesn't provide this functionality itself because it's a simple, single-command operation that needs to be run in the database.
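For Postgres (which the question uses), the reset is a one-liner that can be issued through Sequelize; a minimal sketch, assuming a users table whose autoincremented primary key is id:

// Bump the sequence past the highest seeded id so new inserts can no
// longer collide with seed data (COALESCE handles an empty table).
await sequelize.query(
  "SELECT setval(pg_get_serial_sequence('users', 'id'), COALESCE((SELECT MAX(id) FROM users), 1))"
);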
I had this issue as well. It was caused by the autoincrement sequence not being set correctly after seeding. The root issue was that our seed methods were explicitly setting the primary/autoincremented key (id), which should generally be avoided. We removed the ids, and the issue was resolved.
Here's a reference to the sequelize issue where we found the solution: https://github.com/sequelize/sequelize/issues/9295#issuecomment-382570944
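To illustrate the fix with a hypothetical seed (the model and fields are made up; the point is only that id is omitted so the database assigns it from its sequence):

// Before: explicitly setting id leaves the autoincrement sequence behind.
// await User.bulkCreate([{ id: 42, name: "Ada", email: "ada@example.com" }]);

// After: omit id and let the sequence assign it.
await User.bulkCreate([
  { name: "Ada", email: "ada@example.com" },
  { name: "Grace", email: "grace@example.com" }
]);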