I have an unconventional setup I can't change. It looks something like this:
test POSTs to server
server POSTs to blockchain
blockchain updates
syncing script updates the database
It is absolutely imperative that I do not run the next test until the test before it has completed that entire workflow, which typically takes 2-3 seconds. Here is an example of a test written for that flow with supertest and chai:
it('should create a user', done => {
  request(server)
    .post(`${API}/signup`)
    .set('Content-Type', 'application/json')
    .send(`{
      "email": "${USER_EMAIL}",
      "password": "${USER_PASSWORD}"
    }`)
    .expect(200)
    .expect(res => {
      expect(res.body.role).to.equal('user');
      expect(res.body.id).to.match(ID_PATTERN);
    })
    .end(_wait(done));
});
That _wait function is the key issue here. If I write it very naively with a setTimeout, it will work:
const _wait = cb => () => setTimeout(cb, 5000);
However, this isn't a great solution, since the blockchain is very unpredictable and can sometimes take much more than 2-3 seconds. What would be much better is to watch the database for changes. Thankfully the database is RethinkDB, which provides changefeeds: cursor objects that receive updates when the data changes. So that should be easy, and look something like this:
var _wait = cb => () => {
  connector.exec(db => db.table('chain_info').changes())
    .then(cursor => {
      cursor.each((err, change) => {
        cb(err);
        return false;
      });
    });
};
This setup breaks the tests. As near as I can tell, done does get called: any console logs in and around it fire, and the test itself is logged as completed, but the next test never starts, and eventually everything times out:
Manager API Workflow:
Account Creation:
✓ should create a user (6335ms)
1) should login an administrator
1 passing (1m)
1 failing
1) Manager API Workflow: Account Creation: should login an administrator:
Error: timeout of 60000ms exceeded. Ensure the done() callback is being called in this test.
Any assistance would be greatly appreciated. I am using Mocha 3.1.2, Chai 3.5.0, Supertest 2.0.1, and Node 6.9.1.
I am learning modern JavaScript and am writing a little API. I plan to host it in MongoDB Stitch, which is a serverless, lambda-like environment. I am writing functions in the way this system requires, and then adding Jest tests to be run locally and in continuous integration.
I am learning Jest as I go, and for the most part I like it, and my prior experience with Mockery in PHP is making it a fairly painless experience. However, I have an odd situation where my lack of JavaScript knowledge is stopping my progress. I have a failing test, but it is intermittent: if I run all of the tests, sometimes they all pass, and sometimes the test that fails changes from one run to another. This behaviour, coupled with my use of async/await, makes me think I am experiencing a race condition.
Here is my SUT:
exports = async function(delay) {
  /**
   * @todo The lambda only has 60 seconds to run, so it should test how
   * long it has been running in the loop, and exit before it gets to,
   * say, 50 seconds.
   */
  let query;
  let ok;
  let count = 0;
  for (let i = 0; i < 5; i++) {
    query = await context.functions.execute('getNextQuery');
    if (query) {
      ok = context.functions.execute('runQuery', query.phrase, query.user_id);
      if (ok) {
        await context.functions.execute('markQueryAsRun', query._id);
        count++;
      }
    } else {
      break;
    }
    // Be nice to the API
    await sleep(delay);
  }
  return count;

  function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
};
The context.functions object is a global in Stitch, but as I show below, it is mocked inside Jest. Stitch is not involved in these tests at all.
As you can see, getNextQuery and markQueryAsRun are awaited, as they are defined as async.
Here is my main test:
// Main SUT
const findAndRunQueries = require('./_source');

// Utility test classes
const MongoTester = require('../../test/mongo-tester');
const StitchFuncMocking = require('../../test/stitch-func-mocking');

// Other functions for the integration test
const getNextQuery = require('../getNextQuery/_source');
const markQueryAsRun = require('../markQueryAsRun/_source');

describe('Some integration tests for findAndRunQueries', () => {
  const mongoTester = new MongoTester('findAndRunQueries-integration');
  const stitchFuncMocking = new StitchFuncMocking();

  beforeAll(async () => {
    await mongoTester.connect();
    console.log('Connect');
  });

  afterAll(async () => {
    await mongoTester.disconnect();
    console.log('Disconnect');
  });

  beforeEach(async () => {
    // Set up global values
    global.context = {};
    global.context.services = mongoTester.getStitchContext();
    global.context.functions = stitchFuncMocking.getFunctionsObject(jest);

    // Delete existing mocks
    jest.clearAllMocks();
    stitchFuncMocking.clearMocks();

    // Connect some real implementations
    stitchFuncMocking.setGlobalMock('getNextQuery', getNextQuery);
    stitchFuncMocking.setGlobalMock('markQueryAsRun', markQueryAsRun);

    // Truncate all collections in use
    await mongoTester.emptyCollections(['queries']);
    console.log('Init mocks and clear collections');
  });

  test('end-to-end test with no queries', async () => {
    expect(await findAndRunQueries(0)).toBe(0);
  });

  test('end-to-end test with one successful query', async () => {
    // Here is a query entry
    await mongoTester.getDatabase().collection('queries').insertOne({
      "user_id": 1,
      "phrase": 'hello',
      "last_run_at": null,
      "enabled": true
    });
    var d = await mongoTester.getDatabase().collection('queries').findOne({});
    console.log(d);

    // We need to mock runQuery, as that calls an external API
    stitchFuncMocking.setGlobalMock('runQuery', () => 123);

    // Let's see if we can run a call successfully
    expect(await findAndRunQueries(0)).toBe(1);

    // @todo Check that a log entry has been made
  });
});
From this code you can see that getNextQuery and markQueryAsRun are wired to their real implementations (since this is an integration test) but runQuery is a mock, because I don't want this test to make HTTP calls.
For brevity, I am not showing the getNextQuery and markQueryAsRun implementations, as I don't think they are needed to answer the question. I am also not showing all of MongoTester or any of StitchFuncMocking (these connect to an in-memory MongoDB instance and simplify Jest mocking, respectively).
For database-level tests, I run this MongoTester utility function to clear down collections:
this.emptyCollections = async function(collections) {
  // Interesting note - how can I do deleteMany without async, but
  // wait for all promises to finish before the end of emptyCollections?
  collections.forEach(async (collectionName) => {
    let collection = this.getDatabase().collection(collectionName);
    await collection.deleteMany({});
  });
};
This is how I am running the test:
sh bin/test-compile.sh && node node_modules/jest/bin/jest.js -w 1 functions/findAndRunQueries/
The compilation step can be ignored (it just converts the exports to module.exports, see more here). I then run this test plus a unit test inside the functions/findAndRunQueries/ folder. The -w 1 is to run a single thread, in case Jest does some weird parallelisation.
Here is a good run (containing some noisy console logging):
root#074f74105081:~# sh bin/test-compile.sh && node node_modules/jest/bin/jest.js -w 1 functions/findAndRunQueries/
PASS functions/findAndRunQueries/findAndRunQueries.integration.test.js
● Console
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:18
Connect
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:42
Init mocks and clear collections
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:42
Init mocks and clear collections
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:60
{ _id: 5e232c13dd95330804a07355,
user_id: 1,
phrase: 'hello',
last_run_at: null,
enabled: true }
PASS functions/findAndRunQueries/findAndRunQueries.test.js
Test Suites: 2 passed, 2 total
Tests: 6 passed, 6 total
Snapshots: 0 total
Time: 0.783s, estimated 1s
Ran all test suites matching /functions\/findAndRunQueries\//i.
In the "end-to-end test with one successful query" test, it inserts a document and then passes some assertions. However, here is another run:
root#074f74105081:~# sh bin/test-compile.sh && node node_modules/jest/bin/jest.js -w 1 functions/findAndRunQueries/
FAIL functions/findAndRunQueries/findAndRunQueries.integration.test.js
● Console
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:18
Connect
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:42
Init mocks and clear collections
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:42
Init mocks and clear collections
console.log functions/findAndRunQueries/findAndRunQueries.integration.test.js:60
null
● Some integration tests for findAndRunQueries › end-to-end test with one successful query
expect(received).toBe(expected) // Object.is equality
Expected: 1
Received: 0
64 |
65 | // Let's see if we can run a call successfully
> 66 | expect(await findAndRunQueries(0)).toBe(1);
| ^
67 |
68 | // #todo Check that a log entry has been made
69 | });
at Object.test (functions/findAndRunQueries/findAndRunQueries.integration.test.js:66:44)
PASS functions/findAndRunQueries/findAndRunQueries.test.js
Test Suites: 1 failed, 1 passed, 2 total
Tests: 1 failed, 5 passed, 6 total
Snapshots: 0 total
Time: 0.918s, estimated 1s
Ran all test suites matching /functions\/findAndRunQueries\//i.
The null in the log output indicates that the insert failed, but I do not see how that is possible. Here is the relevant code (reproduced from above):
await mongoTester.getDatabase().collection('queries').insertOne({
  "user_id": 1,
  "phrase": 'hello',
  "last_run_at": null,
  "enabled": true
});
var d = await mongoTester.getDatabase().collection('queries').findOne({});
console.log(d);
I assume the findOne() returns a Promise, so I have awaited it, and it is still null. I also awaited the insertOne() as I reckon that probably returns a Promise too.
I wonder if my in-RAM MongoDB database might not be performing like a real Mongo instance, and I wonder if I should spin up a Docker MongoDB instance for testing.
However, perhaps there is just an async thing I have not understood? What can I dig into next? Is it possible that the MongoDB write concern is set to "don't confirm the write before returning control"?
Also perhaps worth noting: the "Disconnect" message does not seem to appear. Am I failing to disconnect from my test MongoDB instance, and could that cause a problem by leaving the in-memory server in a broken state?
The problem in my project was not initially shown in the question - I had omitted to show the emptyCollections method, believing the implementation to be trivial and not worth showing. I have now updated that in the question.
The purpose of that method is to clear down collections between tests, so that I do not accidentally rely on serial effects based on test run order. The new version looks like this:
this.emptyCollections = async function(collections) {
  const promises = collections.map(async (collectionName) => {
    await this.getDatabase().collection(collectionName).deleteMany({});
  });
  await Promise.all(promises);
};
So, what was going wrong with the old method? It was just a forEach wrapping awaited database operation Promises - not much to go wrong, right?
Well, it turns out that plenty can go wrong. I have learned that async functions can be awaited inside a plain for loop, and inside a for...of loop, but the Array implementation of forEach does no internal awaiting, and so it fails. This article explains some of the differences. If there is one takeaway, it is "don't try to run async code in a forEach loop".
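A minimal, self-contained illustration of that takeaway (fakeDelete, emptyWithForEach, and emptyWithForOf are names invented for this demo):

```javascript
// Each "deleteMany" stand-in resolves on the next tick, like a real DB call.
const log = [];
const fakeDelete = async (name) => {
  await new Promise(resolve => setImmediate(resolve));
  log.push(`deleted ${name}`);
};

async function emptyWithForEach(collections) {
  // forEach ignores the promises returned by the async callback,
  // so this function resolves before any deletion has finished.
  collections.forEach(async (name) => { await fakeDelete(name); });
}

async function emptyWithForOf(collections) {
  // for...of genuinely awaits each deletion in turn.
  for (const name of collections) {
    await fakeDelete(name);
  }
}

let afterForEach;
let afterForOf;
const done = (async () => {
  await emptyWithForEach(['a', 'b']);
  afterForEach = log.length; // 0 - the deletions are still pending

  // let the stray forEach promises land before resetting the log
  await new Promise(resolve => setImmediate(resolve));
  log.length = 0;

  await emptyWithForOf(['a', 'b']);
  afterForOf = log.length;   // 2 - each deletion was awaited
})();
```

The corrected emptyCollections shown earlier, using map and Promise.all, awaits the same set of promises explicitly and so behaves like the for...of version.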
The effect of my old function was that the collection emptying was not finished even after that method had run, and so when I did other async operations - like inserting a document! - there would be a race that determined which one executed first. This partially explains the intermittent test failure.
As it turns out, I had made some provision while debugging to use a different Mongo database per test file, in case Jest was doing some parallelisation. In fact it was, and when I removed the feature to use different databases, it failed again - this time because there was a race condition between tests (and their respective beforeEach functions) to set up the state of a single database.
Is there a way to call fetch in a Jest test? I just want to call the live API to make sure it is still working. If there are 500 errors or the data is not what I expect, then the test should report that.
I noticed that using request from the http module doesn't work. Calling fetch, as I normally do in my non-test code, gives the error: Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout. The API returns in less than a second when I call it in the browser. I use approximately the following to conduct the test, but I have also simply returned the fetch promise from within the test, without using done, with a similar lack of success:
import { JestEnvironment } from "@jest/environment";
import 'isomorphic-fetch';
import { request } from "http";

jest.mock('../MY-API');

describe('tests of score structuring and display', () => {
  test('call API - happy path', (done) => {
    fetch(API).then(
      res => res.json()
    ).then(res => {
      expect(Array.isArray(res)).toBe(true);
      console.log('success');
      done();
    }).catch(reason => {
      console.log(`reason: ${reason}`);
      expect(reason).not.toBeTruthy();
      done();
    });
  });
});
Oddly, there is an error message I can see as a console message after the timeout is reached: reason: ReferenceError: XMLHttpRequest is not defined
How can I make an actual, not a mocked, call to a live API in a Jest test? Is that simply prohibited? I don't see why this would fail given the documentation, so I suspect there is something implicitly imported in React Native that must be explicitly imported in a Jest test to make the fetch or request function work.
Putting aside any discussion about whether making actual network calls in unit tests is best practice...
There's no reason why you couldn't do it.
Here is a simple working example that pulls data from JSONPlaceholder:
import 'isomorphic-fetch';

test('real fetch call', async () => {
  const res = await fetch('https://jsonplaceholder.typicode.com/users/1');
  const result = await res.json();
  expect(result.name).toBe('Leanne Graham'); // Success!
});
With all the work Jest does behind the scenes (defining globals like describe, beforeAll, and test; routing code files to transpilers; handling module caching and mocking), the actual tests are ultimately just JavaScript code. Jest runs whatever JavaScript it finds, so there really aren't any limitations on what you can run within your unit tests.
I got the following error message: "Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL." On top of that, I also received an error stating that 0 assertions were executed while 2 assertions were expected.
I've tried extending the timeout to 10 seconds using jest.setTimeout(10000), which should be more than sufficient time to execute that code, but the problem persisted.
I know m.employeeGetAll() works because when I test my web app using the browser, I can see the list of employees in the view.
Here's what my test looks like:
it('Lists all employees successfully', () => {
  expect.assertions(2);
  return m.employeeGetAll().then(result => { // m.employeeGetAll() returns a promise
    expect(result).toBeDefined();
    expect(result.length).toBe(3);
  });
});
The problem, I discovered, was the way async code works.
What cannot be seen in the code snippet was the call to mongoose.connection.close(); at the very end of my test file.
This call must be made inside an afterEach() or afterAll() function when using the Jest unit testing framework. Otherwise, the connection to the database is closed before the tests can complete, since all of the calls in my controller methods are asynchronous; this leads to no promise ever being returned, and the code times out.
Since I'm using beforeAll() and afterAll(), to load data from database once before the start of all tests and to clear the database at the end of all tests, I've included the call to connect to DB using mongoose inside beforeAll() as well.
Hope this helps someone who's also stuck in my situation.
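The ordering problem can be reproduced without Jest or Mongoose. In this sketch (the connection object and all names are invented for the demo), closing a shared connection before the async work finishes starves the in-flight query, while closing it afterAll-style succeeds:

```javascript
// A fake DB connection: queries fail once the connection is closed.
function makeConnection() {
  let open = true;
  return {
    close: () => { open = false; },
    query: async () => {
      await new Promise(resolve => setImmediate(resolve)); // simulate I/O
      if (!open) throw new Error('connection closed');
      return ['alice', 'bob', 'carol'];
    },
  };
}

const results = {};

// Wrong: close immediately after kicking off the async work,
// like a top-level mongoose.connection.close() at the end of a test file.
const wrong = (async () => {
  const conn = makeConnection();
  const pending = conn.query().then(
    rows => { results.wrongRows = rows; },
    err => { results.wrongError = err.message; }
  );
  conn.close(); // runs before the query's I/O completes
  await pending;
})();

// Right: close only after the work has finished,
// the way afterAll() waits for the tests to complete.
const right = (async () => {
  const conn = makeConnection();
  results.rightRows = await conn.query();
  conn.close(); // afterAll-style: everything has already completed
})();

const done = Promise.all([wrong, right]);
```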
With async code you have to call done:
it('Lists all employees successfully', (done) => {
  expect.assertions(2);
  return m.employeeGetAll().then(result => { // m.employeeGetAll() returns a promise
    expect(result).toBeDefined();
    expect(result.length).toBe(3);
    done();
  });
});
I have to transcode videos from webm to mp4 when they're uploaded to firebase storage. I have a code demo here that works, but if the uploaded video is too large, firebase functions will time out on me before the conversion is finished. I know it's possible to increase the timeout limit for the function, but that seems messy, since I can't ever confirm the process will take less time than the timeout limit.
Is there some way to stop firebase from timing out without just increasing the maximum timeout limit?
If not, is there a way to complete time consuming processes (like video conversion) while still having each process start using firebase function triggers?
If completing time-consuming processes using Firebase functions isn't something that really exists, is there some way to speed up the conversion in fluent-ffmpeg without touching the quality that much? (I realize this part is a lot to ask. I plan on lowering the quality if I absolutely have to, as the reason webms are being converted to mp4 is for iOS devices.)
For reference, here's the main portion of the demo I mentioned. As I said before, the full code can be seen here, but the section copied below is the part that creates the Promise that makes sure the transcoding finishes. The full code is only 70-something lines, so it should be relatively easy to go through if needed.
const functions = require('firebase-functions');
const mkdirp = require('mkdirp-promise');
const gcs = require('@google-cloud/storage')();
const Promise = require('bluebird');
const ffmpeg = require('fluent-ffmpeg');
const ffmpeg_static = require('ffmpeg-static');
(There's a bunch of text parsing code here, followed by this next chunk of code inside an onChange event)
function promisifyCommand(command) {
  return new Promise((resolve, reject) => {
    command
      .on('end', () => { resolve(); })
      .on('error', (error) => { reject(error); })
      .run();
  });
}

return mkdirp(tempLocalDir).then(() => {
  console.log('Directory Created');
  // Download item from bucket
  const bucket = gcs.bucket(object.bucket);
  return bucket.file(filePath).download({destination: tempLocalFile}).then(() => {
    console.log('file downloaded to convert. Location:', tempLocalFile);
    cmd = ffmpeg({source: tempLocalFile})
      .setFfmpegPath(ffmpeg_static.path)
      .inputFormat(fileExtension)
      .output(tempLocalMP4File);
    cmd = promisifyCommand(cmd);
    return cmd.then(() => {
      // Getting here takes forever, because video transcoding takes forever!
      console.log('mp4 created at ', tempLocalMP4File);
      return bucket.upload(tempLocalMP4File, {
        destination: MP4FilePath
      }).then(() => {
        console.log('mp4 uploaded at', filePath);
      });
    });
  });
});
Cloud Functions for Firebase is not well suited (and not supported) for long-running tasks that can go beyond the maximum timeout. Your only real chance at using only Cloud Functions to perform very heavy compute operations is to find a way to split up the work into multiple function invocations, then join the results of all that work into a final product. For something like video transcoding, that sounds like a very difficult task.
Instead, consider using a function to trigger a long-running task in App Engine or Compute Engine.
As a follow-up for the random anonymous person trying to figure out how to get past transcoding videos or some other long process: here's a version of the same code example that instead sends an HTTP request to a Google App Engine process, which transcodes the file. There's no documentation for it as of right now, but looking at the Firebase functions/index.js code and the app.js code may help you with your issue.
https://github.com/Scew5145/GCSConvertDemo
Good luck.
I am a beginner in Node.js and was wondering if someone could help me out.
Winston allows you to pass in a callback which is executed when all transports have been logged. Could someone explain what this means, as I am slightly lost in the context of callbacks and Winston?
From https://www.npmjs.com/package/winston#events-and-callbacks-in-winston I am shown an example which looks like this:
logger.info('CHILL WINSTON!', { seriously: true }, function (err, level, msg, meta) {
  // [msg] and [meta] have now been logged at [level] to **every** transport.
});
Great... however I have several logger.info calls across my program, and was wondering what I should put into the callback. Also, do I need to do this for every logger.info, or can I put all the logs into one function?
I was thinking of adding all of the log calls into an array and then using async.parallel so they all get logged at the same time. Good or bad idea?
The main aim is to log everything before my program continues with other tasks.
Explanation of the code above in callback and winston context would be greatly appreciated!
Winston allows you to pass in a callback which is executed when all transports have been logged
This means that if you have a logger that handles more than one transport (for instance, console and file), the callback will be executed only after the messages have been logged on all of them (in this case, on both the console and the file).
An I/O operation on a file will always take longer than just outputting a message on the console. Winston makes sure that the callback will be triggered, not at the end of the first transport logging, but at the end of the last one of them (that is, the one that takes longest).
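To make that concrete, here is a self-contained sketch (fake transports invented for the demo; Winston itself is not involved) where a log call invokes its callback only after the slowest transport has written:

```javascript
// Two stand-in transports: console-like (fast) and file-like (slow).
const fakeTransports = [
  { name: 'console', delayMs: 1 },
  { name: 'file', delayMs: 20 },
];

// logToAll fires the callback once EVERY transport has written the message,
// mirroring how Winston's logging callback waits for the last transport.
function logToAll(message, callback) {
  let remaining = fakeTransports.length;
  const finished = [];
  for (const transport of fakeTransports) {
    setTimeout(() => {
      finished.push(transport.name);
      if (--remaining === 0) callback(null, finished);
    }, transport.delayMs);
  }
}

let loggedTo = null;
const done = new Promise(resolve => {
  logToAll('CHILL WINSTON!', (err, transports) => {
    loggedTo = transports; // slowest transport finishes last
    resolve();
  });
});
```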
You don't need to use a callback for every logger.info, but in this case it can help you make sure everything has been logged before continuing with the other tasks:
var winston = require('winston');
winston.add(winston.transports.File, { filename: './somefile.log' });
winston.level = 'debug';

const tasks = [
  x => { console.log('task1'); x(); },
  x => { console.log('task2'); x(); },
  x => { console.log('task3'); x(); }
];
let taskID = 0;
let complete = 0;

tasks.forEach(task => {
  task(() => winston.debug('CHILL WINSTON!', `logging task${++taskID}`, waitForIt));
});

function waitForIt() {
  // Executed every time a logger has logged all of its transports
  if (++complete === tasks.length) nowGo();
}

function nowGo() {
  // Now all loggers have logged all of their transports
  winston.log('debug', 'All tasks complete. Moving on!');
}
Sure... you probably won't define tasks that way, but this is just to show one way you could launch all the tasks in parallel and wait until everything has been logged before continuing with other tasks.
Just to explain the example code:
The const tasks is an array of functions. Each one accepts a function x as a parameter, first performs the task at hand (in this case a simple console.log('task1')), then executes the function received as a parameter, x().
The function passed as a parameter to each of those functions in the array is () => winston.debug('CHILL WINSTON!', `logging task${++taskID}`, waitForIt).
The waitForIt, the third parameter in this winston.debug call, is the actual callback (the Winston callback you inquired about).
Now, taskID counts the tasks that have been launched, while complete counts the loggers that have finished logging.
Being async, one could launch them as 1, 2, 3, but their loggers could finish in a 1, 3, 2 sequence, for all we know. But since all of them trigger the waitForIt callback once they're done, we just count how many have finished, then call the nowGo function when they are all done.
Compare it to
var winston = require('winston');
var logger = new winston.Logger({
  level: 'debug',
  transports: [
    new (winston.transports.Console)(),
    new (winston.transports.File)({ filename: './somefile.log' })
  ]
});

const tasks = [
  x => { console.log('task1'); x(); },
  x => { console.log('task2'); x(); },
  x => { console.log('task3'); x(); }
];
let taskID = 0;
let complete = 0;

tasks.forEach(task => {
  task(() => logger.debug('CHILL WINSTON!', `logging task${++taskID}`, (taskID === tasks.length) ? nowGo : null));
});

logger.on('logging', () => console.log(`# of complete loggers: ${++complete}`));

function nowGo() {
  // Stop listening to the logging event
  logger.removeAllListeners('logging');
  // Now all loggers have logged all of their transports
  logger.debug('All tasks complete. Moving on!');
}
In this case, nowGo would be the callback, and it would be added only to the third logger.debug call. But if the second logger finished later than the third, the code would have continued without waiting for the second one to finish logging.
In such simple example it won't make a difference, since all of them finish equally fast, but I hope it's enough to get the concept.
While we're at it, let me recommend the book Node.js Design Patterns by Mario Casciaro for more advanced async flow sequencing patterns. It also has a great EventEmitter vs callback comparison.
Hope this helped ;)