Unit test asynchronous generator function in Jest - javascript

I would like to write a unit test for a generator function, but I am not able to provide a properly mocked read stream (ReadStream) object.
Testable function:
public async *readChunks(file: string, chunkSize: number): AsyncIterableIterator<Buffer> {
  if (!this.cwd) throw new Error('Working directory is not set!');
  const readStream: ReadStream = fs.createReadStream(path.join(this.cwd, file), {
    highWaterMark: chunkSize
  });
  for await (const chunk of readStream) yield chunk;
}
Failed implementation (I tried different ways of mocking createReadStream, but without success):
describe('Work Dir Utils', () => {
  jest.mock('fs');
  let workDirUtils: WorkDirUtils;
  beforeEach(() => {
    (os.tmpdir as jest.Mock).mockReturnValue('/tmp');
    (fs.mkdtempSync as jest.Mock).mockReturnValue('/tmp/folder/pref-rand');
    (fs.createReadStream as jest.Mock).mockReturnValue({});
    workDirUtils = new WorkDirUtils();
    workDirUtils.createTempDir('pref-');
  });
  afterEach(() => {
    jest.clearAllMocks();
  });
  it('should read chunks of a file using generator', async () => {
    for await (const chunk of workDirUtils.readChunks(
      path.join(__dirname, './fixtures/manifest.ts'),
      1024 * 1024 * 1024
    )) {
      expect(chunk).toBeInstanceOf(Buffer);
    }
  });
});
Any suggestions?

Actually, it turned out to be quite easy. In the end, I decided not to withdraw the question. Maybe it will be useful for others.
import { Readable } from 'stream'; // Readable.from() provides the async-iterable stream mock

jest.mock('fs');
jest.mock('tar');
jest.mock('os');

let workDirUtils: WorkDirUtils;

describe('Work Dir Utils', () => {
  beforeEach(() => {
    (os.tmpdir as jest.Mock).mockReturnValue('/tmp');
    (fs.mkdtempSync as jest.Mock).mockReturnValue('/tmp/folder/pref-rand');
    (fs.existsSync as jest.Mock).mockReturnValue(true);
    (fs.createReadStream as jest.Mock).mockReturnValue(
      Readable.from([path.join(__dirname, './fixtures/manifest.ts')])
    );
    workDirUtils = new WorkDirUtils();
    workDirUtils.createTempDir('pref-');
  });

  afterEach(() => {
    jest.clearAllMocks();
  });

  it('should throw an error when the working directory is not set', async () => {
    const workdirUtilsMock = new WorkDirUtils();
    const generator = workdirUtilsMock.readChunks('file-path', 5000);
    await expect(generator.next()).rejects.toThrow('Working directory is not set!');
  });

  it('should read chunks of a file using generator', async () => {
    const generator = workDirUtils.readChunks(path.join(__dirname, './fixtures/manifest.ts'), 1024 * 1024 * 1024);
    const response = await generator.next();
    expect(response).toBeInstanceOf(Object);
    expect(response.value).toEqual(path.join(__dirname, './fixtures/manifest.ts'));
  });
});
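The key is that Readable.from() returns a stream that is itself async-iterable, so the for await loop inside readChunks simply yields whatever values the mock was seeded with (here, the fixture path). A standalone sketch of that behaviour (the chunk values are placeholders):

const { Readable } = require('stream');

(async () => {
  // Readable.from() accepts any (async) iterable and emits its items as stream chunks,
  // which is why the mocked createReadStream satisfies the for-await loop in readChunks.
  const mockStream = Readable.from(['chunk-1', 'chunk-2']);
  for await (const chunk of mockStream) {
    console.log(chunk); // 'chunk-1', then 'chunk-2'
  }
})();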

Related

JEST Mocking from node modules

I have been trying this for a while and I couldn't find the solution.
I tried a lot of solutions (a __mocks__ folder, mockImplementation, mock, and more) but I always had the same error: client.mockImplementation is not a function.
//restclient.js
module.exports = (cfg) => new Client(config);

// api.js
const client = require('restclient');
module.exports.doRequest = () => {
  const request = client();
  const config = {};
  request.get('/path/to/request', config)
    .then(result => console.log(result))
}

//api.tests.js
const client = require('restclient');
const api = require('./api');
jest.mock('restclient', () => () => ({
  get: jest.fn(),
}));
describe('testing API', () => {
  test('test then', async () => {
    try {
      restclient.mockImplementation(() => () => ({
        get: (url, config) => 'Hi! I\'m mocked',
      }));
      const result = await api.doRequest();
      console.log('result', result);
    } catch (e) {
      console.log('eee', e);
    }
  });
});
I could not find the solution. I think I can't mock the const request = restclient() part, but I don't know why!
You were missing a mock for the constructor.
This mocks the constructor of restclient:
jest.mock('restclient', () => jest.fn().mockImplementation(() => {
  /** here you can create and return a mock of request **/
}));
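For example, a minimal sketch of how the test could look with that kind of mock in place (the mocked get return value and the assertion are illustrative, not from the original answer):

const client = require('restclient');
const api = require('./api');

// Mock the module as a constructor-style jest.fn() whose instances expose a mocked get()
jest.mock('restclient', () =>
  jest.fn().mockImplementation(() => ({
    get: jest.fn().mockResolvedValue("Hi! I'm mocked"),
  }))
);

describe('testing API', () => {
  test('test then', async () => {
    await api.doRequest();
    expect(client).toHaveBeenCalled(); // the mocked constructor records its calls
  });
});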

Hide console logs in JEST test

I'm new to Jest and testing in general, and I'm having trouble figuring out the following.
I have the following script that is part of a CLI tool.
I would like to stop the spinner outputs when testing.
I have tried spyOn/mock, but to no avail.
const ora = require('ora');
const spinner = new ora();
const chalk = require('chalk');
const fs = require('fs');

module.exports = path =>
  new Promise((resolve, reject) => {
    spinner.text = chalk.blue('Creating directory...');
    spinner.start();
    fs.mkdir(path, err => {
      if (!err) {
        spinner.succeed(chalk.bgGreen('Directory created\n'));
        resolve(true);
      } else {
        spinner.fail(chalk.bgRed(`Directory already exists: ${path}`));
        reject(err);
      }
    });
  });
This is my test:
const createDir = require('./utils/createDir');
const fs = require('fs');

describe('createDir function', () => {
  const folders = {
    base: './.test',
    fail: './.test/fail',
    success: './.test/success'
  };
  beforeAll(() => {
    fs.mkdirSync(folders.fail, { recursive: true });
  });
  afterAll(() => {
    fs.rmdirSync(folders.base, { recursive: true });
  });
  it('creates the directory', async () => {
    await expect(createDir(folders.success)).resolves.toBe(true);
  });
  it('fails if directory exists', async () => {
    await expect(createDir(folders.fail)).rejects.toThrow();
  });
});
You should be able to just add
jest.mock('ora')
at the beginning of your test. It will auto-mock the entire library, replacing each of its methods with jest.fn() (without any implementation), so the calls from the implementation will have no effect on the output.
EDIT by Ben:
The functional mock turned out to be this:
jest.mock('ora', () => {
  return jest.fn().mockImplementation(() => {
    return {
      start: () => {},
      fail: () => {},
      succeed: () => {}
    };
  });
});
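The plain auto-mock apparently wasn't enough here because the module calls new ora() and then invokes methods on the instance, so the factory has to return an object exposing those methods. A variant sketch (an assumption, not part of the original answer; the directory name is a placeholder) that uses jest.fn() for the methods so the test can also assert the spinner calls:

jest.mock('ora', () => {
  // One shared stub instance, so the test can reach the same object createDir uses
  const spinnerStub = { text: '', start: jest.fn(), fail: jest.fn(), succeed: jest.fn() };
  return jest.fn(() => spinnerStub);
});

const fs = require('fs');
const ora = require('ora');
const createDir = require('./utils/createDir');

afterAll(() => fs.rmdirSync('./.test-ora'));

it('creates the directory and reports success', async () => {
  await expect(createDir('./.test-ora')).resolves.toBe(true);
  // ora() returns the shared stub, i.e. the spinner used inside createDir
  expect(ora().succeed).toHaveBeenCalled();
});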

How to test recursive function in Jest.js

I have a script that loops over directories and matches files of a specific type. Unfortunately, Jest passes this test before it ends. I know why, but I don't know how to make the script wait for the looping to finish.
import fs from 'fs'
const path = require('path');

describe('something', () => {
  it('should something', () => {
    const traverseDir = (dir, callback) => {
      fs.readdirSync(dir).forEach(file => {
        let fullPath = path.join(dir, file);
        if (fs.lstatSync(fullPath).isDirectory()) {
          callback(fullPath)
          traverseDir(fullPath, callback);
        } else {
          callback(fullPath)
        }
      });
    }
    traverseDir('src/', (fullPath) => {
      const splitted = fullPath.split('/')
      const filename = splitted[splitted.length - 1]
      if (filename.match(/.*.foo/)) {
        fs.readFile(fullPath, 'utf8', (err, data) => {
          expect(err).toBe(null)
          // some assertion
        })
      }
    })
  })
})
You could accept done as the test's parameter and call it when the test ends.
You can read more about async testing here.
import fs from "fs";
const path = require("path");
describe("something", () => {
it("should something", done => {
const traverseDir = (dir, callback) => {
fs.readdirSync(dir).forEach(file => {
let fullPath = path.join(dir, file);
if (fs.lstatSync(fullPath).isDirectory()) {
callback(fullPath);
traverseDir(fullPath, callback);
} else {
callback(fullPath);
}
});
done(); // Call done to tell Jest that the test has finished.
};
traverseDir("src/", fullPath => {
const splitted = fullPath.split("/");
const filename = splitted[splitted.length - 1];
if (filename.match(/.*.foo/)) {
fs.readFile(fullPath, "utf8", (err, data) => {
expect(err).toBe(null);
});
}
});
});
});
You should use fs.promises functions to list the contents of your directory recursively to obtain a single unified file list.
Unit test this function separately from any code that actually reads the file. (e.g.: your filename.match and readFile code should be tested separately from the traverseDir code.)
Example of walking directories asynchronously to get a unified file list:
This asynchronous allFilesIn function gets all files within a directory recursively and returns the list as a single array with full (relative) paths.
const fs = require('fs').promises;
const path = require('path');

const allFilesIn = async (dir, results = []) => {
  const files = await fs.readdir(dir);
  for (const file of files) {
    const fullPath = path.join(dir, file);
    const stat = await fs.stat(fullPath);
    if (stat.isDirectory()) {
      await allFilesIn(fullPath, results);
    } else {
      results.push(fullPath);
    }
  }
  return results;
}

// Example call:
allFilesIn('src/').then(files => {
  console.log(files); // e.g.: [ 'src\\foo.cpp', 'src\\bar.cpp', 'src\\include\\foo.h' ]
});
Once you have a single array of all the files it should be easy to use a single forEach to do something for all the files in the unified list.
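For instance, a minimal sketch of how the test itself might consume that list (the .foo extension and the err-style assertion are carried over from the question; the rest is illustrative):

const fs = require('fs').promises;

it('reads every .foo file under src/', async () => {
  const files = await allFilesIn('src/');
  const fooFiles = files.filter(f => /\.foo$/.test(f));
  // Await all the reads so Jest cannot finish before the assertions run
  await Promise.all(
    fooFiles.map(async fullPath => {
      const data = await fs.readFile(fullPath, 'utf8');
      expect(data).toBeDefined(); // some assertion
    })
  );
});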

Read and write to csv file with Node.js fast-csv library

I may be lacking some in-depth understanding of streams in general, but I would like to know how what I need should work efficiently.
I want to implement this so that a CSV file is read, then for each row a query to the database (or an API) is made and data is attached. After that, the row with the attached data is written to a new CSV file. I am using the fast-csv Node library for this.
Here is my implementation:
const fs = require("fs");
const csv = require("fast-csv");
const delay = t => new Promise(resolve => setTimeout(resolve, t));
const asyncFunction = async (row, csvStream) => {
// Imitate some stuff with database
await delay(1200);
row.data = "data";
csvStream.write(row);
};
const array = [];
const csvStream = csv.format({ headers: true });
const writeStream = fs.createWriteStream("output.csv");
csvStream.pipe(writeStream).on("finish", () => {
console.log("End of writing");
});
fs.createReadStream("input.csv")
.pipe(csv.parse({ headers: true }))
.transform(async function(row, next) {
array.push(asyncFunction(row, csvStream));
next();
})
.on("finish", function() {
console.log("finished reading file");
//Wait for all database requests and writings to be finished to close write stream
Promise.all(array).then(() => {
csvStream.end();
console.log("finished writing file");
});
});
In particular, I would like to know whether there are ways to optimize what I am doing here, because I feel that I am missing something important about how this library can be used for this type of case.
Regards,
Rokas
I was able to find a solution in the fast-csv issues section. A good person, doug-martin, provided this gist on how you can do this kind of operation efficiently via a Transform stream:
const path = require('path');
const fs = require('fs');
const { Transform } = require('stream');
const csv = require('fast-csv');
class PersistStream extends Transform {
constructor(args) {
super({ objectMode: true, ...(args || {}) });
this.batchSize = 100;
this.batch = [];
if (args && args.batchSize) {
this.batchSize = args.batchSize;
}
}
_transform(record, encoding, callback) {
this.batch.push(record);
if (this.shouldSaveBatch) {
// we have hit our batch size to process the records as a batch
this.processRecords()
// we successfully processed the records so callback
.then(() => callback())
// An error occurred!
.catch(err => callback(err));
return;
}
// we shouldnt persist so ignore
callback();
}
_flush(callback) {
if (this.batch.length) {
// handle any leftover records that were not persisted because the batch was too small
this.processRecords()
// we successfully processed the records so callback
.then(() => callback())
// An error occurred!
.catch(err => callback(err));
return;
}
// no records to persist so just call callback
callback();
}
pushRecords(records) {
// emit each record for down stream processing
records.forEach(r => this.push(r));
}
get shouldSaveBatch() {
// this could be any check, for this example it is the record count
return this.batch.length >= this.batchSize;
}
async processRecords() {
// save the records
const records = await this.saveBatch();
// be sure to emit them
this.pushRecords(records);
return records;
}
async saveBatch() {
const records = this.batch;
this.batch = [];
console.log(`Saving batch [noOfRecords=${records.length}]`);
// This is where you should save/update/delete the records
return new Promise(res => {
setTimeout(() => res(records), 100);
});
}
}
const processCsv = ({ file, batchSize }) =>
new Promise((res, rej) => {
let recordCount = 0;
fs.createReadStream(file)
// catch file read errors
.on('error', err => rej(err))
.pipe(csv.parse({ headers: true }))
// catch an parsing errors
.on('error', err => rej(err))
// pipe into our processing stream
.pipe(new PersistStream({ batchSize }))
.on('error', err => rej(err))
.on('data', () => {
recordCount += 1;
})
.on('end', () => res({ event: 'end', recordCount }));
});
const file = path.resolve(__dirname, `batch_write.csv`);
// process the file in batches of 5 records
processCsv({ file, batchSize: 5 })
.then(({ event, recordCount }) => {
console.log(`Done Processing [event=${event}] [recordCount=${recordCount}]`);
})
.catch(e => {
console.error(e.stack);
});
https://gist.github.com/doug-martin/b434a04f164c81da82165f4adcb144ec
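Since the original goal was to write each enriched row to a new CSV file, the same PersistStream can be piped onward into csv.format() and a write stream. A rough sketch of that wiring (it assumes saveBatch attaches the extra data to each record before returning it; the file names are placeholders):

const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('input.csv'),
  csv.parse({ headers: true }),
  new PersistStream({ batchSize: 100 }), // pushes each processed record downstream
  csv.format({ headers: true }),
  fs.createWriteStream('output.csv'),
  err => {
    if (err) console.error(err.stack);
    else console.log('End of writing');
  }
);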

.then() does not appear to be waiting for the previous .then()

I'm creating a process that converts multiple markdown files into a single pdf. It creates a pdf file for each .md file found in the source directory. Then it merges the individual pdf files into one pdf. It is this last step that is failing, saying the individual pdf files do not exist.
const markdownpdf = require('markdown-pdf')
const path = require('path')
const PDFMerge = require('pdf-merge')
const fse = require('fs-extra')

const srcDir = '../manuscript'
const outDir = 'out'

const main = () => {
  fse.pathExists(outDir)
    .then(() => {
      fse.remove(outDir).then(() => {
        fse.ensureDir(outDir)
      }).then(() => {
        return fse.readdir(srcDir)
      }).then((srcDirFiles) => {
        console.log('source directory file count = ', srcDirFiles.length)
        return srcDirFiles.filter(f => path.extname(f) === '.md')
      }).then((mdFiles) => {
        console.log('number of md files', mdFiles.length);
        return mdFiles.map(file => {
          const outFileName = `${path.basename(file, '.md')}.pdf`
          fse.createReadStream(`${srcDir}/${file}`)
            .pipe(markdownpdf())
            .pipe(fse.createWriteStream(`${outDir}/${outFileName}`))
          return `${outDir}/${outFileName}`
        })
      }).then(outFiles => {
        console.log('number of pdf files created =', outFiles.length)
        PDFMerge(outFiles, { output: `${__dirname}/3.pdf` })
      })
    })
}

main()
If I wrap the PDFMerge() line in setTimeout() it does work
setTimeout(() => {
  PDFMerge(outFiles, { output: `${__dirname}/3.pdf` })
}, 1000)
I'm wondering why the setTimeout() is needed and what needs to be changed so it isn't.
I also wrote an async/await version that had the same problem and also worked with setTimeout().
Edit
In response to Zach Holt's suggestion, here is the async/await version:
const markdownpdf = require('markdown-pdf')
const path = require('path')
const PDFMerge = require('pdf-merge')
const fse = require('fs-extra')
const srcDir = '../manuscript'
const outDir = 'out'
const createPdf = async (file) => {
try {
const outFileName = `${path.basename(file, '.md')}.pdf`
await fse.createReadStream(`${srcDir}/${file}`)
.pipe(markdownpdf())
.pipe(await fse.createWriteStream(`${outDir}/${outFileName}`))
}
catch (e) {
console.log(e)
}
}
const makePdfFiles = (files) => {
files.forEach(file => {
if (path.extname(file) === '.md') {
createPdf(file)
}
})
}
const mergeFiles = async (files) => {
try {
await PDFMerge(files, {output: `${__dirname}/3.pdf`})
}
catch (e) {
console.log(e)
}
}
const addPathToPdfFiles = (files) => {
return files.map(file => {
return `${outDir}/${file}`
})
}
const main = async () => {
try {
const exists = await fse.pathExists(outDir)
if (exists) {
await fse.remove(outDir)
}
await fse.ensureDir(outDir)
const mdFiles = await fse.readdir(srcDir)
const filesMade = await makePdfFiles(mdFiles)
const pdfFiles = await fse.readdir(outDir)
const pdfFilesWithPath = addPathToPdfFiles(pdfFiles)
mergeFiles(pdfFilesWithPath)
// setTimeout(() => {
// mergeFiles(pdfFilesWithPath)
// }, 1000)
} catch (e) {
console.log(e)
}
}
It has the same problem.
I also tried:
const makePdfFiles = files => {
return new Promise((resolve, reject) => {
try {
files.forEach(file => {
if (path.extname(file) === '.md') {
createPdf(file)
}
})
resolve(true)
} catch (e) {
reject(false)
console.log('makePdfFiles ERROR', e)
}
})
}
But it made no difference.
You need to return the promise from ensureDir() to make it wait for it.
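In the poster's chain that change would look roughly like this (just the relevant fragment, using the names from the question):

fse.remove(outDir).then(() => {
  return fse.ensureDir(outDir) // returned, so the next .then() waits for the directory to exist
}).then(() => {
  return fse.readdir(srcDir)
})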
I think the issue might be that you're creating a read stream for each of the .md files, but not waiting for the reads to finish before trying to merge outFiles.
You could likely wait until the outFiles length is the same as the number of md files found before merging.
Also, you should stick with async/await for this. It'll keep the code much clearer.
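For the waiting part, one possible sketch (not the poster's code; it reuses srcDir, outDir and the requires from the question) is to wrap each pipe in a promise that resolves on the write stream's 'finish' event, then Promise.all them before merging:

const createPdf = (file) =>
  new Promise((resolve, reject) => {
    const outFileName = `${path.basename(file, '.md')}.pdf`
    fse.createReadStream(`${srcDir}/${file}`)
      .pipe(markdownpdf())
      .pipe(fse.createWriteStream(`${outDir}/${outFileName}`))
      .on('finish', () => resolve(`${outDir}/${outFileName}`)) // pdf fully written
      .on('error', reject)
  })

const main = async () => {
  await fse.remove(outDir)
  await fse.ensureDir(outDir)
  const srcDirFiles = await fse.readdir(srcDir)
  const mdFiles = srcDirFiles.filter(f => path.extname(f) === '.md')
  // Wait for every individual pdf to finish writing before merging
  const outFiles = await Promise.all(mdFiles.map(createPdf))
  await PDFMerge(outFiles, { output: `${__dirname}/3.pdf` })
}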
Let me over-simplify your code to illustrate the problem:
p1.then(() => {
p2.then().then().then()
}).then(/* ??? */)
which is the same as:
p1.then(() => {
p2.then().then().then()
return undefined
}).then(/* undefined */)
What you need for chaining is to return the inner Promise:
p1.then(() => // no {code block} here, just return value
p2.then().then().then()
).then(/* ??? */)
which is the same as:
p1.then(() => {
p3 = p2.then()
p4 = p3.then()
p5 = p4.then()
return p5
}).then(/* p5 */)
As far as I can tell the original problem was the approach and not the obvious errors correctly pointed out by others. I found a much simpler solution to the overall goal of producing a single pdf from multiple md files.
const markdownpdf = require('markdown-pdf')
const path = require('path')
const fse = require('fs-extra')

const srcDir = '../manuscript'

const filterAndAddPath = (files) => {
  try {
    const mdFiles = files
      .filter(f => path.extname(f) === '.md')
      .map(f => `${srcDir}/${f}`)
    return mdFiles
  }
  catch (e) {
    console.log('filterAndAddPath', e)
  }
}

const main4 = async () => {
  const allFiles = await fse.readdir(srcDir)
  const mdFiles = filterAndAddPath(allFiles)
  const bookPath = 'book.pdf'
  markdownpdf()
    .concat.from(mdFiles)
    .to(bookPath, function() {
      console.log('Created', bookPath)
    })
}

main4()
