Let's say we are using setInterval inside of a hapi plugin, like so:
// index.js
const internals = {
  storage: {},
  problemFunc: (storage) => {
    setInterval(() => {
      Object.values(storage).forEach((problem) => {
        problem.foo = 'bar';
      });
    }, 500);
  }
};
module.exports.register = (server, options, next) => {
  server.on('start', () => { // called when the server starts
    internals.storage.problem1 = {};
    internals.storage.problem2 = {};
    internals.storage.problem3 = {};
    internals.problemFunc(internals.storage);
  });
  next();
};
In our tests for this server, we may start and stop the server many times to test different aspects of it. Sometimes we get an error like cannot set property 'foo' of undefined. This is because the server gets shut down right before that async code runs, and the entries in internals.storage are removed along with the server stop.
This makes total sense, and I don't have a problem with that. I'd really like to know what would be some good ways to make sure my tests work 100% of the time rather than 90% of the time.
We could do:
problemFunc: (storage) => {
  setInterval(() => {
    Object.values(storage).forEach((problem) => {
      if (problem !== undefined) { // check if deleted
        problem.foo = 'bar';
      }
    });
  }, 500);
}
or:
problemFunc: (storage = {}) => { // default assignment
  setInterval(() => {
    Object.values(storage).forEach((problem) => {
      problem.foo = 'bar';
    });
  }, 500);
}
But I would rather not add conditionals to my code just so that my tests pass. It also makes it hard to keep 100% code coverage, because that conditional branch will run on some test runs and not others. What would be a better way to go about this?
It's absolutely normal to have slight differences in setup and configuration when running code in a test environment.
A simple approach is to let the application know the current environment, so it can obtain the appropriate configuration and set up the service correctly. Common environments are testing, development, staging and production.
Simple example, using an environment variable:
// env.js
module.exports.getName = function () {
  return process.env['ENV'] || 'development';
};

// main.js
const env = require('./env');

if (env.getName() !== 'testing') {
  scheduleBackgroundTasks();
}
Then run your tests passing the ENV variable, or tell your test runner to do it:
ENV=testing npm test
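If the assignment needs to work across shells (e.g. on Windows), it can also live in the npm script itself. A sketch, assuming mocha as the runner and the optional cross-env package for portability:
// package.json
{
  "scripts": {
    "test": "cross-env ENV=testing mocha"
  }
}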
Related
I have multiple .js test files and I would like to be able to run the tests in each file multiple times, for example 5 or 10 times in a row. Even if it's just one file at a time, but the same file multiple times.
I was trying to unload the module from memory and then load it again multiple times to see if it worked, but it doesn't. Check:
const testFilePath = './myFileWithOneMochaTestSuit.js';

for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // unload the module from memory
  require(testFilePath);
}
The reason I want to do this is that I have some integration tests that sometimes fail, and I want to be able to run them as many times as I need in order to analyze the failures.
All my test files look something like this:
// test1.js
describe('test suite 1', () => {
  //...
  before(() => {
    //...
  });
  it(`test 1...`, async () => {
    //...
  });
  it('test 2', () => {
    //...
  });
  // more tests
});
// test2.js
describe('test suite 2', () => {
  //...
  before(() => {
    //...
  });
  it(`test 1...`, async () => {
    //...
  });
  it('test 2', () => {
    //...
  });
  // more tests
});
So, what I need is to be able to run the test suite in test1.js multiple times in a row, then the one in test2.js multiple times in a row, and so on.
I have not been able to do that.
Any help would be appreciated.
I tried loading the files with require multiple times, but they only ran once, the first time.
I tried removing the cached module from memory and loading it again with require, but that didn't work.
Weird! It started working all of a sudden once I assigned the return value of require to a variable and printed it:
for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // unload the module from memory
  const m = require(testFilePath);
  console.log(m);
}
But then I removed the console.log(m) to see if it kept working, and it did:
for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // unload the module from memory
  const m = require(testFilePath);
}
Finally I removed the const m = to see if it kept working, and it did!
for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // unload the module from memory
  require(testFilePath);
}
So I basically got back to where I started, but now it was working: unloading the test files, loading the modules again, and running the tests multiple times, as I wanted.
I don't know how this happened with basically no difference in the code, but I'm glad it did.
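For a more deterministic approach than relying on require-cache behavior, Mocha's programmatic API can run the same file in a loop. A sketch, assuming Mocha 7.1+ (which added unloadFiles()):
// run-repeated.js
const Mocha = require('mocha');

const testFilePath = './myFileWithOneMochaTestSuit.js';

function runOnce() {
  return new Promise((resolve) => {
    const mocha = new Mocha();
    mocha.addFile(testFilePath);
    mocha.run((failures) => {
      mocha.unloadFiles(); // drop the file from require.cache so the next run re-registers its suites
      resolve(failures);
    });
  });
}

(async () => {
  for (let i = 0; i < 5; i++) {
    await runOnce();
  }
})();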
TL;DR: if I do var pty = require('node-pty');, it results in TypeError: Object.setPrototypeOf: expected an object or null, got undefined. Keep reading for context.
Hi, I'm trying to build a proof of concept by creating a terminal using React. For that, I used xterm-for-react, which I got working fine, and node-pty, which is the library I'm having problems with.
Initially I created a file to wrap the calls to it; it looks like this:
var os = require('os');
var pty = require('node-pty');

var shell = os.platform() === 'win32' ? 'powershell.exe' : 'bash';
var ptyProcess;

function createNewTerminal(FE) {
  ptyProcess = pty.spawn(shell, [], {
    name: 'xterm-color',
    cols: 80,
    rows: 30,
    cwd: process.env.HOME,
    env: process.env
  });
  ptyProcess.onData((data) => FE.write(data));
}

function writeOnTerminal(data) {
  ptyProcess.write(data);
}

module.exports = {
  createNewTerminal,
  writeOnTerminal
};
I know it may not be the best code out there, but I was just trying to see if this was possible. My plan was to call the functions from the React component like this:
import { createNewTerminal, writeOnTerminal } from './terminal-backend';

function BashTerminal() {
  const xtermRef = React.useRef(null);

  React.useEffect(() => {
    // You can call any method in xterm.js via xtermRef.current.terminal.[what you want to call]
    xtermRef.current.terminal.writeln("Hello, World!");
    createNewTerminal(xtermRef.current.terminal);
  }, []);

  const onData = (data) => {
    writeOnTerminal(data);
  };

  return (
    <XTerm ref={xtermRef} onData={onData} />
  );
}
But I was surprised to find that this wasn't working and returned the error in the title. So, to reduce noise, I changed my functions to plain console logs and kept just the requires. My file then looked like this:
var os = require('os');
var pty = require('node-pty');

function createNewTerminal(FE) {
  console.log("Creating new console");
}

function writeOnTerminal(data) {
  console.log("Writing in terminal");
}

module.exports = {
  createNewTerminal,
  writeOnTerminal
};
I still got the same error. I'm currently not sure if this is even possible to do, or why this error occurs. Searching online doesn't turn up any results, or maybe it does and I'm just not doing it right. Thanks for reading; I'm completely lost, so if someone knows something, even if it's not the complete answer, I will be very thankful.
I need to run similar tests on a bunch of different files, and I do so using a single test suite file.
Example:
for (const configName in defaults) {
  const config = defaults[configName];
  const onMobile = testMobile.includes(configName);
  const onDesktop = testDesktop.includes(configName);

  describe(`${configName}`, () => {
    const tests = (isDesktop: boolean) => {
      let defaultConfig: AttnConfig<MultiPageCreativeConfig>;

      function freshRender() {
        cleanup();
        return render(
          <TestWrapper isDesktop={isDesktop} layout={defaultConfig.creativeConfig.base.fields.layout.layout}>
            <ConfigCtx.Provider value={defaultConfig}>
              <App />
            </ConfigCtx.Provider>
          </TestWrapper>,
          {
            container: document.documentElement
          }
        );
      }

      beforeEach(() => {
        jest.clearAllMocks();
        mockedUseWaitForPageLoad.mockReturnValue(false);
        mockedUseResponsiveLayout.mockReturnValue([isDesktop]);
        defaultConfig = attnTestConfigWrapper(config);
      });

      afterEach(cleanup);

      for (let page = 0; page < config.pages.length; page++) {
        describe(`Page ${page + 1}`, () => {
          beforeEach(() => {
            defaultConfig.overrides.currentPageIndex = page;
            freshRender();
          });

          itPassesVisualRegressionTests(!isDesktop);
        });
      }
    };

    if (onMobile) tests(false);
    if (onDesktop) tests(true);
  });
}
This way, however, does not take advantage of multithreading. Since I will only ever run these tests on their own, it takes considerably longer (about two to three times longer) than writing a separate test file for each config.
As much as I would like to write the tests in individual files, that causes a lot of extra work whenever I need to refactor something (I've already had to change these files several times).
Is there a way to generate test suites for Jest to run in parallel, or at least to break the test logic out into a shared utility function?
I added concurrency to my tests:
export function itPassesVisualRegressionTests(mobile = false, debug = false) {
  const itPassesVisualRegressionTestsOn = (browser: string) => {
    it.concurrent(`has no visual regressions on ${mobile ? 'mobile' : 'desktop'} ${browser}`, async () => {
      await sleep(400); // wait for animations
      const html = document.documentElement.outerHTML;
      // Pixel 3a (adjusted for high dpi) and generic 1080p desktop
      // Note we use a high-res phone due to desktop browsers having a minimum width
      const viewport = mobile
        ? { width: 540, height: 1110 }
        : { width: 1080, height: 720 };
      const image = await fetchSnapshot(html, { browser, debug, viewport });
      expect(image).toBeTruthy();
      expect(image).toMatchImageSnapshot();
    });
  };

  itPassesVisualRegressionTestsOn('chrome');
  itPassesVisualRegressionTestsOn('firefox');
  itPassesVisualRegressionTestsOn('operablink');
}
Edit: I tested this, and it causes problems when talking to the snapshot server. If I could create locks around fetchSnapshot, I should be fine...
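For what it's worth, a minimal in-process lock can serialize those calls. A sketch (withLock is a made-up helper; fetchSnapshot is the function from the snippet above), with the caveat that it only serializes tests within a single Jest worker, since workers are separate processes:
let chain = Promise.resolve();

function withLock(fn) {
  const run = chain.then(fn, fn); // queue fn behind whatever is already running
  chain = run.catch(() => {});    // keep the queue alive even if fn rejects
  return run;
}

// const image = await withLock(() => fetchSnapshot(html, { browser, debug, viewport }));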
I have an instability issue.
I'm using openzeppelin-test-helpers, and first had an issue with TypeScript saying Could not find a declaration file for module '@openzeppelin/test-helpers'. This was solved by creating a .d.ts file containing declare module "@openzeppelin/test-helpers";.
However, adding this created a new problem: now, most of the time, only one file is run by rm -rf build && truffle test (I guess this is similar to truffle test --reset).
I got 2 test files. The first one looks like:
require("chai")
.use(require("chai-as-promised"))
.should();
const EventHandler = artifacts.require("EventHandler");
const { expectRevert } = require("#openzeppelin/test-helpers");
contract("EventHandler", function([_, superAdmin0, admin0, device0]) {
beforeEach(async function() {
this.eventHandler = await EventHandler.new(superAdmin0);
});
describe("Initial tests", function() {
it("should print hello", async function() {
await this.eventHandler
.printHello()
.should.eventually.equal("Hello");
});
});
});
The second one looks like:
require("chai")
.use(require("chai-as-promised"))
.should();
const { expectRevert } = require("#openzeppelin/test-helpers");
const EventHandler = artifacts.require("EventHandler");
contract("Roles", function([_, superAdmin0, superAdmin1, admin0, device0]) {
beforeEach(async function() {
this.EventHandler = await EventHandler.new(superAdmin0);
});
it("...should work", async function() {});
});
When I comment out the content of one file, or just what's inside contract(..., {}), the other file works just fine and the tests pass successfully.
However, whenever I leave both files uncommented, I get a massive error:
Error: Returned values aren't valid, did it run Out of Gas?
Of course, restarting ganache-cli didn't solve anything...
Does anyone know where this could come from?
I am writing test cases for a Node.js API, but wherever console.log() appears in the routes or services of the Node.js app, it gets printed to the CLI. Is there a way to mock these calls so they won't get printed to the CLI?
I have explored a couple of libraries, like Sinon and its stubs, for mocking, but couldn't grasp how they work.
You can override the function entirely: console.log = function () {};
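If you'd rather use Sinon (which the question mentions), a stub does the same thing and restores cleanly afterwards; a minimal sketch:
const sinon = require('sinon');

beforeEach(() => {
  sinon.stub(console, 'log'); // replace console.log with a no-op stub
});

afterEach(() => {
  sinon.restore(); // put the real console.log back
});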
You should not try to mock console.log itself; a better approach is for your node modules to take a logging object. This allows you to provide an alternative (i.e. a mock) during testing. For example:
<my_logger.js>
module.exports = {
  err: function (message) {
    console.log(message);
  }
};

<my_module.js>
var DefaultLogger = require('./my_logger.js');

module.exports = function (logger) {
  this.log = logger || DefaultLogger;
  // Other setup goes here
};

module.exports.prototype.myMethod = function () {
  this.log.err('Error message.');
};

<my_module_test.js>
var MyModule = require('./my_module.js');

describe('Test Example', function () {
  var log_mock = { err: function (msg) {} };

  it('Should not output anything.', function () {
    var obj = new MyModule(log_mock);
    obj.myMethod();
  });
});
I've simplified the code here, as the actual test isn't the point of the example; it's merely about injecting alternative logging.
If you have a large codebase with lots of console.log calls, it is better to simply update the code as you add tests for each method. Making your logging pluggable in this way makes your code easier to test. Also, there are many logging frameworks available for Node. console.log is fine during development, when you just want to dump out something to see what's going on, but if possible, avoid using it as your logging solution.
I could not find a solution that only hides the console.log calls in the module under test, without mocking any of the calls made by the testing framework (mocha/chai in my case).
I came up with using a copy of console in the app code:
/* console.js */
module.exports = console;

/* app.js */
const console = require('./console');
console.log("I'm hidden in the tests");

/* app.spec.js */
const mockery = require('mockery');

var app;

before(() => {
  // Mock console
  var consoleMock = {
    log: () => {}
  };
  mockery.enable({ warnOnUnregistered: false }); // mockery must be enabled for registered mocks to take effect
  mockery.registerMock('./console', consoleMock);
  // Require the module under test after mocking
  app = require('./app');
});

after(() => {
  mockery.deregisterAll();
  mockery.disable();
});

it('works', () => {});
You could do something along the lines of adding these beforeEach/afterEach blocks to your tests, but the issue is that mocha itself uses console.log to print the pretty messages about the test results, so you would lose those:
describe('Test Name', function () {
  var originalLog;

  beforeEach(function () {
    originalLog = console.log;
    console.log = function () {};
  });

  // test code here

  afterEach(function () {
    console.log = originalLog;
  });
});
The problem is that your output would then just look like:
Test Name
X passing (Yms)
without any intermediate text.