Generating jest test suites as separate files instead of single file - javascript

I need to run similar tests on a bunch of different files, and I do so using a single test suite file.
Example:
for (const configName in defaults) {
  const config = defaults[configName];
  const onMobile = testMobile.includes(configName);
  const onDesktop = testDesktop.includes(configName);

  describe(`${configName}`, () => {
    const tests = (isDesktop: boolean) => {
      let defaultConfig: AttnConfig<MultiPageCreativeConfig>;

      function freshRender() {
        cleanup();
        return render(
          <TestWrapper isDesktop={isDesktop} layout={defaultConfig.creativeConfig.base.fields.layout.layout}>
            <ConfigCtx.Provider value={defaultConfig}>
              <App />
            </ConfigCtx.Provider>
          </TestWrapper>,
          {
            container: document.documentElement
          }
        );
      }

      beforeEach(() => {
        jest.clearAllMocks();
        mockedUseWaitForPageLoad.mockReturnValue(false);
        mockedUseResponsiveLayout.mockReturnValue([isDesktop]);
        defaultConfig = attnTestConfigWrapper(config);
      });

      afterEach(cleanup);

      for (let page = 0; page < config.pages.length; page++) {
        describe(`Page ${page + 1}`, () => {
          beforeEach(() => {
            defaultConfig.overrides.currentPageIndex = page;
            freshRender();
          });

          itPassesVisualRegressionTests(!isDesktop);
        });
      }
    };

    if (onMobile) tests(false);
    if (onDesktop) tests(true);
  });
}
This way, however, does not take advantage of Jest's multithreading: because everything sits in a single test file, it all runs in one worker. Since I will only ever be running these tests on their own, this takes considerably longer (about two to three times longer) than writing a separate test file for each config.
As much as I would like to write the tests in individual files, that causes a lot of extra work whenever I need to refactor something (I've already had to change these files several times).
Is there a way to generate test suites for Jest to run in parallel, or at least to break the test logic out into a shared utility function?

I added concurrency to my tests:
export function itPassesVisualRegressionTests(mobile = false, debug = false) {
  const itPassesVisualRegressionTestsOn = (browser: string) => {
    it.concurrent(`has no visual regressions on ${mobile ? 'mobile' : 'desktop'} ${browser}`, async () => {
      await sleep(400); // wait for animations
      const html = document.documentElement.outerHTML;
      // Pixel 3a (adjusted for high dpi) and generic 1080p desktop
      // Note we use a high-res phone due to desktop browsers having a minimum width
      const viewport = mobile
        ? { width: 540, height: 1110 }
        : { width: 1080, height: 720 };
      const image = await fetchSnapshot(html, { browser, debug, viewport });
      expect(image).toBeTruthy();
      expect(image).toMatchImageSnapshot();
    });
  };

  itPassesVisualRegressionTestsOn('chrome');
  itPassesVisualRegressionTestsOn('firefox');
  itPassesVisualRegressionTestsOn('operablink');
}
Edit: I tested this, and it causes problems when talking to the snapshot server. If I could create locks around fetchSnapshot I should be fine...
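One way to get that lock without giving up it.concurrent is to serialize only the fetchSnapshot calls behind a shared promise chain. A minimal sketch; withLock is an illustrative helper, while fetchSnapshot and its options are the ones from the snippet above:

// a minimal promise-chain mutex; only one task runs at a time
let queue: Promise<unknown> = Promise.resolve();

export function withLock<T>(task: () => Promise<T>): Promise<T> {
  const result = queue.then(task);        // start only after every previously queued task settles
  queue = result.catch(() => undefined);  // keep the chain alive even if a task rejects
  return result;
}

// inside the test body:
// const image = await withLock(() => fetchSnapshot(html, { browser, debug, viewport }));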

Related

How to run a .js file containing a Mocha suite multiple times in a loop?

I have multiple .js test files and I would like to be able to run the tests in each file multiple times. For example, I would like to run each file 5 or 10 times in a row. Even if it's just one file at a time, but the same file multiple times.
I was trying to unload the module from memory and then load it again several times to see if that worked, but it doesn't. Check:
const testFilePath = './myFileWithOneMochaTestSuit.js';

for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(test)]; // Unloading the module in memory
  require(testFilePath);
}
The reason I want to do this is that I have some integration tests that sometimes fail, and I want to be able to run them as many times as I need in order to analyze the failures.
All my test files look something like this:
// test1.js
describe('test suit 1', () => {
  //...
  before(() => {
    //...
  });
  it(`test 1...`, async () => {
    //...
  });
  it('test 2', () => {
    //...
  });
  // more tests
});

// test2.js
describe('test suit 2', () => {
  //...
  before(() => {
    //...
  });
  it(`test 1...`, async () => {
    //...
  });
  it('test 2', () => {
    //...
  });
  // more tests
});
So, what I need is to be able to run the test suite in test1.js multiple times in a row, then the one in test2.js multiple times in a row, and so on.
I have not been able to do that.
Any help would be appreciated.
I tried loading the files with require multiple times, but they only run once, the first time.
I tried removing the cached module from memory and loading it again with require, but that didn't work.
Weird! It started working all of a sudden once I assigned the return value of require to a variable and printed it, like this:
for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // Unloading the module in memory
  const m = require(testFilePath);
  console.log(m);
}
But then I removed the console.log(m) to see if it kept working, and it did:
for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // Unloading the module in memory
  const m = require(testFilePath);
}
Finally I removed the const m = to see if it kept working, and it did!
for (let i = 0; i < 5; i++) {
  delete require.cache[require.resolve(testFilePath)]; // Unloading the module in memory
  require(testFilePath);
}
So I basically got back to where I started, but now it was working: unloading the test files, reloading them, and running the tests multiple times as I wanted.
I don't know how this happened with basically no difference in the code, but I'm glad it did.
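If you prefer to make the loop explicit rather than rely on the require cache alone, Mocha's programmatic API (new Mocha(), addFile(), run()) can drive the same thing. A sketch; runSuiteNTimes is an illustrative name and the cache deletion mirrors the snippet above:

const Mocha = require('mocha');
const path = require('path');

async function runSuiteNTimes(testFilePath, times) {
  const absolutePath = path.resolve(testFilePath);
  for (let i = 0; i < times; i++) {
    delete require.cache[require.resolve(absolutePath)]; // force the suite file to be re-required
    const mocha = new Mocha();                           // a fresh runner per iteration
    mocha.addFile(absolutePath);
    // run() reports the number of failing tests to its callback
    const failures = await new Promise((resolve) => mocha.run(resolve));
    console.log(`run ${i + 1}/${times}: ${failures} failure(s)`);
  }
}

runSuiteNTimes('./test1.js', 5);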

Delayed read performance when using navigator.serial for serial communication

I've been trying out the Web Serial API in Chrome (https://web.dev/serial/) to do some basic communication with an Arduino board. However, I've noticed quite a substantial delay when reading data from the serial port. The same issue is present in some demos, but not all.
For instance, the WebSerial demo linked towards the bottom of that page has a near-instantaneous read, while the Serial Terminal example results in a read delay (note that the write is triggered the moment a character is entered on the keyboard).
WebSerial being open source allows me to check for differences against my own implementation, yet I am seeing performance much like the second example.
As for the relevant code:
this.port = await navigator.serial.requestPort({ filters });
await this.port.open({ baudRate: 115200, bufferSize: 255, dataBits: 8, flowControl: 'none', parity: 'none', stopBits: 1 });
this.open = true;
this.monitor();

private monitor = async () => {
  const dataEndFlag = new Uint8Array([4, 3]);
  while (this.open && this.port?.readable) {
    this.open = true;
    const reader = this.port.readable.getReader();
    try {
      let data: Uint8Array = new Uint8Array([]);
      while (this.open) {
        const { value, done } = await reader.read();
        if (done) {
          this.open = false;
          break;
        }
        if (value) {
          data = Uint8Array.of(...data, ...value);
        }
        if (data.slice(-2).every((val, idx) => val === dataEndFlag[idx])) {
          const decoded = this.decoder.decode(data);
          this.messages.push(decoded);
          data = new Uint8Array([]);
        }
      }
    } catch {
    }
  }
}

public write = async (data: string) => {
  if (this.port?.writable) {
    const writer = this.port.writable.getWriter();
    await writer.write(this.encoder.encode(data));
    writer.releaseLock();
  }
}
The equivalent WebSerial code can be found here; this is pretty much an exact replica. From what I can observe, it seems to hang at await reader.read(); for a brief period of time.
This is occurring both on a Windows 10 device and a macOS Monterey device. The specific hardware device is an Arduino Pro Micro connected to a USB port.
Has anyone experienced this same scenario?
Update: I did some additional testing with more verbose logging. It seems that the time between the write and read is exactly 1 second every time.
The delay may result from how serial input is read in your Arduino sketch (e.g. SerialEvent()): set Serial.setTimeout(1);.
That means 1 millisecond instead of the default 1000 milliseconds, which matches the exactly-one-second gap you measured.
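This doesn't change the Arduino-side timeout, but on the JavaScript side the hand-rolled Uint8Array concatenation can also be replaced with a TextDecoderStream read loop, a pattern also shown in the web.dev article. A sketch; readMessages and messages are illustrative stand-ins for the class members in the question:

// decode incoming bytes to text as they arrive and split on the end-of-message flag
async function readMessages(port: SerialPort, messages: string[]): Promise<void> {
  const textDecoder = new TextDecoderStream();
  port.readable.pipeTo(textDecoder.writable);
  const reader = textDecoder.readable.getReader();

  let buffer = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    // '\u0004\u0003' is the text form of the [4, 3] dataEndFlag used above
    let end: number;
    while ((end = buffer.indexOf('\u0004\u0003')) !== -1) {
      messages.push(buffer.slice(0, end));
      buffer = buffer.slice(end + 2);
    }
  }
}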

429 Too Many Requests - Angular 7 - on multiple file upload

I have this problem when I try to upload more than a few hundred files at the same time.
The API only accepts one file per request, so I have to call the service once per file. Right now I have this:
onFilePaymentSelect(event): void {
  if (event.target.files.length > 0) {
    this.paymentFiles = event.target.files[0];
  }
  let i = 0;
  let save = 0;
  const numFiles = event.target.files.length;
  let procesed = 0;
  if (event.target.files.length > 0) {
    while (event.target.files[i]) {
      const formData = new FormData();
      formData.append('file', event.target.files[i]);
      this.payrollsService.sendFilesPaymentName(formData).subscribe(
        (response) => {
          let added = null;
          procesed++;
          if (response.status_message === 'File saved') {
            added = true;
            save++;
          } else {
            added = false;
          }
          this.payList.push({ filename, message, added });
        });
      i++;
    }
  }
}
So I really have a while loop sending each file to the API, but with a high number of files I get a "429 Too Many Requests" response. Is there any way I can improve this?
Working with observables will make that task easier to reason about (rather than using imperative programming).
A browser usually allows you to make 6 requests in parallel and will queue the others. But we don't want the browser to manage that queue for us (and if we were running in a Node environment we wouldn't have it at all, for example).
What we want: to upload a lot of files, queued and uploaded as efficiently as possible by running 5 requests in parallel at all times (so we keep 1 free for other requests in our app).
In order to demo that, let's build some mocks first:
function randomInteger(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

const mockPayrollsService = {
  sendFilesPaymentName: (file: File) => {
    return of(file).pipe(
      // simulate a 500ms to 1.5s network latency from the server
      delay(randomInteger(500, 1500))
    );
  }
};

// array containing 50 files which are mocked
const files: File[] = Array.from({ length: 50 })
  .fill(null)
  .map(() => new File([], ""));
I think the code above is self-explanatory: we are generating mocks so we can see how the core of the code will actually run without having access to your real application.
Now, the main part:
const NUMBER_OF_PARALLEL_CALLS = 5;

const onFilePaymentSelect = (files: File[]) => {
  const uploadQueue$ = from(files).pipe(
    map(file => mockPayrollsService.sendFilesPaymentName(file)),
    mergeAll(NUMBER_OF_PARALLEL_CALLS)
  );

  uploadQueue$
    .pipe(
      scan(nbUploadedFiles => nbUploadedFiles + 1, 0),
      tap(nbUploadedFiles =>
        console.log(`${nbUploadedFiles}/${files.length} file(s) uploaded`)
      ),
      tap({ complete: () => console.log("All files have been uploaded") })
    )
    .subscribe();
};

onFilePaymentSelect(files);
We use from to emit the files one by one into an observable.
Using map, we prepare the request for one file (but as we don't subscribe to it and the observable is cold, the request is only prepared, not triggered!).
We then use mergeAll (map followed by mergeAll is equivalent to mergeMap) to run a pool of calls; because it takes the concurrency as an argument, we can say "please run a maximum of 5 calls at the same time".
Finally we use scan for display purposes only (to count the number of files that have been uploaded successfully).
Here's a live demo: https://stackblitz.com/edit/rxjs-zuwy33?file=index.ts
Open up the console to see that we're not uploading them all at once.
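Adapted back to the original component, the same pattern can be written with mergeMap and its concurrency argument. This is only a sketch under assumptions: uploadAll is an illustrative helper, the result shape ({ filename, added }) mirrors the fields used in the question, and catchError is added so one failed upload doesn't stop the queue:

import { from, Observable, of } from 'rxjs';
import { catchError, map, mergeMap } from 'rxjs/operators';

const NUMBER_OF_PARALLEL_CALLS = 5;

// `send` stands in for payrollsService.sendFilesPaymentName
function uploadAll(
  files: File[],
  send: (formData: FormData) => Observable<{ status_message: string }>
): Observable<{ filename: string; added: boolean }> {
  return from(files).pipe(
    mergeMap((file) => {
      const formData = new FormData();
      formData.append('file', file);
      return send(formData).pipe(
        map((response) => ({ filename: file.name, added: response.status_message === 'File saved' })),
        catchError(() => of({ filename: file.name, added: false })) // a failed upload should not kill the queue
      );
    }, NUMBER_OF_PARALLEL_CALLS) // never more than 5 requests in flight
  );
}

// usage inside the component:
// uploadAll(Array.from(event.target.files), fd => this.payrollsService.sendFilesPaymentName(fd))
//   .subscribe(result => this.payList.push(result));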

Very high memory/cpu usage while testing React components with Karma in Chrome

Currently in my project I'm using Karma and Mocha, with React Test Utils, to test my React components.
I currently have around 1200 tests, most of which are for my library of React components.
When these tests run, Chrome often exceeds 2 GB of memory and its CPU usage spikes well over 30%.
Most of my tests look something like this:
const React = require('react');
const TestUtils = require('react-dom/test-utils');
const ReactDOM = require('react-dom');
const expect = require('chai').expect;

const Users = require('./../../../../../../client/components/contentComponents/Example.jsx');

const componentProps = () => {
  return {
    exampleProp: 'ExampleProp'
  };
};

describe('Example Test Block', () => {
  it('Component should be rendered on the page', () => {
    const objComponent = TestUtils.renderIntoDocument(React.createElement(Users, componentProps()));
    const objComponentHtmlElement = ReactDOM.findDOMNode(objComponent);
    expect(objComponentHtmlElement).to.not.be.undefined;
  });

  it('Example Test 1', () => {
    const objComponent = TestUtils.renderIntoDocument(React.createElement(Users, componentProps()));
    expect(ExampleAssertion).to.equal(true);
  });
});
Is there anything obvious here that would cause such high CPU and memory usage? Is the usage I'm seeing expected with this number of tests?
I can see that while the tests run, the Chrome window fills up with lots of different rendered components at once, seemingly without unmounting them. Am I maybe missing a step in my tests where the rendered component needs to be unmounted or destroyed?
Nothing immediate jumps out at me, but I do wonder if there are pieces of your components which are left in the DOM after your tests. Here is a recommended test cleanup procedure from a Forbes article to address one aspect of this in Angular tests. It may apply to React as well:
export function cleanStylesFromDOM(): void {
  const head: HTMLHeadElement = document.getElementsByTagName('head')[0];
  const styles: HTMLCollectionOf<HTMLStyleElement> | [] = head.getElementsByTagName('style');
  // iterate backwards: the HTMLCollection is live and shrinks as children are removed
  for (let i: number = styles.length - 1; i >= 0; i--) {
    head.removeChild(styles[i]);
  }
}

afterAll(() => {
  cleanStylesFromDOM();
});
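If the components really are never unmounted, another thing worth trying is rendering into an explicit container and unmounting it after every test. A sketch using react-dom's ReactDOM.render and ReactDOM.unmountComponentAtNode (pre-React-18 APIs, matching the setup above); the container bookkeeping and renderComponent helper are illustrative:

const ReactDOM = require('react-dom');

let container;

beforeEach(() => {
  // give each test its own mount point
  container = document.createElement('div');
  document.body.appendChild(container);
});

afterEach(() => {
  // tear down the component tree and its event handlers
  ReactDOM.unmountComponentAtNode(container);
  container.remove();
});

// render into the tracked container instead of TestUtils.renderIntoDocument
function renderComponent(element) {
  return ReactDOM.render(element, container);
}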

How to deal with setInterval in server tests

Let's say we are using setInterval inside a hapi plugin, like so:
// index.js
// index.js
const internals = {
  storage: {},
  problemFunc: (storage) => {
    setInterval(() => {
      Object.values(storage).forEach((problem) => {
        problem.foo = 'bar';
      });
    }, 500);
  }
};

module.exports.register = (server, options, next) => {
  server.on('start', () => { // called when the server starts
    internals.storage.problem1 = {};
    internals.storage.problem2 = {};
    internals.storage.problem3 = {};
    internals.problemFunc(internals.storage);
  });
  next();
};
In our tests for this server, we may start and stop the server many times to test different aspects of it. Sometimes we get an error like cannot set property 'foo' of undefined. This is because the server gets shut down right before that async code runs, and the entries in internals.storage get removed right along with the server stop.
This makes total sense, and I don't have a problem with that. What I'd really like to know is: what are some good ways to make my tests work 100% of the time rather than 90% of the time?
We could do:
problemFunc: (storage) => {
  setInterval(() => {
    Object.values(storage).forEach((problem) => {
      if (problem !== undefined) { // check if deleted
        problem.foo = 'bar';
      }
    });
  }, 500);
}
or:
problemFunc: (storage = {}) => { // default assignment
  setInterval(() => {
    Object.values(storage).forEach((problem) => {
      problem.foo = 'bar';
    });
  }, 500);
}
But I would rather not add conditionals to my code just so that my tests pass. This can also cause issues with keeping 100% code coverage, because sometimes that conditional gets exercised and sometimes it doesn't. What would be a better way to go about this?
It's absolutely normal to have slight differences in set-up and configuration when running code in a test environment.
A simple approach is to let the application know the current environment, so it can obtain the appropriate configuration and correctly set up the service. Common environments are testing, development, staging and production.
Simple example, using an environment variable:
// env.js
module.exports.getName = function () {
  return process.env['ENV'] || 'development';
};

// main.js
const env = require('./env');

if (env.getName() !== 'testing') {
  scheduleBackgroundTasks();
}
Then run your tests passing the ENV variable, or tell your test runner to do it:
ENV=testing npm test
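Another option that avoids branching on the environment altogether is to keep a handle on the interval and clear it when the server stops, so nothing can fire after teardown. A sketch in the shape of the plugin from the question; it assumes the same hapi version also emits a 'stop' event alongside the 'start' event used above, and the intervalHandle bookkeeping is illustrative:

const internals = {
  storage: {},
  intervalHandle: null,
  problemFunc: (storage) => {
    internals.intervalHandle = setInterval(() => {
      Object.values(storage).forEach((problem) => {
        problem.foo = 'bar';
      });
    }, 500);
  }
};

module.exports.register = (server, options, next) => {
  server.on('start', () => {
    internals.storage.problem1 = {};
    internals.problemFunc(internals.storage);
  });
  server.on('stop', () => {
    clearInterval(internals.intervalHandle); // nothing fires after the test stops the server
  });
  next();
};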
