When running Jest, if a function uses performance.getEntries (or performance.getEntriesByName), the test throws an exception:
performance.getEntries is not a function
How can I allow my tests to continue running without crashing?
According to the documentation, performance.getEntries was removed from perf_hooks in Node.js v10.0.0 (https://github.com/nodejs/node/pull/19563), and the same functionality can be achieved with:
const { PerformanceObserver } = require('perf_hooks');

const measures = [];
const obs = new PerformanceObserver((list) => {
  measures.push(...list.getEntries());
  obs.disconnect();
});
obs.observe({ entryTypes: ['measure'] });

function getEntriesByType(type) {
  return measures.filter((entry) => entry.entryType === type);
}
However, the above solution is only viable when working directly in the Node.js application; I do not want to change my implementation just for tests. I have tried mocking performance.getEntries as a global in a setupFilesAfterEnv script, as I would for other globals. For example:
global.performance = {
  getEntries: () => []
};
However, when debugging, performance is always equal to {} (although performance.now() works fine). I believe this is the default behaviour of jsdom (which is my testEnvironment).
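One possible workaround (a sketch, not a tested fix; it assumes jsdom leaves the missing methods patchable): instead of replacing the whole performance object, add the missing stubs onto the existing one in a setupFilesAfterEnv script.

```javascript
// jest.setup.js (listed in setupFilesAfterEnv) -- a sketch, assuming jsdom.
// jsdom's window.performance already exists (hence performance.now() works),
// so patch the missing methods onto it rather than replacing the whole object.
const perf = globalThis.performance || (globalThis.performance = {});

if (typeof perf.getEntries !== 'function') {
  perf.getEntries = () => [];
}
if (typeof perf.getEntriesByName !== 'function') {
  perf.getEntriesByName = () => [];
}
```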
Related
We have been using useFakeTimers() (Sinon v11.x) in many spec files for quite a long time. Recently we updated Sinon to v14.x, and now the tests fail with the error below.
TypeError: Can't install fake timers twice on the same global object.
We also tried createSandbox(), but it didn't help.
The issue seems to be that since Sinon 12.x, a clock that is not restored in a spec file stays installed on the global scope, which throws the aforementioned error.
So the fix is to call clock.restore() in afterAll() or afterEach(), depending on whether you used beforeAll() or beforeEach().
I encountered this error, for instance, when I had two tests which both used fake timers. You have to call useFakeTimers independently of your sandbox creation.
Fails miserably because reasons
// Some file
const superTrialAndErrorSimulator = sinon.createSandbox({
  useFakeTimers: true
});

// Some other file
const superTrialAndErrorSimulatorZool = sinon.createSandbox({
  useFakeTimers: true
});
If you set fake timers after setting the sandbox, then reset them, it works. Welcome to the trial and error world of sinon.
Works miserably because reasons
const ifOnlyThereWereABetterLibrary = sinon.createSandbox();

before(() => {
  ifOnlyThereWereABetterLibrary.useFakeTimers();
});

after(() => {
  ifOnlyThereWereABetterLibrary.clock.restore();
});
// Works.
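For what it's worth, the error itself is easy to understand in miniature. The sketch below is not sinon's actual code, just the mechanism: installing fake timers stamps the global object, and a second install throws unless restore() ran first.

```javascript
// Toy model of sinon's double-install guard (hypothetical names,
// not the real implementation).
function installFakeTimers(target) {
  if (target.__fakeTimersInstalled) {
    throw new TypeError("Can't install fake timers twice on the same global object.");
  }
  const realSetTimeout = target.setTimeout;
  target.__fakeTimersInstalled = true;
  return {
    restore() {
      target.setTimeout = realSetTimeout;
      delete target.__fakeTimersInstalled;
    },
  };
}

const fakeGlobal = {};                       // stand-in for globalThis
const clock = installFakeTimers(fakeGlobal);
clock.restore();                             // without this, the next line throws
installFakeTimers(fakeGlobal);               // fine after restore()
```

Two sandboxes created with useFakeTimers: true both try to install on the same real global, which is exactly this double-install case.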
I'm trying to write a caching system for some time-intensive functions (network requests / heavy computation), and I need to generate a fingerprint from the functions in order to invalidate the cached results once a developer changes them.
I have tried the following approach to generate the fingerprint:
const crypto = require('crypto');

const generateFingerprintOfFunc = (inputFunc) => {
  const hash = crypto.createHash('sha256'); // a hash, not a cipher
  hash.update(inputFunc.toString());
  return hash.digest('hex');
};
However, the problem with this approach is that the fingerprint does not change when the developer edits one of the functions called inside the fingerprinted function, because the fingerprinted function's own definition hasn't changed.
const foo = () => {
  return bar() + 1;
};

const bar = () => {
  return 1;
};
const fingerprintOfFoo = generateFingerprintOfFunc(foo); // 489290d22f653965a59e2e5fbb7b626535babd660f7f49501fc88c3e7fbc0176
Now I will change the bar function:
const foo = () => {
  return bar() + 1;
};

const bar = () => {
  return 10;
};
const fingerprintOfFoo = generateFingerprintOfFunc(foo); // 489290d22f653965a59e2e5fbb7b626535babd660f7f49501fc88c3e7fbc0176
As you can see, the fingerprint has not changed, even though the function's return value has.
Why do I need to do this?
I'm trying to generate dynamic and automatic mocks for my expensive functions during development and testing. See this SO Question
I want to know whether there is a way for V8 to give me the paths of the files making up a certain function and all of its internals.
No.
It can't. JavaScript is too dynamic.
Consider this example:
let helper;
const foo = () => helper() + 1;
const bar = () => 1;
const baz = () => 2;
helper = bar;
So far, this does the same as your example. Now suppose someone either changed the last line to read helper = baz, or added a new line helper = baz after it. In that case, no function's definition has changed, but foo's behavior has! In fact, one can take the same idea and construct an even simpler case:
let global = 42;
const foo = () => global;
If someone changes global (whether statically in the source or dynamically at runtime), foo's return value will change, but no function definition will. In the case of a dynamic assignment, nothing in the source changes at all. And of course such an assignment could depend on arbitrary conditions/circumstances: user interaction, time of day, Math.random(), whatever.
In general, the only way (for anyone, including the engine) to figure out what a function will return is to execute it.
Memoization works if developers carefully choose functions that lend themselves to memoization.
An automated system that takes any arbitrary function (without imposing limitations on what the function does) and memoizes it, or determines whether it will return the same value as last time, without actually executing it, is impossible to create in JavaScript.
What do you mean when you say "changes the function"? As in, you have Jest in watch mode and want to pick up only fundamental changes? I think the real question is why this is necessary to begin with.
How are you generating dynamic mocks? Does this imply that you are repeatedly creating these mocks in real time with network requests? Then they aren't really mocks; you are just making real requests and temporarily storing the responses somewhere. You could skip that data-saving step and your resulting testing process would be identical. That's a glorified integration test.
I guess others can disagree, but your unit testing philosophy seems a little flawed. Dynamic mocks imply that you have no control over what situation ends up getting tested. If you are not in full control of your mocks, couldn't you end up testing the exact same case (or edge case) repeatedly? That's a waste of resources and can lead to flawed tests. Covering all of your intended cases would happen by coincidence, as opposed to explicit intent. Your unit tests should be deterministic. Not stochastic.
It seems like you need to solve your testing methodology, as opposed to accepting that your tests are slow and developing strategies on how to work around it.
I have an application that I created using Ext JS, and I am writing tests for it using Selenium WebDriver (the Node package, version 4.0.0-alpha.1) and Jest. In one of my test scripts, I want to wait for a function to be called before continuing with the remaining test logic, but I am not sure how to implement this.

To help demonstrate what I am trying to accomplish, I created a sample app using Sencha Fiddle. All of the code for that app, as well as a running version of it, can be found here: https://fiddle.sencha.com/#view/editor&fiddle/2o6m. If you look in the app folder of the fiddle, you'll see that there is a Test component with a simple controller. There is an onAdd function in the controller, and that is the function I want to wait for, since in my actual application the rest of the tests depend on code in that function.

I can access that function in dev tools by running the following line of code: Ext.ComponentQuery.query('test')[0].getController().onAdd (note that in the fiddle the activeElement needs to be set to the preview iFrame, document.getElementsByName('fiddle-run-iframe')[0], for this to work). This means I can access the function in driver.executeScript the same way, but once I have the function, I am not sure how to wait for it to be called before continuing.

I was trying to use the mock/spy feature in Jest, but this is not working because jest is not defined inside driver.executeScript, so I can't call jest.fn or jest.spyOn. I created a sample test script that works with the sample app to demonstrate what I am trying to do, but right now it fails with an error since, as I said, jest is not defined inside driver.executeScript. Here is the code for the sample test script:
const { Builder, By, Key, until } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

const driver = global.driver = new Builder()
  .forBrowser('chrome')
  .setChromeOptions(new chrome.Options())
  .build();

jest.setTimeout(10000);

beforeAll(async () => {
  await driver.manage().window().maximize();
  await driver.get('https://fiddle.sencha.com/#view/editor&fiddle/2o6m');
});

afterAll(async () => {
  await driver.quit();
});

describe('check title', () => {
  it('should be SAMPLE STORE LOAD', async () => {
    expect(await driver.wait(until.elementLocated(By.css('.fiddle-title'))).getText()).toBe('SAMPLE STORE LOAD');
  });
});

describe('check store add', () => {
  it('should call add function', async () => {
    let spy;
    await driver.switchTo().frame(await driver.findElement(By.name('fiddle-run-iframe')));
    await driver.wait(until.elementIsNotVisible(await driver.wait(until.elementLocated(By.xpath('//div[starts-with(@id, "loadmask")]')))));
    await driver.executeScript(() => {
      const test = document.getElementsByName('fiddle-run-iframe')[0].contentWindow.Ext.ComponentQuery.query('test')[0];
      spy = jest.spyOn(test.getController(), 'onAdd'); // This throws an error since jest is not defined inside driver.executeScript
    });
    expect(spy).toHaveBeenCalled(); // wait for onAdd function to be called before continuing
  });

  // additional tests occur here after the wait...
});
You can ignore all the logic related to switching to the iFrame; that is only necessary for the fiddle, since it runs the preview of the app in an iFrame. My actual app does not exist inside an iFrame. Nonetheless, I think this script effectively demonstrates what I am trying to accomplish, which is to wait until the onAdd function is called before continuing with my tests.

I am not sure if I need to use Selenium or Jest, some combination of the two, or a different testing tool entirely to do this. I am relatively new to writing tests and this is my first time posting on Stack Overflow, so I apologize if anything I said is unclear. I would be happy to clarify anything and I'm grateful for any advice you have to offer!
From my point of view, combining two different frameworks (two approaches to testing) in a single test might not be the best idea. Here are some thoughts on how I would deal with a similar case. A little theory first:
Selenium is great at simulating end-user behavior in a browser. Basically, it can simulate actions (clicks, text inputs, etc.) and can get information from a browser window (presence of elements, texts, styles, etc.)
Jest is a unit test framework for testing JavaScript code.
executeScript is a Selenium method that executes arbitrary JavaScript code in the browser. The browser does not know anything about Jest, so the error you describe is expected; only your project knows about Jest, because it is imported there.
Back to the question "How to check in Selenium that a js function has been called?"
The most typical solution - is to wait until something has changed on a browser screen. Typical cases:
some element appears
some element disappears
some element's style changed
wait for condition in js example
(A bit of a hack.) Another possible solution is to add a flag in the app's JS code that is false before the function is called and is set to true after it has been called. You can then read the flag's value via executeScript.
some possible implementations here
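A sketch of that flag approach (all names here, such as __onAddCalled and instrument, are hypothetical): wrap the function once so it records that it ran, then poll the flag from the test.

```javascript
// Wrap controller.onAdd so it sets a flag when called (names are hypothetical).
// In the real app this would run in the browser, e.g. injected via executeScript.
function instrument(controller, flagHolder) {
  const original = controller.onAdd;
  flagHolder.__onAddCalled = false;
  controller.onAdd = function (...args) {
    flagHolder.__onAddCalled = true; // the Selenium test polls this flag
    return original.apply(this, args);
  };
}

// usage, with a stand-in for the browser's window object:
const windowLike = {};
const controller = { onAdd() { return 'added'; } };
instrument(controller, windowLike);
controller.onAdd();
console.log(windowLike.__onAddCalled); // true
```

The test side then becomes a poll, something like `await driver.wait(() => driver.executeScript('return window.__onAddCalled === true'), 10000);`.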
Some of my tests in my React Native project affect global objects. These changes often affect other tests relying on the same objects.
For example: One test checks that listeners are added correctly, a second test checks if listeners are removed correctly:
// __tests__/ExampleClass.js
describe("ExampleClass", () => {
  it("should add listeners", () => {
    ExampleClass.addListener(jest.fn());
    ExampleClass.addListener(jest.fn());
    expect(ExampleClass.listeners.length).toBe(2);
  });

  it("should remove listeners", () => {
    const fn1 = jest.fn();
    const fn2 = jest.fn();
    ExampleClass.addListener(fn1);
    ExampleClass.addListener(fn2);
    expect(ExampleClass.listeners.length).toBe(2);
    ExampleClass.removeListener(fn1);
    expect(ExampleClass.listeners.length).toBe(1);
    ExampleClass.removeListener(fn2);
    expect(ExampleClass.listeners.length).toBe(0);
  });
});
The second test runs fine by itself, but fails when all tests are run, because the first one didn't clean up ExampleClass. Do I always have to clean up stuff like this manually in each test?
It seems I'm not understanding how the scope works in Jest... I assumed each test would run in a new environment. Is there any documentation about this?
Other examples are mocking external libraries and checking if the mocked functions in them are called correctly or overriding Platform.OS to ios or android to test platform-specific implementations.
As far as I can see, the scope of a test is always the test file. For now I changed my code to have a beforeEach callback that mainly calls jest.resetAllMocks() and resets Platform.OS to its default value.
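For the listener example from the question, the manual cleanup can be sketched like this (ExampleClass below is a stand-in matching the question's shape; in the spec file you would call the reset from an afterEach() hook):

```javascript
// Minimal stand-in for the question's ExampleClass, plus a manual reset.
const ExampleClass = {
  listeners: [],
  addListener(fn) { this.listeners.push(fn); },
  removeListener(fn) { this.listeners = this.listeners.filter((l) => l !== fn); },
  resetListeners() { this.listeners = []; }, // call from afterEach()
};

ExampleClass.addListener(() => {});
ExampleClass.addListener(() => {});
ExampleClass.resetListeners(); // what afterEach(() => ExampleClass.resetListeners()) would do
console.log(ExampleClass.listeners.length); // 0
```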
Fast and sandboxed
Jest parallelizes test runs across workers to maximize performance. Console messages are buffered and printed together with test results. Sandboxed test files and automatic global state resets for every test so no two tests conflict with each other.
https://facebook.github.io/jest/
[Testing angular 2 web app]
This error occurs with browser.ignoreSynchronization = false; when it is set to true, the error does not occur. Why is this?
I also have useAllAngular2AppRoots: true set in my protractor config.
Protractor runs on top of WebDriverJS. WebDriverJS is a JavaScript interface equivalent to Java's WebDriver that lets you control browsers programmatically, which in turn helps you write automated test cases.
The problem in testing Angular apps using WebDriverJS is that Angular has its own event loop separate from the browser's. This means that when you execute WebDriverJS commands, Angular might still be doing its thing.
One workaround is to tell WebDriverJS to wait an arbitrary amount of time (e.g. 3000ms) so that Angular finishes its rendering, but this is not the right way of doing things. Therefore, Protractor was created to synchronize your tests with Angular's event loop, by deferring your next command until Angular has finished processing the previous one.
But there is a catch: this becomes problematic when you are testing non-Angular applications. Protractor keeps waiting for Angular to synchronise even though there is no Angular to complete its cycle, and throws the error you are observing!
Thus, for non-Angular pages, you can tell Protractor not to look for Angular by setting browser.ignoreSynchronization = true -- which in practical terms will mean that you're just using WebDriverJS.
Be aware that by adding that to your configuration, you are giving up on everything that makes testing Angular apps easier than plain WebDriverJS. And yes, adding browser.sleep after all your commands will likely work too, but it's cumbersome, breaks as soon as Angular takes longer than the pause you set, and makes your tests take excessively long.
Conclusion : Only use browser.ignoreSynchronization = true when testing a page that does not use Angular.
Reference: https://vincenttunru.com/Error-Error-while-waiting-for-Protractor-to-sync-with-the-page/
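As a configuration sketch (using Protractor's own option names; newer Protractor versions replace ignoreSynchronization with waitForAngularEnabled):

```javascript
// protractor.conf.js -- sketch for testing a non-Angular page
exports.config = {
  onPrepare: async () => {
    // older Protractor:
    // browser.ignoreSynchronization = true;
    // Protractor 5.3+:
    await browser.waitForAngularEnabled(false);
  },
};
```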
You should absolutely make sure that your page gets loaded only once during the test. We had this problem with a login mockup that caused the page to reload right after the first load completed, inside the bootstrap code of the Angular 2 application. This caused all sorts of unpredictable behaviour, with tests failing with timeouts or the above error, or running fine.
So, make sure you have a consistent page lifecycle prior to the test.
To extend on the point @ram-pasala made:
One work around for this is by telling WebDriverJS to wait for an arbitrary amount of time (i.e. 3000ms) so that Angular finishes its work of rendering but this was not the right way of doing things
Here's how waiting for the function to become available might look. It's for TestCafe, but should be easy to adapt to other testing frameworks:
import { ClientFunction } from 'testcafe';

export const waitForAngular = ClientFunction((timeoutMs: number = 60_000) => {
  const waitForGetAllAngularTestabilities = (remainingMs: number): Promise<any> => {
    if (remainingMs <= 0) throw new Error('Waited for window.getAllAngularTestabilities, but timed out.');
    const win = window as any;
    if (win.getAllAngularTestabilities) {
      return Promise.resolve(win.getAllAngularTestabilities);
    }
    // poll every 100ms until Angular's testability API shows up
    return new Promise((res) => setTimeout(res, 100)).then(() =>
      waitForGetAllAngularTestabilities(remainingMs - 100),
    );
  };

  return waitForGetAllAngularTestabilities(timeoutMs).then((getAllAngularTestabilities) =>
    Promise.all(
      getAllAngularTestabilities().map(
        (t: any) =>
          new Promise((res, rej) => {
            setTimeout(() => rej(`Waited ${timeoutMs}ms for Angular to stabilize but timed out.`), timeoutMs);
            t.whenStable(() => {
              console.log('Angular stable');
              res(true);
            });
          }),
      ),
    ),
  );
});