My understanding of Jest, from observation, is that it runs tests concurrently by spawning worker processes and handing test files out to the workers as they finish their current files.
That suggests to me that Jest won't attempt to execute the tests within an individual test file concurrently. So I would expect the following test to always pass (without needing to pass --runInBand):
describe('counting test', () => {
  let variable = 0;

  it('should start as 1', () => {
    variable += 1;
    expect(variable).toEqual(1);
  });

  it('should change to 2', () => {
    variable += 1;
    expect(variable).toEqual(2);
  });
});
I.e. the second test is always run after the first test has finished. Is that safe, and is there an official document somewhere that specifies this behaviour? I couldn't find one.
Since this didn't have an official answer, I added one to the jest documentation after some further research / experimentation (and it was signed off by one of their moderators).
So, yes, jest runs each test in a file sequentially, waiting for each to finish before moving on to the next. This is now described in Setup and Teardown.
Further note that describe blocks are all executed before any of the test blocks.
For reference, the code that implements this is mostly in jest-circus/src/run.ts and eventHandler.ts.
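As a quick illustration of that ordering (a minimal sketch, not taken verbatim from the Jest docs): the describe callbacks run first, during the collection phase, and the test bodies run afterwards, one at a time, in declaration order.

describe('outer', () => {
  console.log('describe: outer');               // runs during collection
  test('one', () => console.log('test: one'));  // body deferred to the run phase

  describe('inner', () => {
    console.log('describe: inner');             // still collection
    test('two', () => console.log('test: two'));
  });
});

// Expected console order:
// describe: outer
// describe: inner
// test: one
// test: two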
I am testing reading and writing (to a server, to a mongo db).
I know, I am not supposed to do this, I should be using mocks, ... but anyhow
I want to write a document, read that document back to make sure it was correctly written,
then delete that document, then verify it is gone. So I have two problems, which I have solved, but only by using two hacks.
1) How do I pass the mongo _id of the document along from step to step? I'd like a simple variable in my Jasmine code that I can read and write from each test. My current hack is to declare a variable in the actual Angular module I am testing, and read and write that variable over in that code.
2) Since I have to wait for each IO operation before proceeding, I am taking advantage of setTimeout(() => { done(); }, 2000); inside a set of nested beforeEach(function(done) { ... }) sections.
I would like to learn simple, better ways of doing these if there are any.
thanks
What you're doing is called integration testing. There's nothing wrong with doing it, but I usually write integration tests using Angular's E2E facilities.
That said, just save the value in a variable declared outside the individual tests and it will carry over from test to test. Some pseudo code:
describe('integration test', () => {
  let id;

  it('should create a document', () => {
    // code to create the item and return its id
    id = _id;
  });

  it('should load document', () => {
    console.log(id); // should be the value from the create test
  });

  it('should delete document', () => {
    console.log(id); // should still have the value from the create test
  });
});
Since the id value is never set in a beforeEach() it will retain its value between tests in the same describe() block.
I would be cautious about doing this in unit tests, because it means the tests must run in a specific order to pass. But for E2E / integration tests, running sequentially is exactly what you want.
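For the second problem (waiting on each IO operation), here is a sketch; docService is a hypothetical wrapper around your Mongo calls, not anything from your code. If each operation returns a promise, recent versions of Jasmine (and Jest) will wait for an async spec to finish before starting the next one, so the fixed setTimeout padding inside nested beforeEach blocks is no longer needed.

describe('document round trip', () => {
  let id;

  it('creates the document', async () => {
    const doc = await docService.create({ name: 'example' }); // hypothetical helper
    id = doc._id;
    expect(id).toBeDefined();
  });

  it('reads it back', async () => {
    const doc = await docService.findById(id);
    expect(doc).not.toBeNull();
  });

  it('deletes it and verifies it is gone', async () => {
    await docService.remove(id);
    expect(await docService.findById(id)).toBeNull();
  });
});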
tl;dr: When I run my test case, the steps seem to execute correctly, but the test bails out early on a failure to find an element that hasn't even loaded yet. It seems like the waits I have around locating certain elements are kicked off as soon as the test is launched, not when those lines should actually execute within the test case. I think this is happening because the page has barely (correctly) loaded before the search for the element that verifies the page has loaded bails out. How do I wrangle the event loop?
This is probably a promise question, which is fine, but I don't understand what's going on. How do I implement my below code to work as expected? I'm working on creating automated E2E test cases using Jasmine2 and Protractor 5.3.0 in an Angular2 web app.
describe('hardware sets', () => {
  it('TC3780:My_Test', async function() {
    const testLogger = new CustomLogger('TC3780');
    const PROJECT_ID = '65';
    // Test Setup
    browser.waitForAngularEnabled(false); // due to nature of angular project, the app never leaves zones, leaving a macrotask constantly running, thus protractor's niceness with angular is not working on our web app
    // Navigate via URL to planviewer page for PROJECT_ID
    await planListingPage.navigateTo(PROJECT_ID);      // go to listing page for particular project
    await planListingPage.clickIntoFirstRowPlans();    // go to first plan on listing page
    await planViewerPage.clickOnSetItem('100');        // click on item id 100 in the plan
  });
});
planViewerPage.po.ts function:
clickOnSetItem(id: string) {
  element(by.id(id)).click();
  browser.wait(until.visibilityOf(element(by.css('app-side-bar .card .info-content'))), 30000); // verify element I want to verify is present and visible
  return expect(element(by.css('app-side-bar .card .info-content')).getText).toEqual(id); // Verify values match. This line specifically is failing.
}
This is the test case so far. I need more verification, but it is mostly done. I switched to using async function and awaits instead of the typical (done) and '.then(()=>{' statement chaining because I prefer not having to do a bunch of nesting to get things to execute in the right order. I come from a java background, so this insanity of having to force things to run in the order you write them is a bit much for me sometimes. I've been pointed to information like Mozilla's on event loop, but this line just confuses me more:
whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates).
Thus, why does it seem like test case is pre-evaluated and the timer's set off before any of the pages have been clicked on/loaded? I've implemented the solution here: tell Protractor to wait for the page before executing expect pretty much verbatim and it still doesn't wait.
Bonus question: Is there a way to output the event-loop's expected event execution and timestamps? Maybe then I could understand what it's doing.
The behaviour you're seeing happens because the code in your function runs asynchronously: click(), browser.wait() and getText() all return promises, and since clickOnSetItem() neither awaits nor chains them, the expect is evaluated before the click and the wait have actually finished. Chaining the promises (and returning the chain) fixes the ordering:
clickOnSetItem(id: string) {
  // Return the promise chain so the caller's `await` actually waits for it.
  return element(by.id(id)).click().then(function() {
    return browser.wait(until.visibilityOf(element(by.css('app-side-bar .card .info-content'))), 30000);
  }).then(function() {
    // getText() (with parentheses) resolves to the element's text
    return expect(element(by.css('app-side-bar .card .info-content')).getText()).toEqual(id);
  }).catch(function(err) {
    console.log('Error: ' + err);
  });
}
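Since the spec itself is already written with async/await, an equivalent sketch of the page-object method in that style (same selectors, each step awaited explicitly) would look roughly like this:

async clickOnSetItem(id: string) {
  await element(by.id(id)).click();
  await browser.wait(until.visibilityOf(element(by.css('app-side-bar .card .info-content'))), 30000);
  const text = await element(by.css('app-side-bar .card .info-content')).getText();
  expect(text).toEqual(id);
}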
Some of my tests in my React Native project affect global objects. These changes often affect other tests relying on the same objects.
For example: One test checks that listeners are added correctly, a second test checks if listeners are removed correctly:
// __tests__/ExampleClass.js
describe("ExampleClass", () => {
  it("should add listeners", () => {
    ExampleClass.addListener(jest.fn());
    ExampleClass.addListener(jest.fn());
    expect(ExampleClass.listeners.length).toBe(2);
  });

  it("should remove listeners", () => {
    const fn1 = jest.fn();
    const fn2 = jest.fn();
    ExampleClass.addListener(fn1);
    ExampleClass.addListener(fn2);
    expect(ExampleClass.listeners.length).toBe(2);
    ExampleClass.removeListener(fn1);
    expect(ExampleClass.listeners.length).toBe(1);
    ExampleClass.removeListener(fn2);
    expect(ExampleClass.listeners.length).toBe(0);
  });
});
The second test will run fine by itself, but fails when all tests are run, because the first one didn't clean up the ExampleClass. Do I always have to clean up stuff like this manually in each test?
It seems I'm not understanding how the scope works in Jest... I assumed each test would run in a new environment. Is there any documentation about this?
Other examples are mocking external libraries and checking if the mocked functions in them are called correctly or overriding Platform.OS to ios or android to test platform-specific implementations.
As far as I can see, the sandbox is always the whole test file, not each individual test. For now I changed my code to use a beforeEach callback that mainly calls jest.resetAllMocks() and resets Platform.OS to its default value.
Fast and sandboxed
Jest parallelizes test runs across workers to maximize performance. Console messages are buffered and printed together with test results. Sandboxed test files and automatic global state resets for every test so no two tests conflict with each other.
https://facebook.github.io/jest/
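A minimal sketch of that kind of per-test cleanup (the import path, the 'ios' default, and the listeners reset are placeholders for whatever shared state your suite actually touches):

import { Platform } from "react-native";
import ExampleClass from "../ExampleClass"; // hypothetical path

beforeEach(() => {
  jest.resetAllMocks();        // clear recorded calls and mock implementations
  Platform.OS = "ios";         // restore the platform this suite assumes by default
  ExampleClass.listeners = []; // reset the shared state the listener tests mutate
});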
I'm trying to understand how The WebDriver Control Flow works exactly.
According to the linked documentation (https://github.com/angular/protractor/blob/master/docs/control-flow.md) no callback method / call is needed in jasmine:
Protractor adapts Jasmine so that each spec automatically waits until the control flow is empty before exiting.
However, I have to use cucumber. I'm using the library protractor-cucumber-framework as described here: https://github.com/angular/protractor/blob/master/docs/frameworks.md#using-cucumber
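For reference, the wiring in protractor.conf.js looks roughly like this (a sketch based on the linked docs; the feature and step-definition paths are placeholders for mine):

exports.config = {
  framework: 'custom',
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  specs: ['features/*.feature'],                      // placeholder paths
  cucumberOpts: {
    require: 'features/step_definitions/**/*.js'
  }
};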
It works well, but for some reason it works better when I skip the callback variable than when I try using it. For instance, this code fails:
this.Given(/^the login page is active$/, function (callback) {
  browser.get('/').then(callback);
});
With the error ...
TypeError: text.split is not a function
[launcher] Process exited with error code 1
On the other hand, this code works as I want it to: cucumber / protractor seems to wait until the page is loaded before executing further functions:
me.Given(/^the login page is active$/, function () {
  browser.get('/');
});
But I couldn't find any documentation confirming that I really can omit the callback function.
Currently the page I tried to test doesn't use Angular and therefore I have the following code in my config file:
onPrepare: function() {
  browser.ignoreSynchronization = true;
}
Protractor uses WebDriverJS underneath, and WebDriverJS uses a promise manager in which it queues its commands. Here is an excerpt from their wiki page here:
Internally, the promise manager maintains a call stack. Upon each turn of the manager's execution loop, it will pull a task to execute from the queue of the top-most frame. Any commands scheduled within the callback of a previous command will be scheduled in a new frame, ensuring they run before any tasks previously scheduled. The end result is that if your test is written in-line, with all callbacks defined by function literals, commands should execute in the order they are read vertically on the screen. For example, consider the following WebDriverJS test case:
driver.get(MY_APP_URL);
driver.getTitle().then(function(title) {
  if (title === 'Login page') {
    driver.findElement(webdriver.By.id('user')).sendKeys('bugs');
    driver.findElement(webdriver.By.id('pw')).sendKeys('bunny');
    driver.findElement(webdriver.By.id('login')).click();
  }
});
driver.findElement(webdriver.By.id('userPreferences')).click();
This test case could be rewritten using WebDriver's Java API as follows:
driver.get(MY_APP_URL);
if ("Login Page".equals(driver.getTitle())) {
  driver.findElement(By.id("user")).sendKeys("bugs");
  driver.findElement(By.id("pw")).sendKeys("bunny");
  driver.findElement(By.id("login")).click();
}
driver.findElement(By.id("userPreferences")).click();
Now, going back to your question: since you are omitting the callback from your steps, cucumber treats your step code as synchronous (see the documentation here). And because protractor / WebDriverJS queues every command through the promise manager as described above, the commands still execute in order and everything works as you expect.
As for the error you are getting when using the callback, I'm not sure. I do it exactly the same way you are doing (see here), using cucumber ^0.9.2. It could be that your cucumber version has an issue.
On a side note, I found that you can return a promise instead of using the callback to let cucumber know that you are done executing. So something like this works as well (assuming you are using ^0.9.2); I tested it:
me.Given(/^the login page is active$/, function () {
  return browser.get('/');
});
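The same pattern extends to steps with several commands. A sketch (the step text and selectors are made up, not from your app): because the promise manager queues the commands in order, returning the last promise is enough for cucumber to wait until the whole step has finished.

me.Given(/^the user logs in as "([^"]*)"$/, function (name) {
  element(by.id('user')).sendKeys(name);
  element(by.id('pw')).sendKeys('secret');
  return element(by.id('login')).click(); // last queued command; resolves after the ones above
});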
I'm automating running the ECMA-402 test suite against the Intl polyfill I wrote, and I've hit some problems. Currently, the tests are run against a fully-built version of the library, which means having to recompile every time a change is made before the tests can run. I'm trying to improve it by splitting the code up into separate modules and using require to run the tests.
The main problem comes into focus when I try and run the tests using the vm module. If I add the polyfill to the test's sandbox, some of the tests fail when checking native behaviour — the polyfill's objects don't inherit from the test context's Object.prototype, for example. Passing require to the tests will not work because the modules are still compiled and executed in the parent's context.
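To illustrate the cross-context problem (a self-contained sketch, not part of the actual test harness): objects created in one V8 context do not inherit from another context's Object.prototype, which is why prototype checks in the sandboxed tests fail against a polyfill compiled in the parent.

const vm = require('vm');

const sandbox = vm.createContext({});
const fromSandbox = vm.runInContext('({})', sandbox); // object created inside the new context

// In the parent context these checks fail, because the object's prototype chain
// points at the sandbox's Object.prototype, not the parent's.
console.log(fromSandbox instanceof Object);                           // false
console.log(Object.getPrototypeOf(fromSandbox) === Object.prototype); // false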
The easiest solution in my head was to spawn a new node process and write the code to the process's stdin, but the spawned node process doesn't execute the code written to it and just waits around forever. This is the code I tried:
function runTest(testPath, cb) {
  var test,
      err = '',
      content = 'var IntlPolyfill = require("' + LIB_PATH + '");\n';

  content += LIBS.fs.readFileSync(LIBS.path.resolve(TEST_DIR, testPath)).toString();
  content += 'runner();';

  test = LIBS.spawn(process.execPath, process.execArgv);
  test.stdin.write(content, 'utf8');

  // cb runs the next test
  test.on('exit', cb);
}
Does anyone have any idea why Node.js doesn't execute the code written to its stdin stream, or if there's another way I can get the module to compile in the same context as the tests?
You must close stdin so that the child process sees end-of-input, runs the code it has received, and exits. Do this once you have finished writing the code:
test.stdin.end();
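In the context of the runTest function above, the fix is just one extra line after the write (a sketch reusing the same variables):

test = LIBS.spawn(process.execPath, process.execArgv);
test.stdin.write(content, 'utf8');
test.stdin.end(); // signal EOF: node reads the whole script from stdin, runs it, then exits

// cb runs the next test
test.on('exit', cb);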
In the end, I chose to use the -e command line switch to pass the code directly to the new node instance. It only took a slight modification to the code:
function runTest(testPath, cb) {
  var test,
      err = '',
      content = 'var IntlPolyfill = require("' + LIB_PATH + '");\n';

  content += LIBS.fs.readFileSync(LIBS.path.resolve(TEST_DIR, testPath)).toString();
  content += 'runner();';

  test = LIBS.spawn(process.execPath, process.execArgv.concat('-e', content));

  // cb runs the next test
  test.on('exit', cb);
}