I am a newbie to Protractor and JavaScript. I am trying to execute several tests in parallel using multiCapabilities. However, when I do this, onPrepare and beforeAll execute once for every spec. Is there a way to execute onPrepare and onComplete only once for all tests?
I am facing this issue in two situations: 1. different browsers; 2. the same browser with multiple instances, i.e. as follows:
capabilities: {
  browserName: 'chrome',
  shardTestFiles: true,
  maxInstances: 2
},
In both cases my code under onPrepare executes twice. I have a requirement to write the result of each test to a JSON file; I create the file in onPrepare, and it gets overwritten when I use maxInstances > 1.
When you run your Protractor test cases with the multiCapabilities option, the onPrepare method is executed once for each set of capabilities you have defined in the multiCapabilities array, because sharded tests run in separate processes.
In your case, you need to create your test report file in the beforeLaunch method. This method executes only once, before the Protractor global objects are initialized.
Refer to https://github.com/angular/protractor/blob/master/lib/config.ts#L404 for additional details on the beforeLaunch method.
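A minimal sketch of that setup, assuming the report is a JSON file named results.json (the filename and reporter details are placeholders, not from the question):

```javascript
// protractor.conf.js (sketch)
var fs = require('fs');

exports.config = {
  multiCapabilities: [
    { browserName: 'chrome', shardTestFiles: true, maxInstances: 2 }
  ],

  // Runs exactly once, before the Protractor globals exist and before
  // any shard/capability process is forked.
  beforeLaunch: function () {
    fs.writeFileSync('results.json', JSON.stringify([]));
  },

  // Still runs once per shard/capability: only append to the file here,
  // never recreate it.
  onPrepare: function () {
    // e.g. register a reporter that appends each spec result to results.json
  }
};
```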
I have a series of functions like the ones below that thread through a web application: they simulate login and then run through many features of the web app. I am using JavaScript, Nightwatch.js, and Selenium via BrowserStack. The problem is that with this approach it all reports in BrowserStack as one large test; how can I get each function to report in BrowserStack as a separate test?
this.Settings = function (browser) {
  browser
    .url(Data.urls.settings)
    .waitForElementVisible("div.status-editor .box", 1000);
  Errors.checkForErrors(browser);
  browser.end();
};

this.TeamPanel = function (browser) {
  browser
  Errors.checkForErrors(browser);
  browser.end();
};
It seems you are using the same remote browser instance for all the test functions, which is why they run as a single test case on BrowserStack. You need to create a new driver instance before every test function. You can either implement that parallelisation logic in your framework or use a sample Nightwatch framework like the one here: https://github.com/browserstack/nightwatch-browserstack
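One way to apply this, sketched below, is to put each flow in its own test file so that Nightwatch opens a fresh session (and BrowserStack reports a separate test) for each one. Data and Errors are the question's own helpers; the require paths are assumptions:

```javascript
// tests/settings.js (sketch) — one flow per file, so this file gets its
// own driver session and shows up as its own test on BrowserStack
var Data = require('../lib/data');     // assumed paths for the
var Errors = require('../lib/errors'); // question's helpers

module.exports = {
  'Settings page': function (browser) {
    browser
      .url(Data.urls.settings)
      .waitForElementVisible('div.status-editor .box', 1000);
    Errors.checkForErrors(browser);
    browser.end();
  }
};
```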
I have written a few test cases using Selenium/Protractor that run against a single page. I need to run the same test cases against multiple pages, in parallel or one by one. How can I implement this type of scenario?
If I understood you correctly, use this.
In protractor.conf, add this piece of code:
multiCapabilities: [
{'browserName': 'internet explorer'},
{'browserName': 'chrome'}
],
maxSessions: 1,
browserName - which browser you want to use
maxSessions - how many sessions you can run in parallel at a time on the remote system
Use this library https://www.npmjs.com/package/jasmine-data-provider
You can define different pages as data and run the same test one after another.
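A sketch of that data-driven approach with jasmine-data-provider inside a Protractor spec; the page URLs are placeholders:

```javascript
var using = require('jasmine-data-provider');

describe('the same checks on several pages', function () {
  // Each entry in this array produces its own it() block
  using([
    { url: '/page-one' },
    { url: '/page-two' }
  ], function (data) {
    it('passes on ' + data.url, function () {
      browser.get(data.url);
      expect(browser.getCurrentUrl()).toContain(data.url);
    });
  });
});
```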
We are using Grunt to kick off NightWatch.js tests.
There are around 30-40 tests, and this number will grow a lot more.
For now, I am aware of only two reasonable ways to choose which tests get run and they are both manual:
1. Remove all tests that shouldn't be run from source folder
2. Comment/uncomment the '#disabled': true annotation
I was thinking that a properly structured way of choosing which tests get run would be to have a certain file, say, "testPlan.txt" like this:
test1 run
test2 not
test3 run
And then the test could have some code, instead of the current annotation, such as this (sorry, my JS is non-existent):
if (checkTestEnabled()) then ('#disabled': true) else (//'#disabled': true)
You can use tags to group your tests into categories and then pick and choose which tests to run based on the flag you pass in. For example, if I wanted to create a smoke test suite I would just add '#tags': ['smokeTests'] at the top of each file that would be included in that suite and then you can run them using the command:
nightwatch --tag smokeTests
You can also add multiple tags to each test if you need to group them in additional categories. '#tags': ['smokeTests', 'login']
You should be able to use this the same way you are doing it in your grunt tasks. Read here for more details.
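A sketch of a tagged test file following the syntax the answer describes (note that recent Nightwatch versions spell the property '@tags', so check your version's docs); the URL and test name are placeholders:

```javascript
// tests/login.js (sketch) — tagged so it is picked up by
//   nightwatch --tag smokeTests
module.exports = {
  '#tags': ['smokeTests', 'login'],
  'login page is reachable': function (browser) {
    browser
      .url('https://example.com/login') // placeholder URL
      .waitForElementVisible('body', 1000)
      .end();
  }
};
```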
One of my angular modules has the following in its config:
angular.module('feature').config(function () {
CKEDITOR.plugins.add('expressioneditor', {});
});
CKEDITOR is a global object that is added to the page via a <script> tag. As I understand it, Karma reinitializes the Angular application for each beforeEach block, while the global CKEDITOR object remains the same. This reinitialization causes config to be called multiple times, so CKEDITOR.plugins.add('expressioneditor', {}); is also executed multiple times. The problem is that CKEDITOR.plugins.add throws an error if an attempt is made to add a plugin with the same name more than once, so my tests fail. This problem doesn't exist at runtime, since config functions are run only once during Angular's initialization stage.
My question is what is the common approach to reinitialization of global environment before each test? For my particular case, I can add a check for the existence of a plugin before adding it, but this question is about the general approach.
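For reference, the particular-case workaround mentioned above can be sketched like this, using CKEDITOR's plugin registry (CKEDITOR.plugins.registered in CKEditor 4) to guard the registration:

```javascript
angular.module('feature').config(function () {
  // Only register the plugin on the first config run; on Karma's
  // re-initializations the global CKEDITOR already has it.
  if (!CKEDITOR.plugins.registered['expressioneditor']) {
    CKEDITOR.plugins.add('expressioneditor', {});
  }
});
```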
I'm using Protractor (v 1.3.1) to run E2E tests for my Angular 1.2.26 application.
But sometimes the tests pass, and sometimes they don't. It seems that sometimes the check is done before the display is updated (something like a synchronization problem).
I tried many options:
add browser.driver.sleep instructions,
disable effects with browser.executeScript('$.fx.off = true')
add browser.waitForAngular() instructions
without success.
What are the best practices for reliable E2E tests with Protractor?
JM.
Whenever I have similar issues, I use browser.wait() with "Expected Conditions" (introduced in Protractor 1.7). There is a set of built-in expected conditions that is usually enough, but you can easily define your own custom expected conditions.
For example, waiting for an element to become visible:
var EC = protractor.ExpectedConditions;
var e = element(by.id('xyz'));
browser.wait(EC.visibilityOf(e), 10000);
expect(e.isDisplayed()).toBeTruthy();
A few notes:
you can specify a custom error message in case the condition is not met and a timeout error is thrown; see Custom message on wait timeout error:
browser.wait(EC.visibilityOf(e), 10000, "Element 'xyz' has not become visible");
you can set EC to be a globally available variable pointing to protractor.ExpectedConditions. Add this line to the onPrepare() in your config:
onPrepare: function () {
global.EC = protractor.ExpectedConditions;
}
as an example of a custom expected condition, see this answer
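For illustration, a custom expected condition is just a function that returns a promise resolving to a boolean; this sketch waits until an element's text is non-empty (the element id is a placeholder):

```javascript
// Custom expected condition: element has some non-whitespace text
var hasNonEmptyText = function (elem) {
  return function () {
    return elem.getText().then(function (text) {
      return text.trim().length > 0;
    });
  };
};

browser.wait(hasNonEmptyText(element(by.id('status'))), 5000,
    "Element 'status' never got any text");
```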
Another very important point in testing with Protractor is understanding the control flow. You may find an explanation and code example here: When should we use .then with Protractor Promise?
Jean-marc
There are two things to consider.
The first is that you should properly sequence all Protractor actions (as also hinted by @jmcollin92). For this, I typically use .then on every step.
The second important thing is to make sure that a new test it(...) only starts after the previous it(...) has completed.
If you use the latest version of Protractor, you can use Jasmine 2.x and its support for signalling the completion of a test:
it('should do something', function (done) {
  clicksomething().then(function () {
    expect(...);
    done();
  });
});
Here the done argument is invoked to signal that the test is ready. Without this, Protractor will schedule the clicksomething command, and then immediately move on with the next test, returning to the present test only once clicksomething has completed.
Since typically both tests inspect and possibly modify the same browser/page, your tests become unpredictable if you let them happen concurrently (one test clicks to the next page, while another is still inspecting the previous page).
If you use an earlier version of Protractor (1.3 as you indicate), the Jasmine 1.3 runs and waitsFor functions can be used to simulate this behavior.
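With Jasmine 1.3, that simulation would look roughly like this (clicksomething is the hypothetical step from the example above):

```javascript
it('should do something', function () {
  var finished = false;

  runs(function () {
    clicksomething().then(function () {
      finished = true;
    });
  });

  // Block the next step until the flag flips (or time out after 5s)
  waitsFor(function () {
    return finished;
  }, 'clicksomething to complete', 5000);

  runs(function () {
    // assertions go here, after the click has completed
  });
});
```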
Note that the whole point of using Protractor is that Protractor is supposed to know when Angular is finished. So in principle, there should be no need to ever call waitForAngular (my own test suite with dozens of scenarios does not include a single wait/waitForAngular). The better your application-under-test adheres to Angular's design principles, the fewer WaitForAngular's you should need.
I would add that disabling ngAnimate may not be enough. You may also have to disable all transition animation by injecting CSS (What is the cleanest way to disable CSS transition effects temporarily?).
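A sketch of injecting such CSS from a Protractor test; the rule below disables all transitions and animations on the page under test:

```javascript
// Run in the browser context: append a style tag that turns off
// CSS transitions and animations for every element
browser.executeScript(function () {
  var style = document.createElement('style');
  style.type = 'text/css';
  style.innerHTML =
      '* { transition: none !important; ' +
      '-webkit-transition: none !important; ' +
      'animation: none !important; }';
  document.head.appendChild(style);
});
```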