Running the same test case on multiple pages using selenium/protractor - javascript

I have written a few test cases using Selenium/Protractor that run against a single page. I need to run the same test cases against multiple pages, in parallel or one by one. How can I implement this kind of scenario?

If I understood you correctly, you can use multiCapabilities. In protractor.conf, add this piece of code:
multiCapabilities: [
    {'browserName': 'internet explorer'},
    {'browserName': 'chrome'}
],
maxSessions: 1,
browserName - the browser you want to use
maxSessions - how many of them can run in parallel at a time on the remote system

Use the jasmine-data-provider library: https://www.npmjs.com/package/jasmine-data-provider
You can define the different pages as data and run the same test against each page, one after another.
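Here is a sketch of that data-driven pattern. The page paths are placeholders, and `using` below is a minimal stand-in for the helper that jasmine-data-provider exports, so the pattern can be shown without a Jasmine runner:

```javascript
// Minimal stand-in for jasmine-data-provider's using() helper:
// invoke the same spec body once per data value.
function using(values, specFn) {
  values.forEach((value) => specFn(value));
}

const pages = ['/login', '/settings', '/profile']; // placeholder paths

const results = [];
using(pages, (page) => {
  // In a real Protractor spec this body would be an it() block that
  // navigates with browser.get(page) and then runs the shared assertions.
  results.push('ran shared checks on ' + page);
});

console.log(results.length); // 3
```

With the real library you would `require('jasmine-data-provider')` and wrap your `it()` blocks the same way, so one spec definition runs once for every page in the data set.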

Writing proper unit tests

I am brand new to creating unit tests and am attempting to create tests for a project I did not create. The app I'm working with uses Python/Flask as a web container and loads various data stored in JS files into the UI. I'm using pytest to run my tests, and I've created a few very simple tests so far, but I'm not sure whether what I'm doing is even relevant.
Basically, I've created something extremely simple to check whether the files the app needs to run properly are available. I have put together some functions that look for critical files; below are two examples:
import pytest
import requests as req

def test_check_jsitems():
    url = 'https://private_app_url/DB-exports_check_file.js'
    r = req.get(url)
    print(r.status_code)
    assert r.status_code == req.codes.ok

def test_analysis_html():
    url = 'https://private_app_url/example_page.html'
    r = req.get(url)
    print(r.status_code)
    assert r.status_code == req.codes.ok
My tests do work: if I remove one of the files and the page doesn't load properly, my basic tests will show which file is missing. Does it matter that the app must be running for the tests to execute? This is my first attempt at unit testing, so kindly cut me some slack.
Testing is a big topic that does not fit in a single answer, so here are just a couple of thoughts.
It's great that you started testing! These tests at least show that some part of your application is working.
While it is OK that your tests require a running server, getting rid of that requirement would have some advantages:
- you don't need to start the server :-)
- you will still know how to run your tests a year from now (it's just pytest)
- your colleagues can run the tests more easily
- you can run your tests in CI (continuous integration)
- they are probably faster
You could check out the official Flask documentation on how to run tests without a running server.
Currently you only test whether your files are available - that is a good start.
What about testing whether the files (e.g. the HTML file) have the correct content?
If there is a form, can you submit it?
Think about how the app is used - which problems does it solve? - and then try to test those requirements.
If you want to learn more about testing, I'd recommend the testandcode.com podcast by Brian Okken - especially the first episodes teach a lot about testing.

Run a series of test functions so they appear as separate tests within BrowserStack

I have a series of functions like the ones below that step through a web application: they simulate login and then exercise many features of the web app. I am using JavaScript, Nightwatch.js, and Selenium via BrowserStack. The problem is that with this approach everything is reported on BrowserStack as one large test. How can I get each function to report as a separate test on BrowserStack?
this.Settings = function(browser) {
    browser
        .url(Data.urls.settings)
        .waitForElementVisible("div.status-editor .box", 1000);
    Errors.checkForErrors(browser);
    browser.end();
};

this.TeamPanel = function(browser) {
    Errors.checkForErrors(browser);
    browser.end();
};
It seems you are using the same remote browser instance for all the test functions, which is why they run as a single test case on BrowserStack. You need to create a new driver instance before every test function. You can either implement that parallelisation logic in your own framework or use a sample Nightwatch framework like the one here: https://github.com/browserstack/nightwatch-browserstack

How to implement centralised test execution control of NightWatch.js tests?

We are using Grunt to kick off NightWatch.js tests.
There are like 30-40 tests and this number will grow a lot more.
For now, I am aware of only two reasonable ways to choose which tests get run, and they are both manual:
1. Remove all tests that shouldn't be run from the source folder
2. Comment/uncomment the '@disabled': true annotation
I was thinking that a properly structured way of choosing which tests get run would be to have a file, say testPlan.txt, like this:
test1 run
test2 not
test3 run
And then, instead of the current annotation, the test could have some code like this (sorry, my JS is non-existent):
if (checkTestEnabled()) then ('#disabled': true) else (//'#disabled': true)
You can use tags to group your tests into categories and then pick and choose which tests to run based on the tag you pass in. For example, to create a smoke test suite I would add '@tags': ['smokeTests'] at the top of each file that should be included in that suite, and then run them using the command:
nightwatch --tag smokeTests
You can also add multiple tags to each test if you need to group it into additional categories: '@tags': ['smokeTests', 'login']
You should be able to use this the same way in your Grunt tasks. Read here for more details.

Protractor is executing onPrepare multiple times in multicapability mode

I am a newbie to Protractor and JavaScript. I am trying to execute several tests in parallel using multiCapabilities. However, when I do this, onPrepare and beforeAll are executed once per spec. Is there a way to execute onPrepare and onComplete only once for all tests?
I am facing this issue in two situations: 1. different browsers; 2. the same browser with multiple instances, i.e. as follows:
capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,
    maxInstances: 2
},
In both cases my code under onPrepare executes twice. I have a requirement to write the result of each test to a JSON file; I create the file in onPrepare, and it gets overwritten when I use maxInstances > 1.
When you run your Protractor test cases with the multi-capability option, the onPrepare method is executed once for each set of capabilities mentioned in the multiCapabilities object (i.e. sharded tests run in different processes).
In your case, you need to create your test report file in the beforeLaunch method instead. That method executes only once, before the Protractor global objects are initialized.
Refer to https://github.com/angular/protractor/blob/master/lib/config.ts#L404 for additional details on beforeLaunch.

Distinguish between different linux distros in nodejs

Basically I want to be able to run different code depending on what os you have.
I've found out that the os.platform() function will return "win32" (even on 64-bit Windows), "darwin", or "linux" (possibly others?), but I can't seem to get any more specific information.
Ideally I want to be able to tell if Gnome, Unity, KDE, or some other desktop environment is being used.
Getting the active desktop environment/window manager is not a node-specific problem. There are different approaches (some better than others) that include using pgrep to check running process names against known DE/WM binary names and using other tools such as HardInfo or wmctrl.
What I ended up using was a bash script from mscottnielsen. It seems to combine the best of many different commands to find out which desktop environment is in use. Unfortunately, it's kind of hard to figure out the exact string it outputs (it isn't documented anywhere which strings can be output), but other than that it does the job.
See the script here.
