Can iOS UIAutomation run each test case from the beginning? - javascript

I wrote some UIAutomation test cases to test my app, but I haven't found any way to run each case from the beginning. When one test case fails, it causes the following cases to fail as well. Is there any way to make UIAutomation run each script from the app's starting point? I mean, when a test fails, the app should quit that test and continue running the next test from the beginning.
I also used tuneup.js to write my scripts. In a test.js file the structure of the scripts is:
test("test1", function () {
some code.
});
test("test2", function () {
some code.
});
Currently, when test1 fails it makes test2 fail as well; I want the app to quit and start again to run the test2 case when test1 fails.

One thing I'd suggest is to keep your test cases independent from one another so you don't get cascading failures. Nonetheless, you can set up a base state so your automation can "recover" and continue with the remaining test cases. For example, if you have a main view, start each test case by making sure you're at the main view before continuing.
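For illustration only, here is a minimal sketch of such a recovery helper in UIAutomation JavaScript. The navigation bar name "Main", the back button, and the retry count are assumptions about a hypothetical app, not anything your project defines:
function recoverToMainView() {
    var app = UIATarget.localTarget().frontMostApp();
    for (var attempts = 0; attempts < 5; attempts++) {
        var navBar = app.mainWindow().navigationBar();
        if (navBar.isValid() && navBar.name() === "Main") {
            return; // already at the base state
        }
        navBar.leftButton().tap(); // assumed to be a back/up button
        UIATarget.localTarget().delay(1);
    }
    UIALogger.logWarning("Could not recover to the main view");
}

test("test2", function (target, app) {
    recoverToMainView(); // start every case from a known state
    // some code.
});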

I would like to mention a few points that will help you build good test scripts.
1. Try to maintain a different script for each test case; that helps solve the problem of never reaching the next test case.
2. Try to maintain some reusable scripts, such as going home from any point or moving to a particular screen when needed, and import them into your test scripts; that makes it easy to reach each test case's starting point in every script. A rough sketch of this is shown below.
I am answering this question assuming you know how to write and import scripts. If not, please do comment with your query.
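As a sketch of that idea, UIAutomation scripts can pull shared helpers in with the #import directive. The file names, the tab-bar button, and the goHome helper below are all hypothetical:
// helpers/navigation.js -- hypothetical shared helper file
function goHome() {
    // Tap a tab-bar button named "Home"; adjust to match your app's UI.
    UIATarget.localTarget().frontMostApp().tabBar().buttons()["Home"].tap();
}

// test.js
#import "tuneup/tuneup.js"
#import "helpers/navigation.js"

test("test2", function (target, app) {
    goHome(); // reach a known screen before the real steps
    // some code.
});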


Writing proper unit tests

I am brand new to creating unit tests, and am attempting to create tests for a project I did not create. The app I'm working with uses Python/Flask as a web container and loads various data stored in js files into the UI. I'm using pytest to run my tests, and I've created a few very simple tests so far, but I'm not sure if what I'm doing is even relevant.
Basically I've created something extremely simple to check if the needed files are available for the app to run properly. I have some functions put together that look for critical files, below are 2 examples:
import pytest
import requests as req

def test_check_jsitems():
    url = 'https://private_app_url/DB-exports_check_file.js'
    r = req.get(url)
    print(r.status_code)
    assert r.status_code == req.codes.ok

def test_analysis_html():
    url = 'https://private_app_url/example_page.html'
    r = req.get(url)
    print(r.status_code)
    assert r.status_code == req.codes.ok
My tests do work: if I remove one of the files and the page doesn't load properly, my basic tests will show which file is missing. Does it matter that the app must be running for the tests to execute properly? This is my first attempt at unit testing, so kindly cut me some slack.
Testing is such a big topic that it doesn't fit into a single answer here, but here are a couple of thoughts.
It's great that you started testing! These tests at least show that some part of your application is working.
While it is OK that your tests require a running server, getting rid of that requirement would have some advantages:
you don't need to start the server :-)
you know how to run your tests, even in a year from now (it's just pytest)
your colleagues can run the tests more easily
you could run your test in CI (continuous integration)
they are probably faster
You could check out the official Flask documentation on how to run tests without a running server.
Currently, you only test whether your files are available - that is a good start.
What about testing whether the files (e.g. the HTML file) have the correct content?
If it is a form, whether you can submit it?
Think about how the app is used - which problems does it solve?
And then try to test these requirements.
If you are into learning more about testing, I'd recommend the testandcode.com podcast by Brian Okken - especially the first episodes teach a lot about testing.

TestCafe Runner.run(runOptions) never returns, browser hangs (Firefox & Chrome)

I've got a little sandbox project I've been playing around with for the last few weeks to learn the ins and outs of implementing a TestCafe runner.
I've managed to solve all my problems except one and at this point I've tried everything I can think of.
Reviewed the following similar questions:
How to close testcafe runner
How to get the testCafe exit code
But still my problem remains.
I've toyed around with my argv.json file.
I've toyed around with my CICDtestBranches.json file.
I've toyed around with my package.json file.
I've tested the same branch that has the problem on multiple machines.
I've tested with multiple browsers (Firefox & Chrome) - both produce the same problem.
I've tried to re-arrange the code, see below.
I've tried adding multiple tests in a fixture and adding a page navigation to each one.
I've tried removing code that processes irrelevant options like video logs & concurrency (parallel execution).
I also talked with some coworkers around the office who have done similar projects and asked them what they did to fix the problem. I tried their recommendations, and even re-arranging things according to what they tried and still no joy.
I've read through the TestCafe documentation on how to implement a test runner several times and still I haven't been able to find any specific information about how to solve a problem with the browser not closing at the end of the test/fixture/script run.
I did find a few bugs that describe similar behavior, but all of those bugs have been fixed and the remaining bugs are specific to either Firefox or Safari. In my case the problem is with both Chrome & Firefox. I am running TestCafe 1.4.2. I don't want to file a bug with TestCafe unless it really is a confirmed bug and there is nothing else that can be done to solve it.
So I know others have had this same problem since my coworker said he faced the same problem with his implementation.
Since I know I am out of options at this point, I'm posting the question here in the hopes that someone will have a solution. Thank you for taking the time to look over my problem.
When executing the below code, after the return returnData; is executed, the .then statement is never executed so the TestCafe command and browser window are never terminated.
FYI the following code is CommonJS implemented with pure NodeJS NOT ES6 since this is the code that starts TestCafe (app.js) and not the script code.
...**Boiler Plate testcafe.createRunner() Code**...
        console.log('Starting test');
        var returnData = tcRunner.run(runOptions);
        console.log('Done running tests');
        return returnData;
    })
    .then(failed => {
        console.log(`Test finished with ${failed} failures`);
        exitCode = failed;
        if (argv.upload) return upload(jsonReporterName);
        else return 0;
        testcafe.close();       // unreachable: both branches above return first
        process.exit(exitCode);
    })
    .then(() => {
        console.log('Killing TestCafe');
        testcafe.close();
        process.exit(exitCode);
    });
I've tried swapping the two final .then statements to see if having one before the other would cause it to close. I copied the testcafe.close() and process.exit() calls and put them after the if-else statement in the then-failed block, although I know they probably shouldn't get called because of the if-else return statements just before them.
I've tried moving those close and exit statements before the if-else returns just to see if that might solve it.
I know there are a lot of other factors that could play into this scenario; like I said, I played around with the runOptions:
const runOptions = {
    // TestCafe run options, see: https://devexpress.github.io/testcafe/documentation/using-testcafe/programming-interface/runner.html#run
    skipJSErrors: true,
    quarantineMode: true,
    selectorTimeout: 50000,
    assertionTimeout: 7000,
    speed: 0.01
};
The best way to look at this problem, the project, and all of the code is to clone the GitHub repo:
> git clone "https://github.com/SethEden/CAFfeinated.git"
Then check out the branch I have been working on for this problem: master
You will need to create an environment variable on your system to tell the framework what sub-path it should work with for the test site configuration system.
CAFFEINATED_TEST_SITE_NAME value: SethEden
You'll need to do a few other commands:
> npm install
> npm link
Then execute the command to run all the tests (just 1 for now)
> CAFfeinated
The output should look something like this:
$ CAFfeinated
Starting test
Done running tests
Running tests in:
- Chrome 76.0.3809 / Windows 10.0.0
LodPage
Got into the setup Test
Got to the end of the test1, see if it gets here and then the test is still running?
√ LodPage
At this point the browser is still spinning and the command line is still busy. You can see from the console output above that the "Done running tests" console log has been output, and the test/fixture should be done since the "Got to the end of the test1,..." console log has also been executed; that one runs as part of test.after(...). So the next thing to execute should be the .then(()) call in app.js... but it never runs. What gives? Any ideas?
I'm looking for what specifically will solve this problem, not just so that I can solve it, but so others don't run into the same pitfall in the future. There must be some magic sauce that I am missing that is probably very obvious to others, but not so obvious to me or others who are relatively new to JavaScript & NodeJS & ES6 & TestCafe.
The problem occurs because you specified the wrong value for the runner.src() method.
The cause of the issue is in your custom reporter. I removed your reporter and now it works correctly. Please try this approach and recheck your reporter.
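For reference, here is a minimal sketch of the documented runner pattern, where run() resolves with the failure count and close() is only called afterwards; the test file path and the browser are placeholders, not anything from the project above:
const createTestCafe = require('testcafe');

let testcafe = null;

createTestCafe('localhost', 1337, 1338)
    .then(tc => {
        testcafe = tc;
        const runner = tc.createRunner();
        return runner
            .src(['tests/sample-fixture.js']) // placeholder path
            .browsers(['chrome'])
            .run();
    })
    .then(failedCount => {
        console.log(`Tests failed: ${failedCount}`);
        return testcafe.close(); // close only after run() has resolved
    })
    .then(() => process.exit(0))
    .catch(err => {
        console.error(err);
        return testcafe.close().then(() => process.exit(1));
    });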

How to specify which functions/methods should be covered by a test, using Karma, Jasmine, and Istanbul

I am trying to figure out how to restrict my tests, so that the coverage reporter only considers a function covered when a test was written specifically for that function.
The following example from the PHPUnit doc shows pretty good what I try to achieve:
The @covers annotation can be used in the test code to specify which method(s) a test method wants to test:
/**
 * @covers BankAccount::getBalance
 */
public function testBalanceIsInitiallyZero()
{
    $this->assertEquals(0, $this->ba->getBalance());
}
If the test above were executed, only the getBalance method would be marked as covered, and no others.
Now some actual code sample from my JavaScript tests. This test shows the unwanted behaviour that I try to get rid of:
it('Test get date range', function()
{
    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
});
This test will mark the function getDateRange as covered, but also any other function that is called from inside getDateRange. Because of this quirk the actual code coverage for my project is probably a lot lower than the reported code coverage.
How can I stop this behaviour? Is there a way to make Karma/Jasmine/Istanbul behave the way I want it, or do I need to switch to another framework for JavaScript testing?
I don't see any particular reason for what you're asking. I'd say if your test causes a nested function to be called, then the function is covered too. You are indeed indirectly testing that piece of code, so why shouldn't that be included in the code coverage metrics? If the inner function contains a bug, your test could catch it even if it's not testing that directly.
You can annotate your code with special comments to tell Istanbul to ignore certain paths:
https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
but that's more for the opposite purpose, I think: not restricting what a single test marks as covered, but keeping a particular execution path out of the coverage numbers entirely, maybe because it would be too hard to write a test case for it.
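For completeness, this is roughly what such a hint looks like; the defensive branch inside getDateRange is invented for the example:
function getDateRange(start, end) {
    // Tell Istanbul not to count this (invented) defensive branch against coverage.
    /* istanbul ignore if */
    if (!start || !end) {
        throw new Error('both dates are required');
    }
    // ... actual range calculation ...
}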
Also, if you care about having your "low level" functions tested in isolation, make sure your code is structured in a modular way so that you can test those by themselves first. You can also set up different test run configurations, so you can have a suite that tests only the basic logic and reports the coverage for that.
As suggested in the comments, mocking and dependency injections can help to make your tests more focused, but you basically always want to have some high level tests where you check the integrations of these parts together. If you mock everything then you never test the actual pieces working together.
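As a sketch of that last point (Jasmine 2.x spy syntax; parseDate is a made-up internal helper on dateService), a spy keeps one test focused on getDateRange itself, while a separate, unmocked test can still check the real integration:
it('Test get date range in isolation', function () {
    // Stub the hypothetical internal helper so only getDateRange's own logic is exercised.
    spyOn(dateService, 'parseDate').and.callFake(function (s) {
        return new Date(s);
    });

    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
    expect(dateService.parseDate).toHaveBeenCalled();
});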

AngularJS + Testem + Jasmine: Why is inject() giving this $injector error?

I'm using Testem with Jasmine to set up an environment to start unit testing in my AngularJS app. Everything was working great until the first time I tried to use the injector. This is what I got back:
test.js
describe('Custom events', function(){
beforeEach(module('AlchemyAdmin'));
beforeEach(inject());
it('should work', function() {
});
});
Error console output:
Custom events should work.
✘ Error: [$injector:modulerr] http://errors.angularjs.org/1.2.25/$in
jector/modulerr?p0=AlchemyAdmin&p1=Error%3A%20%5B%24injector%3Amodulerr%
5D%20http%3A%2F%2Ferrors.angularjs.org%2F1.2.25%2F%24injector%2Fmodulerr
%3Fp0%3DdateRangePicker%26p1%3DError%253A%2520%255B%2524injector%253Amod
ulerr%255D%2520http%253A%252F%252Ferrors.angularjs.org%252F1.2.25%252F%2
524injector%252Fmodulerr%253Fp0%253Dpasvaz.bindonce%2526p1%253DError%252
53A%252520%25255B%252524injector%25253Anomod%25255D%252520http%25253A%25
252F%25252Ferrors.angularjs.org%25252F1.2.25%25252F%252524injector%25252
Fnomod%25253Fp0%25253Dpasvaz.bindonce%25250A%252520%252520%252520%252520
at%252520Error%252520(native)%25250A%252520%252520%252520%252520at%25252
0http%25253A%25252F%25252Flocalhost%25253A7357%25252Fvendor%25252Fangula
r%25252Fangular.min.js%25253A6%25253A450%25250A%252520%252520%252520%252
520at%252520http%25253A%25252F%25252Flocalhost%25253A7357%25252Fvendor%2
5252Fangular%25252Fangular.min.js%25253A20%25253A494%25250A%252520%25252
0%252520%252520at%252520http%25253A%25252F%25252Flocalhost%25253A7357%25
252Fvendor%25252Fangular%25252Fangular.min.js%25253A21%25253A502%25250A%
252520%252520%252520%252520at%252520http%25253A%25252F%25252Flocalhost%2
5253A7357%25252Fvendor%25252Fangular%25252Fangular.min.js%25253A33%25253
A267%25250A%252520%252520%252520%252520at%252520r%252520(http%25253A%252
52F%25252Flocalhost%25253A7357%25252Fvendor%25252Fangular%25252Fangular.
min.js%25253A7%25253A290)%25250A%252520%252520%252520%252520at%252520e%2
52520(http%25253A%25252F%25252Flocalhost%25253A7357%25252Fvendor%25252Fa
ngular%25252Fangular.min.js%25253A33%25253A207)%25250A%252520%252520%252
520%252520at%252520http%25253A%25252F%25252Flocalhost%25253A7357%25252Fv
endor%25252Fangular%25252Fangular.min.js%25253A33%25253A284%25250A%25252
It seems like there is something obvious I'm missing, but I can't quite grasp it. Note that taking out the beforeEach(inject()); line and writing standard tests in the it block works like a charm. Also, if I just declare angular.module('myApp'); and then try to module() and inject() that, it works fine. It seems like something is going on in my module definition, maybe, but the app itself works fine with no errors as far as I can tell!
Anybody run into this or know what I should look into? Thanks in advance!
Edit:
I thought it might make more sense if I gave a little context to my question. I have been developing an Angular app for a few weeks now, and I've been bitten one too many times by not having unit tests. Having decided to TDD from here on out, I set up Testem, wrote a .spec.js file, and tried to get started. I'm not testing any existing code, which will come later, but just trying to test-drive the part of the app I'm on. Before I even wrote my first piece of code or test, just setting up the module() and inject() calls per the docs failed miserably. That is where I am right now.
Well, I shouldn't have gotten frustrated with the Angular error links. By clicking through them, I eventually found a sub-dependency that I was not linking to! If anyone else finds themselves in this particular pickle, I hope this helps them! I am closing the plunker I made to remove my live code from the public. Special thanks to PSL for responding so quickly and being so willing to try to understand my issue.
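Decoded, the innermost error in the dump above is [$injector:nomod] for pasvaz.bindonce, i.e. a module the app declares as a dependency but the test runner never serves. Roughly, the fix is to list that script in the Testem config alongside Angular itself; the exact file paths below are only examples:
// testem.js -- example config; adjust paths to your project layout
module.exports = {
    framework: 'jasmine',
    src_files: [
        'vendor/angular/angular.min.js',
        'vendor/angular-mocks/angular-mocks.js',
        'vendor/bindonce/bindonce.min.js', // the missing sub-dependency
        'app/**/*.js',
        'test/**/*.spec.js'
    ]
};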

How to get QUnit to print backtrace on exception?

When an exception occurs in my QUnit tests, all it will say is
Died on test #n: message
How do I get it to print a backtrace or some other location information so that I can see where the exception occurred?
I don't think it is possible to make QUnit give you a trace of where the error happened. Your code has generated an exception, which QUnit has caught and reported. If you tick the 'notrycatch' checkbox at the top of the QUnit results, your tests will run again, but this time QUnit won't catch the exception. Your browser may then give you more information on what actually happened, but it will depend on what the error was.
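If you don't want to tick that box on every run, the same switch can, as far as I know, also be flipped from code or via the URL:
// In your test page or setup script, before any tests run:
QUnit.config.notrycatch = true; // let exceptions escape so the browser reports a stack trace

// Equivalent: load the test page with notrycatch=true in the query string.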
edit:
While answering this, I began to suspect that this is not what you really wanted to ask, so I edited the answer to show this, probably more useful, part first:
Because you write "When an exception occurs in my QUnit tests", let me explain the concept of testing in a bit more depth:
First of all: the exception does not occur in your QUnit tests, but in your code. The good news is: QUnit is in your case doing exactly what it should do: it tests your code, and as your code is faulty, your code raises an exception when being tested.
As QUnit is a testing environment, it is not responsible for delivering exception tracebacks. It is only there to check whether the functionality you implemented works the way you expect it to, not to track down bugs. For that purpose, tools like Firebug or Safari's developer tools are much more suitable.
Let me describe a scenario:
you write a function
you eliminate bugs from it with (e.g.) Firebug
you write a QUnit test case to prove the function really does what you want it to do - the tests pass
(and now it gets interesting, because this is what testing really is for) you add some additional functionality to your function, because it is needed
if you have done everything right, your tests pass, and you can be sure everything will continue to work as expected, because they do (if you have written them well)
To sum that up: tests are not for debugging, but for assuring that things work the way you think they work. If a bug appears, you do not write a test to solve it, but you write a test to reproduce it. Then you find the bug, remove it, and the test will pass. If the bug is reintroduced later on (e.g. because of code changes), the test will fail again, and you immediately know the bug is back.
This can be taken even further by establishing test driven development, where you write tests before you write the functionality itself. Then the scenario above would change to this:
you write a test, describing the expected results of your code
you write the code; when bugs appear you track them down with (e.g.) Firebug
while progressing, one test after the other will start to pass
when adding additional functionality, you first write additional tests
There are two major advantages in doing so:
you can be sure you have the necessary tests, and
you are forced to pin down exactly what you want your code to do, because otherwise you can't write the tests.
Happy testing.
edit end - original answer follows, just in case it is needed.
When using QUnit, I would strongly recommend following the approach shown on the jQuery documentation site http://docs.jquery.com/Qunit:
To use QUnit, you have to include its qunit.js and qunit.css files and provide a basic HTML structure for displaying the test results:
All you have to do is load the qunit.js and qunit.css files, then put this snippet into your page to get visual feedback about the testing process:
<h1 id="qunit-header">QUnit example</h1>
<h2 id="qunit-banner"></h2>
<div id="qunit-testrunner-toolbar"></div>
<h2 id="qunit-userAgent"></h2>
<ol id="qunit-tests"></ol>
<div id="qunit-fixture">test markup, will be hidden</div>
Doing so results in a neatly rendered and interactive console showing exact reports about the test results. There is a row for each test showing whether it passed or not, and clicking on that row unfolds the results of each single test.
To customize the error messages QUnit shows, you just have to append the message string to be shown to your assertion. So instead of
ok($('.testitem:first').is(':data(droppable)'))
use
ok($('.testitem:first').is(':data(droppable)'),
"testitem is droppable after calling setup_items('.testitem')");
to get a descriptive error message. Otherwise QUnit falls back to some standard error message associated with the assertion used.
