We are using Grunt to kick off NightWatch.js tests.
There are like 30-40 tests and this number will grow a lot more.
For now, I am aware of only two reasonable ways to choose which tests get run and they are both manual:
1. Remove all tests that shouldn't be run from the source folder
2. Comment/uncomment the '@disabled': true annotation
I was thinking that a properly structured way of choosing which tests get run would be to have a certain file, say, "testPlan.txt" like this:
test1 run
test2 not
test3 run
And then, instead of the current annotation, the test could have some code such as this (sorry, my JS is non-existent):
if (checkTestEnabled()) then (// '@disabled': true) else ('@disabled': true)
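Spelled out a bit more concretely, the idea might look something like this in a Nightwatch test module (checkTestEnabled() is a made-up helper, and the testPlan.txt path and format are the ones assumed above):

var fs = require('fs');

// Made-up helper: returns true if testPlan.txt lists this test name with "run".
function checkTestEnabled(name) {
    var lines = fs.readFileSync('testPlan.txt', 'utf8').split('\n');
    return lines.some(function (line) {
        var parts = line.trim().split(/\s+/);
        return parts[0] === name && parts[1] === 'run';
    });
}

module.exports = {
    '@disabled': !checkTestEnabled('test1'),

    'test1': function (browser) {
        // ...test steps...
        browser.end();
    }
};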
You can use tags to group your tests into categories and then pick and choose which tests to run based on the flag you pass in. For example, if I wanted to create a smoke test suite, I would just add '@tags': ['smokeTests'] at the top of each file that should be included in that suite, and then you can run them using the command:
nightwatch --tag smokeTests
You can also add multiple tags to each test if you need to group them into additional categories: '@tags': ['smokeTests', 'login']
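For example, a tagged Nightwatch test file might look roughly like this (the URL, selector, and expected title are placeholders):

module.exports = {
    '@tags': ['smokeTests', 'login'],

    'Login page loads': function (browser) {
        browser
            .url('https://example.com/login')
            .waitForElementVisible('body', 1000)
            .assert.title('Login')
            .end();
    }
};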
You should be able to use this the same way in your Grunt tasks. Read here for more details.
I am brand new to creating unit tests, and I'm attempting to create tests for a project I did not write. The app I'm working with uses Python/Flask as a web container and loads various data stored in JS files into the UI. I'm using pytest to run my tests, and I've created a few very simple tests so far, but I'm not sure whether what I'm doing is even relevant.
Basically, I've created something extremely simple to check whether the files the app needs to run properly are available. I've put together some functions that look for critical files; below are two examples:
import pytest
import requests as req

def test_check_jsitems():
    url = 'https://private_app_url/DB-exports_check_file.js'
    r = req.get(url)
    print(r.status_code)
    assert r.status_code == req.codes.ok

def test_analysis_html():
    url = 'https://private_app_url/example_page.html'
    r = req.get(url)
    print(r.status_code)
    assert r.status_code == req.codes.ok
My tests do work: if I remove one of the files so the page doesn't load properly, my basic tests show which file is missing. Does it matter that the app must be running for the tests to execute properly? This is my first attempt at unit testing, so kindly cut me some slack.
Testing is a big topic and doesn't fit into a single answer here, but here are a couple of thoughts.
It's great that you started testing! These tests at least show that some part of your application is working.
While it is OK that your tests require a running server, getting rid of that requirement would have some advantages:
you don't need to start the server :-)
you will still know how to run your tests a year from now (it's just pytest)
your colleagues can run the tests more easily
you could run your test in CI (continuous integration)
they are probably faster
You could check out the official Flask documentation on how to run tests without a running server.
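As a rough sketch of that approach (assuming your Flask app object is importable from app.py - adjust the import and paths to your project):

import pytest
from app import app  # assumption: the Flask app object lives in app.py as `app`

@pytest.fixture
def client():
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client

def test_check_jsitems(client):
    # No running server needed: the request is handled in-process by Flask.
    response = client.get('/DB-exports_check_file.js')
    assert response.status_code == 200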
Currently, you only test whether your files are available - that is a good start.
What about testing whether the files (e.g. the HTML file) have the correct content?
If there is a form, can you submit it?
Think about how the app is used - which problems does it solve?
Then try to test those requirements.
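For example, building on the test-client sketch above (the markup, endpoint, and field names are placeholders):

def test_page_content(client):
    response = client.get('/example_page.html')
    # Check that the page contains what users rely on, not just that it loads.
    assert b'<form' in response.data

def test_form_submit(client):
    # Placeholder endpoint and form fields - adjust to the real form.
    response = client.post('/submit', data={'name': 'test'})
    assert response.status_code in (200, 302)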
If you are into learning more about testing, I'd recommend the testandcode.com podcast by Brian Okken - especially the first episodes teach a lot about testing.
I'm working with HTMLhint, but it only runs from the command line or as a plugin; I want this check to run like a test. Is it possible to do that, and how? I've been googling but I can't find a way to do it.
Do you mean you'd like it to run every time you push code? Or, would you like it to run locally in your editor every time you save/as you type?
Every time you push code: look into Travis CI or another continuous integration service. These services can run checks on your code, including linters, each time you push new commits.
Every time you save/as you type: this depends on what your editor needs in order to run the linter. For example, Sublime Text has SublimeLinter, which can automatically run any type of linter once you install the corresponding package.
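For the CI route, one common option (a sketch, assuming the htmlhint npm package and HTML files under src/ - adjust the glob to your project) is to hook the linter into the project's npm test script, which services like Travis CI run by default for Node projects:

{
  "scripts": {
    "test": "htmlhint \"src/**/*.html\""
  }
}

npm test will then fail whenever HTMLhint reports errors, which is exactly what a CI build needs.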
I am trying to figure out how to restrict my tests, so that the coverage reporter only considers a function covered when a test was written specifically for that function.
The following example from the PHPUnit doc shows pretty good what I try to achieve:
The @covers annotation can be used in the test code to specify which method(s) a test method wants to test:
/**
 * @covers BankAccount::getBalance
 */
public function testBalanceIsInitiallyZero()
{
    $this->assertEquals(0, $this->ba->getBalance());
}
If the test above is executed, only the getBalance method will be marked as covered, and no others.
Now an actual code sample from my JavaScript tests. This test shows the unwanted behaviour I'm trying to get rid of:
it('Test get date range', function()
{
    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
});
This test will mark the function getDateRange as covered, but also any other function that is called from inside getDateRange. Because of this quirk the actual code coverage for my project is probably a lot lower than the reported code coverage.
How can I stop this behaviour? Is there a way to make Karma/Jasmine/Istanbul behave the way I want it, or do I need to switch to another framework for JavaScript testing?
I don't see any particular reason for what you're asking. I'd say if your test causes a nested function to be called, then the function is covered too. You are indeed indirectly testing that piece of code, so why shouldn't that be included in the code coverage metrics? If the inner function contains a bug, your test could catch it even if it's not testing that directly.
You can annotate your code with special comments to tell Istanbul to ignore certain paths:
https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
but I think that is meant for the opposite case: not to restrict what a particular test marks as covered, but to exclude an execution path from coverage entirely because you know you don't want it covered, maybe because it would be too hard to write a test case for it.
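For reference, an ignore hint looks like this (the function body here is made up for illustration):

function getDateRange(startDate, endDate) {
    // Tell Istanbul not to count this defensive branch in the coverage report.
    /* istanbul ignore if */
    if (!startDate || !endDate) {
        throw new Error('Both dates are required');
    }
    // ...the rest of the function is instrumented as usual...
}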
Also, if you care about your "low level" functions tested in isolation, then make sure your code is structured in a modular way so that you can test those by themselves first. You can also set up different test run configurations, so you can have a suite that tests only the basic logic and reports the coverage for that.
As suggested in the comments, mocking and dependency injections can help to make your tests more focused, but you basically always want to have some high level tests where you check the integrations of these parts together. If you mock everything then you never test the actual pieces working together.
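If you do want a more isolated test, a Jasmine spy can stub out a collaborator; this sketch assumes a hypothetical dateService.parseDate() helper - substitute whatever getDateRange really calls:

it('Test get date range with the date parsing stubbed out', function () {
    // Replace the real helper with a fake so only getDateRange's own logic runs.
    spyOn(dateService, 'parseDate').and.callFake(function (s) {
        return new Date(s);
    });

    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
    expect(dateService.parseDate).toHaveBeenCalled();
});

Because the stub runs instead of the real helper, the helper's own lines are no longer reported as covered by this test.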
I am using Gulp to create a bunch of tasks for a project I have, and when you type gulp I would like to show some instructions in the terminal telling people which commands they can run and what they do.
I didn't want to use console.log because it blends in, and I wanted to apply bold and other styles to the lettering.
I was searching for a way to do that but couldn't find one that worked properly - does anyone know of one?
Examples of tools that do this are Yeoman and the Foundation for Apps CLI.
If you need to avoid using console.log, you can use the underlying standard output, accessible in Node through process.stdout:
https://nodejs.org/api/process.html#process_process_stdout
The example provided in that link is the actual definition of console.log in Node:
console.log = function(d) {
    process.stdout.write(d + '\n');
};
For colouring and styling your strings you could use cli-color or chalk.
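A rough sketch of how that could look in a default gulp task (this assumes a CommonJS-compatible chalk version, i.e. chalk 4 or earlier; task names and text are placeholders):

var gulp = require('gulp');
var chalk = require('chalk');

gulp.task('default', function (done) {
    process.stdout.write(chalk.bold('Available commands:\n'));
    process.stdout.write('  ' + chalk.cyan('gulp build') + '   Build the project\n');
    process.stdout.write('  ' + chalk.cyan('gulp watch') + '   Rebuild on file changes\n');
    done();
});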
You can either use gulp-help, which even lets you provide details to be printed for a given task, or use gulp-task-listing, which prints tasks as main tasks and sub-tasks.
Visit the links to see all the options they provide.
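With gulp-help, for example, the descriptions live next to the tasks themselves (a sketch; task names and descriptions are placeholders):

// gulp-help wraps gulp so that gulp.task() accepts a description as its second argument.
var gulp = require('gulp-help')(require('gulp'));

gulp.task('build', 'Compiles the sources into dist/', function () {
    // ...build steps...
});

gulp.task('watch', 'Rebuilds whenever a source file changes', function () {
    // ...watch setup...
});

Running gulp help then prints the task list together with these descriptions.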
I wrote some UIAutomation test cases to test my app, but I haven't found any way to run each case from the beginning. When one test case fails, it causes other cases to fail as well. Is there any way to have UIAutomation run each script from the app's starting point? I mean that when a test fails, the app can quit that test and run the next one from the beginning.
I also used tuneup.js to write my scripts. In a test.js file the structure of the scripts is:
test("test1", function () {
some code.
});
test("test2", function () {
some code.
});
Currently, when test1 fails it causes test2 to fail as well; I want the app to be able to quit and start again so that test2 runs from the beginning even when test1 fails.
One thing I'd suggest is to keep your test cases independent from one another so you don't get cascading failures. Nonetheless, you can set up a base state so your automation can "recover" and continue with the remaining test cases. For example, if you have a main view, start each test case by making sure you're at the main view before continuing.
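As a rough sketch with tuneup.js (the "Home" navigation-bar name is a placeholder for whatever identifies your main view):

// Made-up helper: tap the back button until the main view's navigation bar is showing.
function returnToMainView(target, app) {
    var nav = app.navigationBar();
    while (nav.name() !== "Home" && nav.leftButton().isValid()) {
        nav.leftButton().tap();
        target.delay(1);
        nav = app.navigationBar();
    }
}

test("test2", function (target, app) {
    returnToMainView(target, app);
    // some code.
});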
I would like to mention a few points that will help you build good test scripts.
1. Try to maintain separate scripts for different test cases; that helps with the problem of never reaching the next test case.
2. Try to maintain some reusable scripts, such as going to the home screen from any point or moving to a particular screen when needed, and import them into your scripts; that will make it easy to reach each test case in every script.
I am answering this question assuming you know how to write and import scripts. If not, please comment with your query.