Recently, I've noticed that one of our tests has the following line:
browser.actions().sendKeys(protractor.Key.RETURN);
The intent is understandable, but it actually does nothing, since perform() was never called. For some reason the test itself was passing, which points to a problem in the logic of the test and the expectations that follow.
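For reference, the action only takes effect once perform() is chained onto the end, along these lines:

browser.actions().sendKeys(protractor.Key.RETURN).perform();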
How can I spot this kind of problem as early as possible and, ideally, prevent this Protractor/WebDriverJS usage violation from being committed into the repository?
One option would be to use static code analysis. ESLint is a linting utility with a wide range of plugins, and nowadays there is an eslint-plugin-protractor plugin that, among other Protractor-specific violations, catches browser.actions() calls without perform().
Here is the output of an ESLint run in this case:
/Users/user/job/app/specs/test.spec.js
36:13 error No perform() called on browser.actions() protractor/missing-perform
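A minimal .eslintrc sketch that enables this check might look like the following. The rule name is taken from the output above; exact configuration keys may vary between plugin and ESLint versions:

{
  "plugins": ["protractor"],
  "rules": {
    "protractor/missing-perform": "error"
  }
}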
I expect this could be achieved via a (scriptable) editor plugin or linter rule.
That said, surely the best way to evaluate the test script is to run it for real, but also to ensure that every test action has a corresponding assertion or validation.
Your Key.RETURN presumably has some effect on the DOM, or initiates some action, whose result can be detected (page changes, data changes, etc.); asserting on that is probably more meaningful and easier to read than a static analysis rule.
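As a rough sketch (the selector here is purely illustrative), the action plus a follow-up assertion could look like:

browser.actions().sendKeys(protractor.Key.RETURN).perform();
// assert on a visible consequence of the key press; '.search-results' is a hypothetical selector
expect(element(by.css('.search-results')).isDisplayed()).toBe(true);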
I am writing a web app as a hobby using nodejs and react.
I have a file where I use some utility functions, for example foo.
After using this function in some other files, I decided to change the export and wrap the function in an object, like Util.foo.
There was one file where I forgot to change the import statement from the function to the object, so I was still calling foo() instead of Util.foo().
I couldn't catch it in my webpack build or even in my unit tests; I only caught it when running the code and executing the appropriate function.
My question is, how can I avoid future mistakes like this? Are there any tools other than refactoring tools for this matter?
By the way, I am using Atom IDE.
This should have been caught by your unit tests if this part of your code is covered completely.
Calling a non-existing function will result in an error along the lines of undefined is not a function and should fail your test case.
To avoid issues like this, make sure your test coverage is exhaustive. A coverage tool like Istanbul may be helpful in determining areas for improvement.
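For example, a test along these lines would blow up as soon as the stale import is executed. The module and function names here are made up for illustration, and the syntax is Jest/Jasmine-style; adapt it to whatever test runner you use:

// Hypothetical test exercising the code path that still calls foo() directly.
// If foo is no longer exported as a bare function, this fails with
// "TypeError: foo is not a function" instead of passing silently.
const { buildReport } = require('./reportBuilder'); // illustrative module that uses the old import

it('builds a report without throwing', function () {
  expect(function () { buildReport([]); }).not.toThrow();
});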
I am trying to figure out how to restrict my tests, so that the coverage reporter only considers a function covered when a test was written specifically for that function.
The following example from the PHPUnit docs shows pretty well what I am trying to achieve:
The @covers annotation can be used in the test code to specify which method(s) a test method wants to test:
/**
* @covers BankAccount::getBalance
*/
public function testBalanceIsInitiallyZero()
{
$this->assertEquals(0, $this->ba->getBalance());
}
If the test above were executed, only the getBalance function would be marked as covered, and no others.
Now some actual code from my JavaScript tests. This test shows the unwanted behaviour that I am trying to get rid of:
it('Test get date range', function()
{
expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
});
This test will mark the function getDateRange as covered, but also any other function that is called from inside getDateRange. Because of this quirk the actual code coverage for my project is probably a lot lower than the reported code coverage.
How can I stop this behaviour? Is there a way to make Karma/Jasmine/Istanbul behave the way I want it, or do I need to switch to another framework for JavaScript testing?
I don't see any particular reason for what you're asking. I'd say if your test causes a nested function to be called, then the function is covered too. You are indeed indirectly testing that piece of code, so why shouldn't that be included in the code coverage metrics? If the inner function contains a bug, your test could catch it even if it's not testing that directly.
You can annotate your code with special comments to tell Istanbul to ignore certain paths:
https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
but I think that's for the opposite purpose: it keeps reported coverage from dropping when you know a particular execution path isn't worth covering, maybe because it would be too hard to write a test case for it.
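For reference, those annotations look roughly like this (see the link above for the full syntax):

/* istanbul ignore next */
function hardToTestFallback() {
  // everything in here is excluded from the coverage report
}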
Also, if you care about testing your "low level" functions in isolation, make sure your code is structured in a modular way so that you can test those by themselves first. You can also set up different test run configurations, so you can have a suite that tests only the basic logic and reports the coverage for that.
As suggested in the comments, mocking and dependency injection can help make your tests more focused, but you basically always want some high-level tests where you check the integration of these parts together. If you mock everything, you never test the actual pieces working together.
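As a sketch of the mocking idea, a Jasmine 2-style spy lets you pin down what an inner call does so the test focuses on getDateRange itself. Here dateHelpers and parseDate are made-up names standing in for whatever getDateRange actually calls:

it('Test get date range with a stubbed collaborator', function () {
  // stub the hypothetical helper that getDateRange calls internally
  spyOn(dateHelpers, 'parseDate').and.callFake(function (s) {
    return new Date(s);
  });
  expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
});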
I'm making an Opera extension. It includes a background script which fails very silently. It runs in a unique environment, so I can't take it just anywhere to check whether it works (it needs predefined variables). Is there a way to debug scripts without running them, that is, to check whether the syntax is correct? I want something like JSLint that, instead of telling me how bad my code is, tells me where the syntax errors are.
If you only want a quick search for SyntaxErrors, you could drop the code into Closure Compiler, and just choose the "Whitespace Only" option.
It'll notify you of invalid code without any code styling analysis to clutter things up.
http://closure-compiler.appspot.com/home
If you choose the "Pretty Print" option, it'll also give you a well indented result in case the original code needed some cleanup.
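For instance, pasting something like this illustrative snippet with a missing bracket gets flagged as a parse error, with no style complaints mixed in:

function init() {
  var items = [1, 2, 3;   // missing ']': a plain SyntaxError any parser will report
  return items;
}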
When an exception occurs in my QUnit tests, all it will say is
Died on test #n: message
How do I get it to print a backtrace or some other location information so that I can see where the exception occurred?
I don't think it is possible to make QUnit give you a trace of where the error happened. Your code has generated an exception, which QUnit has caught and reported. If you tick the 'notrycatch' checkbox at the top of the QUnit results, your tests will run again, but this time QUnit won't catch the exception. Your browser may then give you more information on what actually happened, but it will depend on what the error was.
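Depending on your QUnit version, the same switch can usually also be set directly in the test page URL (the page name here is illustrative):

yourtests.html?notrycatch=true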
edit:
While answering this, I began to suspect that this is not what you wanted to ask, so I edited the answer to show this, probably more useful, part first:
Because you write "When an exception occurs in my QUnit tests" let me explain the concept of testing in a bit more depth:
First of all: the exception does not occur in your QUnit tests, but in your code. The good news is that QUnit is in your case doing exactly what it should do: it tests your code, and since your code is faulty, it raises an exception when being tested.
As QUnit is a testing environment, it is not responsible for delivering exception tracebacks. It is only there to check whether the functionality you implemented works the way you expect it to, not to track down bugs. For that purpose, tools like Firebug or Safari's developer tools are much more suitable.
Let me describe a scenario:
- you write a function
- you eliminate bugs from it with (e.g.) Firebug
- you write a QUnit test case to prove the function really does what you want it to do; the tests pass
- (and now it gets interesting, because this is what testing really is for) you add some additional functionality to your function, because it is needed
- if you have done everything right, your tests pass, and you can be sure everything will continue to work as expected, because they do (if you have written them well)
To sum that up: tests are not for debugging, but for assuring that things work the way you think they work. If a bug appears, you do not write a test to solve it; you write a test to reproduce it. Then you find the bug, remove it, and the test will pass. If the bug is reintroduced later on (e.g. because of code changes), the test will fail again, and you immediately know the bug is back.
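As a small illustration of "write a test to reproduce it" (the function name and expected value are made up), a QUnit test of that era would look roughly like:

// Reproduces a hypothetical zero-padding bug: fails while the bug exists,
// passes once it is fixed, and fails again if the bug ever comes back.
test("formatDate pads single-digit days", function () {
  equal(formatDate(new Date(2011, 0, 5)), "2011-01-05", "day should be zero-padded");
});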
This can be taken even further by establishing test driven development, where you write tests before you write the functionality itself. Then the scenario above would change to this:
- you write a test describing the expected results of your code
- you write the code; when bugs appear, you track them down with (e.g.) Firebug
- while progressing, one test after the other will start to pass
- when adding additional functionality, you first write additional tests
There are two major advantages in doing so:
- you can be sure you have the necessary tests, and
- you are forced to pin down exactly what you want your code to do, because otherwise you can't write the tests.
Happy testing.
edit end - original answer follows, just in case it is needed.
When using QUnit, I would strongly recommend following the approach shown on the jQuery documentation site http://docs.jquery.com/Qunit:
To use QUnit, you have to include its qunit.js and qunit.css files and
provide a basic HTML structure for displaying the test results:
All you have to do is load the qunit.js and qunit.css files, then put this snippet into your page to get visual feedback about the testing process:
<h1 id="qunit-header">QUnit example</h1>
<h2 id="qunit-banner"></h2>
<div id="qunit-testrunner-toolbar"></div>
<h2 id="qunit-userAgent"></h2>
<ol id="qunit-tests"></ol>
<div id="qunit-fixture">test markup, will be hidden</div>
Doing so results in a neatly rendered, interactive console showing exact reports about the test results. There is a row for each test showing whether it passed or not; clicking on that row unfolds the results of each single test.
To customize the error messages QUnit shows, you just pass the string to be shown as an additional argument to your assertion. So instead of
ok($('.testitem:first').is(':data(droppable)'))
use
ok($('.testitem:first').is(':data(droppable)'),
"testitem is droppable after calling setup_items('.testitem')");
to get a descriptive error message. Otherwise QUnit falls back to a standard error message associated with the assertion used.
Where I work, all our JavaScript is run through a compiler before it's deployed for a production release. One of the things this JavaScript compiler does (besides things like minification) is look for lines of code that appear like this, and strip them out of the release versions of our JavaScript:
//#debug
alert("this line of code will not make it into the release build")
//#/debug
I haven't looked around much, but I have yet to see this //#debug directive used in any of our JavaScript.
What is its possible usefulness? I fail to see why this could ever be a good idea, and I think #debug directives (whether in a language like C# or JavaScript) are generally a sign of bad programming.
Was it just a waste of time adding the functionality for //#debug, or what?
If you were using a big JavaScript library like YUI that has a logger built in, it could log debug messages only when in debug mode, for performance.
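For example, YUI's logger works roughly like this: Y.log() calls show up in the debug builds and are stripped from the minified ones (the module name below is illustrative):

Y.log('response received', 'info', 'my-module');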
Since it is a proprietary solution, we can only guess the reasons. A lot of browsers provide a console object for logging various types of messages, such as debug, error, etc.
You could write a custom console object that is always disabled in production mode. However, the log statements would still be present, just in a disabled state.
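A rough sketch of that idea (DEBUG_MODE is an assumed build flag, not a standard global):

// no-op logger in production; the real console in debug builds
var logger = DEBUG_MODE
  ? console
  : { log: function () {}, warn: function () {}, error: function () {}, info: function () {} };

logger.log('only visible when DEBUG_MODE is true');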
By having the source go through a compiler such as yours, these statements can be stripped out entirely, which reduces the byte size of the final output.
Think of it as being equivalent to something like this:
// in a header somewhere...
// debug is off by default unless turned on at compile time
#ifndef DEBUG
#define DEBUG 0
#endif
// in your code...
var response = getSomeData({foo:1, bar:2});
#if DEBUG
console.log(response);
#endif
doStuffWith(response);
This kind of thing is perfectly acceptable in compiled languages, so why not in (preprocessed) JavaScript?
I think it was useful (perhaps extremely useful) a few years back, and it was probably the easiest way for a majority of developers to know what was going on in their JavaScript. That was because IDEs and other tools either weren't mature enough or weren't as widespread in their use.
I work primarily in the Microsoft stack (so I am not as familiar with other environments), but with tools like VS2008/VS2010, Fiddler and IE8's (ugh! - years behind FF) dev tools, and FF tools like Firebug/Hammerhead/YSlow/etc., peppering alerts into your JavaScript isn't really necessary anymore for debugging. (There are probably a few instances where it's still useful, but not nearly as many now.) Being able to step through JavaScript, inspect requests/responses, and modify things on the fly makes debugging with alert statements almost obsolete.
So, //#debug was useful; probably not so much now.
I've used the following self-made stuff:
// Uncomment to enable debug messages
// var debug = true;
function ShowDebugMessage(message) {
if (debug) {
alert(message);
}
}
So when you've declared the debug variable and set it to true, all ShowDebugMessage() calls will call alert() as well. Just use it in your code and forget about in-place conditions like #ifdef or manually commenting out the debug output lines.
For custom projects without any specific console override, I would recommend using https://github.com/sunnykgupta/jsLogger, which is authored by me.
Features:
It safely overrides console.log and takes care of the case where the console is not available (oh yes, you need to factor that in too).
Stores all logs (even if they are suppressed) for later retrieval.
Handles major console functions like log, warn, error, info.
Is open for modifications and will be updated whenever new suggestions come up.