How to get QUnit to print backtrace on exception? - javascript

When an exception occurs in my QUnit tests, all it will say is
Died on test #n: message
How do I get it to print a backtrace or some other location information so that I can see where the exception occurred?

I don't think it is possible to make QUnit give you a trace of where the error happened. Your code has generated an exception, which QUnit has caught and reported. If you tick the 'notrycatch' checkbox at the top of the QUnit results, your tests will run again, but this time QUnit won't catch the exception. Your browser may then give you more information on what actually happened, but it will depend on what the error was.
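If you need a stack trace right away, one workaround (a rough sketch, not a built-in QUnit feature; doSomethingThatMightThrow stands in for your own code) is to wrap the body of the suspect test in a try/catch yourself and log the error's stack property, which most browsers populate on thrown Error objects:
test('suspect test', function () {
    try {
        // doSomethingThatMightThrow is a stand-in for the code that currently dies
        var result = doSomethingThatMightThrow();
        ok(result, 'call returned something truthy');
    } catch (e) {
        // most browsers expose a backtrace on e.stack; fall back to the message
        console.log(e.stack || e.message);
        throw e; // rethrow so QUnit still reports the failure
    }
});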

edit:
While answering this, I began to suspect that this is not what you really wanted to ask. So I edited the answer to show this, probably more useful, part first:
Because you write "When an exception occurs in my QUnit tests", let me explain the concept of testing in a bit more depth:
First of all: the exception does not occur in your QUnit tests, but in your code. The good news is: QUnit is in your case doing exactly what it should do: it tests your code, and as your code is faulty, it raises an exception when being tested.
As QUnit is a testing environment, it is not responsible for delivering exception tracebacks. It is only there to check whether the functionality you implemented works the way you expect it to, not to track down bugs. For that purpose, tools like Firebug or Safari's developer tools are much more suitable.
Let me describe a scenario:
- you write a function
- you eliminate bugs from it with (e.g.) Firebug
- you write a QUnit test case to prove the function really does what you want it to do - tests pass
- (and now it gets interesting, because this is what testing really is for) you add some additional functionality to your function, because it is needed
- if you have done everything right, your tests still pass, and you can be sure everything continues to work as expected (provided you have written the tests well)
To sum that up: tests are not for debugging, but for assuring that things work the way you think they work. If a bug appears, you do not write a test to solve it, but you write a test to reproduce it. Then you find the bug, remove it, and the test will pass. If the bug is reintroduced later on (e.g. because of code changes) the test will fail again, and you immediately know the bug is back.
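As a small sketch of that idea (parseAmount and the expected value are made up for illustration), such a regression test in QUnit could look like this:
module('bug reports');
test('parseAmount should tolerate trailing whitespace', function () {
    // this failed before the bug was fixed; keeping it guards against the bug coming back
    equal(parseAmount('12.50 '), 12.5, 'trailing whitespace is ignored');
});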
This can be taken even further by establishing test driven development, where you write tests before you write the functionality itself. Then the scenario above would change to this:
- you write a test, describing the expected results of your code
- you write the code; when bugs appear, you track them down with (e.g.) Firebug
- while progressing, one test after the other will start to pass
- when adding additional functionality, you first write additional tests
There are two major advantages in doing so:
- you can be sure you have the necessary tests, and
- you are forced to pin down exactly what you want your code to do, because otherwise you can't write the tests.
Happy testing.
edit end - original answer follows, just in case it is needed.
When using QUnit, I would strongly recommend following the approach shown on the jQuery documentation site http://docs.jquery.com/Qunit:
To use QUnit, you have to include its qunit.js and qunit.css files and
provide a basic HTML structure for displaying the test results:
All you have to do is load the qunit.js and qunit.css files, then put this snippet into your page to get visual feedback about the testing process:
<h1 id="qunit-header">QUnit example</h1>
<h2 id="qunit-banner"></h2>
<div id="qunit-testrunner-toolbar"></div>
<h2 id="qunit-userAgent"></h2>
<ol id="qunit-tests"></ol>
<div id="qunit-fixture">test markup, will be hidden</div>
Doing so results in a neatly rendered, interactive console showing exact reports about the test results. There is a row for each test showing whether it passed; clicking on that row unfolds the results of each single assertion.
To customize the messages QUnit shows, you just have to pass the string to be shown as an extra argument to your assertion. So instead of
ok($('.testitem:first').is(':data(droppable)'))
use
ok($('.testitem:first').is(':data(droppable)'),
"testitem is droppable after calling setup_items('.testitem')");
to get a descriptive message in the results. Otherwise QUnit falls back to a generic message associated with the assertion used.
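Put into a complete test (the selector and the setup_items call are taken from the message above; the module and test names are made up), that might look like this:
module('droppable setup');
test('setup_items makes the test items droppable', function () {
    setup_items('.testitem');
    ok($('.testitem:first').is(':data(droppable)'),
       "testitem is droppable after calling setup_items('.testitem')");
});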

Related

Why does protractor not locate element when it's referenced on 2 consecutive lines?

If I use the following code in an it block in my spec.ts file:
element(by.css(".header-text")).getText().then(function(text) {
console.log(text);
});
expect(element(by.css(".header-text")).getText()).toEqual("Project");
I get the following outputted to my console:
Project
Expected '' to equal 'Project'.
Expected :"Project"
Actual :""
A few observations with this.
If I remove the first "line" of code with the console.log, the expect works as expected.
When I run this code first thing after starting up my machine, it works for the first 2-3 runs. That is weird, because Protractor starts a new browser for each run (or so I thought, and appear to observe), so there shouldn't be any caching going on.
It appears to me that this is some kind of "when measured, the value changes" behavior that I really don't expect in programming, but I'm new to JavaScript/Angular/Protractor, so maybe this is just something that happens and you have to have the tribal knowledge that it behaves this way.
This isn't project breaking, I have my test working the way I want. I would just like an explanation as to why this behavior occurs.
You are already resolving the promise, so you could just use it at the same time.
element(by.css(".header-text")).getText()
.then(function(text) {
console.log(text);
expect(text).toEqual("Project");
});
The expect method is able to resolve promises for you, but why ask twice when you already have the value.
The other thing to check for is anything in your application that makes it look like Angular has done its job, but then changes the title - it appears that you may be getting the text before it is populated in some cases, which suggests a race condition.
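If a race like that is the cause, one way to make the test robust (a sketch; the selector comes from the question, the 5-second timeout is arbitrary, and it assumes a Protractor version that ships protractor.ExpectedConditions) is to wait explicitly for the text before asserting:
var EC = protractor.ExpectedConditions;
var header = element(by.css(".header-text"));

// wait up to 5 seconds for the header to actually contain the expected text
browser.wait(EC.textToBePresentInElement(header, "Project"), 5000);

expect(header.getText()).toEqual("Project");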

How to specify which functions/methods should be covered by a test, using Karma, Jasmine, and Istanbul

I am trying to figure out how to restrict my tests, so that the coverage reporter only considers a function covered when a test was written specifically for that function.
The following example from the PHPUnit doc shows pretty good what I try to achieve:
The @covers annotation can be used in the test code to specify which
method(s) a test method wants to test:
/**
 * @covers BankAccount::getBalance
 */
public function testBalanceIsInitiallyZero()
{
    $this->assertEquals(0, $this->ba->getBalance());
}
If the test above were executed, only the function getBalance would be marked as covered, and no others.
Now some actual code sample from my JavaScript tests. This test shows the unwanted behaviour that I try to get rid of:
it('Test get date range', function()
{
    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
});
This test will mark the function getDateRange as covered, but also any other function that is called from inside getDateRange. Because of this quirk the actual code coverage for my project is probably a lot lower than the reported code coverage.
How can I stop this behaviour? Is there a way to make Karma/Jasmine/Istanbul behave the way I want it, or do I need to switch to another framework for JavaScript testing?
I don't see any particular reason for what you're asking. I'd say if your test causes a nested function to be called, then the function is covered too. You are indeed indirectly testing that piece of code, so why shouldn't that be included in the code coverage metrics? If the inner function contains a bug, your test could catch it even if it's not testing that directly.
You can annotate your code with special comments to tell Istanbul to ignore certain paths:
https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
but that's more for the opposite, I think: not to restrict what a test counts as covering, but to exclude a particular execution path you know you don't want covered, maybe because it would be too hard to write a test case for it.
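For reference, those hints are plain comments placed in the source under test (the function below is a made-up example); Istanbul then excludes the marked statement or branch from the report:
/* istanbul ignore next */
function dumpDebugState() {
    // deliberately excluded from the coverage report; only used when debugging by hand
    console.log('current state:', window.appState);
}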
Also, if you care about having your "low level" functions tested in isolation, then make sure your code is structured in a modular way so that you can test them by themselves first. You can also set up different test run configurations, so you can have a suite that tests only the basic logic and reports the coverage for that.
As suggested in the comments, mocking and dependency injections can help to make your tests more focused, but you basically always want to have some high level tests where you check the integrations of these parts together. If you mock everything then you never test the actual pieces working together.
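As a sketch of that isolation idea (parseDate is a hypothetical collaborator of getDateRange, and Jasmine 2.x spy syntax is assumed), stubbing the dependency keeps the test, and the coverage it produces, focused on getDateRange itself:
it('Test get date range without exercising the date parser', function()
{
    // replace the hypothetical collaborator with a trivial fake so its real code does not run
    spyOn(dateService, 'parseDate').and.callFake(function(s) {
        return new Date(s);
    });

    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
    expect(dateService.parseDate).toHaveBeenCalled();
});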

Define custom function in Firebug

I am a chronic user of Firebug, and I frequently need to log various stuff so that I can see what I am doing. The console.log function is a lot to type. Even if I assign it to a single letter variable like q = console.log, I have to do it every time I fire up Firebug. Is there any way to do it such that q always refer to console.log (unless, of course, I override it in my session)?
To answer your question, the functionality doesn't currently exist, however I have found the firebug developers to be very responsive in the past. Why don't you put in a feature request on their forum, or better yet, code it up yourself, and ask them to add it?
Depending on your IDE, simply set up a code snippet (I use Flash Develop, so Tools -> Code Snippets).
I believe this to be a better way than setting up redirect scripts and whatnot, because it keeps the Firebug namespace from being polluted, and makes debugging easier and more consistent if your debugging breaks down.
In Flash Develop I hit Ctrl+B, then Enter. The pipe (|) in the snippet indicates where the cursor will be placed to start typing after the snippet is inserted.
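If all you want is a shorter alias inside the page itself, a tiny shim (a sketch; it only helps on pages where you control the scripts, and it guards against environments where console is missing) does the job:
// q becomes a shorthand for console.log; bind keeps the correct `this` for the console
window.q = (window.console && console.log)
    ? console.log.bind(console)
    : function () {};

q('shorthand logging works');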

Programmatically retrieve count of JavaScript errors on page

I'd like to write a test case (using Selenium, but that's not the point of this question) to validate that my web application has no script errors/warnings or unhandled exceptions at certain points in time (like after initializing a major library).
This information can easily be seen in the debug consoles of most browsers. Is it possible to execute a JavaScript statement to get this information programmatically?
It's okay if it's different for each browser, I can deal with that.
Not long ago I read about your issue (as far as I understood your problem) here.
The idea is the following:
I found, however, that I was often getting JavaScript errors when the page first loaded (because I was working on the JS and was introducing errors), so I was looking for a quick way to add an assert to my test to check whether any JS errors occurred. After some Googling I came to the conclusion that there is nothing built into Selenium to support this, but there are a number of hacks that can be used to accomplish it. I'm going to describe one of them here. Let me state again, for the record, that this is pretty hacky. I'd love to hear from others who may have better solutions.
I simply add a script to my page that will catch any JS errors by intercepting the window.onerror event:
<script type="text/javascript">
window.onerror=function(msg){
$("body").attr("JSError",msg);
}
</script>
This will cause an attribute called JSError with a value corresponding to the JavaScript error message to be added to the body tag of my document if a JavaScript error occurs. Note that I'm using jQuery to do this, so this specific example won't work if jQuery fails to load. Then, in my Selenium test, I just use the command assertElementNotPresent with a target of //body[@JSError]. Now, if any JavaScript errors occur on the page my test will fail and I'll know I have to address them first. If, for some strange reason, I want to check for a particular JavaScript error, I could use the assertElementPresent command with a target of //body[@JSError='the error message'].
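A variation on the same idea, closer to the "count of errors" in the question (a sketch; jsErrors is a made-up global, and the WebDriver call in the comment assumes the JavaScript selenium-webdriver bindings):
// page-side: collect every uncaught error instead of only the last one
window.jsErrors = [];
window.onerror = function (msg, url, line) {
    window.jsErrors.push({ message: msg, url: url, line: line });
};

// test-side, for example with the JavaScript selenium-webdriver bindings:
// driver.executeScript("return window.jsErrors.length;").then(function (count) {
//     // assert that count === 0 here
// });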
Hope this fresh idea helps you :)
try {
    // code
} catch (exception) {
    // send ajax request: exception.message, exception.stack, etc.
}
More info - MDN Documentation

"Introduced global variable(s) _firebug" in QUnit tests

I am using QUnit to perform various simple tests on my website. One of the tests is creating a dialog, showing it and then closing it. The test runs fine, but when run on Firefox with Firebug activated I get an error:
3. Introduced global variable(s): _firebug
I can live with it, but it is annoying: the same code on Chrome runs fine. I ruled out jQuery UI as the culprit since the same error appears without it. However, running without Firebug or without console.log traces does not show the problem.
I grepped all the javascript code I am using and found no mention of any "firebug" variables; and Google was silent on the matter. I want my green screen (all tests passed) back! Any ideas?
After googling a little bit more, it turns out I am not the first to run into this problem: badglobals.js, blog, Google groups. The solution to my particular problem (QUnit reporting a leaky global variable) is to declare the global before starting the tests, for example before the first module is run:
var _firebug;
module('myModule');
I am seeing a spurious _xdc_ variable too; same solution. My first QUnit test file now looks like this:
/* declare spurious Firebug globals */
var _firebug;
var _xdc_;
/* run tests */
module('myModule');
My bar is all green now, even with noglobals checked! I hope this helps anyone else who finds this annoying issue.
