Output tests to browser with Karma - javascript

I am using Karma to test my project and can see tests passing and failing in the console window; however, how do I get these to show in the browser? The browser only shows a green bar (even though a test is failing) with
Karma v0.10.2 - connected
written in it.
I have tried adding singleRun: false to the karma.config.js file.
The config file looks like this:
module.exports = function (config) {
  config.set({
    basePath: '../',
    files: [
      'app/lib/angular/angular.js',
      'app/lib/angular/angular-*.js',
      'test/lib/angular/angular-mocks.js',
      'app/js/**/*.js',
      'test/unit/**/*.js'
    ],
    autoWatch: true,
    singleRun: false,
    frameworks: ['jasmine'],
    browsers: ['Chrome'],
    plugins: [
      'karma-junit-reporter',
      'karma-chrome-launcher',
      'karma-firefox-launcher',
      'karma-jasmine'
    ],
    junitReporter: {
      outputFile: 'test_out/unit.xml',
      suite: 'unit'
    }
  })
}

matthias-schuetz wrote a plugin that claims to produce HTML test output:
https://npmjs.org/package/karma-htmlfile-reporter
Along with following the instructions on the plugin page, I had to include a reference to the plugin in the Karma config -
plugins: [
  'karma-htmlfile-reporter'
]
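For completeness, here is a minimal sketch of how the HTML reporter could be wired into a config like the one above; the reporter name and option keys follow the plugin's README at the time of writing and may differ between versions, so treat them as assumptions to verify:

module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    browsers: ['Chrome'],
    // 'html' is the reporter id registered by karma-htmlfile-reporter
    reporters: ['progress', 'html'],
    htmlReporter: {
      outputFile: 'test_out/units.html' // illustrative path; choose your own
    },
    plugins: [
      'karma-jasmine',
      'karma-chrome-launcher',
      'karma-htmlfile-reporter'
    ]
  });
};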

According to the documentation (which is admittedly not very clear on this point):
singleRun turns CI mode on when set to true, so set it accordingly and you will see a red failure status in the top bar of the browsers.

There is no straightforward way to do this with Karma.
The "best" way to solve this problem would be to hunker down and write an HTML reporter for Karma (which would make a lot of us other Karma users very happy).
If this is too much work for you, the second best thing is to use the junit reporter, which generates an XML file. You can then post-process the XML file in some way that turns it into an HTML file which you can then view in your browser.
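As a rough illustration of that post-processing step (not part of the original answer), a small Node script can turn the junit XML produced above into a bare-bones HTML table; the paths assume the outputFile configured earlier, and the XML handling is deliberately naive:

// junit-to-html.js - sketch only; assumes the junit reporter wrote test_out/unit.xml
var fs = require('fs');

var xml = fs.readFileSync('test_out/unit.xml', 'utf8');

// Pull out each <testcase> element and check whether it contains a <failure>
var rows = (xml.match(/<testcase\b[\s\S]*?(?:\/>|<\/testcase>)/g) || []).map(function (tc) {
  var name = (tc.match(/name="([^"]*)"/) || [])[1] || '(unnamed)';
  var failed = /<failure\b/.test(tc);
  return '<tr><td>' + name + '</td><td style="color:' +
    (failed ? 'red">FAIL' : 'green">PASS') + '</td></tr>';
});

var html = '<html><body><table border="1"><tr><th>Test</th><th>Result</th></tr>' +
  rows.join('') + '</table></body></html>';

fs.writeFileSync('test_out/unit.html', html);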

I wanted to display HTML5 Web Notifications with Karma, so I wrote something quick to get it working with Karma version 0.11 (it might behave slightly differently with other versions). I load this script in with the rest of my application scripts. It stores the Karma test results, and after completion it determines the success of the run and then restores the original Karma functions so they aren't changed when this script gets run again.
// store all my test results
var results = [];

// wrap the karma result function
var resultFunc = window.__karma__.result;
window.__karma__.result = function (result) {
  // run the original function
  resultFunc(result);
  // push each result onto my storage array
  results.push(result);
};

// wrap the karma complete function
var completeFunc = window.__karma__.complete;
window.__karma__.complete = function (result) {
  // run the original function
  completeFunc(result);
  // determine success
  var success = results.every(function (r) { return r.success; });
  if (success) {
    // display a success notification
  } else {
    // display a test failure notification
  }
  // reset the result function
  window.__karma__.result = resultFunc;
  // reset the complete function
  window.__karma__.complete = completeFunc;
};
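To fill in the notification placeholders above, a minimal sketch using the standard HTML5 Notification API could look like this; the notify helper is my own illustration rather than part of the original snippet, and browser support and permission prompts vary:

function notify(title, body) {
  if (!('Notification' in window)) return; // Notification API not available
  if (Notification.permission === 'granted') {
    new Notification(title, { body: body });
  } else if (Notification.permission !== 'denied') {
    Notification.requestPermission(function (permission) {
      if (permission === 'granted') new Notification(title, { body: body });
    });
  }
}

// Inside the wrapped complete function:
//   success ? notify('Karma', 'All tests passed')
//           : notify('Karma', 'Some tests failed');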

Related

Jasmine Runs Test Three Times

I am running Karma/Jasmine/Angular 2.0 tests on my development box. Just recently, Jasmine on my development box decided to start running my tests three times. Yes, exactly three times, every time.
On the first run, everything passes as expected. However, on the second and third pass, all of the same things fail. It always acknowledges that there are 7 tests, but runs 21, of which 10 fail (first-grade math out the window).
This also fails on Travis with SauceLabs. (Note: that links to an older build with 3 tests, but 9 ran and 5 failed.)
I have a screenshot, karma.conf.js file, and the one suite which started this whole thing. Any help will be greatly appreciated.
Culprit [TypeScript] (Remove this and problem solved on my dev box):
Full source
describe('From the Conductor Service', () => {
  let arr: Array<ComponentStatusModel> = null;
  let svc: ConductorService = null;

  beforeEach(() => {
    arr = [/* Inits the array */];
    svc = new ConductorService();
  });

  describe('when it is handed a container to hold objects which need to be loaded', () => {
    // More passing tests...

    /// vvvvv The culprit !!!!!
    describe('then when you need to access the container', () => {
      beforeEach(() => {
        svc.loadedContainer = arr;
      });

      it('it should always be available', () => {
        assertIsLocalDataInTheService(arr, svc.loadedContainer);
      });
    });
    /// ^^^^^ End of culprit !!!!!
  });

  // More passing tests...
});
Failing Tests:
Browser Screenshots:
Not sure if this is related, but before all of the errors happen, the Jasmine call stack is smaller (left, observe scrollbar). After the errors start, the stack just gets bigger with repeating calls to the same functions (right, observe scrollbar).
Suite Stack is Wrong:
In my test, the Nanobar and Conductor spec files are totally separate. However, you can see the suites array includes stuff from the Nanobar and Conductor specs. Somehow Jasmine mashed these two spec files together (after everything started failing), and resulted in my describe() statements not making any sense when published to the console.
Simplified karma.conf.js:
Full source
module.exports = function (config) {
  config.set({
    autoWatch: false,
    basePath: '.',
    browsers: ['Chrome'],
    colors: true,
    frameworks: ['jasmine'],
    logLevel: config.LOG_INFO,
    port: 9876,
    reporters: ['coverage', 'progress'],
    singleRun: true,
    coverageReporter: {
      // Code coverage config
    },
    files: [
      // Loads everything I need to work
    ],
    plugins: [
      'karma-chrome-launcher',
      'karma-coverage',
      'karma-jasmine'
    ],
    preprocessors: {
      'app/**/*.js': ['coverage']
    },
    proxies: {
      // Adjust the paths
    }
  })
}
Can you try refreshing your browser in your first assertion in each of your test files?
Try this:
browser.restart();
I had the same problem and this fixed it for me.
The first thing to know is that these tests run in a random order, so if you set up some data in one test case you cannot assume it can be reused in another.
You have to declare the data in a beforeEach so that every test case gets fresh data.
All test cases run independently.
If you are using an array or object, you must deep-clone it first, because arrays and objects are passed by reference; if you manipulate any value you will also change the original.
In most cases, if a test fails, the error is in the data you are passing into the test cases.
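As a hypothetical illustration of that advice (the suite and data here are made up), re-creating or deep-cloning shared data in beforeEach keeps each spec independent:

describe('a suite with shared data', function () {
  var arr;

  beforeEach(function () {
    // a fresh deep copy for every spec; a JSON round-trip is enough for plain data
    arr = JSON.parse(JSON.stringify([{ id: 1, loaded: false }]));
  });

  it('can mutate its own copy without affecting other specs', function () {
    arr[0].loaded = true;
    expect(arr[0].loaded).toBe(true);
  });

  it('still sees the original values', function () {
    expect(arr[0].loaded).toBe(false);
  });
});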
I would try to debug this and pinpoint the exact cause.
Usually happens when I have redirection code or any reload code inside the functions I'm testing.
You can try prefixing describe and it with an f (i.e. fdescribe and fit) to focus on a single suite or spec.
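For example, a minimal focused suite (the names here are illustrative) runs only the focused specs, which can help isolate which suite triggers the repeated runs:

fdescribe('From the Conductor Service', () => {
  fit('it should always be available', () => {
    // only focused suites/specs are executed while debugging
    expect(true).toBe(true);
  });
});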

Setting up Screenshot Reporter for Protractor

Since I'm a newbie with automated tests and Protractor, I'm having some trouble setting this up in my tests.
According to the guide, every time I create a new instance of the screenshot reporter I have to pass a directory path. Does this mean I should create a new instance in every spec file?
Also, there are functions to take screenshots of my skipped and my failed tests. Where am I supposed to use takeScreenShotsForSkippedSpecs and takeScreenShotsOnlyForFailedSpecs? In my config file?
This is my onPrepare:
onPrepare: function () {
  browser.driver.manage().window().maximize();
  global.dvr = browser.driver;
  global.isAngularSite = function (flag) {
    browser.ignoreSynchronization = !flag;
  };
  jasmine.getEnv().addReporter(new ScreenShotReporter({
    baseDirectory: '/tmp/screenshots',
    takeScreenShotsForSkippedSpecs: true,
    takeScreenShotsOnlyForFailedSpecs: true
  }));
}
Note: if you are using jasmine2, use protractor-jasmine2-screenshot-reporter.
For jasmine1:
I've been successfully using the protractor-html-screenshot-reporter package. It is based on protractor-screenshot-reporter, but also provides a nice HTML report.
Here is what I have in the protractor config:
var HtmlReporter = require("protractor-html-screenshot-reporter");

exports.config = {
  ...
  onPrepare: function () {
    // screenshot reporter
    jasmine.getEnv().addReporter(new HtmlReporter({
      baseDirectory: "test-results/screenshots"
    }));
  },
  ...
}
After running tests, you would get an HTML file containing (example):
You can click "view" to see the test-case specific screenshot in the browser.
The readme in the library is pretty self explanatory. After installing the library, add it onto protractor's onPrepare in your protractor config file.
i.e.
protractorConf.js:
var ScreenShotReporter = require('protractor-screenshot-reporter');

exports.config = {
  // your config here ...
  onPrepare: function () {
    // Add a screenshot reporter and store screenshots to `/tmp/screenshots`:
    jasmine.getEnv().addReporter(new ScreenShotReporter({
      baseDirectory: '/tmp/screenshots',
      takeScreenShotsForSkippedSpecs: true
    }));
  }
}
Then run protractor protractorConf.js to launch Protractor.
Just recently I published a brand new plugin called protractor-screenshoter-plugin that captures a screenshot and console logs for each browser instance. The snapshot is taken optionally for each expect or spec. It comes with a beautiful Angular- and Bootstrap-based analytics tool to visually check and fix test results.
Check it out at https://github.com/azachar/protractor-screenshoter-plugin.
Also, I created a list of all available alternatives, so if you find something else, please do not hesitate to add it there.

Testing Angular with Gulp-mocha: "Window is not Defined"

I am setting up a project with Gulp to run unit tests with Mocha, including Angular tests. I have the basic setup working (indexOf, etc.); however, when I include angular-mocks I get this error or a node-module error:
ReferenceError in 'gulp-mocha': "Window is not defined"
I've tried including angular-module-mocks, using gulp-mocha-phantomjs... but the result is the same. (With mocha-phantomjs my error was 'Init timeout'.) I've seen many examples of configurations with Mocha and Angular or Gulp and Karma but have not yet found a solution for Gulp, Mocha and Angular alone.
I'm thinking of something similar to this Karma solution to correctly load angular-mocks by specifying it in a config file and forcing Gulp to load it (Angular testing with Karma: "module is not defined"). However, even if this would work, it seems like gulp-mocha does not support loading a configuration file (mocha.opts - https://github.com/sindresorhus/gulp-mocha/issues/26). I would be happy to hear a more straightforward solution.
I am using angular-mocks 1.2.22 and gulp-mocha 1.1.0.
Code snippets:
var gulp = require('gulp'); // import implied by the task below
var mocha = require('gulp-mocha');

gulp.task('test', function () {
  return gulp.src('test/*.js', {read: false})
    .pipe(mocha({reporter: 'nyan', timeout: 400}));
});
test/test.js
var assert = require('assert');
var angular_mocks = require('angular-mocks'); //Fails only when this line is present
//tests
What finally worked for me with Gulp/Browserify/Mocha was using Karma and Mocha combined.
Specifically, I used gulp-karma, defined the configuration in karma.config.js, and used a dummy file for gulp.src as others have done:
var karma = require('gulp-karma'); // import implied by the text above

gulp.task('test', function () {
  return gulp.src('./foobar.js')
    .pipe(karma({
      configFile: 'karma.config.js',
      action: 'run'
    }))
    .on('error', handleErrors);
});
Then I used this karma.config.js file. I needed the npm modules karma-mocha, karma-chai, and karma-bro. (With only the first two, I was getting 'require is not defined'. Then of course I tried including karma-requirejs, but that does not work with Browserify. Then I tried karma-commonjs, which still didn't work. Then I tried karma-browserify, and got a strange error involving bundle() that no one seems to have solved (https://github.com/xdissent/karma-browserify/issues/46). Karma-bro did the trick.)
I also needed to preprocess each file referenced in the tests, as well as the tests themselves. (To use PhantomJS, also include karma-phantomjs-launcher. And I am using the bower version of angular-mocks simply because it is more recent: v1.2.25 compared to 1.2.22 for npm - but the npm version might work.)
module.exports = function(config) {
  config.set({
    basePath: '',

    // frameworks to use
    frameworks: ['browserify', 'mocha', 'chai'],

    // list of files / patterns to load in the browser
    files: [
      'node_modules/angular/lib/angular.min.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'source/javascript/controllers/*.js',
      'source/javascript/*.js',
      'test/*.js'
    ],

    reporters: ['progress'],
    port: 9876,
    colors: true,
    autoWatch: true,
    browsers: ['PhantomJS'],

    preprocessors: {
      'source/javascript/controllers/*.js': ['browserify'],
      'source/javascript/*.js': ['browserify'],
      'test/*.js': ['browserify']
    }
  });
};
And finally this test passes. At the end I needed to make sure the names of my modules and controllers were consistent (capitals etc.) to resolve 'Module not defined' errors. For debugging I replaced node_modules/angular/lib/angular.min.js with node_modules/angular/lib/angular.js in the files.
describe('Angular', function() {
  describe('App Controllers', function() {
    beforeEach(angular.mock.module('App'));

    describe('MessageCtrl', function() {
      it('should retrieve the correct amount of messsages', angular.mock.inject(function($controller) {
        var scope = {},
            ctrl = $controller('MessageCtrl', {$scope: scope});
        assert.equal(scope.messages.length, 2);
      }));
    });
  });
});
I do get this: 'WARNING: Tried to load angular more than once.' I can live with it.

Why is Karma refusing to serve my JSON fixture (which I'd like to use in my jasmine / angularjs tests)

As indicated in this stackoverflow answer, it looks like Karma will serve JSON fixtures. However, I've spent too many hours trying to get it to work in my environment. Reason: I'm doing angular testing and need to load mock HTTP results into the test, as Jasmine doesn't support any global setup/teardown with mock servers and stuff.
In my karma config file, I'm defining a fixture as so:
files: [
  // angular
  'angular/angular.min.js',
  'angular/angular-route.js',
  'angular/mock/angular-mocks.js',
  // jasmine jquery helper
  'jquery-1.10.2.min.js',
  'angular/jasmine-jquery.js',
  // our app
  '../public/js/FooApp.js',
  // our tests
  'angular/*-spec.js',
  // fixtures
  { pattern: 'node/mock/factoryResults.json',
    watched: 'true',
    served: 'true',
    included: 'false' }
]
Before I even attempt to use jasmine-jquery.js in my jasmine test to load the JSON, I see karma choking on trying to serve it:
...
DEBUG [web-server]: serving: /Users/XXX/FooApp/spec/node/mock/factoryResults.json
Firefox 25.0.0 (Mac OS X 10.8) ERROR
SyntaxError: missing ; before statement
at /Users/XXX/FooApp/spec/node/mock/factoryResults.json:1
...
Here's what factoryResults.json looks like:
{ "why": "WHY" }
Any idea what's going on here? I see plenty of examples on the web of folks successfully loading JSON into jasmine tests via karma fixtures. Karma can see the file; if I put the wrong path in my fixture block, I see an error stating that it couldn't find any files that match my fixture pattern. I've tried reformatting the .json file in different ways... Any ideas?
Your problem is that 'false' has to be a boolean, not a string.
There is already an issue to validate the config better and catch such mistakes.
Also, you might write a simple "json" preprocessor (similar to karma-html2js) that would turn the file into valid JS and put the JSON into some global namespace, so that you can keep the tests synchronous; a sketch follows.
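A rough sketch of what such a preprocessor plugin could look like, modeled on karma-html2js-preprocessor; the plugin key and the window.__json__ global are made up for illustration:

var createJsonPreprocessor = function (logger, basePath) {
  var log = logger.create('preprocessor.json');

  return function (content, file, done) {
    log.debug('Processing "%s".', file.originalPath);

    var name = file.originalPath.replace(basePath + '/', '');
    file.path = file.path + '.js'; // serve the fixture as a JS file

    done('window.__json__ = window.__json__ || {};\n' +
         'window.__json__["' + name + '"] = ' + content + ';');
  };
};
createJsonPreprocessor.$inject = ['logger', 'config.basePath'];

module.exports = {
  'preprocessor:json': ['factory', createJsonPreprocessor]
};

A test could then read window.__json__['node/mock/factoryResults.json'] synchronously; because the raw JSON is embedded as a literal, it is already a plain object.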
I also needed json fixtures in my karma test suite.
I ended up just using the html2js preprocessor with json files as well as html.
karma.conf.js:
module.exports = function (config) {
  config.set({
    frameworks: ["jasmine"],
    files: [
      '**/*.js',
      '**/*.html',
      '**/*.json',
      '**/*.spec.js'
    ],
    plugins: [
      'karma-html2js-preprocessor'
    ],
    preprocessors: {
      '**/*.html': ['html2js'],
      '**/*.json': ['html2js']
    }
  });
};
Then it is just a matter of getting the json from the __html__ global.
e.g.
var exampleJson = __html__['example.json'];
var jsonObj = JSON.parse(exampleJson);
var exampleHtml = __html__['example.html'];
document.body.innerHTML = exampleHtml;
So, I had a lot of issues with jasmine-jquery and I got a pretty decent workaround.
It's a little hacky, but it works. Basically, I just create a function accessible on the window, then stack the JSON fixtures inside a little switch:
if (typeof window.fixtures === "undefined") {
  window.fixtures = {};
}

window.setFixture = function (type) {
  var json;
  if (type == "catalog") {
    json = { ... }
  }
  if (typeof type !== "undefined") {
    window.fixtures[type] = json;
  }
  return json;
};
Then, I can just stub it inline in the view:
describe "App.Models.Catalog", ->
it "provides the 'App.Models.Catalog' function", ->
expect(App.Models.Catalog).toEqual(jasmine.any(Function))
it "sets up a fixture", ->
setFixture("catalog")
console.log(fixtures["catalog"])
expect(fixtures["catalog"]).toBeDefined()
Boom, tests pass, and the object comes out in the log:
{
  catalog_id: '2212',
  merchant_id: '114',
  legacy_catalog_id: '2340',
  name: 'Sample Catalog',
  status: '1',
  description: 'Catalog Description '
}
Now, it's accessible within my test.
It's of course not perfect or ideal, but I kept hitting strange matchErrors and the like with the jasmine-jquery plugin, and it's simple enough (and fast) for me to paste in a couple of JSON blocks and get moving.
You also save yourself the time fiddling around with the configuration and making any changes to the files for Karma.
Anyone have any better suggestions or have any luck getting jasmine-jquery to work?

Testing with Karma and RequireJS with files in CommonJS syntax

I'm working on an angular application that is written in CommonJS syntax and uses a grunt task with the grunt-contrib-requirejs task to translate the source files to AMD format and compile it into one output file. My goal is to make Karma work with RequireJS and keep my source files and spec files in CommonJS syntax.
I've been able to get a simple test passing in AMD format with the following file structure:
-- karma-test
|-- spec
| `-- exampleSpec.js
|-- src
| `-- example.js
|-- karma.conf.js
`-- test-main.js
and the following files:
karma.conf.js
// base path, that will be used to resolve files and exclude
basePath = '';
// list of files / patterns to load in the browser
files = [
JASMINE,
JASMINE_ADAPTER,
REQUIRE,
REQUIRE_ADAPTER,
'test-main.js',
{pattern: 'src/*.js', included: false},
{pattern: 'spec/*.js', included: false}
];
// list of files to exclude
exclude = [];
// test results reporter to use
// possible values: 'dots', 'progress', 'junit'
reporters = ['progress'];
// web server port
port = 9876;
// cli runner port
runnerPort = 9100;
// enable / disable colors in the output (reporters and logs)
colors = true;
// level of logging
// possible values: LOG_DISABLE || LOG_ERROR || LOG_WARN || LOG_INFO || LOG_DEBUG
logLevel = LOG_DEBUG;
// enable / disable watching file and executing tests whenever any file changes
autoWatch = true;
// Start these browsers, currently available:
browsers = ['Chrome'];
// If browser does not capture in given timeout [ms], kill it
captureTimeout = 60000;
// Continuous Integration mode
// if true, it capture browsers, run tests and exit
singleRun = false;
example.js
define('example', function() {
  var message = "Hello!";
  return {
    message: message
  };
});
exampleSpec.js
define(['example'], function(example) {
  describe("Example", function() {
    it("should have a message equal to 'Hello!'", function() {
      expect(example.message).toBe('Hello!');
    });
  });
});
test-main.js
var tests = Object.keys(window.__karma__.files).filter(function (file) {
  return /Spec\.js$/.test(file);
});

requirejs.config({
  // Karma serves files from '/base'
  baseUrl: '/base/src',

  // Translate CommonJS to AMD
  cjsTranslate: true,

  // ask Require.js to load these files (all our tests)
  deps: tests,

  // start test run, once Require.js is done
  callback: window.__karma__.start
});
However, my goal is to write both the source file and the spec file in CommonJS syntax with the same results, like so:
example.js
var message = "Hello!";
module.exports = {
message: message
};
exampleSpec.js
var example = require('example');

describe("Example", function() {
  it("should have a message equal to 'Hello!'", function() {
    expect(example.message).toBe('Hello!');
  });
});
But despite having the cjsTranslate flag set to true, I just receive this error:
Uncaught Error: Module name "example" has not been loaded yet for context: _. Use require([])
http://requirejs.org/docs/errors.html#notloaded
at http://localhost:9876/adapter/lib/require.js?1371450058000:1746
Any ideas on how this can be accomplished?
Edit: I found this issue for the karma-runner repo: https://github.com/karma-runner/karma/issues/552 and there's a few comments that may help with this problem, but I haven't had any luck with them so far.
The solution I ended up finding involved using grunt and writing some custom grunt tasks. The process goes like this:
Create a grunt task that builds a bootstrap RequireJS file by finding all specs with a file pattern, looping through them, building out a traditional AMD-style require block, and writing it to a temporary file with code like this (a sketch of such a task follows the snippet):
require(['spec/example1_spec.js',
         'spec/example2_spec.js',
         'spec/example3_spec.js'
], function (a1, a2) {
  // this space intentionally left blank
}, "", true);
Create a RequireJS grunt task that compiles the above bootstrap file and outputs a single js file that will effectively include all source code, specs, and libraries.
requirejs: {
  tests: {
    options: {
      baseUrl: './test',
      paths: {}, // paths object for libraries
      shim: {},  // shim object for non-AMD libraries

      // I pulled in almond using npm
      name: '../node_modules/almond/almond.min',

      // This is the file we created above
      include: 'tmp/require-tests',

      // This is the output file that we will serve to karma
      out: 'test/tmp/tests.js',

      optimize: 'none',

      // This translates commonjs syntax to AMD require blocks
      cjsTranslate: true
    }
  }
}
Create a grunt task that manually starts a Karma server and serves the single compiled JS file that we now have for testing; a sketch of this step follows.
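As a sketch of that step (the task name is made up, and the API shown is the old require('karma').server.start from the 0.x releases; newer versions use new karma.Server(...) instead):

// inside your Gruntfile's module.exports = function (grunt) { ... }
grunt.registerTask('karma-run', function () {
  var done = this.async();
  require('karma').server.start({
    configFile: __dirname + '/karma.conf.js'
  }, function (exitCode) {
    done(exitCode === 0);
  });
});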
Additionally, I was able to ditch the REQUIRE_ADAPTER in the karma.conf.js file and then only include the single compiled js file instead of the patterns that matched all source code and specs, so it looks like this now:
// base path, that will be used to resolve files and exclude
basePath = '';
// list of files / patterns to load in the browser
files = [
JASMINE,
JASMINE_ADAPTER,
REQUIRE,
'tmp/tests.js'
];
// list of files to exclude
exclude = [];
// test results reporter to use
// possible values: 'dots', 'progress', 'junit'
reporters = ['progress'];
// web server port
port = 9876;
// cli runner port
runnerPort = 9100;
// enable / disable colors in the output (reporters and logs)
colors = true;
// level of logging
// possible values: LOG_DISABLE || LOG_ERROR || LOG_WARN || LOG_INFO || LOG_DEBUG
logLevel = LOG_INFO;
// enable / disable watching file and executing tests whenever any file changes
autoWatch = true;
// Start these browsers, currently available:
browsers = ['PhantomJS'];
// If browser does not capture in given timeout [ms], kill it
captureTimeout = 60000;
// Continuous Integration mode
// if true, it capture browsers, run tests and exit
singleRun = true;
In the grunt task configuration for the requirejs compilation, it was also necessary to use almond in order to start the test execution (test execution would hang without it). You can see this used in the requirejs grunt task config above.
There are a couple of things. First of all, I might have missed some details in your question (as it is super huge) - so sorry about that.
In short, you may want to check out the Backbone-Boilerplate wip branch testing organization: https://github.com/backbone-boilerplate/backbone-boilerplate/tree/wip
First: RequireJS does not support unwrapped raw CommonJS modules. cjsTranslate is an r.js (the build tool) option to convert CommonJS to AMD-compatible code during the build, so requiring a raw CJS module won't work. To resolve this issue, you can use a server to filter the scripts being sent and compile them to AMD format. On BBB, we pass files through a static server to compile them:
karma proxies setting: https://github.com/backbone-boilerplate/backbone-boilerplate/blob/wip/Gruntfile.js#L232-L234
Server setting: https://github.com/backbone-boilerplate/backbone-boilerplate/blob/wip/Gruntfile.js#L173-L179
Second: the Karma requirejs plugin isn't working super well - and it's somewhat easier to use RequireJS directly. On BBB, this is how we managed it: https://github.com/backbone-boilerplate/backbone-boilerplate/blob/wip/test/jasmine/test-runner.js#L16-L36
Hope this helps!
