I have a grunt configuration running a test suite whose structure, for now, I can't change.
At some point, I have an array of file paths and do something like:
files.forEach(function(testFile){
    grunt.task.run('shell:phantomjs:' + testFile);
});
and
grunt.initConfig({
    shell: {
        phantomjs: {
            command: function(testFile){
                return 'node_modules/phantomjs/bin/phantomjs ' + testFile;
            }
        }
    }
});
The problem with this approach is SPEED.
As I'm running PhantomJS for each file, the setup and teardown on each run make my tests take longer than 4 minutes.
I am looking for a way of calling PhantomJS with a glob path like tests/**/*.js, or even an array of the filenames, or something like that.
You have two problems that you need to deal with.
Passing test files
tests/**/*.js is a glob wildcard syntax for which there is a node.js module. The problem is that PhantomJS has a different runtime than node.js and the fs module has vastly different functions. Because of this, you cannot just require node-glob. You would have to port it to PhantomJS. There is a feature-request to include this right in PhantomJS.
If you have many test files and you want to pass them separately to the main PhantomJS script, you may run into a problem with the length of the command line call. Depending on the shell/OS, the buffer may hold only a limited number of characters for the command and test files.
It would be easiest to curate all test files as a list in a separate file and just pass this file into the main PhantomJS script, as sketched below. You could also simply keep the list of files in the main script.
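For example, a minimal sketch of reading such a list inside the main PhantomJS script; the file name test-files.txt (one path per line) is just an assumption:

// main.js -- run with: phantomjs main.js
var fs = require('fs'); // PhantomJS's fs module, not node's

// read the curated list, one test file path per line, skipping blank lines
var testfiles = fs.read('test-files.txt').split('\n').filter(function(line){
    return line.trim().length > 0;
});

console.log('Found ' + testfiles.length + ' test files');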
Running multiple scripts in series
Regardless of what you do, you will need to clear cookies and localStorage between tests. I'm not sure whether the cache can be cleared.
1. Adjusted test scripts
You may have to adjust your test scripts. When you call phantom.exit(), the whole process terminates. This cannot be prevented by overwriting exit like this:
phantom.exit = function(){};
because it is a native property. You would need to change your scripts into something like a module:
module.exports = function(done){
    // your script ...
    setTimeout(function(){
        // some more of your script
        // add clean up and queue the next script...
        page.close(); // free memory; might still not be enough
        done();       // call this instead of "phantom.exit();"
    }, 1000);
};
The main script would look like this:
var testfiles = ["..."];
var async = require("async"); // install async through npm

testfiles = testfiles.map(function(file){
    return require(file);
});

async.series(testfiles, function(err){
    console.log("ERROR", err);
    phantom.exit();
});
If you only use one page instance throughout, you can try to create it in the main file and pass it into every test file separately, as sketched below. That will likely fix your memory consumption, if that is a problem.
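A rough sketch of that variant, building on the main script above; the two-argument test signature (page, done) is an assumption:

// main.js -- create one shared page and hand it to every test module
var page = require('webpage').create();

testfiles = testfiles.map(function(file){
    var test = require(file); // each module would export function(page, done)
    return function(done){
        test(page, done); // async.series still receives a function(done)
    };
});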
2. Without changing the test files
There is also the possibility that you don't need to change your test files. You can read your test files into strings using fs.read and then use string operations/regex to exchange phantom.exit(); for done(); and add a line to close the page.
When you're done, you can just eval the string (asynchronously). Using eval is likely not a security problem, since you presumably wrote the test scripts yourself.
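A rough sketch of that approach; it assumes a shared page object is in scope, and runTest is just an illustrative name:

var fs = require('fs');

function runTest(file, done){
    var src = fs.read(file);
    // swap the exit call for the queue callback and close the page first
    src = src.replace(/phantom\.exit\(\s*\)\s*;?/g, 'page.close(); done();');
    eval(src); // the rewritten test now calls done() instead of terminating PhantomJS
}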
Related
I have two JS files with test methods, File1 and File2. File2.js should be executed only if File1.js has passed all of its checks. I am currently using JavaScript with the Mocha framework and WDIO.
Generally, dependencies between tests in this fashion aren't desirable for a few reasons:
File2 becomes more complex to debug in isolation due to the need to run File1 first.
You can't run your tests in parallel to speed up execution.
Testing File2 functionality is blocked if File1 fails.
In a hypothetical basic example let's say File1 tests user account creation and File2 tests posting a message.
One way to remove the dependency is to have a separate pre-seeded account (perhaps created via apis or database script) that File2 uses to test the posting feature. That means you can still test posting if the user account creation fails (and those can both run at once).
So if possible I'd advise thinking if there are ways you can avoid the dependency.
If there is no way around it, could you potentially use a before block in File2 that asserts things are in the state you expect? Otherwise it will fail the before hook and the File2 checks will therefore be skipped:
before(`Did File 1 Run Successfully`, () => {
expect(isMyFile1DataInPlace()).toBe(true)
})
I'm trying to learn a test-driven approach with MongoDB. The folder structure contains a user.js to test in the src folder:
const mongoose = require('mongoose');
mongoose.Promise = require('bluebird');
const Schema = mongoose.Schema;
const UserSchema = new Schema ({
name: String
});
const User = mongoose.model('user', UserSchema);
module.exports = User;
Content of test_helper.js
const mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/users_test');
mongoose.connection
    .once('open', () => {
        console.log('Connected to Mongo!');
    })
    .on('error', (error) => {
        console.warn('Warning', error);
    });
create_test.js content
const assert = require('assert');
const User = require('../src/user');
describe('Creating records', () => {
    it('Saves a user', (done) => {
        const user = new User({ name: 'Ankur' });
        user.save()
            .then(() => {
                assert(!user.isNew);
                done();
            });
    });
});
Now when I run npm test, the tests pass.
Connected to Mongo!
Creating records
√ Saves a user (779ms)
But my doubt is: how does Mocha know to run the test_helper.js file first, every time? (Renaming the file to anything else doesn't change the behavior.)
Also, I'm not using any root-level hook.
I know Mocha loads files recursively in each directory, starting with the root directory, and since everything here is in one directory it makes no difference here.
Can someone please suggest or explain how exactly Mocha knows that test_helper.js (or any file with the same content) should run first?
There is no default set order to how Mocha loads the test files.
When Mocha scans a directory to find the files in it, it uses fs.readdirSync. This call is a wrapper around readdir(3), which itself does not guarantee order. Now, due to an implementation quirk, the output of fs.readdir and fs.readdirSync is sorted on Linux (and probably POSIX systems in general) but not on Windows. Moreover, it is possible that the sorted behavior on Linux could eventually be removed, because the documentation says fs.readdir is just readdir(3) and the latter does not guarantee order. There's a good argument to be made that the behavior observed on Linux is a bug (see the issue I linked to above).
Note that there is a --sort option that will sort files after Mocha finds them. But this is off by default.
The behavior you observe is explained not merely by loading order but by execution order. Here is what happens:
Mocha loads the test files and executes them. So anything that is at the top level of your file executes right away. This means that the code in test_helper.js executes right away. Every call to describe immediately executes its callback. However, calls to it record the test for later execution. Mocha is discovering your tests while doing this but not executing them right away.
Once all files are executed, Mocha starts running the tests. By this time, the code in test_helper.js has already run and your test benefits from the connection it has created.
Major warning: connecting to a database is an asynchronous operation, and currently there is nothing that guarantees that the asynchronous operation in test_helper.js will have completed before the tests start. That it works fine right now is just luck.
If this were me, I'd either put the connection creation in a global asynchronous before hook (a global before hook appearing in any test file is executed before any test whatsoever, even tests that appear in other files), or I'd use --delay and explicitly call run() to start the suite after the connection is guaranteed to be made.
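For illustration, a minimal sketch of the hook variant; it could live in test_helper.js:

const mongoose = require('mongoose');

// root-level hook: Mocha waits for done() before running any test
before((done) => {
    mongoose.connect('mongodb://localhost/users_test');
    mongoose.connection
        .once('open', () => done())
        .on('error', (error) => done(error));
});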
It doesn't
Tests should not have a specific order.
All test suites should work standalone, agnostic of other suites. Inside a suite you can use before and beforeEach (or after, afterEach) in order to create setup and teardown steps.
But if the order of the tests matters, something is broken in the design.
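For instance, a minimal sketch of how such hooks sit inside a suite:

describe('Creating records', () => {
    before(() => {
        // runs once before the suite, e.g. connect to the test database
    });
    beforeEach(() => {
        // runs before every test, e.g. clear collections / load fixtures
    });
    afterEach(() => {
        // runs after every test, e.g. remove created documents
    });
    it('Saves a user', () => {
        // ...
    });
});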
There is a very easy way to load tests sequentially.
Step 1 : Set up a test script in package.json:
e.g.
"scripts": {
"test": "mocha ./tests.js"
}
Let us assume that tests.js is a file which defines the order to execute tests.
require("./test/general/test_login.js");
require("./test/Company/addCompany.js");
...
...
So here test_login will run first, and then the others one by one.
Step 2: Then run tests with:
$ npm test
Let's say I need CasperJS to report progress steps to a localhost server. I couldn't use casper.open to send a POST request because it would "switch" pages, so to speak, and would be unable to continue the other steps properly.
I sidestepped this issue by evaluating an XMLHttpRequest() inside the browser to ping localhost. Not ideal, but it works.
As the number of the scripts grow, I'd rather move this common functionality into a module, which is to say, I want to move a number of functions into a separate module.
It's my understanding that CasperJS doesn't work like node.js does, so the require() rules are different. How do I go about accomplishing this?
Since CasperJS is based on PhantomJS, you can use its module system, which is "modelled after CommonJS Modules 1.1".
You can require the module file by its path, full or relative.
var tools = require("./tools.js");
var tools = require("./lib/utils/tools.js");
var tools = require("/home/scraping/project/lib/utils/tools.js");
Or you can follow the node.js convention and create a subfolder node_modules/module_name in your project's folder, and place the module's code in an index.js file. It would then reside at this path:
./node_modules/tools/index.js
After that require it in CasperJS script:
var tools = require("tools");
The module would export its functions in this way:
function test(){
    console.log("This is test");
}

module.exports = {
    test: test
};
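The exported test() function from above could then be used in a CasperJS script like this (the URL is just a placeholder):

var casper = require("casper").create();
var tools = require("./tools.js");

casper.start("http://example.com/", function(){
    tools.test(); // prints "This is test"
});

casper.run();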
I'm completely new to sails, node and js in general so I might be missing something obvious.
I'm using sails 0.10.5 and node 0.10.33.
In the sails.js documentation there's a page about testing, http://sailsjs.org/#/documentation/concepts/Testing, but it doesn't tell me how to actually run the tests.
I've set up the directories according to that documentation, added a test called test/unit/controllers/RoomController.test.js and now I'd like it to run.
There's no 'sails test' command or anything similar. I also didn't find any hints on how to add a task so that tests are always run before a 'sails lift'.
UPDATE-2: After struggling a bit with how long it takes to run unit tests this way, I decided to create a module that loads the models and turns them into globals just as Sails does, but without taking so long. Even when you strip out every hook but the orm-loader it can, depending on the machine, easily take a couple of seconds WITHOUT ANY TESTS, and as you add models it gets slower. So I created a module called waterline-loader so you can load just the basics (it's about 10x faster). The module is not stable and needs tests, but you are welcome to use it, modify it to suit your needs, or help me improve it here -> https://github.com/Zaggen/waterline-loader
UPDATE-1:
I've added the info related to running your tests with mocha to the docs under the Running tests section.
Just to expand on what others have said (especially what Alberto Souza said).
You need two steps in order to make mocha work with Sails as you want. First, as stated in the Sails.js docs, you need to lift the server before running your tests. To do that, you create a file called bootstrap.test.js (it can be called anything you like) in the root path of your tests (test/bootstrap.test.js); it will be loaded first by mocha, which will then call your test files.
var Sails = require('sails'),
    sails;

before(function(done) {
    Sails.lift({
        // configuration for testing purposes
    }, function(err, server) {
        sails = server;
        if (err) return done(err);
        // here you can load fixtures, etc.
        done(err, sails);
    });
});

after(function(done) {
    // here you can clear fixtures, etc.
    sails.lower(done);
});
Now, in your package.json, under the scripts key, add this line (ignore the comments):
// package.json ....
"scripts": {
    // Some config
    "test": "mocha test/bootstrap.test.js test/**/*.test.js"
},
// More config
This will load the bootstrap.test.js file, lift your Sails server, and then run all your tests that follow the format 'testname.test.js'; you can change it to '.spec.js' if you prefer.
Now you can use npm test to run your tests.
Note that you could do the same thing without modifying your package.json by typing mocha test/bootstrap.test.js test/**/*.test.js on the command line.
PS: For a more detailed configuration of bootstrap.test.js, check Alberto Souza's answer or directly check this file in his GitHub repo.
See my test structure in we.js: https://github.com/wejs/we-example/tree/master/test
You can copy and paste it into your sails.js app and remove the we.js plugin feature in bootstrap.js.
And change your package.json to set the correct mocha command in npm test: https://github.com/wejs/we-example/blob/master/package.json#L10
-- edit --
I created a simple sails.js 0.10.x test example, see in: https://github.com/albertosouza/sails-test-example
Given that they don't give special instructions and that they use Mocha, I'd expect that running mocha from the command line while you are in the parent directory of test would work.
Sails uses mocha as a default testing framework.
But Sails does not handle test execution by itself.
So you have to run it manually using the mocha command.
But there is an article on how to include all the Sails stuff in your tests:
http://sailsjs.org/#/documentation/concepts/Testing
Let's say I have some tests that require jQuery. Well, we don't have to make believe; I actually have the tests. The tests themselves are not important, but the fact that they depend on jQuery is important.
Disclaimer: this is node.js, so you cannot depend on global variables in your solution. Any dependency must be brought into the file with require.
On the server we need this API (to mock the window object required by server-side jquery)
// somefile.js
var jsdom = require("jsdom").jsdom;
var window = jsdom().parentWindow();
var $ = require("jquery")(window);
// my tests that depend on $
// ...
On the client we need a slightly different API
// somefile.js
// jsdom is not required obviously
// window is not needed because we don't have to pass it to jquery explicitly
// assume `require` is available
// requiring jquery is different
var $ = require("jquery");
// my tests that depend on $
// ...
This is a huge problem!
The setup for each environment is different, but duplicating each test just to change the setup is completely stupid.
I feel like I'm overlooking something simple.
How can I write a single test file that requires jQuery and run it in multiple environments?
in the terminal via npm test
in the browser
Additional information
This information shouldn't be necessary to solve the fundamental problem here; a general solution is acceptable. However, the tools I'm using might have components that make it easier to solve.
I'm using mocha for my tests
I'm using webpack
I'm not married to jsdom; if there's something better, let's use it!
I haven't used phantomjs, but if it makes my life easier, let's do it!
Additional thoughts:
Is this jQuery's fault for not adhering to an actual UMD? Why would there be different APIs available depending on which environment required it?
I'm using karma to run my unit tests from the command line directly (in CI too, with gulp).
Karma uses phantomjs to run the tests inside a headless browser; you can configure it to run in real browsers too.
Example of karma configuration inside of gulp:
// Run karma tests
var gulp = require("gulp");
var karma = require("karma");
var path = require("path");

gulp.task("unit", function (done) {
    var parseConfig = require("karma/lib/config").parseConfig,
        server = karma.server,
        karmaConfig = path.resolve("karma.conf.js"),
        config = parseConfig(karmaConfig, {
            singleRun: true,
            client: {
                specRegexp: ".spec.js$"
            }
        });
    server.start(config, done);
});
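For reference, a minimal karma.conf.js sketch that such a gulp task might point at; the frameworks, launcher, and file patterns are assumptions (they would need karma-mocha, karma-phantomjs-launcher, and karma-webpack installed):

// karma.conf.js -- minimal sketch, adjust paths and plugins to your project
module.exports = function(config) {
    config.set({
        frameworks: ["mocha"],
        browsers: ["PhantomJS"],
        files: ["test/**/*.spec.js"],
        preprocessors: {
            // bundle the tests so require() works in the browser
            "test/**/*.spec.js": ["webpack"]
        },
        webpack: {}, // your webpack config, or require("./webpack.config.js")
        singleRun: true
    });
};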
In the case of my tests it takes approx. 10 seconds to run 750 tests, so it's quite fast.