Let's say I have some tests that require jQuery. Well, we don't have to make believe; I actually have the tests. The tests themselves are not important, but the fact that they depend on jQuery is important.
Disclaimer: this is node.js so you cannot depend on global variables in your solution. Any dependency must be called into the file with require.
On the server we need this API (to mock the window object required by server-side jquery)
// somefile.js
var jsdom = require("jsdom").jsdom;
var window = jsdom().parentWindow; // parentWindow is a property, not a method
var $ = require("jquery")(window);
// my tests that depend on $
// ...
On the client we need a slightly different API
// somefile.js
// jsdom is not required obviously
// window is not needed because we don't have to pass it to jquery explicitly
// assume `require` is available
// requiring jquery is different
var $ = require("jquery");
// my tests that depend on $
// ...
This is a huge problem!
The setup for each environment is different, but duplicating every test just to change the setup is completely stupid.
I feel like I'm overlooking something simple.
How can I write a single test file that requires jQuery and run it in multiple environments?
in the terminal via npm test
in the browser
Additional information
This information shouldn't be necessary to solve the fundamental problem; a general solution is acceptable. However, the tools I'm using might have features that make it easier to solve.
I'm using mocha for my tests
I'm using webpack
I'm not married to jsdom; if there's something better, let's use it!
I haven't used phantomjs, but if it makes my life easier, let's do it!
Additional thoughts:
Is this jQuery's fault for not adhering to a proper UMD pattern? Why should different APIs be exposed depending on which environment required it?
I'm using karma to run my unit tests from the command line directly (CI too, with gulp).
Karma uses phantomjs to run the tests inside of a headless browser, you can configure it to run in real browsers too.
Example of karma configuration inside of gulp:
// Run karma tests
var karma = require("karma");
var path = require("path");

gulp.task("unit", function (done) {
    var parseConfig = require("karma/lib/config").parseConfig,
        server = karma.server,
        karmaConfig = path.resolve("karma.conf.js"),
        config = parseConfig(karmaConfig, {
            singleRun: true,
            client: {
                specRegexp: ".spec.js$"
            }
        });
    server.start(config, done);
});
In case of my tests it takes approx. 10 seconds to run 750 tests, so it's quite fast.
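For reference, a minimal karma.conf.js to pair with a gulp task like the one above might look like this (the frameworks, file patterns, and the webpack preprocessor are assumptions; adjust to your project):

```javascript
// karma.conf.js -- minimal sketch, not a complete configuration
module.exports = function (config) {
  config.set({
    frameworks: ["mocha"],
    files: ["test/**/*.spec.js"],
    // bundle each spec with webpack so `require` works in the browser
    preprocessors: { "test/**/*.spec.js": ["webpack"] },
    browsers: ["PhantomJS"],
    singleRun: true
  });
};
```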
Related
I am just re-looking at Jest as it's been getting a lot of good reports. However, I'm struggling to find a good way to access internal functions to test them.
So if I have:
const Add2 = (n) => n + 2;
export default (list) => {
    return list.map(Add2);
};
Then if I was using Jasmine or Mocha I'd use rewire or babel-plugin-rewire to get the internal Add2 function like this:
var rewire = require('rewire');
var Add2 = rewire('./Adder').__get__('Add2');
it('Should add 2 to number', () => {
    let val = Add2(1);
    expect(val).toEqual(3);
});
However, neither of them seems to work with Jest, and while there looks to be an excellent mocking syntax, I can't see any way to get at internal functions.
Is there a good way to do this? Is there something I'm missing in the Jest API or setup?
You can actually achieve this if you are willing to use babel to transform your files before each test. Here's what you need to do (I'll assume you know how to get babel itself up and running, if not there are multiple tutorials available for that):
First, we need to install the babel-jest plugin for jest, and babel-plugin-rewire for babel:
npm install --save-dev babel-jest babel-plugin-rewire
Then you need to add a .babelrc file to your root directory. It should look something like this:
{
    "plugins": ["rewire"]
}
And that should be it (assuming you have babel set up correctly). babel-jest will automatically pick up the .babelrc, so no additional config needed there unless you have other transforms in place already.
Babel will transform all the files before jest runs them, and speedskater's rewire plugin will take care of exposing the internals of your modules via the rewire API.
I was struggling with this problem for some time, and I don't think the problem is specific to Jest.
I know it's not ideal, but in many situations, I actually just decided to export the internal function, just for testing purposes:
export const Add2 = x => x + 2;
Previously, I would hate the idea of changing my code just to make testing possible or easier. That was until I learned that this is an important practice in hardware design: designers add test points to the hardware they're building just so they can verify that it works properly. They change their design to facilitate testing.
Yes, you could totally do this with something like rewire. In my opinion, the additional complexity (and with it, mental overhead) that you introduce with such tools is not worth the payoff of having "more correct" code.
It's a trade-off, I value testing and simplicity, so for me, exporting private functions for testing purposes is fine.
This is not possible with Jest. Also, you should not test the internals of a module, only its public API, because that is what other modules consume. They don't care how Add2 is implemented as long as yourModule([1,2,3]) returns [3,4,5].
I'm completely new to sails, node and js in general so I might be missing something obvious.
I'm using sails 0.10.5 and node 0.10.33.
In the sails.js documentation there's a page about tests http://sailsjs.org/#/documentation/concepts/Testing, but it doesn't tell me how to actually run them.
I've set up the directories according to that documentation, added a test called test/unit/controllers/RoomController.test.js and now I'd like it to run.
There's no 'sails test' command or anything similar. I also didn't find any signs on how to add a task so tests are always run before a 'sails lift'.
UPDATE-2: After struggling with how long it takes to run unit tests this way, I decided to create a module that loads the models and turns them into globals just as sails does, but without taking so long. Even when you strip out every hook but the orm-loader, depending on the machine it can easily take a couple of seconds WITHOUT ANY TESTS, and it gets slower as you add models. So I created a module called waterline-loader that lets you load just the basics (it's about 10x faster). The module is not stable and needs tests, but you are welcome to use it, modify it to suit your needs, or help me improve it here -> https://github.com/Zaggen/waterline-loader
UPDATE-1:
I've added the info related to running your tests with mocha to the docs under Running tests section.
Just to expand on what others have said (especially what Alberto Souza said).
You need two steps to make mocha work with sails as you want. First, as stated in the sails.js docs, you need to lift the server before running your tests. To do that, create a file called bootstrap.test.js (it can be named anything you like) in the root path of your tests (test/bootstrap.test.js); mocha will call it first, and it will then call your test files.
var Sails = require('sails'),
    sails;

before(function (done) {
    Sails.lift({
        // configuration for testing purposes
    }, function (err, server) {
        sails = server;
        if (err) return done(err);
        // here you can load fixtures, etc.
        done(err, sails);
    });
});

after(function (done) {
    // here you can clear fixtures, etc.
    sails.lower(done);
});
Now, in your package.json, under the scripts key, add this line (ignore the comments):
// package.json ....
"scripts": {
    // Some config
    "test": "mocha test/bootstrap.test.js test/**/*.test.js"
},
// More config
This will load the bootstrap.test.js file, lift your sails server, and then run all of your tests that match the 'testname.test.js' format (you can change it to '.spec.js' if you prefer).
Now you can use npm test to run your tests.
Note that you could do the same thing without modifying your package.json by typing mocha test/bootstrap.test.js test/**/*.test.js on your command line.
PS: For a more detailed configuration of bootstrap.test.js, check Alberto Souza's answer or look at this file directly in his GitHub repo.
See my test structure in we.js: https://github.com/wejs/we-example/tree/master/test
You can copy and paste it into your sails.js app and remove the we.js plugin feature in bootstrap.js.
Then change your package.json to set the correct mocha command in npm test: https://github.com/wejs/we-example/blob/master/package.json#L10
-- edit --
I created a simple sails.js 0.10.x test example, see in: https://github.com/albertosouza/sails-test-example
Given that they don't give special instructions and that they use Mocha, I'd expect that running mocha from the command line while you are in the parent directory of test would work.
Sails uses mocha as its default testing framework.
But Sails does not handle test execution by itself.
So you have to run it manually using the mocha command.
But there is an article on how to make all the Sails stuff available inside tests:
http://sailsjs.org/#/documentation/concepts/Testing
I have a grunt configuration running a test suite whose structure, for now, I can't change.
At some point, I have an array of file paths and do something like:
files.forEach(function (testFile) {
    grunt.task.run('shell:phantomjs:' + testFile);
});
and
grunt.initConfig({
    shell: {
        phantomjs: {
            command: function (testFile) {
                return 'node_modules/phantomjs/bin/phantomjs ' + testFile;
            }
        }
    }
});
The problem with this approach is SPEED.
As I'm running phantomjs once per file, the setup and teardown of the server on each run makes my tests take longer than 4 minutes.
I am looking for a way of calling phantomjs with a glob path like tests/**/*.js, or with an array of the filenames, or something like that.
You have two problems to deal with.
Passing test files
tests/**/*.js is glob wildcard syntax, for which there is a node.js module. The problem is that PhantomJS has a different runtime than node.js, and its fs module has vastly different functions. Because of this, you cannot just require node-glob; you would have to port it to PhantomJS. There is a feature request to include this right in PhantomJS.
If you have many test files and you want to pass them separately to the main PhantomJS script, you may run into a problem with the length of the command line call. Depending on the shell/OS, the buffer may hold only a limited number of characters for the command and test files.
It would be easiest to curate all test files as a list in a separate file and pass that file into the main PhantomJS script. You could also just keep the list of files in the main script.
Running multiple scripts in series
Regardless of what you do, you will need to clear cookies and localStorage between tests. I'm not sure whether the cache can be cleared.
1. Adjusting your test scripts
You may have to adjust your test scripts. When you call phantom.exit(), the whole process terminates. This cannot be prevented by overwriting exit like this:
phantom.exit = function(){};
because it is a native property. You would need to change your scripts into something like a module:
module.exports = function (done) {
    // your script ...
    setTimeout(function () {
        // some more of your script
        // add clean up and queue next script...
        page.close(); // clean up memory consumption; might still be not enough
        done(); // call instead of "phantom.exit();"
    }, 1000);
};
The main script would look like this:
var testfiles = ["..."];
var async = require("async"); // install async through npm

testfiles = testfiles.map(function (file) {
    return require(file);
});

async.series(testfiles, function (err) {
    console.log("ERROR", err);
    phantom.exit();
});
If you use only one page instance throughout, you can try creating it in the main file and passing it to every test file separately. That will likely fix your memory consumption, if that is a problem.
2. Without changing your test files
There is also the possibility that you don't need to change your test files. You can read your test files into strings using fs.read, then use string operations/regexes to exchange phantom.exit(); for done(); and to add a line that closes the page.
When you're done, you can just eval the string (asynchronously). Using eval is likely not a security problem, since you presumably wrote the test scripts yourself.
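The string rewrite at the heart of that approach can be sketched as a pure function (the regex and names are my own illustration, not a fixed recipe):

```javascript
// Rewrites a PhantomJS test source string so that phantom.exit() defers to a
// done() callback, as described above (sketch; regex intentionally simple).
function rewriteForQueue(src) {
  return src.replace(/phantom\.exit\(\s*\)\s*;?/g, "done();");
}

// In PhantomJS you would then read the file with fs.read(file) and run the
// rewritten source with: new Function("done", rewriteForQueue(src))(done);
```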
I want to publish a module to several package managers (npm, bower, etc.), plus I want to create downloadable builds as well: for example, one in AMD style for requirejs, one in CommonJS style, one for the global namespace in the browser, plus a minified version of each, and so on. That's more than 10 builds.
I currently have a single AMD build, and I wrote unit tests for it using karma, jasmine, and requirejs in an AMD style. What do you suggest: how should I generate the other builds and the tests for them?
I can't decide what to use as the base of the transformations. There is a part common to every output package, and a package-dependent part as well.
AMD - requirejs (I am not sure about using the config options)
define(["module", "dependency"], function (module, dependency) {
    var m = {
        config: function (options) {
            //...
        },
        //...
        //do something with the dependency
    };
    m.config(module.config()); //load config options set by require.config()
    return m;
});
commonJS
var dependency = require("dependency");

module.exports = {
    config: function (options) {
        //...
    },
    //...
    //do something with the dependency
};
global
var m = (function (dependency) {
    return {
        config: function (options) {
            //...
        },
        //...
        //do something with the dependency
    };
})(dependency);
I don't know: should I develop the common code and build before every test, or should I develop one of the packages, test it, and write a transformation from it into the other builds?
I intend to use gulp for creating the builds and to run the unit tests automatically for each of them before automatically publishing. Oh, and of course I need automatic version numbering as well. By the way, is it necessary to run unit tests after the build procedure? What do you think? I just want to be sure that buggy code is never published...
There are transformation libraries for gulp:
https://github.com/phated/gulp-wrap-amd (commonjs to amd)
https://github.com/gfranko/amdclean (amd to standard js)
https://github.com/phated/gulp-wrap-umd (commonjs to umd I guess)
https://github.com/adamayres/gulp-wrap (not sure of its capabilities yet, maybe universal with custom templates, maybe nothing)
So it is easy to transform one package format into the two other formats. This can be done with the tests as well as the code. The CommonJS build can be tested with node and jasmine-node, the standard js build with karma and jasmine, and the AMD build with karma, requirejs, and jasmine.
It would be a poor choice to create a common descriptor and convert it before every test. I don't want to create another language, like coffeescript, etc., so conversion between package formats is okay.
Running unit tests before publishing is okay. There are no other common package types, so these 3 packages will be enough for every component manager I will use.
I am unsure about the versioning. It is not hard to set it manually, but maybe a system like travis or maven could help more... I'll figure out and edit this answer.
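For the version bumping part, npm's built-in command may already be enough; a sketch of the flow (the commit message format and the decision to tag are my own choices):

```shell
# bump the patch version in package.json, commit, and tag it
npm version patch -m "release %s"

# push the commit and the tag, then publish to the registry
git push && git push --tags
npm publish
```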
I'm creating a file that requires huge libraries such as jquery and three.js using browserify. The compiling process takes several seconds, probably because it's recompiling all the libs for each minor change I make. Is there a way to speed it up?
Have you tried using the --insert-globals, --ig, or --fast flags? (They're all the same thing.)
The reason it's slow may be that browserify is scanning all of jquery and d3 for __dirname, __filename, process, and global references.
EDIT:
I just remembered: Browserify will take any pre-existing require functions and fall back to using them. More info here.
This means you could build a bundle for your static libs, and then only rebuild the bundle for your app code on change.
This coupled with my pre-edit answer should make it a lot faster.
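That two-bundle split might look like this on the command line (bundle names are hypothetical; -r requires a module into a bundle, -x marks it as external):

```shell
# vendor bundle: built rarely, contains the heavy static libs
browserify -r jquery -r three > vendor-bundle.js

# app bundle: rebuilt on every change, with the libs marked external
browserify app.js -x jquery -x three > app-bundle.js
```

The page then loads vendor-bundle.js before app-bundle.js.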
There are a few options that can help:
--noparse=FILE is a must for things like jQuery and three.js that are huge but don't use require at all.
--detect-globals: set this to false if your module doesn't use any node.js globals. It directs browserify not to parse files looking for process, global, __filename, and __dirname.
--insert-globals: set this to true if your module does use node.js globals. It defines those globals without parsing the module to check whether they're used.
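Combined, an invocation using those flags might look like this (file names are hypothetical):

```shell
# skip parsing the big libs entirely, and define node globals without scanning
browserify app.js \
  --noparse=jquery --noparse=three \
  --insert-globals \
  -o bundle.js
```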
I was able to speed up my build by externalizing ThreeJS, using noparse with it, and setting it not to create a source map for it.
Use https://github.com/substack/watchify while developing.
If you use grunt, you can use my grunt task : https://github.com/amiorin/grunt-watchify
It caches the dependencies and watches the filesystem. Because of this the build is very fast. You can use it with grunt-contrib-watch and grunt-contrib-connect or alone. You can find a Gruntfile.js example in the github repository.
If you don't use grunt, you can use the original watchify from #substack : https://github.com/substack/watchify
Using watchify is practically a must, as it actually caches your deps between reloads.
My builds dropped from 3-8s to under 1s. (The >3s builds were still using ignoreGlobals, detectGlobals=false, and even noParseing jQuery).
Here's how I use it with gulp and coffeescript:
gulp       = require("gulp")
gutil      = require("gulp-util")
source     = require("vinyl-source-stream")
watchify   = require("watchify")
browserify = require("browserify")
coffeeify  = require("coffeeify")
livereload = require("gulp-livereload")

gulp.task "watchify", ->
    args = watchify.args
    args.extensions = ['.coffee']
    bundler = watchify(browserify("./coffee/app.coffee", args), args)
    bundler.transform(coffeeify)

    rebundle = ->
        gutil.log gutil.colors.green 'rebundling...'
        bundler.bundle()
            # log errors if they happen
            .on "error", gutil.log.bind(gutil, "Browserify Error")
            # vinyl-source-stream gives the bundle a filename gulp can work with
            .pipe source("app.js")
            .pipe gulp.dest("js")
            .pipe livereload()
        gutil.log gutil.colors.green 'rebundled.'

    bundler.on "update", rebundle
    rebundle()

gulp.task "default", ["watchify", "serve"]
EDIT: here's a JS translation:
var gulp       = require("gulp")
var gutil      = require("gulp-util")
var source     = require("vinyl-source-stream")
var watchify   = require("watchify")
var browserify = require("browserify")
var coffeeify  = require("coffeeify")
var livereload = require("gulp-livereload")

gulp.task("watchify", function () {
    var args = watchify.args
    args.extensions = ['.coffee']
    var bundler = watchify(browserify("./coffee/app.coffee", args), args)
    bundler.transform(coffeeify)

    function rebundle() {
        gutil.log(gutil.colors.green('rebundling...'))
        bundler.bundle()
            // log errors if they happen
            .on("error", gutil.log.bind(gutil, "Browserify Error"))
            // vinyl-source-stream gives the bundle a filename gulp can work with
            .pipe(source("app.js"))
            .pipe(gulp.dest("js"))
            .pipe(livereload())
        gutil.log(gutil.colors.green('rebundled.'))
    }

    bundler.on("update", rebundle)
    rebundle()
})

gulp.task("default", ["watchify", "serve"])
Update
You can also give it a try to persistify which can be used as a drop in replacement for watchify from the command line and from code.
Original answer below
=======
I'm currently using bundly: https://www.npmjs.com/package/bundly
FULL DISCLOSURE: I wrote it.
The main difference of this wrapper is that it provides incremental building. It persists the browserify cache between runs and only parses the files that have changed, without the need for watch mode.
Currently the module does a bit more than just adding the cache, but I'm thinking that the logic handling the incremental build could be moved to a plugin, so that it can be used with browserify directly.
Check a demo here: https://github.com/royriojas/bundly-usage-demo
I wrote this to solve the problem of slow builds with browserify and commonjs-everywhere. If you run it in "watch" mode, it will automatically watch your input files and incrementally rebuild just the files that changed. It is basically instantaneous and will never get slower as your project grows.
https://github.com/krisnye/browser-build