Grunt uses PhantomJS to run headless QUnit tests in a very interesting way (correct me if I'm wrong, please). Since I just started experimenting with these tools, I don't fully understand them and don't know how to configure or extend them.
I managed to get everything working on my machine, but I would like to avoid relying on the $PATH system variable. Instead, I would like to provide the path to PhantomJS's executable file via a setting which I could easily change and port to other environments.
How can I achieve this?
I suppose there are many ways, and I think the qunit task from Grunt might have an easy answer. Ideally it would be just a matter of defining the path in the grunt.js file, something like this:
qunit: {
phantomjsPath: 'path/to/phantomjs',
files: ['test/**/*.html']
},
My environment is Mac OS X, but I accept solutions for any kind of environment, including Windows (my build server).
Thanks in advance.
UPDATE: The version of Grunt I am using is v0.3.17. The next big version, v0.4.x, has many changes, and some are not backwards compatible.
Well, I think you have finally migrated to Grunt 0.4, and probably you are using the grunt-contrib-qunit plugin for running QUnit tests under PhantomJS. Unfortunately, you'll encounter the same issue: it's not possible to supply the path to the phantomjs executable. That's because grunt-contrib-qunit/grunt-contrib-phantomjs use the phantomjs npm module, which downloads PhantomJS on installation and hard-codes the path to the executable in its JS code. If you're experiencing this issue, please check my blog post.
Unfortunately, grunt 0.3.x doesn't have a built-in option to specify a path to phantomjs -- it just executes phantomjs directly on the command line. Take a look at this helper function:
https://github.com/gruntjs/grunt/blob/master/tasks/qunit.js#L231
The situation seems to have changed in the yet-to-be-released grunt 0.4, however:
https://github.com/gruntjs/grunt-lib-phantomjs/blob/master/lib/phantomjs.js#L22
As you can see, the next version of grunt uses the phantomjs npm module, which "exports a path string that contains the path to the phantomjs binary/executable". Since the phantomjs npm module is installed locally by grunt, this should spare you from having to set the PATH variable or from installing a conflicting version of phantomjs.
Anyway, I'd consider taking a look at grunt-0.4 if you're willing to live on the bleeding edge.
Otherwise, you can always fork the qunit task and modify the grunt-qunit task to look at your custom configuration variable.
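If you go the forking route, the change could be as small as reading a custom config value and falling back to the bare command. A minimal sketch of that fallback logic follows; note that the phantomjsPath option name is hypothetical, not an existing Grunt setting:

```javascript
// Sketch of the lookup a forked qunit task could use. The "phantomjsPath"
// option name is an assumption, not a real Grunt 0.3 config key.
function resolvePhantomCmd(config) {
  // Use the configured absolute path if given, otherwise rely on $PATH.
  return (config && config.phantomjsPath) || 'phantomjs';
}
```

The forked task would then pass the resolved command to its child-process spawn call instead of the hard-coded 'phantomjs' string.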
Related
I am relatively new to Node.js and npm, and I have a somewhat naive question. I would like to know if there is a way to tell whether a package published on npm is tested, and, if there is a way, whether we could automate that process, and whether there is a tool or framework that tells me a package is tested. Also, does npm require developers to provide tests for their packages?
Thank you
NPM is just a package manager. As they say on their site,
It's a way to reuse code from other developers, and also a way to
share your code with them, and it makes it easy to manage the
different versions of code.
NPM does not require developers to provide a test for their packages.
It's best to use a package that has more stars and downloads, since those vouch for the package.
P.S.: Never forget that a developer can pull their code from npm at any time :)
There is no way to know absolutely for sure, but usually a good indicator is if the author/maintainer has a test script set in the module's package.json. npm does not require modules to have tests.
NPM doesn't require package developers to write tests for their code.
To understand if a specific package is tested, the best you can do is browse the source code of the package: does it have tests? Just unit tests or other types like integration tests and the like? Are these tests ready to run with straightforward commands? Do these tests offer good code coverage of the package? Do they actually test relevant cases?
To automate a process that tells you whether a package has been tested, that process would have to make multiple checks within the source code of the package, since there are multiple conventions for how to write, name, and structure tests in a Node.js codebase (not to mention the number of available testing frameworks). My concern with this approach is how complicated (if even possible) it would be to automatically determine whether a package is well tested, without actually having a human look at the tests.
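As a rough first pass at automating this, you can at least check whether a package's package.json declares a real test script; note that npm's default scaffold inserts a placeholder ("no test specified") that counts as no test. This is only a heuristic sketch, and the function name is mine:

```javascript
// Heuristic check: does this package.json declare a real test script?
// npm's default scaffold inserts a placeholder that just echoes an error,
// so we treat that as "no tests".
function hasTestScript(packageJsonText) {
  const pkg = JSON.parse(packageJsonText);
  const test = pkg.scripts && pkg.scripts.test;
  return Boolean(test) && !/no test specified/i.test(test);
}
```

A passing check only tells you a test command exists, not that the tests are good; the manual review described above is still needed for that.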
TLDR; My question is: is there a way to make browserify NOT override require with its own implementation, and instead have it use a different method name (e.g. browserifyRequire) for all of its own internal requiring. To find out why I need to do this, please read on...
The Scenario
I'm trying to write some automated tests using CasperJS and running them in SlimerJS -- as opposed to the default PhantomJS (although for all I know, I would run into the same following issues with PhantomJS).
I really want to figure out how to write these in CoffeeScript. As it turns out, CasperJS and SlimerJS don't handle CoffeeScript well these days. The docs' recommendation is to compile to JS prior to running casper. OK... not super convenient, but I can handle it. In fact, I'm also finding that the way require resolves paths in these tools is not as straightforward as in Node, so bundling ahead of running should help with that too.
But now I'm running into a new set of issues when trying to run the bundled code. I'm using Browserify for that.
The Problem
In my test code, I need to require('casper'). Standard practice in CasperJS world. So I had to tell browserify NOT to bundle CasperJS, by putting "browser": { "casper": false } in my package.json. No probs so far. But the problem comes next:
Browserify overrides the built-in require function, supplying its own implementation of require that does all the things that make browserify work. CasperJS is fine with it until it encounters the require('casper') directive. That's the one time CasperJS has to do the require'ing, not browserify. And that fails.
The Incomplete Solution
I'm quite sure that CasperJS just can't deal with the fact that Browserify overrides require, because CasperJS implements its own way of require'ing. To test that hypothesis, I edited the resulting bundle manually, renaming every occurrence of require to browserifyRequire -- including browserify's implementation of require. The only require I left unchanged was the call to require('casper'), because that's the one time I need CasperJS to handle the require'ing. And indeed, this made things work as expected.
The Question
Again, is there a way to make browserify use a different name for its own internal require? I suppose I can write a script to make this change after the bundling, but I'd much rather figure out how to do this via config.
An Alternate Question
Maybe instead of Browserify there's another solution for bundling and running CoffeeScript inside CasperJS? I haven't found one yet....
Found a reasonable solution — one that can be run as an npm script, e.g. npm run build-test-bundle, by adding the following to package.json:
"scripts": {
"build-test-bundle": "browserify -t coffeeify casper-coffee-test.coffee | derequire | sed 's/_dereq_..casper../require(\"casper\")/g' > casper-coffee-test.compiled.js"
},
This sequence of commands does the following:
browserify -t coffeeify casper-coffee-test.coffee builds the bundle
| derequire pipes the browserify output to derequire, an npm module that renames all occurrences of the require function to _dereq_
| sed 's/_dereq_..casper../require(\"casper\")/g' pipes the previous output to the sed command, which replaces all occurrences of _dereq_("casper") back with a normal require
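If you'd rather not depend on sed (e.g. on Windows), the final replacement step can be sketched in plain Node instead; the function name is mine, and it assumes derequire's default _dereq_ token:

```javascript
// After derequire has renamed every require to _dereq_, restore the one
// call that CasperJS itself must resolve at runtime.
function restoreCasperRequire(bundledSource) {
  return bundledSource.replace(/_dereq_\((['"])casper\1\)/g, "require('casper')");
}
```

All other _dereq_ calls are left untouched, so the rest of the bundle still uses browserify's internal module loader.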
I have a browser JavaScript app which uses Browserify, and Mocha tests which are run in PhantomJS and other browsers.
The tests use a test/tests.js file as an entry point where I require each file:
// ...
// Require test files here:
require('./framework/extendable.test');
require('./framework/creator.test');
require('./framework/container.test');
require('./framework/api_client.test');
// ...
This is very tedious and I would like to be able to require the entire folder.
I have tried using include-folder, which only loads the contents of each file (I don't want to eval for obvious reasons).
I have also looked at require-dir but Browserify does not seem to pick up on the require calls.
You can use Karma (https://github.com/karma-runner/karma) to run your Mocha tests in multiple browsers (PhantomJS, FF, IE, locally or remote via WebDriver, as you want).
Then you can use the karma-bro (https://github.com/Nikku/karma-bro) preprocessor. It will bundle your tests on the fly with browserify, only for testing purposes.
So you can just specify the folder, that contains your tests, in the Karma config.
That's the way I do it.
You could also write your own simple transform that simply replaces a specified folder name with the list of require calls, even with randomization in place if necessary. It's not that hard. I've written many transforms for myself to ease up on stuff like this.
I'm trying to find the best way to debug gulp plugins during development using WebStorm. I have an example project and a couple of gulp plugins, and I want to trace and inspect the code in WebStorm right after I run the gulp command in the terminal. Ideally, I want to add a debugger statement or a breakpoint inside WebStorm to trace the code execution.
Use this guide (shameless self promotion) to setup your configurations. Then debug should just work as is.
Also, you won't need to run gulp from the command line separately, since WebStorm will do that for you.
This is an old question and already has a very good answer;
here is another one, for VS Code:
debugging-gulp-in-VS-Code
I've noticed that in trying to get seemingly simple node packages to install with npm (e.g. nerve, a "micro-framework") I often run into some form of dependency pain. After some digging, I tracked down the problem with nerve to the bcrypt module, which is apparently written in C/C++ and has to be compiled after the package manager downloads it.
Unfortunately, it seems like if you want this to work on Windows, the answer is (from one of the bcrypt issues threads) "install a Linux VM". So earlier today I did just that, and started running into other dependencies (you need certain unnamed apt packages installed before you can even think about building, despite GCC being installed), then eventually after seeing yet another C compiler error (about some package or other not being able to find "Arrays.c" I think), I actually gave up, and switched from nerve to express instead. Ironically, the larger and more complicated express installs with npm on Linux and Windows without a single issue.
So, my question is: is there any filter / dependency tracking available that lets you see if a package has additional dependencies besides node core? Because to me the allure of node is "everything in Javascript", and this kind of stuff dispels the illusion quite unpleasantly. In fact, despite having done more than my time working with C/C++, whenever I see a requirement to "make" something these days I generally run in the other direction screaming. :)
The first solution doesn't tell you if a dependency makes the package impure or not. Much better to search for gyp generated output:
find node_modules/ | grep binding.gyp || echo pure
Look out for the "scripts" field in the package.json.
If it contains something like
"scripts": {
"install": "make build",
}
and a Makefile in the root directory, there's a good possibility that the package has some native module which would have to be compiled and built. Many packages include a Makefile only to compile tests.
This check on the package's files does not rule out the possibility that some dependency will have to be compiled and built. That would mean repeating this process for each dependency in package.json, their dependencies, and so on.
That said, many modules have been updated to install without building on Windows, express for one. However, that cannot be assured for all packages.
Using a Linux VM seems to be the best alternative. Developing Node.js applications on Window gives you step by step instructions on installing a VM, Node.js and Express.
Node is not "everything JavaScript", since one way to extend Node core is to write C/C++ addons.
So Node is more of a JavaScript wrapper around C/C++ modules, using V8.
How could you write efficient database drivers in pure JavaScript, for instance? It would be possible, but slow.
As for the filters, it is up to the author to document their package; there is no automatic filter.