I am trying to write test-driven JavaScript. I know that testing each function is crucial, but I've hit a stumbling block: the plugin I am writing needs some private functions, and I cannot peek into how they behave. What should I do to keep my code well tested without changing its structure too much? (I am OK with exposing some API, within limits.)
I am using sinon, QUnit, and Pavlov.
If you are doing test-driven development (as the tags suggest), each line of production code is first justified by a failing test case.
In other words, every line of your production code is implicitly tested, because without it some test would fail. That means you can safely assume that any private function/lambda/closure is already tested, by the very definition of TDD.
If you have a private function and you are wondering how to test it, it means you weren't doing TDD in the first place - and now you have a problem.
To sum up - never write production code before the test. If you follow this rule, every line of code is tested, no matter how deep it is.
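For instance, with QUnit (which the question mentions), a test written first against the public API drives out the private helper and covers it at the same time. The plugin, helper, and names below are invented purely for illustration:

// plugin.js -- a hypothetical plugin with a private helper
var plugin = (function () {
  // private: never exposed, but every line of it exists only because
  // a test of the public API demanded it
  function normalize(value) {
    return String(value).trim().toLowerCase();
  }

  return {
    // public API exercised directly by the tests
    matches: function (a, b) {
      return normalize(a) === normalize(b);
    }
  };
}());

// plugin.test.js -- the test that was written first
QUnit.test('matches ignores case and surrounding whitespace', function (assert) {
  assert.ok(plugin.matches('  Foo ', 'foo'));
  assert.notOk(plugin.matches('foo', 'bar'));
});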
Related
I have a new university assignment involving lots of people collaborating. We want to use continuous integration (we're thinking of CircleCI) and a TDD approach.
My biggest question is: how do you correctly use TDD? I might have the wrong idea, but from what I understand you write all your tests first and watch them fail, because you don't have any code yet. But how can I write all my tests if I don't even know yet all the units I will have/need?
In this case, since we're using CircleCI and assuming it won't let me merge code that doesn't pass the tests, how can this work? There will be tests written but no code for them yet.
Am I wrong, and you actually write the tests as you go along developing the features?
This is a subject I'm having a hard time grasping, but I would love to understand it properly, as I believe it will really help in the future.
My biggest question is: how do you correctly use TDD? I might have the wrong idea, but from what I understand you write all your tests first and watch them fail, because you don't have any code yet. But how can I write all my tests if I don't even know yet all the units I will have/need?
Not quite the right idea.
You might start by thinking about the problem, and creating a checklist of tests that you expect to implement before you are done.
But the actual implementation cycle is incremental. We work on one test at a time, starting from the first. We make that test pass, and clean up all of the code, before we introduce a second test.
The idea here is that we'll be learning as we go -- we may think of some more tests, which get added to the checklist, or we may decide the tests we thought would be important aren't after all, so they get crossed off the checklist.
At any given point in time, we expect that either (a) all of the implemented tests are passing, or (b) exactly one implemented test is failing, and it is the one we are currently working on. Any time we discover some other condition holds, then we back up, reverting to some previously well understood state, and then proceed forwards again.
We don't normally push/publish/share code when it has broken tests. Instead, the test and a working implementation are shared together. We don't share the broken intermediate stages, or known mistakes; instead, we share progress.
A review of the slides in the Bowling Game Kata may help to clarify what the rhythm of the work looks like.
It is completely normal to feel like the first test is hard -- you are writing a test against code that doesn't exist yet. We tend to employ imagination here: suppose the production code you need already exists. How would you invoke it? What data would you pass to it? What data would you get back? You write the test as though the perfect interface for what you want to do already exists. Then you create production code that matches that interface; then you give that production code the correct behavior; then you give it a design that will make the code easy to change later.
And when you are happy with all of that, you introduce the second test, which usually looks like the first test with slightly different data, and a different expected result. So the second test fails, and then you go to the easy-to-change code you wrote before, and adapt it so that the second test also passes. And then you again clean up the design so that the code is easily changed.
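As a minimal sketch of that rhythm, using Jasmine (any framework works) and an invented score() function:

// Iteration 1 -- the first test, written while score() does not exist yet:
describe('score', function () {
  it('returns 0 for an empty list of rolls', function () {
    expect(score([])).toBe(0);
  });
});

// ...and the simplest production code that makes it pass:
function score(rolls) {
  return 0;
}

// Iteration 2 -- the second test keeps the same shape with different data:
describe('score', function () {
  it('sums the rolls of an open game', function () {
    expect(score([1, 2, 3])).toBe(6);
  });
});

// ...which forces the production code to grow, while the first test keeps passing:
function score(rolls) {
  return rolls.reduce(function (total, roll) {
    return total + roll;
  }, 0);
}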
And so it goes, until you reach the end of your checklist.
One of the developers on our team (me) keeps accidentally checking in fdescribe and fit in Jasmine tests, which sometimes masks broken tests.
Is there an easy way to either fail a build (TFS) if fdescribe is used, or (better?) configure Jasmine server-side to treat fdescribe (and fit) as regular describe (and it)?
I would rather use those than fall back to the ?spec approach.
I also would like to take this opportunity to apologize for doing that.
We've approached it with static code analysis and eslint.
There is a specific plugin: eslint-plugin-jasmine, see:
Disallow use of focused tests (no-focused-tests)
Sample output:
test/e2e/specs/test.spec.js
5:0 error Unexpected fdescribe jasmine/no-focused-tests
Currently, the plugin also checks for disabled tests and duplicate suite names.
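Wiring the plugin into a build is an ordinary ESLint configuration; a minimal sketch (the file name and severity are up to you):

// .eslintrc.js -- sketch; the rule name comes from eslint-plugin-jasmine
module.exports = {
  env: { jasmine: true },
  plugins: ['jasmine'],
  rules: {
    'jasmine/no-focused-tests': 2   // report fdescribe / fit as errors
  }
};

Running ESLint as part of the TFS build then turns any focused test into a build failure.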
If you actually want to make fit and fdescribe behave like it and describe respectively, you can override them in a separate file loaded before your actual specs (how to load it depends on your test runner):
// assuming Jasmine is already loaded and has published its public API
fit = it;
fdescribe = describe;
But I would not suggest using this approach, because if you ever really do want to focus a spec, you'd have to comment these overrides out or exclude the file from the runner. I would rather add another step to the build process, if you have one, or a pre-commit hook that runs some tool -- like the one @alecxe has suggested -- analyzing the code for existing fit and fdescribe and failing the build / rejecting the commit.
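If you don't want to pull in ESLint for this, the same check is easy to script in Node and run as a build step or pre-commit hook. A rough sketch -- the test directory and the *.spec.js naming are assumptions about your layout:

// check-focused.js -- exit non-zero if any spec file contains fit( or fdescribe(
var fs = require('fs');
var path = require('path');

function specFiles(dir) {
  return fs.readdirSync(dir).reduce(function (files, name) {
    var full = path.join(dir, name);
    if (fs.statSync(full).isDirectory()) {
      return files.concat(specFiles(full));
    }
    if (/\.spec\.js$/.test(name)) {
      files.push(full);
    }
    return files;
  }, []);
}

var offenders = specFiles('test').filter(function (file) {
  return /\b(fit|fdescribe)\s*\(/.test(fs.readFileSync(file, 'utf8'));
});

if (offenders.length > 0) {
  console.error('Focused tests found in:\n  ' + offenders.join('\n  '));
  process.exit(1);   // fails the build / rejects the commit
}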
Recently I started unit testing a JavaScript app I'm working on. Whether I use Jasmine, QUnit, or something else, I always write a set of tests. Now, say I have source code like this:
function calc()
{
// some code
someOtherFunction();
// more code
}
I also have a test (whatever the framework -- Jasmine spies, sinon.js, or something similar) that confirms someOtherFunction() is called when calc() is executed. The test passes. Now at some point I refactor the calc function so that the someOtherFunction() call no longer exists, e.g.:
function calc()
{
// some code
someVariable++;
// more code
}
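For concreteness, the kind of spy-based test described above -- the one asserting that someOtherFunction() gets called -- might look like this (a sketch using sinon and QUnit; it assumes someOtherFunction lives on the global object):

QUnit.test('calc calls someOtherFunction', function (assert) {
  // wrap the real global function in a spy
  var spy = sinon.spy(window, 'someOtherFunction');

  calc();

  assert.ok(spy.calledOnce, 'someOtherFunction was called exactly once');
  spy.restore();   // put the original function back
});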
The previous test will fail, yet the function will still work as expected; only its code is different.
Now, I'm not sure I understand correctly how testing is done. It seems obvious that I will have to go back and rewrite the test, but if this happens, is there something wrong with my approach? Is it bad practice? If so, at which point did I go wrong?
The general rule is you don't test implementation details. So given you decided that it was okay to remove the call, the method was an implementation detail and therefore you should not have tested that it was called.
20/20 hindsight is a great thing, isn't it?
In general I wouldn't test that a 'public' method called a 'private' one. Testing delegation should be reserved for when one class calls another.
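When delegation between objects is genuinely the behaviour you care about, making the collaborator an injected dependency keeps such a test honest. The names here are invented for illustration:

// The collaborator is passed in, so a test can substitute a spy for it.
function ReportService(mailer) {
  this.mailer = mailer;
}

ReportService.prototype.send = function (report) {
  this.mailer.deliver(report);   // the delegation we actually want to verify
};

QUnit.test('ReportService delegates delivery to its mailer', function (assert) {
  var mailer = { deliver: sinon.spy() };

  new ReportService(mailer).send('weekly');

  assert.ok(mailer.deliver.calledWith('weekly'));
});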
You have written a great unit test.
The unit tests should notice sneaky side-effects when you change the implementation.
And the test must fail when the expected values don't show up.
So look at the unit test where the expected values don't match and decide what the problem was:
1) The unit test tested things it shouldn't (change the test)
2) The code is broken (add the missing side-effect)
It's fine to rewrite this test. A lot of tests fail to be perfect on the first pass. The most common test smell is tight coupling to implementation details.
Your unit test should verify the behavior of the object, not how it achieved the result. If you're strict about doing this in a TDD style, maybe you should revert the code you changed and refactor the test first. But regardless of what technique you use, it's fine to change the test as long as you're decoupling it from the details of the system under test.
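Applied to the question's example, a behaviour-focused version asserts on what calc() produces rather than on which helper it called, so it survives the refactoring. A sketch -- the return value is invented, since the original snippet doesn't show one:

QUnit.test('calc produces the expected result', function (assert) {
  // Assert on the observable outcome, not on the helpers used internally.
  var expectedResult = 42;   // hypothetical value, purely for illustration
  assert.strictEqual(calc(), expectedResult);
});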
I inherited a legacy JavaScript library written simply as a list of functions, as follows:
function checkSubtree(targetList, objId)
{
...
}
function checkRootSubtree(targetList, rootLength, rootInfo, level)
{
...
}
To test it with JsTestDriver, do I have to 'clean' it up to adhere to some JavaScript best practice, or can I test it without modification?
Thanks
The HtmlDoc document fragment feature of jsTestDriver helps when unit testing DOM-dependent JavaScript code. You probably want a little HTML chunk to apply those functions to, as you go along.
When you've proven that they work, you'll see ways of making the functions more testable. This is one of the hidden gems of unit testing: emergent software design. Since you want to be able to test your code in isolation, you'll have an incentive to reduce coupling.
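As a starting point, a test that simply drives one of the inherited functions against a hand-built fragment might look like this. TestCase and assertEquals are JsTestDriver globals; the fragment and the final assertion are placeholders, since the legacy function bodies aren't shown:

// CheckSubtreeTest.js
TestCase('CheckSubtreeTest', {
  'test checkSubtree copes with a small list': function () {
    var targetList = document.createElement('ul');
    targetList.innerHTML = '<li id="node-1"></li><li id="node-2"></li>';

    checkSubtree(targetList, 'node-1');

    // Replace with whatever observable effect the legacy function actually has.
    assertEquals(2, targetList.childNodes.length);
  }
});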
You can still test it using JsTestDriver. JsTestDriver is only a test runner, it doesn't require your code to be written in any special way. It's hard to give any advice on actual testing without seeing some code (i.e. function bodies).
I've got a collection of JavaScript files from a third party, and I'd like to remove all the unused methods to get the size down to a more reasonable level.
Does anyone know of a tool that does this for JavaScript, or at the very least gives a list of used/unused methods so I could do the trimming manually? This would be in addition to running something like the YUI JavaScript compressor tool...
Otherwise my thought is to write a Perl script to attempt to help me do this.
No, because you can "use" methods in insanely dynamic ways, like this:
obj[prompt("Gimme a method name.")]();
Check out JSCoverage. It generates code coverage statistics that show which lines of a program have been executed (and which have been missed).
I'd like to remove all the unused methods to get the size down to a more reasonable level.
There are a couple of tools available:
npm install -g fixmyjs
fixmyjs <filename or folder>
fixmyjs is a configurable module that uses JSHint (GitHub, docs) to flag unused functions and clean them up as well.
I'm not sure whether it actually removes such functions or only flags them. Though it is a great tool for cleanup, it appears to lack compatibility with later versions of ECMAScript (more info below).
There is also the Google Closure Compiler, which claims to remove dead JS code, but it is more of a build tool.
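As a rough illustration of what dead-code removal means here (the function names are invented; with the Closure Compiler's advanced optimizations, only code reachable from an exported entry point is kept):

function used() {
  return 1;
}

function neverCalled() {   // nothing reachable refers to this...
  return 2;
}

// ...so an advanced dead-code pass can drop neverCalled() entirely,
// because only used() is reachable from the exported entry point.
window['entryPoint'] = function () {
  return used();
};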
Updated
If you are using something like Babel, consider adding ESLint to your text editor, which can trigger a warning on unused methods and even variables and has a --fix CLI option for autofixing some errors and style issues.
I like ESLint because it contains multiple plugins for alternate libs (like React warnings if you're missing a prop), allowing you to catch bugs in advance. They have a solid ecosystem.
As an example: on my NodeJS projects, the config I use is based off of the Airbnb Style Guide.
You'll have to write a Perl script. Take no notice of the naysayers above.
Such a tool could work with libraries that are designed to make function calls only explicitly. That means no delegates or pointers to functions would be allowed, the use of which in any case only results in unreadable "spaghetti code" and is not best practice. Even if it removes some of these hidden functions, you'll discover most if not all of them in testing. The ones you don't discover will be so infrequently used that they won't be worth your time to fix. Don't obsess over perfection; people go mad doing that.
So applying this one restriction to JavaScript (and libraries) will result in incredible reductions in page size and therefore load times, not to mention readability and maintainability. This is already the case for tools that remove unused CSS, such as grunt_CSS and unCSS (see http://addyosmani.com/blog/removing-unused-css/), which report typical reductions down to one tenth of the original size.
It's a win/win situation.
It's noteworthy that all interpreters must address this issue of how to manage self-modifying code. For the life of me I don't understand why people want to persist with unrestrained freedom. As Triptych noted above, JavaScript functions can be called in ways that are literally "insane". This insane flexibility corrupts the fundamental doctrine of separation of code and data, enables real-time code injection, and invalidates any attempt to maintain code integrity. The result is always unreadable code that is impossible to debug, and the side effect for JavaScript - removing the ability to run automatic code pre-optimisation and validation - is much, much worse than any possible benefit.
AND - you'd have to feel pretty insecure about your work to want to deliberately obfuscate it from both your colleagues and yourself. Browser clients that do work extremely well take the "less is more" approach, and the best example I've seen to date is Microsoft Office's combination of Access Web Forms paired with SharePoint Access Services. The productivity of having a ubiquitous, heavy, tightly managed runtime interpreter client and its server-side clone is absolutely phenomenal.
The future of JavaScript self-modifying code technologies is therefore bringing them back into line to respect the...
KISS principle of code and data: Keep It Separate, Stupid.
Unless the library author kept track of dependencies and provided a way to download the minimal code [e.g. MooTools Core download], it will be hard to identify 'unused' functions.
The problem is that JS is a dynamic language and there are several ways to call a function.
E.g. you may have a method like
function test()
{
//
}
You can call it directly:
test();
Or indirectly, through a property name computed at run time:
var i = 10;
var hello = i > 1 ? 'test' : 'xyz';
window[hello](); // ends up calling test()
I know this is an old question, but UglifyJS2 supports removing unused code, which may be what you are looking for.
It's also worth noting that ESLint supports a rule called no-unused-vars, which does some basic detection of whether functions are being used or not. It definitely detects this if you write the function as an expression and store it in a variable (just be aware that, unlike a function declaration, the assignment to the variable is not hoisted, so the function only exists after that line runs).
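As a sketch, enabling the rule and the hoisting caveat look like this (the file name and values are just examples):

// .eslintrc.js
module.exports = {
  rules: {
    'no-unused-vars': 'error'
  }
};

// elsewhere: a function expression assigned to a variable is flagged by
// no-unused-vars if nothing uses it, and unlike a function declaration the
// assignment is not hoisted, so the function is only callable after this line runs.
var calc = function () { /* ... */ };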
In the context of detecting unused functions, while extreme, you could consider breaking most of your functions into separate modules, because there are packages and tools to help detect unused modules. There is a little segment of sindresorhus's thoughts on tiny modules which might be relevant to that philosophy, but that may be extreme for your use case.
The following would help:
If you have test cases with full coverage, running a code coverage tool like istanbul (https://github.com/gotwarlost/istanbul) or nyc (https://github.com/istanbuljs/nyc) would give a hint about untouched functions.
At the very least, the above will help you find functions that are covered even though you thought they were unused.