I have some projects in JavaScript/jQuery, but AngularJS is a whole new concept to me.
In the docs I have seen the following functions appear a lot of the time.
describe('PhoneListCtrl', function() {
  it('should create "phones" model with 3 phones', function() {
    var scope = {},
        ctrl = new PhoneListCtrl(scope);
    expect(scope.phones.length).toBe(3);
  });
});
Are describe() and it() legitimate functions? From the tutorial docs I understand that they are for testing and mocking, but it is still unclear to me how I should run these functions, and whether they are merely placeholders like 'foo' and 'bar'.
describe is used to scope tests and it is used to declare them. When the test fails, in your case, you would see a message along the lines of
'should create "phones" model with 3 phones' FAILED!
Followed by some failing assertions. The string provided to it gives context to the assertions.
describe can be used to scope a number of tests to a single topic, including before and after hooks. This is common in lots of testing libraries, not just in JavaScript: Mocha and Jasmine, but also RSpec (from Ruby), use a similar approach.
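For example, here is a sketch (reusing the PhoneListCtrl names from the question, and assuming that controller populates scope.phones) of how describe groups related specs and shares setup through beforeEach:
describe('PhoneListCtrl', function() {
  var scope, ctrl;

  // Runs before every spec inside this describe block
  beforeEach(function() {
    scope = {};
    ctrl = new PhoneListCtrl(scope);
  });

  it('should create "phones" model with 3 phones', function() {
    expect(scope.phones.length).toBe(3);
  });

  it('should define the phones model at all', function() {
    expect(scope.phones).toBeDefined();
  });
});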
These are from the Jasmine testing framework. describe defines a test suite, and it defines a "spec" or a test. From the Jasmine documentation:
A test suite begins with a call to the global Jasmine function describe with two parameters: a string and a function. The string is a name or title for a spec suite - usually what is under test. The function is a block of code that implements the suite.
and regarding it:
Specs are defined by calling the global Jasmine function it, which, like describe, takes a string and a function. The string is a title for this spec and the function is the spec, or test. A spec contains one or more expectations that test the state of the code under test.
I am not familiar with Angular, but my assumption is that this is used in the documentation for illustrative purposes, in a style similar to the various code koans (such as Ruby Koans), where tests are used to illustrate how certain aspects of the language are meant to function. I could be wrong about that last part though.
I've been writing tests (in JavaScript) for the last two months, and I have the habit of checking whether a module has certain properties.
For example:
// test/foo.js
const Foo = require('../lib/foo');
const Expect = require('chai').expect;

describe('Foo API', () => {
  it('should have #do and #dont properties', () => {
    Expect(Foo).to.have.property('do')
      .and.to.be.a('function');
    Expect(Foo).to.have.property('dont')
      .and.to.be.a('function');
  });
});
And I've been wondering if I am doing the right thing. I just want to know a few things:
Is this pattern "right"?
Is it widely used?
Are there other ways to do it?
If it's not "right", why not?
Does it even make sense, or is it unnecessary or redundant?
Don't test for types. Do test that specific property values conform to expected values.
So instead of "is foo a function", write a test that calls foo and expects a specific result.
If foo is not a function, you'll generate an error and the test will fail (which is good). If foo is a function, you'll have a proper test of that function's behavior.
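For example, assuming purely for illustration that the module's do() upper-cases its input (the real module isn't shown in the question), the spec below exercises the behaviour rather than the type:
// Hypothetical behaviour for lib/foo, invented for this sketch.
const foo = require('../lib/foo');
const expect = require('chai').expect;

describe('Foo API', () => {
  it('#do upper-cases its input', () => {
    // If foo.do is missing or not a function, this call throws
    // and the spec fails anyway, so no separate type check is needed.
    expect(foo.do('abc')).to.equal('ABC');
  });
});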
There's a paradigm called duck typing that says (from Wikipedia):
In duck typing, a programmer is only concerned with ensuring that objects behave as demanded of them in a given context, rather than ensuring that they are of a specific class. For example, in a non-duck-typed language, one would create a function that requires that the object passed into it be of type Duck, or descended from type Duck, in order to ensure that that function can then use the object's walk and quack methods. In a duck-typed language, the function would take an object of any type and simply call its walk and quack methods, producing a run-time error if they are not defined. Instead of specifying types formally, duck typing practices rely on documentation, clear code, and testing to ensure correct use.
I would focus on the following part of above text:
In a duck-typed language, the function would take an object of any type and simply call its walk and quack methods, producing a run-time error if they are not defined
And...
[...] duck typing practices rely on documentation, clear code, and testing to ensure correct use.
That is, since JavaScript is a dynamically-typed language, it fits very well with duck typing.
In other words, you should avoid these tests. If a module is missing a property, or the property exists but has an unexpected type, you'll get a runtime error, which is enough to signal that the caller isn't fulfilling the implicit contract it has with the module.
The whole contract, defined by the run-time behavior of your code, can be captured by good documentation pages.
If you write a test, it is because you want to be sure that further changes to the code will not change the behavior that you're testing.
So it is useful if your module exposes those properties as an interface and other code in your app, or in other apps, depends on them.
But if the property is just something internal, i.e. it depends only on the module's implementation, then testing it is dangerous and a waste of time, as you should always be able to change the implementation.
That's not true for interfaces.
In your tests, instead of testing whether a property is a function, you should test that the function does what it should do when it is called, provided the documentation says that this function is part of the module's interface.
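For instance, a sketch of that distinction, with a module and function names invented purely for illustration: the documented interface gets a behavioural spec, the internal helper gets none.
// Hypothetical module: parse() is the documented interface,
// trimInput() is an internal helper of the implementation.
const parser = require('../lib/parser');
const expect = require('chai').expect;

describe('parser (documented interface)', () => {
  it('splits "a,b" into two trimmed items', () => {
    // Asserts only observable behaviour of the public interface.
    expect(parser.parse(' a , b ')).to.deep.equal(['a', 'b']);
  });

  // No spec for the internal trimInput() helper: it is an
  // implementation detail and may disappear in a refactor
  // without breaking the documented contract.
});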
I see two popular libraries in NPM - chai and check-types. I am trying to understand their intended purpose.
I know that chai is used for unit testing TDD/BDD style and has a rich assertion library.
check-types (https://github.com/philbooth/check-types.js), on the other hand, is simply an assertion library for checking that arguments are of the correct types. It does not look like it is meant to be used for unit testing. I am assuming it is to be used inside my JavaScript functions to ensure that the arguments passed in are of the expected types.
So the question is: is the check-types library redundant if chai already provides a rich assertion library? Or are they meant for different uses? Can I also use chai in my code (outside of my tests) to check that variables are of the right type?
As you already assumed correctly, there are two different use cases here:
chai is an assertion library intended just for tests, and therefore it is not optimized in any way to run within a normal app. There is no minified version of it, and requiring it brings in a lot of library code for the different testing styles (should, expect and assert). Most importantly: if a condition for an assertion is not met, chai will immediately throw a special AssertionError that is intended to be processed by popular test harnesses like Karma or Mocha.
check-types, on the other hand, is just intended to make type- and value-checking easier and more readable within an app. In most cases, it lets you decide what to do when a check fails (it doesn't throw).
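A rough sketch of that difference; the chai call is the standard expect style, while the check-types predicate used below (check.number) is the common usage from its README and should be treated as an assumption:
// Hypothetical config object, just for the illustration.
const config = { port: '3000' };

// In a test, chai throws an AssertionError on failure, which the
// test harness (Mocha, Karma, ...) catches and reports for you.
const expect = require('chai').expect;
expect(config.port).to.be.a('number'); // throws here: port is a string

// In app code, check-types just returns a boolean and leaves the
// reaction up to you (no throw).
const check = require('check-types');
if (!check.number(config.port)) {
  config.port = 8080; // e.g. fall back to a default
}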
CONCLUSION:
While of course you could use chai outside of your tests, I definitely wouldn't recommend it, as it would just increase the size of your build with lots of unused methods and you would need a try{} catch(){} block around every assertion.
And while you could use check-types for your tests, you would need to throw the AssertionErrors for every test yourself (which is tiresome).
So: no, neither library is redundant. You can think of chai as a kind of superset of libraries like check-types (chai itself uses its own type-detection library called type-detect, of which I am one of the maintainers ;)) that packages them up for use within test harnesses.
With Jasmine it is possible to spyOn methods, but I'm not clear on when it would actually be useful. My understanding is that unit tests should not be concerned with implementation details, and testing whether a method is called would be an implementation detail.
One place I might think of is spying on scope.$broadcast (Angular) etc., but then again this would be an implementation detail, and I'm not sure if unit tests should even bother with how the code works, as long as it gives the expected result.
Obviously there are good reasons to use spyOn so what would be good place to use it?
The spyOn you describe is more commonly known in testing as a mock, although to be more precise it allows for two operations:
Create a new implementation for a method via createSpy (this is the classical mock)
Instrument an existing method via spyOn (this allows you to see if the method was called and with what args, the return value etc.)
Mocking is probably the most used technique in unit testing. When you are testing a unit of code, you'll often find that there are dependencies on other units of code, and those dependencies have their own dependencies, etc. If you try to test everything you'll end up with a module/UI test, which is expensive and difficult to maintain (they are still valuable, but you want as few of those as possible).
This is where mocking comes in. Imagine your unit calls a REST service for some data. You don't want to take a dependency on that service in your unit test, so you mock the method that calls the service and provide your own implementation that simply returns some data. Want to check that your unit handles REST errors? Have your mock return an error, etc.
It can sometimes be useful to know if your code actually calls another unit of code - imagine that you want to make sure your code correctly calls a logging module. Just mock (spyOn) that logging module and assert that it was called X number of times with the proper parameters.
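A small sketch of both uses with Jasmine spies; the REST service, logger, and loadUsers function here are invented for the example:
// Invented collaborators, just for the example.
var restService = {
  fetchUsers: function() { /* real HTTP call in production */ }
};
var logger = { info: function(msg) {} };

function loadUsers(rest, log) {
  var users = rest.fetchUsers();
  log.info('loaded ' + users.length + ' users');
  return users;
}

describe('loadUsers', function() {
  it('returns the users from the service and logs once', function() {
    // Mock the dependency: no real HTTP request is made.
    spyOn(restService, 'fetchUsers').and.returnValue([{ id: 1 }, { id: 2 }]);
    spyOn(logger, 'info');

    var users = loadUsers(restService, logger);

    expect(users.length).toBe(2);
    expect(logger.info).toHaveBeenCalledWith('loaded 2 users');
    expect(logger.info.calls.count()).toBe(1);
  });
});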
You can spy on functions and then you will be able to assert a couple of things about them. You can check whether a function was called, what parameters it received, whether it returned something, or even how many times it was called!
Spies are highly useful when writing tests, so I am going to explain how to use the most common of them here.
// This is our SUT (Subject Under Test)
function Post(rest) {
  this.rest = rest;
  rest.init();
}
We have here our SUT, which is a Post constructor. It uses a RestService to fetch its stuff. Our Post will delegate all the REST work to the RestService, which will be initialized when we create a new Post object. Let’s start testing it step by step:
describe('Posts', function() {
  var rest, post;

  beforeEach(function() {
    rest = new RestService();
    post = new Post(rest);
  });
});
Nothing new here. Since we are going to need both instances in every test, we put the initialization in a beforeEach so we will have a new instance every time.
Upon Post creation, we initialize the RestService. We want to test that; how can we do that?
it('will initialize the rest service upon creation', function() {
  spyOn(rest, 'init');
  post = new Post(rest);
  expect(rest.init).toHaveBeenCalled();
});
We want to make sure that init on rest is being called when we create a new Post object. For that we use the Jasmine spyOn function. The first parameter is the object we want to put the spy on, and the second parameter is a string which represents the function to spy on. In this case we want to spy on the init function of the rest object. Then we just need to create a new Post object, which will call that init function. The final part is to assert that rest.init has been called. Easy, right? Something important here is that when you spy on a function, the real function is never called. So here rest.init doesn't actually run.
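If you do want the real implementation to run while still recording the call, Jasmine (2.x syntax) lets you chain and.callThrough() onto the spy; a quick variation of the same spec:
it('will initialize the rest service upon creation (real init still runs)', function() {
  // callThrough records the call but also executes the real rest.init
  spyOn(rest, 'init').and.callThrough();
  post = new Post(rest);
  expect(rest.init).toHaveBeenCalled();
});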
Recently I started unit testing a JavaScript app I'm working on. No matter whether I use Jasmine, QUnit or something else, I always write a set of tests. Now, I have my source code with, let's say:
function calc()
{
  // some code
  someOtherFunction();
  // more code
}
I also have a test (no matter what framework; with Jasmine spies, sinon.js or something similar) that confirms that someOtherFunction() is called when calc() is executed. The test passes. Now at some point I refactor the calc function so the someOtherFunction() call no longer exists, e.g.:
function calc()
{
  // some code
  someVariable++;
  // more code
}
The previous test will now fail, yet the function still works as expected; simply, its code is different.
Now, I'm not sure if I understand correctly how testing is done. It seems obvious that I will have to go back and rewrite the test, but if this happens, is there something wrong with my approach? Is it bad practice? If so, at which point did I go wrong?
The general rule is you don't test implementation details. So given you decided that it was okay to remove the call, the method was an implementation detail and therefore you should not have tested that it was called.
20/20 hindsight is a great thing, isn't it?
In general I wouldn't test that a 'public' method called a 'private' one. Testing delegation should be reserved for when one class calls another.
You have written a great unit test.
The unit tests should notice sneaky side-effects when you change the implementation.
And the test must fail when the expected values don't show up.
So look at your unit test where the expected values don't match and decide what the problem was:
1) The unit test tested things it shouldn't (change the test)
2) The code is broken (add the missing side-effect)
It's fine to rewrite this test. A lot of tests fail to be perfect on the first pass. The most common test smell is tight-coupling with implementation details.
Your unit test should verify the behavior of the object, not how it achieved the result. If you're strict about doing this in a TDD style, maybe you should revert the code you changed and refactor the test first. But regardless of what technique you use, it's fine to change the test as long as you're decoupling it from the details of the system under test.
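For instance, a hypothetical behaviour-focused spec for calc (the expected value is invented here, since the question elides what calc actually computes); because it only looks at the observable result, it survives the refactor from someOtherFunction() to someVariable++:
describe('calc', function() {
  it('produces the expected result', function() {
    // Replace 42 with whatever calc is actually specified to return.
    expect(calc()).toBe(42);
  });
});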
I'm getting started with javascript unit testing (with Jasmine).
I have experience in unit testing C# code. But given that JavaScript is a dynamic language, I find it very useful to exploit that and write tests using the expressive power of JavaScript, for instance:
describe('known plugins should be exported', function() {
  var plugins = ['bundle', 'less', 'sass', 'coffee', 'jsn', 'minifyCSS', 'minifyJS', 'forward', 'fingerprint'];
  plugins.forEach(function(plugin) {
    it('should export plugin named ' + plugin, function() {
      expect(all[plugin]).toBeDefined();
    });
  });
});
As far as this kind of non-conventional test writing goes, I haven't gone further than tests like this (an array holding a list of very similar test cases).
So I guess my question is
Is it fine to write tests like this, or should I constrain myself to a more "statically typed" test fixture?
Great question!
Yes, it is perfectly fine to write unit tests like this. It's even encouraged.
JavaScript being a dynamic language lets you mock objects really easily. DI and IoC are really easy to do.
In general, testing with Jasmine (or Mocha which I personally prefer) is a pleasant and fun experience.
It's worth mentioning that since you're in a dynamic language, you need tests you did not need in statically typed languages. Tests commonly enforce that members and methods exist and have the right types.
Having no interfaces to define your contract, your tests often define your code's contract, so it's really not uncommon to see tests do this sort of verification (like in your code) where you wouldn't in C#.
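For example, a sketch of how cheap a hand-rolled mock is in JavaScript: a plain object literal stands in for the dependency, with no interface declaration or mocking framework needed (all names here are invented for illustration).
// Hypothetical unit that depends on a "mailer" collaborator.
function Report(mailer) {
  this.send = function(text) {
    mailer.deliver({ to: 'admin', body: text });
  };
}

describe('Report', function() {
  it('delivers the text through the mailer', function() {
    // A plain object literal is a perfectly good mock here.
    var delivered = [];
    var fakeMailer = { deliver: function(msg) { delivered.push(msg); } };

    new Report(fakeMailer).send('hello');

    expect(delivered.length).toBe(1);
    expect(delivered[0].body).toBe('hello');
  });
});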