Say I have a popover, and I want to test it:
being created
being manipulated
being destroyed
It is beneficial to have it declared in one place (tucked inside "describe"), so that it can be shared across the "its".
Should one share things between tests, i.e., should test2 rely on test1 being run first?
What is the best way to do this with Jasmine?
It's a bad thing to rely on test order. To share things between tests, you can provide a way to set the state of an object. Consider the pseudocode below:
var popover = getPopover({state:'init'});
//checking init state
...
//other test starting
var popover = getPopover({state:'manipulated'});
//checking the state
So the main idea is to be able to init your object at the state you need.
Note that if the initialization doesn't take much code and you are not going to reuse it much, you can hardcode the state setup for every test. Sure, it's not DRY, but you benefit from tests that can be read without chasing references to other methods. Sometimes that's a good thing, but it depends.
Also, you can use beforeEach and afterEach for setup and teardown before and after every test (they are describe-level functions). This is one of the preferred ways to perform state initialization and cleanup.
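As a sketch of that pattern for the popover question (getPopover and its destroy method are hypothetical; the stand-ins at the top only exist so the snippet runs outside a Jasmine runner, where the real globals come from Jasmine itself):

```javascript
// Tiny stand-ins for Jasmine's globals, just to make the sketch self-contained.
var beforeHooks = [], afterHooks = [];
function describe(name, fn) { fn(); }
function beforeEach(fn) { beforeHooks.push(fn); }
function afterEach(fn) { afterHooks.push(fn); }
function it(name, fn) {
  beforeHooks.forEach(function (h) { h(); }); // setup
  fn();                                       // the spec itself
  afterHooks.forEach(function (h) { h(); });  // teardown
}
function expect(actual) {
  return { toBe: function (exp) { if (actual !== exp) throw new Error("expected " + exp + ", got " + actual); } };
}

// Hypothetical popover factory, standing in for real creation code.
function getPopover(opts) {
  return { state: opts.state, destroy: function () { this.state = "destroyed"; } };
}

describe("popover", function () {
  var popover;

  beforeEach(function () {
    popover = getPopover({ state: "init" }); // fresh instance per spec
  });

  afterEach(function () {
    popover.destroy(); // clean up so nothing leaks into the next spec
  });

  it("starts in the init state", function () {
    expect(popover.state).toBe("init");
  });
});
```

Because every spec gets its own popover from beforeEach, the specs no longer care what order they run in.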
Related
I am writing unit test cases using Jasmine for an Angular 13 project. There is a test case which passes sometimes and fails sometimes. I presume this happens because of the order of test execution. Any idea how to deal with it?
An error was thrown in afterAll
By default the tests run in a random order each time. There is a seed value so that you can recreate the order; you can read about how to approach that in this answer.
Once you have yours executing where it fails each time, you will easily know whether any of the following has actually fixed your issue.
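If you run Jasmine through Karma, the order can be replayed by pinning the seed in karma.conf.js. A sketch of the relevant fragment, where "4321" stands in for whatever seed your failing run printed:

```javascript
// karma.conf.js (fragment) -- pin the seed so the failing order reproduces every run.
module.exports = function (config) {
  config.set({
    frameworks: ["jasmine"],
    client: {
      jasmine: {
        random: true, // keep random order (the default)
        seed: "4321"  // replay the exact order from the failing run
      }
    }
  });
};
```

Once the flaky spec fails deterministically, remove the seed again so future runs keep shuffling.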
You can also check anywhere you are subscribing to something: every time you subscribe in a test, you need to make sure the subscription gets unsubscribed at the end of the test. To do this you can add .pipe(take(1)) before the subscribe, or capture the subscription object and call unsubscribe on it:
const sub = someService.callObservable().subscribe();
// verify what you need to
sub.unsubscribe();
A third concept to look at: any variables you define above the beforeEach should be assigned a new value inside the beforeEach. Otherwise the same objects are reused between tests, and that can lead to issues.
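Stripped of the Jasmine harness, the reuse problem and its fix look roughly like this (makeQueue is a hypothetical factory for whatever object your tests share):

```javascript
// Hypothetical factory for some object under test.
function makeQueue() {
  return { items: [], add: function (x) { this.items.push(x); } };
}

// Anti-pattern: one instance created when the spec file loads.
var sharedQueue = makeQueue();
sharedQueue.add("from the first test");
// A later test now starts with leftover state:
var leftoverCount = sharedQueue.items.length; // 1, not 0

// Fix: declare the variable above beforeEach, but assign it inside it,
// so each test starts with a clean instance.
var freshQueue;
function runBeforeEach() { freshQueue = makeQueue(); } // what the beforeEach body would do
runBeforeEach();
var cleanCount = freshQueue.items.length; // 0
```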
I am new to Jasmine testing and looking for a "best practice" for unit testing a stateful AngularJS service. Most tutorials I could find focus on test cases that run atomic calls to stateless services.
This matches the Jasmine syntax nicely:
it("should do something", function(){ expect(target.doSomething()).toBe... })
However, I found no obvious way to extend this pattern to a test case that involves multiple calls to one or several of the service's functions.
Let's imagine a service like this one:
angular.module("someModule").factory("someService", function(){
    return {
        enqueue: function(item){
            // Add item to some queue
        }
    };
});
For such a service, it makes sense to test that sequential calls to enqueue() process the items in the correct order. This involves writing a test case that calls enqueue() multiple times and checks the end result (which obviously cannot be achieved with a service as simple as the one above, but this is not the point...)
What doesn't work:
describe("Some service", function(){
    // Initialization omitted for simplicity
    it("should accept the first call", function() {
        expect(someService.enqueue(one)).toBe... // whatever is correct
    });
    it("should accept the second call", function() {
        expect(someService.enqueue(two)).toBe... // whatever is correct
    });
    it("should process items in the correct order", function() {
        // whatever can be used to test this
    });
});
The code above (which actually defines not one but three test cases) fails randomly as the three test cases are executed... just as randomly.
A poster in this thread suggested that splitting the code into several describe blocks would cause them to be executed in the given order, but again this seems to differ from one Jasmine version to another (according to other posters in the same thread). Moreover, executing suites and tests in a random order seems to be the intended behaviour; even if it were possible, by setup, to override this behaviour, that would probably NOT be the right way to go.
Thus it seems that the only correct way to test a multi-call scenario is to make it ONE test case, like this:
describe("Some service", function(){
    // Initialization omitted for simplicity
    it("should work in my complex scenario", function(){
        expect(someService.enqueue(one)).toBe... // whatever is correct
        expect(someService.enqueue(two)).toBe... // whatever is correct
        expect(/* whatever is necessary to ensure the order is correct */);
    });
});
While technically this seems the logical way to go (after all, a complex scenario is one test case, not three), the Jasmine "description + code" pattern suffers in this implementation, as:
There is no way to associate a message with each "substep" that can fail within the test case;
The description for the single "it" inevitably becomes bulky, as in the example above, if it is to say anything useful about a complex scenario.
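On the first point there is a partial remedy: Jasmine 3.3+ lets you attach a message to each substep with expect(...).withContext(...), so a failure reports which step broke. A sketch (the expect at the top is a tiny stand-in so the snippet runs on its own; in a real suite Jasmine provides it, and someService here is a hypothetical in-memory queue):

```javascript
// Stand-in for Jasmine's expect, including withContext (Jasmine 3.3+ provides the real one).
function expect(actual) {
  var context = "";
  return {
    withContext: function (msg) { context = msg; return this; },
    toBe: function (exp) {
      if (actual !== exp) throw new Error(context + ": expected " + exp + ", got " + actual);
    }
  };
}

// Hypothetical queue service, standing in for someService.
var someService = {
  q: [],
  enqueue: function (item) { this.q.push(item); return this.q.length; }
};

// One scenario, one spec body -- but each substep carries its own message.
expect(someService.enqueue("one")).withContext("first enqueue").toBe(1);
expect(someService.enqueue("two")).withContext("second enqueue").toBe(2);
expect(someService.q.join(",")).withContext("processing order").toBe("one,two");
```

This keeps the multi-call scenario in a single it while still telling you which substep failed.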
This makes me wonder whether this is the only correct solution (or is it?) to this kind of testing need. Again, I am especially interested in "doing it the right way" rather than using some kind of hack that would make it work... where it should not.
Sorry, no code for this... I'm not sure it's needed; I think you just need to adjust your expectations of your tests.
As a general rule of testing, you don't really care how external dependencies handle your service; you can't control that. You want to test what you think the expected results of your service are going to be.
For your example, you'll just want to invoke your service's dependencies, call the function, and test the expected results of calling the enqueue function. If it returns a promise, check success and error; if it calls an API, check that, and so on.
If you want to see how external dependencies use your service, you'll test that in those dependencies' tests.
For example, say you have a controller that invokes enqueue. In that controller's test, you'll have to inject your provider (the service) and handle the expectations.
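A sketch of that idea with plain functions and a hand-rolled recording stub instead of Angular's injector (makeController, addItem, and the stub are all hypothetical names):

```javascript
// Hypothetical controller that depends on the service only through its interface.
function makeController(queueService) {
  return {
    addItem: function (item) {
      if (item == null) return false; // controller-level validation
      queueService.enqueue(item);
      return true;
    }
  };
}

// In the controller's test, the real service is replaced by a recording stub.
var calls = [];
var stubService = { enqueue: function (item) { calls.push(item); } };

var controller = makeController(stubService);
var accepted = controller.addItem("task-1"); // true; stub records "task-1"
var rejected = controller.addItem(null);     // false; stub not called
```

The controller test only asserts on what the controller did with the service, never on the service's internals.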
I'm using Angular, and we have a factory which does some initialization of Restangular defaults. It's responsible for adding a number of hooks into Restangular paths for later.
I want to test the logic of this factory, but the problem is that most of the hooks are set before the factory is returned; they're done once at initialization. As such, I don't know how to test the logic. I can't add a spy, because by the time I have a factory object it's already been initialized and it's too late to test that init logic.
I can test that specific paths received the expected hooks, but I feel that's not a proper unit-level test. Since we're adding a number of general helper methods to every one of our paths, I would really like to test in isolation that each general helper function was added, in a way that does not depend on any of the other hooks we set for specific individual paths.
Is there a good way to plug into Angular and test the init logic that occurs before a factory is returned? I would like either to modify a default array (the one that associates paths with specific hooks), or to be able to call the helper functions in isolation.
I'm quite new to JavaScript unit testing. One thing keeps bothering me. When testing JavaScript, we often need to do DOM manipulation. It looks like I am unit testing a method/function in a Controller/Component, but I still need to depend on the HTML elements in my templates. Once the id (or the attributes used as selectors in my test cases) is changed, my test cases also need to be CHANGED! Doesn't this violate the purpose of unit testing?
One of the toughest parts of javascript unit testing is not the testing, it's learning how to architect your code so that it is testable.
You need to structure your code with a clear separation of testable logic and DOM manipulation.
My rule of thumb is this:
If you are testing anything that is dependent on the DOM structure, then you are doing it wrong.
In summary: try to test data manipulations and logical operations only.
I respectfully disagree with #BentOnCoding. Most often a component is more than just its class. A component combines an HTML template and a JavaScript/TypeScript class. That's why you should test that the template and the class work together as intended. Class-only tests can tell you about class behavior, but they cannot tell you whether the component is going to render properly and respond to user input.
Some people say you should test this in integration tests. But integration tests are slower to write and more expensive (in terms of time and resources) to run and maintain. So testing most of your component functionality in integration tests might slow you down.
That doesn't mean you should skip integration tests. While integration and E2E tests may be slower and more expensive than unit tests, they bring you more confidence that your app is working as intended. An integration test is where individual units/components are combined and tested as a group. It shouldn't be considered the only place to test your component's template.
I think I'd second #BentOnCoding's recommendation that what you want to unit test is your code, not anything else. When it comes to DOM manipulation, that's browser code, such as appendChild, replaceChild, etc. If you're using jQuery or some other library, the same still applies: you're calling some other code to do the manipulation, and you don't need to test that.
So how do you assert that calling some function on your viewmodel/controller resulted in the DOM structure that you wanted? You don't. Just as you wouldn't unit test that calling a stored procedure on a DB resulted in a specific row in a specific table. You need to instead think about how to abstract out the parts of your controller that deal with inputs/outputs from the parts that manipulate the DOM.
For instance, if you had a method that called alert() based on some conditions, you'd want to separate the method into two:
One that takes and processes the inputs
One that calls window.alert()
During the test, you'd substitute window.alert (or your proxy method to it) with a fake (see SinonJS), and call your input processor with the conditions to cause (or not cause) the alert. You can then assert different values on whether the fake was called, how many times, with what values, etc. You don't actually test window.alert() because it's external to your code. It's assumed that those external dependencies work correctly. If they don't, then that's a bug for that library, but it's not your unit test's job to uncover those bugs. You're only interested in verifying your own code.
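A sketch of that substitution with a hand-rolled fake (SinonJS gives you the same thing with call counts and argument capture built in; warnIfNegative is a hypothetical example of the input-processing function, with the alert call injected):

```javascript
// The two separated concerns: input processing vs. the actual alert call.
function warnIfNegative(value, alertFn) {
  if (value < 0) {
    alertFn("negative value: " + value); // output side effect, injected
    return true;
  }
  return false;
}

// Hand-rolled fake standing in for window.alert (sinon.fake() is the polished version).
function makeFake() {
  var fake = function () { fake.calls.push(Array.prototype.slice.call(arguments)); };
  fake.calls = [];
  return fake;
}

var fakeAlert = makeFake();
var warned = warnIfNegative(-3, fakeAlert);   // true; fake records one call
var notWarned = warnIfNegative(5, fakeAlert); // false; fake not called again
```

The test asserts on fakeAlert.calls, never on a real browser alert.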
I'm working within a Javascript + BackboneJS (an MVC framework) + RequireJS framework, but this question is somewhat OO generic.
Let me start by explaining that in Backbone, your Views are a mix of traditional Views and Controllers, and your HTML templates are the traditional MVC Views.
Been racking my head about this for a while and I'm not sure what the right/pragmatic approach should be.
I have a User object that contains user preferences (like unit system, language selection, anything else) that a lot of code depends on.
Some of my Views do most of the work without the use of templates (by using 3rd party libs, like Mapping and Graphing libs), and as such they have a dependency on the User object to take care of unit conversion, for example. I'm currently using RequireJS to manage that dependency without breaking encapsulation too much.
Some of my Views do very little work themselves, and only pass on Model data to my templating engine / templates, which do the work and DO have a dependency on the User object, again, for things like units conversion. The only way to pass this dependency into the template is by injecting it into the Model, and passing the model into the template engine.
My question is, how to best handle such a widely needed dependency?
- Create an App-wide reference/global object that is accessible everywhere? (YUK)
- Use RequireJS managed dependencies, even though it's generally only recommended to use managed dependency loading for class/object definitions rather than concrete objects.
- Or, only ever use dependency injection, and manually pass that dependency into everything that needs it?
From a purely technical point of view, I would argue that mutable globals (globals that may change), especially in JavaScript, are dangerous and wrong, particularly since JavaScript is full of code that gets executed asynchronously. Consider the following code:
window.loggedinuser = Users.get("Paul");
addSomeStuffToLoggedinUser();
window.loggedinuser = Users.get("Sam");
doSomeOtherStuffToLoggedinUser();
Now if addSomeStuffToLoggedinUser() executes asynchronously somewhere (e.g. it does an ajax call, and then another ajax call when the first one finishes), it may very well be adding stuff to the new loggedinuser ("Sam") by the time it gets to the second ajax call. Clearly not what you want.
Having said that, I'm even less of a supporter of having some user object that we hand around all the time from function to function, ad infinitum.
Personally, having to choose between these two evils, I would choose a global scope for things that "very rarely change", unless perhaps I were building a nuclear power station or something. So I tend to make the logged-in user available globally in my app, accepting the risk that if some call runs very late, and one user logs out and another logs in right away, something strange may happen. (Then again, if a meteor crashes into the datacenter that hosts my app, something strange may happen as well; I'm not protecting against that either.) Actually, a possible solution would be to reload the whole app as soon as someone logs out.
So, I guess it all depends on your app. One thing that makes it better (and makes you feel like you're still getting some OO karma points) is to hide your data in some namespaced singleton:
var myuser = MyApp.domain.LoggedinDomain.getLoggedinUser();
doSomethingCoolWith(myuser);
instead of
doSomethingCoolWith(window.loggedinuser);
although it's pretty much the same thing in the end...
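A sketch of what that namespaced singleton might look like, using the classic IIFE module pattern to keep the backing variable private (the names follow the example above; setLoggedinUser is an assumed setter):

```javascript
var MyApp = MyApp || {};
MyApp.domain = MyApp.domain || {};

// Module pattern: the backing variable lives in the closure, not on window.
MyApp.domain.LoggedinDomain = (function () {
  var loggedinUser = null; // private; only reachable through the accessors below
  return {
    setLoggedinUser: function (user) { loggedinUser = user; },
    getLoggedinUser: function () { return loggedinUser; }
  };
})();

MyApp.domain.LoggedinDomain.setLoggedinUser({ name: "Paul" });
var myuser = MyApp.domain.LoggedinDomain.getLoggedinUser();
```

It is still effectively global state, but every read and write now goes through one controlled access point, which at least makes it easier to find and to stub in tests.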
I think you already answered your own question; you just want someone else to say it for you :) Use DI, but you aren't really "manually" passing that dependency into everything, since you need a reference to it to use it anyway.
Considering the TDD approach, how would you test this? DI is best for a new project, but JS gives you flexible options for dealing with concrete global dependencies when testing, i.e. context construction. Going way back, Yahoo laid out a module pattern where all modules were loosely coupled and not dependent on each other, but where it was OK to have global context. That global context can make your app's construction more pragmatic for things that are constantly reused. It's just that you need to apply it judiciously/sparingly, and there needs to be a very strong case for those things being dynamic.