I use mocha/chai to test the GraphQL endpoints of a Node.js/Express server, which works fine. However, in the first test I check whether the required .env variables have been set correctly.
If they haven't, all further tests will be affected anyway. So I would like to terminate the whole run when any test in this block fails (preferably after finishing the block, to capture all missing variables).
So can I somehow terminate the entire test run when a certain describe block fails?
Note: I found the bail flag/config option, but that aborts the whole run on any error, which is not what I'm looking for.
You can try adding an after hook with a condition in it:
after('Fail for some describe', function () {
  // If this suite has failed, set its private _bail flag so the rest of the run is aborted.
  if (this.test.parent.title.startsWith('Describe title that should fail') && this.test.parent.isFailed()) {
    this.test.parent._bail = true;
  }
});
This could be tricky, since the this context may contain several layers of parents; if you are not sure, you can always log the keys of the current this.test object. Also bear in mind that the following describe might have a different this context with an overridden _bail value. This solution works well when it comes to tracking specific critical it failures.
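Put together with the environment-check scenario from the question, a minimal sketch could look like this (the suite title and variable names are illustrative; it reuses the _bail trick from above):

describe('environment', function () {
  var required = ['DB_URL', 'API_KEY'];   // illustrative variable names

  required.forEach(function (name) {
    it('defines ' + name, function () {
      if (!process.env[name]) {
        throw new Error(name + ' is not set');
      }
    });
  });

  // Runs once after all the specs above, so every missing variable
  // is reported before the run is aborted.
  after('abort the run if the environment is broken', function () {
    if (this.test.parent.isFailed()) {
      this.test.parent._bail = true;
    }
  });
});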
What is the point of mocking the method of an external library when unit testing?
Let's say I have a function that uses the XYZ library to get the current user from a given authorization token; if it finds the user, it returns a valid AWS policy:
export const getUser = async token => {
  const user = await XYZ.getUser(token)
  return {
    // valid policy
    context: user,
  }
}
If an invalid token is given, the getUser should throw an error.
What is the point of testing this function if the XYZ.getUser is already well tested?
Why mock the getUser instead of using the real one?
You wouldn't want to invoke the actual HTTP request behind XYZ.getUser(token), for the following reasons.
1) The objective of unit testing is to test a specific behaviour, and in your case that means testing that your getUser function actually works.
2) You wouldn't want to make an actual request to the backend, as that would usually mean writing, reading or deleting something in the database. Don't overcomplicate your unit tests: such operations should not be exercised from the front-end, since you want each test limited to one specific behaviour.
3) Since the main objective is to test the behaviour above, limit what you are testing: unit testing of the network request behind XYZ.getUser(token) belongs on the server side, not the client side (front-end).
4) By mocking the response from XYZ.getUser(token), you can simulate multiple scenarios, such as successful responses (200 OK) and the various error types (business-logic errors from the backend, or network errors such as 400 or 500). You do so by hardcoding the JSON response XYZ.getUser(token) should return in each scenario you need to test, as in the sketch below.
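For instance, a minimal sketch with mocha/chai and sinon (the import paths are assumptions, and the stubbed responses are hardcoded as described above):

const sinon = require('sinon');
const { expect } = require('chai');
const XYZ = require('xyz-sdk');                 // assumed import of the external library
const { getUser } = require('../src/get-user'); // assumed path to the function under test

describe('getUser', function () {
  afterEach(function () { sinon.restore(); });

  it('returns a policy context for a valid token', async function () {
    // Hardcode a successful response instead of hitting XYZ's real backend.
    sinon.stub(XYZ, 'getUser').resolves({ id: 1, name: 'Jane' });
    const policy = await getUser('valid-token');
    expect(policy.context).to.deep.equal({ id: 1, name: 'Jane' });
  });

  it('throws for an invalid token', async function () {
    // Simulate the error scenario without any network traffic.
    sinon.stub(XYZ, 'getUser').rejects(new Error('invalid token'));
    try {
      await getUser('bad-token');
      throw new Error('expected getUser to throw');
    } catch (err) {
      expect(err.message).to.equal('invalid token');
    }
  });
});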
This goes back to the definition and usage of unit tests.
Unit tests are supposed to test well-defined functionality. For example, you might write an algorithm whose behaviour is very well defined; because you know exactly how it's supposed to function, you can unit test it.
Unit tests, by definition, are not meant to touch any external systems, such as (but not limited to) files, APIs and databases: anything that requires going outside the logical unit, i.e. the code you want to test.
There are multiple reasons for that, among which is execution speed. Calling an API or a third-party service takes a while. That's not necessarily a problem for a single test, but with 500 tests or more the delays become significant. The main idea is that you are supposed to run your unit tests all the time and get immediate feedback: is anything broken? Have I broken something in other parts of the system? In any mature system you will run these tests as part of your CI/CD process, and you cannot afford to wait long for them to finish; they need to run in seconds, which is why there are rules about what they should or should not touch.
You mock external things to improve execution speed and to eliminate third-party issues, because you really only want to test your own code.
What if you call a third party API and they are down for maintenance? Does that mean you can't test your code because of them?
Now, on the topic of mocking: do not abuse it. If you code in a functional way, you can eliminate the need for a lot of mocking. Instead of passing or hard-coding dependencies, pass data; this makes it very easy to vary the input without mocking much, as in the sketch below. You then test the system as a whole using integration / e2e tests, which gives you the certainty that your dependencies are resolved correctly.
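As a sketch of that idea (all names are illustrative): instead of a function that owns its data source and therefore needs a mock, split the I/O from the logic and test the logic with plain data:

// Hard to unit test without mocking: the function owns its dependency.
async function getActiveUserNames(db) {
  const users = await db.query('SELECT * FROM users');
  return users.filter(u => u.active).map(u => u.name);
}

// Easy to unit test: pass data, keep the I/O at the edges.
function activeUserNames(users) {
  return users.filter(u => u.active).map(u => u.name);
}

// No mocks needed -- just data in, data out.
const { expect } = require('chai');
expect(activeUserNames([
  { name: 'Ann', active: true },
  { name: 'Bob', active: false },
])).to.deep.equal(['Ann']);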
I recently watched a number of talks from the AssertJS conference (which I highly recommend), among them #kentcdodds "Write Tests, Not Too Many, Mostly Integration." I've been working on an Angular project for over a year, have written some unit tests, and just started playing with Cypress, but I still feel this frustration around integration tests, and where to draw the lines. I'd really love to talk to some pro who does this day in and day out, but I don't know any where I work. Since I'm tired of not being able to figure this out, I thought I'd just ask the world here, cause you all are fantastic.
So in Angular (or React or Vue, etc), you have component code, and then you have the HTML template, and usually they interact in some way. The component code has functions in it that can be unit tested, and that part I'm ok with.
Where I haven't gotten things straight in my mind is, do you call it an integration test when you're testing how a component function changes the UI? If you're testing that kind of thing, should that be done just in E2E tests? Because Angular/Jasmine(or Jest) lets you do this kind of thing, referencing the UI:
const el = fixture.debugElement.queryAll(By.css('button'));
expect(el[0].nativeElement.textContent).toEqual('Submit');
But does that mean you should? And if you do, then do you not cover that in your E2E tests?
And regarding integration with things like services, how far do you go with integrating? If you mock the actual HTTP call, and just test that it would get called with the right functions, is that an integration test, or is it still a unit test?
To sum up, I intuitively know what I need to test to have confidence that things are working as they should, I'm just not sure how to discern when something requires all three kinds of tests or not.
I know this is getting long, but here's my example app:
There's a property called hasNoProducts that is set after a product is chosen and data is returned from the server (or not, if there is none). If hasNoProducts is true, the UI (through an *ngIf) shows that "Sorry" message. If false, other selections become available, and depending on the product picked, those options change.
So I know I can write a unit test and mock the HTTP request so that I can test that hasNoProducts is set correctly. But then, I want to test that the message is displayed, or that the additional options are displayed. And if there is data, test that switching the product changes the data in the other lists that would subsequently show on screen. If I do that using Angular/Jasmine, is it an integration test since I'm "integrating" component and template? If not, then what would be an integration test?
I could keep asking questions, but I'll stop there in the hopes that someone has read this far and has some insight. Again, I've read tons of articles, watched tons of videos and done tutorials, but every time I sit down to apply to a real project, I get stuck on things like this, and I want to get past this! Thanks in advance.
What distinguishes unit-tests and integration-tests (and then subsystem-tests and system-tests) is the goal that you want to achieve with the tests.
The goal of unit-testing is to find those bugs in small pieces of code that can be found if these pieces of code are isolated. Note that this does not mean you truly must isolate the code, but that your focus is the isolated code. In unit-testing, mocking is very common, since it allows you to stimulate scenarios that are otherwise hard to test, speeds up build and execution times, etc., but mocking is not mandatory: for example, you would not mock calls to a mathematical sin() function from the standard library, because the sin() function does not keep you from reaching your testing goals. And leaving the sin() function in does not turn these tests into integration tests. Strictly speaking, you could even have unit-tests where some real network access takes place (if you are too lazy to mock the network access), but due to the non-determinism, delays etc. these unit-tests would be slow and unreliable, which means they would simply not be well suited to specifically finding the bugs in the isolated code. That's why everybody says that "if there is some real network access, it is not a unit-test", which is not formally but practically correct.
Since in unit-testing you intentionally only focus on the isolated code, you will not find bugs that are due to misunderstandings about interactions with other components. If you mock some depended-on-component, then you implement these mocks based on your understanding of how the other component behaves. If your understanding is wrong, your mock implementations will reflect your wrong understanding, and your unit-tests will succeed, although in the integrated system things will break. That is not a flaw of unit-testing, but simply the reason why there are other test levels like integration testing. In other words, even if you do unit-testing perfectly, there will unavoidably remain some bugs that unit-testing is not even intending to find.
Now, what are integration tests then? They are defined by the goal of finding bugs in the interactions between (already tested) components. Such bugs can, for example, be due to mutual misconceptions between the developers of the components about how an interface is meant to work. For example, in the case of a library component B that is used from A: does A call functions from the right component B (rather than from C)? Do the calls happen while B is already in a proper state (B might not be initialized yet, or might be in an error state)? Do the calls happen in the proper order? Are the arguments provided in the correct order and with values in the expected form (e.g. zero-based vs. one-based index? null allowed?)? Are the return values provided in the expected form (returned error code vs. exception) and with values in the expected form? And this is just one integration scenario; there are many others, like components exchanging data via files (binary or text? which end-of-line marker: Unix, DOS, ...?).
There are many possible interaction errors. To find them, in integration testing you integrate the real components (the real A and the real B, no mocks, though possibly mocks for other components) and stimulate them such that the different interactions actually take place, ideally in all interesting ways, like trying to force some boundary cases in the interaction (the exchanged file is empty, ...). Again, just the fact that a test operates on software where some components are integrated does not make it an integration test: only if the test is specifically designed to initiate interactions such that bugs in these interactions become apparent is it an integration test.
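As an illustration (a hedged sketch with hypothetical modules, not something from the original question): suppose component A formats a report and internally uses component B to parse dates. An integration test wires the real modules together and probes a boundary case of their interface:

const { expect } = require('chai');
const formatReport = require('../src/report'); // component A (hypothetical path)
// report.js itself requires '../src/dates'    // component B: real, not mocked

describe('report/dates integration', function () {
  it('handles an empty date string at the A->B boundary', function () {
    // The interaction under test: does A pass '' through unchecked, and does B
    // fail the way A expects (thrown error rather than a silent default)?
    expect(function () { formatReport({ createdAt: '' }); }).to.throw(/invalid date/i);
  });
});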
Subsystem tests (the next level) then focus, again, on the remaining bugs, that is, those bugs which neither unit-testing nor integration testing intends to find. Examples are requirements on the component C that were not considered when C was decomposed into A and B, or C being built with some outdated version of A in which some bug was still present. However, when climbing up from unit-testing via integration testing to subsystem testing and above, it is a challenge to stay focused: to have tests only for bugs that could not have been found before, and not, say, to repeat unit-tests at subsystem level.
I've got some UI tests that attempt to verify that a click on an element makes something else appear. There's an existing check for all tests that looks to see if the DOM is ready; however, there's a small amount of time between that event firing and the app.controller() calls all completing, where my test could jump in and wrongly determine that the click handler has not been added.
I can use angular.element('[ng-controller=myController]').scope() to determine if the scope is defined, however there is still a very small window where the test could run before the click handler is bound (a very, very small window).
<div ng-controller="myController">
  <div ng-click="doWork()"></div>
</div>
Can anyone see a way to tell that the click handler has been added?
PS: There's an event that fires within a controller when the controller has loaded: $scope.$on('$viewContentLoaded', function () { }); but that doesn't really help me unless I subscribe to it in the controller and set a flag somewhere that I can check via Selenium.
PPS: A lot of these elements have classes that change when the scope changes, and those can be used to trigger the test, but many do not.
There is a specialized tool for testing AngularJS applications: Protractor. It is basically a wrapper around WebDriverJS, the Selenium JavaScript webdriver.
The main benefit of using Protractor is that it knows when Angular has settled down and is ready. That makes your tests flow in a natural way, without having to use explicit waits:
You no longer need to add waits and sleeps to your test. Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don't have to worry about waiting for your test and webpage to sync.
It also provides several unique AngularJS-specific locators, like by.model, by.binding, etc., and in general it offers a very convenient and well-designed API for end-to-end testing.
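For example, a small Protractor spec might look like this (the URL, model and binding names are made up for illustration):

describe('login page', function () {
  it('greets the user', function () {
    browser.get('http://localhost:8000/#/login');     // hypothetical app URL
    element(by.model('user.name')).sendKeys('Jane');  // AngularJS-specific locator
    element(by.buttonText('Sign in')).click();        // Protractor waits for Angular to settle
    expect(element(by.binding('greeting')).getText()).toEqual('Hello Jane!');
  });
});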
There are two issues to overcome here:
How do we know when Angular is done (with the sub-issue of "what does done mean?")
How do we get that information to Selenium
Angular provides a method called getTestability that can be called with any element (assuming you've included it; it is optional). Usage:
angular.getTestability(angular.element('body')).whenStable(function () { /* Do things */ });
That seems to solve the first problem...
But now, what does "done" mean in this case? Done means that anything that uses $browser.defer will have been executed. What does that mean? No idea, but in practice it at least verifies that there are no HTTP requests in flight when the callback is called.
OK, now Selenium... You can ask it to execute JavaScript on the client and use the code above to set a variable, e.g. .whenStable(function () { window.someVar = true; }), and then poll in the test until that variable is set.
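For example, with the JavaScript Selenium bindings (selenium-webdriver; the window.angularSettled flag is just an illustrative name):

// Ask Angular to flip a flag once it reports itself stable...
driver.executeScript(
  "window.angularSettled = false;" +
  "angular.getTestability(angular.element('body'))" +
  ".whenStable(function () { window.angularSettled = true; });");

// ...then poll that flag before interacting with the page.
driver.wait(function () {
  return driver.executeScript('return window.angularSettled === true;');
}, 5000);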
Will this catch all cases of "Done"? Probably not, but it made my tests pass more consistently. As long as it works I'm not going to think any harder on the issue.
That said, I'm not marking this as the answer. It feels like a dirty solution.
If you look at the beginning of the Node.js documentation for domains it states:
By the very nature of how throw works in JavaScript, there is almost never any way to safely "pick up where you left off", without leaking references, or creating some other sort of undefined brittle state.
Again, in the code example it gives in that first section, it says:
Though we've prevented abrupt process restarting, we are leaking resources like crazy
I would like to understand why this is the case. What resources are leaking? They recommend that you only use domains to catch errors and safely shut down a process. Is this a problem with all exceptions, not just when working with domains? Is it bad practice to throw and catch exceptions in JavaScript? I know it's a common pattern in Python.
EDIT
I can understand why there could be resource leaks in a non-garbage-collected language: if you throw an exception, any cleanup code for your objects won't run.
The only reason I can imagine in JavaScript is that throwing an exception keeps references to variables in the scope where it was thrown (and maybe to things in the call stack), so those references stay around as long as the exception object is kept alive and never cleaned up. Unless the leaking resources referred to are resources internal to the engine.
UPDATE
I've written a blog post explaining the answer to this a bit better now. Check it out
Unexpected exceptions are the ones you need to worry about. If you don't know enough about the state of the app to add handling for a particular exception and manage any necessary state cleanup, then by definition, the state of your app is undefined, and unknowable, and it's quite possible that there are things hanging around that shouldn't be. It's not just memory leaks you have to worry about. Unknown application state can cause unpredictable and unwanted application behavior (like delivering output that's just wrong -- a partially rendered template, or an incomplete calculation result, or worse, a condition where every subsequent output is wrong). That's why it's important to exit the process when an unhandled exception occurs. It gives your app the chance to repair itself.
Exceptions happen, and that's fine. Embrace it. Shut down the process and use something like Forever to detect it and set things back on track. Clusters and domains are great, too. The text you were reading is not a caution against throwing exceptions, or continuing the process when you've handled an exception that you were expecting -- it's a caution against keeping the process running when unexpected exceptions occur.
I think when they said "we are leaking resources", they really meant "we might be leaking resources". If http.createServer handles exceptions appropriately, threads and sockets shouldn't be leaked. However, they certainly could be if it doesn't handle things properly. In the general case, you never really know if something handles errors properly all the time.
I think they are wrong / very misleading when they said "By the very nature of how throw works in JavaScript, there is almost never any way to safely pick up where you left off". There should not be anything about how throw works in JavaScript (vs. other languages) that makes it unsafe. There is also nothing about how throw/catch works in general that makes it unsafe, unless of course you use them wrong.
What they should have said is that exceptional cases (regardless of whether or not exceptions are used) need to be handled appropriately. There are a few different categories to recognize:
A. State
Exceptions that occur while external state (database writing, file output, etc) is in a transient state
Exceptions that occur while shared memory is in a transient state
Exceptions where only local variables might be in a transient state
B. Reversibility
Reversible / revertible state (eg database rollbacks)
Irreversible state (Lost data, unknown how to reverse, or prohibitive to reverse)
C. Data criticality
Data can be scrapped
Data must be used (even if corrupted)
Regardless of the type of state you're messing with, if you can reverse it, you should do that, and you're set. The problem is irreversible state. If you can destroy the corrupted data (or quarantine it for separate inspection), that is the best move for irreversible state. This is done automatically for local variables when an exception is thrown, which is why exceptions excel at handling errors in purely functional code (i.e. functions with no possible side effects). Likewise, any shared or external state should be deleted if that's acceptable. In the case of shared state, either throw exceptions until that shared state becomes local state and is cleaned up by the unwinding of the stack (either statically or via the GC), or restart the program (I've read people suggesting the use of something like nodejitsu forever). For external state this is likely more complicated, though a transaction can often make it reversible, as in the sketch below.
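To make the reversible case concrete, here is a hedged sketch of the rollback-on-throw pattern (the node-postgres style client and the schema are assumptions, not something from the question):

// Wrap a multi-step external write in a transaction, so that an exception
// mid-operation reverts the transient state instead of leaving it corrupted.
async function transferFunds(client, fromId, toId, amount) {
  await client.query('BEGIN');
  try {
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    await client.query('COMMIT');    // the external state becomes permanent only here
  } catch (err) {
    await client.query('ROLLBACK');  // reversible: the partial update is undone
    throw err;                       // rethrow: the state is clean, the caller decides what's next
  }
}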
The last case is when the data is critical. Well, then you're going to have to live with the bugs you've created. Everyone has to deal with bugs, but it's worst when your bugs involve corrupted data. This will usually require manual intervention (reconstructing the lost/damaged data, selective pruning, etc.); exception handling won't get you the whole way in this last case.
I wrote a similar answer related to how to handle mid-operation failure in various cases in the context of multiple updates to some data storage: https://stackoverflow.com/a/28355495/122422
Taking the sample from the node.js documentation:
var d = require('domain').create();

d.on('error', function (er) {
  // The error won't crash the process, but what it does is worse!
  // Though we've prevented abrupt process restarting, we are leaking
  // resources like crazy if this ever happens.
  // This is no better than process.on('uncaughtException')!
  console.log('error, but oh well', er.message);
});

d.run(function () {
  require('http').createServer(function (req, res) {
    handleRequest(req, res);
  }).listen(PORT);
});
In this case you are leaking connections when an exception occurs in handleRequest before you close the socket.
"Leaked" in the sense that you finished processing the request without cleaning up afterwards. Eventually the connection will time out and close the socket, but if your server is under high load it may run out of sockets before that happens.
Depending on what you do in handleRequest you may also be leaking file handles, database connections, event listeners, etc.
Ideally you should handle your exceptions so you can clean up after them.
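For instance, a sketch along the lines of the per-request pattern the Node.js documentation recommends (handleRequest and PORT are placeholders, as above):

var server = require('http').createServer(function (req, res) {
  // One domain per request, so the failing req/res pair is in scope for cleanup.
  var d = require('domain').create();
  d.add(req);
  d.add(res);
  d.on('error', function (er) {
    console.error('request failed', er.message);
    try {
      res.writeHead(500);
      res.end('Internal error');  // answer the client instead of leaking the socket
    } catch (er2) {
      // The response was already broken; nothing more to clean up here.
    }
    // Stop accepting new connections and let a supervisor (e.g. Forever) restart us.
    server.close(function () { process.exit(1); });
  });
  d.run(function () {
    handleRequest(req, res);
  });
}).listen(PORT);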