I use mocha/chai to test the GraphQL endpoints of a Node.js/Express server, which works fine. However, in the first test, I check whether the .env variables have been set correctly.
If not, all further tests will be affected anyway. So I would like to terminate all further tests when any test in this block fails (preferably finishing the block first to capture all 'missing' variables).
So can I somehow terminate the entire test run when a certain describe block fails?
Note: I found the bail flag/config option, but this aborts the whole run on any error anywhere, which is not what I'm looking for.
You can try adding an after hook with a condition there:
after('Fail for some describe', function () {
    // this.test is the hook itself; its parent is the enclosing describe suite
    if (this.test.parent.title.startsWith('Describe title that should fail') && this.test.parent.isFailed())
        this.test.parent._bail = true; // _bail is Mocha's internal bail flag for the suite
});
This could be tricky since the this context may contain several layers of parents, so if you are not sure, you can always log the keys of the current this.test object. Also bear in mind that the following describe blocks might have a different this context with an overridden _bail value. This solution works well when it comes to tracking specific critical it failures.
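Applied to the scenario in the question, a minimal sketch might look like this (the variable names are placeholders; test.state is used here instead of isFailed(), and _bail remains an internal Mocha flag, so treat the whole thing as an assumption rather than a supported API):

var expect = require('chai').expect;

describe('Environment configuration', function () {
    // One test per variable, so every missing variable is reported
    // before the run is aborted. The names below are placeholders.
    ['DATABASE_URL', 'JWT_SECRET'].forEach(function (name) {
        it('should have ' + name + ' set', function () {
            expect(process.env[name], name + ' is missing').to.exist;
        });
    });

    after('Abort the remaining suites if the environment is broken', function () {
        // Mocha sets test.state to 'passed' or 'failed' once a test has run.
        var anyFailed = this.test.parent.tests.some(function (t) {
            return t.state === 'failed';
        });
        if (anyFailed) {
            this.test.parent._bail = true; // internal flag, as in the hook above
        }
    });
});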
I am writing unit test cases using Jasmine for an Angular 13 project. There is a test case which passes sometimes and fails sometimes. I presume this happens because of the order of test execution. Any idea how to deal with it?
The error reported is "An error was thrown in afterAll".
By default, the tests run in a random order each time. There is a seed value so that you can recreate the order; you can read how to approach that in this answer.
Once you have yours executing in an order where it fails each time, you will easily know whether any of the following has actually fixed your issue.
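For reference, a karma.conf.js fragment along these lines (assuming karma-jasmine; the seed value is a placeholder) pins the order so a failing run can be reproduced:

// karma.conf.js (fragment)
module.exports = function (config) {
    config.set({
        client: {
            jasmine: {
                random: true,
                seed: '4321' // reuse the seed printed by the failing run
            }
        }
    });
};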
You can also check anywhere you are subscribing to something: every time you subscribe in a test, you need to make sure the subscription is cleaned up at the end of that test. To do this you can add .pipe(take(1)) before subscribing, or capture the subscription object and call unsubscribe on it.
const sub = someService.callObservable().subscribe();
// verify what you need to
sub.unsubscribe();
A third concept to look at: any variables you have defined above the beforeEach should be reassigned a new value inside the beforeEach. Otherwise the same objects are reused between tests, which can lead to issues.
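A minimal sketch of that last point (the service here is a stand-in, not code from the question):

// A tiny stand-in service (hypothetical; replace with your own).
function OrderService() {
    this.items = [];
}
OrderService.prototype.add = function (item) {
    this.items.push(item);
};

describe('OrderService', function () {
    var service; // declared above beforeEach so every spec can see it

    beforeEach(function () {
        // Re-created before every spec, so no state leaks between
        // randomly ordered tests.
        service = new OrderService();
    });

    it('starts empty', function () {
        expect(service.items.length).toBe(0);
    });

    it('adds an item', function () {
        service.add('book');
        expect(service.items.length).toBe(1);
    });
});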
I am new to Jasmine testing and looking for a "best practice" for unit testing a stateful AngularJS service. Most tutorials I could find focus on test cases that run atomic calls to stateless services.
This matches the Jasmine syntax nicely:
it("should do something", function(){ expect(target.doSomething()).toBe... })
However, I found no obvious way to extend this pattern to a test case that involves multiple calls to one or several of the service's functions.
Let's imagine a service like this one:
angular.module("someModule").factory("someService", function(){
    return {
        enqueue: function(item){
            // Add item to some queue
        }
    };
});
For such a service, it makes sense to test that sequential calls to enqueue() process the items in the correct order. This involves writing a test case that calls enqueue() multiple times and checks the end result (which obviously cannot be achieved with a service as simple as the one above, but this is not the point...)
What doesn't work:
describe("Some service", function(){
// Initialization omitted for simplicity
it("should accept the first call", function() {
expect(someService.enqueue(one)).toBe... // whatever is correct
});
it("should accept the second call", function() {
expect(someService.enqueue(two)).toBe... // whatever is correct
});
it("should process items in the correct order", function() {
// whatever can be used to test this
});
});
The code above (which actually defines not one but three test cases) fails randomly as the three test cases are executed... just as randomly.
A poster in this thread suggested that splitting the code into several describe blocks would cause them to be executed in the given order, but again this seems to differ from one Jasmine version to another (according to other posters in the same thread). Moreover, executing suites and tests in a random order seems to be the intended behaviour; even if it were possible, through configuration, to override this behaviour, that would probably NOT be the right way to go.
Thus it seems that the only correct way to test a multi-call scenario is to make it ONE test case, like this:
describe(("Some service", function(){
// Initialization omitted for simplicity
it("should work in my complex scenario", function(){
expect(someService.enqueue(one)).toBe... // whatever is correct
expect(someService.enqueue(two)).toBe... // whatever is correct
expect(/* whatever is necessary to ensure the order is correct */);
});
});
While technically this seems the logical way to go (after all, a complex scenario is one test case, not three), the Jasmine "description + code" pattern is disturbed by this implementation, as:
There is no way to associate a message with each "substep" that can fail within the test case;
The description for the single "it" is inevitably bulky, like in the example above, if it is to really say something useful about a complex scenario.
This makes me wonder whether this is the only correct solution (or is it?) to this kind of testing need. Again, I am especially interested in "doing it the right way" rather than using some kind of hack that would make it work... where it should not.
Sorry, no code for this... I am not sure it needs any; I think you just need to adjust the expectations of your tests.
As a general rule of testing, you don't really care how external dependencies handle your service; you can't control that. You want to test what you expect the results of your service to be.
For your example, you'll just want to set up the dependencies of your service, call the function, and test the expected results of calling the enqueue function. If it returns a promise, check success and error. If it calls an API, check that, and so on.
If you want to see how external dependencies use your service, you test that in those dependencies' tests.
For example, if you have a controller that invokes enqueue, then in that controller's test you'll inject your provider (the service) and handle the expectations there.
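Putting this together with the single-it approach from the question, one way to keep per-step messages inside a single scenario test is Jasmine's withContext (available since Jasmine 3.3). A sketch with a stand-in queue, since the real service is not shown:

// Stand-in for the real service; replace with your injected someService.
function createQueue() {
    var items = [];
    return {
        enqueue: function (item) { items.push(item); return items.length; },
        dequeue: function () { return items.shift(); }
    };
}

describe("Some service", function () {
    it("should process items in the order they were enqueued", function () {
        var queue = createQueue();

        // withContext attaches a message to each substep, so a failure
        // still says which call broke the scenario.
        expect(queue.enqueue("one")).withContext("first enqueue").toBe(1);
        expect(queue.enqueue("two")).withContext("second enqueue").toBe(2);

        expect(queue.dequeue()).withContext("first dequeue").toBe("one");
        expect(queue.dequeue()).withContext("second dequeue").toBe("two");
    });
});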
I have a suite of MochaJS tests, where it is possible that the body of a before block would throw an exception and thus tests in the corresponding describe would not be run. I would want to consider such tests as failed.
To add some more context, there is a global afterEach which looks up the results of each finished test from this.currentTest and sends live updates to a web API.
Can you guys think of any neat way to catch this event (a failed before), so that I could there and then mark all of its corresponding tests as failed?
Currently the only thing I can possibly think of is simply altering all existing before blocks to put their logical body into a try...catch, which would be quite painful and very repetitive.
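A rough sketch of the wrapping described above, pulled into a hypothetical helper (safeBefore and the setupErrors array are invented names, not Mocha APIs) so it does not have to be repeated by hand in every suite:

// Hypothetical helper: wraps the body of a before() in try...catch so a
// setup failure is recorded somewhere a global afterEach can inspect.
function safeBefore(setupErrors, fn) {
    before(async function () {
        try {
            await fn.call(this);
        } catch (err) {
            setupErrors.push(err); // remember the failure for reporting
            throw err;             // still let Mocha fail the hook as usual
        }
    });
}

describe('users endpoint', function () {
    const setupErrors = [];

    safeBefore(setupErrors, async function () {
        // setup that might throw, e.g. connecting to a test database
    });

    it('lists users', function () {
        // ...
    });
});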
Is there an option to mark a test case with a known issue/limitation as passed?
Actually, I want the test case to run despite the bug but be presented as "passed" in the generated report, until I fix it or decide to leave it with the known issue for good.
What we do in such cases is marking these tests as pending referencing the Jira issue number in the test description:
pending("should do something (ISSUE-442)", function () {
// ...
});
Tests like these would not be failures (and they would not actually be executed) and would not change the exit code, but would be separately reported on the console (we are using jasmine-spec-reporter).
When an issue is resolved, we check whether we have a pending test with that issue number, and, if yes, we make the test executable again by renaming pending back to it. If the test passes, this usually serves, at least partially and assuming the test actually checks the functionality, as proof that the fix was made and the issue can be resolved.
This is probably not ideal since it involves a "human touch" in keeping track of pending specs (we tried to solve it statically, but failed), but it has proved to work for us.
I am currently considering issues of running user-supplied code in node. I have two issues:
The user script must not read or write global state. For that, I assume I can simply spawn of a new process. Are there any other considerations? Do I have to hide the parent process from the child somehow, or is there no way a child can read, write or otherwise toy with the parent process?
The user script must not do anything funky with the system. So, I am thinking of disallowing any system calls. How do I achieve this? (Note that if I can disallow the process module, point 1 should be fixed as well, no?)
You are looking for the runInNewContext function from the vm module (vm documentation).
When you use this function it creates a VERY limited context. You'll need to pass anything you want available into the sandbox object; its entries become the globals of the new context. For example: you will need to include console in the sandbox object if you want your untrusted code to write to the console.
Another thing to consider: Creating a new context is a VERY expensive operation - takes extra time and memory to do. Seriously consider if you absolutely need this. Also seriously consider how often this is going to happen.
Example:
var vm = require('vm');

var sandbox = {
    console: console,
    msg: "this is a test"
};

// 'myfile.vm' is the filename shown in stack traces produced by the script
vm.runInNewContext('console.log(msg);', sandbox, 'myfile.vm');
// => this is a test
More to consider: You will want to create a new process to run this in. Even though it's in a new context, it's still in the same process it's being called from, so a malicious user could simply write a never-ending loop so that it never exits. You'll need logic to detect when something like this happens so that you can kill the process and create a new one.
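A rough sketch of that process-level guard, assuming a separate entry file (sandboxed-runner.js is a placeholder) that performs the runInNewContext call:

// parent.js - supervises the untrusted run
var fork = require('child_process').fork;

var child = fork('./sandboxed-runner.js'); // placeholder file doing runInNewContext

// Kill the child if it has not finished within 5 seconds,
// e.g. because the untrusted code entered an endless loop.
var timer = setTimeout(function () {
    child.kill('SIGKILL');
    console.error('untrusted code timed out');
}, 5000);

child.on('exit', function (code) {
    clearTimeout(timer);
    console.log('untrusted code exited with code', code);
});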
Last thought: A new context does not have setTimeout or setInterval. You may or may not want to add these. However, if the untrusted code creates a setInterval and never stops it, it will continue forever. You'll need to figure out a way to end the script; it's probably possible, I just haven't looked into it.
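If you do decide to expose the timers, a sketch along these lines works in current Node versions; note that the timeout option only aborts long-running synchronous code, so a runaway setInterval still needs the process-level kill described above:

var vm = require('vm');

var sandbox = {
    console: console,
    // Nothing is available in the new context unless you pass it in.
    setTimeout: setTimeout,
    setInterval: setInterval,
    clearInterval: clearInterval
};

// timeout (in milliseconds) aborts synchronous code such as while(true) {}.
vm.runInNewContext(
    'setTimeout(function () { console.log("tick"); }, 100);',
    sandbox,
    { timeout: 1000 }
);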