I grepped the Karma repository and it seems no logic uses that field beyond it simply being set in the config.
Does anyone know what that field is for?
For some reason I have to set it to 20000, or else my Karma tests disconnect.
It's referenced as noActivityTimeout internally, in this file:
https://github.com/karma-runner/karma/blob/de55bc63205c656eb5f5534894aa4ae92228efb8/lib/browser.js
Basically, the setting makes the test run stop if no activity is detected within the amount of time specified in the config. This helps tests stop when your code is in an infinite loop or otherwise not responding (for example, an async test condition that never resolves).
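Assuming the field in question is Karma's browserNoActivityTimeout option, a minimal karma.conf.js sketch of setting it looks like this (20000 matches the value from the question):
// karma.conf.js
module.exports = function (config) {
  config.set({
    // Disconnect the browser if Karma sees no activity for this many milliseconds.
    browserNoActivityTimeout: 20000
  });
};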
I've got a little sandbox project I've been playing around with for the last few weeks to learn the ins and outs of implementing a TestCafe runner.
I've managed to solve all my problems except one and at this point I've tried everything I can think of.
I've reviewed the following similar questions:
How to close testcafe runner
How to get the testCafe exit code
But still my problem remains.
I've toyed around with my argv.json file.
I've toyed around with my CICDtestBranches.json file.
I've toyed around with my package.json file.
I've tested the same branch that has the problem on multiple machines.
I've tested with multiple browsers (Firefox & Chrome) - both produce the same problem.
I've tried to re-arrange the code; see below.
I've tried adding multiple tests in a fixture and adding a page navigation to each one.
I've tried removing code that processes irrelevant options like video logs & concurrency (parallel execution).
I also talked with some coworkers around the office who have done similar projects and asked them what they did to fix the problem. I tried their recommendations, and even re-arranging things according to what they tried and still no joy.
I've read through the TestCafe documentation on how to implement a test runner several times, and I still haven't been able to find any specific information about how to solve a problem where the browser does not close at the end of the test/fixture/script run.
I did find a few bugs that describe similar behavior, but all of those bugs have been fixed and the remaining bugs are specific to either Firefox or Safari. In my case the problem is with both Chrome & Firefox. I am running TestCafe 1.4.2. I don't want to file a bug with TestCafe unless it really is a confirmed bug and there is nothing else that can be done to solve it.
So I know others have had this same problem, since a coworker said he faced it with his implementation as well.
Since I am out of options at this point, I'm posting the question here in the hope that someone will have a solution. Thank you for taking the time to look over my problem.
When executing the code below, after return returnData; runs, the .then statement is never executed, so the TestCafe command and browser window are never terminated.
FYI, the following code is CommonJS running on plain Node.js (not ES6 modules), since this is the code that starts TestCafe (app.js), not the test script code.
...**Boiler Plate testcafe.createRunner() Code**...
    console.log('Starting test');
    var returnData = tcRunner.run(runOptions);
    console.log('Done running tests');
    return returnData;
})
.then(failed => {
    console.log(`Test finished with ${failed} failures`);
    exitCode = failed;
    if (argv.upload) return upload(jsonReporterName);
    else return 0;
    testcafe.close();
    process.exit(exitCode);
})
.then(() => {
    console.log('Killing TestCafe');
    testcafe.close();
    process.exit(exitCode);
});
I've tried swapping the two final .then statements around to see if having one before the other would cause it to close. I copied the testcafe.close() and process.exit() calls and put them after the if-else statement in the then-failed block, although I know they probably should not get called because of the if-else return statements just before them.
I've tried moving those close and exit statements before the if-else returns just to see if that might solve it.
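For what it's worth, here is one possible restructuring of the tail of that chain (a sketch only, not a confirmed fix): the close/exit calls live in their own final .then, so they run after the optional upload instead of being skipped by the earlier returns.
.then(failed => {
    console.log(`Test finished with ${failed} failures`);
    exitCode = failed;
    // Return the upload promise (or 0) so the next .then waits for it.
    return argv.upload ? upload(jsonReporterName) : 0;
})
.then(() => {
    console.log('Killing TestCafe');
    // testcafe.close() returns a promise; wait for it before exiting.
    return testcafe.close();
})
.then(() => process.exit(exitCode));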
I know there are a lot of other factors that could play into this scenario; as I said, I played around with the runOptions:
const runOptions = {
    // Testcafe run options, see: https://devexpress.github.io/testcafe/documentation/using-testcafe/programming-interface/runner.html#run
    skipJSErrors: true,
    quarantineMode: true,
    selectorTimeout: 50000,
    assertionTimeout: 7000,
    speed: 0.01
};
The best way to reproduce this problem and look at the project and all of the code is to clone the GitHub repo:
> git clone "https://github.com/SethEden/CAFfeinated.git"
Then check out the branch I have been working on for this problem: master
You will need to create an environment variable on your system to tell the framework what sub-path it should work with for the test site configuration system.
CAFFEINATED_TEST_SITE_NAME value: SethEden
You'll need to do a few other commands:
> npm install
> npm link
Then execute the command to run all the tests (just 1 for now)
> CAFfeinated
The output should look something like this:
$ CAFfeinated
Starting test
Done running tests
Running tests in:
- Chrome 76.0.3809 / Windows 10.0.0
LodPage
Got into the setup Test
Got to the end of the test1, see if it gets here and then the test is still running?
√ LodPage
At this point the browser will still be spinning, and the command line is still busy. You can see from the console output above that the "Done running tests" console log has been output, and the test/fixture should be done, since the "Got to the end of the test1,..." console log (which runs as part of test.after(...)) has also been executed. So the next thing to execute should be the .then(...) call in app.js... but it's not. What gives? Any ideas?
I'm looking for what specifically will solve this problem, not just so that I can solve it, but so others don't run into the same pitfall in the future. There must be some magic sauce that I am missing that is probably very obvious to others, but not so obvious to me or others who are relatively new to JavaScript & NodeJS & ES6 & TestCafe.
The problem occurs because you specified the wrong value for the runner.src() method.
The cause of the issue is in your custom reporter. I removed your reporter and now it works correctly. Please try this approach and recheck your reporter.
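One quick way to test that suggestion is to temporarily swap the custom reporter for a built-in one, roughly like this (the src path and browser name are placeholders, not taken from the repo):
const runner = testcafe.createRunner();

return runner
    .src(['tests/LodPage.js'])   // placeholder path to the fixture file
    .browsers(['chrome'])
    .reporter('spec')            // built-in reporter instead of the custom JSON reporter
    .run(runOptions)
    .then(failed => {
        console.log(`Test finished with ${failed} failures`);
        return testcafe.close();
    });
If the run terminates cleanly with the built-in reporter, the hang is almost certainly in the custom reporter's implementation.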
I'm using Kubernetes to run a Node script in a permanent loop. When the pod is created, it runs "npm start", which starts the script with default parameters in loop mode.
It works perfectly for that.
I also sometimes need to run some node commands in the pod.
(eg: node dist/index run --parameter=xyz)
For that I use kubectl:
kubectl exec -it PODNAME -n NAMESPACE -- /bin/ash
It allows me to run the script with other parameters as I wish, BUT I can't find a way to put the main process ('npm start') on hold while I run my other commands.
I want the loop to be paused while I execute those node commands (they can't run in parallel). I tried to kill the main processes shown by typing "ps -aef", but it doesn't work. The process either restarts automatically (restartPolicy: Always) or the pod goes into an error state and I can't type my node commands.
Any idea how to achieve this?
What you could do is send a signal to the main process; SIGUSR2 is meant for this kind of purpose. You can catch the signal with an event handler and set a variable, then let your normal code check the variable at intervals, or at set points in the code, to decide whether it should do work.
On the CLI side, issuing the pause is simply a matter of using the kill command to send the signal to the process.
See also:
https://nodejs.org/api/process.html#process_signal_events
http://man7.org/linux/man-pages/man7/signal.7.html
how to use kill SIGUSR2 in bash?
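A minimal sketch of that idea in the Node script (doWork here is a stand-in for one iteration of your normal loop): toggle a paused flag on SIGUSR2 and check it before each iteration.
let paused = false;

// Toggle the flag whenever SIGUSR2 is received.
process.on('SIGUSR2', () => {
  paused = !paused;
  console.log(paused ? 'Main loop paused' : 'Main loop resumed');
});

async function mainLoop() {
  for (;;) {
    if (paused) {
      // Idle while paused, checking again every second.
      await new Promise(resolve => setTimeout(resolve, 1000));
      continue;
    }
    await doWork(); // stand-in for one iteration of the default workload
  }
}

mainLoop();
From the shell opened with kubectl exec, something like kill -USR2 <pid> (the pid of the node process, not the npm wrapper) toggles the pause.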
But maybe you shouldn't try to pause a pod at all: simply let it shut down gracefully and pick up the remaining workload later, in a new pod? That would probably be a cleaner way to go about it, but it depends on what you're trying to achieve, of course.
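A rough sketch of that alternative, if it suits your workload: catch SIGTERM, finish the current iteration, and exit cleanly so Kubernetes can start a fresh pod afterwards.
let shuttingDown = false;

// Kubernetes sends SIGTERM before stopping the pod; finish the current
// iteration instead of being killed mid-work.
process.on('SIGTERM', () => {
  shuttingDown = true;
});

async function mainLoop() {
  while (!shuttingDown) {
    await doWork(); // stand-in for one iteration of the workload
  }
  process.exit(0);
}

mainLoop();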
I am trying to figure out how to restrict my tests, so that the coverage reporter only considers a function covered when a test was written specifically for that function.
The following example from the PHPUnit docs shows pretty well what I am trying to achieve:
The @covers annotation can be used in the test code to specify which method(s) a test method wants to test:
/**
 * @covers BankAccount::getBalance
 */
public function testBalanceIsInitiallyZero()
{
    $this->assertEquals(0, $this->ba->getBalance());
}
If the test above were executed, only the getBalance function would be marked as covered, and no others.
Now some actual code sample from my JavaScript tests. This test shows the unwanted behaviour that I try to get rid of:
it('Test get date range', function()
{
    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
});
This test will mark the function getDateRange as covered, but also any other function that is called from inside getDateRange. Because of this quirk the actual code coverage for my project is probably a lot lower than the reported code coverage.
How can I stop this behaviour? Is there a way to make Karma/Jasmine/Istanbul behave the way I want it, or do I need to switch to another framework for JavaScript testing?
I don't see any particular reason for what you're asking. I'd say if your test causes a nested function to be called, then the function is covered too. You are indeed indirectly testing that piece of code, so why shouldn't that be included in the code coverage metrics? If the inner function contains a bug, your test could catch it even if it's not testing that directly.
You can annotate your code with special comments to tell Istanbul to ignore certain paths:
https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
but I think that's more for the opposite case: keeping coverage from dropping when you know you don't want a particular execution path to be covered, maybe because it would be too hard to write a test case for it.
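For completeness, those hints look roughly like this (a sketch using the comment syntax documented on that page):
/* istanbul ignore next */
function debugDump(state) {
  // Excluded from coverage entirely: diagnostic code that is hard to exercise in tests.
  console.log(JSON.stringify(state, null, 2));
}

function getDateRange(start, end) {
  /* istanbul ignore if */
  if (!start || !end) {
    // This branch will not count against branch coverage.
    throw new Error('Both dates are required');
  }
  // ... normal range calculation ...
}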
Also, if you care about your "low level" functions being tested in isolation, then make sure your code is structured in a modular way so that you can test those by themselves first. You can also set up different test run configurations, so you can have a suite that tests only the basic logic and reports the coverage for that.
As suggested in the comments, mocking and dependency injection can help make your tests more focused, but you basically always want to have some high-level tests where you check the integration of these parts together. If you mock everything, then you never test the actual pieces working together.
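As a small illustration of that trade-off (assuming Jasmine 2.x spies and a hypothetical holidayService collaborator used inside getDateRange):
describe('dateService.getDateRange', function () {
  beforeEach(function () {
    // Stub the collaborator so the test exercises only getDateRange's own logic.
    spyOn(holidayService, 'isWorkingDay').and.returnValue(true);
  });

  it('counts the days in the range', function () {
    expect(dateService.getDateRange('2001-01-01', '2001-01-07')).toEqual(7);
    expect(holidayService.isWorkingDay).toHaveBeenCalled();
  });
});
With the collaborator stubbed out, its real implementation never runs, so it is not reported as covered by this test; you would still want a separate, unmocked test that exercises the two together.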
Unfortunately, I don't have good steps to reproduce this. It only happens on my computer.
Some of my tests seem to run just fine, but none of the expect calls actually execute. With other tests, the browser comes up and goes down before even loading the page. It might be the same issue.
I have a test with:
expect(page.courseTitle.getText()).toBe 'Symphony'
expect(page.courseTitle.getText()).toBe 'garbage'
I expect this to fail, but it does not. If I add the line:
expect(true).toBe false
The test fails with both errors. If I add the line:
expect(true).toBe true
the test does not fail at all.
If you need to compare the string values, use the toEqual() matcher:
expect(page.courseTitle.getText()).toEqual('Symphony');
expect(true).toBe false
expect(true).toBe true
Shouldn't you call toBe here:
expect(true).toBe(false);
expect(true).toBe(true);
You can use this:
expect(page.courseTitle.getText()).toBe('Symphony');
expect(page.courseTitle.getText()).toBe('garbage')
expect(true).toBeTruthy();
expect(true).toBeFalsy();
I think you misunderstood the point. The line with garbage should fail, but does not, unless I add an expect that can be executed synchronously.
BTW This does not happen on our build machine, only on my computer.
I did find a test that works, and worked backwards from that.
Adding this code caused the expects to execute:
afterEach ->
  browser.manage().logs().get('browser').then (browserLogs) ->
    # Do some work here
It is almost as if the tests do not wait for the promises to be fulfilled before exiting.
I'm using Protractor (v 1.3.1) to run E2E tests for my Angular 1.2.26 application.
But sometimes the tests are OK, and sometimes they are not. It seems that sometimes the check is done before the display is updated (something like a "synchronisation" problem).
I tried many options:
adding browser.driver.sleep instructions,
disabling effects with browser.executeScript('$.fx.off = true'),
adding browser.waitForAngular() instructions,
all without success.
What are the best practices for writing reliable E2E tests with Protractor?
JM.
Whenever I have similar issues, I use browser.wait() with "Expected Conditions" (introduced in Protractor 1.7). There is a set of built-in expected conditions that is usually enough, but you can also easily define your own custom expected conditions.
For example, waiting for an element to become visible:
var EC = protractor.ExpectedConditions;
var e = element(by.id('xyz'));
browser.wait(EC.visibilityOf(e), 10000);
expect(e.isDisplayed()).toBeTruthy();
A few notes:
you can specify a custom error message for the case where the condition is not met and a timeout error is thrown; see Custom message on wait timeout error:
browser.wait(EC.visibilityOf(e), 10000, "Element 'xyz' has not become visible");
you can set EC to be a globally available variable pointing to protractor.ExpectedConditions. Add this line to the onPrepare() in your config:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
as an example of a custom expected condition, see this answer
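Roughly, a custom expected condition is just a function that browser.wait() can poll until it resolves to true; something like this (textToBe and the status element are illustrative, not from the linked answer):
// Resolves to true once the element's text equals the expected value.
var textToBe = function (elem, expectedText) {
  return function () {
    return elem.getText().then(function (actualText) {
      return actualText === expectedText;
    });
  };
};

var status = element(by.id('status'));
browser.wait(textToBe(status, 'Ready'), 10000, "Status never became 'Ready'");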
Another point which is very important in testing with Protractor is understanding the ControlFlow. You may find an explanation and code examples here: When should we use .then with Protractor Promise?
Jean-marc
There are two things to consider.
The first is that you should properly sequence all Protractor actions (as also hinted by @jmcollin92). For this, I typically use .then on every step.
The second important thing is to make sure that a new test it(...) only starts after the previous it(...) has completed.
If you use the latest version of Protractor, you can use Jasmine 2.x and its support for signalling the completion of a test:
it('should do something', function(done) {
    clicksomething().then(function() {
        expect(...);
        done();
    });
});
Here the done argument is invoked to signal that the test is finished. Without this, Protractor will schedule the clicksomething command and then immediately move on to the next test, returning to the present test only once clicksomething has completed.
Since typically both tests inspect and possibly modify the same browser/page, your tests become unpredictable if you let them happen concurrently (one test clicks to the next page, while another is still inspecting the previous page).
If you use an earlier version of Protractor (1.3 as you indicate), the Jasmine 1.3 runs and waitsFor functions can be used to simulate this behavior.
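A rough equivalent with Jasmine 1.3 (a sketch; clicksomething again stands in for whatever async step the test performs):
it('should do something', function () {
  var finished = false;

  runs(function () {
    clicksomething().then(function () {
      finished = true;
    });
  });

  // Blocks the spec until the flag flips (or the timeout is hit).
  waitsFor(function () {
    return finished;
  }, 'clicksomething to complete', 10000);

  runs(function () {
    expect(finished).toBe(true);
  });
});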
Note that the whole point of using Protractor is that Protractor is supposed to know when Angular is finished. So in principle, there should be no need to ever call waitForAngular (my own test suite with dozens of scenarios does not include a single wait/waitForAngular). The better your application-under-test adheres to Angular's design principles, the fewer waitForAngular calls you should need.
I would add that disabling ngAnimate may not be enough. You may also have to disable all transition animations by injecting CSS (see: What is the cleanest way to disable CSS transition effects temporarily?).
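One possible way to do that from a Protractor suite (a sketch; you might call it after each page load or from onPrepare()) is to inject a style tag that turns transitions and animations off:
var disableCssAnimations = function () {
  return browser.executeScript(function () {
    var style = document.createElement('style');
    style.innerHTML = '* { transition: none !important; animation: none !important; }';
    document.head.appendChild(style);
  });
};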