When I run protractor conf.js and the config loads multiple spec files (like below), some of them are (randomly?) skipped. Everything works fine when I run each spec file separately.
specs: [
'tests/test-1-*.js',
'tests/test-2-*.js',
'tests/test-3-*.js'
],
What can I do to run all tests without any being skipped?
I don't know why this is happening, but if you need all your tests to run with the same stability as when you run them separately, you can create a .bat file (if you're on Windows) like this:
CALL protractor conf.js --specs='tests/test-1-*.js'
CALL protractor conf.js --specs='tests/test-2-*.js'
CALL protractor conf.js --specs='tests/test-3-*.js'
This will solve the problem until you find the root cause.
I have unit tests set up with Karma and Mocha. Karma is important here because some of the functionality I'm testing needs a web browser (even if a fake headless one). But much of the code can run either in a browser or Node.js. For debugging tests it would be much easier to skip launching Karma and use Mocha directly most of the time.
I can do that easily enough if running the whole test suite, but I'd like to be able to use the convenience of the little green play-button style arrows for individual tests. Unfortunately, even for a single unit test, these always launch Karma now.
Disabling the Karma plugin doesn't help. Instead, that makes all of the green arrows go away, with no easy access to either Karma or Mocha.
Is there a way to configure IDEA so that these convenience arrows ignore Karma, and directly run Mocha tests instead?
The logic used for determining which test runner is available for a given test file is based on the dependency declarations in the package.json nearest to the current file.
Note that if you have a single package.json that includes both karma and mocha, and there is a Karma config in your project, Karma is preferred - see https://youtrack.jetbrains.com/issue/WEB-26070#comment=27-2088951. To force the Mocha test runner for files in a certain directory, create a Mocha run configuration with Test directory: set to that directory; when running tests from the gutter in that folder, Mocha will be used.
I need to create UI automation tests with Protractor. I have successfully set up everything from http://www.protractortest.org/#/ and can run the tests from the command line, but for debugging and more comfortable coding I want to use VS2019 for both editing and running/debugging.
Everything I found points to VS Code; I couldn't find anything for Visual Studio.
You can use the NPM Task Runner plugin for that.
But first you need to add a protractor conf.js script to the package.json file.
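For example, a minimal scripts entry might look like this (the script name e2e is just an illustration):

```json
{
  "scripts": {
    "e2e": "protractor conf.js"
  }
}
```

The NPM Task Runner extension then lists the script in Visual Studio's Task Runner Explorer, where you can run it or bind it to build events.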
Would it be possible, in any way, to run Karma/Jasmine tests in a two-step approach?
First step preprocesses all application files (Babel, Webpack, coverage, source maps, etc.) and stores the preprocessed output.
Second step runs the actual (Jasmine) tests on the files already preprocessed in the first step.
This way we could preprocess all application code once and run the second step as many times as we want (assuming, of course, that we don't change the application code after running step 1).
Edit:
Some more details:
app code is divided into modules: common, moduleA, moduleB, ...
there is a single karma.conf.js with preprocessors set up to handle entire code base
Grunt is used to run Karma tests - there are separate tasks to run tests only for a given module (the Grunt task adjusts the files section of the Karma config to load only the test specs for the module being tested)
On CI we test each module separately - running all tests in one go crashes the browser (due to excessive memory use - we've had little success in fixing this). So Karma runs n times (n is the number of modules), preprocessing the entire code base each time and running tests only for the module currently being tested.
Since the code (neither app nor specs) does not change while the modules are tested one by one, our idea was to have Karma preprocess the entire code base only once up front. Afterwards we could run Karma on each module using the previously preprocessed code. This should be much faster than what we currently have.
By the way, this is an Angular app. Tests are based on Jasmine and use ngDescribe.
I am preparing a suite of E2E tests using Protractor and Jasmine. Currently I'm running these from the command line using Node. In the past I've used Jasmine tests with the SpecRunner.html setup, which shows the results in the browser as they run, lets you select individual tests or sub-suites of tests to run, etc.
Has anyone set up Jasmine + Protractor tests in that way - output going into one browser window, while tests run in another browser window?
Alternatively, is there a Jasmine reporter that will provide a similar output format even if I still have to run the tests from the command line?
For Jasmine 2, look into the jasmine2-screenshot-reporter package.
For Jasmine 1:
I've used the protractor-html-screenshot-reporter package, which generates nice test reports including screenshots.
Initialize a baseDirectory inside the onPrepare function:
var HtmlReporter = require('protractor-html-screenshot-reporter');

exports.config = {
  // ...
  onPrepare: function() {
    // Add a screenshot reporter and store screenshots to `/tmp/screenshots`:
    jasmine.getEnv().addReporter(new HtmlReporter({
      baseDirectory: '/tmp/screenshots'
    }));
  }
};
and observe the nicely formatted HTML test results.
Hope this is what you were asking about.
Is there a way to get JavaScript tests to do continuous integration with MSBuild or in Visual Studio? What I want is that anytime my JavaScript or my JavaScript tests change, they are built and run again, and if they fall below certain acceptance criteria (code coverage, assertions passing, tests passing, etc.) it'll fail my build. Anyone know how to do this?
Thanks!
It looks like Chutzpah has a command-line runner, so you can create a PowerShell script which gets called from your build server to run the tests. See the Chutzpah documentation for more information. I'm not sure how you would integrate the test results with TFS.
I've used a Command Line build step to run my JavaScript tests with Chutzpah.
Add this step and point it to something like
$(Build.SourcesDirectory)\packages\Chutzpah.4.0.3\tools\chutzpah.console.exe
or wherever you have Chutzpah set up on the build server. You can then add an argument to tell the Chutzpah console where the tests are:
$(Build.SourcesDirectory)\YourApp.UnitTests\Tests