I have two tests, Test1.spec.js and Test2.spec.js, and I want every test run to generate one random number that is then used in both specs. I wrote a simple Math.random() call for this in support/index.js:
Cypress.config('UniqueNumber', `${Math.floor(Math.random() * 10000000000000)}`)
And in the tests I am writing as:
cy.get('locator').type(Cypress.config('UniqueNumber'))
When I execute the tests from the Cypress app (npx cypress open, then Run All Specs), a random number is generated and the same value is passed to both spec files correctly. But when I run the tests from the CLI with npx cypress run, each spec file gets a different random number.
What am I doing wrong in case of executing the tests using CLI?
As per the Cypress docs, support/index.js runs before each spec file, so my approach above is not valid: a new value would be generated for every spec. The next approach I tried was to write the value to fixtures/data.json in the first test and use it throughout the suite. This way, a new set of values is generated and saved to the fixture file on every run, and the same value is then used throughout the test suite for that run. This is how I write to fixtures/data.json:
const UniqueNumber = `${Math.floor(Math.random() * 10000000000000)}`
cy.readFile('cypress/fixtures/data.json').then((data) => {
  data.username = UniqueNumber
  cy.writeFile('cypress/fixtures/data.json', JSON.stringify(data))
})
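Reading the value back in any later spec is then just another cy.readFile (a sketch; 'locator' is a placeholder and username is the key written above):

```javascript
// any spec in the same run can read the value persisted by the first test,
// since all specs share the same fixtures/data.json on disk
cy.readFile('cypress/fixtures/data.json').then((data) => {
  cy.get('locator').type(data.username)
})
```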
I have several projects, and the same tests can run on all of them, but not every project needs every test. I want to set up a config for the project so that it understands: if it is project X, take tests 1, 2, 3; if it is project Y, take tests 1, 3.
My examples:
export const urls = ["https://" + url + "someProject1", "https://" + url + "someProject2", "https://" + url + "someProject3"];
where url is either the dev or production host; a function substitutes it depending on the ENV value passed at launch.
The tests are run like this
npx cypress run --env ENV="stage" --browser chrome --spec cypress\e2e\SomeProject\**
Below is an example of my tests, which run each test against each site in a loop:
describe("Some test", () => {
  urls.forEach((url) => {
    // This gives you some feedback in the Cypress sidebar on which URL you're at
    describe(`url: ${url}`, () => {
      it(`Some new ${url}`, () => {
        cy.visit(url);
        cy.get("Button").click();
        cy.get("#someText").should("not.exist");
      });
      it(`Some new 3 ${url}`, () => {
        cy.visit(url);
        cy.get("Button").click();
        cy.get("#someText").should("not.exist");
      });
      it(`Some new ${url}`, () => {
        cy.visit(url);
        cy.get("Button").click();
        cy.get("#someText").should("not.exist");
      });
    });
  });
});
My question is: I want to make project settings so that
if project 1 starts, it takes only tests 1 and 3
if project 2, then tests 1, 2 and 3
if project 3, then only test 1
P.S. I did not want to write this through ifs, since more than 100 tests have already been written. I thought to create a config file for the projects and somehow substitute it into the tests themselves.
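One tag-free way to sketch that config (the projectTests map, the shouldRun helper, and the PROJECT env name are all assumptions here, not Cypress built-ins) is a per-project map of enabled test IDs, which each it can be gated on:

```javascript
// hypothetical per-project map: which test IDs run for which project
const projectTests = {
  project1: ["test1", "test3"],
  project2: ["test1", "test2", "test3"],
  project3: ["test1"],
};

// decide whether a test should run; in a real run `project` would come
// from Cypress.env("PROJECT") when launched via --env PROJECT=project1
function shouldRun(project, testId) {
  return (projectTests[project] || []).includes(testId);
}

module.exports = { projectTests, shouldRun };
```

Inside a spec, each test then becomes `(shouldRun(project, 'test2') ? it : it.skip)(...)`, so the existing 100+ tests only need their it calls wrapped, and skipped tests still show up in the report.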
So I'm currently using Nightwatch.js to run some tests on a web platform. We have nightwatch configured with Saucelabs so we can run multiple tests at the same time and see recordings of each test. However, our test cases are split up into multiple files rather than multiple steps in a single file.
Since sauce labs doesn't like to run multiple files within a single browser session, we had to do a little hack:
extend = function(target) {
  var sources = [].slice.call(arguments, 1);
  sources.forEach(function (source) {
    for (var prop in source) {
      target[prop] = source[prop];
    }
  });
  return target;
}
require("./login.spec.js");
module.exports = extend(module.exports,login);
require("./getCustomLink.js");
module.exports = extend(module.exports,getCustLink);
Essentially, you require the file path to the test and add it onto your current test module. So if a test doesn't end in browser.end(), it will continue running through the tests in the order specified. My question is: I'm going to have multiples of these kinds of files, with different orders of test cases for different scenarios. How exactly would I add '#tags' to this so I can just pass the tag when running:
npm test --tags #tagHere
and it will just run the master file and the order of tests I specified?
I am using Babel and ng-html2js to build and test an Angular app with Karma as the test runner. For the most part, things are working. If I make changes to the source files, Karma reruns the tests and passes/fails them as appropriate. Karma also detects when changes have been made to the test files themselves... however, it then runs the old tests rather than the new ones.
As a simple example:
/* constant.js */
angular.module("foo").constant("constant", 1);
/* constant.test.js */
let constant;
beforeEach(module("foo")); // load the module under test (angular-mocks)
beforeEach(inject(_constant_ => constant = _constant_));
describe("constant", () => {
  it("is 1", () => {
    expect(constant).to.equal(2);
  });
});
In the above test, I am checking constant against 2, but in my original code, constant is 1.
If I change constant to 2 in constant.js, the test will rerun and pass.
If I change .to.equal(2) to .to.equal(1) and rerun the tests from scratch, they will pass.
If I change .to.equal(2) to .to.equal(1) while the tests are already running, the tests rerun automatically but fail, because Karma is still running the old assertion that checks constant (which is 1) against 2.
How can I get Karma to rerun the new tests when I make changes to the test files?
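One thing worth checking (a sketch of a karma.conf.js, assuming karma-babel-preprocessor; the file globs are placeholders) is that the Babel preprocessor is applied to the test files as well as the sources. If only the sources are preprocessed, Karma's watcher can trigger a rerun but serve a stale transpiled copy of the tests:

```javascript
// karma.conf.js (fragment)
module.exports = function (config) {
  config.set({
    files: ["src/**/*.js", "test/**/*.test.js"],
    preprocessors: {
      // both sources AND tests must go through babel, or the watcher
      // may rerun against an outdated build of the test files
      "src/**/*.js": ["babel"],
      "test/**/*.test.js": ["babel"],
    },
    autoWatch: true,
  });
};
```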
I'm trying to get webdriver.io and Jasmine working.
Following their example, my script is at test/specs/first/test2.js (in accordance with the configuration) and contains:
var webdriverio = require('webdriverio');
describe('my webdriverio tests', function() {
var client = {};
jasmine.DEFAULT_TIMEOUT_INTERVAL = 9999999;
beforeEach(function() {
client = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
client.init();
});
it('test it', function(done) {
client
.url("http://localhost:3000/")
.waitForVisible("h2.btn.btn-primary")
.click("h2.btn.btn-primary")
.waitForVisible("h2.btn.btn-primary")
.call(done);
});
afterEach(function(done) {
client.end(done);
});
});
I'm using wdio as the test runner, and set it up using the interactive setup. That config is automatically-generated and all pretty straightforward, so I don't see a need to post it.
In another terminal window, I am running selenium-server-standalone-2.47.1.jar with Java 7. I do have Firefox installed on my computer (it starts with a blank page when the test is run), and my computer is running OS X 10.10.5.
This is what happens when I start the test runner:
$ wdio wdio.conf.js
=======================================================================================
Selenium 2.0/webdriver protocol bindings implementation with helper commands in nodejs.
For a complete list of commands, visit http://webdriver.io/docs.html.
=======================================================================================
[18:17:22]: SET SESSION ID 46731149-79aa-412e-b9b5-3d32e75dbc8d
[18:17:22]: RESULT {"platform":"MAC","javascriptEnabled":true,"acceptSslCerts":true,"browserName":"firefox","rotatable":false,"locationContextEnabled":true,"webdriver.remote.sessionid":"46731149-79aa-412e-b9b5-3d32e75dbc8d","version":"40.0.3","databaseEnabled":true,"cssSelectorsEnabled":true,"handlesAlerts":true,"webStorageEnabled":true,"nativeEvents":false,"applicationCacheEnabled":true,"takesScreenshot":true}
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
at waitForVisible("h2.btn.btn-primary") - test2.js:21:14
/usr/local/lib/node_modules/webdriverio/node_modules/q/q.js:141
throw e;
^
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
0 passing (3.90s)
$
I find this very strange and inexplicable, especially considering that it even prints the session ID.
Any ideas?
Please check out the docs on the wdio test runner. You don't need to create an instance using init on your own; the wdio test runner takes care of creating and ending the session for you.
Your example covers the standalone WebdriverIO usage (without testrunner). You can find examples which use wdio here.
To clarify: there are two ways of using WebdriverIO. You can embed it in your test system yourself (using it standalone, or as a scraper); then you need to take care of things like creating and ending an instance, or running instances in parallel. The other way is to use its test runner, called wdio. The test runner takes a config file with a bunch of information about your test setup, spawns instances, updates job information on Sauce Labs, and so on.
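The config file the runner consumes looks roughly like this (a minimal sketch; the spec glob and capabilities are placeholders for whatever the interactive setup generated):

```javascript
// wdio.conf.js (fragment)
exports.config = {
  specs: ["./test/specs/**/*.js"],
  capabilities: [{ browserName: "firefox" }],
  framework: "jasmine",
  // the runner creates the session before each spec and ends it afterwards,
  // so the specs themselves never call init() or end()
};
```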
Every WebDriver command gets executed asynchronously.
You properly called the done callback in afterEach and in your 'test it' test, but forgot to do it in beforeEach:
beforeEach(function(done) {
client = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
client.init(done);
});
I have two javascript files which contain mocha test cases.
//----------abc.js -------------
describe("abc file", function(){
it("test 1" , function(){
assert.equal(20 , 20);
});
});
//---------xyz.js--------------
describe("xyz file", function(){
it("test 1" , function(){
assert.equal(10 , 10);
});
});
I have put them in a folder called test, and when I execute the mocha command the first file (abc.js) is always executed before xyz.js.
I thought this might be due to alphabetical ordering and renamed the files as
abc.js => xyz.js
xyz.js => abc.js
but still, the content of the xyz.js (previously abc.js) is executed first. How can I change the execution order of these test files?
In the second file, require the first one:
--- two.js ---
require("./one")
or if you are using ES modules:
--- two.js ---
import "./one"
Mocha will run the tests in the order the describe calls execute.
I follow a totally separate solution for this:
Put all your tests in a folder named test/, and
create a file tests.js in the root directory that requires them in the order of execution
--- tests.js ---
require('./test/one.js')
require('./test/two.js')
require('./test/three.js')
Write your plain Mocha tests in the test files one.js, two.js, and so on. Then, to run them in the order you defined, just run mocha tests.js.
Mocha has a --sort (short -S) option that sorts test files:
$ mocha --help
[...]
-S, --sort sort test files
[...]
Since mocha sorts files in alphabetical order, I usually prefix my test files names with numbers, like:
0 - util.js
1 - something low level.js
2 - something more interesting.js
etc.
In addition to being really easy to maintain (no Gulp, Grunt, or any of that nonsense, and no editing your package.json...), it provides two benefits:
people reading your source code get an idea of the structure of your program, starting from the less interesting parts and moving up to the business layer
when a test fails, you have some indication of causality (if something failed in 1 - something.js but there are no failures in 0 - base.js, then it's probably the fault of the layer covered by 1 - something.js)
If you're doing real unit tests, order should of course not matter, but I'm rarely able to go with unit tests all the way.
If you prefer a particular order, you can list the files (in order) as command-line arguments to mocha, e.g.:
$ mocha test/test-file-1.js test/test-file-2.js
To avoid a lot of typing every time you want to run it, you could turn this into an npm script in your package.json:
{
// ...
"scripts": {
"test": "mocha test/test-file-1.js test/test-file-2.js"
}
// ...
}
Then run your suite from the command line:
$ npm test
Or if you're using Gulp, you could create a task in your gulpfile.js:
var gulp = require('gulp');
var mocha = require("gulp-mocha");
gulp.task("test", function() {
return gulp.src([
"./test/test-file-1.js",
"./test/test-file-2.js"
])
.pipe(mocha());
});
Then run $ gulp test.
The way I got my tests to execute in a specific order was to create a separate test.js file and add a describe for each Mocha test file I wanted to execute.
test.js:
describe('test file 1', function() {
require('./test1.js')
})
describe('test file 2', function() {
require('./test2.js')
})
Then simply run mocha test.js
I export an array with all the required files: that is how I tell Mocha the order of execution, through an index.js file in the folder with all my test files:
const Login = require('../login');
const ChangeBudgetUnit = require('./changeBudgetUnit');
const AddItemsInCart = require('./addItemsInCart');
// if the order matters should export array, not object
module.exports = [
Login,
ChangeBudgetUnit,
AddItemsInCart
];
mocha-steps allows you to write tests that run in a specific sequence, aborting the run at the first failure. It provides a drop-in replacement for it, called step.
Example usage:
describe('my smoke test', () => {
  step('login', async () => {})
  step('buy an item', async () => { throw new Error('failed') })
  step('check my balance', async () => {})
  xstep('temporarily ignored', async () => {})
})
The repo hasn't seen much activity in three years, but it works fine with Mocha 9.