Cypress: run entire test suite multiple times with different data - javascript

I have seen several posts about running a single test with different parameters. It is also documented here.
However, I couldn't find any examples of how to run the entire test suite, i.e. tests across multiple files in the cypress/integration folder multiple times with different data.
My scenario is that I want to stub different responses from an API I'm calling and run all test cases against the different responses. So for the 1st run, I would put in support/index.js:
beforeEach(() => {
cy.intercept("GET", "example/API", { fixture: "fixture1.json" });
});
and for the 2nd run I would put:
beforeEach(() => {
cy.intercept("GET", "example/API", { fixture: "fixture2.json" });
});
and so on. All my test cases are identical for different responses and I expect them to have the same result regardless of the data returned by the API.

Running a suite with different parameters
cypress run --env fixture=fixture1
// or
cypress run --env fixture=fixture2
Ref Environment Variables
In support/index.js
beforeEach(() => {
const fixtureName = `${Cypress.env('fixture')}.json`
cy.intercept("GET", "example/API", { fixture: fixtureName });
})
I expect them to have the same result regardless of the data returned by the API - I'm not sure what that means exactly but to me it suggests you don't have to worry about changing the fixture.
If you want a single call from the command line to run the whole suite, say 3 times with a different fixture each time, consider using the Cypress Module API
In a script file, e.g. /scripts/run-fixtures.js:
const fixtures = ['fixture1', 'fixture2', 'fixture3'] // no .json suffix - support/index.js appends it
const cypress = require('cypress')
fixtures.forEach((fixtureName) => {
cypress.run({
reporter: 'junit',
browser: 'chrome',
config: {
baseUrl: 'http://localhost:8080',
video: true,
},
env: {
fixture: fixtureName,
},
})
})
Run it with node scripts/run-fixtures.js.
support/index.js is the same as above.
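One caveat: forEach fires all three cypress.run() calls concurrently, which can make the runs interfere with each other. A sequential pattern looks like the sketch below. The runSequentially helper is illustrative (not part of the Cypress API), but cypress.run() does resolve with a results object that includes a totalFailed count, which you can use for the process exit code.

```javascript
// Sketch: run an async task once per fixture, in order, and collect failures.
// `runFn` stands in for cypress.run; pass the real module in practice.
async function runSequentially(runFn, fixtures) {
  let totalFailed = 0
  for (const fixture of fixtures) {
    // each run waits for the previous one to finish
    const results = await runFn({ env: { fixture } })
    totalFailed += results.totalFailed || 0
  }
  return totalFailed
}

// Usage with the real Cypress Module API:
//   const cypress = require('cypress')
//   runSequentially(cypress.run, ['fixture1', 'fixture2', 'fixture3'])
//     .then((failed) => process.exit(failed > 0 ? 1 : 0))

module.exports = { runSequentially }
```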

Related

Cannot make Cypress and Pact work together

I already have a working project with a few passing Cypress tests.
Now I'm trying to add contract tests using Cypress + Pact.
In the developer console I can see that the app is calling /api/v1/document-service, but I get:
Pact verification failed - expected interactions did not match actual.
Part of logs:
W, [2021-07-20T12:49:37.157389 #34805] WARN -- : Verifying - actual interactions do not match expected interactions.
Missing requests:
POST /api/v1/document-service
W, [2021-07-20T12:49:37.157489 #34805] WARN -- : Missing requests:
POST /api/v1/document-service
I'm using:
cypress: 7.5.0
@pact-foundation/pact: 9.16.0
Steps I've done:
Added cypress plugin (https://github.com/pactflow/example-consumer-cypress/blob/master/cypress/plugins/cypress-pact.js)
Added commands (https://github.com/pactflow/example-consumer-cypress/blob/master/cypress/support/commands.js)
Added config to cypress.json (https://github.com/pactflow/example-consumer-cypress/blob/master/cypress.json) - not sure what to put to baseUrl, if I don't want interactions with real server.
Added test:
let server;
describe('Simple', () => {
before(() => {
cy.mockServer({
consumer: 'example-cypress-consumer',
provider: 'pactflow-example-provider',
}).then(opts => {
cy.log(opts)
server = opts
})
});
beforeEach(() => {
cy.fakeLogin()
cy.addMockRoute({
server,
as: 'products',
state: 'products exist',
uponReceiving: 'a request to all products',
withRequest: {
method: 'POST',
path: '/api/v1/document-service',
},
willRespondWith: {
status: 200,
body: {
data: {
collections: [
{
id: '954',
name: 'paystubs',
},
{
id: '1607',
name: 'mystubs',
},
],
},
},
},
});
});
it('is ok?', () => {
cy.visit('/new/experiments/FirstProject/collections');
});
})
I tried both the deprecated cy.server()/cy.route() and the new cy.intercept(), but the verification still fails.
At Pactflow, we've spent about 6 months using Cypress and Pact together, using the Pact mock service to generate pacts from our Cypress tests as suggested in https://pactflow.io/blog/cypress-pact-front-end-testing-with-confidence/
We have this week decided to change our approach, for the following reasons.
Our Cypress tests are much slower than our unit tests (15+ minutes), so generating the pact, and then getting it verified takes a lot longer when we generate the pact from our Cypress tests vs our unit tests.
Our Cypress tests can fail for reasons that are not to do with Pact (eg. layout changes, issues with the memory consumption on the build nodes, UI library upgrades etc) and this stops the pact from being generated.
The Cypress intercept() and mock service don't play very nicely together. Any request that does not have an explicit intercept() for Cypress ends up at the mock service, and every time a new endpoint is used by the page, the request hits the mock service, which then fails the overall test, even if that endpoint was not required for that specific test.
When a Cypress test fails, there is very good debugging provided by the Cypress library. When it fails for reasons to do with Pact, it's currently hard to identify the cause, because that information is not displayed in the browser (this could potentially be improved, however, it won't mitigate the already listed issues).
Our new approach is to generate our Pacts from our unit tests (instead of our Cypress tests) and to share the fixtures between the unit and Cypress tests, using a function that strips out the Pact matchers. e.g.
const { like } = require('@pact-foundation/pact-web').Matchers
const SOME_INTERACTION = {
state: 'some state',
uponReceiving: "some request",
withRequest: {
method: "GET",
path: "/something"
},
willRespondWith: {
status: 200,
body: {
something: like("hello")
}
}
}
Unit test
describe('Something', () => {
let someProviderMockService
beforeAll(() => {
someProviderMockService = new Pact({
consumer: 'some-consumer',
provider: 'some-provider'
})
return someProviderMockService.setup()
})
afterAll(() => someProviderMockService.finalize())
afterEach(() => someProviderMockService.verify())
describe('loadSomething', () => {
beforeEach(() => {
return someProviderMockService.addInteraction(SOME_INTERACTION)
})
it('returns something', async () => {
const something = await loadSomething()
//expectations here
})
})
})
Function that turns the Pact interaction format into Cypress route format, and strips out the Pact matchers.
const Matchers = require('@pact-foundation/pact-web').Matchers
// TODO map the request body and query parameters too
const toCypressInteraction = (interaction) => {
return {
method: interaction.withRequest.method,
url: Matchers.extractPayload(interaction.withRequest.path),
status: interaction.willRespondWith.status,
headers: Matchers.extractPayload(interaction.willRespondWith.headers),
response: Matchers.extractPayload(interaction.willRespondWith.body)
}
}
In the Cypress test
cy.route(toCypressInteraction(SOME_INTERACTION))
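The TODO in the function above (mapping the request body and query parameters) could be filled in along these lines for the query-parameter half. This is a sketch: stripMatchers is a plain pass-through stand-in for Matchers.extractPayload from @pact-foundation/pact-web, and request-body mapping is left out because cy.route stubs match on method and URL, not on request bodies.

```javascript
// Sketch of the TODO above: also map query parameters into the stubbed URL.
// `stripMatchers` stands in for Matchers.extractPayload; here it just
// passes plain values through.
const stripMatchers = (value) => value

const toCypressInteraction = (interaction) => {
  const { withRequest, willRespondWith } = interaction
  const route = {
    method: withRequest.method,
    url: stripMatchers(withRequest.path),
    status: willRespondWith.status,
    headers: stripMatchers(willRespondWith.headers),
    response: stripMatchers(willRespondWith.body),
  }
  // append query parameters, if any, to the stubbed URL
  if (withRequest.query) {
    const qs = new URLSearchParams(stripMatchers(withRequest.query)).toString()
    route.url = `${route.url}?${qs}`
  }
  return route
}

module.exports = { toCypressInteraction }
```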
This approach has the following benefits:
The pact is generated quickly.
The pact is generated reliably.
The Cypress tests are more reliable and easier to debug.
The requests used in the Cypress tests are verified to be correct.
There is less chance of "interaction bloat", where interactions are added merely to test UI features, rather than because they provide valuable coverage.
I hope this information is helpful for you. We now recommend this approach over direct use of the mock service in Cypress tests.

How to set/define environment variables and api_Server in cypress?

Currently, we are using Cypress to test our application. We have 2 environments with 2 different API servers. I want to define these inside the environment files. I am not sure how to define both URLs in the same file.
For example,
Environment-1:
baseUrl - https://environment-1.me/
api_server - https://api-environment-1.me/v1
Environment-2:
baseUrl - https://environment-2.me/
api_server - https://api-environment-2.me/v1
So a few test cases depend on the baseUrl, and 1 test case that checks the API depends on api_server.
To resolve this, I tried to set the baseUrl and api_server inside the config file via a plugin, following this link https://docs.cypress.io/api/plugins/configuration-api.html#Usage.
I created two config files for 2 environments,
{
"baseUrl": "https://environment-1.me/",
"env": {
"envname": "environment-1",
"api_server": "https://api-environment-1.me/v1"
}
}
Another file similar to this changing the respective endpoints.
plugin file has been modified as,
// promisified fs module
const fs = require('fs-extra')
const path = require('path')
function getConfigurationByFile (file) {
const pathToConfigFile = path.resolve('..', 'cypress', 'config', `${file}.json`)
return fs.readJson(pathToConfigFile)
}
module.exports = (on, config) => {
// `on` is used to hook into various events Cypress emits
// `config` is the resolved Cypress config
// accept a configFile value or use development by default
const file = config.env.configFile || 'environment-2'
return getConfigurationByFile(file)
}
Inside the test cases, wherever we refer to the baseUrl we use visit('/').
This works fine when we run a specific file from the command line using the command cypress run --env configFile=environment-2; all the test cases pass, as visit('/') automatically resolves to the respective environment, except the API test case.
I am not sure how the API test should be modified to call the API endpoint instead of the base URL.
Can somebody help, please?
Thanks,
indhu.
If I understand your question correctly, you need to run tests with different URLs, the URLs being set in cypress.json or in an env file.
You could configure the URLs in the cypress.json file as below. I haven't tried it though; can you give it a go?
{
"baseUrl": "https://environment-2.me/",
"api_server1": "https://api1_url_here",
"api_server2": "https://api2_url_here"
}
Inside the test call pass the urls as below;
describe('Test for various Urls', () => {
it('Should test the base url', () => {
cy.visit('/') // this points to the baseUrl configured in cypress.json
// some tests to continue based on baseUrl..
})
it('Should test the api 1 url', () => {
cy.visit(Cypress.config('api_server1')) // api server 1 configured in cypress.json
// some tests to continue based on api server 1..
})
it('Should test the api 2 url', () => {
cy.visit(Cypress.config('api_server2')) // api server 2 configured in cypress.json
// some tests to continue based on api server 2..
})
})
This issue has been resolved.
The best way is to do it with a plugin, as suggested by the docs (https://docs.cypress.io/api/plugins/configuration-api.html#Usage).
I kept the structure the same as in my question, and in my test case I called it using cy.request(Cypress.env('api_server'))
This solved my issue :)

Mocking node modules with Jest and #std/esm

I'm currently having an issue writing tests for a node application that uses @std/esm. I've set up a manual mock of a node module inside a __mocks__ directory, and the following code shows a test for the file that uses this mocked node module. (It's used in db.mjs)
const loader = require('@std/esm')(module, { cjs: true, esm: 'js' })
const Db = loader('../src/db').default
const db = new Db()
describe('getNotes', () => {
it('gets mocked note', () => {
db.getNote()
})
})
However, when I run Jest, my manual mock is not being used; it uses the real node module.
Does anyone have any thoughts on why this could be happening?
Jest is particular about the location of your mocks. From their documentation on mocking node modules:
For example, to mock a scoped module called @scope/project-name,
create a file at __mocks__/@scope/project-name.js, creating the
@scope/ directory accordingly.
In your case it would be __mocks__/@std/esm.js

Running javascript e2e tests on a local appium server

I'm wanting to run e2e tests written in javascript with mocha on an Appium server instance running a local android emulator. The app on test is an apk originally written in react-native.
On Windows I have the server up and running with an Android Studio emulator through using the Appium desktop app. The server all looks good and has the apk of the native app I want to test working fine. I also have a basic describe/assert test written in mocha that I want to apply to the app.
My question is what do I need to include (presumably in the test file) to make the tests actually test the emulator application? I'm finding the documentation pretty confusing and the sample code seems pretty specific to a different use case.
Many thanks for your help!
There are at least 2 good JS client libraries to use for an Appium-based project: webdriverio and wd. Personally, I'm using the second one, so I can advise you on how to write tests with it and mocha:
my test file looks like this:
'use strict'
const path = require('path')
require(path.resolve('hooks', 'hooks'))
describe('Suite name', function () {
before('Start new auction', async function () {
//do before all the tests in this file, e.g. generate test data
})
after('Cancel auction', async function () {
//do after all the tests in this file, e.g. remove test data
})
it('test1', async () => {
// test steps and checks are here
})
it('test2', async () => {
// test steps and checks are here
})
it('test3', async () => {
// test steps and checks are here
})
})
where hooks.js contains global before/after for all the tests:
const hooks = {}
before(async () => {
// before all the tests, e.g. start Appium session
})
after(async () => {
// after all the tests, e.g. close session
})
beforeEach(async () => {
// before each test, e.g. restart app
})
afterEach(async function () {
// e.g. take screenshot if test failed
})
module.exports = hooks
I'm not saying it's the best practice for designing tests, but it's one of multiple ways.
Cool, so I managed to get it working to a degree. I was checking the Appium console logs as I tried to run things and noticed that the session id was missing from my requests. All that was needed was to attach the driver using the session id. My code looks a bit like this:
"use strict";
var wd = require("wd")
var assert = require("assert")
var serverConfig = {
host: "localhost",
port: 4723,
}
var driver = wd.remote(serverConfig)
driver.attach("0864a299-dd7a-4b2d-b3a0-e66226817761", function() {
it("should be true", function() {
const action = new wd.TouchAction()
action
.press({x: 210, y: 130})
.wait(3000)
.release()
driver.performTouchAction(action)
assert.equal(true, true)
})
})
The equals-true assert is just there as a placeholder sanity check. The only problem currently is that I'm copy-pasting the alphanumeric session id into the attach method each time I restart the Appium server, so I need to find a way to automate that.
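One way to automate that last step: Appium 1.x speaks the JSON Wire Protocol, which exposes GET /wd/hub/sessions listing the active sessions, so the id can be fetched instead of pasted. The sketch below only parses that response; the firstSessionId helper and the assumed response shape ({ value: [{ id, capabilities }] }) are illustrative, so verify against your server's actual output.

```javascript
// Sketch: pick the first active session id out of an Appium
// GET /wd/hub/sessions response, so it need not be pasted by hand.
// Assumed response shape (JSON Wire Protocol, Appium 1.x):
//   { status: 0, value: [{ id: '...', capabilities: {...} }] }
function firstSessionId(sessionsResponse) {
  const sessions = (sessionsResponse && sessionsResponse.value) || []
  if (sessions.length === 0) {
    throw new Error('No active Appium sessions found')
  }
  return sessions[0].id
}

// Usage (Node >= 18 with built-in fetch; adjust host/port to your server):
//   const res = await fetch('http://localhost:4723/wd/hub/sessions')
//   const id = firstSessionId(await res.json())
//   driver.attach(id, function () { /* tests here */ })

module.exports = { firstSessionId }
```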

Jasmine Runs Test Three Times

I am running Karma/Jasmine/Angular 2.0 tests on my development box. Just recently, Jasmine on my development box decided to start running my tests three times. Yes, exactly three times, every time.
On the first run, everything passes as expected. However, on the second and third passes, all of the same things fail. It always acknowledges that there are 7 tests, but runs 21 with 10 failures (first-grade math out the window)?
This also fails on Travis with SauceLabs. (Note: that links to an older build with 3 tests, but it ran 9 with 5 failures.)
I have a screenshot, my karma.conf.js file, and the one suite which started this whole thing. Any help will be greatly appreciated.
Culprit [TypeScript] (Remove this and problem solved on my dev box):
Full source
describe('From the Conductor Service', () => {
let arr: Array<ComponentStatusModel> = null;
let svc: ConductorService = null;
beforeEach(() => {
arr = [/* Inits the array*/];
svc = new ConductorService();
});
describe('when it is handed a container to hold objects which need to be loaded', () => {
// More passing tests...
/// vvvvv The culprit !!!!!
describe('then when you need to access the container', () => {
beforeEach(() => {
svc.loadedContainer = arr;
});
it('it should always be available', () => {
assertIsLocalDataInTheService(arr, svc.loadedContainer);
});
});
/// ^^^^^ End of culprit !!!!!
});
// More passing tests...
});
Failing Tests:
Browser Screenshots:
Not sure if this is related, but before all of the errors happen, the Jasmine call stack is smaller (left, observe scrollbar). After the errors start, the stack just gets bigger with repeating calls to the same functions (right, observe scrollbar).
Suite Stack is Wrong:
In my test, the Nanobar and Conductor spec files are totally separate. However, you can see the suites array includes stuff from the Nanobar and Conductor specs. Somehow Jasmine mashed these two spec files together (after everything started failing), and resulted in my describe() statements not making any sense when published to the console.
Simplified karma.conf.js:
Full source
module.exports = function (config) {
config.set({
autoWatch: false,
basePath: '.',
browsers: ['Chrome'],
colors: true,
frameworks: ['jasmine'],
logLevel: config.LOG_INFO,
port: 9876,
reporters: ['coverage', 'progress'],
singleRun: true,
coverageReporter: {
// Code coverage config
},
files: [
// Loads everything I need to work
],
plugins: [
'karma-chrome-launcher',
'karma-coverage',
'karma-jasmine'
],
preprocessors: {
'app/**/*.js': ['coverage']
},
proxies: {
// Adjust the paths
}
})
}
Can you try refreshing the browser before the first assertion in each of your test files?
Try this:
browser.restart();
I had the same problem and this fixed it for me.
The first thing to know is that these tests run in a random order, so if you set up data in one test case you cannot reuse it in another.
You have to declare the data in beforeEach so that every test case gets the data.
All test cases run independently.
If you are using an array or object, reuse it only after deep cloning, because arrays and objects work by reference: if you manipulate any value, you also change the original.
In most cases, if a test fails there may be an error in the data you are passing to the test cases.
I would try to debug this and pinpoint the exact cause.
It usually happens when I have redirection or reload code inside the functions I'm testing.
You can try prefixing describe and it with an f (i.e. fdescribe and fit) to focus on a single suite or test and narrow down the culprit.