I have several projects, and the same tests can run against all of them, but there is no per-project selection yet. I want to set up a config for the project so that it understands that for project X it should take tests 1, 2, 3 and for project Y tests 1, 3.
My examples:
export const urls = ["https://" + url + "someProject1", "https://" + url + "someProject2", "https://" + url + "someProject3"];
where url is either the dev or the production host;
there is a function that substitutes it depending on the ENV value passed at launch.
The tests are run like this:
npx cypress run --env ENV="stage" --browser chrome --spec cypress\e2e\SomeProject\**
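For context, a sketch of what that substitution function might look like (the hostnames below are placeholders, not my real domains):
// A sketch only - hostnames are placeholders.
const hosts = {
  stage: "stage.example.com/",
  production: "example.com/",
};

// Picks the host from the ENV value passed on the command line, e.g. --env ENV="stage"
const url = hosts[Cypress.env("ENV")] || hosts.production;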
Below is an example of my tests; it runs the tests in a loop, one set of tests per site:
describe("Some test", () => {
urls.forEach((url) => {
//This gives you some feedback in the Cypress side bar console on what URL you're at
describe(`url: ${url}`, () => {
//The nested loop meaning that for each URL you go to we will also go into another loop and use each size as well.
it(`Some new${url}`, () => {
cy.visit(url);
cy.get("Button").click();
cy.get("#someText").should("not.exist");
});
it(`Some new 3 ${url}`, () => {
cy.visit(url);
cy.get("Button").click();
cy.get("#someText").should("not.exist");
});
it(`Some new${url}`, () => {
cy.visit(url);
cy.get("Button").click();
cy.get("#someText").should("not.exist");
});
});
});
My question is:
I want to make project settings so that if project 1 runs, it takes only tests 1 and 3;
for project 2, tests 1, 2 and 3;
and for project 3, only test 1.
P.S. I did not want to do this with if statements, since more than 100 tests have already been written.
I thought about creating a config file for the projects and somehow substituting it into the tests themselves.
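One possible shape for such a config, as a sketch only (the project names, test names, and the itFor helper are placeholders I made up, not part of the existing suite): map each project to the tests it should run, then register a test only when it is listed for the current project, so the existing test bodies do not need if statements.
// cypress/support/projectTests.js - a sketch, not a drop-in solution.
// Which tests each project should run; names are placeholders.
const projectTests = {
  someProject1: ["Some new 1", "Some new 3"],
  someProject2: ["Some new 1", "Some new 2", "Some new 3"],
  someProject3: ["Some new 1"],
};

// Registers the test only if it is enabled for the given project,
// otherwise registers it as skipped.
export function itFor(project, name, fn) {
  const enabled = (projectTests[project] || []).includes(name);
  (enabled ? it : it.skip)(name, fn);
}
Inside the urls.forEach loop you could derive the project name from the url (or pass it on the command line with --env and read it via Cypress.env) and call itFor(project, "Some new 1", ...) instead of it(...).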
Related
I am currently working in Cypress; I will try to simplify my question as much as possible:
I have several test files:
integration/testA.spec.js
integration/testB.spec.js
integration/testC.spec.js
I also have a config.js file in which I have an object containing the name of the test I want to run:
{
test: 'testA.spec.js'
}
What I want to do - if possible - is a main.spec.js file that, depending on the test property, runs the corresponding spec file.
I have almost everything prepared:
describe('Test', () => {
  before(() => {})
  beforeEach(() => {})
  after(() => {})
  describe('Test Booking', () => {
    // retrieves the `test` property and returns the string name
    let testName = retrieveTestName()
    console.log('testName', testName)
    /** I want to do something like this */
    if (testName === 'testA.spec.js') {
      // launch testA.spec.js file
    }
  })
})
But I don't know whether I can do something like the code above; do you think it is possible?
Or do I have to create a script file that
checks the file name,
uses the --spec option and launches the test?
An option is to create commands using the --spec flag and add them under scripts in your package.json. Something like:
"scripts": {
  "testA": "npx cypress run --spec cypress/integration/testA.spec.js",
  "testB": "npx cypress run --spec cypress/integration/testB.spec.js",
  "testC": "npx cypress run --spec cypress/integration/testC.spec.js"
}
You can then run the command directly, depending on the test you want to execute:
npm run testA
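If you would rather have the spec driven by the test property in config.js (the script-file route mentioned in the question), a small Node runner is enough. This is only a sketch and assumes config.js exports the object shown above and that the specs live under cypress/integration:
// run-test.js - a sketch; assumes config.js exports { test: 'testA.spec.js' }
const { execSync } = require('child_process')
const config = require('./config')

// Launches Cypress only for the spec named in config.js
execSync(`npx cypress run --spec cypress/integration/${config.test}`, {
  stdio: 'inherit',
})
You would then run it with node run-test.js, or wire it up as another npm script.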
So I have two tests - Test1.spec.js and Test2.spec.js - and I want a random number to be generated on every test run and the same random number to be used in both specs. I wrote a simple Math.random() call for this under support/index.js:
Cypress.config('UniqueNumber', `${Math.floor(Math.random() * 10000000000000)}`)
And in the tests I use it like this:
cy.get('locator').type(Cypress.config('UniqueNumber'))
When I execute the tests in the Cypress app (npx cypress open) and then Run All Specs, a random number is generated and the same one is passed to both spec files correctly. But when I run the tests via the CLI (npx cypress run), the two spec files receive different random numbers.
What am I doing wrong when executing the tests via the CLI?
So, as per the Cypress docs, support/index.js is run before each spec file, so my approach above is not valid: a new value is generated for every spec. The next approach I followed was to write the value to fixtures/data.json in the first test and use it throughout the rest of the suite. This way a new value is generated and saved to the fixture file on every run, and that same value is then used throughout the test suite for that run. Below is how I write to fixtures/data.json:
const UniqueNumber = `${Math.floor(Math.random() * 10000000000000)}`

cy.readFile("cypress/fixtures/data.json").then((data) => {
  data.username = UniqueNumber
  cy.writeFile("cypress/fixtures/data.json", JSON.stringify(data))
})
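For completeness, a sketch of how the other spec can then consume the stored value (the field name username matches the snippet above):
// In the second spec: read the value written by the first test
cy.readFile("cypress/fixtures/data.json").then((data) => {
  cy.get("locator").type(data.username)
})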
Currently, we are using Cypress to test our application. We have 2 environments with 2 different API servers. I want to define this inside the environment files. I am not sure how to define both URLs in the same file.
For example,
Environment-1:
baseUrl - https://environment-1.me/
Api_Server - https://api-environment-1.me/v1
Environment-2:
baseUrl - https://environment-2.me/
Api_Server - https://api-environment-2.me/v1
A few test cases depend on the baseUrl, and one test case that checks the API depends on the Api_Server.
To resolve this I tried to set the baseUrl and Api_Server inside a per-environment config file loaded from a plugin, following https://docs.cypress.io/api/plugins/configuration-api.html#Usage.
I created two config files for 2 environments,
{
  "baseUrl": "https://environment-1.me/",
  "env": {
    "envname": "environment-1",
    "api_server": "https://api-environment-1.me/v1"
  }
}
The other file is similar, with the respective endpoints changed.
The plugins file has been modified as follows:
// promisified fs module
const fs = require('fs-extra')
const path = require('path')
function getConfigurationByFile (file) {
const pathToConfigFile = path.resolve('..', 'cypress', 'config', `${file}.json`)
return fs.readJson(pathToConfigFile)
}
module.exports = (on, config) => {
// `on` is used to hook into various events Cypress emits
// `config` is the resolved Cypress config
// accept a configFile value or use environment-2 by default
const file = config.env.configFile || 'environment-2'
return getConfigurationByFile(file)
}
Inside the test cases, wherever we refer to the baseUrl we use cy.visit('/').
This works fine when we run a spec from the command line with cypress run --env configFile=environment-2: all the test cases pass, as cy.visit('/') automatically resolves against the respective environment's baseUrl, except for the API test case.
I am not sure how the API test should be modified so that it calls the API endpoint instead of the base URL.
Can somebody help, please?
Thanks,
indhu.
If I understand your question correctly, you need to run tests against different URLs, with the URLs set in cypress.json or in an env file.
Can you configure the URLs in the cypress.json file as below? I haven't tried it myself, but give it a go.
{
  "baseUrl": "https://environment-2.me/",
  "env": {
    "api_server1": "https://api1_url_here",
    "api_server2": "https://api2_url_here"
  }
}
Inside the tests, use the URLs as below:
describe('Test for various URLs', () => {
  it('Should test the base url', () => {
    cy.visit('/') // this points to the baseUrl configured in cypress.json
    // some tests to continue based on baseUrl..
  })
  it('Should test the api 1 url', () => {
    cy.visit(Cypress.env('api_server1')) // api server 1 configured in cypress.json
    // some tests to continue based on api server 1..
  })
  it('Should test the api 2 url', () => {
    cy.visit(Cypress.env('api_server2')) // api server 2 configured in cypress.json
    // some tests to continue based on api server 2..
  })
})
This issue has been resolved.
The best way is to do it with a plugin, as suggested by the docs (https://docs.cypress.io/api/plugins/configuration-api.html#Usage).
I kept the same structure as in my question, and in the test case I call the API using cy.request(Cypress.env('api_server')).
This solved my issue :)
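For illustration, such an API test could look like the sketch below; the /health path and the status assertion are placeholders, not part of the original setup:
it('checks the API server', () => {
  // Cypress.env('api_server') comes from the per-environment config file
  cy.request(Cypress.env('api_server') + '/health') // '/health' is a placeholder path
    .its('status')
    .should('eq', 200)
})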
I want to run e2e tests written in JavaScript with Mocha against an Appium server instance running a local Android emulator. The app under test is an APK originally written in React Native.
On Windows I have the server up and running with an Android Studio emulator through the Appium desktop app. The server looks good and has the APK of the native app I want to test working fine. I also have a basic describe/assert test written in Mocha that I want to apply to the app.
My question is: what do I need to include (presumably in the test file) to make the tests actually exercise the emulator application? I'm finding the documentation pretty confusing, and the sample code seems specific to a different use case.
Many thanks for your help!
There are at least 2 good JS client libraries to use for an Appium-based project: webdriverio and wd. Personally, I'm using the second one, so I can advise you on how to write tests with it and Mocha.
My test file looks like this:
'use strict'
const path = require('path')
require(path.resolve('hooks', 'hooks'))
describe('Suite name', function () {
before('Start new auction', async function () {
//do before all the tests in this file, e.g. generate test data
})
after('Cancel auction', async function () {
//do after all the tests in this file, e.g. remove test data
})
it('test1', async () => {
// test steps and checks are here
})
it('test2', async () => {
// test steps and checks are here
})
it('test3', async () => {
// test steps and checks are here
})
})
where hooks.js contains global before/after for all the tests:
const hooks = {}
before(async () => {
// before all the tests, e.g. start Appium session
})
after(async () => {
// after all the tests, e.g. close session
})
beforeEach(async () => {
// before each test, e.g. restart app
})
afterEach(async function () {
// e.g. take screenshot if test failed
})
module.exports = hooks
I'm not saying it's the best practice for designing tests, but it's one of multiple ways.
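For concreteness, a sketch of what the session setup in hooks.js might contain, assuming the wd client and a local Appium server on port 4723; the capability values and the APK path are placeholders:
const wd = require('wd')

// One driver shared by all specs; created here so the hooks can manage the session.
const driver = wd.promiseChainRemote('localhost', 4723)

before(async () => {
  // Start the Appium session before any test runs; capability values are placeholders.
  await driver.init({
    platformName: 'Android',
    deviceName: 'emulator-5554',
    app: '/path/to/app.apk',
    automationName: 'UiAutomator2'
  })
})

after(async () => {
  // Close the session once all tests are done.
  await driver.quit()
})

// Exported so spec files can require the same driver (an assumption about project layout).
module.exports = { driver }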
Cool, so I managed to get it working to a degree. I was checking the Appium console logs as I tried to run things and noticed that the session id was missing from my requests. All that was needed was to attach the driver using the session id. My code looks a bit like this:
"use strict";
var wd = require("wd")
var assert = require("assert")
var serverConfig = {
host: "localhost",
port: 4723,
}
var driver = wd.remote(serverConfig)
driver.attach("0864a299-dd7a-4b2d-b3a0-e66226817761", function() {
it("should be true", function() {
const action = new wd.TouchAction()
action
.press({x: 210, y: 130})
.wait(3000)
.release()
driver.performTouchAction(action)
assert.equal(true, true)
})
})
The equals-true assert is just there as a placeholder sanity check. The only problem with this currently is that I'm copy-pasting the alphanumeric session id into the attach method each time I restart the Appium server, so I need to find a way to automate that.
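One way to avoid copy-pasting the session id, sketched under the assumption that the test run is allowed to own its session (the capability values and APK path are placeholders): create the session from the test code with the wd client's init call instead of attaching to one started elsewhere.
var driver = wd.promiseChainRemote(serverConfig)

before(function () {
  // Starting a fresh session here means no session id has to be copied by hand.
  return driver.init({
    platformName: "Android",
    deviceName: "emulator-5554",   // placeholder device name
    app: "C:\\path\\to\\app.apk",  // placeholder path to the APK under test
    automationName: "UiAutomator2"
  })
})

after(function () {
  return driver.quit()
})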
Goal: Perform real end-to-end tests for a VSCode extension using Spectron.
As an example I installed the vim extension.
I adapted the usage example from Spectron's README like this:
var Application = require('spectron').Application
var assert = require('assert')
describe('VSCode extension', function () {
this.timeout(10000)
beforeEach(function () {
this.app = new Application({
path: '.vscode-test/VSCode-linux-x64/bin/code',
args: [
'--extensionDevelopmentPath=' + process.cwd(),
'--locale=en',
process.cwd(),
],
requireName: 'nodeRequire',
})
return this.app.start()
})
afterEach(function () {
if (this.app && this.app.isRunning()) {
return this.app.stop()
}
})
it('suggest commands', function () {
return this.app.client
.waitUntilWindowLoaded()
//.pause(5000)
//.waitUntilTextExists('span', 'OPEN EDITORS', 10000)
.keys('F1')
.waitForVisible('.quick-open-widget:not(.hidden)')
.keys('vim')
.waitForVisible('.quick-open-entry*=Vim: Show Command Line')
})
})
Problem: how to determine exactly when VSCode is ready.
Calling client.waitUntilWindowLoaded() is not sufficient. In some test runs entering text via client.keys(...) into the Command Palette (F1) does not suggest any commands.
I don't want to use pause(...) after waitUntilWindowLoaded() as it wastes useful time and may still not be sufficient when the system is under heavy load.
For the moment I just came up with .waitUntilTextExists('span', 'OPEN EDITORS', 10000), which seems to work most of the time, but sometimes it still runs into the timeout.
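Written out in the test, that current workaround looks like the sketch below (the timeout value is arbitrary):
return this.app.client
  .waitUntilWindowLoaded()
  // current workaround: wait for the explorer title before touching the Command Palette
  .waitUntilTextExists('span', 'OPEN EDITORS', 10000)
  .keys('F1')
  .waitForVisible('.quick-open-widget:not(.hidden)')
  .keys('vim')
  .waitForVisible('.quick-open-entry*=Vim: Show Command Line')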
Is there anything more reliable (in the DOM) that is set by VSCode and can be checked by Spectron that states that VSCode is really ready?