Protractor code failing due to module exports - javascript

In my Protractor framework I am using a POM (Page Object Model), so a lot of code resides in different .js files, which are then required at the necessary junctions to build the e2e tests.
I have a CompleteProfile.js file (dummy name), where I have a condition along these lines:
if profile_flag === 100,
then do nothing
else
complete the profile (involves a lot of forms)
For the else portion, I have the code in a different fillCustomerForms.js file, whose code is something like this:
var completeprofile = function() {
    this.locator = element(by.css('some_css_locator'));
    this.locator.click();
    browser.sleep(2000);
}

module.exports = {
    profileComplete1 = completeprofile
}
I'm using this from fillCustomerForms.js in my CompleteProfile.js as
var Profile = require('./fillCustomerForms.js');
var c_profile = new Profile.profileComplete1();

var compl_profile = function() {
    this.someFunction = function() {
        profile_flag = "90"
        if profile_flag ==="100"{
            then do nothing;
        } else {
            c_profile.completeprofile();
        }
    }
}

module.exports = {
    finalExp = compl_profile
}
Inside my spec.js, I am calling the CompleteProfile.js as
var Profile = require('./CompleteProfile.js');
var co_profile = new Profile.finalExp();
describe("Modules",()=>{
it('Modules that load other things',()=>{
//do other things neccessary
});
});
describe("Module",()=>{
it("should do something,"()=>{
co_profile.someFunction();
});
});
The first describe block is the one that loads the browser and checks the URL and other test cases. My issue is that if I add the second describe block, the URL sent in the first describe block is rendered empty, i.e. Chrome loads without any URL and the run errors out with a timeout error. I have checked the code and it seems fine. What did I do wrong here?
I'm guessing this might have to do with some JS basics that I've overlooked, but right now I'm not able to figure this one out.

You have a syntax error in your second testcase (the it function). Every testcase callback in Mocha needs to be resolved or rejected, e.g.:
it('should ...', done => {
    /* assertion */
    done(/* passing a new instance of Error will reject the testcase */);
});
The called function doesn't return anything in the provided code snippet, so I don't really see what you're trying to test for.
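For reference, here is a minimal sketch of what the two syntax fixes could look like, reusing the names from the question (the page-object body itself is omitted):

// fillCustomerForms.js: object literals use key: value, not key = value
var completeprofile = function() {
    // ...page object code from the question...
};
module.exports = {
    profileComplete1: completeprofile
};

// spec.js: the it() title and its callback must be separated by a comma
describe("Module", () => {
    it("should do something", () => {
        co_profile.someFunction();
    });
});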

Related

How to programmatically skip or ignore a suite in CodeceptJS

As the test suite grows I need to be able to run something in BeforeSuite() which will connect to an external server and skip the suite if that resource is unavailable.
Feature('External Server');

BeforeSuite((I) => {
    // Check if server is available and skip all scenarios if it is not
});

Scenario('Login to the server', (I) => {
    // Do not run this if the server is not available
});
I understand I could probably set a variable, but I think it would be nice if there was a way to tell the runner that a suite has been skipped.
The goal is to have a suite marked as skipped in the output, e.g.:
Registration --
✓ Registration - pre-checks in 4479ms
✓ Registration - email validation in 15070ms
✓ Registration - password validation in 8194ms
External Server -- [SKIPPED]
- Login to the server [SKIPPED]
Maybe prepend x before every scenario in your feature, e.g. xScenario. I don't think CodeceptJS supports something similar for whole features; as far as I know it currently works with scenarios only.
You can use Scenario.skip in your step definition to skip a scenario.
Note: if any steps have been executed before skipping, they will still show up in the report.
https://codecept.io/basics/#todo-test
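For illustration, the declarative form could look something like this, reusing the scenario from the question:

// Declaring a scenario as skipped; it shows up as pending/skipped in the report
Scenario.skip('Login to the server', (I) => {
    // Do not run this if the server is not available
});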
My answer is compiled from a number of comments on the CodeceptJS GitHub and Stack Overflow. However, I can't recall the exact links or comments that helped me derive this solution; it's been at least a year, maybe two, since I started, and I have slowly modified it since.
Edit: Found the github thread - https://github.com/codeceptjs/CodeceptJS/issues/661
Edit2: I wrote a post about "selective execution" (which avoids tagging unwanted tests with skip status) https://github.com/codeceptjs/CodeceptJS/issues/3544
I'll add a snippet at the bottom.
I'm on CodeceptJS 3.3.6
Define a hook file (e.g. skip.js) and link it to your codecept.conf.js file.
exports.config = {
    ...
    plugins: {
        skipHook: {
            require: "../path/to/skip.js",
            enabled: true,
        }
    }
    ...
}
The basic code in skip.js is
// event and output are exported by the codeceptjs package
const { event, output } = require("codeceptjs");

module.exports = function () {
    event.dispatcher.on(event.test.before, function (test) {
        const skipThisTest = decideSkip(test.tags);
        if (skipThisTest) {
            // Replace the test's run method so it reports itself as skipped
            test.run = function skip() {
                this.skip();
            };
            return;
        }
    });
};
I've defined a decision function decideSkip like so:
function decideSkip(testTags) {
    if (!Array.isArray(testTags)) {
        output.log(`Tags not an array, don't skip.`);
        return false;
    }
    if (testTags.includes("#skip")) {
        output.log(`Tags contain [#skip], should skip.`);
        return true;
    }
    if (
        process.env.APP_ENVIRONMENT !== "development" &&
        testTags.includes("#stageSkip")
    ) {
        output.log(`Tags contain [#stageSkip], should skip on staging.`);
        return true;
    }
    return false; // default: don't skip
}
(Mine is a bit more complicated, including evaluating whether a series of test case IDs are in a provided list, but this shows the essence. Obviously feel free to tweak as desired; the point is a boolean value returned to the event listener defined for event.test.before.)
Then, using BDD:
#skip #otherTags
Scenario: Some scenario
    Given I do something first
    When I do another thing
    Then I see a result
Or standard test code:
const { I } = inject();

Feature("Some Feature");

Scenario("A scenario description #tag1 #skip #tag2", async () => {
    console.log("Execute some code here");
});
I don't know if that will necessarily give you the exact terminal output you want (External Server -- [SKIPPED]); however, it does notate the test as skipped in the terminal and report it accordingly in Allure or xUnit results, at least insofar as CodeceptJS's skip() reports it.
For "selective execution" (which is related but not the same as "skip"), I've implemented a custom mocha.grep() utilization in my bootstrap. A key snippet is as follows. To be added either to a bootstrap anonymous function on codecept.conf.js or some similar related location.
const selective = ["tag1", "tag2"];
const re = selective.join("|");
const regex = new RegExp(re);
mocha.grep(regex);
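As a small variation on the same idea (the SELECTIVE_TAGS variable name is just an example), the tag list could come from an environment variable instead of being hard-coded:

// e.g. SELECTIVE_TAGS="tag1,tag2" npx codeceptjs run
const selective = (process.env.SELECTIVE_TAGS || "").split(",").filter(Boolean);
if (selective.length > 0) {
    mocha.grep(new RegExp(selective.join("|")));
}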

Refactoring a path in JS cannot find module

Hi, I'm working on a project at the moment and currently have it calling the API, great! However, I'm looking at refactoring my test files as there will be quite a few, so initially this is how I've done it.
var chakram = require('chakram');
expect = chakram.expect;
var baseFixture = require('../../test/fixtures/');

it("Should return the matching test file", function () {
    var response = chakram.get("http://testurl");
    return expect(response).to.have.json(baseFixture + 'testfile.json')
});
However with the code above I'm getting this error.
Error: Cannot find module '../../../../test/fixtures'
If I put the JSON file into the variable declared at the top, with a call to the base fixture within the function, the test will pass.
Where am I going wrong?
Thanks in advance.
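For what it's worth, a sketch of one way this is often handled (assuming the directory layout from the question): require() loads a module or JSON file rather than returning a directory path, so the base path can be built as a plain string with path.join and each fixture required from it:

var path = require('path');
var chakram = require('chakram');
var expect = chakram.expect;

// Build a plain base path instead of require()-ing the directory
var fixtureDir = path.join(__dirname, '../../test/fixtures');

it("Should return the matching test file", function () {
    var response = chakram.get("http://testurl");
    // require() the JSON fixture itself, not the directory
    return expect(response).to.have.json(require(path.join(fixtureDir, 'testfile.json')));
});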

Writing tests as functions in Dalek.js

I've been using Dalek.js for testing and have been enjoying it. I would like to take some tests from one of my files and put them in a separate "common_tests.js" file so that I can use them in other places without having to duplicate code. However, whenever I try moving the test code out of the main file, the test fails to execute correctly -- I get the "RUNNING TEST - '[TEST NAME]'" message twice. The first does nothing, and the second immediately hangs while trying to open the page. Is there something I'm doing wrong?
My code is in this format:
In common_tests.js:
module.exports = {
    testFunction: function(test) {
        test
            .open("www.google.com")
            // other testing stuff here that's fine when in the other file
            .done();
    }
};
In my tests file:
var common = require('../common_tests.js');

module.exports = {
    'Trying to test with functions': function(test) {
        common.testFunction(test);
        test.done();
    }
};
I never used DalekJS, but it looks like you are calling test.done() twice: once in the shared function you wrote in common_tests.js, and then again in your test file after calling the common function.
I would have written the test file like this:
var common = require('../common_tests.js');

module.exports = {
    'Trying to test with functions': common.testFunction,
};
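Alternatively, if you want to keep the wrapper function, just drop the extra test.done() call, since the shared function already ends the test:

var common = require('../common_tests.js');

module.exports = {
    'Trying to test with functions': function(test) {
        // common.testFunction() already calls .done(), so don't call it again here
        common.testFunction(test);
    }
};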

QUnit multiple scripts in one page but no interaction between them

I'm very new to unit testing (this is my first day working with QUnit and I've never worked with any other testing system before), and I'm a bit unclear on how to test stuff from multiple script files in one QUnit page without letting the scripts interact with each other. What I mean is: say I have script1.js, and it calls hello(), and hello() is defined in script2.js. How can I run a unit test on script1.js to make sure it calls hello(), but mock the output of hello() so that it's a true unit test, and then separately run script2.js's hello()?
Basically, how am I supposed to hide one script's global variables and functions from another script in a single QUnit page?
This depends entirely on how the various script files are organized, as well as the system as a whole. If you were using Angular, for example, then you would be able to inject dependencies when you include a module in another script file. There are tools for mocking things out and "spying" on function calls, such as Sinon, but it still depends heavily on how your code is organized.
For the sake of argument, let's say the two files look like so, and we'll ignore the design pattern (although you should seriously be considering that)...
// File A:
window.greeting = function() {
    var world = hello();
    return 'hello ' + world;
}

// File B:
window.hello = function() {
    // possibly lots of code to determine what to return...
    var value = 'foobar';
    return value;
}
The hello() function could just as easily return any other value based on the state of the system, user input, etc. For our case it doesn't, and what we want to do is mock out File B's code so that we don't have to test what it is doing, just that it gives back a string. We could (and should) do this with a proper mocking/dependency injection library. However, just to give you a sense of the minimal setup you could do, and to see the general approach, here's our QUnit test file:
var _hello;

QUnit.module('File A', {
    setup: function() {
        _hello = window.hello; // hold onto the old value
        // now we mock out hello()
        window.hello = function() {
            window.hello.called++; // track calls to it
            return 'world'; // return our static value
        }
        window.hello.called = 0;
    },
    teardown: function() {
        // put the old one back
        window.hello = _hello || window.hello;
    }
});
QUnit.test('Ensure greeting is correct', function(assert) {
    var result = greeting();
    assert.equal(window.hello.called, 1, 'hello should be called only once');
    assert.equal(result, 'hello world', 'The greeting call should be "hello world"');
});
And if you want to see it running, here is a jsfiddle for you. As I said, this is a simple example to show you how you could do this, but you should look into proper code organization (think AMD modules, require, Angular, Ember, things like this) and a proper mocking library.
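For comparison, a rough sketch of the same mock written with Sinon (this assumes Sinon is loaded on the page and a QUnit version that uses beforeEach/afterEach hooks):

var helloStub;

QUnit.module('File A (with Sinon)', {
    beforeEach: function() {
        // Replace window.hello with a stub that always returns 'world'
        helloStub = sinon.stub(window, 'hello').returns('world');
    },
    afterEach: function() {
        // Put the original implementation back
        helloStub.restore();
    }
});

QUnit.test('Ensure greeting is correct', function(assert) {
    var result = greeting();
    assert.ok(helloStub.calledOnce, 'hello should be called only once');
    assert.equal(result, 'hello world', 'The greeting call should be "hello world"');
});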

Make real async requests for jasmine integration testing

I've got an Angular app going with some Jasmine testing. I recently added a new method to make a query to Elasticsearch within one of my services; it looks like this:
test:
    function() {
        return "Working";
    },
executeSearch:
    function(field, value, size, page_number) {
        return service.executeRegExSearch(field, value, size, page_number);
    },
//new method
executeRegExSearch:
    function(field, value, size, page_number) {
        //main search body, I know it works because
        //I am getting expected results in the browser
    }
And then in my jasmine tests, I've got something like this.
//initialization stuff
var $httpBackend;
var searchAPI;

beforeEach(inject(function($injector) {
    jasmine.DEFAULT_TIMEOUT_INTERVAL = 5000;
    $httpBackend = $injector.get('$httpBackend');
    searchAPI = $injector.get('searchAPI');
}));

it("is loaded properly", function() {
    expect(searchAPI.test() == "Working").toBe(true); //passes
});

it("can make a request", function() {
    var field = "col_name";
    var value = "bb.*"; //this is matching in my browser/application
    var size = 10;
    var page_number = 1;
    var res;
    searchAPI.executeSearch(field, value, size, page_number).then(function(res) {
        res = res;
        alert(JSON.stringify(res));
        done();
    });
    $httpBackend.flush();
});
But when I run, I get the error
Unexpected request: GET http://myserver/index-1/_search?source={"query":{"regexp":{"col_name":{"value":"bb.*"}}}}"&size=10&from=0 No more request expected in http://localhost:8081/js/angularjs/angular-mocks.js (line 1180)
I'm not sure how to use the mocks, and wasn't even aware that I was using them. All I want is to be able to run a suite of tests that make actual calls to my backend to confirm that things are being integrated properly... you know, integration tests.
Anyone have any advice?
I didn't want to do tests with Selenium stuff, so to test my app with real backend calls I used a modified version of angular-mocks.
It works just like unit tests in Jasmine.
I'm using it with Jasmine 2.0, so a test looks like the following:
it('myTest', function (done) {
    _myService.apiCall()
        .then(function () {
            expect(true).toBeTruthy();
            done();
        });
});
NB: the done is needed because of the async call.
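One documented alternative (not necessarily what the modified angular-mocks above does) is the ngMockE2E flavour of $httpBackend, which can let requests through to the real server via passThrough(); 'myApp' below is a placeholder for the application module under test:

// Wrap the app in a module that lets HTTP requests hit the real backend
angular.module('myAppE2E', ['myApp', 'ngMockE2E'])
    .run(function($httpBackend) {
        // Let every GET/POST pass through to the actual server
        $httpBackend.whenGET(/.*/).passThrough();
        $httpBackend.whenPOST(/.*/).passThrough();
    });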
If you want to do end-to-end (e2e) tests for Angular applications you should definitely use Protractor (it was created by the Angular team); you can write your tests using Jasmine and run them against your application. If you are just doing unit testing, I think the best way is to mock $httpBackend.
https://github.com/angular/protractor
https://docs.angularjs.org/guide/e2e-testing
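For illustration, a minimal Protractor setup could look something like this (the file name and spec path are assumptions):

// conf.js -- run with: protractor conf.js
exports.config = {
    framework: 'jasmine',
    directConnect: true,         // drive Chrome directly, no Selenium server needed
    specs: ['e2e/*.spec.js'],    // assumed location of the e2e specs
    capabilities: {
        browserName: 'chrome'
    }
};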
