I have an app that runs in different modes (think of it as running on different platforms as well as using different protocols), one of which has a long loading period every time a page is opened. There are some other minor differences, but all of those can be handled using wdio's settings variables.
Currently I have one test file (with a describe) for each section of the app, which would be fine if one of the configurations being tested didn't have such a long wait time. For that configuration I've decided to handle everything in one file, so that it all runs on the same page.
Instead of copying and pasting all the tests I had previously into this one large file, I was wondering if I could somehow reuse them, as if they were functions.
Right now I have just wrapped things in functions, so for example:
// test1.js
var assert = require('assert')

module.exports = function test1 () {
  describe('Test1', function () {
    var settings = {}
    before(function () {
      // do something
    })
    it('do something', function () {
      assert.ok(true)
    })
    it('do something else', function () {
      assert.ok(true)
    })
  })
}
In another file we call every function we created:
var test1 = require('./test1')
var test2 = require('./test2')
...
var test10 = require('./test10')

describe('Main Test', function () {
  test1()
  test2()
  ...
  test10()
})
This would solve my DRY problem if I could somehow select which test functions to run on demand, using something like:
wdio wdio/wdio.conf.js --specs wdio/test/spects/browser/test1.js
which obviously will not work.
Basically I want a way to reuse my tests (the describe blocks). Is what I was doing the right path? If not, how should it be done?
So I have figured out the best way to go about this after I found some documentation about it here.
I will just do as I previously described, except that instead of shoving all those functions into the same file, I'll keep them in their own files. There may still be a better solution out there, but this is already an improvement over copying and pasting all the test cases for the different modes of running my app.
Just programmatically create different describe blocks: wrap the describe block in a function that takes all the parameters that change (including the name of the block), and simply invoke the function to create the variations.
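For instance, a minimal sketch of the idea (the suite name and settings parameter are illustrative, not from the original code):
// parameterizedSuite.js
var assert = require('assert')

// Everything that varies between modes is a parameter,
// including the name of the describe block itself.
module.exports = function parameterizedSuite (suiteName, settings) {
  describe(suiteName, function () {
    it('does something with the current settings', function () {
      assert.ok(settings)
    })
  })
}
A spec file for a given mode then just invokes it with that mode's values:
var parameterizedSuite = require('./parameterizedSuite')

parameterizedSuite('Section 1 (slow mode)', { protocol: 'slow' })
parameterizedSuite('Section 1 (fast mode)', { protocol: 'fast' })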
I made a small repo to show this in practice: https://github.com/fatso83/forum-support-code/commit/cb2bc10b1d8bdae31e8f0a8c4e724c70583a5e11
Related
I have a class that is growing too big for a single file and would like to split it into different files. I have this working in a sample project, but Intellisense can't "see" the functions in those other files.
Using a sandbox as a sample project, I've managed to get functions to execute across files, but Intellisense doesn't suggest those functions as options to use. I'm fairly new to JS but have been using the node module https://github.com/DoctorMcKay/node-steam-tradeoffer-manager as inspiration.
Example BigClass (index.js):
module.exports = BigClass

function BigClass () {}

require('./expansion') // I've tried moving this about, to no avail

BigClass.prototype.doAThing = function () {
  console.log(`I did a thing`)
  this.doAnotherThing() // Intellisense doesn't suggest this as an option - but it's legit
}
Example expansion.js:
const BigClass = require('./index')

BigClass.prototype.doAnotherThing = function () {
  console.log(`I did another thing`)
}
I would expect Intellisense to suggest doAnotherThing as an option. It's legit code and executes as expected (prints "I did a thing" and "I did another thing").
My end goal is to be able to split my 2000-line class up into smaller files and still have Intellisense fully working as if it were in a single file, so if there's an easier way to achieve this (say, using ES6 classes somehow) then I'm all ears.
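Since the post mentions alternatives, one worth sketching (this is an assumption on my part, not from the original post, and how much it helps depends on the editor's language service): export the extra methods as a plain object with no side effects, and merge them onto the prototype with Object.assign so the relationship is visible to static analysis.
// expansion.js - exports the methods as a plain object, no side effects
module.exports = {
  doAnotherThing: function () {
    console.log(`I did another thing`)
  }
}

// index.js
module.exports = BigClass

function BigClass () {}

// Merge the methods from expansion.js onto the prototype explicitly
// instead of relying on a require() call with side effects.
Object.assign(BigClass.prototype, require('./expansion'))

BigClass.prototype.doAThing = function () {
  console.log(`I did a thing`)
  this.doAnotherThing()
}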
So I'm working with an enterprise tool that has JavaScript scripts embedded throughout. These scripts have access to certain built-in objects.
Unfortunately, the tool doesn't give any good way to unit test these scripts. So my thinking was to maintain the scripts in a repo, mock the built-in objects, and then set up unit tests that run on my system.
I'm pretty ignorant of how JavaScript works in terms of building, module loading, etc., but I've just been trying things and seeing what works. I started by trying out Mocha, making the directory a node project (even though it's just a directory full of scripts, not a real node project). The default test works, but when I try to test functions from my code, I get errors.
Here's what a sample script from my project looks like. I'm hoping to test the functions, not the entire script:
var thing = builtInObject.foo();
doStuff(thing);
doMoreStuff(thing);

function doStuff(thing) {
  // Code
}

function doMoreStuff(thing) {
  // More Code
}
Here's what a test file looks like:
var assert = require('assert');
var sampleScript = require('../scripts/sampleScript.js');

describe('SampleScript', function() {
  describe('#doStuff()', function() {
    it('should do stuff', function() {
      assert.equal(-1, sampleScript.doStuff("input"));
    });
  });
});
The problem happens when I import ("require") the script: I get errors because it doesn't know about builtInObject. Is there any way I can "inject" those built-in objects as mocks, defining the variables and functions those objects contain so that the runtime knows what they are?
I'm open to alternative frameworks or ideas. Sorry for my ignorance; I'm not really a JavaScript guy. I know this is a bit hacky, but it seems like the best option since I'm not getting out of the enterprise tool.
So if I get it right, you want to unit test the frontend file in a Node.js environment.
There are some complications.
First, in terms of Node.js, each file has its own scope, so the variables defined inside the file won't be accessible even if you require the file. You need to export the vars to use them:
module.exports.doStuff = doStuff; // at the end of the sample script
Second, if you start using things like require/module.exports on the frontend, they'll be undefined there and you'll get an error.
The easiest way to make your code run in both environments is the following. Inside the sample script:
var isNode = typeof module !== 'undefined' && module.exports;

if (isNode) {
  // So we are exporting only when we are running in a Node env.
  // After this, doStuff and doMoreStuff will be available in the test.
  module.exports.doStuff = doStuff;
  module.exports.doMoreStuff = doMoreStuff;
}
As for the builtInObject: the easiest way to mock it is to do the following in the test, before the require:
global.builtInObject = {
  foo: function () { return 'thing'; }
};
The test just passed for me. See the sources.
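Putting it together, a minimal sketch of the whole test file (the paths and the -1 expectation are carried over from the question as placeholders):
var assert = require('assert');

// Mock the built-in object *before* requiring the script under test,
// so its top-level builtInObject.foo() call succeeds.
global.builtInObject = {
  foo: function () { return 'thing'; }
};

var sampleScript = require('../scripts/sampleScript.js');

describe('SampleScript', function() {
  describe('#doStuff()', function() {
    it('should do stuff', function() {
      assert.equal(-1, sampleScript.doStuff("input"));
    });
  });
});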
Global variables are not good anyway, but in this case it seems you cannot avoid using them.
Alternatively, you can avoid Node.js entirely by configuring something like Karma, which physically launches a browser and runs the tests in it. :)
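For reference, a minimal Karma config sketch (the scripts/ and test/ layout is my assumption, and it requires the karma-mocha plugin to be installed):
// karma.conf.js
module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],   // reuse the existing Mocha suites
    files: [
      'scripts/**/*.js',     // assumed location of the scripts under test
      'test/**/*.js'         // assumed location of the tests
    ],
    browsers: ['Chrome']
  });
};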
I have previously run into the problem of data hiding under modularization in JavaScript. Please see the links below:
Module pattern- How to split the code for one module into different js files?
JavaScript - extract out function while keeping it private
To illustrate the problem, see the example below. My goal is to split my long js file into 2 files, but some functions need to access some private variables:
first.js:
(function(context) {
  var parentPrivate = 'parentPrivate';
})(window.myGlobalNamespace);
second.js:
(function(context) {
  this.childFunction = console.log('trying to access parent private field: ' + parentPrivate);
}(window.myGlobalNamespace.subNamspace));
Now this doesn't work, because the child doesn't have access to the parent. One solution is to make parentPrivate publicly visible, but that is unacceptable in my case.
Quoting #Louis, who gave an answer to one of the previous questions, where I asked:
"We can't have a field that's accessible by the child but not to the outside public (i.e. protected). Is there any way to achieve that?"
His answer:
If you want modularization (i.e. you want the child to be coded separately from the parent), I do not believe this is possible in JavaScript. It would be possible to have child and parent operate in the same closure, but then this would not be modular. This is true with or without RequireJS.
The problem is that the parent and the child are not inside the same closure. Therefore I'm thinking: does it make sense to create a library that puts files into the same closure?
Something like:
concatenator.putIntoOneClosure(["public/js/first.js", "public/js/second.js"]);
Of course we can take in more arguments to specify namespaces, etc. Note that this is not the same functionality we get from RequireJS: RequireJS achieves modularization, while this concatenator focuses on data hiding under the condition of modularization.
So does any of the above make sense? Or am I missing out some important points? Any thoughts are welcomed.
If you need things available in two separate files, then you can't have true privacy... however, something similar to this may work for you:
first.js:
(function(context) {
  var sharedProperties = {
    sharedProp1: "This is shared"
  };

  function alertSharedProp1() {
    alert(sharedProperties.sharedProp1);
  }

  window[context] = {
    sharedProperties: sharedProperties,
    alertSharedProp1: alertSharedProp1
  };
})("myGlobalNamespace");
second.js:
(function(parent, context) {
  // CHANGED: `this` doesn't do what you think it does here.
  var childFunction = function() {
    console.log('trying to access parent private field: ' + window.myGlobalNamespace.sharedProperties.sharedProp1);
  };

  window[parent][context] = {
    childFunction: childFunction
  };
}("myGlobalNamespace", "subNamspace"));

window.myGlobalNamespace.subNamspace.childFunction();
Edit: detailed answer based on comments
What I did was to set up a source file that looked like this:
master.js
(function() {
  ##include: file1.js##
  ##include: file2.js##
}());
Then I wrote a script (in Windows scripting, in my case) that read in master.js and went through it line by line, looking for the ##include: filename.js## lines. When it found such a line, it read in the include file and dumped its contents out in place.
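A minimal Node.js sketch of such a preprocessor (the original was written in Windows scripting; the file names and layout here are illustrative):
// build.js - expands ##include: file.js## directives in master.js
var fs = require('fs');
var path = require('path');

var srcDir = 'src'; // assumed layout: src/master.js, src/file1.js, ...
var lines = fs.readFileSync(path.join(srcDir, 'master.js'), 'utf8').split('\n');

var output = lines.map(function(line) {
  var match = line.match(/##include:\s*(\S+)##/);
  if (match) {
    // Replace the directive line with the contents of the included file.
    return fs.readFileSync(path.join(srcDir, match[1]), 'utf8');
  }
  return line;
}).join('\n');

fs.writeFileSync('bundle.js', output);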
My particular needs were special, since I was writing a browser plugin that needed to work in three different browsers and had to be wrapped up separately for each, yet for my own sanity I wanted separate files to work with.
For performance optimization I'm using a single JavaScript file to handle all the pages of the website I'm working on.
The basic structure I'm using is as follows:
(function($) {
  // Shared functions ...

  // A function for every page
  function Page1() {
  }

  Page1.prototype = {
    init: function() {
      //...
    },
    //more functions
  };

  // more pages

  $(document).ready(function() {
    if ($('#page1uniqueidentifier').length) {
      var page1 = new Page1();
      page1.init();
    }
    // more pages
  });
})(jQuery);
I'm not an experienced JavaScript programmer, so I've been searching a lot about best practices and different ways of structuring my code. I ended up choosing this one, but I'm not really sure about it and I have a few questions:
Is it worth using prototype if I'm never going to have more than a single instance of a page? I think I understand how prototype works and that I'm not gaining any performance there, but I'm using it as a best practice because, if different instances did exist, these functions would be shared by every instance.
Is there a better way to structure the code?
Should I put the call to the init function inside the constructor and then only call new Page1()?
function Page1() {
  this.init();
}

if ($('#page1uniqueidentifier').length) {
  new Page1();
}
"For performance optimization I'm using a single JavaScript file to handle all the pages of the website I'm working on"
That makes no sense. You should separate your code into files, then run all your js files through a minifier/concatenator to package everything up.
Anyway, to answer your questions:
If you are only going to have one instance, then prototype won't buy you anything. However, if you were going to use more than one, would you go back and change it? Plus, using prototype won't hurt you either, so you might as well do it for the learning experience.
You should create files that make sense according to the functionality they implement. I would separate your object definition into its own file, for example, so that when you look at that file, all you see is the code for that object.
If you have a constructor function, you don't really need init, do you?
I'm working on a web application that includes different JavaScript files depending on where I am in the app. For instance, I have a display.js for each page, each of which has an init() function that is called as soon as the page is loaded.
This works well for the webapp, but in my QUnit tests, where all script files are included from a single index.html, functions with the same names override each other.
How are such problems best handled? One test index.html per page creates lots of boilerplate and makes it non-trivial to execute all test cases. That's why I decided to name each and every function distinctively, e.g. initFrontPage() instead of init(). This, however, makes the application code a bit weird: not only do I have to include the right file, I also have to call the right function in it. Is there a better way?
The solution is to use namespaces:
In foo/display.js:
window.foo = {};
foo.init = function () { ... };
In bar/display.js:
window.bar = {};
bar.init = function () { ... };
Then, in the page that uses bar/display.js's init method:
(function (display) {
  display.init();
}(bar));
It would be a good idea to wrap your display.js code in an IIFE as well.
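For example, a sketch of what bar/display.js could look like wrapped in an IIFE (the private helper is illustrative):
// bar/display.js
(function () {
  // Private to this file; other pages' display.js files can define
  // helpers with the same name without clashing.
  function render() { /* ... */ }

  window.bar = {
    init: function () {
      render();
    }
  };
}());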