I am trying to write unit tests for a mobile app that has been developed in Appcelerator Titanium. I am trying to unit test it with TiUnit and Jasmine.
One of my methods uses String.format(). How do I unit test that method, since String.format() is not defined in JavaScript?
function MyMethod(_flag, _myProperty, _propertyValue) {
    if(_flag) {
        _myProperty = String.format('The value :', _propertyValue);
    }
}
Is it possible to mock this? Or can I add format() to the String prototype so that whenever format() is encountered in the .js under test, it executes the format() which I will define in the test suite?
A way to get around this problem is to make this a pure function.
Take this small change as an example:
function MyMethod(_flag, _myProperty, _propertyValue, formatFunction) {
    if(_flag) {
        _myProperty = formatFunction('The value :', _propertyValue);
    }
    return _myProperty;
}
Now MyMethod does not have a dependency on anything external to the function. This allows you to use whatever string formatter you want, and in your case a mocked one.
In your test, you now have the ability to do something like this:
it('should format the string', () => {
    const mockFormatter = () => { /* do the mock operation of your choosing here */ };
    const formattedString = MyMethod(yourFlag, yourProperty, yourPropertyValue, mockFormatter);
    expect(formattedString).toEqual(expectedOutput);
});
What this allows is for you to not have to manipulate the global String object/prototypes, which can have implications elsewhere. Instead, you create a function that is agnostic to the formatter and are able to easily mock it.
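As a concrete sketch of that approach (a hand-rolled mock is used here instead of a Jasmine spy, and the function is written to return its result so there is something to assert on):

```javascript
// Function under test: the formatter is injected, so no global String.format is needed.
function MyMethod(flag, propertyValue, formatFunction) {
  if (flag) {
    return formatFunction('The value :', propertyValue);
  }
  return undefined;
}

// A hand-rolled mock that records its arguments and returns a canned result.
function makeMockFormatter(result) {
  const calls = [];
  const mock = (...args) => {
    calls.push(args);
    return result;
  };
  mock.calls = calls;
  return mock;
}

const mockFormatter = makeMockFormatter('FORMATTED');
const output = MyMethod(true, 42, mockFormatter);

console.log(output);                     // 'FORMATTED'
console.log(mockFormatter.calls.length); // 1
console.log(mockFormatter.calls[0]);     // ['The value :', 42]
```

In a real Jasmine suite you would reach for jasmine.createSpy() instead of the hand-rolled mock, but the injection mechanics are identical.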
Lastly, now that I have hopefully provided a path forward on your original question, I'm curious why you are after mocking the formatter. This seems like something you would always want to validate, and there would be no harm in doing so. To me (and this can lead to very in-depth and often pedantic discussions on testing), this already suffices as a unit test. It is not "pure", but as far as side effects go there are none, and you are testing some basic expectations around data manipulation.
I think mocking String.format() introduces unnecessary complexity to the test without any real gain in code-confidence.
Edit: I assumed String.format() was a JS function I had not heard of, but that does not appear to be the case.
To achieve what you are after, and avoid the need to mock entirely, I think you should use string interpolation via string literals, or concatenation.
See here:
function MyMethod(_flag, _myProperty, _propertyValue) {
    if(_flag) {
        _myProperty = `The value : ${_propertyValue}`; // option 1 uses interpolation
        // _myProperty = 'The value : ' + _propertyValue; // option 2 uses concatenation
    }
}
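Once interpolation is used there is nothing left to mock; a self-contained variant that returns the value (hypothetical, since the original assigns to a parameter) can be tested directly:

```javascript
// Hypothetical returning variant of MyMethod using template-literal interpolation.
function MyMethod(flag, propertyValue) {
  if (flag) {
    return `The value : ${propertyValue}`;
  }
  return undefined;
}

console.log(MyMethod(true, 99));  // 'The value : 99'
console.log(MyMethod(false, 99)); // undefined
```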
Related
I am reading a book where it says one way to handle impure functions is to inject them into the function instead of calling them directly, as in the example below.
normal function call:
const getRandomFileName = (fileExtension = "") => {
    ...
    for (let i = 0; i < NAME_LENGTH; i++) {
        namePart[i] = getRandomLetter();
    }
    ...
};
inject and then function call:
const getRandomFileName2 = (fileExtension = "", randomLetterFunc = getRandomLetter) => {
    const NAME_LENGTH = 12;
    let namePart = new Array(NAME_LENGTH);
    for (let i = 0; i < NAME_LENGTH; i++) {
        namePart[i] = randomLetterFunc();
    }
    return namePart.join("") + fileExtension;
};
The author says such injections could be helpful when we are trying to test the function, as we can pass a function we know the result of, to the original function to get a more predictable solution.
Is there any difference between the above two functions in terms of being pure as I understand the second function is still impure even after getting injected?
An impure function is just a function whose behaviour is not determined solely by its inputs.
That is, it mutates data outside of its own scope, or does not predictably produce the same output for the same input.
In the first example NAME_LENGTH is defined outside the scope of the function - so if that value changes the behaviour of getRandomFileName also changes - even if we supply the same fileExtension each time. Likewise, getRandomLetter is defined outside the scope - and almost certainly produces random output - so would be inherently impure.
In the second example everything is referenced in the scope of the function, passed to it, or defined in it. This means that it could be pure - but isn't necessarily. Again, this is because some functions are inherently impure - so it would depend on how randomLetterFunc is defined.
If we called it with
getRandomFileName2('test', () => 'a');
...then it would be pure - because every time we called it we would get the same result.
On the other hand if we called it with
getRandomFileName2(
    'test',
    () => 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'.charAt(Math.floor(26 * Math.random()))
);
...then it would be impure, because calling it each time could give a different result.
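A runnable sketch of both cases (the function body is reproduced so the snippet is self-contained):

```javascript
// Self-contained version of the injected file-name generator.
const getRandomFileName2 = (fileExtension = "", randomLetterFunc) => {
  const NAME_LENGTH = 12;
  const namePart = new Array(NAME_LENGTH);
  for (let i = 0; i < NAME_LENGTH; i++) {
    namePart[i] = randomLetterFunc();
  }
  return namePart.join("") + fileExtension;
};

// Pure usage: a constant letter function always yields the same name.
const a = getRandomFileName2('.txt', () => 'a');
const b = getRandomFileName2('.txt', () => 'a');
console.log(a);       // 'aaaaaaaaaaaa.txt'
console.log(a === b); // true

// Impure usage: a random letter function yields a different name (almost) every call.
const rand = () => 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'.charAt(Math.floor(26 * Math.random()));
const c = getRandomFileName2('.txt', rand);
console.log(c.length); // 16
```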
There's more than one thing at stake here. At one level, as Fraser's answer explains, assuming that getRandomLetter is impure (by being nondeterministic), then getRandomFileName also is.
By making getRandomFileName2 a higher-order function, you at least give it the opportunity to be a pure function. Assuming that getRandomFileName2 performs no other impure action, if you pass it a pure function, it will itself, transitively, be pure.
If you pass it an impure function, it will also, transitively, be impure.
Giving a function an opportunity to be pure can be useful for testing, but doesn't imply that the design is functional. You can also use dependency injection and Test Doubles to make objects deterministic, but that doesn't make them functional.
In most languages, including JavaScript, you can't get any guarantees from functions as first-class values. A function of a particular 'shape' (type) can be pure or impure, and you can check this neither at compile time nor at run time.
In Haskell, on the other hand, you can explicitly declare whether a function is pure or impure. In order to even have the option of calling an impure action, a function must itself be declared impure.
Thus, the opportunity to be impure must be declared at compile time. Even if you pass a pure 'implementation' of an impure type, the receiving, higher-order function still looks impure.
While something like what is described in the OP would be technically possible in Haskell, it would make everything impure, so it wouldn't be the way you go about it.
What you do instead depends on circumstances and requirements. In the OP, it looks as though you need exactly 12 random values. Instead of passing an impure action as an argument, you might instead generate 12 random values in the 'impure shell' of the program, and pass those values to a function that can then remain pure.
There's more at stake than just testing. While testability is nice, the design suggested in the OP will most certainly be impure 'in production' (i.e. when composed with a proper random value generator).
Impure actions are harder to understand, and their interactions can be surprising. Pure functions, on the other hand, are referentially transparent, and referential transparency fits in your head.
It'd be a good idea to have as a goal pure functions whenever possible. The proposed getRandomFileName2 is unlikely to be pure when composed with a 'real' random value generator, so a more functional design is warranted.
Anything that contains randomness (or Date, or similar) will be considered impure and hard to test, because what it returns doesn't depend strictly on its inputs (it is different every time). However, if the random part of the function is injected, the function can be made "pure" in the test suite by replacing the injected randomness with something predictable.
function getRandomFileName(fileExtension = "", randomLetterFunc = getRandomLetter) {}
can be tested by calling it with a predictable "getLetter" function instead of a random one:
getRandomFileName("", predictableLetterFunc)
I have two functions: myFunctionA() and myFunctionB().
myFunctionA() returns an Object which includes the key Page_Type which has a string value.
myFunctionB() processes a number of entries in the Object returned by myFunctionA(), including Page_Type and its string value.
Later, myFunctionA() is updated so it no longer returns an object including the key Page_Type but, instead, the key Page_Types - which has an array value.
Because of this, myFunctionB() will now also need to be updated - it will no longer be processing Page_Type which is a string, but Page_Types which is an array.
If I understand correctly (and I may not), the above is an example of a hard-coded dependency, and the extensive refactoring it throws up can be avoided by (I think) deploying the Dependency Injection pattern (or possibly even the Service Locator pattern?) instead (??)
But, despite reading around this subject, I am still uncertain as to how Dependency Injection can work in PHP or Javascript functions (much of the explanation deals with programming languages like C++ and OOP concepts like Classes, whereas I am dealing with third party functions in PHP and javascript).
Is there any way to structure my code so that updating myFunctionA() (in any significant manner) will not then require me to also update myFunctionB() (and all other functions calling myFunctionA() - for instance myFunctionC(), myFunctionD(), myFunctionE() etc.) ?
And what if myFunctionH() requires myFunctionG() requires myFunctionF() which requires myFunctionA()? I don't want to be in a position where updating myFunctionA() now means that three more functions (F, G and H) all have to be updated.
Attempt at an answer:
The best answer I can think of at present - and this may not be a best practice answer because I don't know yet if there is a formal problem which corresponds with the problem I am describing in the question above - is the following restatement of how I presented the setup:
I have two (immutable) functions: myFunctionA__v1_0() and myFunctionB__v1_0().
myFunctionA__v1_0() returns an Object which includes the key Page_Type which has a string value.
myFunctionB__v1_0() processes a number of entries in the Object returned by myFunctionA__v1_0(), including Page_Type and its string value.
Later, myFunctionA__v1_0() still exists but is also succeeded by myFunctionA__v2_0(), which returns an object including the key Page_Types - which has an array value.
In order for myFunctionB to access the object returned by myFunctionA__v2_0(), there will now also need to be a myFunctionB__v1_1(), capable of processing the array Page_Types.
This can be summarised as:
myFunctionB__v1_0() requires object returned by myFunctionA__v1_0()
myFunctionB__v1_1() requires object returned by myFunctionA__v2_0()
Since each function becomes immutable after being formally named, what never happens is that we end up with an example of myFunctionB__v1_0() requiring object returned by myFunctionA__v2_0().
I don't know if I am approaching this the right way, but this is the best approach I have come up with so far.
It is very common in programming for the provider - i.e. myFunctionA() - to know nothing about its consumer(s), e.g. myFunctionB(). The only correct way to handle this is to define an API up front and never change it ;)
I don't see the purpose of versioning the consumer - that reason would have to be "downstream" of myFunctionB() - ie. a consumer of myFunctionB(), that the author of myFunctionB() is not in control of... in which case myFunctionB() itself becomes a provider, and the author would have to deal with that (perhaps using the same pattern that you do)... But it's not your problem to deal with.
As for your provider myFunctionA(): If you cannot define an interface / API for the data itself up front - ie. you know that the structure of the data will have to change (in a non-backwards-compatible way), but you don't know how... then you will need to version something one way or another.
You are miles ahead of most since you see this coming and plan for it from the beginning.
The only way to avoid having to make changes to the consumer myFunctionB() at some point, is to make all changes to the provider myFunctionA() in a backwards-compatible way. The change you describe is not backwards compatible because myFunctionB() cannot possibly know what to do with the new output from myFunctionA() without being modified.
The solution you propose sounds like it should work. However, there are at least a couple of downsides:
It requires you to keep an ever-growing list of legacy functions around in case there are ever any consumers requesting their data. This will become very complicated to maintain, and likely impossible in the long run.
Depending what changes need to be made in the future it might no longer be possible to produce the output for myFunctionA__v1_0() at all - in your example you add the possibility of several page_types in your system - in this case you can probably just rewrite v1_0 to use the first and the legacy consumers will be happy. But if you decide to completely drop the concept of page_types from your system, you would have to plan for the complete removal of v1_0 one way or another. So you need to establish a way to communicate this to the consumers.
The only correct way to handle this still is to define an API up front and never change it.
Since we have established that:
you will have to make backwards incompatible changes
you don't know anything about the consumers and you don't have the power to change them when needed
I propose that instead of defining an immutable API for your data, you define an immutable API that allows you to communicate to consumers when they should or must upgrade.
This might sound complicated, but it doesn't have to be:
Accept a version parameter in the provider:
The idea is to let the consumer explicitly tell the provider which version to return.
The provider might look like this:
function myFunctionA(string $version) {
    $page_types = ['TypeA', 'TypeB'];
    $page = new stdClass();
    $page->title = 'Page title';

    switch ($version) {
        case '1.0':
            $page->error = 'Version 1.0 no longer available. Please upgrade!';
            break;
        case '1.1':
            $page->page_type = $page_types[0];
            $page->warning = 'Deprecated version. Please upgrade!';
            break;
        case '2.0':
            $page->page_types = $page_types;
            break;
        default:
            $page->error = 'Unknown version: ' . $version;
            break;
    }

    return $page;
}
So the provider accepts a parameter which will contain the version that the consumer can understand - usually the one that was the newest when the consumer was last updated.
The provider makes a best-effort attempt to deliver the requested version:
- if it is not possible, there is a "contract" in place to inform the consumer ($page->error will exist on the return value)
- if it is possible, but there is a newer version available, another "contract" is in place to inform the consumer of this ($page->warning will exist on the return value).
And handle a few cases in the consumer(s):
The consumer needs to send the version it expects as a parameter.
function myFunctionB() {
    //The consumer tells the provider which version it wants:
    $page = myFunctionA('2.0');

    if (isset($page->error)) {
        //Notify developers and throw an error
        pseudo_notify_devs($page->error);
        throw new Exception($page->error);
    } else if (isset($page->warning)) {
        //Notify developers
        pseudo_notify_devs($page->warning);
    }

    do_stuff_with($page);
}
The second line of an older version of myFunctionB() - or a completely different consumer myFunctionC() might instead ask for an older version:
$page = myFunctionA('1.1');
This allows you to make backwards-compatible changes any time you want - without consumers having to do anything. You can do your best to still support old versions when at all possible, providing "graceful" degradation in legacy consumers.
When you do have to make breaking changes you can continue supporting the old version for a while before finally removing it completely.
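For readers more at home in JavaScript, the same version-negotiation contract can be sketched like this (versions and field names mirror the PHP example above):

```javascript
// Provider: returns the shape the consumer asked for, or an error/warning.
function myFunctionA(version) {
  const pageTypes = ['TypeA', 'TypeB'];
  const page = { title: 'Page title' };
  switch (version) {
    case '1.0':
      page.error = 'Version 1.0 no longer available. Please upgrade!';
      break;
    case '1.1':
      page.page_type = pageTypes[0];
      page.warning = 'Deprecated version. Please upgrade!';
      break;
    case '2.0':
      page.page_types = pageTypes;
      break;
    default:
      page.error = 'Unknown version: ' + version;
  }
  return page;
}

// Consumer: asks for the version it understands and honours the contract.
function myFunctionB() {
  const page = myFunctionA('2.0');
  if (page.error) throw new Error(page.error);
  if (page.warning) console.warn(page.warning);
  return page.page_types.join(',');
}

console.log(myFunctionB());            // 'TypeA,TypeB'
console.log(myFunctionA('1.0').error); // 'Version 1.0 no longer available. Please upgrade!'
```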
Meta information
I'm not confident this would be useful... but you could add some meta information for consumers using an outdated version:
function myFunctionA(string $version) {
    # [...]
    if (isset($page->error) || isset($page->warning)) {
        $page->meta = [
            'current_version' => '3.0',
            'API_docs' => 'http://some-url.fake'
        ];
    }
    return $page;
}
This could then be used in the consumer:
pseudo_notify_devs(
    $page->error .
    ' - Newest version: ' . $page->meta['current_version'] .
    ' - Docs: ' . $page->meta['API_docs']
);
...if I were you I would be careful not to overcomplicate things though... Always KISS
Dependency injection is more relevant in an OOP context. However, the main thing I would do here is to stop thinking in terms of returning whatever you happen to have available, and start considering how the two methods work together and what their contract is.
Figure out what the logical output of myFunctionA() is, encode that contract into an object, and convert the data you have to that format. This way, even if the way you fetch data in myFunctionA() changes, you only have to update that one conversion.
As long as you adhere to that contract (which can be represented through a custom object), you won't have to change myFunctionB() or the other methods that expect to receive data as per the contract.
So my main take away here would be to start thinking about the data you need and pass it around not in the structure you receive it, but in the way that it makes the most sense for your application.
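A sketch of that idea in JavaScript (all names are illustrative): the raw data is normalised into one agreed-upon contract object as soon as it is fetched, and consumers only ever see the contract:

```javascript
// The agreed contract: consumers always receive pageTypes as an array.
function toPageContract(raw) {
  return {
    title: raw.title,
    pageTypes: Array.isArray(raw.Page_Types)
      ? raw.Page_Types
      : raw.Page_Type !== undefined ? [raw.Page_Type] : [],
  };
}

// Old and new raw shapes both normalise to the same contract.
const oldRaw = { title: 'Home', Page_Type: 'article' };
const newRaw = { title: 'Home', Page_Types: ['article', 'landing'] };

console.log(toPageContract(oldRaw).pageTypes); // ['article']
console.log(toPageContract(newRaw).pageTypes); // ['article', 'landing']

// myFunctionB depends only on the contract, so it survives the change unmodified.
function myFunctionB(page) {
  return page.pageTypes.length;
}
console.log(myFunctionB(toPageContract(newRaw))); // 2
```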
It seems you have broken the interface between myFunctionA and myFunctionB by changing the return type from string to array.
I don't think DI can be of help.
Firstly, this is not Dependency Injection.
Your myFunctionA() could be called a Producer, since it provides data; what it provides is a Data Structure.
Your myFunctionB() could be called a Consumer, since it consumes the data provided by myFunctionA().
So, in order to make your Producer and your Consumer work independently, you need to add another layer between them, called a Converter. The Converter layer converts the Data Structure provided by the Producer into a well-known Data Structure that can be understood by the Consumer.
I really recommend you read the book Clean Code, chapter 6: Objects and Data Structures, so you will be able to fully understand the concept of Data Structures used above.
Example
Assume we have a Data Structure called Hand, with properties for the right hand and the left hand.
class Hand {
    private $rightHand;
    private $leftHand;
    // Add constructor, getters and setters
}
myFunctionA() will provide a Hand object; Hand is a Data Structure.
function myFunctionA() {
    $hand = Hand::createHand(); //Assume a function to create new Hand object
    return $hand;
}
Let's say we have another Data Structure, called Leg, which can be consumed by myFunctionB():
class Leg {
    private $rightLeg;
    private $leftLeg;
    // Add constructor, getters and setters
}
Then we need a Converter in the middle to convert from Hand to Leg, for use with myFunctionB():
class Converter {
    public static function convertFromHandToLeg($hand) {
        $leg = self::makeFromHand($hand); //Assume a method to convert from Hand to Leg
        return $leg;
    }
}
myFunctionB(Converter::convertFromHandToLeg($hand));
So, whenever you edit the response of myFunctionA(), it means you are editing the Hand Data Structure. You only need to edit the Converter to make sure it continues to convert from Hand to Leg correctly; you do not need to touch myFunctionB(), and vice versa.
This is very helpful when you have other Producers that provide Hand, like the myFunctionC(), myFunctionD()... you mention in your question, and also many other Consumers that consume Leg, like myFunctionH(), myFunctionG()...
Hope this helps.
Edit: I found this interesting library which looks like it can do exactly what I was describing at the bottom: https://github.com/philbooth/check-types.js
Looks like you can do it by calling check.quacksLike.
I'm fairly new to using JavaScript and I'm loving the amount of power it offers, but sometimes it is too flexible for my sanity to handle. I would like an easy way to enforce that some argument honors a specific interface.
Here's a simple example method that highlights my problem:
var execute = function(args)
{
    executor.execute(args);
}
Let's say that the executor expects args to have a property called cmd. If it is not defined, an error might be caught at another level when the program tries to reference cmd but it is undefined. Such an error would be more annoying to debug than explicitly enforcing cmd's existence in this method. The executor might even expect that args has a function called getExecutionContext() which gets passed around a bit. I could imagine much more complex scenarios where debugging would quickly become a nightmare of tracing through function calls to see where an argument was first passed in.
Neither do I want to do something on the lines of:
var execute = function(args)
{
    if(args.cmd === undefined || args.getExecutionContext === undefined ||
       typeof args.getExecutionContext !== 'function')
        throw new Error("args not setup correctly");
    executor.execute(args);
}
This would entail a significant amount of maintenance for every function that has arguments, especially for complex arguments. I would much rather be able to specify an interface and somehow enforce a contract that tells javascript that I expect input matching this interface.
Maybe something like:
var baseCommand =
{
    cmd: '',
    getExecutionContext: function(){}
};

var execute = function(args)
{
    enforce(args, baseCommand); //throws an error if args does not honor
                                //baseCommand's properties
    executor.execute(args);
}
I could then reuse these interfaces amongst my different functions and define objects that extend them to be passed into my functions without worrying about misspelling property names or passing in the wrong argument. Any ideas on how to implement this, or where I could utilize an existing implementation?
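For what it's worth, a hand-rolled enforce along those lines could look like the following (a sketch, not an existing library; the typeof-based comparison and error messages are my own choices):

```javascript
// Throws if `obj` is missing any member of `iface`, or has one of the wrong type.
function enforce(obj, iface) {
  for (const key of Object.keys(iface)) {
    if (!(key in obj)) {
      throw new Error('Missing member: ' + key);
    }
    if (typeof obj[key] !== typeof iface[key]) {
      throw new Error(
        'Member ' + key + ' should be ' + typeof iface[key] +
        ', got ' + typeof obj[key]
      );
    }
  }
  return obj;
}

const baseCommand = {
  cmd: '',
  getExecutionContext: function () {},
};

// A conforming object passes through untouched.
enforce({ cmd: 'ls', getExecutionContext: () => ({}) }, baseCommand);

// A missing or mistyped member throws.
let failed = false;
try {
  enforce({ cmd: 'ls' }, baseCommand);
} catch (e) {
  failed = true;
}
console.log(failed); // true
```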
I don't see any other way to enforce this. It's one of the side effects of the dynamic nature of JavaScript. It's essentially a free-for-all, and with that freedom comes responsibility :-)
If you're in need of type checking you could have a look at TypeScript (it's not JavaScript) or Google's Closure Compiler (JavaScript with comments).
Closure Compiler uses comments to figure out what type is expected when you compile. It looks like a lot of trouble, but can be helpful in big projects.
There are other benefits that come with Closure Compiler, as you will be forced to produce comments that are used in an IDE like NetBeans; it minifies your code, removes unused code and flattens namespaces. So code organized in namespaces like myApp.myModule.myObject.myFunction will be flattened to minimize object lookup.
The cons are that you need to use externs when you use libraries that are not compiler-compatible, like jQuery.
The way that this kind of thing is typically dealt with in JavaScript is to use defaults. Most of the time you simply want to provide a guarantee that certain members exist, to prevent things like reference errors, but I think that you could use the principle to get what you want.
By using something like jQuery's extend method, we can guarantee that a parameter implements a set of defined defaults.
var defaults = {
    prop1: 'exists',
    prop2: function() { return 'foo'; }
};

function someCall(args) {
    var options = $.extend({}, defaults, args);
    // Do work with options... It is now guaranteed to have members prop1 and prop2,
    // defined by the caller if they exist, using defaults if not.
}
If you really want to throw errors at run time if a specific member wasn't provided, you could perhaps define a function that throws an error, and include it in your defaults. Thus, if a member was provided by the caller, it would overwrite the default, but if it was missed, it could either take on some default functionality or throw an error as you wish.
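A sketch of that last idea without jQuery, using Object.assign: a default whose value is a throwing function marks the member as required, so a missing member fails loudly on first use:

```javascript
// Calling a 'required' placeholder throws, so a missing member fails loudly.
const required = (name) => () => {
  throw new Error('Missing required option: ' + name);
};

const defaults = {
  prop1: 'exists',
  prop2: required('prop2'), // caller must supply prop2
};

function someCall(args) {
  const options = Object.assign({}, defaults, args);
  return options.prop2(); // blows up unless the caller provided prop2
}

console.log(someCall({ prop2: () => 'foo' })); // 'foo'

let threw = false;
try {
  someCall({});
} catch (e) {
  threw = true;
}
console.log(threw); // true
```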
I come from a heavy C# background and am currently learning my way through ASP.NET MVC with Knockout.js and JavaScript. I'm very much a TDD based person and have hit a number of snags which I seem to be struggling with. I've read a number of examples of jsTestDriver and all seem fairly straightforward until put to the test...
Basically, what I'm trying to unit test (using JetBrains WebStorm 5.0.4 in conjunction with JsTestDriver) is a simple assertion that an exception is thrown when a certain case is met. This should be simple right?
My actual test case looks like this in jsTestDriver (having removed any underlying basic code and simply raised the exception in the unit test function itself):
GridControllerTest.prototype.testBasicExceptionType = function () {
    assertException(function() {
        throw "InvalidDataSourceException";
    }, "InvalidDataSourceException");
};
That is a test case asserting that my function throws an "InvalidDataSourceException", isn't it? Initially I tried this with a function that declared the type:
function InvalidDataSourceException (){}

GridControllerTest.prototype.testBasicExceptionType = function () {
    assertException(function() {
        throw new InvalidDataSourceException();
    }, "InvalidDataSourceException");
};
Can anyone point out the blindingly obvious to me and tell me why I can't get such a simple test to pass? Am I misunderstanding the structure of the unit test function?
The difference is that in the first example you're throwing a string and in the second you're throwing an object. Objects in JavaScript don't have canonical names associated with them, essentially because there's no type system (just prototypes). In the second example, the function is assigned to window.InvalidDataSourceException, but the function object as such doesn't have the name in it. In particular, there's no default reflection to get a name or equivalent of toString() to get a canonical value.
Personally, I ditched using assertException entirely, because it was just too flaky for this kind of reason. I started using try-catch blocks. I put a call to fail() at the end of the try block, because the code was expected to have thrown by then, and I put another test point in the catch block, to ensure that the exception was as expected. It's a better test pattern, in my opinion, since it splits the test for the change of control from the test for the reason that control changed.
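A sketch of that try/catch pattern in plain JavaScript (fail and assertEquals are minimal stand-ins for the test framework's helpers):

```javascript
// Minimal stand-ins for the test framework helpers.
const fail = (msg) => { throw new Error('FAIL: ' + msg); };
const assertEquals = (expected, actual) => {
  if (expected !== actual) throw new Error('expected ' + expected + ', got ' + actual);
};

function InvalidDataSourceException() {}

function codeUnderTest() {
  throw new InvalidDataSourceException();
}

function testThrowsInvalidDataSource() {
  try {
    codeUnderTest();
    fail('expected codeUnderTest to throw'); // control should never reach here
  } catch (e) {
    // Check the reason control changed: the exception's constructor.
    assertEquals(InvalidDataSourceException, e.constructor);
  }
  return 'passed';
}

console.log(testThrowsInvalidDataSource()); // 'passed'
```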
Is it possible to test myInnerFunction below?
var val = function() {
    var myInnerFunction = function(input) {
        return input + ' I ADDED THIS';
    };
    return myInnerFunction('test value');
}();
Because myInnerFunction is essentially a private member of the anonymously executed outer function, it doesn't seem like it is testable from the outside.
You could intentionally expose a testing hook to the outside world. The outer function needs a name for the hook to be reachable afterwards (val itself only holds the returned string), so the hook is attached to the named function rather than via arguments.callee, which is forbidden in strict mode anyway:
var outer = function() {
    var myInnerFunction = function(input) {
        return input + ' I ADDED THIS';
    };
    /* START test hook */
    outer.__test_inner = myInnerFunction;
    /* END test hook */
    return myInnerFunction('test value');
};
var val = outer();
now, once outer has been run at least once, you can reference outer.__test_inner and call it with testable inputs.
The benefits to this approach:
1. you choose what is exposed and not (also a negative 'cause you have to REMEMBER to do so)
2. all you get is a copy-reference to the private method, so you can't accidentally change it, only use it and see what it produces
The drawbacks:
1. if the private member changes (or relies on) state of its host/parent function, it's harder for you to unit test around that, since you have to recreate or artificially control the host/parent state at the same time
2. As mentioned, these hooks have to be manually added
If you got really clever, you could have your build process look for blocks of comments like the above and remove the test hooks when creating your production build.
AFAIK, unit testing is not concerned with the internal workings of the things you test. The point is that you test the functionality, i.e. that it does what it's supposed to do, not how it does it.
So if it uses an internal private member, that member should not be testable...
You can test the external behavior that is observable. In this simple case you returned just the value of the inner function, but in a real world example you might combine the result of that inner function with something else. That combination is what you would test, not the direct outputs of the private method.
Trying to test the private method will make your code difficult to change and refactor, even when the external behavior is preserved. That said, I like to view unit tests not as extensive tests of your code, but as simply providing an example of the API and how it behaves under different conditions. ;)
I think my answer for this is (like so many things) that I'm doing it wrong. What I've defined as a 'private' function is really something that needs to be tested. It was only private because I didn't want to expose it within a utilities api or something like that. But it could still be exposed through my application namespace.
So within the anonymous function that is executed on-dom-ready, I just attach my pre-defined functions as event handlers to the proper DOM hooks. The functions themselves, while not stored with my more open utilities functions, are still stored publicly within a package in my namespace associated with the DOM structure they are dealing with. This way I can get at them and test them appropriately.
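As a sketch of that arrangement (all names here are made up): the handler logic lives on a public namespace where tests can reach it, and the on-DOM-ready function only does the wiring:

```javascript
// Public application namespace: handlers are defined here, not inside the IIFE.
const myApp = {
  handlers: {
    // Logic that used to be a 'private' inner function, now publicly testable.
    formatLabel(input) {
      return input + ' I ADDED THIS';
    },
  },
};

// On DOM ready, only the wiring happens (commented out: no DOM in this sketch).
// document.addEventListener('DOMContentLoaded', () => {
//   document.querySelector('#label').textContent =
//     myApp.handlers.formatLabel('test value');
// });

// Tests can now reach the logic directly through the namespace.
console.log(myApp.handlers.formatLabel('test value')); // 'test value I ADDED THIS'
```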