I inherited a legacy JavaScript library written simply as a list of functions, as follows:
function checkSubtree(targetList, objId)
{
...
}
function checkRootSubtree(targetList, rootLength, rootInfo, level)
{
...
}
To test it with JsTestDriver, do I have to 'clean' it up to adhere to some JavaScript best practice, or can I test it without modification?
Thanks
The HtmlDoc document fragment feature of jsTestDriver helps when unit testing DOM-dependent JavaScript code. You probably want a little HTML chunk to apply those functions to, as you go along.
When you've proven that they work, you'll see ways of making the functions more testable. This is one of the hidden gems of unit testing: emergent software design. Since you want to be able to test your code in isolation, you'll have an incentive to reduce coupling.
You can still test it using JsTestDriver. JsTestDriver is only a test runner, it doesn't require your code to be written in any special way. It's hard to give any advice on actual testing without seeing some code (i.e. function bodies).
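For example, a minimal sketch of a test against one of the global functions, using the HtmlDoc fixture mentioned above, might look like this (the fixture markup, the id convention, and the assertion about checkSubtree's return value are all assumptions, since the function bodies aren't shown):
TestCase('CheckSubtreeTest', {
  testFindsObjectInSubtree: function () {
    /*:DOC += <ul id="tree"><li id="obj-1"></li></ul> */
    // Build the target list from the fixture and call the legacy function as-is.
    var targetList = document.getElementById('tree').getElementsByTagName('li');
    var result = checkSubtree(targetList, 'obj-1');
    assertNotUndefined(result);
  }
});
No changes to the library are needed; the functions just have to be loaded before the test file in jsTestDriver.conf.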
I have built various Test Automation frameworks using the Page Object Pattern with Java (https://code.google.com/p/selenium/wiki/PageObjects).
Two of the big benefits I have found are:
1) You can see what methods are available when you have an instance of a page (e.g. typing homepage. shows all the actions/methods you can call from the homepage)
2) Because navigation methods (e.g. goToHomepage()) return an instance of the subsequent page (e.g. homepage), you can navigate through your tests simply by writing the code and seeing where it takes you.
e.g.
WelcomePage welcomePage = loginPage.loginWithValidUser(validUser);
PaymentsPage paymentsPage = welcomePage.goToPaymentsPage();
These benefits work perfectly with Java since the type of object (or page in this case) is known by the IDE.
However, with JavaScript (dynamically typed language), the object type is not fixed at any point and is often ambiguous to the IDE. Therefore, I cannot see how you can realise these benefits on an automation suite built using JavaScript (e.g. by using Cucumber).
Can anyone show me how you would use JavaScript with the Page Object Pattern to gain these benefits?
From Gerrit0's comment above and investigating it further, it seems a great way to achieve this is to use TypeScript (a statically typed superset of JavaScript):
https://en.wikipedia.org/wiki/TypeScript
I don't know much about this pattern, but here are some resources that may help:
http://www.guru99.com/page-object-model-pom-page-factory-in-selenium-ultimate-guide.html
http://www.assertselenium.com/automation-design-practices/page-object-pattern/
If you use JetBrains products like IntelliJ IDEA, it will do code completion and proper navigation for you. In the JavaScript world the page object is a well-known pattern too; AngularJS offers it in its own e2e test framework (http://www.protractortest.org/#/page-objects). Personally I use IIFEs for page objects and IntelliJ does the rest. If that doesn't fit your needs, you can still choose TypeScript and transpile it to JavaScript.
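For illustration, here is a minimal sketch of an IIFE-based page object in the Protractor style (the locator id, the paymentsPage module, and the file layout are assumptions):
// welcomePage.js - assumes Protractor globals (element, by) as in the linked docs
var paymentsPage = require('./paymentsPage'); // hypothetical sibling page object

var welcomePage = (function () {
  // Locators stay private to the IIFE.
  var paymentsLink = element(by.id('payments-link')); // assumed element id

  return {
    goToPaymentsPage: function () {
      paymentsLink.click();
      return paymentsPage; // returning the next page object keeps the fluent navigation style
    }
  };
})();

module.exports = welcomePage;
With this shape, typing welcomePage. in IntelliJ still surfaces goToPaymentsPage through completion, which recovers much of benefit 1) without static types.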
I'm quite new to JavaScript unit testing. One thing keeps bothering me. When testing JavaScript, we often need to do DOM manipulation. It looks like I am unit testing a method/function in a Controller/Component, but I still need to depend on the HTML elements in my templates. Once the id (or other attributes used as selectors in my test cases) is changed, my test cases also need to be CHANGED! Wouldn't this violate the purpose of unit testing?
One of the toughest parts of javascript unit testing is not the testing, it's learning how to architect your code so that it is testable.
You need to structure your code with a clear separation of testable logic and DOM manipulation.
My rule of thumb is this:
If you are testing anything that is dependent on the DOM structure, then you are doing it wrong.
In summary: try to test data manipulations and logical operations only.
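As a rough sketch of that separation (the function names and markup are invented for illustration):
// Pure logic: no DOM involved, easy to unit test.
function calculateTotal(prices) {
  return prices.reduce(function (sum, price) { return sum + price; }, 0);
}

// Thin DOM layer: kept deliberately trivial and left out of the unit tests.
function renderTotal(total) {
  document.getElementById('total').textContent = total.toFixed(2);
}
Your unit tests target calculateTotal only, so renaming the #total element never breaks them.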
I respectfully disagree with #BentOnCoding. Most often a component is more than just its class. A component combines an HTML template and a JavaScript/TypeScript class. That's why you should test that the template and the class work together as intended. Class-only tests can tell you about class behavior, but they cannot tell you whether the component is going to render properly and respond to user input.
Some people say you should test that in integration tests. But integration tests are slower to write and more expensive (in terms of time and resources) to run and maintain, so testing most of your component functionality in integration tests might slow you down.
That doesn't mean you should skip integration tests. While integration and E2E tests may be slower and more expensive than unit tests, they give you more confidence that your app is working as intended. An integration test is where individual units/components are combined and tested as a group, but it shouldn't be considered the only place to test your component's template.
I think I'd second #BentOnCoding's recommendation that what you want to unit test is your code, not anything else. When it comes to DOM manipulation, that's browser code, such as appendChild, replaceChild etc. If you're using jQuery or some other library, the same still applies--you're calling some other code to do the manipulation, and you don't need to test that.
So how do you assert that calling some function on your viewmodel/controller resulted in the DOM structure that you wanted? You don't, just as you wouldn't unit test that calling a stored procedure on a DB resulted in a specific row in a specific table. Instead, you need to think about how to separate the parts of your controller that deal with inputs/outputs from the parts that manipulate the DOM.
For instance, if you had a method that called alert() based on some conditions, you'd want to separate the method into two:
One that takes and processes the inputs
One that calls window.alert()
During the test, you'd substitute window.alert (or your proxy method to it) with a fake (see SinonJS), and call your input processor with the conditions to cause (or not cause) the alert. You can then assert different values on whether the fake was called, how many times, with what values, etc. You don't actually test window.alert() because it's external to your code. It's assumed that those external dependencies work correctly. If they don't, then that's a bug for that library, but it's not your unit test's job to uncover those bugs. You're only interested in verifying your own code.
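A rough sketch of that split, with the notification call stubbed out by Sinon in a Jasmine-style test (the validateAge function and its messages are invented for illustration):
// Production code: input processing is separated from the notification call.
function validateAge(age, notify) {
  if (age < 18) {
    notify('Too young');
    return false;
  }
  return true;
}

// In the app you'd pass window.alert as notify; in the test you pass a Sinon stub.
describe('validateAge', function () {
  it('notifies when the age is under 18', function () {
    var notify = sinon.stub();
    var result = validateAge(16, notify);
    expect(result).toBe(false);
    expect(notify.calledOnce).toBe(true);
    expect(notify.calledWith('Too young')).toBe(true);
  });
});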
I'm getting started with javascript unit testing (with Jasmine).
I have experience in unit testing C# code. But given that JavaScript is a dynamic language, I find it very useful to exploit that and write tests using the expressive power of JavaScript, for instance:
describe('known plugins should be exported', function () {
  var plugins = ['bundle', 'less', 'sass', 'coffee', 'jsn', 'minifyCSS', 'minifyJS', 'forward', 'fingerprint'];
  plugins.forEach(function (plugin) {
    it('should export plugin named ' + plugin, function () {
      expect(all[plugin]).toBeDefined();
    });
  });
});
As far as this kind of unconventional test writing goes, I haven't gone further than tests like the above (an array listing test cases that are very similar).
So I guess my question is:
Is it fine to write tests like this, or should I constrain myself to a more "statically typed" test fixture?
Great question!
Yes, it is perfectly fine to write unit tests like this. It's even encouraged.
JavaScript being a dynamic language lets you mock objects really easily. DI and IoC are really easy to do.
In general, testing with Jasmine (or Mocha which I personally prefer) is a pleasant and fun experience.
It's worth mentioning that since you're in a dynamic language, you need tests you didn't need in statically typed languages. Tests commonly enforce that members and methods exist, and check their types.
With no interfaces to define your contract, your tests often define your code's contract, so it's really not uncommon to see tests do this sort of verification (as in your code) where you wouldn't in C#.
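For example, a contract-style test might look like this (the userStore object and its method names are invented for illustration):
describe('userStore contract', function () {
  var expectedMethods = ['add', 'remove', 'findById'];

  expectedMethods.forEach(function (name) {
    it('exposes a ' + name + ' function', function () {
      expect(typeof userStore[name]).toBe('function');
    });
  });
});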
I am trying to write test-driven JavaScript. Testing each function, I know, is crucial. But I've hit a stumbling block: the plugin I am writing needs to have some private functions, and I cannot peek into how they behave. What would I need to do to keep my code well tested without changing its structure too much? (I am OK with exposing some API, though within limits.)
I am using sinon, QUnit, and Pavlov.
If you are doing test-driven development (as suggested by the tags), each line of production code is first justified by a failing test case.
In other words, the existence of each and every line of your production code is implicitly tested, because without it some test would fail. That being said, you can safely assume that a private function/lambda/closure is already tested, by the definition of TDD.
If you have a private function and you are wondering how to test it, it means you weren't doing TDD in the first place, and now you have a problem.
To sum up - never write production code before the test. If you follow this rule, every line of code is tested, no matter how deep it is.
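In practice that means exercising the private function only through the public surface. A rough sketch with QUnit (the plugin shape, names, and markup are assumptions):
// Inside the plugin, formatLabel is a private helper hidden in the IIFE.
var badgePlugin = (function () {
  function formatLabel(count) {          // private
    return count > 99 ? '99+' : String(count);
  }
  return {
    render: function (count) {           // public API
      return '<span class="badge">' + formatLabel(count) + '</span>';
    }
  };
})();

// The test drives formatLabel indirectly through render.
QUnit.test('render caps the badge at 99+', function (assert) {
  assert.equal(badgePlugin.render(150), '<span class="badge">99+</span>');
});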
I've got a collection of JavaScript files from a 3rd party, and I'd like to remove all the unused methods to get the size down to a more reasonable level.
Does anyone know of a tool that does this for JavaScript? Or at the very least one that gives a list of used/unused methods, so I could do the trimming manually? This would be in addition to running something like the YUI JavaScript compressor tool...
Otherwise my thought is to write a perl script to attempt to help me do this.
No, because you can "use" methods in insanely dynamic ways like this:
obj[prompt("Gimme a method name.")]();
Check out JSCoverage. It generates code coverage statistics that show which lines of a program have been executed (and which have been missed).
I'd like to remove all the unused methods to get size down to a more reasonable level.
There are a couple of tools available:
npm install -g fixmyjs
fixmyjs <filename or folder>
fixmyjs is a configurable module that uses JSHint (Github, docs) to flag functions that are unused and performs cleanup as well.
I'm not sure whether it removes unused functions or only flags them. Though it is a great tool for cleanup, it appears to lack compatibility with later versions of ECMAScript (more info below).
There is also the Google Closure Compiler, which claims to remove dead JS, but it is more of a build tool.
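If you go that route, dead-code removal needs the advanced optimization level, roughly like this (flags quoted from memory, so double-check against the current docs):
java -jar closure-compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS --js library.js --js_output_file library.min.js
Be aware that ADVANCED_OPTIMIZATIONS can break code that accesses properties or functions dynamically (the obj[name]() case mentioned above), so test the output carefully.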
Updated
If you are using something like Babel, consider adding ESLint to your text editor, which can trigger a warning on unused methods and even variables and has a --fix CLI option for autofixing some errors and style issues.
I like ESLint because it contains multiple plugins for alternate libs (like React warnings if you're missing a prop), allowing you to catch bugs in advance. They have a solid ecosystem.
As an example: on my NodeJS projects, the config I use is based on the Airbnb Style Guide.
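As a sketch, a minimal .eslintrc.js along those lines might look like this (assuming the eslint-config-airbnb-base package is installed; adjust the rule levels to taste):
// .eslintrc.js - minimal sketch, not a complete config
module.exports = {
  extends: 'airbnb-base',
  env: { browser: true, node: true },
  rules: {
    // surface unused functions and variables instead of silently shipping them
    'no-unused-vars': ['warn', { args: 'after-used' }]
  }
};
Running eslint --fix . will then auto-correct the fixable style issues while leaving the unused-code warnings for you to act on.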
You'll have to write a perl script. Take no notice of the nay-sayers above.
Such a tool could work with libraries that are designed to only make function calls explicitly. That means no delegates or pointers to functions would be allowed, the use of which in any case only results in unreadable "spaghetti code" and is not best practice. Even if it removes some of these hidden functions, you'll discover most if not all of them in testing. The ones you don't discover will be so infrequently used that they will not be worth your time fixing. Don't obsess over perfection. People go mad doing that.
So applying this one restriction to JavaScript (and libraries) will result in incredible reductions in page size and therefore load times, not to mention readability and maintainability. This is already the case for tools that remove unused CSS, such as grunt_CSS and unCSS (see http://addyosmani.com/blog/removing-unused-css/), which report typical reductions down to one tenth of the original size.
It's a win/win situation.
It's noteworthy that all interpreters must address this issue of how to manage self-modifying code. For the life of me I don't understand why people want to persist with unrestrained freedom. As noted by Triptych above, JavaScript functions can be called in ways that are literally "insane". This insane flexibility corrupts the fundamental doctrine of separation of code and data, enables real-time code injection, and invalidates any attempt to maintain code integrity. The result is always unreadable code that is impossible to debug, and the side effect on JavaScript (removing the ability to run automatic code pre-optimisation and validation) is much, much worse than any possible benefit.
AND - you'd have to feel pretty insecure about your work to want to deliberately obfuscate it from both your colleagues and yourself. Browser clients that do work extremely well take the "less is more" approach, and the best example I've seen to date is Microsoft Office's combination of Access Web Forms paired with SharePoint Access Services. The productivity of having a ubiquitous, heavy, tightly managed runtime interpreter client and its server-side clone is absolutely phenomenal.
The future of JavaScript self-modifying code technologies is therefore bringing them back into line to respect the...
KISS principle of code and data: Keep It Separate, Stupid.
Unless the library author kept track of dependencies and provided a way to download the minimal code [e.g. MooTools Core download], it will be hard to identify 'unused' functions.
The problem is that JS is a dynamic language and there are several ways to call a function.
E.g. you may have a method like
function test()
{
//
}
You can call it directly:
test();
Or you can call it dynamically, in a way no static analysis can reliably detect:
var i = 10;
var hello = i > 1 ? 'test' : 'xyz';
window[hello]();
I know this is an old question, but UglifyJS2 supports removing unused code, which may be what you are looking for.
Also worth noting is that ESLint supports a rule called no-unused-vars, which does some basic detection of whether functions are being used or not. It definitely detects this if you make the function anonymous and store it in a variable (but just be aware that, unlike a function declaration, a function expression assigned to a variable isn't hoisted).
In the context of detecting unused functions, you can, while extreme, consider breaking up the majority of your functions into separate modules, because there are packages and tools to help detect unused modules. There is a little segment of sindresorhus's thoughts on tiny modules which might be relevant to that philosophy, though it may be extreme for your use case.
The following would help:
If you have fully covered test cases, running a code coverage tool like istanbul (https://github.com/gotwarlost/istanbul) or nyc (https://github.com/istanbuljs/nyc) would give a hint about untouched functions.
At the very least, the above will help you find the covered functions that you may have thought were unused.
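For example (assuming a Mocha test suite; swap in your own runner):
npx nyc --reporter=text --reporter=html mocha
Functions that show zero hits in the report are candidates for removal, provided the tests really exercise every supported code path.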