JavaScript + Selenium, writing readable tests

I'm writing tests for my web application with Selenium WebDriver for JavaScript. The problem I'm facing is that because many operations are asynchronous, the indentation goes crazy. For example, to read the value of an element, I'm using a structure like this:
driver.findElement(By.id('my-element')).then(function(elem) {
    elem.getAttribute('innerHTML').then(function(text) {
        // Some operation
        // Read next element with same structure
    });
});
As you can see, if I need to read the values of multiple elements, the indentation gets very deep very quickly. Are there any best practices to avoid this kind of issue? Is using "then" the only way to read values from elements?
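One common way to keep that nesting flat (a sketch of my own, not taken from the answer below; it assumes a reasonably recent selenium-webdriver and a test runner that supports async functions) is to await each promise instead of nesting callbacks:

const { By } = require('selenium-webdriver');

// 'driver' is assumed to be an already-built WebDriver instance.
async function readValues(driver) {
  // findElement() returns a promise-like WebElement, so each read can be
  // awaited in sequence and the indentation stays flat.
  const first = await driver.findElement(By.id('my-element')).getAttribute('innerHTML');
  const second = await driver.findElement(By.id('other-element')).getAttribute('innerHTML'); // hypothetical second id
  return { first, second };
}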

If your concern is all about writing readable tests, then you should have some kind of code quality control tool in your codebase.
I prefer JSLint, which helps me write cleaner code. It follows the standards of the ECMAScript programming language and complains if it finds any issues.

Related

Why is using exceptions more acceptable in Python than in JavaScript?

You can replace "javascript" with other languages here. Basically, what I've found from reading is that Python actively encourages the use of exceptions and a series of if tests to manage code. Readability is often cited, as well as cleaner-looking code when 'duck typing'.
However, when working in JavaScript or some other languages, best practices often seem to suggest 'coding defensively' and covering as much as you can with if statements and return types in order to avoid using exceptions. The reason most often cited is that exceptions are a very expensive operation.
Here's an example:
https://stackoverflow.com/a/8987401/2668545
Does Python face the same exception cost as JavaScript, and are the best practices what they are because there's more emphasis on readability/debuggability than on performance?
Does Python have a different way of handling exceptions than JavaScript or other languages that don't recommend using exceptions?
Am I misinterpreting the advice?
Or is it something else?
My take on this would be that though Python is a dynamically typed language, it is strongly typed at the same time; see the explanation here. This means that if something goes wrong deep down the call hierarchy (like trying to convert an empty string to an integer, or dividing by zero), the interpreter raises an exception that bubbles up the call graph.
JavaScript and many other interpreted languages tend to gloss over such things and silently continue computing (rubbish) for as long as possible. Essentially, the programmer has to defend against JavaScript itself.
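A quick illustration of that difference (my own example, not from the original answer): the same operations that raise exceptions in Python keep on producing values in JavaScript.

// JavaScript keeps computing where Python would raise and stop:
parseInt('');     // NaN        (Python: int('') raises ValueError)
1 / 0;            // Infinity   (Python: 1 / 0 raises ZeroDivisionError)
'5' - 3;          // 2          (implicit string-to-number coercion)
undefined + 1;    // NaN, which then silently propagates through later arithmetic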
It is thus consistent when a user-defined Python module behaves in the same way as the standard library modules and the interpreter itself: achieve the expected result or raise an exception.
The advantages are:
Improved readability: the expected sequence of actions is not mixed with error handling.
Extra polymorphism. It is possible to pass any object as a function argument, and things will work out if the object has the properties/members used by the function. The programmer writing the function does not have to know the exact type of the argument in advance (dynamic typing). If something goes wrong, there will be a call trace to investigate. Defensive checks are likely to be too restrictive and, at the same time, not 100% bullet-proof.
The considerations of readability and extensibility are probably more important than the ones of performance.

JavaScript - What Level of Code Optimization can one expect?

So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
and secondly, I know that Java's compiler optimization will take care of this anyway, i.e. the actual JVM code will end up being inlined - so performance-wise, there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine - but as my code will end up running in many different browsers, let's just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point. There are plenty of features that are optimized quite efficiently; I was just too lazy to write one down. Just imagine the above snippet inside a for loop which is called 100'000 times.
Don't expect much from the optimization; there won't be
tail-recursion optimization,
loop unrolling,
function inlining,
etc.
As client-side JavaScript is not designed to do heavy CPU work, the optimization won't make a huge difference.
There are some guidelines for writing high-performance JavaScript code; most of them are minor techniques, like:
Don't use certain functions like eval() or arguments.callee, which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones, e.g. don't write your own containers, JSON parser, etc.
Use local variables instead of global ones (see the sketch after this list).
Never use a for-each loop over an array.
Use parseInt() rather than Math.floor().
AND stay away from jQuery.
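As a small illustration of the local-variable guideline (my own sketch, not part of the original answer), caching a global lookup in a local variable avoids re-resolving it on every loop iteration:

// sumRoots is a hypothetical helper used only to illustrate the guideline.
function sumRoots(values) {
  var sqrt = Math.sqrt;          // cache the global Math.sqrt lookup locally
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += sqrt(values[i]);    // no repeated Math property lookup inside the loop
  }
  return total;
}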
All these techniques are more like things learned from experience, and there may be reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf to help you decide which approach is better.
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; this will not boost your performance much, but it will make your code smaller.
Generally speaking, the final performance depends far more on how well your code is organized and how carefully your algorithm is designed than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, that Bar and FooFactory are stateless, and that someStaticStuff() and getFoo() don't change anything):
for (int i = 0; i < 10000000; i++)
    blub.doSomethingImportantWithAFooObject(
        FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless or not. So this kind of code should be avoided in any language.
You are right, the level of optimization differs from JS VM to VM. But there is a way of working around that: there are several tools that will optimize/minimize your code for you. One of the most popular ones is by Google; it's called the Closure Compiler. You can try out the web version, and there is a command-line version for build scripts etc. Beyond that, there is not much I would try in the way of optimization, because after all, JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, animating huge background images, or pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
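To make that concrete (an illustrative sketch with hypothetical element and class names, not from the original answer), compare a handful of ad-hoc inline style writes with a single class toggle whose rules live in the stylesheet:

// 'panel' and 'panel--expanded' are made-up names used only for illustration.
var panel = document.getElementById('panel');

// More expensive: several ad-hoc inline style writes, each of which can
// trigger style recalculation.
panel.style.width = '200px';
panel.style.backgroundColor = '#eee';
panel.style.border = '1px solid #ccc';

// Cheaper: one class toggle, with the visual rules kept in CSS.
panel.classList.add('panel--expanded');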
Making only the modifications you have to in order to get the job done (i.e. rather than the "HULK SMASH (DOM)!" method of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, calling for JSON data to update only the minimum number of elements needed), and other common-sense approaches, will get you a lot farther than hours of minor tweaking of a for-loop (though, again, common sense will get you pretty far there, too).
Good luck!

Do these two node.js modules do the same thing?

https://github.com/caolan/async
https://github.com/maxtaco/tamejs
These are two modules. To me, it seems like the same thing, right?
Or...are they used in different situations?
async is a library that provides methods to let you control the flow of your program. For example: "I want to process each item in the array asynchronously and have this function executed after all processing is completed".
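For instance, a minimal sketch of that pattern with the async library (the file names here are made up; async.each takes a collection, a per-item worker, and a final callback):

var async = require('async');
var fs = require('fs');

// Process every file asynchronously, then run one final callback when all are done.
async.each(['a.txt', 'b.txt', 'c.txt'], function (file, done) {
  fs.readFile(file, 'utf8', function (err, contents) {
    // ... process contents ...
    done(err);                       // signal that this item is finished
  });
}, function (err) {
  // Runs once, after every file has been processed (or on the first error).
  if (err) console.error(err);
});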
TameJS makes you write code that isn't plain JS but gets converted to JS. Its aim is to make asynchronous programming easier to follow.
I personally used TameJS, and there are a few problems with it:
When an error is reported, the line number is the line number of the generated JS file, not the TJS file that you wrote. Tracking errors is a pain.
There can be bugs that are hard to track down. I remember having a bug with return res.send(200) where the response was not being sent. It has since been fixed, but it left a very bad taste in my mouth.
I am now using async and find it can make the code very easy to read and understand.
As a final suggestion, perhaps you should try writing your own code to manage control flow. If you are new to JS, that would be a very good learning experience to see what these libraries are doing on the inside. Even if you are in a time crunch, it is best to understand what external libraries do, so you can make the best use of them.
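If you go that route, a minimal hand-rolled version of the "run everything, then continue" pattern might look something like this (an illustrative sketch only; runAll is a made-up name, and libraries like async handle many more edge cases):

// Run all node-style async tasks, then call done() once every task has finished.
function runAll(tasks, done) {
  var remaining = tasks.length;
  var failed = false;
  if (remaining === 0) return done(null);
  tasks.forEach(function (task) {
    task(function (err) {
      if (failed) return;                           // ignore results after a failure
      if (err) { failed = true; return done(err); }
      if (--remaining === 0) done(null);
    });
  });
}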
They are completely different although they try to solve roughly the same problem. While async is a very cool flow control library that gives you some helper functions for managing your async code, tamejs is (similar to streamlinejs, which I prefer) a bunch of language additions for pseudo-synchronous code that gets compiled to asynchronous code.

JsTestDriver and legacy javascript. To convert or not?

I inherited a legacy JavaScript library that is simply written as a list of functions, as follows:
function checkSubtree(targetList, objId)
{
    ...
}
function checkRootSubtree(targetList, rootLength, rootInfo, level)
{
    ...
}
To test it with JsTestDriver, do I have to 'clean' it up to adhere to some JavaScript best practice, or can I test it without modification?
Thanks
The HtmlDoc document fragment feature of JsTestDriver helps when unit testing DOM-dependent JavaScript code. You probably want a little HTML chunk to apply those functions to as you go along.
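For illustration, a test against one of those functions might look roughly like this (a sketch only: the fixture markup, test name, and assertion are made up, and the HtmlDoc comment syntax is from memory, so check the JsTestDriver docs for the exact form):

var CheckSubtreeTest = TestCase('CheckSubtreeTest');

CheckSubtreeTest.prototype.testRunsAgainstAFixtureTree = function () {
  /*:DOC += <ul id="tree"><li id="node-1">Node 1</li></ul> */
  var targetList = document.getElementById('tree');

  // Call the legacy function against the fixture; what to assert afterwards
  // depends on what checkSubtree actually does.
  checkSubtree(targetList, 'node-1');

  assertNotNull(document.getElementById('node-1'));
};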
When you've proven that they work, you'll see ways of making the functions more testable. This is one of the hidden gems of unit testing: emergent software design. Since you want to be able to test your code in isolation, you'll have an incentive to reduce coupling.
You can still test it using JsTestDriver. JsTestDriver is only a test runner, it doesn't require your code to be written in any special way. It's hard to give any advice on actual testing without seeing some code (i.e. function bodies).

Is there a tool to remove unused methods in javascript?

I've got a collection of JavaScript files from a 3rd party, and I'd like to remove all the unused methods to get the size down to a more reasonable level.
Does anyone know of a tool that does this for JavaScript? Or at the very least one that gives a list of unused/used methods, so I could do the trimming manually? This would be in addition to running something like the YUI JavaScript compressor tool...
Otherwise my thought is to write a perl script to attempt to help me do this.
No, because you can "use" methods in insanely dynamic ways, like this:
obj[prompt("Gimme a method name.")]();
Check out JSCoverage. It generates code coverage statistics that show which lines of a program have been executed (and which have been missed).
I'd like to remove all the unused methods to get size down to a more reasonable level.
There are a couple of tools available:
npm install -g fixmyjs
fixmyjs <filename or folder>
fixmyjs is a configurable module that uses JSHint (GitHub, docs) to flag functions that are unused, and it performs clean-up as well.
I'm not sure whether it removes unused functions or only flags them. Though it is a great tool for clean-up, it appears to lack compatibility with later versions of ECMAScript (more info below).
There is also the Google Closure Compiler which claims to remove dead JS but this is more of a build tool.
Updated
If you are using something like Babel, consider adding ESLint to your text editor, which can trigger a warning on unused methods and even variables and has a --fix CLI option for autofixing some errors and style issues.
I like ESLint because it contains multiple plugins for alternate libs (like React warnings if you're missing a prop), allowing you to catch bugs in advance. They have a solid ecosystem.
As an example: on my Node.js projects, the config I use is based on the Airbnb Style Guide.
You'll have to write a Perl script. Take no notice of the nay-sayers above.
Such a tool could work with libraries that are designed to only make function calls explicitly. That means no delegates or pointers to functions would be allowed, the use of which in any case only results in unreadable "spaghetti code" and is not best practice. Even if it removes some of these hidden functions, you'll discover most if not all of them in testing. The ones you don't discover will be so infrequently used that they will not be worth your time fixing. Don't obsess over perfection; people go mad doing that.
So applying this one restriction to JavaScript (and libraries) will result in incredible reductions in page size and therefore load times, not to mention readability and maintainability. This is already the case for tools that remove unused CSS, such as grunt_CSS and unCSS (see http://addyosmani.com/blog/removing-unused-css/), which report typical reductions down to one tenth of the original size.
It's a win/win situation.
It's noteworthy that all interpreters must address this issue of how to manage self-modifying code. For the life of me, I don't understand why people want to persist with unrestrained freedom. As noted by Triptych above, JavaScript functions can be called in ways that are literally "insane". This insane flexibility corrupts the fundamental doctrine of separation of code and data, enables real-time code injection, and invalidates any attempt to maintain code integrity. The result is always unreadable code that is impossible to debug, and the side effect for JavaScript - removing the ability to run automatic code pre-optimisation and validation - is much, much worse than any possible benefit.
AND - you'd have to feel pretty insecure about your work to want to deliberately obfuscate it from both your colleagues and yourself. Browser clients that do work extremely well take the "less is more" approach, and the best example I've seen to date is Microsoft Office's combination of Access Web Forms paired with SharePoint Access Services. The productivity of having a ubiquitous, heavy, tightly managed runtime interpreter client and its server-side clone is absolutely phenomenal.
The future of JavaScript self-modifying code technologies therefore lies in bringing them back into line to respect the...
KISS principle of code and data: Keep It Separate, Stupid.
Unless the library author kept track of dependencies and provided a way to download the minimal code [e.g. the MooTools Core download], it will be hard to identify 'unused' functions.
The problem is that JS is a dynamic language and there are several ways to call a function.
E.g. you may have a method like
function test()
{
    //
}
You can call it like
test();
var i = 10;
var hello = i > 1 ? 'test' : 'xyz';
window[hello]();
I know this is an old question, but UglifyJS2 supports removing unused code, which may be what you are looking for.
Also worth noting: ESLint supports a rule called no-unused-vars which actually does some basic detection of whether functions are being used or not. It definitely detects it if you make the function anonymous and store it in a variable (but just be aware that, as a variable, the function declaration doesn't get hoisted immediately).
In the context of detecting unused functions, while extreme, you can consider breaking up the majority of your functions into separate modules, because there are packages and tools to help detect unused modules. There is a little segment of Sindre Sorhus's thoughts on tiny modules which might be relevant to that philosophy, but that may be extreme for your use case.
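For example, a sketch of the case described above (the rule configured inline; helper is a made-up name):

/* eslint no-unused-vars: "error" */

// Flagged by no-unused-vars: the variable holding the function expression is never read.
var helper = function () {
  return 42;
};

// Note the trade-off mentioned above: unlike a function declaration, this
// function expression is not hoisted, so it could only be called after this point.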
The following would help:
If you have fully covered test cases, running a code coverage tool like istanbul (https://github.com/gotwarlost/istanbul) or nyc (https://github.com/istanbuljs/nyc) will give a hint about untouched functions.
At least the above will help find the covered functions that you may have thought were unused.
