Watir and JavaScript

I am looking into multiple web testing tools and am trying Watir now. My main concern is dealing with JavaScript. Can anyone give me an overview of dealing with JavaScript in Watir? What are some of the pitfalls and difficulties with it? Is it basically using JavaScript injection to tell the page what to do?
I would also appreciate suggestions for other web testing tools like Watir.
I tried Selenium first and found it to be a tad unreliable.
Are there any cheap tools on the market?
Thank You!

Watir + JavaScript => generally it's possible to inject JavaScript into your tests, e.g.:
b.goto("javascript:openWin(2)")
When you say 'dealing with javascript' I assume you mean how well Watir handles client-side code in terms of rendering/execution. Since Watir drives real browsers (like Selenium does), JS will generally execute as expected.
Watir has many different drivers, e.g. watir, firewatir, safariwatir, chromewatir, operawatir and now watir-webdriver. All of them drive the browser, with slightly different implementations depending on the browser and OS. FireWatir, for example, uses JSSH, which in effect controls the browser via JS. Can you explain what you mean by Selenium being 'unreliable'?
I'd recommend looking at the latest implementation, watir-webdriver. That way you get the benefit of the nice Watir API on top of a new driver implementation. WebDriver has some strong backing in terms of support (Selenium 2 uses it, Google is coding it!), so I reckon it's a safe bet. You can also control most of the major browsers with it.
Alternative tools => http://wiki.openqa.org/display/WTR/Alternative+Tools+For+Web+Testing

Tim provides a pretty good answer.
The only thing I have to add to what he said is that now and then I've found I have to use the Watir methods that fire specific JavaScript events, such as onmouseover, in order to accurately simulate the user interacting with the page. Since Watir has a method for this, the hard part is not the Watir code but reverse engineering the page (or noticing subtle page interactions based on user actions) to figure out which elements are 'wired' up to which events, and the order in which to fire those events against those elements.
Usually it's pretty easy to look at the HTML for an element and see what's going on. But with some custom controls it can take a bit of digging, because they manage to do a pretty good job of 'hiding' all the event wiring, and you may have to parse through various aspects of the page (styles and all) using something like Fiddler.
(After all, a normal user will never 'force' JavaScript to execute or 'inject' JavaScript. They will use the mouse and keyboard to interact with the page, and any JavaScript that runs will be the result of scripts that execute when the page is loaded, or scripts triggered by events based on specific user actions.)
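The event-order point can be illustrated with a toy model in plain Ruby. This is a simulation only, not Watir's API; in a real test you would call something like element.fire_event('onmouseover') on the actual Watir element:

```ruby
# Toy model of why event order matters: a menu item only becomes
# clickable after its parent's onmouseover handler has fired.
class FakeElement
  attr_reader :visible

  def initialize
    @visible  = false
    @handlers = {}
  end

  # Register a handler block for a named event.
  def on(event, &blk)
    @handlers[event] = blk
  end

  # Fire an event, running its handler if one is wired up.
  def fire_event(event)
    @handlers[event]&.call
  end

  def show
    @visible = true
  end
end

menu = FakeElement.new
item = FakeElement.new
menu.on('onmouseover') { item.show }  # hovering the menu reveals the item

item.fire_event('onclick')            # wrong order: nothing wired, item still hidden
menu.fire_event('onmouseover')        # fire the hover first...
item.visible                          # => true, now it is safe to click
```

The point is the same one the answer makes: the code is trivial; the work is discovering which element owns which handler and in what order they must fire.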

If your JS does not trigger an HTML refresh, Watir will get confused. When you click an object in Watir, it waits for the page to load before continuing. You can overcome this with custom waiting commands and use of '.click!'.
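A custom waiting command is just a poll-until-true loop; Watir::Wait.until is essentially this pattern. A minimal pure-Ruby sketch follows (timeout and interval values are arbitrary, and the background thread stands in for a page finishing an AJAX update):

```ruby
# Minimal poll-until-true helper, the pattern behind Watir::Wait.until.
# Re-evaluates the block until it returns true or the timeout expires.
def wait_until(timeout: 10, interval: 0.5)
  deadline = Time.now + timeout
  until yield
    raise "timed out after #{timeout}s" if Time.now > deadline
    sleep interval
  end
  true
end

# Example: wait for a flag that a background thread flips, much like
# waiting for an AJAX-updated element to appear on the page.
done = false
Thread.new { sleep 0.1; done = true }
wait_until(timeout: 2, interval: 0.05) { done }  # returns true once the flag flips
```

In a real Watir script the block would check page state instead, e.g. `wait_until { browser.div(:id => 'results').present? }`.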
If you are a reasonable Ruby coder, then Watir is a solution for most things. It has the potential to be a rather stable and reliable basis for automated web testing.
You may want to look into FireWatir, Sahi and watir-webdriver, just to give you some more leads (I would suggest googling for "open source web testing" and the like if you haven't). I looked into these and many more and settled on Watir for reasons of cost, power, flexibility and prior knowledge (of Ruby and Watir). With the right gems it will talk to most databases and to Excel (or other files) to load test data.
I'm currently using WATiR to test a ZK-generated interface where none of the IDs are ever static and there's a lot of AJAXiness going on. I just built a framework to deal with these, and it works just fine.

To execute JavaScript from Watir, use browser.execute_script().
Example:
Watir::Wait.until { $browser.execute_script("return document.readyState") == "complete" }

Automation using NodeJS

I have a few clients that pay me to post ads on sites such as craigslist.com and backpage.com. Currently, every hour or so, a macro runs and I manually do the captchas (which I'm fine with). But now I have some free time and I want to write a proper program to prevent the stupid errors that can happen with macros (screen resizes, mis-clicks, etc.).
Part of my posting includes selecting images to upload. I know that for security reasons JavaScript doesn't let you specify which file the user uploads; that part is decided by the user alone. I'm sure I could do it with NodeJS somehow, since it would run locally on my machine, but I don't have the slightest idea how to approach this.
Any guidance or direction would be very helpful.
If you use NodeJS, you will have to do a lot of the work yourself:
- get the HTML content and parse it
- construct the input that you want
- re-submit the form / re-post the data
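Those steps can be sketched in miniature. The example below is in Ruby for brevity (the idea is language-agnostic), and the HTML form, field names and values are all made up; it shows the parse-and-rebuild part, using a crude regex where a real script would use a proper HTML parser:

```ruby
require 'uri'

# A made-up form, standing in for HTML fetched from the site.
html = <<~HTML
  <form action="/post" method="post">
    <input name="title" value="">
    <input name="category" value="services">
    <input type="hidden" name="token" value="abc123">
  </form>
HTML

# Collect the form's input names and default values. This crude regex
# scan is for illustration; use a real HTML parser in practice.
fields = html.scan(/<input[^>]*name="([^"]+)"[^>]*value="([^"]*)"/).to_h

# Fill in the field we care about, then build the POST body to re-submit.
fields['title'] = 'My ad title'
body = URI.encode_www_form(fields)
# body == "title=My+ad+title&category=services&token=abc123"
```

Note the hidden `token` field: forms often carry such fields, and forgetting to echo them back is a common reason hand-rolled re-posts fail.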
The easier way is to use browser automation like Selenium to work end to end for you.
More info: http://www.seleniumhq.org/
If you are familiar with NodeJS and JavaScript, then I recommend Protractor.
It is the current default end-to-end automation testing tool for AngularJS applications, but I'm pretty sure it will solve your problem.
Instead of using AngularJS-specific selectors (like element(by.model)) to find your HTML elements, you can use regular CSS selectors: for example, $("div.top") selects a div with the CSS class named top.
Protractor implements the Selenium WebDriver protocol, which means the scripts you write can communicate with almost any automation-ready browser via ChromeDriver, FirefoxDriver or PhantomJSDriver (a no-GUI, low-fidelity but fast alternative).
Make sure you check the getting started section for a jump start.

Tools and techniques for UX-centric Regression testing of a web application?

The application that I'm currently working on is a simple 3-tier web application (whatever simple means :) However, the application is very UX/UI intensive, i.e. the user interface forms the crux of the application. With every structural change to the page or refactoring of javascript/jquery/backbone code, we need to ensure that the UI is behaving as expected.
For example, whether divs are disappearing when an object is deleted, or whether items are successfully 'posted' and displayed in a different div, etc.
I'm relatively new to the domain of UX/UI-based testing and not sure how to attack this problem. As of now it's quite a manual overhead to ensure it looks and works right. We do have 'one layer below' tests where we send HTTP messages, and all seems to work fine with the return codes etc. But UI-focused testing is what we lack.
I've heard about Selenium, Jasmine and a few JavaScript frameworks but am not sure if they serve my needs. As of now, the solution I see is to custom-code JavaScript tests that would 'autorun' from a browser and check whether things are happening the way they should (probably with a human just 'staring' at the screen :). That itself would be quite a task, and I thought of asking the community for suggestions before we reinvent the wheel.
Question: What tools/techniques are best suited for this type of job?
PS: It's a Java/Restlet based web-application
Selenium can definitely do what you need if you're looking to build 'real' automation tests, meaning code-based testing in something like Java, .NET or any of the other supported 'server-side' languages.
This is far more likely to help detect regressions than javascript-based tests, where you sometimes have limited ability to properly replicate user interactions if the page wasn't designed to allow it. Some things you will find nearly impossible to test with just javascript.
It's worth the effort, and Selenium is supported very well across many languages. It's essentially the industry standard, and you'll find lots of documentation and helpful frameworks to get you started.

Is it sometimes ok NOT to Degrade Gracefully?

I am in the process of building a video sharing CMS that uses lots of jQuery and ajax for everything from rich UI effects to submitting and retrieving data to and from the database. When JavaScript is disabled, everything falls apart and 90% of the functionality doesn't work.
I am beginning to think it's OK not to degrade gracefully for certain types of sites, like this one, which uses a Flash player to stream the main content, the videos. What would be the point of going to great lengths to enable dual support for everything else if the main content of the site can't be viewed? Even YouTube breaks with JS disabled.
I'm planning to release the CMS under open source license, so the question is:
For mass distribution (and for this type of site) is not degrading gracefully a good idea?
As long as you make it clear to users that they need JS enabled, it is OK for the site to "fall apart" without JS. However, if you give no indication that it won't work without JS, people will get angry. Most people nowadays expect sites to require JS for some aspect of their functionality.
For something as complex as a CMS with videos, it is the user's fault if they don't enable JS. They shouldn't expect something like that to work without JS, and even if they do, it's probably not worth your time maintaining two versions of your site, JS and non-JS, especially for something that is open source.
Seeing as your application relies on javascript for its entire purpose, it is impossible for you to degrade gracefully. As long as your site clearly tells the user to enable javascript to get all of your awesome functionality, and maybe some links as to how to do so in different browsers, you should be fine. :D
You are essentially choosing an audience. It's not unlike deciding whether to support IE6. It's not right-vs-wrong, it's simply a question of what percentage of your audience you are willing to lose, in exchange for ease of development on your end.
That said, I find progressive enhancement (of which graceful degradation is an outcome) to be an efficient and safer way to develop. Do the HTML first, make it work, then add JS as sugar on top.
It's not likely that any of your users aren't running JavaScript. What is likely, speaking for my humble self, is that you will have some small JS error that kills everything. (JS tends to just stop on exceptions, as you may have noticed.)
It's nice to know that, in the event of such an error, your users can still use the site. That's what graceful degradation is for, in my opinion.
Graceful degradation doesn't mean "everything works fully in every browser", it means "if your browser can't handle something, you see something sensible instead of broken junk".
In your case, simply detecting that the site will not work and displaying a nice error page explaining what is required would be an acceptable form of graceful degradation.
If you're a perfectionist, there's nothing wrong with letting people without JS know what's going on, as opposed to just letting the website break. Here's a quick how-to: How to detect if JavaScript is disabled?

Automated browsing with javascript?

I'm trying to do some browser automation, but I'm having some problems. Basically, what I'd like to do is load a set of pages, set some form options, click a button and view the results for each page I open. Originally, I tried to do this by placing the pages I wanted to automate in iframes and then using javascript to drive the interactions I want in each, but that results in a permissions error, since the sites I want to automate are not on my server. Is there any way around this? The other possibility I've thought of is to use Qt's WebKit class and the evaluateJavaScript method to accomplish what I'd like, but this seems a bit heavyweight for something that is, conceptually, pretty simple.
The tasks I wanted to accomplish weren't really test related, so a lot of the test frameworks don't fit my use case (I did try Selenium but ran into problems). I ended up doing what I mentioned in the original question and injecting javascript into pages through Qt. This worked pretty well, although it was a pain to debug, since the javascript had to be passed in as a string and the base environment provided by Qt's WebKit class doesn't reveal a whole lot.
Check out Selenium: http://seleniumhq.org/. It lets you automate Firefox and is probably the easiest to get started with.
Are you trying to do test automation? If so, there are plenty of frameworks for that, like Selenium, WatiN, WebAii and even the one built into Visual Studio.
Some of them (WebAii is my favorite) allow you to launch tests in a real browser like Firefox.
If the piece of software you're searching for is more like a form filler, then take a look at iMacros; they have a complete browser-side scriptable solution.
An easier way of doing this would be to use a web debugging proxy and inject javascript that way. This should allow you to debug the code you wrote within the browser.
I haven't personally used web debugging proxies, but I wrote my own proxy and did this a while ago just for fun, and it worked great.
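The rewriting step such a proxy performs is simple string surgery on the HTML passing through it. A minimal sketch of just that step (the script URL is made up, and a real proxy would also need to handle chunked responses, compression, and so on):

```ruby
# Insert a <script> tag just before </body>, the way an intercepting
# proxy would rewrite HTML responses on the fly.
def inject_script(html, src)
  tag = %(<script src="#{src}"></script>)
  html.sub(%r{</body>}i, "#{tag}</body>")
end

page   = "<html><body><h1>Hi</h1></body></html>"
result = inject_script(page, "http://localhost/debug.js")
# result == '<html><body><h1>Hi</h1><script src="http://localhost/debug.js"></script></body></html>'
```

Because the injected script runs in the target page's own origin, it sidesteps the cross-frame permissions error described in the question.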

Are there command line or library tools for rendering webpages that use JavaScript?

Page-scraping on the Internet seems to have hit somewhat of a wall for me, as there are more and more sites that depend on JavaScript to render portions of the screen.
It seems to me that with so many open-source layout engines and JavaScript engines released (like WebKit, Gecko and Chromium + V8), someone must have made a tool for downloading a page and rendering its JavaScript without having to run an actual browser. However, my searches aren't turning up what I'm looking for; I've found tools like Selenium RC, but they depend on a running browser. I'm interested in any tool or library that can do one (or both) of the following:
1. A program that can be run from the command line (*nix) which, given the source of a page, returns the page's source as rendered by some JS engine.
2. Integrated support in a particular language that allows one to (easily) pass the source of a page to it and returns the page's source as rendered by some JS engine.
I think #1 is preferable in a general sense, but #2 would be more useful if the tool exists in the language I want to work in. Also, I'm not concerned with the particular JS engine - any relatively modern one will do. What is out there?
wkhtmltopdf (WebKit HTML to PDF) works perfectly; it can even produce JPEGs.
http://wkhtmltopdf.googlecode.com
You can look at HtmlUnit. Its main purpose is automated web testing, but I think it may let you get the rendered page.
Well, there's the DumpRenderTree tool which is used as part of the WebKit test suites. I'm not sure how suitable it is for turning into a standalone tool, but it does what you ask for (render HTML, run JavaScript, and dump its render tree out to disk).
Since JavaScript can do quite a lot of manipulation to the web page's document object model (DOM), it seems that to accurately scrape the content of an arbitrary page, you'd need not only to run a JavaScript engine but also to have a complete and accurate DOM representation of the page. That's something you'll only get if you have a real browser engine instantiated. It is possible to use an embedded, non-displayed WebKit or Gecko engine for this; after a suitable loading delay to allow for script execution, just dump the DOM contents in HTML form.
We used Rhino some time ago to do some automated testing from Java. It seems it'll do the job for you :)
I think there's example code for Qt that uses the included WebKit to render a page to a pixmap. From there to a full CLI utility is just a matter of defining your needs.
Of course, for most screen-scraping needs you want the text, not a pixmap... if that's what you want, better check Rhino.
There is the Cobra Engine for Java (http://lobobrowser.org/cobra.jsp), which handles Javascript (it also has a renderer, but that is optional). I've never used it, but have heard nice things said about it.
It takes very little code to have a WebView render a page without displaying anything, but it has to be a GUI application; it can still take command-line arguments and hide its window. Using WebKit directly, a standalone tool might be possible.
Apart from the complicated DOM access in Objective-C, WebKit can also inject JavaScript, and together with jQuery that makes for a nice scraping solution. I don't know of any universal application that does that, though.
