I'm trying to do some browser automation, but I'm having some problems. Basically, what I'd like to do is load a set of pages, set some form options, click a button and view the results for each page that I open. Originally, I tried to do this by placing the pages I wanted to automate in iframes and then using JavaScript to drive the interactions I want in each, but that results in a Permissions Error, since the sites I want to automate are not on my server. Is there any way around this? The other possibility I've thought of is to use Qt's WebKit class and the evaluateJavaScript method to accomplish what I'd like to do, but this seems a bit heavyweight for something that is, conceptually, pretty simple.
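For reference, a stripped-down version of the iframe attempt that triggers the error (example.com standing in for one of the actual sites):

    var frame = document.createElement('iframe');
    frame.src = 'http://example.com/';
    frame.onload = function () {
      try {
        // Cross-origin frames either throw a SecurityError here or hand
        // back a null document, depending on the browser:
        console.log(frame.contentDocument.title);
      } catch (e) {
        console.log('Blocked by the same-origin policy: ' + e.name);
      }
    };
    document.body.appendChild(frame);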
The tasks that I wanted to accomplish weren't really test-related, so a lot of the test frameworks don't fit the use case I had in mind (I did try to use Selenium, but ran into problems). I ended up doing what I mentioned in the original question and injecting JavaScript into pages through Qt. This ended up working pretty well, although it was a pain to debug, since the JavaScript had to be passed in as a string and the base environment provided by Qt's WebKit class doesn't reveal a whole lot.
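For anyone curious, the injected payload itself is just ordinary DOM scripting. A sketch of the kind of string one might pass to evaluateJavaScript (the field names here are made up, since they depend entirely on the target page):

    var form = document.forms[0];
    form.elements['category'].value = 'services';  // a <select> or text field
    form.elements['agree'].checked = true;         // a checkbox
    document.querySelector('input[type=submit]').click();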
Check out Selenium: http://seleniumhq.org/. It lets you automate Firefox and is probably the easiest to get started with.
Are you trying to do test automation? If so, there are plenty of frameworks for that, like Selenium, WatiN, WebAii and even one built into Visual Studio.
Some of them (WebAii is my favorite) allow you to launch tests in a real browser like Firefox.
If the piece of software you're searching for is more like a form filler, then take a look at iMacros; they have a complete browser-side scriptable solution.
An easier way of doing this would be to use a web debugging proxy and inject JavaScript that way. This should allow you to debug the code you wrote within the browser.
I haven't personally used off-the-shelf web debugging proxies, but I wrote my own proxy and did this a while ago just for fun, and it worked great.
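For what it's worth, a minimal sketch of the idea in Node.js, assuming plain HTTP and uncompressed responses; the injected script URL is a placeholder for wherever you host your automation code. Point the browser at localhost:8888 as its HTTP proxy:

    var http = require('http');

    http.createServer(function (clientReq, clientRes) {
      var target = new URL(clientReq.url); // a forward proxy receives absolute URLs
      var proxyReq = http.request({
        hostname: target.hostname,
        port: target.port || 80,
        path: target.pathname + target.search,
        method: clientReq.method,
        headers: Object.assign({}, clientReq.headers, { 'accept-encoding': 'identity' })
      }, function (proxyRes) {
        var chunks = [];
        proxyRes.on('data', function (c) { chunks.push(c); });
        proxyRes.on('end', function () {
          var body = Buffer.concat(chunks);
          var type = proxyRes.headers['content-type'] || '';
          if (type.indexOf('text/html') !== -1) {
            // Inject a script tag just before </body>:
            body = Buffer.from(body.toString().replace(
              '</body>',
              '<script src="http://example.com/inject.js"></script></body>'));
          }
          var headers = Object.assign({}, proxyRes.headers, { 'content-length': body.length });
          delete headers['transfer-encoding'];
          clientRes.writeHead(proxyRes.statusCode, headers);
          clientRes.end(body);
        });
      });
      clientReq.pipe(proxyReq);
    }).listen(8888);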
I have a few clients that pay me to post ads on sites such as craigslist.com and backpage.com. Currently, every hour or so, a macro runs and I manually do the captchas (which I'm fine with). But now I have some free time and I want to write a proper program to prevent the stupid errors that can happen with macros (screen resizes, misclicks, etc.).
Part of my posting includes selecting images to upload. I know that for security reasons JavaScript doesn't let you specify which file the user uploads; that part is left to the user alone. I'm sure I could do it using NodeJS somehow, since it would be running locally on my machine, but I don't have the slightest idea how I would even approach this.
Any guidance or direction would be very helpful.
If you use NodeJS, you need to work hard, i.e.:
- get the HTML content and parse it
- construct the input that you want
- re-submit the form / re-post the data
The easier way is to use browser automation like Selenium to do the end-to-end work for you (see the sketch below).
More info: http://www.seleniumhq.org/
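To make that concrete, here is a rough sketch using Selenium's Node bindings (npm install selenium-webdriver). The URL, selectors and file path are all placeholders for the real ad form. Note that file inputs are handled by sending the local path to the input element rather than clicking through the OS file dialog:

    const { Builder, By, until } = require('selenium-webdriver');

    (async function () {
      const driver = await new Builder().forBrowser('firefox').build();
      try {
        await driver.get('http://example.com/post-ad');
        await driver.findElement(By.name('title')).sendKeys('My ad title');
        // File uploads: send the path to the input instead of
        // trying to script the OS file-picker dialog
        await driver.findElement(By.css('input[type=file]'))
          .sendKeys('/home/me/images/ad1.jpg');
        await driver.findElement(By.css('button[type=submit]')).click();
        await driver.wait(until.titleContains('posted'), 10000);
      } finally {
        await driver.quit();
      }
    })();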
If you are familiar with Node.js and JavaScript then I recommend you use Protractor.
It is the current default end-to-end automation testing tool for AngularJS applications, but I'm pretty sure it will solve your problem.
Instead of using AngularJS-specific locators (like element(by.model)) to find your HTML elements, you will use regular CSS selectors; for example, $$("div.top") returns all the divs with a CSS class named top.
Protractor implements the Selenium WebDriver protocol, which means that the scripts you write will communicate with almost any automation-ready browser via drivers like ChromeDriver, FirefoxDriver or PhantomJSDriver (a fast, GUI-less, lower-fidelity alternative).
Make sure you check the getting started section for a jump start.
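A minimal sketch of what such a spec might look like, run through the protractor CLI with a config file pointing at your browser of choice. The URL and selector are placeholders, and since your pages aren't Angular apps you also have to tell Protractor not to wait for Angular:

    describe('non-Angular page', function () {
      it('reads every div with the "top" class', function () {
        browser.ignoreSynchronization = true; // don't wait for Angular
        browser.get('http://example.com/');
        $$('div.top').each(function (el) {
          el.getText().then(function (text) { console.log(text); });
        });
      });
    });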
I'm not going to try to dilly dally around: I'm a student on a college campus and I'd like to be able to invite all my friends on facebook to events I create without having to click all of their names manually.
I'm fairly familiar with Javascript; my only issue is that the scripts appear to be fairly obfuscated (probably on purpose) and I'm wondering what the best technique would be to tackle this task. I've tried the chrome developer toolbar, but I don't think it's quite what I'm looking for (although I could just be using it wrong).
You could use a GreaseMonkey script which would programmatically click on all the names for you.
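Something along these lines; the selector is necessarily hypothetical, since Facebook's markup is obfuscated and changes often, so you'd have to find the real one with a DOM inspector first:

    // ==UserScript==
    // @name      Invite all friends
    // @include   https://www.facebook.com/events/*
    // ==/UserScript==

    // Click every friend entry in the invite dialog. The class names
    // below are placeholders for whatever the inspector shows you.
    var entries = document.querySelectorAll('.invite-dialog .friend-name');
    Array.prototype.forEach.call(entries, function (el) {
      el.click();
    });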
Firebug for Firefox is a good tool for breaking down the JavaScript objects on a page. It's one of the leading debuggers for JavaScript. You can't go wrong with it as a tool for seeing how their code finally runs once the browser is done loading it.
I am looking into multiple web testing tools. I am trying watir now. My main concern is dealing with javascript. I just want to know if anyone can give me an overview on dealing with javascript in watir. What are some of the pitfalls and difficulties with it? Is it basically using javascript injections to tell the page what to do?
And if someone wants to suggest other web testing tools like watir I would appreciate it.
I tried selenium first and found it to be a tad unreliable.
Are there any cheap tools on the market?
Thank You!
Watir + JavaScript => generally it's possible to inject JavaScript into your tests, e.g.

    b.goto("javascript:openWin(2)")
When you say 'dealing with javascript' I assume you mean how well Watir handles client-side code in terms of rendering/execution. Since Watir drives real browsers (like Selenium does), JS will generally execute as expected.
Watir has many different drivers, e.g. watir, firewatir, safariwatir, chromewatir, operawatir and now watir-webdriver. Each drives the browser with a slightly different implementation depending on the browser and OS. Firewatir, for example, uses JSSH, which in effect controls the browser via JS. Can you explain what you mean by Selenium being 'unreliable'?
I'd recommend looking at the latest implementation of watir-webdriver. That way you get the benefit of a nice watir API on top of a new driver implementation. Webdriver has some strong backing in terms of support (Selenium 2 uses it, Google is coding it!) so I reckon it's a safe bet. You can also control most of the major browsers with this implementation.
Alternative tools => http://wiki.openqa.org/display/WTR/Alternative+Tools+For+Web+Testing
Tim provides a pretty good answer.
The only thing I have to add to what he said is that I've found that now and then I have to use the Watir methods to fire specific JavaScript events, such as onmouseover, in order to accurately simulate a user interacting with the page. Since Watir has a method for this, the hard part is not the Watir code but reverse engineering the page (or noticing subtle page interactions based on user actions) to figure out which elements are 'wired up' to which events, and in what order to fire those events against those specific elements.
Usually it's pretty easy to look at the HTML for an element and see what's going on. But some custom controls can take a bit of learning, because they manage to do a pretty good job of 'hiding' all the event wiring, and you may have to parse through various aspects of the page (styles and all) using something like Fiddler.
(After all, a normal user will never 'force' JavaScript to execute or 'inject' JavaScript. They will use the mouse and keyboard to interact with the page, and any JavaScript that runs is the result of scripts that execute when the page is loaded, or scripts triggered via events based on specific user actions.)
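For the curious, firing an event like onmouseover boils down to a couple of lines of page-side JavaScript, which Watir's event-firing method wraps for you; the selector here is a placeholder:

    var el = document.querySelector('#nav-menu');
    // Modern browsers; legacy IE used el.fireEvent('onmouseover') instead
    el.dispatchEvent(new MouseEvent('mouseover', { bubbles: true, cancelable: true }));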
If your JS does not trigger an HTML refresh then WATiR will get confused. When you click an object in WATiR, it waits for the page to load before continuing. You can overcome this with custom waiting commands and use of '.click!'.
If you are a reasonable Ruby coder then WATiR is a solution for most things. It has the potential to be a rather stable and reliable means of automated web testing.
You may want to look into Firewatir, Sahi and watir-webdriver, just to give you some more leads (I would suggest googling for "open source web testing" and the like if you haven't). I looked into these and many more, and settled on WATiR for reasons of cost, power, flexibility and prior knowledge (of Ruby and WATiR). With the right gems it will speak to most databases and to Excel (or other files) to load test data.
I'm currently using WATiR to test a ZK-generated interface where none of the IDs are ever static and there's a lot of AJAXiness going on. I just built a framework to deal with these, and it works just fine.
Also, a couple of tips (some more universally true than others) that may help.
To run JavaScript from Watir, use browser.execute_script(). Example:

    Watir::Wait.until { $browser.execute_script("return document.readyState") == "complete" }
I would like to check a large number of HTML files with inline JavaScript for JavaScript errors. What I'm envisioning is this: script a browser to load a given page, wait a few seconds, and finally check the browser logs. I'm unsure, though, both about how to script a browser to load a given page and about how to access the JavaScript error log. I think the type of errors I'm worried about should show up in any modern browser, so I would just go with whatever makes it most convenient. I'd be working either under Mac OS X or Linux.
Anybody already tackle a similar problem? I've thought a bit about hacking something together based on a unit testing framework -- generate a trivial (assertTrue(true)) test for each page and rely on the errors making it fail -- but I'm hoping for something more elegant. Thank you.
There are several routes you could take, though I'm not convinced that automation is really necessary in this case.
If your Javascript wasn't inline, you could try something adventurous like Rhino with DOM support, and completely eschew the browser. I would heartily recommend separating your JS and your markup anyway.
If you're dead-set on creating an automated solution for this, I would perhaps take a look at the Selenium plugin/testing framework for Firefox. It enables automated UI testing, and if you're thorough enough with it you should be able to uncover any error cases you would have run into in using the site. It should also be able to report JS errors to you. If not, using it in conjunction with a service like ExceptionHub or Hoptoad will get you what you need.
You shouldn't have to resort to trying to unit-test JS in the DOM. That's a recipe for complication.
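If you do go the Selenium route, here is a sketch with the Node bindings and ChromeDriver, whose "browser" log captures JS errors. Log support varies by driver, so treat this as Chrome-specific; the file path is a placeholder:

    const { Builder, logging } = require('selenium-webdriver');

    (async function () {
      const prefs = new logging.Preferences();
      prefs.setLevel(logging.Type.BROWSER, logging.Level.ALL);
      const driver = await new Builder()
        .forBrowser('chrome')
        .setLoggingPrefs(prefs)
        .build();
      try {
        await driver.get('file:///path/to/page.html');
        await driver.sleep(3000); // "wait a few seconds" for async errors
        const entries = await driver.manage().logs().get(logging.Type.BROWSER);
        entries.forEach(function (entry) {
          console.log(entry.level.name, entry.message);
        });
      } finally {
        await driver.quit();
      }
    })();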
Page-scraping on the Internet seems to have hit somewhat of a wall for me, as there are more and more sites that are dependent on JavaScript for rendering portions of the screen.
It seems to me that with so many open-source layout and JavaScript engines released (like WebKit, Gecko and Chromium + V8), someone must have made a tool for downloading a page and rendering its JavaScript without having to run an actual browser. However, I'm not turning up what I'm looking for with my searches; I've found tools like Selenium RC, but they depend on a running browser. I'm interested in any tool or library which can do one (or both) of the following:
1. A program that can be run from the command line (*nix) which, given the source of a page, returns the page's source as rendered by some JS engine.
2. Integrated support in a particular language that allows one to (easily) pass the source of a page to it and returns the page's source as rendered by some JS engine.
I think #1 is preferable in a general sense, but #2 would be more useful if the tool exists in the language I want to work in. Also, I'm not concerned with the particular JS engine - any relatively modern one will do. What is out there?
wkhtmltopdf (WebKit HTML to PDF) works perfectly, and it can even produce JPEGs; basic usage is wkhtmltopdf http://example.com page.pdf.
http://wkhtmltopdf.googlecode.com
You can look at HTMLUnit. Its main purpose is automated web testing, but I think it may let you get the rendered page.
Well, there's the DumpRenderTree tool which is used as part of the WebKit test suites. I'm not sure how suitable it is for turning into a standalone tool, but it does what you ask for (render HTML, run JavaScript, and dump its render tree out to disk).
Since JavaScript can do quite a lot of manipulations to the web page's document object model (DOM), it seems like to accurately scrape the content of an arbitrary page, you'd need to not only run a JavaScript engine, you'd also need a complete and accurate DOM representation of the page. That's something you'll only get if you have a real browser engine instantiated. It is possible to use an embedded, not-displayed WebKit or Gecko engine for this, then after a suitable loading delay to allow for script execution, just dump the DOM contents in HTML form.
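That embedded-WebKit approach is essentially what PhantomJS (the headless WebKit mentioned above as PhantomJsDriver) packages up. A sketch, with a crude fixed delay standing in for the "suitable loading delay":

    var page = require('webpage').create();
    page.open('http://example.com/', function (status) {
      if (status !== 'success') {
        phantom.exit(1);
      }
      window.setTimeout(function () {
        console.log(page.content); // the DOM serialized back to HTML, after JS has run
        phantom.exit();
      }, 2000);
    });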
We used Rhino some time ago to do some automated testing from Java. It seems it'll do the job for you :)
I think there's example code for Qt that uses the included WebKit to render a page to a pixmap; from there to a full CLI utility is just a matter of defining your needs.
Of course, for most screen-scraping needs you want the text, not a pixmap; if that's what you want, better check Rhino.
There is the Cobra Engine for Java (http://lobobrowser.org/cobra.jsp), which handles Javascript (it also has a renderer, but that is optional). I've never used it, but have heard nice things said about it.
It takes very little code to have a WebView render a page without displaying anything, but it has to be a GUI application. Such an application can still take command-line arguments and hide its window; using WebKit directly, it might even be possible as a pure command-line tool.
Apart from the (somewhat complicated) DOM access from Objective-C, WebKit can also inject JavaScript, and together with jQuery that makes for a nice scraping solution. I don't know of any universal application that does that, though.
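For illustration, the sort of snippet one might inject once jQuery is in the page; the selector and fields are placeholders for whatever you're actually scraping:

    var links = $('a').map(function () {
      return { text: $(this).text(), href: this.href };
    }).get();
    console.log(JSON.stringify(links));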