I'm writing WatiN tests to test an Ajax web application and have come across a timing issue with Ajax requests.
After an Ajax request is triggered by an action on the page, I'd like WatiN to wait until the request is complete before validating that the page was updated correctly.
I have a feeling that the solution will involve eval-ing JavaScript to register handlers for $.ajaxStart and $.ajaxComplete to track whether requests are in progress. I'll dig into that shortly, but wanted to see if anybody else has already solved this. Seems like it would be a common problem with Ajax testing.
I've created a few WatiN Browser extension methods to solve this problem, but am still interested in other solutions.
The InjectAjaxMonitor method creates a JavaScript global variable that attaches to jQuery's global Ajax events to track the number of requests in progress.
Whenever you need to wait for Ajax requests to complete before moving on, you can then call browserInstance.WaitForAjaxRequest().
public static class BrowserExtensions
{
    public static void WaitForAjaxRequest( this Browser browser )
    {
        int timeWaitedInMilliseconds = 0;
        var maxWaitTimeInMilliseconds = Settings.WaitForCompleteTimeOut * 1000;

        while ( browser.IsAjaxRequestInProgress()
                && timeWaitedInMilliseconds < maxWaitTimeInMilliseconds )
        {
            Thread.Sleep( Settings.SleepTime );
            timeWaitedInMilliseconds += Settings.SleepTime;
        }
    }

    public static bool IsAjaxRequestInProgress( this Browser browser )
    {
        var evalResult = browser.Eval( "watinAjaxMonitor.isRequestInProgress()" );
        return evalResult == "true";
    }

    public static void InjectAjaxMonitor( this Browser browser )
    {
        const string monitorScript =
            "function AjaxMonitor(){"
            + "var ajaxRequestCount = 0;"
            + "$(document).ajaxSend(function(){"
            + "  ajaxRequestCount++;"
            + "});"
            + "$(document).ajaxComplete(function(){"
            + "  ajaxRequestCount--;"
            + "});"
            + "this.isRequestInProgress = function(){"
            + "  return (ajaxRequestCount > 0);"
            + "};"
            + "}"
            + "var watinAjaxMonitor = new AjaxMonitor();";

        browser.Eval( monitorScript );
    }
}
This solution doesn't work very well because .ajaxStart is called only for the first Ajax request, while .ajaxComplete is called each time an Ajax request finishes. If you run this simple snippet in your console:
$.ajax({url:"/"}); $.ajax({url:"/"})
and add some logging in the .ajaxStart and .ajaxComplete handlers, you will see that the .ajaxStart handler is called only once while the .ajaxComplete handler is called twice. So ajaxRequestCount becomes negative and the whole design breaks.
I suggest that you use .ajaxSend instead of .ajaxStart if you want to keep your design.
Another solution would be to use .ajaxStop instead of .ajaxComplete, but by doing so you don't need the ajaxRequestCount at all; you only need a boolean that says whether there are Ajax requests running behind the scenes.
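For instance, a minimal sketch of that boolean variant (untested, and reusing the watinAjaxMonitor name the original code already injects):

function AjaxMonitor(){
    var requestsInProgress = false;
    //.ajaxStart fires when a request begins and no other request is running
    $(document).ajaxStart(function(){
        requestsInProgress = true;
    });
    //.ajaxStop fires once, after ALL outstanding requests have completed
    $(document).ajaxStop(function(){
        requestsInProgress = false;
    });
    this.isRequestInProgress = function(){
        return requestsInProgress;
    };
}
var watinAjaxMonitor = new AjaxMonitor();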
Very useful information can be found at http://api.jquery.com/category/ajax/global-ajax-event-handlers/
Hope this helps.
I just ran into this issue myself while working on some tests using WatiN. I found that in version 1.1.0.4000 of WatiN (released on May 2nd, 2007; the latest version is 2.0 RC2 from December 20th, 2009), it is claimed that better support for handling Ajax in tests was added:
To better support testing of AJAX enabled websites, this release adds some more options to your toolbox. A new method is added that will wait until some attribute has a certain value. This might be handy in situations where you need to wait until a value of an element gets updated.
Example:
// Wait until some textfield is enabled
textfield.WaitUntil("disable", false.ToSting, 10);
// Wait until some textfield is visible and enabled
textfield.WaitUntil(new Attribute("visible", new BoolComparer(true)) && new Attribute("disabled", new BoolComparer(false)));
See the release notes for more information.
I haven't looked into it in detail yet, so I cannot tell in which cases it might be useful or not. But thought it could be worth mentioning in case anybody else comes across this question.
Preliminary context sharing
I am asked to manually perform a very repetitive action on a website that I do not own and for which I do not have any API access.
The only hope I have to automate these actions is to write some JavaScript and execute it on the browser just to automate the actions that I would be doing manually otherwise.
Apologies in advance if this question already has an answer somewhere else; I'm a backend developer, and with my limited knowledge of front-end development I didn't manage to find one.
Explanation of the issue
Say I have to post several entries, one by one, into a form. I have written the following code (oversimplified just for demonstration purposes):
//This array of Json objects is produced by an upstream service
var inputs = [
{
...
},
{
...
},
{
...
}
]
for (var i = 0; i < inputs.length; i++) {
    fillSomeForms(inputs[i]);
    clickSubmit(); //<-- this makes the page reload, so the script execution stops
}
The problem I have here is very basic: after the first iteration of the for loop, when I invoke clickSubmit(), the page reloads (because the submission is a POST followed by a redirect to a "submit next" page) and so the JS stops executing.
I have tried to look around on the web for similar issues, and I've seen people tweaking the localStorage in order to resume the execution of their script.
However, that seems to assume the script being a resource of the front-end code, which is not the case for me (I don't own the code, I simply inject this JS into the browser's developer console and execute it to save some time).
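(For context, that localStorage trick usually looks something like the sketch below, reusing the fillSomeForms/clickSubmit helpers from above; it only works when the script is re-injected or re-run on every page load, which is exactly the assumption that doesn't hold here.)

// Sketch of the localStorage approach; assumes this script runs again after each reload
var remaining = JSON.parse(localStorage.getItem('remainingInputs') || '[]');
if (remaining.length > 0) {
    var next = remaining.shift();
    localStorage.setItem('remainingInputs', JSON.stringify(remaining));
    fillSomeForms(next);
    clickSubmit(); // page reloads; this script must run again on the next load
}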
Is there any way to achieve this? I am not necessarily looking for a clean solution, just for something that could get this working and spare us some monkey work (nothing of what I'm doing here is clean, but the system administrators do not want to provide access to the REST APIs that the platform actually provides).
When you inject into the console, load a copy of the page into an iframe, and submit your forms from that copy:
const inputs = [ /* a convenient inputs array */ ];
const pageCopy = document.body.appendChild( document.createElement( "iframe" ) );
pageCopy.addEventListener( "load", () => {
//The page copy has finished loading / reloading, let's submit more stuff
if( inputs.length > 0 ) {
const moreInput = inputs.pop();
console.log( "Submitting inputs: ", moreInput );
//I'm not sure this will work, but let's clone the current DOM into the iframe...
pageCopy.contentDocument.body.parentElement.innerHTML =
document.body.parentElement.innerHTML;
fillSomeFormsInPageCopy( pageCopy.contentDocument, moreInput );
pageCopy.contentDocument.querySelector( "#submitButtonId" ).click();
console.log( "Clicked submit. Will wait for iframe to finish reloading..." );
//Okay, we clicked and the iframe is reloading. This event will fire again as soon as it's done reloading, ready to submit more form data
}
else if( inputs.length === 0 ) {
console.log( "Finished submitting all the inputs in the array!" );
}
} );
pageCopy.src = document.location.href;
Please understand I can't test this code. (I'm not even sure the click() event can be fired across an iframe boundary, for security reasons, but I hope it can.)
Hopefully you can understand how to use the pageCopy's document to find your form elements and set their values. E.g., you can use
pageCopy.contentDocument.getElementById( "form-entry-id-1" ).value =
moreInput[ "form-entry-id-1" ];
In case it may help someone in the future: I was finally able to work around the problem by opening a new tab (and working in that tab) for each iteration of my loop.
Something like this:
while (inputs.length > 0) {
    const singleInput = inputs.pop();
    //note: this opens one tab per input up front; each tab then fills and submits independently
    const newWindow = window.open('about:blank', '_blank');
    newWindow.addEventListener('load', () => {
        newWindow.document.body.parentElement.innerHTML = document.body.parentElement.innerHTML;
        fillForm(newWindow.document, singleInput); //<-- fillForm uses the document passed as a parameter to perform the different get/set operations
        newWindow.document.getElementById("submit-button").click();
    });
}
I am using the DHTMLX scheduler. Basically what I am doing is an AJAX GET request to the server to get my data, and then loading the events using the scheduler's addEvent() method. I have quite a bit of data to load: from 20 to 2500 events depending on the view, using a personalized query to the server to optimize the request for each view. The GET/AJAX request takes no time, but loading the events into the calendar takes forever, and not only does it take a long time, it freezes the browser. I thought the events were loading but not showing because it was just slow, so I created a progress bar. But I then realized that the browser hangs while doing the loop, so I don't even see the spinner I implemented. The only way to see the events actually being loaded, and to see the spinner, is to add breakpoints.
Can anyone help me with this? Is there a way to make my code better, or at least to make the spinner show while the events are loading, so the user knows what is happening? When I add a console.log in the $.each I can see it incrementing in the console, and it does so pretty fast; considering that there's a lot of data it can take between 1 and 35 seconds or so, and I'm okay with that, I just wish it didn't hang.
Here's my code:
$.each( data, function( key, event ) {
    var eventObj = me.scheduler.getEvent(event.Activity_Id_int);
    if (typeof eventObj === 'undefined')
    {
        var text;
        if (event.Titre != null)
            text = event.Titre + " " + event.Ressource + '-' + event.Employe;
        else
            text = event.Ressource + ' - ' + event.Employe;
        me.scheduler.addEvent({
            id: event.Activity_Id_int,
            start_date: Global.formatDateTime(event.Local_Start_DateTime),
            end_date: Global.formatDateTime(event.Local_End_DateTime),
            text: text,
            color: Global.RandGandB_To_SchedulerRGB(event.Color_R, event.Color_G, event.Color_B),
            desc_act: event.Desc_Act,
            priorite: event.Priorite,
            ressource_id: event.Resource_Id,
            ressource_name: event.Ressource,
            textColor: "black"
        });
    }
    n++;
    progress.update(n / data.length * 100);
    console.log("Loading these events y'all");
});
Also, instead of clearing the events completely when I change view, I just check whether the events from the request are already loaded, which increases performance immensely, but it still hangs even if I don't add any events, i.e. if I come back to a view where I have already loaded all the events.
I would consider using a for loop instead of $.each for faster performance.
You can check this link to see the difference:
https://jsperf.com/browser-diet-jquery-each-vs-for-loop
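As a rough sketch, the same loop without $.each might look like this (assuming the same me and progress variables as in your code):

// Same logic as the $.each version, as a plain for loop
for (var i = 0; i < data.length; i++) {
    var event = data[i];
    if (typeof me.scheduler.getEvent(event.Activity_Id_int) === 'undefined') {
        // ...build the text/color and call me.scheduler.addEvent(...) as before
    }
    progress.update((i + 1) / data.length * 100);
}

Note that a plain loop is faster but still synchronous, so with very large datasets the page can still block until the loop finishes.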
Hey guys, just testing our pages out using the grunt-phantomcss plugin (it's essentially a wrapper for PhantomJS & CasperJS).
We have some stuff on our sites that comes in dynamically (random profile images for users and random advertisements), so technically the page looks different each time we load it, meaning the build fails. We would like to be able to jump in using good ol' DOM API techniques and 'grey out'/make opaque these images so that Casper/Phantom doesn't see them and the build passes.
We've already looked at pageSettings.loadImages = false, and although that technically works, it also takes out every image, meaning that even our non-ad, non-profile images get filtered out.
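(For reference, that setting is applied when creating the Casper instance; a minimal sketch:)

var casper = require('casper').create({
    pageSettings: {
        loadImages: false // skips ALL images, which is exactly the problem
    }
});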
Here's a very basic sample test script (doesn't work):
casper.start( 'http://our.url.here.com' )
.then(function(){
this.evaluate(function(){
var profs = document.querySelectorAll('.profile');
profs.forEach(function( val, i ){
val.style.opacity = 0;
});
return;
});
phantomcss.screenshot( '.profiles-box', 'profiles' );
});
Would love to know how other people have solved this because I am sure this isn't a strange use-case (as so many people have dynamic ads on their sites).
Your script might actually work. The problem is that profs is a NodeList. It doesn't have a forEach function. Use this:
var profs = document.querySelectorAll('.profile');
Array.prototype.forEach.call(profs, function( val, i ){
val.style.opacity = 0;
});
It is always a good idea to register handlers for the page.error and remote.message events to catch errors like this.
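For example, a couple of simple listeners along these lines:

// Log JavaScript errors raised inside the page under test
casper.on("page.error", function(msg, trace){
    this.echo("Page error: " + msg, "ERROR");
});
// Log anything the page writes to its own console
casper.on("remote.message", function(msg){
    this.echo("Console message: " + msg);
});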
Another idea would be to employ the resource.requested event handler to abort all the resources that you don't want loaded. It uses the underlying onResourceRequested PhantomJS function.
casper.on("resource.requested", function(requestData, networkRequest){
if (requestData.url.indexOf("mydomain") === -1) {
// abort all resources that are not on my domain
networkRequest.abort();
}
});
If your page handles unloaded resources well, then this should be a viable option.
I am using navigateToURL for file downloading from the server. Is there any way to know when navigateToURL has finished, i.e. when the browser's download dialog has opened?
Sometimes it takes 5 seconds to complete; a user might get confused and start clicking the download button like a psychopath, which can result in multiple download dialogs being opened.
I want to show some "please wait" text or something until it finishes (I already have one, I just need to know when to remove it).
Maybe it can be done using JavaScript, getting the info from ExternalInterface?
This is kind of crazy, but I can't think of any other way: you could put the request object into a dictionary that weakly references its keys, and then check at intervals whether the key has been removed.
However, I'm not sure what will happen first: either the SWF itself will be disposed, or the dictionary will be cleaned. It's also possible that, given the one-time-ness of the function, the reference to the request object isn't deleted because it is assumed to be deleted together with the whole SWF.
One more thing that I know is that uncaught error events will catch errors from navigateToURL - not really helpful, but it may at least give you an indication that it didn't work.
One more simple thing I can think of: just disable the button for a short time, like 1-2 seconds. If it worked, no one will notice the delay, and if it didn't, they won't be able to press it too often.
private var _requestStore:Dictionary = new Dictionary(true);
private var _timer:Timer = new Timer(10);
. . .
_timer.addEventListener(TimerEvent.TIMER, timerHandler);
. . .
public function openURL(url:String):void
{
    var request:URLRequest = new URLRequest(url);
    _requestStore[request] = true;
    _timer.start();
    navigateToURL(request);
}

private function timerHandler(event:TimerEvent):void
{
    var found:Boolean;
    for (var o:Object in _requestStore)
    {
        found = true;
        break;
    }
    if (!found)
    {
        // the request got disposed; stop the timer and hide the "please wait" UI here
        _timer.stop();
    }
}
Is there any way to know when navigateToURL has finished, i.e. when the browser's download dialog has opened?
No, there is not. Once you pass a request to the browser, the Flash Player no longer has any control over it or access to it.
You mention using ExternalInterface as part of a possible solution, but how would your HTML/JavaScript page know that a download had finished?
navigateToURL does not fire a complete event. Try using a URLLoader and its events instead, as in this example from Adobe's help documentation:
public function URLRequestExample() {
    loader = new URLLoader();
    configureListeners(loader);

    var request:URLRequest = new URLRequest("XMLFile.xml");
    try {
        loader.load(request);
    } catch (error:Error) {
        trace("Unable to load requested document.");
    }
}

private function configureListeners(dispatcher:IEventDispatcher):void {
    dispatcher.addEventListener(Event.COMPLETE, completeHandler);
    dispatcher.addEventListener(Event.OPEN, openHandler);
    dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler);
    dispatcher.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
    dispatcher.addEventListener(HTTPStatusEvent.HTTP_STATUS, httpStatusHandler);
    dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
}
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/URLRequest.html#includeExamplesSummary
I was just wondering whether there is any way (libraries, frameworks, tutorials) to do JavaScript tracking with another script? Basically, I want to track, as the user works with the site, which function gets executed with what parameters and so on, as detailed as possible.
Thanks a lot!
The extent of detail you're expecting will be challenging for any solution to gather and report without severely slowing down your scripts -- consider that, for every call, at least one other call would need to occur to gather this.
You'd do better to pick a few key events (mouse clicks, etc.) and track only a few details (such as time) for them. If you're using Ajax, keep the JavaScript and the browser oblivious and just track this on the server side.
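For instance, a minimal client-side sketch of tracking just one key event type (the /track endpoint is hypothetical):

// Report each click to the server with a small JSON payload
document.addEventListener('click', function (e) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/track', true); // '/track' is a hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({
        type: 'click',
        target: e.target.tagName,
        time: new Date().getTime()
    }));
}, true); // capture phase, so it fires even if the click is handled elsewhere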
There are a few options, but I'm not sure any of them are "great". I take it Firebug/IE Dev toolbar profiling won't work because you are trying to track remote users' actions.
So, one option (which I'm not recommending for production purposes) will work in some but not all browsers.
Essentially you overwrite every function with a wrapper into which you then inject your logging.
(I haven't tested this; I'm trying to recall it from memory... hopefully in "pseudo code" you get the idea...)
//e.g. wrap all functions defined on the global window object
function logAll(){
    var funcs = [];
    for (var name in window) {
        try {
            if (typeof window[name] === 'function' && name !== 'logAll') {
                funcs.push(name);
            }
        } catch (ex) {
            //handle as desired
        }
    }
    for (var i = 0; i < funcs.length; i++) {
        //an immediately-invoked function gives each wrapper its own "original"
        (function (name) {
            var original = window[name]; //save the old function
            //redefine the original with a logging wrapper
            window[name] = function () {
                //do your logging here...
                //then call the real function (and pass all params along)
                return original.apply(this, arguments);
            };
        })(funcs[i]);
    }
}
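To use it, call logAll() once after the page has loaded; from then on, every call to a global function passes through the wrapper before the real function runs. Note that this only catches globals: object methods, closures, and anonymous functions are not wrapped.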