I seem to be having trouble testing the slick javascript things I do with jQuery when using Capybara and Selenium. The expected behavior is for a form to be dynamically generated when a user clicks on the link "add resource". Capybara will be able to click the link, but fails to recognize the new form elements (i.e. "resource[name]").
Is there a way to reload the DOM for Capybara, or is there some element of this gem that I just haven't learned of yet?
Thanks in advance!
==Edit==
Currently trying my luck with Selenium's wait_for_element method.
==Edit==
I keep getting an "undefined method `wait_for_element` for nil:NilClass" when attempting the following:
#selenium.wait_for_element
It appears that that specific method, or perhaps wait_for with a big selector targeting the DOM element I expect, is the correct course of action, but getting hold of the Selenium session is turning into a huge headache.
I use the WebDriver-based driver for Capybara in RSpec, which I configure and use as shown below. It definitely handles JS and doesn't need a reload of the DOM. The key is using wait_until with a condition that will be true when your AJAX response has finished.
before(:each) do
  select_driver(example)
  logout
  login('databanks')
end

def select_driver(example)
  if example.metadata[:js]
    Capybara.current_driver = :selenium
  else
    Capybara.use_default_driver
  end
end
it "should let me delete a scenario", :js=>true do
  select("Mysite Search", :from=>'scenario_id')
  wait_until{ page.has_content?('mysite_searchterms') }
  click_on "delete"
  wait_until{ !page.has_content?('mysite_searchterms') }
  visit '/databanks'
  page.should_not have_content('Mysite Search')
end
I also figured out a hack to slow down webdriver last night, like this, if you want to watch things in slo-mo:
# set a command delay
require 'selenium-webdriver'

module ::Selenium::WebDriver::Remote
  class Bridge
    def execute(*args)
      res = raw_execute(*args)['value']
      sleep 0.5
      res
    end
  end
end
As someone else mentioned, if you are getting a timeout waiting for the element, you could look at upping this:
Capybara.default_wait_time = 10
From the Capybara docs:
When working with asynchronous JavaScript, you might come across situations where you are attempting to interact with an element which is not yet present on the page. Capybara automatically deals with this by waiting for elements to appear on the page.
You might have some luck increasing the wait time:
Capybara.default_wait_time = 10
If that doesn't help then I would encourage you to contact somebody from the project on GitHub, write to the mailing list, or submit an issue report.
wait_until was even removed from Capybara 2.0. It's still useful, though; grab the code below:
def wait_until(delay = 1)
  seconds_waited = 0
  while !yield && seconds_waited < Capybara.default_wait_time
    sleep delay
    seconds_waited += delay
  end
  raise "Waited for #{Capybara.default_wait_time} seconds but condition did not become true" unless yield
end
Related
I have a project where I am attempting to scrape data from a database-type webpage, but find its load times are very unpredictable. The button clicks call JavaScript functions that change the buttons in the frame. I have developed a script that navigates through the scriptural book, then chapters, then verses, until it gets to the "bottom" level of the tree and returns information about which talks used that particular verse as a citation. The goal is to collect all the data about each verse cited, by whom, when, etc.
My current focus is to just get all the data downloaded, but that is proving to be tricky. I have tried to use browser.implicitly_wait to get the behaviour I want, and have also tried WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, "citationindex"))), and that hasn't seemed to work either.
The current setup is shown below. It works as follows:
Selenium opens the base url in Chrome and calls the recursive function
dig_into_citations is called on the browser, which recursively searches through the links in the 'citationindex' or 'citationindex2' element.
The get_citationindex function is called first to determine which one the data is in, as it appears to switch back and forth depending on what level of the tree you are at. (If anyone knows why this is happening, that would be hugely helpful.) This 'waits' for the citationindex element to load, which it determines by checking when link elements appear inside one of the two possible elements. Hacky, but this seems to work most of the time.
Then the logic in dig_into_citations checks to see if we are at the base level yet, which is where the talks appear. If so, I'll collect the data somehow (not worried about the details now) but am currently just printing it out for debugging purposes.
If not yet at the base level, then we are looking at either books, chapters, or verses and want to dig into each of those. I do this by running the javascript function within the onclick attribute of each of the button links (which has the form getFilter('1', '101') where the number of arguments depends on the level). I am passing the script through as well to store as the id for each talk.
The problem I am having is that the call to execute_script basically just updates the current webpage with new buttons, and it's not clear how long that takes each time, so I'm putting a 1-second sleep in before each iteration. I find anything shorter causes it to choke.
Like I say, I have tried the "wait-until" functionality in Selenium but it hasn't seemed to function as expected. I suspect that this is again because running the script in each case causes the elements inside the 'citationindex' (or 'citationindex2') element to update, but the element itself is already there and already "loaded", so maybe that's why the wait-until stuff isn't working.
Any insights into how this process could be improved and made robust would be greatly appreciated. If I am going about this in the wrong way, it would also help to know if there is a simpler approach to get at all the data I'm looking for rather than this rickety recursion through button click hell. Thanks in advance.
from bs4 import BeautifulSoup
from selenium import webdriver
import time
def get_citationindex(browser):
    """
    Gets the name of the citationindex element that contains the links of interest.
    """
    CITATION_ID_NAMES = ['citationindex', 'citationindex2']

    # Gets the HTML associated with both citation index elements and attempts to find links inside
    def get_soup_list(browser):
        citation_id_elems = [browser.find_element_by_id(x).get_attribute('innerHTML') for x in CITATION_ID_NAMES]
        citation_id_soups = [BeautifulSoup(x, features="lxml") for x in citation_id_elems]
        soup_list = [soup for soup in citation_id_soups if soup.find_all('a')]
        return soup_list

    # This runs until the page has actually loaded and the links are found inside the proper citationindex element
    soup_list = []
    while not soup_list:
        soup_list = get_soup_list(browser)  # ! this hack is not great

    # the BeautifulSoup object of the citation index element is returned (i.e. for 'citationindex' or 'citationindex2')
    citationindex_soup = soup_list[0]
    return citationindex_soup

def dig_into_citations(browser, script=''):
    """
    Recursive function that digs into the citations (through book, chapter, and verse) until it finds the talk references.
    It operates by calling the javascript functions on the page.
    """
    soup = get_citationindex(browser)

    # the desired links are the only ones with 'div' tags
    links = [x for x in soup.find_all('a') if x.find('div')]
    talks = [x for x in links if "getTalk" in x.get('onclick')]
    header = soup.find(class_='volumetitle')

    if talks:
        for k in talks:
            ref = k.find(class_='reference').text
            title = k.find(class_='talktitle').text
            print(script, header.text, ref, title)
    else:
        for k in links:
            script = k.get('onclick')
            browser.execute_script(script)
            time.sleep(1)  # ! this hack is also not great
            dig_into_citations(browser, script)

# Run a test case for one particular book
browser = webdriver.Chrome()
browser.implicitly_wait(20)
browser.get('https://scriptures.byu.edu/#::fNYNY7267e401074')
dig_into_citations(browser)
Note: I am running this with Python 3.8, selenium==3.141.0, and beautifulsoup4==4.9.3, using the Selenium driver for Chrome 87.
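The `while not soup_list` busy loop and the `time.sleep(1)` before each recursion are both instances of the same problem: waiting an unknown amount of time for the page to settle. A bounded polling helper makes that explicit. The sketch below is plain Python with no Selenium dependency; `wait_for` is a hypothetical name, but the contract mirrors what Selenium's WebDriverWait.until does internally:

```python
import time

def wait_for(predicate, timeout=10.0, poll=0.25):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or raises TimeoutError after the deadline.
    This is essentially the loop WebDriverWait.until runs for you.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1f seconds" % timeout)
        time.sleep(poll)
```

In the scraper above, get_citationindex could then replace its busy loop with `soup_list = wait_for(lambda: get_soup_list(browser), timeout=20)`, and the fixed one-second sleep could become a wait on a predicate that checks the links have actually changed since the last click.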
I am using Capybara with Selenium as its driver. I am trying to click on an element which, when clicked, will reveal a div, but the click never invokes the JavaScript to do just that.
Below is the code I have
scenario 'currently used transport mode cannot be re-selected' do
  expect(page).to have_css("h2.summary")
  expect(find('h2.summary').text).to eq("Single event")
  expect(page).to have_content("Change journey")
  page.click_link("Change journey")
  expect(find('#travel-times-preview').visible?).to be_truthy # FAILS here because of previous step not working
end
error message
Capybara::ElementNotFound: Unable to find css "#travel-times-preview"
html
<a class="change-journey gray-text" href="#">Change journey</a>
javascript code to execute
$(".change-journey").on("click", function(e){
  var target = $(this).data("preview-target");
  $('[data-preview-toggle="' + target + '"]').toggleClass("hidden");

  if ($(this).text().indexOf('Change journey') > -1) {
    $(this).text("Close Preview");
  } else {
    $(this).text("Change journey");
  }

  e.preventDefault();
});
database cleaner setup
config.before(:suite) do
  if config.use_transactional_fixtures?
    raise(<<-MSG)
      Delete line `config.use_transactional_fixtures = true` from rails_helper.rb
      (or set it to false) to prevent uncommitted transactions being used in
      JavaScript-dependent specs.

      During testing, the Ruby app server that the JavaScript browser driver
      connects to uses a different database connection to the database connection
      used by the spec.

      This Ruby app server database connection would not be able to see data that
      has been setup by the spec's database connection inside an uncommitted
      transaction.

      Disabling the use_transactional_fixtures setting helps avoid uncommitted
      transactions in JavaScript-dependent specs, meaning that the Ruby app server
      database connection can see any data set up by the specs.
    MSG
  end
end

config.before(:suite) do
  DatabaseCleaner.clean_with(:truncation)
end

config.before(:each) do
  DatabaseCleaner.strategy = :transaction
end

config.before(:each, type: :feature) do
  # :rack_test driver's Rack app under test shares database connection
  # with the specs, so we can use transaction strategy for speed.
  driver_shares_db_connection_with_specs = Capybara.current_driver == :rack_test

  if driver_shares_db_connection_with_specs
    DatabaseCleaner.strategy = :transaction
  else
    # Non-:rack_test driver is probably a driver for a JavaScript browser
    # with a Rack app under test that does *not* share a database
    # connection with the specs, so we must use truncation strategy.
    DatabaseCleaner.strategy = :truncation
  end
end

config.before(:each) do
  DatabaseCleaner.start
end

config.after(:each) do
  DatabaseCleaner.clean
end
While I can see the link being clicked, the underlying JavaScript is not executed.
Assuming you've looked at the dev console in the Firefox window that opens and there are no JS errors, there are a few potential reasons for the behavior you're seeing, none of which has anything to do with e.preventDefault(). (When you hear hoofbeats, think horses, not zebras.)
1. The link you show doesn't have a data-preview-target attribute, so there is nothing for the click handler to read and append to data-preview-toggle=, so nothing gets toggled. It's possible that data is added by other JS in your app, and so is set as a property without being an attribute in the HTML (or you just chose to leave that detail out of the element you showed), which moves us to #2.
2. Your .on JS code is running before the element exists, and therefore not attaching. This can show up in test mode when it concatenates all the JS into one file, because the JS is loaded faster than when multiple requests are made in dev mode. This would show up as the link text not changing, since the click handler isn't run at all. If that is the case, delay attaching the listener until the DOM is loaded.
3. This is the first JS-supporting test you're writing, the elements being hidden/shown are generated from database records, and you haven't disabled transactional tests and/or haven't configured database_cleaner correctly. This leads to objects you create in your tests not actually being visible in your app, so the element you're expecting to be on the page isn't actually there. You can verify that by pausing the test and looking in the HTML to see if it is or is not actually on the page.
4. Based on the error you provided this is not the cause of your issue; I'm just adding it for completeness: click_link clicks the link and returns immediately. That means the test continues running while the action triggered by the click continues. This can lead to a race condition in your code (if you have Capybara.ignore_hidden_elements = false set - bad idea) where your code finds the element and checks its visibility before it's changed. Because of that, your final step should be written as below, because it will wait/retry for the element to become visible:
expect(page).to have_css('#travel-times-preview', visible: true)
As an aside, your test code can be improved and sped up by using the features Capybara provides
scenario 'currently used transport mode cannot be re-selected' do
  expect(page).to have_css("h2.summary", text: "Single event") # do it in one query
  page.click_link("Change journey") # click_link will find the content anyway, so no need to check for it first
  expect(page).to have_css('#travel-times-preview') # By default Capybara only finds visible elements, so checking visibility is pointless; if you've changed that default see #4
end
I have a capybara test that checks to see if a class is found on the page. The class allows the user to view a sidebar. Here is the test below.
feature 'Lesson Sidebar', js: true do
  before do
    visit course_lesson_path
  end

  context 'persists the lesson sidebar display' do
    before do
      find('#lesson-sidebar__lesson-2').trigger('click')
    end

    scenario do
      expect(page).to have_selector('.lesson-page__sidebar--expanded', visible: false)
    end
  end
end
The JS code simply tacks on the class to the sidebar element when #lesson-sidebar__lesson-2 is clicked. The code is within a document ready call.
$(document).ready(function() {
  $('.lesson-subnav__heading').on("mousedown", "#lesson-sidebar__lesson-2", function (e) {
    $('#sidebar').addClass('lesson-page__sidebar--expanded');
  });
});
Here is the error response I received.
Capybara::ElementNotFound:
Unable to find css "#lesson-sidebar__lesson-2"
This is my problem. The test will randomly fail. Not only for this test but for other tests within this page. My assumption is that the test is running before the JS has a chance to finish which is why the test fails sometimes. How do I fix this so the test passes every time?
Update: The posting of the actual error shows that it's the find in the before block that's not finding the element, which means this has nothing to do with the wait_for_ajax call or event-handler binding. If doing something like find('#lesson-sidebar__lesson-2', wait: 20).click doesn't make the error go away, then you have an issue with your page loading (or you've mistyped the selector), although I would not expect that to create an intermittent failure. Check your test logs for what requests were actually made and/or add
sleep 10
page.save_screenshot
before the find/click and look at the page to see if it's what you expect. Another thing to check would be that you're showing your JS errors in your driver config (I assume you're using poltergeist, since most people who default to .trigger generally are).
I am having a torrid time trying to click on a button using Webdriver. The button is not visible until a value is entered in a prior field. I have tried adding sleeps and explicit waits but still no luck.
I am thinking it might have something to do with page javascript, but my skills don't quite extend that far. I am still learning, so apologies for my ugly code...
count = 1
while count < 3:
    time.sleep(2)
    # Not the best way to select the button - but it works for now!
    elem = driver.find_element_by_tag_name("button").click()
    # Clear default amount
    elem = driver.find_element_by_name("amount")
    elem.send_keys(Keys.BACKSPACE)
    elem.send_keys(Keys.BACKSPACE)
    elem.send_keys(Keys.BACKSPACE)
    elem.send_keys(Keys.BACKSPACE)
    elem.send_keys(Keys.BACKSPACE)
    elem.send_keys(Keys.BACKSPACE)
    elem.send_keys("0.04")
    print 'Entered Amount'
    time.sleep(1)
    elem.send_keys(Keys.TAB)
    time.sleep(3)
    elem.send_keys(Keys.TAB)
    time.sleep(3)
    elem.send_keys("\n")
    # This finds the button - but it isn't visible
    # elem = driver.find_element_by_tag_name("button").click()
    time.sleep(6)
    print 'Number of Payments = ', count
    count = count + 1
print 'Finished!'
The website code looks like this:
<button type="button" class="btn alpha centred-form-button ng-binding" ng-click="accountsPayCtrl.submit()" ng-disabled="!accountsPayCtrl.paymentSubmitted &&
(!paymentForm.$valid || !accountsPayCtrl.inAmount || !accountsPayCtrl.payToken)" tabindex="0" aria-disabled="false">
Pay $0.04 now
</button>
There are no doubt more elegant ways to get to the end result too!
Error I am getting is:
Traceback (most recent call last):
  File "C:\MW_Test\energyaust_Explicit_Wait.py", line 43, in <module>
    elem = driver.find_element_by_tag_name("button").click()
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 75, in click
    self._execute(Command.CLICK_ELEMENT)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 454, in _execute
    return self._parent.execute(command, params)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 201, in execute
    self.error_handler.check_response(response)
  File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 181, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with
Stacktrace:
    at fxdriver.preconditions.visible (file:///c:/users/ozmatt/appdata/local/temp/tmpupncqr/extensions/fxdriver@googlecode.com/components/command-processor.js:9981)
    at DelayedCommand.prototype.checkPreconditions_ (file:///c:/users/ozmatt/appdata/local/temp/tmpupncqr/extensions/fxdriver@googlecode.com/components/command-processor.js:12517)
    at DelayedCommand.prototype.executeInternal_/h (file:///c:/users/ozmatt/appdata/local/temp/tmpupncqr/extensions/fxdriver@googlecode.com/components/command-processor.js:12534)
    at DelayedCommand.prototype.executeInternal_ (file:///c:/users/ozmatt/appdata/local/temp/tmpupncqr/extensions/fxdriver@googlecode.com/components/command-processor.js:12539)
    at DelayedCommand.prototype.execute/< (file:///c:/users/ozmatt/appdata/local/temp/tmpupncqr/extensions/fxdriver@googlecode.com/components/command-processor.js:12481)
You should not use sleep under (almost) any circumstance.
Instead, the Selenium API provides you with waits, Implicit and Explicit.
From the Selenium documentation:
Implicit Waits: An implicit wait is to tell WebDriver to poll the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available. The default setting is 0. Once set, the implicit wait is set for the life of the WebDriver object instance.
And for Explicit waits:
Explicit Waits: An explicit wait is code you define to wait for a certain condition to occur before proceeding further in the code. The worst case of this is Thread.sleep(), which sets the condition to an exact time period to wait. There are some convenience methods provided that help you write code that will wait only as long as required. WebDriverWait in combination with ExpectedCondition is one way this can be accomplished.
Now, in your case, what you need is to have the element visible or, since you need to click on it, have it clickable:
element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "myDynamicElement")))
Refer to the Selenium documentation for usage of explicit waits.
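To see why the explicit wait succeeds where a fixed sleep fails, it helps to look at the contract: a condition is just a callable that returns something truthy (often the element itself) or False, and the wait re-evaluates it until the deadline. The sketch below is a minimal re-implementation of that pattern; FakeElement and the function names are illustrative stand-ins, not Selenium's actual classes (a real test would use WebDriverWait and expected_conditions directly):

```python
import time

class FakeElement:
    """Illustrative stand-in for a WebElement, so the sketch runs without a browser."""
    def __init__(self):
        self.displayed = False
        self.enabled = False

    def is_displayed(self):
        return self.displayed

    def is_enabled(self):
        return self.enabled

def element_to_be_clickable(element):
    """Condition factory: returns a callable that yields the element once it is
    both visible and enabled, otherwise False - the same contract Selenium's
    expected_conditions follow."""
    def condition():
        if element.is_displayed() and element.is_enabled():
            return element
        return False
    return condition

def until(condition, timeout=10.0, poll=0.1):
    """Minimal sketch of WebDriverWait(driver, timeout).until: re-evaluate the
    condition until it returns something truthy, then hand that value back."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = condition()
        if value:
            return value
        time.sleep(poll)
    raise TimeoutError("condition was never truthy within %.1f s" % timeout)
```

Because the condition is re-run on every poll, the wait reacts as soon as the button becomes clickable, instead of sleeping a guessed amount of time and hoping the page has caught up.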
Thanks for the suggestions guys. I had a colleague help me solve my issue and thought I would add it on here to perhaps help the next newbie like me.
Turns out that by not being specific enough in locating the button, I was actually finding another, hidden button. I feel pretty silly, but it is a good lesson!
I am using Capybara, Cucumber and Poltergeist. I am testing a JavaScript function that is attached to a form submit button, which is intended to catch the submit event and prevent it (doing an AJAX request in the background). With and without AJAX, the page will end up looking the same, but the AJAX approach is much faster and does not interrupt the browsing experience etc.
What can I do to test that the form was indeed not submitted, and that the changes are the result of a dynamic AJAX call rather than a reload?
Modified version of #jules' answer:
describe "My page", :js do
  it "reloads when it should" do
    visit "/"
    expect_page_to_reload do
      click_link "Reload me"
    end
  end

  def expect_page_to_reload
    page.evaluate_script "$(document.body).addClass('not-reloaded')"
    yield
    expect(page).not_to have_selector("body.not-reloaded")
  end
end
EDIT 2017-01-19:
I found this old answer of mine which strangely did not seem to answer the actual question, but instead the opposite (check that the page has reloaded). This is what we do currently in our app to check that a page does not reload:
def expect_page_not_to_reload
  page.evaluate_script %{$(document.body).addClass("not-reloaded")}
  expect(page).to have_selector("body.not-reloaded")
  yield
  sleep 0.5 # Give it a chance to reload before we check.
  expect(page).to have_selector("body.not-reloaded")
end
Both answers rely on jQuery for adding the class.
I had the same issue and came up with a solution; probably not the best one, but it works :)
# support/reload.rb
def init_reload_check
  page.evaluate_script "window.not_reloaded = 'not reloaded';"
end

def not_reloaded
  page.evaluate_script("window.not_reloaded") == "not reloaded"
end
# Ajax test
it "should not reload the page", js: true do
  init_reload_check
  click_link "Ajax Link"
  expect(not_reloaded).to be_truthy
end
And here's my version of Henrik N's answer. Doesn't rely on jQuery or whatever assertion library he's using.
def assert_page_reloads(message = "page should reload")
  page.evaluate_script "document.body.classList.add('not-reloaded')"
  yield
  if has_selector? "body.not-reloaded"
    assert false, message
  end
end

def assert_page_does_not_reload(message = "page should not reload")
  page.evaluate_script "document.body.classList.add('not-reloaded')"
  yield
  unless has_selector? "body.not-reloaded"
    assert false, message
  end
  page.evaluate_script "document.body.classList.remove('not-reloaded')"
end
Capybara is intended for user-level functional testing. There isn't an easy way to test an implementation detail such as whether a form uses AJAX or a traditional form submission. It encourages you to write your tests in terms of the end-user experience.
There are a couple of things you can do:
Capybara supports a wait time for finders and assertions. There are a few ways to set it, using Capybara.default_wait_time, Capybara.using_wait_time, or passing a wait option to your assertion methods. If you set a low wait time, you can be sure that the results of clicking the submit button return quickly.
Look into JavaScript unit and integration testing frameworks such as Jasmine. This can be used to test the JavaScript code that sets up the event binding and AJAX call. You can use a spy to ensure that the expected AJAX call is made when the button is clicked.
feature "User creates comment" do
  context "with JavaScript enabled", js: true do
    scenario "creates comment via AJAX" do
      initial_page_id = assign_page_id
      # Perform AJAX action
      ...
      page_reloaded = current_page_id != initial_page_id
      expect(page_reloaded).to eq(false)
    end

    def assign_page_id
      page_id = SecureRandom.hex
      page.evaluate_script "window.pageIdForTesting = '#{page_id}'"
      page_id # return the id so the spec can compare against it later
    end

    def current_page_id
      page.evaluate_script("window.pageIdForTesting")
    end
  end

  context "without JavaScript" do
    # Can be used if progressively enhancing
    ...
  end
end
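The page-id trick works because a full reload wipes all window state, so a token planted before the action survives only if no navigation happened. That logic is driver-agnostic and can be sketched outside the browser entirely. In the sketch below (plain Python; the names are illustrative and `window` is simulated by a dict standing in for the browser's window object):

```python
import uuid

def assign_page_id(window):
    """Stamp the simulated window with a unique token.

    In the real spec this is page.evaluate_script setting
    window.pageIdForTesting to a SecureRandom value."""
    page_id = uuid.uuid4().hex
    window["pageIdForTesting"] = page_id
    return page_id

def page_reloaded(window, initial_page_id):
    """A reload hands us a fresh window with no token, so any mismatch
    (or absence) of the token means the page navigated."""
    return window.get("pageIdForTesting") != initial_page_id
```

An AJAX update mutates the existing window, so the token survives and `page_reloaded` stays false; a form submission replaces the window wholesale, so the token is gone and the check trips.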