I am using Capybara with Selenium as its driver. I am trying to click an element that, when clicked, should reveal a div, but the click never seems to invoke the JavaScript that does this.
Below is the code I have:
scenario 'currently used transport mode cannot be re-selected' do
  expect(page).to have_css("h2.summary")
  expect(find('h2.summary').text).to eq("Single event")
  expect(page).to have_content("Change journey")
  page.click_link("Change journey")
  expect(find('#travel-times-preview').visible?).to be_truthy # FAILS here because of previous step not working
end
The error message:
Capybara::ElementNotFound: Unable to find css "#travel-times-preview"
The HTML:
<a class="change-journey gray-text" href="#">Change journey</a>
The JavaScript that should execute:
$(".change-journey").on("click", function(e){
  var target = $(this).data("preview-target");
  $('[data-preview-toggle="'+ target +'"]').toggleClass("hidden");
  if($(this).text().indexOf('Change journey') > -1){
    $(this).text("Close Preview");
  }else{
    $(this).text("Change journey");
  }
  e.preventDefault();
});
DatabaseCleaner setup:
config.before(:suite) do
  if config.use_transactional_fixtures?
    raise(<<-MSG)
      Delete line `config.use_transactional_fixtures = true` from rails_helper.rb
      (or set it to false) to prevent uncommitted transactions being used in
      JavaScript-dependent specs.

      During testing, the Ruby app server that the JavaScript browser driver
      connects to uses a different database connection to the database connection
      used by the spec.

      This Ruby app server database connection would not be able to see data that
      has been setup by the spec's database connection inside an uncommitted
      transaction.

      Disabling the use_transactional_fixtures setting helps avoid uncommitted
      transactions in JavaScript-dependent specs, meaning that the Ruby app server
      database connection can see any data set up by the specs.
    MSG
  end
end

config.before(:suite) do
  DatabaseCleaner.clean_with(:truncation)
end

config.before(:each) do
  DatabaseCleaner.strategy = :transaction
end

config.before(:each, type: :feature) do
  # :rack_test driver's Rack app under test shares database connection
  # with the specs, so we can use transaction strategy for speed.
  driver_shares_db_connection_with_specs = Capybara.current_driver == :rack_test

  if driver_shares_db_connection_with_specs
    DatabaseCleaner.strategy = :transaction
  else
    # Non-:rack_test driver is probably a driver for a JavaScript browser
    # with a Rack app under test that does *not* share a database
    # connection with the specs, so we must use truncation strategy.
    DatabaseCleaner.strategy = :truncation
  end
end

config.before(:each) do
  DatabaseCleaner.start
end

config.after(:each) do
  DatabaseCleaner.clean
end
While I can see the link being clicked, the underlying JavaScript is not executed.
Assuming you've looked in the dev console of the Firefox window that opens and there are no JS errors, there are a few potential reasons for the behavior you're seeing, none of which has anything to do with e.preventDefault() (when you hear hoofbeats, think horses, not zebras).
The link you show doesn't have a data-preview-target attribute, so there is nothing for the click handler to read and append to data-preview-toggle=, and therefore nothing gets toggled. It's possible that data is added by other JS in your app (so it's set as a property and was never an attribute in the HTML), or you simply left that detail out of the element you showed, which brings us to #2.
Your .on JS code is running before the element exists, and therefore never attaches. This can show up in test mode because all the JS is concatenated into one file and loads faster than the multiple requests made in dev mode. This would manifest as the link text not changing, since the click handler isn't run at all. If that is the case, delay attaching the listener until the DOM is loaded.
This is the first JS-dependent test you're writing, the elements being hidden/shown are generated from database records, and you haven't disabled transactional tests and/or configured database_cleaner correctly. That leads to objects you create in your tests not actually being visible to your app, so the element you're expecting isn't actually on the page. You can verify this by pausing the test and inspecting the HTML to see whether the element is actually there.
Based on the error you provided this is not the cause of your issue; I'm just adding it for completeness. click_link clicks the link and returns immediately, so the test continues running while the action triggered by the click is still in progress. This can lead to a race condition (if you have Capybara.ignore_hidden_elements = false set, which is a bad idea) where your code finds the element and checks its visibility before it has changed. Because of that, your final step should be written as below, since have_css will wait/retry for the element to become visible:
expect(page).to have_css('#travel-times-preview', visible: true)
As an aside, your test code can be improved and sped up by using the features Capybara provides:
scenario 'currently used transport mode cannot be re-selected' do
  expect(page).to have_css("h2.summary", text: "Single event") # do it in one query
  page.click_link("Change journey") # click_link will find the content anyway, so no need to check for it first
  expect(page).to have_css('#travel-times-preview') # By default Capybara only finds visible elements, so checking visibility is pointless; if you've changed that default see #4
end
Related
I have a project where I am attempting to scrape data from a database-type webpage, but find its load times are very unpredictable. The button clicks call JavaScript functions that change the buttons in the frame. I have developed a script that navigates through the scriptural book, then chapters, then verses, until it gets to the "bottom" level of the tree and returns information about which talks used that particular verse as a citation. The goal is to collect all the data about each verse cited, by whom, when, etc.
My current focus is just to get all the data downloaded, but that is proving tricky. I have tried browser.implicitly_wait to get the behaviour I want, and have also tried WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, "citationindex"))), and that hasn't seemed to work either.
The current setup is shown below. It works as follows:
Selenium opens the base url in Chrome and calls the recursive function
dig_into_citations is called on the browser, which recursively starts searching through the links in the 'citationindex' or 'citationindex2' element.
The get_citationindex function is called first to determine which element the data is in, as it appears to switch back and forth depending on what level of the tree you are at (if anyone knows why this is happening, that would be hugely helpful). It "waits" for the citationindex element to load, which is determined by checking when link elements appear inside one of the two possible elements. Hacky, but this seems to work most of the time.
Then the logic in dig_into_citations checks whether we are at the base level yet, which is where the talks appear. If so, I'll collect the data somehow (not worried about the details now); currently I'm just printing it out for debugging purposes.
If not yet at the base level, then we are looking at either books, chapters, or verses and want to dig into each of those. I do this by running the JavaScript function in the onclick attribute of each of the button links (which has the form getFilter('1', '101'), where the number of arguments depends on the level). I am also passing the script through to store as the id for each talk.
The problem I am having is that the call to execute_script basically just updates the current page with new buttons, and it's not clear how long that takes each time, so I'm putting a 1-second sleep in before each iteration. I find anything shorter causes it to choke.
Like I say, I have tried the "wait-until" functionality in Selenium, but it hasn't seemed to function as expected. I suspect that this is because running the script in each case causes the contents of the 'citationindex' (or 'citationindex2') element to update, but the element itself is already there and already "loaded", so maybe that's why the wait-until stuff isn't working.
Any insights into how this process could be improved and made robust would be greatly appreciated. If I am going about this in the wrong way, it would also help to know if there is a simpler approach to get at all the data I'm looking for rather than this rickety recursion through button click hell. Thanks in advance.
from bs4 import BeautifulSoup
from selenium import webdriver
import time


def get_citationindex(browser):
    """
    Gets the citationindex element that contains the links of interest.
    """
    CITATION_ID_NAMES = ['citationindex', 'citationindex2']

    # Gets the HTML associated with both citation index elements and attempts to find links inside
    def get_soup_list(browser):
        citation_id_elems = [browser.find_element_by_id(x).get_attribute('innerHTML') for x in CITATION_ID_NAMES]
        citation_id_soups = [BeautifulSoup(x, features="lxml") for x in citation_id_elems]
        soup_list = [soup for soup in citation_id_soups if soup.find_all('a')]
        return soup_list

    # This runs until the page has actually loaded and the links are found inside the proper citationindex element
    soup_list = []
    while not soup_list:
        soup_list = get_soup_list(browser)  # ! this hack is not great

    # the BeautifulSoup object of the citation index element is returned (i.e. for 'citationindex' or 'citationindex2')
    citationindex_soup = soup_list[0]
    return citationindex_soup


def dig_into_citations(browser, script=''):
    """
    Recursive function that digs into the citations (through book, chapter, and verse) until it finds the talk references.
    It operates by calling the javascript functions on the page.
    """
    soup = get_citationindex(browser)

    # the desired links are the only ones with 'div' tags
    links = [x for x in soup.find_all('a') if x.find('div')]
    talks = [x for x in links if "getTalk" in x.get('onclick')]
    header = soup.find(class_='volumetitle')

    if talks:
        for k in talks:
            ref = k.find(class_='reference').text
            title = k.find(class_='talktitle').text
            print(script, header.text, ref, title)
    else:
        for k in links:
            script = k.get('onclick')
            browser.execute_script(script)
            time.sleep(1)  # ! this hack is also not great
            dig_into_citations(browser, script)


# Run a test case for one particular book
browser = webdriver.Chrome()
browser.implicitly_wait(20)
browser.get('https://scriptures.byu.edu/#::fNYNY7267e401074')
dig_into_citations(browser)
Note I am running this with Python 3.8 with selenium=3.141.0 and beautifulsoup4=4.9.3 using Selenium driver for Chrome 87
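The two sleep/poll hacks flagged in the code can be made robust by waiting on a condition rather than a fixed delay. The reason the wait-until attempts failed is likely the one suspected in the question: the citationindex element itself is always present, so the condition has to target the links inside it rather than the container. Below is a minimal, browser-free sketch of such a polling wait (wait_for is a hypothetical helper name, not part of selenium); selenium's WebDriverWait works the same way internally.

```python
import time

def wait_for(condition, timeout=10, poll=0.5):
    """Call `condition` repeatedly until it returns a truthy value or
    `timeout` seconds elapse. Returns the truthy value on success and
    raises TimeoutError on failure -- the same idea as selenium's
    WebDriverWait(driver, timeout).until(...)."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(poll)
```

With selenium itself, the equivalent would be something like `WebDriverWait(browser, 10).until(lambda d: d.find_elements_by_css_selector("#citationindex a, #citationindex2 a"))`: find_elements returns an empty (falsy) list until link elements actually appear inside either container, so the wait retries on the content rather than the already-loaded container element.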
I have to implement a way to select values from an ASP horizontal menu (generated from a config file). Generating the menu is not an issue; my actual problem is finding a way for the user to access its values without using postbacks.
I am working on a big, old project that was coded with the intent of not using PostBack requests at all. Postbacks are by design unexpected and considered bugs.
To reach my goal, I tried to code a JavaScript trigger for the MenuItemClick event that ends with return false, in order to avoid postbacks (currently the JavaScript code is limited to a simple test alert and the false return, for testing purposes). But this doesn't work: I can't get the event to fire the JavaScript function, and a postback happens, which is undesired.
The complete code consists of thousands of lines (most of them unrelated to the issue); here is an abridged version with only the relevant lines.
Default.aspx:
<%# Register TagPrefix="CRB" Namespace="ConfigurableReportBuilder.PageControls" Assembly="ConfigurableReportBuilder" %>
<form id="form1" class="page" runat="server">
    <CRB:HorizontalMenu ID="MainMenu" runat="server" RootNode="Menu/Items" StyleClass="ui-menu ui-state-hover"/>
    <%-- this works fine, no issue here --%>
</form>
Default.aspx.cs:
(...)
// Ensure that PostBack requests (which are unexpected and therefore indicate bugs)
// are flagged by a specific exception thrown here
if (this.IsPostBack)
    throw new Exception("PostBack requests are not expected to occur");
Workspace.js:
var WS = (function ($) {
    var test = function (event) {
        alert("test");
        return false; // prevent postback
    };
    (...)
    return { // This dictionary contains references to public methods used in the project
        test: test,
        (...)
    };
})(jQuery); // the IIFE must be invoked for WS to hold the returned dictionary

$(document).on("menuitemclick", '#MainMenu', WS.test); // setting the event trigger
I have found a solution that does achieve the desired goal, but beware: it doesn't respect good coding practices. There might be better solutions available, but I can't find any.
The idea is to abandon the MenuItemClick event-based logic and revert to the default behavior of ASP menus, having URLs as clickable menu items (NavigateUrl). The URLs are accessible in the main page without any postback, so they can be used to carry the information... as small pieces of JavaScript code.
To do this, one must set raw JavaScript code as the NavigateUrl parameter while initializing the menu items:
protected MenuItem getMenuItem(XmlNode menuNode)
{   // Initializes a MenuItem from an XML node
    MenuItem mi = new MenuItem(); // creating a MenuItem object

    // Assigning its properties:
    mi.Text = menuNode.Attributes["text"].Value;
    if (menuNode.Attributes["description"] != null)
        mi.ToolTip = menuNode.Attributes["description"].Value;
    if (menuNode.Attributes["configfile"] != null)
    {
        mi.Value = menuNode.Attributes["configfile"].Value;
        mi.NavigateUrl = "javascript:alert('" + mi.Value + "');"; // Note that this isn't a URL
    }
    return mi;
}
The URLs from the ASP menu are therefore not links but small functions that will be executed in the main page, without going through any event. These functions can be used to retrieve the desired values without going through any postback events: in the example I just gave, an alert window displays the selected dropdown value.
I have a Capybara test that checks whether a class is found on the page. The class allows the user to view a sidebar. Here is the test below.
feature 'Lesson Sidebar', js: true do
  before do
    visit course_lesson_path
  end

  context 'persists the lesson sidebar display' do
    before do
      find('#lesson-sidebar__lesson-2').trigger('click')
    end

    scenario do
      expect(page).to have_selector('.lesson-page__sidebar--expanded', visible: false)
    end
  end
end
The JS code simply adds the class to the sidebar element when #lesson-sidebar__lesson-2 is clicked. The code is inside a document-ready call.
$(document).ready(function() {
  $('.lesson-subnav__heading').on("mousedown", "#lesson-sidebar__lesson-2", function (e) {
    $('#sidebar').addClass('lesson-page__sidebar--expanded');
  });
});
Here is the error response I received:
Capybara::ElementNotFound:
Unable to find css "#lesson-sidebar__lesson-2"
This is my problem: the test will randomly fail, and not only this test but other tests on this page as well. My assumption is that the test runs before the JS has a chance to finish, which is why it fails sometimes. How do I fix this so the test passes every time?
Update: the posting of the actual error shows that it's the find in the before block that's not finding the element, which means this has nothing to do with the wait_for_ajax call or event handler binding. If something like find('#lesson-sidebar__lesson-2', wait: 20).click doesn't make the error go away, then you have an issue with your page loading (or you've mistyped the selector), although I would not expect that to create an intermittent failure. Check your test logs for what requests were actually made, and/or add
sleep 10
page.save_screenshot
before the find/click and look at the page to see if it's what you expect. Another thing to check is that you're showing JS errors in your driver config (I assume you're using Poltergeist, since most people who default to .trigger generally are).
I'm using WebdriverIO and selenium-standalone to write automated tests that will verify that various parts of our user interface are working.
I need to verify that an element is not present on the page. For example, our system allows staff to track various types of resources that we are referring clients to. If a staff member accidentally adds the wrong resource, they can delete it, and I want to verify that the resource was actually deleted and is not present on the page.
WebdriverIO has an .isExisting() command, but no direct way to check that something is not existing (or not visible/present). I could also use Chai assertions to figure this out, but I haven't delved into that world yet.
Here's a snippet of my code:
it('I can delete a resource from a need', function() {
    return driver
        .moveToObject('span.ccx-tasklist-task')                           // Hover mouse over resource
        .click('div.referral-controls a.btn.dropdown-standalone')         // Click Resource drop-down
        .click('div.referral-controls.ccx-dropdown-menu-selected li > a') // Delete Resource
        .pause(2000);
        // Need to verify that resource was deleted here
});
Any advice? Let me know if you need more information.
You can use waitForExist with the reverse option set to true:
.waitForExist( '[id$=OpenNeedsPanel] div.commodities', 500, true )
I was able to verify that an element didn't exist on the page like this:
.isExisting('[id$=OpenNeedsPanel] div.commodities').should.eventually.equal(false);
You can simply use the line of code below:
int temp = driver.findElements(By.xpath("your x-path expression")).size();
You can replace the XPath locator with your other locators as well, like id, class, link text, etc.
Now, if the value of temp > 0, it means your element exists.
You can refer to https://webdriver.io/docs/api/element/waitForExist.html
Wait for an element for the provided amount of milliseconds to be present within the DOM. Returns true if the selector matches at least one element that exists in the DOM, otherwise throws an error. If the reverse flag is true, the command will instead return true if the selector does not match any elements.
resource.waitForExist(1000, true);
where 1000 is the timeout in ms.
I seem to be having trouble testing the slick JavaScript things I do with jQuery when using Capybara and Selenium. The expected behavior is for a form to be dynamically generated when a user clicks the link "add resource". Capybara is able to click the link, but fails to recognize the new form elements (i.e. "resource[name]").
Is there a way to reload the DOM for Capybara, or is there some element of this gem that I just haven't learned of yet?
Thanks in advance!
==Edit==
Currently trying my luck with selenium's:
wait_for_element
method.
==Edit==
I keep getting an "undefined method `wait_for_element` for nil class" error when attempting the following:
#selenium.wait_for_element
It appears that that specific method, or perhaps wait_for with a huge selector targeting the DOM element I expect, is the correct course of action, but getting hold of the selenium session is starting to be a huge headache.
I use the Webdriver-based driver for Capybara in RSpec, which I configure and use as shown below; it definitely handles JS and doesn't need a reload of the DOM. The key is using wait_until with a condition that becomes true when your AJAX response has finished.
before(:each) do
  select_driver(example)
  logout
  login('databanks')
end

def select_driver(example)
  if example.metadata[:js]
    Capybara.current_driver = :selenium
  else
    Capybara.use_default_driver
  end
end
it "should let me delete a scenario", :js => true do
  select("Mysite Search", :from => 'scenario_id')
  wait_until{ page.has_content?('mysite_searchterms') }
  click_on "delete"
  wait_until{ !page.has_content?('mysite_searchterms') }
  visit '/databanks'
  page.should_not have_content('Mysite Search')
end
I also figured out a hack to slow down webdriver last night, like this, if you want to watch things in slo-mo:
# set a command delay
require 'selenium-webdriver'

module ::Selenium::WebDriver::Remote
  class Bridge
    def execute(*args)
      res = raw_execute(*args)['value']
      sleep 0.5
      res
    end
  end
end
As someone else mentioned, if you are getting a timeout waiting for the element, you could look at upping this:
Capybara.default_wait_time = 10
From the Capybara docs:
When working with asynchronous JavaScript, you might come across situations where you are attempting to interact with an element which is not yet present on the page. Capybara automatically deals with this by waiting for elements to appear on the page.
You might have some luck increasing the wait time:
Capybara.default_wait_time = 10
If that doesn't help, then I would encourage you to contact somebody from the project on GitHub, write to the mailing list, or submit an issue report.
wait_until was even removed from Capybara 2.0. It's still useful, though; you can grab the code below:
def wait_until(delay = 1)
  seconds_waited = 0
  while !yield && seconds_waited < Capybara.default_wait_time
    sleep delay
    seconds_waited += 1
  end
  raise "Waited for #{Capybara.default_wait_time} seconds but condition did not become true" unless yield
end