JavaScript login page with Python Selenium

I am trying to log in to the following page using the Python Selenium package to perform a few activities. I wrote the following, but every time I get "the class is not found". I need to access the username and password fields to use send_keys(). Any feedback on this is appreciated.
[Code]
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support import expected_conditions as EC
chrome_browser = webdriver.Chrome('C:/Users/vbabu/AppData/Local/chromedriver')
chrome_browser.maximize_window()
chrome_browser.get('https://myservices-dfsi.console.oraclecloud.com/mycloud/cloudportal/gettingStarted')
form = chrome_browser.find_element_by_id('idcs-signin-basic-signin-form-post-redirect-form')
print(form)
[Error]
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="idcs-signin-basic-signin-form-post-redirect-form"]"}
[Expected Output]
I need to access the username and password fields.

Try using an explicit wait until the element becomes available before performing actions on it:
username = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.ID, "idcs-signin-basic-signin-form-username")))
username.send_keys("some_text")
Import the packages below:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

Alternatively, add a hard delay (this requires import time):
time.sleep(5)
and note that the ID is "idcs-signin-basic-signin-form-username".
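Putting the pieces together, here is a minimal sketch of the whole sign-in flow. The password-field ID is an assumption (only the username ID appears in the question), and "id" is the literal string behind Selenium's By.ID, so the helper can also be exercised with a stubbed driver:

```python
def sign_in(wait, username, password):
    """Sign in via explicit waits. `wait` is a WebDriverWait(driver, timeout)."""
    # wait until the username field exists, then type into it
    user_box = wait.until(lambda d: d.find_element("id", "idcs-signin-basic-signin-form-username"))
    user_box.send_keys(username)
    # assumed ID for the password field (it is not shown in the question)
    pwd_box = wait.until(lambda d: d.find_element("id", "idcs-signin-basic-signin-form-password"))
    pwd_box.send_keys(password)
    pwd_box.submit()  # submits the enclosing form
```

With a real browser this would be called as sign_in(WebDriverWait(chrome_browser, 20), "user", "pass").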

Related

Python web scraping with requests sign in

I am working with www.freightquote.com, and at some point I need to sign in, since otherwise it won't let me get freight rates for more than 45 pairs.
I would like to enter sign-in information for this website, but for some reason it is not working, and I could not figure out the problem.
You can directly use this website: https://account.chrobinson.com/
I have trouble entering the information I am asked for. Here is what I did:
from selenium import webdriver
from time import sleep
import pandas as pd
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.chrome.service import Service
PATH = r'C:\Users\b\Desktop\Webscraping\chromedriver.exe'
s = Service(PATH)
driver = webdriver.Chrome(service=s)
link = "https://www.freightquote.com/book/#/free-quote/pickup"
driver.get(link)
sleep(2)
driver.maximize_window()
sleep(2)
driver.find_elements(by=By.XPATH, value='//button[@type="button"]')[0].click()
sleep(3)
#Username:
driver.find_element(by=By.XPATH, value='//input[@type="email"]').send_keys('USERNAME')
driver.find_elements(by=By.XPATH, value='//input[@class="button button-primary" and @type="submit"]')[0].click()
#password
driver.find_element(by=By.XPATH, value='//input[@type="password"]').send_keys('PASSWORD')
driver.find_elements(by=By.XPATH, value='//input[@class="button button-primary" and @type="submit"]')[0].click()
sleep(2)
Your code and your technique have quite a few problems; it is worth learning Selenium fundamentals properly before writing more code.
I modified your code up to the point of entering the email; please complete the rest accordingly.
driver = webdriver.Chrome()
link = "https://www.freightquote.com/book/#/free-quote/pickup"
driver.get(link)
driver.maximize_window()
WebDriverWait(driver, 30).until(
EC.presence_of_element_located((By.XPATH,
'(//button[@type="button"])[1]'))).click()
WebDriverWait(driver, 30).until(
EC.presence_of_element_located((By.XPATH,
'//input[@type="email"]'))).send_keys('USERNAME')
Also, you don't need to add the chromedriver path in your code. On Windows or Linux you can add it to your virtualenv's bin folder (Scripts on Windows),
and on macOS you can add it to /usr/local/bin.
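If you go the PATH route, a quick standard-library check confirms the driver is actually discoverable before Selenium tries to launch it (the name "chromedriver" is the usual Linux/macOS binary name; on Windows, shutil.which matches chromedriver.exe automatically):

```python
import shutil

def driver_on_path(name="chromedriver"):
    """Return the resolved path of `name` if it is on PATH, else None."""
    return shutil.which(name)
```

If this returns None, Selenium will not find the driver either.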
To enter sign in information for the website you need to induce WebDriverWait for the element_to_be_clickable() and you can use the following locator strategies:
Using CSS_SELECTOR:
driver.get("https://account.chrobinson.com/")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='username']"))).send_keys("Ribella")
driver.find_element(By.CSS_SELECTOR, "input[name='password']").send_keys("Ribella")
driver.find_element(By.CSS_SELECTOR, "input[value='Sign In']").click()
Note: You have to add the following imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

Selenium does not find existing elements that can be found in the browser's Elements tab and by using JS in the console

I'm trying to automate the bet365 casino; I know they have tools to block bots.
Link: https://casino.bet365.com/Play/LiveRoulette
I can't interact with anything inside the div class="app-container", at least with Selenium, but I can find these elements using JavaScript in the browser console.
import undetected_chromedriver as UChrome
from webdriver_manager.chrome import ChromeDriverManager
UChrome.install(ChromeDriverManager().install())
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
driver = UChrome.Chrome()
driver.get('https://www.bet365.com/#/HO/')
After logging in:
driver.get('https://casino.bet365.com/Play/LiveRoulette')
locator = (By.XPATH, '//*[contains(@class, "second-dozen")]')
I tried (the selectors should probably be a little different):
driver.execute_script('return document.getElementsByClassName("roulette-table-cell roulette-table-cell_side-first-dozen roulette-table-cell_group-dozen")[0].getBoundingClientRect()')
I also tried:
driver.find_element(locator[0], locator[1])
but I receive this: raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//*[contains(text(), "PAR")])[1]"}
(Session info: chrome=96.0.4664.110)
Stacktrace:
0 0x55f8fa1bcee3
1 0x55f8f9c8a608
2 0x55f8f9cc0aa1
You are probably missing a delay / wait.
Redirecting to the inner page with
driver.get('https://casino.bet365.com/Play/LiveRoulette')
It takes some time for all the elements there to load; you cannot access them immediately.
The recommended way to handle this is Expected Conditions explicit waits, something like this:
import undetected_chromedriver as UChrome
from webdriver_manager.chrome import ChromeDriverManager
UChrome.install(ChromeDriverManager().install())
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
driver = UChrome.Chrome()
wait = WebDriverWait(driver, 20)
driver.get('https://www.bet365.com/#/HO/')
#perform the login here
driver.get('https://casino.bet365.com/Play/LiveRoulette')
locator = wait.until(EC.visibility_of_element_located((By.XPATH, '//*[contains(@class, "second-dozen")]')))
Note also that a bare locator tuple like this:
(By.XPATH, '//*[contains(@class, "second-dozen")]')
will not return a web element by itself; it has to be passed to driver.find_element or to an expected condition.
Also make sure that element is not inside the iframe.
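That iframe caveat can be automated. Here is a sketch of a frame-aware lookup, written against the duck-typed WebDriver API with locator strings ("css selector" is the literal string behind By.CSS_SELECTOR), so it needs no selenium import and only checks one level of nesting:

```python
def find_in_frames(driver, by, value):
    """First match for (by, value) in the page or any top-level iframe, else None.
    Leaves the driver focused on whichever document contained the match."""
    driver.switch_to.default_content()
    found = driver.find_elements(by, value)
    if found:
        return found[0]
    # collect top-level iframes, then search inside each one
    for frame in driver.find_elements("css selector", "iframe"):
        driver.switch_to.frame(frame)
        found = driver.find_elements(by, value)
        if found:
            return found[0]
        driver.switch_to.default_content()
    return None
```

Nested iframes would need a recursive variant of this helper.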

Clicking a button in JavaScript page - Selenium/Python

My code accesses a page, and I am trying to click the button that says "Physician Program" in the menu list. If you click it in the browser, it directs you to a new web page.
However, there is no href in the page's HTML that would help me find this link via code (I assume because it is JavaScript?). Currently, I just used its XPath.
My question is: if I am able to click it in a browser, shouldn't I be able to click it using Selenium? If so, how can this be done?
import time
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.kidney.org/spring-clinical/program')
time.sleep(6)
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'html.parser')
element1 = driver.find_element_by_xpath('//*[#id="dx-c7ad8807-6124-b55e-d292-29a4389dee8e"]/div')
element1.click()
The element is inside an iframe; you need to switch to the iframe first:
driver.switch_to.frame("SCM20 Advanced Practitioner Program")
element1 = driver.find_element_by_xpath("//div[text()='Physician Program']")
element1.click()
Ideally you should use WebDriverWait and wait for the frame to be available:
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.NAME,"SCM20 Advanced Practitioner Program")))
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Physician Program']"))).click()
You need to import the libraries below:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import subprocess
#other imports
import time
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.kidney.org/spring-clinical/program')
time.sleep(6)
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'html.parser')
frame= WebDriverWait(driver,10).until(EC.presence_of_element_located(
(By.NAME, "SCM20 Advanced Practitioner Program")))
driver.switch_to.frame(frame)
options = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located(
(By.CSS_SELECTOR, '[class="track-selector-popup"] [role="option"]')))
options[0].click()
input()
The element is inside an iframe, so switch to it and also use waits. To switch back and interact with elements outside the frame, use:
driver.switch_to.default_content()
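The switch-in / switch-back dance is easy to get wrong when an exception fires in between. A small context manager (a generic sketch, not part of Selenium's API) guarantees the switch back:

```python
from contextlib import contextmanager

@contextmanager
def in_frame(driver, frame_ref):
    """Switch into `frame_ref` (a name, index, or frame element) for the
    duration of the with-block, then switch back to the top-level document,
    even if the body raises."""
    driver.switch_to.frame(frame_ref)
    try:
        yield driver
    finally:
        driver.switch_to.default_content()
```

Usage: with in_frame(driver, frame): options[0].click() — afterwards the driver is back on the main document.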

Python Selenium BeautifulSoup Page Source Does Not Display Everything

Goal:
Hello, I am pretty new to the web and Selenium. I am currently trying to grab a value from my JIRA board.
Problem:
For some reason that value does not show up in the page source. I think it might be a JavaScript-rendered value, or maybe it gets generated after the page loads. I tried implicitly_wait, WebDriverWait, and switch_to.frame, but nothing seems to work. =/
Code:
#!/usr/local/bin/python2.7
#import requests
import json
import base64
import sys
import getopt
import argparse
from datetime import datetime
from datetime import timedelta
from bs4 import BeautifulSoup
from jira import JIRA
from jira.client import GreenHopper
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
JIRA_INSTALLATION = "jira.turn.com"
STATE_IN_PROGRESS = "In Progress"
STATE_RESOLVED = "Resolved"
STATE_CLOSED = "Closed"
options = {'server': 'https://jira.turn.com'}
CUR_TIMEZONE_SHIFT = timedelta(hours=7)
def main(argv):
p=argparse.ArgumentParser(description="Gets a set of completed stories and list them with their implementation time.")
p.add_argument('filter_id', help="Id of the filter that contains completed stories.")
p.add_argument('-u', dest='username', help="JIRA username. Needs to have read access to all tickets returned from the search filter.")
p.add_argument('-p', dest='password', help="Password for the JIRA user to use for API calls.")
args = p.parse_args(argv)
driver = webdriver.Firefox()
driver.get('https://jira.turn.com/')
driver.find_element_by_id("login-form-username").send_keys(args.username)
driver.find_element_by_id ("login-form-password").send_keys(args.password)
driver.find_element_by_id("login").click()
#driver.implicitly_wait(10)
#ele = WebDriverWait(driver, 10)
driver.get('https://jira.turn.com/secure/RapidBoard.jspa?rapidView=184&view=reporting&chart=controlChart&days=30&column=1214&column=1298')
#WebDriverWait
soup_level1 = BeautifulSoup(driver.page_source, 'lxml')
print soup_level1.find(id='ghx-chart-snapshot')
print soup_level1.find(id='ghx-chart-snapshot').find(id='ghx-chart-snapshot')
driver.quit()
return
if __name__ == "__main__":
main(sys.argv[1:])
Output:
<div id="ghx-chart-snapshot"></div>
None
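Since WebDriverWait.until accepts any callable that takes the driver, one way to wait for a JavaScript-rendered value is a custom condition that only succeeds once the container has text. The ID comes from the question; note the chart could also live inside an iframe, which this sketch does not handle:

```python
def non_empty_text(by, value):
    """Wait condition: the element exists and its text is non-blank.
    Returns the element on success, False otherwise (so until() keeps polling)."""
    def _condition(driver):
        elems = driver.find_elements(by, value)
        if elems and elems[0].text.strip():
            return elems[0]
        return False
    return _condition
```

With a real driver: chart = WebDriverWait(driver, 20).until(non_empty_text("id", "ghx-chart-snapshot")) — "id" is the string behind By.ID.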

Python Selenium: How to navigate Javascript based navigation

So, on the page in question, I want to navigate the pagination, which has the following markup:
<li class="btn-next">
Suivant</li>
As you can see, a JS method is called here. So far I have done this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = None
driver = webdriver.Firefox()
wait = WebDriverWait(driver, 30)
def fetch(url):
driver.get(
'http://www.leparking.fr/voiture-occasion/Porsche--targa-g.html#!/voiture-occasion/Porsche--targa-g.html%3Fslider_millesime%3D1940%7C1985')
elem_more = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "Suivant")))
elem_more.click()
fetch(None)
It hovers over the element but does not navigate on click. What should I do?
Thanks
I sorted it out by using the execute_script method:
elem_more = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "Suivant")))
driver.execute_script("arguments[0].click();", elem_more)
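The execute_script click is a common fallback when a native click is intercepted by an overlay or custom JS handler. A common pattern is to try the native click first; here is a sketch (the broad except is for illustration — with selenium installed you would catch ElementClickInterceptedException):

```python
def robust_click(driver, element):
    """Native click first; JavaScript click as a fallback."""
    try:
        element.click()
    except Exception:  # in real code: selenium's ElementClickInterceptedException
        driver.execute_script("arguments[0].click();", element)
```

Prefer the native click when it works, since it goes through the browser's real event pipeline (visibility and overlap checks) rather than bypassing it.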
