I'm writing a web scraper using urllib2 and BeautifulSoup in Python, and I'm looking for a way to instruct Python to click a button on a page whose HTML source it reads.
The following snippet of my script reads URLs from a CSV file and is meant to scrape data from the specified webpages, but an intermediate step is to click a "submit" button that exists on each page the CSV's URLs point to.
for line in triplines:
    # each CSV line holds an origin/destination pair, e.g. "A,B"
    FromTo = line.split(",")
    From = FromTo[0].strip()
    print(From)
    To = FromTo[1].strip()
    print(To)
    # KCString1/2/3 are URL fragments defined earlier in the script
    url = KCString1 + From + KCString2 + To + KCString3
    print(url)
    page = urllib2.urlopen(url)
    page_source = page.read()
    soup = BeautifulSoup(page_source, 'html.parser')  # name a parser explicitly
    print(soup.prettify())
Is there a way to use urllib2 to say "follow the URL that clicking this button produces"? I imagine I may first need to inspect the JavaScript source to identify the button.
Buttons do not typically have URLs attached to them. They normally require JavaScript interaction, which needs to be emulated. If you want to click a button, you should use a browser emulator like Ghost instead of a parser like BeautifulSoup.
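For example, here is a minimal sketch of clicking such a button with Selenium, a browser emulator that later answers on this page also use, and then handing the rendered page to BeautifulSoup. The id 'submit' is hypothetical; inspect the page to find the button's real id or name.

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get(url)  # the URL built from the CSV, as in the question

# 'submit' is a hypothetical id; use the browser's inspector to find the real one
driver.find_element_by_id('submit').click()

# after the click, the driver holds the page the button led to
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.prettify())
driver.quit()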
Related
I'm building a web scraper using BeautifulSoup. Some websites have JavaScript-generated content and do not load with urllib3, so I use Selenium for those. But Selenium takes too long to respond, and I need to build a more efficient scraper, since I have to use the same generalized scraper across multiple websites. So I'm wondering: is there a way to find out whether a website has JavaScript content, so that I only use Selenium when it does and otherwise go with the faster urllib?
from selenium import webdriver
from bs4 import BeautifulSoup
import time

browser = webdriver.Chrome()
strt = time.time()
browser.get("https://www.amazon.jobs/en/locations/bangalore-india")  # get() returns None, so no need to assign it
#time.sleep(10)
html = browser.page_source
soup = BeautifulSoup(html, 'lxml')
li = soup.find_all('ul')
print(li)
print('load time=' + str(time.time() - strt))
Here is a simple check using Selenium:
# count <script> tags anywhere in the document, not just in <head>
jsSize = len(driver.find_elements_by_xpath("//script"))
if jsSize > 0:
    print("Page contains javascript")
The script tag is used to define a client-side script (JavaScript).
The element either contains script statements or points to an external script file through its src attribute.
To check manually: right-click on the webpage you want to scrape, go to View Page Source,
and look for a tag named script; its presence indicates that the page also contains JavaScript.
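If launching a browser just for this check is too slow, the same test can be done with a plain HTTP fetch. Below is a minimal sketch, assuming requests and BeautifulSoup are installed; the function name needs_selenium is mine:

import requests
from bs4 import BeautifulSoup

def needs_selenium(url):
    # fetch the raw HTML without executing any JavaScript
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, 'lxml')
    # any <script> tag suggests the page may render content client-side
    return len(soup.find_all('script')) > 0

url = "https://www.amazon.jobs/en/locations/bangalore-india"
if needs_selenium(url):
    print("Page contains javascript; fall back to Selenium")
else:
    print("Plain urllib/requests should be enough")

Note that nearly every modern page includes at least one script tag, so this check errs heavily on the side of Selenium; a stricter heuristic would be to test whether the specific elements you need are already present in the raw HTML.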
I have an existing Python script that was written using urllib2 to download from an http:// link:
import urllib2

print "downloading with urllib2"
# fetch the document and write it to disk
f = urllib2.urlopen('http://www.dcregs.dc.gov/Notice/DownLoad.aspx?VersionID=4613531')
data = f.read()
with open("11-B300.doc", "wb") as code:
    code.write(data)
print "All done downloads!"
The source web-page has been reformatted to use a "javascript:__doPostBack" address:
javascript:__doPostBack('ctl00$MainContent$rpt_ruleList$ctl02$Label1','')
My presumption is that there is some package, similar to urllib2, that will allow me to download the same information via the "javascript:__doPostBack"-formatted address, or to call the http URL where the information is located, from which I can then download it.
The existing script was working well for my purposes, so I would like to limit the additional coding, if possible.
Is there an alternative to urllib2 that will allow me to download the information in a similar manner?
Or am I going to have to get more sophisticated in my solution (e.g., using Selenium to scrape the information)? (Do I want to get more sophisticated anyway, so that I don't have to manage updates to individual URLs?)
Thanks for your help in advance.
This is because the site you're using is built on .NET WebForms, which manages the page state and interactions within hidden form variables.
So, in short, you'll need to click the link via something like Selenium, as you say.
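Alternatively, if you'd rather avoid a browser: a WebForms postback is usually just a POST in which __doPostBack fills the hidden __EVENTTARGET and __EVENTARGUMENT fields and submits the form. Here is a minimal sketch of replicating that with requests and BeautifulSoup, assuming the page exposes the usual WebForms hidden fields; PAGE_URL is a placeholder for the page that actually hosts the link.

import requests
from bs4 import BeautifulSoup

# placeholder: the page that hosts the javascript:__doPostBack link
PAGE_URL = 'http://www.dcregs.dc.gov/SomeListingPage.aspx'

session = requests.Session()
soup = BeautifulSoup(session.get(PAGE_URL).text, 'html.parser')

# copy every hidden input (__VIEWSTATE, __EVENTVALIDATION, ...) into the payload
payload = {tag.get('name'): tag.get('value', '')
           for tag in soup.find_all('input', type='hidden')}

# mimic javascript:__doPostBack('ctl00$MainContent$rpt_ruleList$ctl02$Label1','')
payload['__EVENTTARGET'] = 'ctl00$MainContent$rpt_ruleList$ctl02$Label1'
payload['__EVENTARGUMENT'] = ''

response = session.post(PAGE_URL, data=payload)
with open('11-B300.doc', 'wb') as f:
    f.write(response.content)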
I have to download the source code of a website like www.humkinar.pk as simple HTML. Content on the site is dynamically generated. I have tried Selenium's driver.page_source, but it does not capture the page completely: images and JavaScript files are left out. How can I download the complete page? Is there any better and easier solution available in Python?
Using Selenium
I know your question is about Selenium, but from my experience I can tell you that Selenium is recommended for testing and NOT for scraping. It is very SLOW. Even with multiple instances of headless browsers (Chrome in your situation), the results are delayed too much.
Recommendation
Python 2, 3
This trio will help you a lot and save you a bunch of time.
Dryscrape
BeautifulSoup
ThreadPoolExecutor
Do not use the parser of dryscrape; it is very SLOW and buggy. For this situation, use BeautifulSoup with the lxml parser instead. Use dryscrape to fetch JavaScript-generated content, plain HTML and images.
If you are scraping a lot of links simultaneously, I highly recommend using something like ThreadPoolExecutor (a sketch follows the code below).
Edit #1
dryscrape + BeautifulSoup usage (Python 3+)
from dryscrape import start_xvfb
from dryscrape.session import Session
from dryscrape.mixins import WaitTimeoutError
from bs4 import BeautifulSoup

def new_session():
    session = Session()
    session.set_attribute('auto_load_images', False)
    session.set_header('User-Agent', 'SomeUserAgent')
    return session

def session_reset(session):
    return session.reset()

def session_visit(session, url, check):
    session.visit(url)
    # ensure that the market table is visible first
    if check:
        try:
            session.wait_for(lambda: session.at_css(
                'SOME#CSS.SELECTOR.HERE'))
        except WaitTimeoutError:
            pass
    body = session.body()
    session_reset(session)
    return body

# start xvfb in case no X is running (server)
start_xvfb()

SESSION = new_session()
URL = 'https://stackoverflow.com/questions/45796411/download-entire-webpage-html-image-js-by-selenium-python/45824047#45824047'
CHECK = False
BODY = session_visit(SESSION, URL, CHECK)
soup = BeautifulSoup(BODY, 'lxml')
RESULT = soup.find('div', {'id': 'answer-45824047'})
print(RESULT)
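And here, as promised above, is a minimal sketch of fanning session_visit out over many links with ThreadPoolExecutor. The URLs are placeholders, and one dryscrape session is created per task, since sharing a single session across threads should not be assumed safe:

from concurrent.futures import ThreadPoolExecutor

# placeholder URLs: substitute the links you actually want to scrape
URLS = [
    'https://example.com/page1',
    'https://example.com/page2',
    'https://example.com/page3',
]

def fetch(url):
    # one session per task; do not share a session between threads
    session = new_session()
    return session_visit(session, url, False)

with ThreadPoolExecutor(max_workers=4) as pool:
    bodies = list(pool.map(fetch, URLS))

for body in bodies:
    print(len(body))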
I hope the code below will work to download the complete content of the page.
import requests

driver.get("http://testurl.com")
pageurl = driver.current_url
# re-fetch the final URL with requests
page = requests.get(pageurl)
pagecontent = page.content
`pagecontent` will contain the page source. Note, though, that requests fetches the raw HTML again without executing JavaScript, so dynamically generated content will not be included.
You are not allowed to download a website without permission. And if you knew that, you would also know that there is hidden code on the hosting server which you, as a visitor, have no access to.
I've been trying to web scrape information off the website:
https://www.tddirectinvesting.co.uk/share-dealing/daily-trading-ideas
The information I wanted was in elements with the class "RecogniaEventSummaryBodyLinks".
But when I downloaded the HTML and printed it, the file turned out to be incomplete: when I copied the whole HTML text my Python code produced into Notepad++ and used Ctrl+F to search for those elements, they weren't there.
I also tried manually downloading the file directly from the website, but that didn't work either.
Here's my code (Python):
import mechanize
import cookielib
from bs4 import BeautifulSoup

def viewPage(url, proxy, userAgent):
    br = mechanize.Browser()
    cookieJar = cookielib.LWPCookieJar()
    br.set_cookiejar(cookieJar)
    br.set_proxies(proxy)
    br.addheaders = userAgent
    page = br.open(url)
    htmlFile = page.read()
    for cookie in cookieJar:
        print("cookie: " + str(cookie))
        print("")
    return htmlFile

def ScrapeFigures(url):
    html = viewPage(url, proxyAdress, agentStringSample)
    soup = BeautifulSoup(html, "html.parser")
    info = soup.find("a", attrs={"class": "RecogniaEventSummaryBodyLinks"})
I tried printing out the variable info, but it returned None.
However, I then copied the Python output for the whole soup variable into another text file and saved it as an HTML file. When I opened this HTML file in my web browser (Chrome), the elements I needed were on the page, despite not being present in the HTML as text. So I wondered: is this caused by some JavaScript running in the background that is triggered when the page is opened?
My question is: how can I scrape the elements described above? Is there a way to get around this behaviour?
Thank you for your time.
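That diagnosis is most likely correct: the anchors are injected by JavaScript after the page loads, so a plain HTTP fetch via mechanize never sees them. Here is a minimal sketch of waiting for them with Selenium, assuming Chrome and the class name from the question:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.tddirectinvesting.co.uk/share-dealing/daily-trading-ideas")

# wait up to 20 seconds for the JavaScript-rendered links to appear
links = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located(
        (By.CLASS_NAME, "RecogniaEventSummaryBodyLinks")))

for link in links:
    print(link.text, link.get_attribute("href"))

driver.quit()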
I am looking to get the contents of a text file hosted on my website using Python. The server requires JavaScript to be enabled on your browser. Therefore when I run:
import urllib2
target_url = "http://09hannd.me/ai/request.txt"
data = urllib2.urlopen(target_url)
I receive an HTML page telling me to enable JavaScript.
I was wondering if there was a way of faking having JS enabled, or something similar.
Thanks
Selenium is the way to go here, but there is another "hacky" option.
Based on this answer: https://stackoverflow.com/a/26393257/2517622
import requests

url = 'http://09hannd.me/ai/request.txt'
# the '__test' cookie value comes from solving the site's JavaScript
# check once in a real browser, as described in the linked answer
response = requests.get(url, cookies={'__test': '2501c0bc9fd535a3dc831e57dc8b1eb0'})
print(response.content)  # Output: find me a cafe nearby
I would probably suggest tools like this: https://github.com/niklasb/dryscrape
Additionally, you can find more info here: Using python with selenium to scrape dynamic web pages