I am trying to print, in Python, the messages from the web console using a callback on an onConsoleMessage event. Pepper (Edit: version 1.6) is running NAOqi 2.5.5.5. I've modified the executeJS example as a test. The problem is I keep getting null for the message in the callback. Is this a bug that has been fixed in a newer version of NAOqi? I've had a look at the release notes but didn't find anything.
Here is the code I am using:
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-
"""Example: Use executeJS Method"""

import qi
import argparse
import sys
import time
import signal


def signal_handler(signal, frame):
    print('Bye!')
    sys.exit(0)


def main(session):
    """
    This example uses the executeJS method.
    To test ALTabletService, you need to run the script ON the robot.
    """
    # Get the service ALTabletService.
    try:
        tabletService = session.service("ALTabletService")

        # Display a local web page located in the boot-config/html folder.
        # The IP of the robot as seen from the tablet is 198.18.0.1.
        tabletService.showWebview("http://198.18.0.1/apps/boot-config/preloading_dialog.html")
        time.sleep(3)

        # JavaScript snippet that writes to the web console.
        script = """
            console.log('A test message');
        """

        # Don't forget to disconnect the signal at the end.
        signalID = 0

        # Function called when the onConsoleMessage signal is triggered.
        def callback(message):
            print "[callback] received : ", message

        # Attach the callback function to the onConsoleMessage signal.
        signalID = tabletService.onConsoleMessage.connect(callback)

        # Inject and execute the JavaScript in the web page currently displayed.
        tabletService.executeJS(script)

        # Install the Ctrl+C handler and wait.
        signal.signal(signal.SIGINT, signal_handler)
        print("Waiting for Ctrl+C to disconnect")
        signal.pause()
    except Exception, e:
        print "Error was: ", e
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot IP address. On robot or Local Naoqi: use '127.0.0.1'.")
    parser.add_argument("--port", type=int, default=9559,
                        help="Naoqi port number")
    args = parser.parse_args()
    session = qi.Session()
    try:
        session.connect("tcp://" + args.ip + ":" + str(args.port))
    except RuntimeError:
        print ("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) + ".\n"
               "Please check your script arguments. Run with -h option for help.")
        sys.exit(1)
    main(session)
Output:
python onConsoleMessage.py --ip=192.168.1.20
[W] 1515665783.618190 30615 qi.path.sdklayout: No Application was created, trying to deduce paths
Waiting for Ctrl+C to disconnect
[callback] received : null
Has anyone faced the same issue?
Thanks
I have the same issue. You can easily reproduce it by opening two ssh consoles on the robot, and on the first one executing
qicli watch ALTabletService.onConsoleMessage
and on the second
qicli call ALTabletService.showWebview
qicli call ALTabletService.executeJS "console.log('hello')"
... and instead of "hello", you will see "null" appear in your first console.
HOWEVER - if your goal is to effectively test your webpage, what I usually do is just open the page on my computer and use the Chrome console (you can set Chrome up to act as if the page were on a tablet of the right size, 1280x800); you can do this while still connecting the page to Pepper, as if it were on her tablet, using the method described here. This is enough for 99% of cases; the remaining 1% is things where Pepper's tablet actually behaves differently from Chrome.
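As an aside, the question's code stores signalID but never uses it, even though its own comment says "Don't forget to disconnect the signal at the end." The connect/disconnect pattern is worth honoring; here is a minimal sketch of it with a stand-in Signal class (qi's signals expose the same connect/disconnect shape, but this class is hypothetical, not part of the qi SDK):

```python
class Signal:
    """Minimal stand-in for a qi signal: connect returns an id,
    disconnect removes that callback, trigger invokes all callbacks."""

    def __init__(self):
        self._subs = {}
        self._next_id = 0

    def connect(self, cb):
        self._next_id += 1
        self._subs[self._next_id] = cb
        return self._next_id

    def disconnect(self, sid):
        self._subs.pop(sid, None)

    def trigger(self, *args):
        for cb in list(self._subs.values()):
            cb(*args)


received = []
onConsoleMessage = Signal()

# Connect, keep the id, and use it to disconnect when done.
signal_id = onConsoleMessage.connect(received.append)
onConsoleMessage.trigger("A test message")
onConsoleMessage.disconnect(signal_id)
onConsoleMessage.trigger("this one is dropped")

print(received)  # ['A test message']
```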
Related
Problem:
My Plotly Dash app (python) has a clientside callback (javascript) which prompts the user to select a folder, then saves a file in a subfolder within that folder. Chrome asks for permission to read and write to the folder, which is fine, but I want the user to only have to give permission once. Unfortunately the permissions, which should persist until the tab closes, disappear often. Two "repeatable cases" are:
when the user clicks a simple button ~15 times very fast, previously accepted permissions will disappear (plotting a figure also does this in my real application)
downloading a file within a few seconds of reloading the page results in the permissions automatically going away within about 5 seconds
I can see the permissions (file and pen icon) disappear at the right of the chrome url banner.
What I've tried:
testing with Ublock Origin on/off (and removed from chrome) to see if the extension interfered (got idea from the only somewhat similar question I've come across: window.confirm disappears without interaction in Chrome)
turning debug mode off
using Edge instead of chrome (basically the same behavior was observed)
adding more computation to Test button to find repeatable case, but still needed to click it a lot to remove permissions (triggering callbacks / updating Dash components seems to be the issue, not server resources)
Example python script (dash app) to show permissions disappearing:
import dash
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output
from dash import html

app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

app.layout = html.Div([
    dbc.Button(id="model-export-button", children="Export Model"),
    dbc.Label(id="test-label1", children="Click to download"),
    html.Br(),
    dbc.Button(id="test-button", children="Test button"),
    dbc.Label(id="test-label2", children="Button not clicked")
])

# Chrome web API used for downloading: https://web.dev/file-system-access/
app.clientside_callback(
    """
    async function(n_clicks) {
        // Select directory to download
        const directoryHandle = await window.showDirectoryPicker({id: 'save-dir', startIn: 'downloads'});
        // Create sub-folder in that directory
        const newDirectoryHandle = await directoryHandle.getDirectoryHandle("test-folder-name", {create: true});
        // Download files to sub-folder
        const fileHandle = await newDirectoryHandle.getFileHandle("test-file-name.txt", {create: true});
        const writable = await fileHandle.createWritable();
        await writable.write("Hello world.");
        await writable.close();
        // Create status message
        const event = new Date(Date.now());
        const msg = "File(s) saved successfully at " + event.toLocaleTimeString();
        return msg;
    }
    """,
    Output('test-label1', 'children'),
    Input('model-export-button', 'n_clicks'),
    prevent_initial_call=True
)

@app.callback(
    Output('test-label2', 'children'),
    Input('test-button', 'n_clicks'),
    prevent_initial_call=True
)
def test_button_function(n):
    return "Button has been clicked " + str(n) + " times"

if __name__ == "__main__":
    app.run_server(debug=False)
This is now possible! In your code, replace the line…
await window.showDirectoryPicker({id: 'save-dir', startIn: 'downloads'});
…with…
await window.showDirectoryPicker({
    id: 'save-dir',
    startIn: 'downloads',
    mode: 'readwrite', // This is new!
});
I am trying to automate the download of research articles from sci-hub (https://sci-hub.scihubtw.tw/) based on their article titles. I am using a library called scholarly (https://pypi.org/project/scholarly/) to get the URL and author information for a given article title, as shown in the code below.
I use the fetched URL to emulate the download process through sci-hub. But I am unable to download directly, since I can't press the open button on the search page (https://sci-hub.scihubtw.tw/), and pressing enter after populating the query forwards me to another page with an open button. I am unable to fetch and press that open button for some reason; Selenium always returns a null element.
However, I am able to execute the following in the browser console and successfully download the paper:
document.querySelector("#open-button").click()
But, trying to get similar response from selenium is failing.
Kindly help me resolve this issue.
## This part of the code fetches the url using the scholarly library from Google Scholar
from scholarly import scholarly

search_query = scholarly.search_pubs('Hydrogen-hydrogen pair correlation function in liquid water')
search_query = [query for query in search_query][0]

## This part of the code uses selenium to automate the download process
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
import time

download_dir = '/Users/cacsag4/Downloads'

# setup the browser
options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
    "download.default_directory": download_dir,  # Change default directory for downloads
    "download.prompt_for_download": False,       # To auto download the file
    "download.directory_upgrade": True,
    "plugins.always_open_pdf_externally": True   # It will not show PDF directly in chrome
})
browser = webdriver.Chrome('./chromedriver', options=options)
browser.delete_all_cookies()
browser.get('https://sci-hub.scihubtw.tw/')

# Find the search element to send the url string to it
searchElem = browser.find_element(By.CSS_SELECTOR, 'input[type="textbox"]')
searchElem.send_keys(search_query.bib['url'])

# Emulate pressing enter two different ways, either by pressing the return key or by executing JS
# searchElem.send_keys(Keys.ENTER)  # This produces the same effect as the next line
browser.execute_script("javascript:document.forms[0].submit()")

# Wait for page to load
time.sleep(10)

# Try to press the open button using JS or by fetching the button by its ID
# This returns an error since it's unable to fetch the open-button id
browser.execute_script('javascript:document.querySelector("#open-button").click()')
# openElem = browser.find_element(By.ID, "open-button")  ## This also returns a null element
Ok, so I got the answer to this question. Sci-hub stores its pdf inside an iframe, so all you have to do is fetch the src attribute of the iframe after pressing enter on the first page. The following code does the job.
from scholarly import scholarly

search_query = scholarly.search_pubs('Hydrogen-hydrogen pair correlation function in liquid water')
search_query = [query for query in search_query][0]
print(search_query.bib['url'])

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
import time

download_dir = '/Users/cacsag4/Downloads'

# setup the browser
options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
    "download.default_directory": download_dir,  # Change default directory for downloads
    "download.prompt_for_download": False,       # To auto download the file
    "download.directory_upgrade": True,
    "plugins.always_open_pdf_externally": True   # It will not show PDF directly in chrome
})
browser = webdriver.Chrome('./chromedriver', options=options)
browser.delete_all_cookies()
browser.get('https://sci-hub.scihubtw.tw/')

# Find the search element to send the url string to it
searchElem = browser.find_element(By.CSS_SELECTOR, 'input[type="textbox"]')
searchElem.send_keys(search_query.bib['url'])

# Submit the form by executing JS (pressing the return key works the same way)
# searchElem.send_keys(Keys.ENTER)
browser.execute_script("javascript:document.forms[0].submit()")

# Wait for page to load
time.sleep(2)

# Instead of clicking the open button, grab the iframe holding the pdf
# and navigate straight to its src attribute
openElem = browser.find_element(By.CSS_SELECTOR, "iframe")
browser.get(openElem.get_attribute('src'))
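As an aside, if you ever have the result page's HTML in hand without a browser (say, fetched over plain HTTP), the iframe's src can be pulled out with a stdlib-only parser instead of Selenium. A minimal sketch, using a hypothetical page fragment in place of the real sci-hub response:

```python
from html.parser import HTMLParser


class IframeSrcFinder(HTMLParser):
    """Collect the src attribute of every <iframe> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "iframe":
            self.sources.extend(v for k, v in attrs if k == "src")


# Hypothetical fragment standing in for the sci-hub result page.
html = '<div><iframe id="pdf" src="https://example.org/paper.pdf"></iframe></div>'

finder = IframeSrcFinder()
finder.feed(html)
print(finder.sources)  # ['https://example.org/paper.pdf']
```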
Well, I'm trying to get a Node.js process to launch a Python script. The Python script logs while it is busy, and as it logs I wish to display this in the console window used by the Node.js process.
The python script is really trivial
from time import sleep

if __name__ == '__main__':
    print('small text testing')
    sleep(10)
    raise Exception('test')
prints 'small text testing', sleeps for 10 seconds(!) and then raises an exception which is uncaught and thus finishes the script.
In node I tried to get this to work with:
const { exec } = require('child_process');

const exec_str = '. BackgroundServer/BackgroundServer/bin/activate && python BackgroundServer/main.py 1';
const child = exec(exec_str,
    {
        // detachment and ignored stdin are the key here:
        detached: true,
        stdio: [ 'ignore', 1, 2 ]
    });
child.unref();

child.stdout.on('data', function(data) {
    console.log(data.toString());
});
child.stderr.on('data', function(data) {
    console.error(data.toString());
});
However this "fails" in the sense that it will only print after the python process has finished running.
Now I know it is possible to run a script through spawn but that would require me to create a temporary script, give that script permissions and then execute that script. Not optimal either.
Not knowing much about JavaScript or Node.js, I am pretty sure your problem is due to the fact that Python buffers its output when it is run as a subprocess.
To fix this, you can either manually ensure that Python flushes the buffer by adding calls to sys.stdout.flush():
import sys
from time import sleep

if __name__ == '__main__':
    print('small text testing')
    sys.stdout.flush()
    sleep(10)
    raise Exception('test')
or you can force Python not to buffer when used as a subprocess by calling the interpreter with the -u argument, modifying exec_str to
const exec_str = '. BackgroundServer/BackgroundServer/bin/activate && \
python -u BackgroundServer/main.py 1';
The first solution always flushes the output, which may be what you want if you run the script elsewhere without having to remember the -u option. However, I would still recommend the second approach: it leaves the code free to run buffered (which can sometimes be desirable), and with longer scripts you would otherwise have to insert quite a number of manual sys.stdout.flush() calls.
Also, as a sidenote, there is no need for raising an exception in the Python script. It will end anyway, when it reaches its last line.
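The effect of -u can be seen from Python itself; a minimal sketch that launches an unbuffered child with subprocess and reads its lines as they arrive (the inlined child script is just a stand-in for main.py):

```python
import subprocess
import sys

# A stand-in child script; with -u its prints reach the pipe immediately
# instead of sitting in a block buffer until the process exits.
child_code = "import time\nprint('line 1')\ntime.sleep(0.1)\nprint('line 2')\n"

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdout=subprocess.PIPE,
    text=True,
)

# Iterating over stdout yields each line as the child emits it.
lines = [line.rstrip() for line in proc.stdout]
proc.wait()
print(lines)  # ['line 1', 'line 2']
```

Dropping the "-u" reproduces the original symptom when the pipe is read in larger chunks: the child's output arrives only when its buffer fills or the process ends.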
I need to capture the console logs (category: info) of a browser using Ruby & Capybara. So far I have tried driver.manage.logs.get(:browser) (and :client), but the result is not what I want: it gives the interaction between Selenium and the browser, where I can see my JavaScript statements sent for execution, but the resulting console output is not captured.
Whether or not logs are available when using selenium depends on what browser you are using with Selenium. If you were using Firefox you'd be out of luck since it doesn't support the log retrieval API, however since you're using Chrome they are accessible. The issue you're having is that, by default, only WARN or ERROR level logs are captured. You can change this in the driver registration through the loggingPrefs capability
Selenium 3
Capybara.register_driver :logging_selenium_chrome do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome(loggingPrefs: { browser: 'ALL' })
  browser_options = ::Selenium::WebDriver::Chrome::Options.new()
  # browser_options.args << '--some_option' # add whatever browser args and other options you need (--headless, etc)
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: browser_options, desired_capabilities: caps)
end
Selenium 4
Capybara.register_driver :logging_selenium_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_option("goog:loggingPrefs", { browser: 'ALL' })
  # add whatever browser args you need (--headless, etc) to `options`
  Capybara::Selenium::Driver.new(app, browser: :chrome, capabilities: options)
end
and then specify to use :logging_selenium_chrome as your driver
Capybara.javascript_driver = :logging_selenium_chrome # or however else you're specifying which driver to use
which should then allow you to get the logs in your tests with
page.driver.browser.manage.logs.get(:browser)
Thomas Walpole's answer is correct, but it seems that nowadays, if you are using Chrome as your driver, you should use
Selenium::WebDriver::Remote::Capabilities.chrome( "goog:loggingPrefs": { browser: 'ALL' } )
Notice goog:loggingPrefs instead of loggingPrefs. Only with this change was I able to get console.log output printed in the log.
It took me a while; I got it from https://intellipaat.com/community/5478/getting-console-log-output-from-chrome-with-selenium-python-api-bindings after several frustrating attempts.
November 2022 update, use:
page.driver.browser.logs.get(:browser)
Not sure that this is what you want, but take a look at https://github.com/dbalatero/capybara-chromedriver-logger.
It helps me identify the problem with dynamic modules import(''). Works both locally and in Github Actions / Circle CI by displaying failed loads of assets (which i believe outputs as console.error).
What is the right way to click a JavaScript-generated link at a regular time interval using Python and the Selenium bindings? Should it be done with a thread?
As I need to keep processing the input data, I have to refresh/reset a timer to continue receiving data, and clicking this given link performs the refresh (the link is HTML generated directly by JavaScript).
Best regards
You don't need a thread to do this.
Use the JavaScript function setInterval to click the link repeatedly.
For example:
import time
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://jsfiddle.net/falsetru/4UxgK/show/')

# Click the link every 3000 ms.
driver.execute_script('''
    // arguments passed from Python can be accessed through the `arguments` array.
    var link = arguments[0];
    var timer = setInterval(function() {
        link.click();
    }, 3000);
''', driver.find_element_by_id('activity'))

while True:
    data = driver.find_element_by_id('counter').text
    print(data)
    time.sleep(1)
NOTE
If you get an error like the following, upgrade selenium to a recent version. I experienced this error with Firefox 23.0 + selenium 2.32.0; it was gone with selenium 2.35.0.
Traceback (most recent call last):
  File "t2.py", line 12, in <module>
    print driver.execute_script('''return 1 + 2;''')
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 397, in execute_script
    {'script': script, 'args':converted_args})['value']
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 165, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 158, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: u'waiting for evaluate.js load failed' ; Stacktrace:
    at r (file:///tmp/tmpm1sJhH/extensions/fxdriver#googlecode.com/components/driver_component.js:8360)
    at fxdriver.Timer.prototype.runWhenTrue/g (file:///tmp/tmpm1sJhH/extensions/fxdriver#googlecode.com/components/driver_component.js:392)
    at fxdriver.Timer.prototype.setTimeout/<.notify (file:///tmp/tmpm1sJhH/extensions/fxdriver#googlecode.com/components/driver_component.js:386)
Alternative: using thread
import threading
import time
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://jsfiddle.net/falsetru/4UxgK/show/')

def click_loop(link, interval):
    while True:
        link.click()
        time.sleep(interval)

link = driver.find_element_by_id('activity')
threading.Thread(target=click_loop, args=(link, 3)).start()

while True:
    data = driver.find_element_by_id('counter').text
    print(data)
    time.sleep(1)
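One caveat with the thread alternative above: its while True loop can never be stopped cleanly. A sketch of the same idea with a stop switch, using threading.Event (the repeat_every helper is hypothetical, not a Selenium API; the lambda stands in for link.click):

```python
import threading
import time


def repeat_every(interval, action, stop_event):
    """Call `action` every `interval` seconds until `stop_event` is set.
    Event.wait doubles as the sleep, so set() interrupts the pause at once."""
    while not stop_event.wait(interval):
        action()


clicks = []
stop = threading.Event()
worker = threading.Thread(target=repeat_every,
                          args=(0.05, lambda: clicks.append(1), stop))
worker.start()

time.sleep(0.3)   # in the real script, this is where the scraping loop runs
stop.set()        # cleanly stop the clicker
worker.join()
print(len(clicks))
```

With a real driver you would pass link.click as the action and call stop.set() before driver.quit(), so the background thread never touches a closed browser.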