How to access chromedriver logs for Protractor test - javascript

I have seen that chromedriver can output a logfile (https://sites.google.com/a/chromium.org/chromedriver/logging)
This page shows how to set this up when executing the exe directly:
chromedriver.exe --verbose --log-path=chromedriver.log
I cannot figure out how to set this up in Protractor, however.
My current protractor.conf.js:
require('babel/register');
exports.config = {
  framework: 'jasmine2',
  seleniumServerJar: './node_modules/protractor/selenium/selenium-server-standalone-2.45.0.jar'
};
From @alecxe's answer below and Protractor's browser setup docs, I tried adding the following (with and without the leading --) but with no apparent effect:
capabilities: {
  browserName: "chrome",
  chromeOptions: {
    args: [
      "--verbose",
      "--log-path=chromedriver.log"
    ]
  }
}
I also tried specifying an absolute path (log-path=/chromedriver.log) which also didn't work.

You can always start up your own instance of chromedriver in a separate process and tell Protractor to connect to that. For example, if you start chromedriver with:
chromedriver --port=9515 --verbose --log-path=chromedriver.log
Then you could use a configuration file for Protractor like so:
exports.config = {
  seleniumAddress: 'http://localhost:9515',
  capabilities: {
    'browserName': 'chrome'
  },
  specs: ['example_spec.js'],
};

We use a shell script to add chromedriver logging, among other checks. You can then point Protractor at the shell script:
Protractor config:
// When running chromedriver, use this script:
chromeDriver: path.resolve(topdir, 'bin/protractor-chromedriver.sh'),
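For context, a minimal sketch of where that line might sit in protractor.conf.js (path and topdir are not defined in the fragment above; assume topdir points at your project root):
var path = require('path');
var topdir = __dirname; // assumption: protractor.conf.js lives at the project root
exports.config = {
  framework: 'jasmine2',
  // When running chromedriver, use this script:
  chromeDriver: path.resolve(topdir, 'bin/protractor-chromedriver.sh')
};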
bin/protractor-chromedriver.sh
TMPDIR="/tmp"
NODE_MODULES="$(dirname $0)/../node_modules"
CHROMEDRIVER="${NODE_MODULES}/protractor/selenium/chromedriver"
LOG="${TMPDIR}/chromedriver.$$.log"
fatal() {
    # Dump to stderr because that seems reasonable
    echo >&2 "$0: ERROR: $*"
    # Dump to a logfile because webdriver redirects stderr to /dev/null (?!)
    echo >"${LOG}" "$0: ERROR: $*"
    exit 11
}
[ ! -x "$CHROMEDRIVER" ] && fatal "Cannot find chromedriver: $CHROMEDRIVER"
exec "${CHROMEDRIVER}" --verbose --log-path="${LOG}" "$#"

According to Protractor's source code, the chromedriver service is started without any arguments and there is no direct way to configure them, even though the chromedriver ServiceBuilder that Protractor uses actually has the ability to specify the verbosity and the log path:
var service = new chrome.ServiceBuilder()
    .loggingTo('/my/log/file.txt')
    .enableVerboseLogging()
    .build();
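For comparison, here is a minimal sketch of how that ServiceBuilder is wired up when driving selenium-webdriver directly (outside Protractor); the log path is illustrative and the exact API depends on your selenium-webdriver version:
var webdriver = require('selenium-webdriver');
var chrome = require('selenium-webdriver/chrome');
// Build a chromedriver service with verbose logging (path is illustrative)
var service = new chrome.ServiceBuilder()
    .loggingTo('/my/log/file.txt')
    .enableVerboseLogging()
    .build();
// Register it as the default service, then build a Chrome driver as usual
chrome.setDefaultService(service);
var driver = new webdriver.Builder()
    .forBrowser('chrome')
    .build();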
Old (incorrect) answer:
You need to set the chrome arguments:
capabilities: {
  browserName: "chrome",
  chromeOptions: {
    args: [
      "verbose",
      "log-path=chromedriver.log"
    ]
  }
},
See also:
Viewing outstanding requests

Since the previous answer by @P.T. didn't work for me on Windows 7, I started with his suggestions and got it working. Here is a working solution for Windows 7 users.
STEP 1: Install BASH and JQ and confirm they are working on your Windows box
Download bash (for Windows 10: https://itsfoss.com/install-bash-on-windows/ ; for Windows 7, download the latest build from https://sourceforge.net/projects/win-bash/files/shell-complete/latest/ ; for Windows Server 2012 or any Windows OS that already has Git installed, you already have bash.exe and sh.exe at C:\Program Files\Git\usr\bin or C:\Program Files (x86)\Git\usr\bin).
Install bash - for Windows 7, extract the downloaded zip files to a directory.
Download jq (https://stedolan.github.io/jq/) and install it in the same directory as bash.
Make SURE that you add the above directory (for Windows 7, where you extracted the bash zip files; for other OSes that have Git, the path it is installed at) to your PATH system environment variable.
Once the above is installed and added to your PATH, close ALL of them and reopen WebStorm and any CMD windows you wish to run your work in.
Test that bash is actually installed by simply typing it at a Windows command prompt:
C:\git\> bash
Doing so should produce a bash prompt like this:
bash$
STEP 2: Add Custom Files for Redirecting Chromedriver to user Debug Logging
Add the following files to the top level of the project (wherever your protractor-conf.js is located). These files allow us to add custom debug switches to the chromedriver.exe execution.
Note that this is necessary because these switches are not exposed through Protractor and cannot be set directly in the protractor.conf.js file via the chromeOptions/args flags as you would normally expect.
chromedriver.cmd -- exact source shown below:
bash protractor-chromedriver.sh %*
protractor-chromedriver.sh -- exact source shown below:
TMPDIR="$(dirname $0)/tmp"
NODE_MODULES="$(dirname $0)/node_modules"
SELENIUM="${NODE_MODULES}/protractor/node_modules/webdriver-manager/selenium"
UPDATECONFIG="${SELENIUM}/update-config.json"
EXEFILENAME="$(cat ${UPDATECONFIG} | jq .chrome.last | tr -d '""')"
CHROMEDRIVER="${SELENIUM}/${EXEFILENAME##*'\\'}"
LOG="${TMPDIR}/chromedriver.$$.log"
fatal() {
    # Dump to stderr because that seems reasonable
    echo >&2 "$0: ERROR: $*"
    # Dump to a logfile because webdriver redirects stderr to /dev/null (?!)
    echo >"${LOG}" "$0: ERROR: $*"
    exit 11
}
[ ! -x "$CHROMEDRIVER" ] && fatal "Cannot find chromedriver: $CHROMEDRIVER"
exec "${CHROMEDRIVER}" --verbose --log-path="${LOG}" "$#"
tmp -- create this directory at the top level of your project (the same location as the protractor.conf.js file); the script writes its logs there.
STEP 3: Update protractor.conf.js file.
In the protractor.conf.js file, add the following line as a property in the exports.config object. As in:
exports.config = {
.. ..
chromeDriver: 'chromedriver.cmd',
.. ..
STEP 4: Launch your tests
Your tests should now run, and if the chromedriver outputs any log information it will appear in a file named chromedriver.<pid>.log (the $$ in the script expands to the process ID) in the tmp directory under your project.
Important caveats
This script setup assumes you install and run Protractor (and the chromedriver under it) within the local node_modules directory inside your project. That is how I run my code, because I want it completely self-contained and regenerated in the build process/cycle. If you have protractor/chromedriver installed globally, you should change the CHROMEDRIVER variable within the protractor-chromedriver.sh file to match your installation of protractor/chromedriver.
Hope that helps.

If you're using seleniumServerJar, set the log file path in protractor.conf.js to wherever you want it to write the file:
seleniumArgs: [
'-Dwebdriver.chrome.logfile=/home/myUsername/tmp/chromedriver.log',
]
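Put together, a sketch of how this might look in a full protractor.conf.js (jar and log paths are illustrative; reuse your own):
exports.config = {
  framework: 'jasmine2',
  seleniumServerJar: './node_modules/protractor/selenium/selenium-server-standalone-2.45.0.jar',
  seleniumArgs: [
    '-Dwebdriver.chrome.logfile=/home/myUsername/tmp/chromedriver.log'
  ],
  specs: ['example_spec.js']
};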
If you're using webdriver-manager start to run a local selenium server, you'll need to edit the webdriver-manager file:
// insert this line
args.push('-Dwebdriver.chrome.logfile=/home/myUsername/tmp/chromedriver.log');
// this line already exists in webdriver-manager, add the push to args before this line
var seleniumProcess = spawnCommand('java', args);

In case you use webdriver-manager: it has a chrome_logs option (you can find it in the source code, in opts.ts, or in opts.js in the compiled code), so you can use it like this:
webdriver-manager start --chrome_logs /path/to/logfile.txt
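If you go that route, Protractor just needs to point at the server webdriver-manager started; a minimal sketch (4444/wd/hub is webdriver-manager's default address):
exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  capabilities: {
    browserName: 'chrome'
  },
  specs: ['example_spec.js']
};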

I'm using this as a global afterEach hook (mocha):
const util = require('util'); // needed for util.inspect below

afterEach(() => {
  browser.manage().logs().get('browser').then(function(browserLog) {
    if (browserLog && browserLog.length) {
      console.log('\nlog: ' + util.inspect(browserLog) + '\n');
    }
  });
});
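To control which console levels end up in browser.manage().logs().get('browser'), you can also set loggingPrefs in your capabilities; a minimal sketch (exact defaults depend on your chromedriver/Selenium versions):
capabilities: {
  browserName: 'chrome',
  loggingPrefs: {
    browser: 'ALL' // capture all console levels, not just warnings/errors
  }
}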

Related

Windows 10, existing binding.node file cannot be found when running jest

When running Windows 10, on AppVeyor and as a VirtualBox VM, I'm getting the same error when running Jest tests for my Electron app.
The specified module could not be found.
\\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
Error: The specified module could not be found.
\\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
at Runtime.requireModule (node_modules/jest-runtime/build/index.js:263:31)
at bindings (app/node_modules/bindings/bindings.js:76:44)
at Object.<anonymous> (app/node_modules/ref/lib/ref.js:5:47)
(https://ci.appveyor.com/project/joshuef/peruse/build/1.0.733/job/fwflo19to9rvt085#L4664)
The thing is... the file itself exists, as confirmed by running:
dir \\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
which results in:
-a---- 4/9/2018 1:44 AM 157696 binding.node
And the application itself runs fine (it's an electron app, compiled via webpack). Only when running tests against the native libs do I get this error. And only in windows. (osx/linux tests are running fine.)
I've tried rebuilding, using npm instead of yarn, re-installing vs2017 tools via the command line... I'm consistently getting this error on both systems (which is something), but I'm stumped as to what to try next...
Jest is being run from the command line.
Jest is configured thus:
module.exports = {
  verbose: true,
  moduleFileExtensions: ['js', 'jsx', 'json'],
  setupFiles: ['raf/polyfill', '<rootDir>/test/setup.js'],
  testPathIgnorePatterns: ['node_modules'],
  moduleDirectories: ['app', 'test', 'node_modules', 'app/node_modules'],
  moduleNameMapper: {
    "electron": "<rootDir>/mocks/electron.js",
    "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/mocks/fileMock.js",
    "\\.(css|scss)$": "<rootDir>/mocks/fileMock.js",
    '^appPackage$': '<rootDir>/package.json',
    '^#actions(.*)$': '<rootDir>/app/actions$1',
    '^#components(.*)$': '<rootDir>/app/components$1',
    '^#containers(.*)$': '<rootDir>/app/containers$1',
    '^appConstants$': '<rootDir>/app/constants.js',
    '^#extensions(.*)$': '<rootDir>/app/extensions$1',
    '^#logger$': '<rootDir>/app/logger.js',
    '^#reducers(.*)$': '<rootDir>/app/reducers$1',
    '^#store(.*)$': '<rootDir>/app/store',
    '^#utils(.*)$': '<rootDir>/app/utils$1'
  }
};
the appveyor config file is here.
Any pointers/ideas/things to check super appreciated. If more code clarification could be needed, just let me know.
Thanks in advance!
I had a similar error with the ibm_db module on Windows 10 (but not in a Linux Docker container):
The specified module could not be found.
\\?\C:\_projects\projectName\node_modules\ibm_db\build\Release\odbc_bindings.node
at Runtime._loadModule (node_modules/jest-runtime/build/index.js:572:29)
at bindings (node_modules/bindings/bindings.js:112:48)
Found the solution here:
1. Download the ODBC driver from the IBM site. You might need to register at IBM for this.
2. Install the ODBC driver by simply extracting the content of the downloaded package to some folder, for example to "C:\IBMDB2\CLIDRIVER\".
3. Set the IBM_DB_HOME environment variable to point to the folder with the drivers (to "C:\IBMDB2\CLIDRIVER\" if you used that path).
4. Add "%IBM_DB_HOME%\bin" to your "PATH" environment variable.
5. Restart, or sign out/sign in, so the changes to PATH take effect.
6. Reinstall packages using "npm install", since all those environment variables are only used during the package installation phase.

Run selenium jar on travis CI from protractor node_modules folder

I am setting up Travis in order to execute e2e tests through protractor.
On my protractor.config.js I have the following:
seleniumServerJar: './node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.5.0.jar'
So actually it refers to the selenium jar included by default inside the protractor plugin.
Then I use the plugin gulp-protractor in order to execute the tests pointing to the right protractor.config.js.
Locally everything works like a charm.
But when trying to execute this on Travis, I am getting the following error:
[18:59:15] I/launcher - Running 1 instances of WebDriver
[18:59:15] E/local - Error code: 135
[18:59:15] E/local - Error message: No selenium server jar found at /home/travis/build/quirimmo/Qprotractor/node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.5.0.jar.
Run 'webdriver-manager update' to download binaries.
Any idea why it looks like it cannot retrieve the jar from the node_modules subfolder please?
Here my .travis.yml configuration, which is actually pretty simple:
sudo: required
dist: trusty
addons:
  chrome: stable
language: node_js
node_js:
- '6.11'
before_script:
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
- sleep 3
install:
- npm install
script:
- echo "Triggered!"
- gulp protractor-test
Thanks a lot, any help would be really appreciated!
P.S. I have already done this on other projects with Travis, by manually running webdriver-manager and then pointing at the Selenium address from protractor.config.js, but I don't want that solution; I want to go through the seleniumServerJar property, because that way it runs everything on its own without any need to start webdriver-manager manually.
Fixed in your repo. You should change your before_script to the following:
before_script:
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
- sleep 3
- npm install -g webdriver-manager
- webdriver-manager update
- webdriver-manager start &
- sleep 3
And then in your protractor.config.js add the seleniumAddress:
exports.config = {
  seleniumAddress: 'http://127.0.0.1:4444/wd/hub/',
  specs: [
    './test/base-protractor.spec.js',
    './test/element-finder.spec.js',
    './test/element-array-finder.spec.js'
  ],
  onPrepare: function() {
    require('./index');
  }
};
Posting the answer here in case it could be useful for someone else in the future.
As explained very well in this link:
https://github.com/angular/protractor/issues/3225
You need to manually trigger the installation of the selenium server.
So in the install block of your travis file, you can simply add this:
install:
- npm install
- node_modules/protractor/bin/webdriver-manager update
And then inside the protractor.config.js, grab the current version of the installed selenium server:
const SELENIUM_FOLDER = './node_modules/protractor/node_modules/webdriver-manager/selenium';
const fs = require('fs');
let res, seleniumVersion;
fs.readdirSync(SELENIUM_FOLDER).forEach(file => {
  res = file.match(/selenium-server-standalone-(\d{1}.\d{1}.\d{1}).jar/i);
  if (res) {
    seleniumVersion = res[1];
  }
});
if (!seleniumVersion) {
  throw new Error('No selenium server jar found inside your protractor node_modules subfolder');
}
And then execute it in this way:
seleniumServerJar: `./node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-${seleniumVersion}.jar`
I hope this will help someone else avoid losing a few hours on this issue!

Docker/Selenium/Headless Chrome: Configure SUID sandbox correctly

I want to run Selenium and headless Chrome in my Docker container for testing purposes.
I have tried to run Selenium with headless Chrome (outside my Docker container) with the following in my .js file. This worked:
const client = webdriverio.remote({
  desiredCapabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: ['--headless', '--disable-gpu']
    },
    binary: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome'
  },
  baseUrl: CONFIG.host,
  logLevel: 'verbose',
  waitForTimeout: 3000
})
But I can't get this to work in my Docker container. In my Dockerfile I use "FROM selenium/standalone-chrome". There does not seem to be any problem with my Dockerfile. The problem occurs when I try to run my Selenium tests. I changed the binary path in my .js file to /opt/google/chrome/google-chrome, but the tests fail and the client cannot even be initiated.
So I tried to just run /opt/google/chrome/google-chrome in order to see if chrome works, but then I get this error:
[0711/005304.226472:ERROR:nacl_helper_linux.cc(311)] NaCl helper
process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
I am pretty new to this (and Stack Overflow), so there might be some basic things I have missed.
Try to include --no-sandbox
chromeOptions: {
  args: ['--headless', '--disable-gpu', '--no-sandbox']
},
As I'm doing at docker-selenium
This error message...
[1003/144118.702053:ERROR:nacl_helper_linux.cc(310)] NaCl helper process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
...implies that you have no setuid sandbox in your system, hence the program was unable to initiate/spawn a new Browsing Context i.e. Chrome Browser session.
Solution
The easiest (not so clean) solution is, if you want to run Chrome and only use the namespace sandbox, you can set the flag:
--disable-setuid-sandbox
This flag will disable the setuid sandbox (Linux only). But if you do so on a host without appropriate kernel support for the namespace sandbox, Chrome will not spin up. As an alternative you can also use the flag:
--no-sandbox
This flag will disable the sandbox for all process types that are normally sandboxed.
Example:
chromeOptions: {
  args: ['--disable-setuid-sandbox', '--no-sandbox']
},
You can find a detailed discussion in Security Considerations - ChromeDriver - Webdriver for Chrome
Deep dive
As per the documentation in Linux SUID Sandbox Development, google-chrome needs a SUID helper binary to turn on the sandbox on Linux. In the majority of cases you can install the proper sandbox using the command:
build/update-linux-sandbox.sh
This program will install the proper sandbox for you in /usr/local/sbin and tell you to update your .bashrc if required.
However, there can be some exceptions. As an example, if your setuid binary is out of date, you will get messages such as:
NaCl helper process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
Or
Running without the SUID sandbox!
In these cases, you need to:
Build chrome_sandbox whenever you build chrome (ninja -C xxx chrome chrome_sandbox instead of ninja -C xxx chrome)
After building, execute update-linux-sandbox.sh.
# needed if you build on NFS!
sudo cp out/Debug/chrome_sandbox /usr/local/sbin/chrome-devel-sandbox
sudo chown root:root /usr/local/sbin/chrome-devel-sandbox
sudo chmod 4755 /usr/local/sbin/chrome-devel-sandbox
Finally, you have to include the following line in your ~/.bashrc (or .zshenv):
export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox

Node's spawn() silently failing when called from a forever script scheduled on boot

This is kind of a doozy. This issue is most likely server related and so my first recourse was AskUbuntu over here.
I'm trying to have crontab, rc.local, or init.d start a forever script on boot. It attaches a server to a port that I can ping with some information, to have it run a headless browser for me.
That said, it seems that I'm unable to get a response from Node.js's spawn():
var CASPER_PATH = '/home/ubuntu/dev/casperjs/bin/casperjs'; // actual binary location, not a symlink
var SCRIPTS_PATH = '/home/custom_user/endpoints/server.js';
var fileName = req.body.source + '_' + req.body.type + '.coffee'; // looks like: mysource_my_scrape_type.coffee
var scrapeId = 'test_scrape';
var user = 'user123';
var pass = 'pass123';

if (fs.existsSync(SCRIPTS_PATH + fileName)) {
  // If file is in place, spawn casperjs
  var sP = spawn(CASPER_PATH,
    [SCRIPTS_PATH + fileName, '--ssl-protocol=any', '--user=' + user, '--scrapeId=' + scrapeId, '--pass=' + pass],
    { detached: true },
    function (err, stdout, stderr) {});
  sP.stdout.on('data', function(data) { console.log('stdout', data.toString('utf8')); });
  sP.stderr.on('data', function(data) { console.log('stderr', data.toString('utf8')); });
  sP.stdout.on('close', function(code) { console.log('close', code); });
  res.send({ scheduled: true, key: scrapeId });
} else {
  res.send({ scheduled: false, error: 'Incorrect source, type or the script is missing.' });
}
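(For reference, spawn itself emits an 'error' event when the child cannot be launched at all, so handlers like the following -- not part of the snippet above, added only for illustration -- are one way to surface failures such as ENOENT:)
sP.on('error', function(err) {
  // Fires if the executable could not be started at all (e.g. ENOENT)
  console.log('spawn error', err);
});
sP.on('exit', function(code, signal) {
  console.log('exit', code, signal);
});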
Before I added the PHANTOMJS_EXECUTABLE env to crontab or rc.local (it doesn't seem to matter regardless of the user level), stdout was useful:
stdout Fatal: [Errno 2] No such file or directory; did you install
phantomjs?
close false
Now that the environment var is there, there is no output at all after spawn().
Mind you, Casper starts up just fine if a user (of any privilege level) runs node/forever from bash.
How can I see why spawn() is failing?
This actually looks like a combo-bug between forever, spawn and casperjs (maybe phantomjs).
I was able to reproduce your problem, here is the full code of my test application.
You didn't show the full code, so my guess is that you have an express application and there is a special URL to run the casperjs script.
I built a simple app like this and it behaved this way:
Just start app with node script.js (script.js is the express app which runs the casperjs script in server.js) - it works OK, renders response and writes output from the child process event handlers to console
Start app as root with init.d script - doesn't work, once the child is spawned, no event handlers are triggered
Start app as root with init.d script, replace casperjs with echo - the same, doesn't work (see, here we have this problem with just forever running as root, spawn and echo)
Start app as a regular user (not root) with init.d, replace casperjs with 'echo' - it works, event handlers are triggered, here I was almost sure the issue is solved, but ... :(
Start app as a regular user (not root) with init.d, put back casperjs - it doesn't work again, event handlers are not triggered
The practical solution to this is to use pm2. I did this:
# install pm2
sudo npm install -g pm2
# generate init.d scripts for pm2
# this command will fail, but hint about the correct format with sudo
pm2 startup ubuntu
# do this in the folder with your application
pm2 start script.js
# remember your application
pm2 save
# also useful
# sudo service stop/start/restart pm2
# pm2 stop/start/restart script
Now pm2 will start automatically with the system and it will launch your application. Everything works, child process event handlers are triggered.
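If you prefer to keep the pm2 setup in a file, newer pm2 versions also accept a process/ecosystem config; a minimal sketch (the app name and cwd are assumptions):
// ecosystem.config.js -- then: pm2 start ecosystem.config.js && pm2 save
module.exports = {
  apps: [{
    name: 'scrape-server',      // assumed name
    script: 'script.js',        // the express app from the test above
    cwd: '/home/ubuntu/dev/app' // assumed project directory
  }]
};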
I did not understand your requirement completely, but I do have a similar situation with a headless Ubuntu server.
What I am trying to show here is what I did.
First, how does my crontab look?
crontab -u USER -e
@reboot exec sudo -u USER /bin/bash /home/USER/SHELL_SCRIPT.sh
See, here I am actually starting a shell script, and not a Node server.
Now inside this shell script (SHELL_SCRIPT.sh):
#! /bin/bash
# SHELL_SCRIPT.sh
cd /home/USER/
/home/USER/.npm-packages/bin/forever start -p /home/USER -a -d --watch false --pidFile /home/USER/forever.pid -l /home/USER/forever.log -o /home/USER/forever.out -e /home/USER/forever.err /home/USER/MY_NODE.js
And even inside my MY_NODE.js I use absolute paths; I just ignore $PATH and don't rely on it.
Inside this Node server, I do hundreds of spawn() calls.
Now, I did this around 2 years back, so if you ask me why it has to be done this way, I cannot answer.

Where can I find the node.js file when node is installed?

I've tried to remove this message from node:
(node) warning: Recursive process.nextTick detected
because nothing else works. I've downloaded the source of node from the Ubuntu repository (I use the binary from npm but it should be almost the same, right?) and there is a node.js file containing this:
function maxTickWarn() {
  // XXX Remove all this maxTickDepth stuff in 0.11
  var msg = '(node) warning: Recursive process.nextTick detected. ' +
            'This will break in the next version of node. ' +
            'Please use setImmediate for recursive deferral.';
  if (process.throwDeprecation)
    throw new Error(msg);
  else if (process.traceDeprecation)
    console.trace(msg);
  else
    console.error(msg);
}
Where can I find this file when node is installed as a binary?
Node's .js files are compiled into the node binary, so if you want to change this, you will need to check out the git repo, modify the file containing maxTickWarn and then compile Node from source.
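Note that the warning text itself points at the code-level fix: replace the recursive process.nextTick call with setImmediate so the event loop can drain between iterations. A minimal sketch (doSomeWork is a placeholder for whatever your callback does):
function pump() {
  doSomeWork(); // placeholder for your own work
  // process.nextTick(pump);  // recursing here triggers the warning
  setImmediate(pump);         // deferral that yields to the event loop
}
pump();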
Have you tried running node with --no-deprecation?
Usage: node [options] [ -e script | script.js ] [arguments]
node debug script.js [arguments]
Options:
-v, --version print node's version
-e, --eval script evaluate script
-p, --print evaluate script and print result
-i, --interactive always enter the REPL even if stdin
does not appear to be a terminal
--no-deprecation silence deprecation warnings
--trace-deprecation show stack traces on deprecations
--v8-options print v8 command line options
--max-stack-size=val set max v8 stack size (bytes)
