I am trying to pass a URL from the command line when I run Karate integration tests. I took a look at this and tried to do the same thing, but so far no luck.
I have this karate-config.js file
function karateconf() {
  karate.configure('connectTimeout', 5000);
  karate.configure('readTimeout', 5000);
  var config = { baseURL: 'http://localhost:8080' };
  if (karate.env == 'ci') {
    config.baseURL = karate.properties['base.URL'];
    karate.log('*******************************', karate.properties['base.URL']);
  }
  return config;
}
And I run the test using Gradle like this:
./gradlew integrationTest -Dkarate.env=ci -Dbase.URL=http://someurl:8080
And here are the Karate logs:
14:12:54.599 [pool-1-thread-1] INFO com.intuit.karate - ******************************* null
14:12:54.827 [pool-1-thread-1] ERROR com.intuit.karate - url not set, please refer to the keyword documentation for 'url'
14:12:54.827 [pool-1-thread-1] ERROR com.intuit.karate - http request failed: url not set, please refer to the keyword documentation for 'url'
14:12:54.836 [pool-1-thread-1] INFO c.i.karate.cucumber.CucumberRunner - <<<< feature 1 of 1 on thread pool-1-thread-1: com/guidewire/lifecycle/controller/configuration-controller.feature
14:12:55.359 [Test worker] INFO n.m.cucumber.ReportParser - File '/workspace/configuration-service/configuration-infrastructure/app-backend/lifecycle/target/surefire-reports/TEST-com.guidewire.lifecycle.controller.configuration-controller.json' contain 1 features
I could not figure out what I am missing here.
Gradle? This is covered in the documentation: https://github.com/intuit/karate#command-line - it looks like you need to add base.URL to your Gradle build file the same way as below:
For gradle you must extend the test task to allow the cucumber.options to be passed to the Cucumber-JVM (otherwise they get consumed by gradle itself). To do that, add the following:
test {
    // pull cucumber options into the cucumber jvm
    systemProperty "cucumber.options", System.properties.getProperty("cucumber.options")
    // pull karate options into the jvm
    systemProperty "karate.env", System.properties.getProperty("karate.env")
    // ensure tests are always run
    outputs.upToDateWhen { false }
}
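Following the same pattern, here is a sketch of what that could look like for your custom property. Note that base.URL is your own key (not a Karate built-in), and since you run ./gradlew integrationTest, the systemProperty lines may need to go on your integrationTest task rather than test:

integrationTest {
    // pull karate options into the test JVM
    systemProperty "karate.env", System.properties.getProperty("karate.env")
    // pass your own base.URL property through as well
    systemProperty "base.URL", System.properties.getProperty("base.URL")
    // ensure tests are always run
    outputs.upToDateWhen { false }
}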
After upgrading from Webpacker 4 to 5, I receive a new error while running rails webpacker:compile. Running yarn dev alone works without issue. I can't seem to find what the cause of this bug is, or which file it's located in, and there aren't many debugging tools in this case. How do I fix the problem where splitChunks is failing for Webpacker 5?
Error:
➜ rails webpacker:compile
warning: parser/current is loading parser/ruby27, which recognizes
warning: 2.7.3-compliant syntax, but you are running 2.7.4.
warning: please see https://github.com/whitequark/parser#compatibility-with-ruby-mri.
I, [2022-01-04T14:59:51.4223 #20612] INFO -- : initializing Lit
Compiling...
Compilation failed:
[webpack-cli] Invalid configuration object. Webpack has been initialized using a configuration object that does not match the API schema.
- configuration.optimization.splitChunks should be one of these:
false | object { automaticNameDelimiter?, cacheGroups?, chunks?, defaultSizeTypes?, enforceSizeThreshold?, fallbackCacheGroup?, filename?, hidePathInfo?, maxAsyncRequests?, maxAsyncSize?, maxInitialRequests?, maxInitialSize?, maxSize?, minChunks?, minRemainingSize?, minSize?, minSizeReduction?, name?, usedExports? }
-> Optimize duplication and caching by splitting chunks by shared modules and cache group.
Details:
* configuration.optimization.splitChunks.name should be one of these:
false | string | function
-> Give chunks created a name (chunks with equal name are merged).
Details:
* configuration.optimization.splitChunks.name should be false.
* configuration.optimization.splitChunks.name should be a string.
* configuration.optimization.splitChunks.name should be an instance of function.
I don't know what your configuration is, but the problem is that configuration.optimization.splitChunks.name is something other than false, a string, or an instance of function.
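If you customize splitChunks through config/webpack/environment.js (the usual Webpacker place for this; your setup may differ), here is a sketch of a shape that webpack 5 accepts. A common webpack 4 leftover is name: true, which webpack 5 no longer allows:

// config/webpack/environment.js
const { environment } = require('@rails/webpacker')

environment.splitChunks((config) =>
  Object.assign({}, config, {
    optimization: {
      splitChunks: {
        chunks: 'all',
        // webpack 5 only accepts false, a string, or a function here;
        // `name: true` from a webpack 4 config triggers the schema error above
        name: false
      }
    }
  })
)

module.exports = environment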
I am using Cypress along with the Mochawesome reporter and the cypress-image-snapshot library. The cypress-image-snapshot library creates screenshots of visual regressions any time a test fails. I am trying to see if there is a way of injecting these images into the final report.
I was able to inject the standard Cypress failure screenshot into the report, since I already know the path of the file. The file name always follows the same pattern, so I could build the string easily.
// cypress/support/index.js
import './commands'
const addContext = require('mochawesome/addContext')

Cypress.on('test:after:run', (test, runnable) => {
  if (test.state === 'failed') {
    // Build image path
    const screenshotFileName = `${runnable.parent.title} -- ${test.title} (failed).png`
    addContext({ test }, `assets/${Cypress.spec.name}/${screenshotFileName}`)
    // TODO: Here I need to inject all the images from the snapshots/SPEC_NAME/__diff_output__ directory
    ...
  }
})
I tried creating a Cypress task to access the filesystem and simply read all the files from the specific __diff_output__ folder; however, tasks are not available in the Cypress.on('test:after:run') event listener.
I managed to add the diff image from the cypress-image-snapshot plugin to the mochawesome HTML report.
The way I did it is not the safest, but it's the fastest way and actually the only way I found.
// cypress/support/index.js (same file as in the question)
const addContext = require('mochawesome/addContext')

Cypress.on('test:after:run', (test, runnable) => {
  if (test.state === 'failed') {
    // Default to the standard Cypress failure screenshot
    let screenshot = `${Cypress.config('screenshotsFolder')}/${Cypress.spec.name}/${runnable.parent.title} -- ${test.title} (failed).png`;
    if (test.err.message.includes('See diff')) {
      // If the test failed because of cypress-image-snapshot, the message is always the same
      // and the plugin includes the path of the diff image in it
      screenshot = test.err.parsedStack[1].message.replace('See diff for details: ', '');
    }
    addContext({ test }, {
      title: 'Image',
      value: screenshot
    });
  }
})
I am trying to integrate the stackdriver-errors-js library into my Vue project as a module.
The code and the setup:
in package.json
"stackdriver-errors-js": "^0.2.0"
in bootstrap.js
import errorHandler from './error/error-reporting';
in error-reporting.js
import { StackdriverErrorReporter } from 'stackdriver-errors-js';
let errorHandler;
errorHandler = new StackdriverErrorReporter();
errorHandler.start({
  key: "{{.Config.StackDriverApiKey}}",
  projectId: "{{.Config.StackDriverProject}}",
  service: "{{.Config.GoogleCloudProjectID}}",
  version: "{{.Config.GaeEnv}}",
  disabled: false
});
export default errorHandler;
The actual error
The error I get now (console output and test):
[vue-devtools] Ready. Detected Vue v2.4.2
(function testErrorReporting() {window.onerror(null, null, null, null, new Error('Test: Something broke!'));})();
stackdriver-errors.js:109 Uncaught ReferenceError: StackTrace is not defined
at StackdriverErrorReporter.webpackJsonp.556.StackdriverErrorReporter.report (stackdriver-errors.js:109)
at window.onerror (stackdriver-errors.js:67)
at testErrorReporting (<anonymous>:1:40)
at <anonymous>:1:111
and line 109 of stackdriver-errors.js is:
...
StackTrace.fromError(err).then(function(stack){
...
If you do not load the stackdriver-errors-concat.min.js file, you also need to manually load the stacktrace-js module.
stackdriver-errors expects a global StackTrace object to be present.
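A minimal sketch of one way to wire that up in a bundled setup like yours (assuming you install the stacktrace-js package with npm install --save stacktrace-js and expose it under the global name StackTrace that stackdriver-errors.js looks up; verify the details against your own build):

// error/error-reporting.js
import StackTrace from 'stacktrace-js';
import { StackdriverErrorReporter } from 'stackdriver-errors-js';

// stackdriver-errors.js calls StackTrace.fromError() on a global,
// so make the module available under that name before start()
window.StackTrace = StackTrace;

const errorHandler = new StackdriverErrorReporter();
errorHandler.start({
  key: "{{.Config.StackDriverApiKey}}",
  projectId: "{{.Config.StackDriverProject}}",
  service: "{{.Config.GoogleCloudProjectID}}"
});

export default errorHandler;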
Since the library you want to use is experimental, and therefore cannot be used in a production environment, it would be better to use a different library which has been tested and validated for production use.
I suggest using this other library instead, which includes features related to Stackdriver error reporting for Node.js and JavaScript.
First of all, install the dependency by running this command:
npm install --save @google-cloud/error-reporting
This will add the dependency automatically to package.json.
In error-reporting.js, you can add the dependency by adding this to your code (all the parameters are optional):
var errors = require('@google-cloud/error-reporting')({
  projectId: 'my-project-id',
  keyFilename: '/path/to/keyfile.json',
  credentials: require('./path/to/keyfile.json'),
  // if true, the library will attempt to report errors to the service regardless
  // of the value of NODE_ENV
  // defaults to false
  ignoreEnvironmentCheck: false,
  // determines the logging level internal to the library; levels range 0-5
  // where 0 indicates no logs should be reported and 5 indicates all logs
  // should be reported
  // defaults to 2 (warnings)
  logLevel: 2,
  // determines whether or not unhandled rejections are reported to the
  // error-reporting console
  reportUnhandledRejections: true,
  serviceContext: {
    service: 'my-service',
    version: 'my-service-version'
  }
});
After that, use this code to test if the error is properly reported by Stackdriver:
errors.report(new Error('Something broke!'));
Please be aware that this library is currently in beta, so there might be some changes to it in the future.
I'm trying to get webdriver.io and Jasmine working.
Following their example, my script is at test/specs/first/test2.js (in accordance with the configuration) and contains:
var webdriverio = require('webdriverio');

describe('my webdriverio tests', function() {
    var client = {};
    jasmine.DEFAULT_TIMEOUT_INTERVAL = 9999999;

    beforeEach(function() {
        client = webdriverio.remote({ desiredCapabilities: { browserName: 'firefox' } });
        client.init();
    });

    it('test it', function(done) {
        client
            .url("http://localhost:3000/")
            .waitForVisible("h2.btn.btn-primary")
            .click("h2.btn.btn-primary")
            .waitForVisible("h2.btn.btn-primary")
            .call(done);
    });

    afterEach(function(done) {
        client.end(done);
    });
});
I'm using wdio as the test runner, and set it up using the interactive setup. That config is automatically-generated and all pretty straightforward, so I don't see a need to post it.
In another terminal window, I am running selenium-server-standalone-2.47.1.jar with Java 7. I do have Firefox installed on my computer (it blankly starts when the test is run), and my computer is running OS X 10.10.5.
This is what happens when I start the test runner:
$ wdio wdio.conf.js
=======================================================================================
Selenium 2.0/webdriver protocol bindings implementation with helper commands in nodejs.
For a complete list of commands, visit http://webdriver.io/docs.html.
=======================================================================================
[18:17:22]: SET SESSION ID 46731149-79aa-412e-b9b5-3d32e75dbc8d
[18:17:22]: RESULT {"platform":"MAC","javascriptEnabled":true,"acceptSslCerts":true,"browserName":"firefox","rotatable":false,"locationContextEnabled":true,"webdriver.remote.sessionid":"46731149-79aa-412e-b9b5-3d32e75dbc8d","version":"40.0.3","databaseEnabled":true,"cssSelectorsEnabled":true,"handlesAlerts":true,"webStorageEnabled":true,"nativeEvents":false,"applicationCacheEnabled":true,"takesScreenshot":true}
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
at waitForVisible("h2.btn.btn-primary") - test2.js:21:14
/usr/local/lib/node_modules/webdriverio/node_modules/q/q.js:141
throw e;
^
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
0 passing (3.90s)
$
I find this very strange and inexplicable, especially considering that it even prints the session ID.
Any ideas?
Please check out the docs on the wdio test runner. You don't need to create an instance using init on your own; the wdio test runner takes care of creating and ending the session for you.
Your example covers the standalone WebdriverIO usage (without the testrunner). You can find examples which use wdio here.
To clarify: there are two ways of using WebdriverIO. You can embed it in your test system yourself (using it standalone, or as a scraper); then you need to take care of things like creating and ending an instance or running those in parallel. The other way to use WebdriverIO is its test runner, called wdio. The testrunner takes a config file with a bunch of information on your test setup, spawns instances, updates job information on Sauce Labs, and so on. A sketch of your spec rewritten for the runner is shown below.
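This sketch assumes the WebdriverIO version from that era, where the wdio runner provides a global browser object and creates/ends the session for you, so there are no init() or end() calls:

// test/specs/first/test2.js - run with: wdio wdio.conf.js
describe('my webdriverio tests', function() {

    it('test it', function(done) {
        // `browser` is the session the wdio runner already created
        browser
            .url("http://localhost:3000/")
            .waitForVisible("h2.btn.btn-primary")
            .click("h2.btn.btn-primary")
            .waitForVisible("h2.btn.btn-primary")
            .call(done);
    });

});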
Every Webdriver command gets executed asynchronously.
You properly called the done callback in afterEach and in your it test, but forgot to do it in beforeEach:
beforeEach(function(done) {
    client = webdriverio.remote({ desiredCapabilities: { browserName: 'firefox' } });
    client.init(done);
});
Is there an equivalent to perl -c for syntax-checking JavaScript from the command line, given that I have Node.js installed?
JSLint is not considered, as it is not a real parser. I think the YUI Compressor is a possibility, but I don't want to install Java on production machines, so I am checking whether Node.js already provides this syntax check mechanism.
If you want to perform a syntax check the way we do in Perl, you can simply use node -c <js file-name> (-c is short for --check).
e.g. a JS file test.js that contains (note the missing closing brace before else):
let x = 30
if ( x == 30 ) {
    console.log("hello");
else {
    console.log( "world");
}
Now run node -c test.js and it will show you:
test.js:5
else {
^^^^
SyntaxError: Unexpected token else
at startup (bootstrap_node.js:144:11)
at bootstrap_node.js:509:3
Now, after fixing the syntax issue:
let x = 30
if ( x == 30 ) {
    console.log("hello");
} else {
    console.log( "world");
}
checking the syntax again with node -c test.js will show no syntax error.
Note - we can even use it to check the syntax of all files in a directory: node -c *.js
Try uglify. You can install it via npm.
Edit: The package name has changed. It is uglify-js.
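As a sketch of how it could be used for a pure syntax check (uglifyjs parses the file before minifying, so an invalid file fails with a parse error; sending the output to /dev/null just discards the minified result):

npm install -g uglify-js
uglifyjs /path/to/file.js -o /dev/null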
nodejs --help explains the -p switch: it evaluates the supplied code and prints the result. So using nodejs -p < /path/to/file.js would be a disastrous way to check the validity of Node.js code on your server, since the code actually gets executed. One possible solution is the one indicated in this SO thread. The one thing not so good about it: the syntax error messages it reports are not terribly helpful. For instance, it tells you something is wrong, but not where.
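One common approach of that kind (not necessarily the exact one in the linked thread) is to compile the source without running it. A minimal sketch, using a hypothetical helper file called check-syntax.js:

// check-syntax.js - compile a file without executing it
var fs = require('fs');

var source = fs.readFileSync(process.argv[2], 'utf8');
try {
    // new Function() compiles the code as a function body; nothing is invoked,
    // so side effects in the file never run
    new Function(source);
    console.log('Syntax OK');
} catch (e) {
    // SyntaxError messages here name the bad token but not the original file location
    console.error(e.name + ': ' + e.message);
    process.exit(1);
}

Run it as node check-syntax.js /path/to/file.js.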