Inject cypress-image-snapshot diff images to Mochawesome reports

I am using Cypress along with the Mochawesome reporter and the cypress-image-snapshot library. The cypress-image-snapshot library creates screenshots of visual regressions any time a test fails. I am trying to see if there is a way of injecting these images to the final report.
I was able to inject the standard test snapshot to the report since I already know the path of the file. The file name always follows the same pattern so I could build the string easily.
// cypress/support/index.js
import './commands'
const addContext = require('mochawesome/addContext')

Cypress.on('test:after:run', (test, runnable) => {
  if (test.state === 'failed') {
    // Build image path
    const screenshotFileName = `${runnable.parent.title} -- ${test.title} (failed).png`
    addContext({ test }, `assets/${Cypress.spec.name}/${screenshotFileName}`)
    // TODO: Here I need to inject all the images from the snapshots/SPEC_NAME/__diff_output__ directory
    ...
  }
})
I tried creating a Cypress task to access the filesystem and simply read all the files from the specific __diff_output__ folder; however, tasks are not available inside the Cypress.on('test:after:run') event listener.

I managed to add the diff image to the mochawesome HTML report from the cypress-image-snapshot plugin.
The way I did it is not the most robust, but it's the fastest way and actually the only way I found.
Cypress.on('test:after:run', (test, runnable) => {
  if (test.state === 'failed') {
    let screenshot;
    screenshot = `${Cypress.config('screenshotsFolder')}/${Cypress.spec.name}/${runnable.parent.title} -- ${test.title} (failed).png`;
    if (test.err.message.includes('See diff')) {
      // If the test failed due to cypress-image-snapshot, the message is always the same
      // and the plugin includes the path of the diff image in the message
      screenshot = test.err.parsedStack[1].message.replace('See diff for details: ', '');
    }
    addContext({test}, {
      title: 'Image',
      value: screenshot
    });
  }
})
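An alternative sketch that avoids parsing the error message: register a filesystem task in the plugins file and call it from an afterEach hook, where cy.task is available (the task name readDiffDir is made up for this example, and the path is assumed to follow the snapshots/SPEC_NAME/__diff_output__ layout mentioned above):
// cypress/plugins/index.js -- sketch; the task name is hypothetical
const fs = require('fs')
module.exports = (on, config) => {
  on('task', {
    readDiffDir(dir) {
      return fs.existsSync(dir) ? fs.readdirSync(dir) : []
    },
  })
}

// cypress/support/index.js -- cy.task works here, unlike in test:after:run
const addContext = require('mochawesome/addContext')
afterEach(function () {
  if (this.currentTest.state === 'failed') {
    const dir = `cypress/snapshots/${Cypress.spec.name}/__diff_output__`
    cy.task('readDiffDir', dir).then((files) => {
      files.forEach((file) => addContext(this, `${dir}/${file}`))
    })
  }
})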


Is there a way of creating a log file from the Cypress GUI log panel

I want the logs for the execution written to a log file.
I am trying to create a Cypress spec and I want log files generated for the operations performed on the webpage.
There is cy.log() to log something custom, but those logs only exist during the run; afterwards I can only see them in the video.
I want the logs in a .log file that I can export after the Cypress run is completed.
You can get the raw log records into a JSON file by catching the log events and saving them to a file at the end.
cypress/support/e2e.js
const logs = {}

Cypress.on('log:added', (log) => {
  logs[log.id] = log
})

Cypress.on('log:changed', (log) => {
  logs[log.id] = log
})

after(() => {
  cy.writeFile(`logs/${Cypress.spec.name}.log.json`, logs)
})
If you use the experimentalRunAllSpecs flag to run all specs, the log will be written to a single file, logs/__all.log.json. This is how the run-all feature currently works: the specs are combined into one uber-spec called __all.
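If you need a plain-text .log file rather than JSON, one sketch (assuming the logs object collected above, and that each record carries name and message attributes) is to format the records before writing:
after(() => {
  // Turn each collected log record into a "name: message" line
  const lines = Object.values(logs)
    .map((log) => `${log.name}: ${log.message}`)
    .join('\n')
  cy.writeFile(`logs/${Cypress.spec.name}.log`, lines)
})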
You can use the cypress-log-to-file plugin to generate log files for your Cypress tests. To use this plugin, you first need to install it using npm:
npm install cypress-log-to-file
Then, in your cypress/plugins/index.js file, include the following code:
const logToFile = require('cypress-log-to-file/lib/logToFile');

module.exports = (on, config) => {
  logToFile(on, config);
};
This will create a cypress.log file in your project's root directory that contains the logs generated during your tests' execution.
You can access this file after your tests have completed and view the logs for debugging purposes.
Check out cypress-terminal-report; you can use it in your cypress.config file like below:
setupNodeEvents(on, config) {
  // ...
  const options = {
    outputRoot: config.projectRoot + '/logs/',
    outputTarget: {
      'out.txt': 'txt',
      'out.json': 'json',
    }
  };
  require('cypress-terminal-report/src/installLogsPrinter')(on, options);
  // ...
}
More examples can be found in the readme file of the GitHub repo.

OpenTelemetry makes Next.js initialization extremely slow

When initializing Next.js via node -r / node --require, the application takes 4-5 minutes to load. The telemetry script itself loads within the first 5 seconds, so this issue is likely related to Next.js or Node. By contrast, starting without the require module takes about 30 seconds.
Without node require module:
"dev": "env-cmd -f environments/.env.development next dev",
With node require module:
"dev": "env-cmd -f environments/.env.development node --require ./tracing.js ./node_modules/next/dist/bin/next dev",
This implementation is based on ross-hagan's blog post about instrument-nextjs-opentelemetry:
Alternative to a custom server
I originally started off with a completely separate tracing.js script with the contents of our start.js script without the startServer call. This separates the telemetry SDK startup from the server. You can then keep the Next.js built-in startup behaviours by using node --require (-r) to load in a module before starting the Next app.
In your npm run dev script in your package.json this looks like:
node -r tracing.js ./node_modules/.bin/next dev
I switched away from this after frustration getting the node command to run in a Dockerfile (as this was destined for a Google Kubernetes Engine runtime), and some concern about use of the --require flag.
See if it works for you to do it this way, as a Next.js custom server comes with some consequences documented in their docs!
I've tried two separate tracing.js files without success in reducing load times.
The tracing.js provided by OpenTelemetry:
/* tracing.js */
// Require dependencies
const opentelemetry = require("@opentelemetry/sdk-node");
const { getNodeAutoInstrumentations } = require("@opentelemetry/auto-instrumentations-node");
const { diag, DiagConsoleLogger, DiagLogLevel } = require('@opentelemetry/api');

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);

const sdk = new opentelemetry.NodeSDK({
  traceExporter: new opentelemetry.tracing.ConsoleSpanExporter(),
  instrumentations: [getNodeAutoInstrumentations()]
});

sdk.start()
As well as the customized tracing.js for Jaeger:
const process = require('process');
const opentelemetry = require('@opentelemetry/sdk-node');
const {
  getNodeAutoInstrumentations,
} = require('@opentelemetry/auto-instrumentations-node');
const { Resource } = require('@opentelemetry/resources');
const {
  SemanticResourceAttributes,
} = require('@opentelemetry/semantic-conventions');
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');

const hostName = process.env.OTEL_TRACE_HOST || 'localhost';

const options = {
  tags: [],
  endpoint: `http://${hostName}:1234/api/traces`,
};
const traceExporter = new JaegerExporter(options);

// configure the SDK to export telemetry data to the console
// enable all auto-instrumentations from the meta package
const sdk = new opentelemetry.NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'my_app',
  }),
  traceExporter,
  instrumentations: [
    getNodeAutoInstrumentations({
      // Each of the auto-instrumentations
      // can have config set here or you can
      // npm install each individually and not use the auto-instruments
      '@opentelemetry/instrumentation-http': {
        ignoreIncomingPaths: [
          // Pattern match to filter endpoints
          // that you really want to stop altogether
          '/ping',
          // You can filter conditionally
          // Next.js gets a little too chatty
          // if you trace all the incoming requests
          ...(process.env.NODE_ENV !== 'production'
            ? [/^\/_next\/static.*/]
            : []),
        ],
        // This gives your request spans a more meaningful name
        // than `HTTP GET`
        requestHook: (span, request) => {
          span.setAttributes({
            name: `${request.method} ${request.url || request.path}`,
          });
        },
        // Re-assign the root span's attributes
        startIncomingSpanHook: (request) => {
          return {
            name: `${request.method} ${request.url || request.path}`,
            'request.path': request.url || request.path,
          };
        },
      },
    }),
  ],
});

// initialize the SDK and register with the OpenTelemetry API
// this enables the API to record telemetry
sdk
  .start()
  .then(() => console.log('Tracing initialized'))
  .catch((error) =>
    console.log('Error initializing tracing and starting server', error)
  );

// gracefully shut down the SDK on process exit
process.on('SIGTERM', () => {
  sdk
    .shutdown()
    .then(() => console.log('Tracing terminated'))
    .catch((error) => console.log('Error terminating tracing', error))
    .finally(() => process.exit(0));
});
Separately, building and then serving does not speed up the load times either.
Check whether you were experiencing the issue reported here: "[@opentelemetry/instrumentation] require performance grows linearly with each instrumentation plugin". Within that issue, there was a bug that caused instrumentation to be layered on top of itself repeatedly. It has since been fixed.
See also this answer.
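If upgrading doesn't fully solve it, a common mitigation (a sketch, not from the answer above) is to skip the auto-instrumentations meta-package, which requires every plugin at startup, and register only the instrumentations you actually need:
/* tracing.js -- sketch: register individual instrumentations */
const opentelemetry = require('@opentelemetry/sdk-node');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');

const sdk = new opentelemetry.NodeSDK({
  traceExporter: new opentelemetry.tracing.ConsoleSpanExporter(),
  // Requiring two instrumentation packages instead of the whole
  // meta-package avoids the per-plugin require() cost at startup
  instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});

sdk.start();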

How do I initialize a web worker in Next.js 10?

I have a Next 10 project where I am trying to use WebWorkers. The worker is being initialized like so:
window.RefreshTokenWorker = new Worker(new URL('../refreshToken.worker.js', import.meta.url))
I also have the Worker defined as
self.addEventListener('message', (e) => {
console.info("ON MESSAGE: ", e)
// some logic with e.data
})
It's also being called like this:
const worker = getWorker() // gets worker that is attached at the window level
worker.postMessage('start')
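For context, the getWorker helper is not shown in the question; a hypothetical version might lazily create and cache the worker on window:
// Hypothetical sketch of the getWorker() helper referenced above
function getWorker() {
  if (typeof window === 'undefined') return null // no Worker during SSR
  if (!window.RefreshTokenWorker) {
    window.RefreshTokenWorker = new Worker(
      new URL('../refreshToken.worker.js', import.meta.url)
    )
  }
  return window.RefreshTokenWorker
}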
My next.config.js file is defined as
const path = require('path')

const nextConfig = {
  target: 'serverless',
  env: getBuildEnvVariables(),
  redirects,
  rewrites,
  images: {
    domains: []
  },
  future: { webpack5: true },
  webpack (config) {
    config.resolve.alias['#'] = path.join(__dirname, 'src')
    return config
  }
}
// more definitions
module.exports = nextConfig
The issue I have is that the console.info in the web worker definition does not receive the message sent from postMessage in the build version (yarn build && yarn start), but it does in the dev version (yarn dev). Any way to fix this?
This is not a proper solution, but it can be a messy way to get the job done. This turned out to be a nightmare for me.
I have the same setup as yours. I was initializing the web worker as you have shown in your question. I got this idea from the Next.js docs themselves: https://nextjs.org/docs/messages/webpack5
const newWebWorker = new Worker(new URL('../worker.js', import.meta.url))
Everything works correctly in dev mode: it picks up the worker.js file correctly and everything looks alright.
But when I build the Next.js app and try it, the web worker won't work. Digging into the issue, I found that the worker.js chunk file is created directly under the .next folder. Ideally it should end up under .next/static/chunks/[hash].worker.js.
I could not resolve this issue in a proper way.
So what I did was place my worker.js file directly under the public directory, transpiled and optimized, as public/worker.js.
After this, I modified the worker initialization like this:
const newWebWorker = new Worker('/worker.js', { type: 'module' });
It works in the production build now. I will report back once I find a cleaner solution.

How to set/define environment variables and api_server in Cypress?

Currently, we are using Cypress to test our application. We have 2 environments with 2 different API servers. I want to define these inside the environment files. I am not sure how to define both URLs in the same file.
For example,
Environment-1:
baseUrl - https://environment-1.me/
Api_Server - https://api-environment-1.me/v1
Environment-2:
baseUrl - https://environment-2.me/
Api_Server - https://api-environment-2.me/v1
So a few test cases depend on the baseUrl, and 1 test case that checks the API depends on the Api_Server.
To resolve this I tried to set the baseUrl and Api_Server inside the config file via a plugin, following this link: https://docs.cypress.io/api/plugins/configuration-api.html#Usage
I created two config files for the 2 environments:
{
  "baseUrl": "https://environment-1.me/",
  "env": {
    "envname": "environment-1",
    "api_server": "https://api-environment-1.me/v1"
  }
}
The other file is similar, with the respective endpoints changed.
The plugin file has been modified as:
// promisified fs module
const fs = require('fs-extra')
const path = require('path')

function getConfigurationByFile (file) {
  const pathToConfigFile = path.resolve('..', 'cypress', 'config', `${file}.json`)
  return fs.readJson(pathToConfigFile)
}

module.exports = (on, config) => {
  // `on` is used to hook into various events Cypress emits
  // `config` is the resolved Cypress config
  // accept a configFile value or use environment-2 by default
  const file = config.env.configFile || 'environment-2'
  return getConfigurationByFile(file)
}
Inside the test cases, whichever refers to the baseUrl, we use cy.visit('/').
This works fine when we run a specific file from the command line using the command cypress run --env configFile=environment-2: all the test cases pass, as cy.visit('/') automatically resolves to the respective environment, except the API test case.
I am not sure how the API test should be modified to call the API endpoint instead of the base URL.
Can somebody help, please?
If I understand your question correctly, you need to run tests with different URLs, with the URLs set in cypress.json or in an env file.
Can you configure the URLs in the cypress.json file as below? I haven't tried it myself, but give it a go.
{
  "baseUrl": "https://environment-2.me/",
  "env": {
    "api_server1": "https://api1_url_here",
    "api_server2": "https://api2_url_here"
  }
}
Inside the tests, use the URLs as below:
describe('Test for various Urls', () => {
  it('Should test the base url', () => {
    cy.visit('/') // this points to the baseUrl configured in the cypress.json file
    // some tests to continue based on baseUrl..
  })

  it('Should test the api 1 url', () => {
    cy.visit(Cypress.env('api_server1')) // api server 1 configured in the cypress.json file
    // some tests to continue based on api server 1..
  })

  it('Should test the api 2 url', () => {
    cy.visit(Cypress.env('api_server2')) // api server 2 configured in the cypress.json file
    // some tests to continue based on api server 2..
  })
})
This issue has been resolved.
The best way is to do it with a plugin, as suggested by the docs (https://docs.cypress.io/api/plugins/configuration-api.html#Usage).
I kept the structure the same as in my question, and in my test case I called the endpoint using cy.request(Cypress.env('api_server'))
This solved my issue :)
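For illustration, the API test case can then hit the configured endpoint directly (a minimal sketch; the status assertion is just an example):
it('checks the API server', () => {
  // api_server comes from the env block of the active config file
  cy.request(Cypress.env('api_server'))
    .its('status')
    .should('eq', 200)
})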

How to check whether a file was downloaded using Selenium/WebdriverIO

I want to know how I can verify if a file was downloaded using Selenium Webdriver after I click the download button.
Your question doesn't say whether you want to confirm it locally or remotely (like BrowserStack). If it is remote, then my answer will be "no", as you can see the file is getting downloaded but you cannot access the folder, so you won't be able to assert that the file has been downloaded.
If you want to achieve this locally (in Chrome), then the answer is "yes"; you can do something like this:
In wdio.conf.js (to know where it is getting downloaded):
var path = require('path');
const pathToDownload = path.resolve('chromeDownloads');
// chromeDownloads above is the name of the folder in the root directory

exports.config = {
  capabilities: [{
    maxInstances: 1,
    browserName: 'chrome',
    os: 'Windows',
    chromeOptions: {
      args: [
        'user-data-dir=./chrome/user-data',
      ],
      prefs: {
        "download.default_directory": pathToDownload,
      }
    }
  }],
  // ...rest of the config
};
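Note that on newer, W3C-compliant Chromedriver/WebdriverIO setups the capability key must be vendor-prefixed as 'goog:chromeOptions'; the same prefs would then look roughly like this:
capabilities: [{
  browserName: 'chrome',
  'goog:chromeOptions': {
    prefs: {
      "download.default_directory": pathToDownload,
    }
  }
}],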
And your spec file (to check whether the file was downloaded):
const fsExtra = require('fs-extra');
const pathToChromeDownloads = './chromeDownloads';

describe('User can download and verify a file', () => {
  before(() => {
    // Clean up the chromeDownloads folder and create a fresh one
    fsExtra.removeSync(pathToChromeDownloads);
    fsExtra.mkdirsSync(pathToChromeDownloads);
  });

  it('Download the file', () => {
    // Code to download
  });

  it('Verify the file is downloaded', () => {
    // Code to verify
    // Get the name of the file and assert it with the expected name
  });
});
More about fs-extra: https://www.npmjs.com/package/fs-extra
Hope this helps.
TL;DR: Unless your web-app has some kind of visual/GUI trigger once the download finishes (some text, an image/icon-font, a push-notification, etc.), the answer is a resounding NO.
Webdriver can't go outside the scope of your browser, but your underlying framework can, especially if you're using NodeJS. :)
Off the top of my head I can think of a few ways I've been able to do this in the past. Choose as applicable:
1. Verify if the file has been downloaded using Node's File System (aka fs)
Since you're running WebdriverIO under a NodeJS environment, you can make use of its powerful built-in libraries. I would use fs.exists or fs.existsSync to verify that the file is in the expected folder.
If you want to be diligent, then also use fs.statSync in conjunction with fs.exists and poll the file until it has the expected size (e.g. > 2560 bytes).
There are multiple examples online that can help you put together such a script. Use the fs documentation, but other resources as well. Lastly, you can add said script inside your it/describe statement (I remember you were using Mocha).
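A minimal sketch of that existence-plus-size polling idea (file name, size threshold, and timeout values are placeholders):
const fs = require('fs');

// Resolve once the file exists and exceeds minBytes; reject on timeout
function waitForDownload(filePath, minBytes, timeoutMs = 30000, intervalMs = 500) {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    (function check() {
      if (fs.existsSync(filePath) && fs.statSync(filePath).size > minBytes) {
        return resolve();
      }
      if (Date.now() - started > timeoutMs) {
        return reject(new Error(`Timed out waiting for ${filePath}`));
      }
      setTimeout(check, intervalMs);
    })();
  });
}

// Usage inside a Mocha it block:
// await waitForDownload('./chromeDownloads/report.pdf', 2560);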
2. Use child_process's exec command to launch third-party scripts
Though this method requires more work to set up, I find it more useful in the long run.
!!! Caution: Apart from launching the script, you also need to write the script itself in a third-party framework:
Using an AutoIT script;
Using a Sikuli script;
Using a TestComplete (not linking it, I don't like it that much), or [insert GUI verification script here] script;
Note: All the above frameworks can generate an .exe file that you can trigger from your WebdriverIO test-cases in order to check if your file has been downloaded, or not.
Steps to take:
create one of the stand-alone scripts like mentioned above;
place the script's .exe file inside your project in a known folder;
use child_process.exec to launch the script and assert its result after it finishes its execution;
Example:
const exec = require('child_process').exec;

// Make sure you also remove the .exe from scriptName
var yourScript = pathToScript + scriptName;

var child = exec(yourScript);
child.on('close', function (code, signal) {
  if (code !== 0) {
    // `callback` and `online` come from the surrounding test harness
    callback.fail(online.online[module][code]);
  } else {
    callback();
  }
});
Finally: I'm sure there are other ways to do it. But your main take-away from such a vague question should be: YES, you can verify whether the file has been downloaded if you absolutely must, especially if this test case is CRITICAL to your regression run.
