Firstly: as far as I can tell, this is not a duplicate. The other questions with similar problems are all slightly different, e.g. they use a transformation like Babel or have problems with transitive imports. In my case I have no transformation, just one test file and one imported file that is being tested. I just started using Jest and use the default settings, so there is no configuration file to post.
When I try to run my tests I get the error message:
Test suite failed to run
Jest encountered an unexpected token
This usually means that you are trying to import a file which Jest cannot parse, e.g. it's not plain JavaScript.
The tested file:
export function showTooltip(x, y, content) {
const infoElement = document.getElementById('info');
infoElement.style.left = `${x}px`;
infoElement.style.top = `${y}px`;
infoElement.style.display = 'block';
infoElement.innerText = createTooltipText(content);
}
function createTooltipText(object) {
return Object.keys(object)
.filter(key => key != 'id')
.map(key => `${key} : ${object[key]}`)
.join('\n');
}
export function hideTooltip() {
const infoElement = document.getElementById('info');
infoElement.style.display = 'none';
}
The test:
import {showTooltip, hideTooltip} from '../../../src/public/javascripts/tooltip.js';
const TOOLTIP_DUMMY = {
style: {
left: 0,
top: 0,
display: '',
innerText: ''
}
};
test('showTooltip accesses the element with the id \'info\'', () => {
const getElementByIdMock = jest.fn(() => TOOLTIP_DUMMY);
document.getElementById = getElementByIdMock;
showTooltip(0, 0, {});
expect(getElementByIdMock).toHaveBeenCalledWith('info');
});
test('hideTooltip accesses the element with the id \'info\'', () => {
const getElementByIdMock = jest.fn(() => TOOLTIP_DUMMY);
document.getElementById = getElementByIdMock;
hideTooltip();
expect(getElementByIdMock).toHaveBeenCalledWith('info');
});
As you can see, I am using plain JavaScript, so I am not sure what to do here. The error message gives further hints about Babel, which does not really apply to my case.
Sidenote: My test might be flawed. I am currently trying to figure out how to use mocks to avoid interacting with the document, and I am not sure if this is the right way. However, this is not the point of this question, as it should not affect the tests' ability to run, but I am very open to suggestions.
EDIT: Why this is not a duplicate of this question: it kind of is, but I feel that question and its accepted answer were not really helpful for me, and hopefully someone will profit from this one.
I have found the solution to my problem:
As suggested in this answer, you need to use Babel. This can be done as suggested here, but I used @babel/preset-env, as suggested on the Babel website.
This left me with the problem that Jest internally uses babel-core@6.26.3, but at least Babel 7 was required. This problem is described here. I used the temporary fix of manually copying babel-core from my node_modules directory over the node_modules directories of jest-config and jest-runtime. This dirty fix is also described in the previous link.
I have yet to find a clean solution, but at least this works.
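For reference, the setup this amounts to is roughly the following (a minimal sketch; exact versions and file names may differ from what I used): install babel-jest, @babel/core and @babel/preset-env as dev dependencies, and add a Babel config that Jest can pick up, e.g. a babel.config.js like:
// babel.config.js -- minimal sketch, just enables the env preset so Jest can transpile ES module syntax
module.exports = {
  presets: ['@babel/preset-env'],
};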
Use global.document.getElementById = getElementByIdMock; instead. In some configurations, Jest doesn't have direct access to the document object.
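For example, a minimal sketch of the idea, reusing the mock and imports from the question:
const getElementByIdMock = jest.fn(() => TOOLTIP_DUMMY);
global.document.getElementById = getElementByIdMock; // attach the mock through the global object

showTooltip(0, 0, {});
expect(getElementByIdMock).toHaveBeenCalledWith('info');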
Related
The following snippet represents a Pinia store in my Vue 3 / Quasar 2 application. This store uses the environment variable VUE_APP_BACKEND_API_URL, which shall be read from either the window object or process.env.
However, I don't understand why the first variant is working but the second is not. Using the getEnv function always results in an Uncaught (in promise) ReferenceError: process is not defined error.
import { defineStore } from 'pinia';
function getEnv(name) {
return window?.appConfig?.[name] || process.env[name];
}
// 1. this is working
const backendApiUrl = window?.appConfig?.VUE_APP_BACKEND_API_URL || process.env.VUE_APP_BACKEND_API_URL;
// 2. this is NOT working
const backendApiUrl = getEnv('VUE_APP_BACKEND_API_URL');
export const useAppConfigStore = defineStore('appConfig', {
state: () => ({
authorizationUrl: new URL(
'/oauth2/authorization/keycloak',
backendApiUrl,
).toString(),
logoutUrl: new URL('/logout', backendApiUrl).toString(),
backendApiUrl: new URL(backendApiUrl).toString(),
}),
});
Node.js-specific globals like process don't exist in browser environments. Both the Webpack and Vite implementations work by replacing process.env.XYZ expressions with their values at build time. So a bare process.env, or process.env[name], will not be replaced, which leads to the error you are experiencing. See the caveats section and the related Webpack/Vite docs and resources. So, unfortunately, the only easy way seems to be the first, long and repetitive way you've tried (const backendApiUrl = window?.appConfig?.VUE_APP_BACKEND_API_URL || process.env.VUE_APP_BACKEND_API_URL;). You can, however, embed this logic in a single object and then use a function to access it:
const config = {
VUE_APP_BACKEND_API_URL: window?.appConfig?.VUE_APP_BACKEND_API_URL || process.env.VUE_APP_BACKEND_API_URL
}
export function getEnv(name) {
return config[name];
}
This way it is longer and more repetitive to define the first time, but at least you will be able to use it easily throughout the code base.
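For example, assuming the object and function above live in a module such as src/config.js (the path is just an illustration):
// e.g. inside the Pinia store
import { getEnv } from 'src/config'; // hypothetical location of the config module above

const backendApiUrl = getEnv('VUE_APP_BACKEND_API_URL');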
This is late, but it might help someone. I was able to resolve this by adding the following to my quasar.conf.js:
build: {
vueRouterMode: 'hash', // available values: 'hash', 'history'
env: {
API_ENDPOINT: process.env.API_ENDPOINT ? process.env.API_ENDPOINT : 'http://stg.....com',
API_ENDPOINT_PORT: process.env.API_ENDPOINT_PORT ? process.env.API_ENDPOINT_PORT : '0000',
...env // assumes an env object (e.g. loaded via dotenv) defined earlier in quasar.conf.js
},
}
For more information, see: https://github.com/quasarframework/quasar/discussions/9967
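With that in place, Quasar injects these values at build time, so application code can read them from process.env, e.g.:
// anywhere in the app; Quasar replaces these expressions at build time
const apiUrl = process.env.API_ENDPOINT;       // falls back to 'http://stg.....com' when not set during the build
const apiPort = process.env.API_ENDPOINT_PORT; // falls back to '0000'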
I am working on a script that runs during our build process in Jenkins right before npm install. My issue is that I need to download a JavaScript file from an external resource and read a variable from it.
unzipper.on('extract', () => {
const content = fs.readFileSync(`${outputDir}/js/en.js`, 'utf8');
eval(content); // A less smelly alternative?
if (obj) {
const str = JSON.stringify(obj);
fs.writeFileSync(`${outputDir}/public/data.json`, str);
} else {
throw 'Variable `obj` not found';
}
});
I know that "eval is evil", but any suggested alternatives I've found online don't seem to work.
I have tried different variations of new Function(obj)(), but Node seems to exit the script after (the if-case never runs).
Ideas?
Since Node.js provides an API to talk to the V8 runner directly, it might be a good idea to use it. Basically, it's the API used by Node's require under the hood.
Assuming the js file in question contains the variable obj we're interested in, we do the following:
- read the code from the file
- append ; obj to the code to make sure it's the last expression it evaluates
- pass the code to V8 for evaluation
- grab the return value (which is our obj):
const fs = require('fs'),
vm = require('vm');
const code = fs.readFileSync('path-to-js-file', 'utf8');
const obj = vm.runInNewContext(code + ';obj');
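If the downloaded file expects some globals (say, window), a sandbox object can be passed as the second argument of runInNewContext; top-level variables assigned by the script then land on that object. A sketch, still assuming the variable is called obj:
const sandbox = { window: {} };    // whatever globals the downloaded script expects
vm.runInNewContext(code, sandbox); // evaluate the file inside the sandboxed context
const obj = sandbox.obj;           // a top-level `var obj = ...` becomes a property of the sandbox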
This answer is heavily based on @georg's comments, but since it helped me I'll provide it as an alternative answer.
Explanation in the comments.
let content = fs.readFileSync(`${outputDir}/js/en.js`, 'utf8');
content += '; module.exports=obj'; // Export "obj" variable
fs.writeFileSync(`${outputDir}/temp`, content); // Create a temporary file
const obj = require(`${outputDir}/temp`); // Import the variable from the temporary file
fs.unlinkSync(`${outputDir}/temp`); // Remove the temporary file
As the test suite grows, I need to be able to run something in BeforeSuite() which will connect to an external server and skip the whole suite if that external resource is unavailable.
Feature('External Server');
BeforeSuite((I) => {
// Check if server is available and skip all scenarios if it is not
});
Scenario('Login to the server', (I) => {
// Do not run this if the server is not available
})
I understand I could probably set a variable, but I think it would be nice if there was a way to tell the runner that a suite has been skipped.
The goal is to have a suite marked as skipped in the output eg:
Registration --
✓ Registration - pre-checks in 4479ms
✓ Registration - email validation in 15070ms
✓ Registration - password validation in 8194ms
External Server -- [SKIPPED]
- Login to the server [SKIPPED]
Maybe prepend x before every scenario in your feature, e.g. xScenario? I don't think CodeceptJS supports something similar to .only for features; it currently works with scenarios only, as far as I know.
You can use Scenario.skip in your step definition to skip a scenario.
Note: if any steps have been executed before skipping, they will still show up in the report.
https://codecept.io/basics/#todo-test
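A minimal sketch of what that can look like (untested here, based on the linked docs; the page path is hypothetical):
Feature('External Server');

// the scenario is reported as skipped and its steps do not run
Scenario.skip('Login to the server', (I) => {
  I.amOnPage('/login'); // hypothetical step, just for illustration
});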
My answer is compiled from a number of comments on the CodeceptJS GitHub and Stack Overflow. However, I can't recall the exact links or comments which helped me derive this solution; it's been at least a year, maybe two, since I started, and I have slowly modified this.
Edit: Found the github thread - https://github.com/codeceptjs/CodeceptJS/issues/661
Edit2: I wrote a post about "selective execution" (which avoids tagging unwanted tests with skip status) https://github.com/codeceptjs/CodeceptJS/issues/3544
I'll add a snippet at the bottom.
I'm on CodeceptJS 3.3.6
Define a hook file (eg: skip.js) and link it to your codecept.conf.js file.
exports.config = {
...
plugins: {
skipHook: {
require: "../path/to/skip.js",
enabled: true,
}
}
...
}
The basic code in skip.js is:
const { event, output } = require("codeceptjs"); // provides the event dispatcher and the output logger used below

module.exports = function () {
event.dispatcher.on(event.test.before, function (test) {
const skipThisTest = decideSkip(test.tags);
if (skipThisTest) {
test.run = function skip() {
this.skip();
};
return;
}
});
};
I've defined a decision function decideSkip like so:
function decideSkip(testTags) {
if (!Array.isArray(testTags)) {
output.log(`Tags not an array, don't skip.`);
return false;
}
if (testTags.includes("@skip")) {
output.log(`Tags contain [@skip], should skip.`);
return true;
}
if (
process.env.APP_ENVIRONMENT !== "development" &&
testTags.includes("@stageSkip")
) {
output.log(`Tags contain [@stageSkip], should skip on staging.`);
return true;
}
return false;
}
(Mine is a bit more complicated, including evaluating whether a series of test case ids are in a provided list but this shows the essence. Obviously feel free to tweak as desired, the point is a boolean value returned to the defined event listener for event.test.before.)
Then, using BDD:
@skip @otherTags
Scenario: Some scenario
Given I do something first
When I do another thing
Then I see a result
Or standard test code:
const { I } = inject();
Feature("Some Feature");
Scenario("A scenario description @tag1 @skip @tag2", async () => {
console.log("Execute some code here");
});
I don't know if that will necessarily give you the exact terminal output you want (External Server -- [SKIPPED]); however, it does notate the test as skipped in the terminal and report it accordingly in Allure or xUnit results, at least insofar as CodeceptJS skip() reports it.
For "selective execution" (which is related to, but not the same as, "skip"), I've implemented a custom mocha.grep() utilization in my bootstrap. A key snippet follows; it should be added to a bootstrap anonymous function in codecept.conf.js or some similar location.
const selective = ["tag1", "tag2"];
const re = selective.join("|");
const regex = new RegExp(re);
mocha.grep(regex);
I'm struggling to come up with a pattern that will satisfy both my tests and the ability for Travis to run my script.
I'll start off by saying that the way I have Travis running my script is that I specify the script to be run via a babel-node command in my .travis.yml, like so:
script:
- babel-node ./src/client/deploy/deploy-feature-branch.js
That means when babel-node runs this, I need a method to auto-run in deploy-feature-branch.js, which I have. That's the line let { failure, success, payload } = deployFeatureBranch(). It forces deployFeatureBranch() to run because it is invoked as part of a destructuring assignment.
In there I also have an options object:
let options = {
localBuildFolder: 'build',
domain: 'ourdomain',
branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}
During a PR build, Travis automatically sets the value for process.env.TRAVIS_PULL_REQUEST_BRANCH. That's great! However, the way I've set up this module doesn't work so well for tests. The problem I have is that if I try to set options from my test, for some reason the options object isn't being set.
I guess the problem I want to address is, first and foremost, why options aren't being set when I try to set them from my test, and then whether there is a better way to design this module overall.
Test
import {options, deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'
it.only('creates a S3 test environment for a pull request', async () => {
options.branch = 'feature-100'
options.domain = 'ourdomain'
options.localDeployFolder = 'build'
const result = await deployFeatureBranch()
expect(result.success).to.be.true
})
})
When deployFeatureBranch() runs above in my test, the implementation of deployFeatureBranch() tries to reference options.branch, but it ends up being undefined even though I set it to be 'feature-100'. branch is defaulted to process.env.TRAVIS_PULL_REQUEST_BRANCH, but I want to be able to override that and set it from tests.
deploy-feature-branch.js
import * as deployApi from './deployApi'
let options = {
localBuildFolder: 'build',
domain: 'ourdomain',
branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}
const deployFeatureBranch = async (options) => {
console.log(green(`Deploying feature branch: ${options.branch}`))
let { failure, success, payload } = await deployApi.run(options)
return { failure, success, payload }
}
let { failure, success, payload } = deployFeatureBranch(options)
export {
options,
deployFeatureBranch
}
I can't really think of a better way to structure this, nor how to resolve the options-setting issue. I'm also not limited to using Node modules either; I would be fine with ES6 exports too.
Instead of exporting options and modifying it, just pass in your new options object when calling the function in your test:
import {deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'
it.only('creates a S3 test environment for a pull request', async () => {
const options = {
branch: 'feature-100',
domain: 'ourdomain',
localDeployFolder: 'build'
};
const result = await deployFeatureBranch(options)
expect(result.success).to.be.true
})
});
The reason it isn't working is that your deployFeatureBranch() function expects options to be passed in when you call it, which you aren't doing in the test.
Also, exporting and changing an object, while it might work, is also really weird and should be avoided. Creating a new object (or cloning the exported object) is definitely the way to go.
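If you still want the module to be runnable directly with babel-node and sensible defaults, one possible sketch (not part of the original answer) is to keep a default options object internal and let callers override it via a parameter:
import * as deployApi from './deployApi'

const defaultOptions = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}

// tests can pass their own options; running via babel-node falls back to the defaults
const deployFeatureBranch = async (options = defaultOptions) => {
  const { failure, success, payload } = await deployApi.run(options)
  return { failure, success, payload }
}

// auto-run when executed directly via babel-node (mirrors the original entry point)
deployFeatureBranch(defaultOptions)

export { deployFeatureBranch }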
I have another question (last question). At the moment I am working on a Node.js project, and in it I have many console.log() calls. This has worked okay so far, but I also want everything that's written to the console to be written to a log file as well. Can someone please help me?
For example:
console.log('The value of array position [5] is ' + array[5]);
In my real code it's a bit more than that, but this should give you an idea.
Thank you in advance.
Just run the script in your terminal like this...
node script-file.js > log-file.txt
This tells the shell to write the standard output of the command node script-file.js to your log file instead of the default, which is printing it to the console.
This is called redirection and it's very powerful. Say you wanted to write all errors to a separate file...
node script-file.js >log-file.txt 2>error-file.txt
Now all console.log output is written to log-file.txt and all console.error output is written to error-file.txt.
I would use a library instead of re-inventing the wheel. I looked for a log4j-type library on npm, and it came up with https://github.com/nomiddlename/log4js-node
If you want to log to the console and to a file:
var log4js = require('log4js');
log4js.configure({
appenders: [
{ type: 'console' },
{ type: 'file', filename: 'logs/cheese.log', category: 'cheese' }
]
});
now your code can create a new logger with
var logger = log4js.getLogger('cheese');
and use the logger in your code
logger.warn('Cheese is quite smelly.');
logger.info('Cheese is Gouda.');
logger.debug('Cheese is not a food.');
const fs = require('fs');
const myConsole = new console.Console(fs.createWriteStream('./output.txt'));
myConsole.log('hello world');
This will create an output file (output.txt) containing everything that was logged through myConsole.log('hello world').
This is the easiest way to redirect the console.log() output into a text file.
You could try overriding the built in console.log to do something different.
var originalLog = console.log;
console.log = function(str){
originalLog(str);
// Your extra code
}
However, this places the originalLog into the main scope, so you should try wrapping it in a function. This is called a closure, and you can read more about them here.
(function(){
var originalLog = console.log;
console.log = function(str){
originalLog(str);
// Your extra code
};
})();
To write files, see this Stack Overflow question, and to override console.log even better than the way I showed, see this. Combining these two answers will get you the best possible solution.
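Roughly, combining the two ideas might look like this (a sketch; the file name is just an example):
const fs = require('fs');

(function () {
  const originalLog = console.log;
  console.log = function (...args) {
    originalLog.apply(console, args);                         // still print to the console
    fs.appendFileSync('log-file.txt', args.join(' ') + '\n'); // also append the message to a file
  };
})();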
Just write your own log function:
function log(message) {
console.log(message);
fs.writeFileSync(...);
}
Then replace all your existing calls to console.log() with log().
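For instance, a minimal concrete version could look like this (the file name is just an example; appendFileSync is used so each call adds a line instead of overwriting the file):
const fs = require('fs');

function log(message) {
  console.log(message);                         // keep the normal console output
  fs.appendFileSync('app.log', message + '\n'); // also append it to a file
}

log('Server started');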
@activedecay's answer seems the way to go. However, as of April 30th, 2018, I have had trouble with that specific model (Node crashed due to the structure of the object passed to .configure, which seems not to work in the latest version). In spite of that, I've managed to work out an updated solution thanks to Node.js debugging messages...
const myLoggers = require('log4js');
myLoggers.configure({
appenders: { mylogger: { type:"file", filename: "path_to_file/filename" } },
categories: { default: { appenders:["mylogger"], level:"ALL" } }
});
const logger = myLoggers.getLogger("default");
Now if you want to log to said file, you can do it just like activedecay showed you:
logger.warn('Cheese is quite smelly.');
logger.info('Cheese is Gouda.');
logger.debug('Cheese is not a food.');
This, however, will not log anything to the console, and since I haven't figured out how to implement multiple appenders in one logger, you can still use the good old console.log().
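That said, the log4js docs suggest multiple appenders can be listed per category; an untested sketch of a config that should log to both the console and a file (the file path is just an example):
const log4js = require('log4js');

log4js.configure({
  appenders: {
    out: { type: 'stdout' },                               // console appender
    everything: { type: 'file', filename: 'logs/app.log' } // file appender
  },
  categories: {
    default: { appenders: ['out', 'everything'], level: 'all' }
  }
});

const logger = log4js.getLogger();
logger.info('Goes to both the console and logs/app.log');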
PS: I know that this is a somewhat old thread, and that the OP's particular problem was already solved, but since I came here for the same purpose, I may as well leave my experience so as to help anyone visiting this thread in the future.
Here is a simple solution for file logging using @grdon/logger:
const logger = require('@grdon/logger')({
defaultLogDirectory : __dirname + "/logs",
})
// ...
logger(someParams, 'logfile.txt')
logger(anotherParams, 'anotherLogFile.log')