When running Windows 10, both on AppVeyor and in a VirtualBox VM, I'm getting the same error when running Jest tests for my Electron app:
The specified module could not be found.
\\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
Error: The specified module could not be found.
\\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
at Runtime.requireModule (node_modules/jest-runtime/build/index.js:263:31)
at bindings (app/node_modules/bindings/bindings.js:76:44)
at Object.<anonymous> (app/node_modules/ref/lib/ref.js:5:47)
(https://ci.appveyor.com/project/joshuef/peruse/build/1.0.733/job/fwflo19to9rvt085#L4664)
The thing is... the file itself exists, as confirmed by running:
dir \\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
which results in:
-a---- 4/9/2018 1:44 AM 157696 binding.node
And the application itself runs fine (it's an Electron app, compiled via webpack). Only when running tests against the native libs do I get this error, and only on Windows (the macOS/Linux tests run fine).
I've tried rebuilding, using npm instead of yarn, and re-installing the VS2017 build tools via the command line. I'm consistently getting this error on both systems (which is something), but I'm stumped as to what to try next.
Jest is being run from the command line.
Jest is configured thus:
module.exports = {
    verbose : true,
    moduleFileExtensions : ['js', 'jsx', 'json'],
    setupFiles : ['raf/polyfill', '<rootDir>/test/setup.js'],
    testPathIgnorePatterns : ['node_modules'],
    moduleDirectories : ['app', 'test', 'node_modules', 'app/node_modules'],
    moduleNameMapper : {
        "electron": "<rootDir>/mocks/electron.js",
        "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$":
            "<rootDir>/mocks/fileMock.js",
        "\\.(css|scss)$": "<rootDir>/mocks/fileMock.js",
        '^appPackage$' : '<rootDir>/package.json',
        '^#actions(.*)$' : '<rootDir>/app/actions$1',
        '^#components(.*)$' : '<rootDir>/app/components$1',
        '^#containers(.*)$' : '<rootDir>/app/containers$1',
        '^appConstants$' : '<rootDir>/app/constants.js',
        '^#extensions(.*)$' : '<rootDir>/app/extensions$1',
        '^#logger$' : '<rootDir>/app/logger.js',
        '^#reducers(.*)$' : '<rootDir>/app/reducers$1',
        '^#store(.*)$' : '<rootDir>/app/store',
        '^#utils(.*)$' : '<rootDir>/app/utils$1'
    }
};
The AppVeyor config file is here.
Any pointers, ideas, or things to check would be super appreciated. If more code or clarification is needed, just let me know.
Thanks in advance!
I had a similar error with the ibm_db module on Windows 10 (but not in a Linux Docker container):
The specified module could not be found.
\\?\C:\_projects\projectName\node_modules\ibm_db\build\Release\odbc_bindings.node
at Runtime._loadModule (node_modules/jest-runtime/build/index.js:572:29)
at bindings (node_modules/bindings/bindings.js:112:48)
Found the solution here:
1. Download the ODBC driver from the IBM site. You might need to register with IBM for this.
2. Install the ODBC driver by simply extracting the contents of the downloaded package to some folder, for example to "C:\IBMDB2\CLIDRIVER\".
3. Set the IBM_DB_HOME environment variable to point to the folder with the drivers (to "C:\IBMDB2\CLIDRIVER\" if you used that path).
4. Add "%IBM_DB_HOME%\bin" to the "PATH" environment variable.
5. Restart, or sign out/sign in, so the changes to PATH take effect.
6. Reinstall the packages using "npm install", since those environment variables are only used during the package installation phase.
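If it helps, before re-running npm install you can sanity-check from Node that the variables described above are actually visible to new processes (a small, hypothetical check script, not part of the original solution; the variable names match the steps above):

// check-ibm-env.js -- hypothetical sanity check for the environment described above.
// Verifies IBM_DB_HOME is set and that %IBM_DB_HOME%\bin is on PATH before reinstalling ibm_db.
const path = require('path');

const home = process.env.IBM_DB_HOME;
console.log('IBM_DB_HOME =', home || '(not set)');

const binDir = home ? path.join(home, 'bin').toLowerCase() : null;
const onPath = (process.env.PATH || '')
  .split(path.delimiter)
  .some((entry) => binDir && entry.toLowerCase().replace(/\\$/, '') === binDir);
console.log('%IBM_DB_HOME%\\bin on PATH:', onPath);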
I was learning to build Android apps from Node.js using the androidjs library.
Official site: https://android-js.github.io/
Documentation: https://android-js.github.io/androidjs/
So I started with the sample app provided on its official site (source code: https://github.com/android-js/androidjs/), installed all the necessary packages, and followed the procedure provided. But the APK file is never built, and I get build process exited with code 1 and sign process exited with code 1.
Here is what I got in my console:
{ DEBUG: false,
BUILDER__cwd: '/usr/local/lib/node_modules/androidjs-builder',
PROJECT__cwd: '/home/satnam/AndroidStudioProjects/mywork/story app',
PROJECT__dist: '/home/satnam/AndroidStudioProjects/mywork/story app/dist',
PROJECT__DIST__name: 'dist',
platform: 'linux',
force_replace: true }
app core copied !
Core Modules Copied !
copying user app done.
User data copied
reading /home/satnam/AndroidStudioProjects/mywork/story app/dist/app-debug/AndroidManifest.xml
User assets cleared
package name com.androidjs.mypkg
{ '$': { 'android:name': 'android.permission.INTERNET' } }
{ '$':
{ 'android:name': 'android.permission.WRITE_EXTERNAL_STORAGE' } }
{ '$':
{ 'android:name': 'android.permission.READ_EXTERNAL_STORAGE' } }
android.webkit.PermissionRequest
Done!
AndroidManifest updated!
changing app name /home/satnam/AndroidStudioProjects/mywork/story app/dist/app-debug/res/values/strings.xml
{ _: 'myapp', '$': { name: 'app_name' } }
App Name updated!
Icon updated!
Building...
(node:5860) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 end listeners added. Use emitter.setMaxListeners() to increase limit
I: Using Apktool 2.4.0
I: Checking whether sources has changed...
I: Smaling smali folder into classes.dex...
I: Checking whether resources has changed...
I: Building resources...
build process exited with code 1
Build finished!
Sign apk
stderr: provided apk path or file '/home/satnam/AndroidStudioProjects/mywork/story app/dist/app.apk' does not exist
stderr: java.lang.IllegalArgumentException: provided apk path or file '/home/satnam/AndroidStudioProjects/mywork/story app/dist/app.apk' does not exist
stderr:
at at.favre.tools.apksigner.ui.FileArgParser.parseAndSortUniqueFilesNonRecursive(FileArgParser.java:38)
at at.favre.tools.apksigner.SignTool.execute(SignTool.java:63)
stderr:
at at.favre.tools.apksigner.SignTool.mainExecute(SignTool.java:48)
at at.favre.tools.apksigner.SignTool.main(SignTool.java:36)
stderr:
Cmd history for debugging purpose:
-----------------------
sign process exited with code 1
I have tried with my own Node.js app too, and it gives me the same error.
I don't know what to do and I need help building this. I have to submit an Android project in college, and I have a great interest in Node.js, so I want to build the Android app from a Node app. If you know any other method of building an Android app from a Node app, you can suggest it. Thanks in advance!
It seems that they are currently having an issue:
Maybe there are some issues in androidjs-builder@1.0.7. We'll fix these
issues ASAP. For now you can downgrade to the older version 1.0.6 by following
these steps:
npm uninstall -g androidjs-builder
npm install -g androidjs-builder@1.0.6
I have just tested their Getting Started example and it works (androidjs-builder@1.0.6).
This issue has been fixed, and some new features were added in the newer version, Android JS 2.0.0.
So check out version 2.0:
website: https://android-js.github.io
docs: https://android-js.github.io/docs
repo: https://github.com/android-js/androidjs
npm: https://npmjs.com/package/androidjs
I don't know why, but two days ago my projects (created via vue create) stopped working. In Chrome I get
Invalid Host Header
and
WDS Disconnected
errors. In cmd everything compiles properly (npm run serve).
I don't know webpack, so I have no idea how to fix it.
What I've already done:
reinstalled node
deleted and reinstalled all npm packages
This issue is caused by this webpack-dev-server issue that has been fixed recently.
To avoid getting the Invalid Host/Origin header error, add this to the devServer entry in your vue.config.js file:
disableHostCheck: true
Note that disableHostCheck: true is not recommended because it creates security vulnerabilities.
For a dev server running on my local machine, I could resolve the issue by explicitly setting --host in vue-cli-service serve:
"scripts": {
  "serve": "vue-cli-service serve --host myapp.localhost"
}
The --host option is documented here.
Visit the app in your browser under myapp.localhost:8080 (assuming you're using default port 8080).
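If you'd rather not hard-code the host in the npm script, the same setting can presumably go into vue.config.js, since the devServer block is passed through to webpack-dev-server (a sketch under that assumption; myapp.localhost is just the example hostname from above):

// vue.config.js -- sketch: config-file equivalent of passing --host on the command line.
module.exports = {
  devServer: {
    host: 'myapp.localhost'
  }
};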
Found this question searching for the same "Invalid Host Header" issue. Here's how I solved it.
I am running the Vue dev server (npm run serve) in Docker on my remote server. I couldn't access it at http://example.com:8080, getting the error message above.
The correct and secure way is to add the domain name to the vue.config.js file:
"devServer": {
"public": "example.com"
}
This is a fresh Vue project initiated with the Vue CLI command vue create myproject, with Vuetify added via vue add vuetify. The full content of my vue.config.js after that is:
module.exports = {
  "transpileDependencies": [
    "vuetify"
  ],
  "devServer": {
    "public": "example.com"
  }
}
This is because the dev server isn't accepting external requests. To solve this, we have to configure vue.config.js as below.
If vue.config.js is not found in your Vue project, create the file in the root directory and add the following lines.
module.exports = {
  // options...
  devServer: {
    disableHostCheck: true
  }
}
Source
I'm running a video processing script on AWS Lambda.
While it seems to work perfectly locally (tested using lambda-local), I'm having a strange issue when it runs on Lambda:
{
"errorMessage": "Cannot find module 'fluent-ffmpeg'",
"errorType": "Error",
"stackTrace": [
"Function.Module._resolveFilename (module.js:338:15)",
"Function.Module._load (module.js:280:25)",
"Module.require (module.js:364:17)",
"require (module.js:380:17)",
"Object.<anonymous> (/var/task/processing.js:2:14)",
"Module._compile (module.js:456:26)",
"Object.Module._extensions..js (module.js:474:10)",
"Module.load (module.js:356:32)",
"Function.Module._load (module.js:312:12)",
"Module.require (module.js:364:17)"
]
}
The ZIP I am uploading contains the following files:
~$ find . -maxdepth 2
.
./bin
./bin/ffmpeg
./config.js
./event-samples
./event-samples/custom.js
./event-samples/dynamodb-update.js
./event-samples/kinesis.js
./event-samples/s3-put.js
./frames
./Gulpfile.js
./index.js
./node_modules
./node_modules/async
./node_modules/aws-sdk
./node_modules/fluent-ffmpeg
./node_modules/gulp
./node_modules/gulp-awslambda
./node_modules/gulp-zip
./package.json
./processing.js
./utils.js
(The buggy require is located in processing.js)
If I open the ZIP, node_modules/fluent-ffmpeg/* does exist.
I tried to include the module using:
require("./node_modules/fluent-ffmpeg/index")
require(__dirname + "./node_modules/fluent-ffmpeg/index")
require(process.env.LAMBDA_TASK_ROOT + "/node_modules/fluent-ffmpeg/index")
But none of these solved the problem. I also tried reinstalling node and npm on my machine, rm -rf node_modules and npm install (just in case).
Since Lambda runs Node 0.10.36, I also tried using this version on my machine to do the npm install, but that doesn't change anything either.
Help appreciated.
Thanks!
Solved. My development machine runs Windows 7, and that's apparently what caused the issue... I would be interested in the reasons, though.
Anyway, running npm install on a Linux installation and uploading the code to Lambda did the trick.
I have seen that chromedriver can output a logfile (https://sites.google.com/a/chromium.org/chromedriver/logging).
That page shows how to set this up when executing the exe directly:
chromedriver.exe --verbose --log-path=chromedriver.log
I cannot figure out how to set this up in Protractor, however.
My current protractor.conf.js:
require('babel/register');

exports.config = {
  framework: 'jasmine2',
  seleniumServerJar: './node_modules/protractor/selenium/selenium-server-standalone-2.45.0.jar'
};
From @alecxe's answer below and Protractor's browser setup docs, I tried adding the following (with and without the leading --), but with no apparent effect:
capabilities: {
  browserName: "chrome",
  chromeOptions: {
    args: [
      "--verbose",
      "--log-path=chromedriver.log"
    ]
  }
}
I also tried specifying an absolute path (log-path=/chromedriver.log) which also didn't work.
You can always start up your own instance of chromedriver in a separate process and tell Protractor to connect to that. For example, if you start chromedriver with:
chromedriver --port=9515 --verbose --log-path=chromedriver.log
Then you could use a configuration file for Protractor like so:
exports.config = {
  seleniumAddress: 'http://localhost:9515',
  capabilities: {
    'browserName': 'chrome'
  },
  specs: ['example_spec.js'],
};
We use a shell script to add chromedriver logging, among other checks. You can then point protractor at the shell script:
Protractor config:
// When running chromedriver, use this script
// (assumes `path` has been required and `topdir` points at the project root):
chromeDriver: path.resolve(topdir, 'bin/protractor-chromedriver.sh'),
bin/protractor-chromedriver.sh
TMPDIR="/tmp"
NODE_MODULES="$(dirname $0)/../node_modules"
CHROMEDRIVER="${NODE_MODULES}/protractor/selenium/chromedriver"
LOG="${TMPDIR}/chromedriver.$$.log"
fatal() {
# Dump to stderr because that seems reasonable
echo >&2 "$0: ERROR: $*"
# Dump to a logfile because webdriver redirects stderr to /dev/null (?!)
echo >"${LOG}" "$0: ERROR: $*"
exit 11
}
[ ! -x "$CHROMEDRIVER" ] && fatal "Cannot find chromedriver: $CHROMEDRIVER"
exec "${CHROMEDRIVER}" --verbose --log-path="${LOG}" "$#"
According to Protractor's source code, the chromedriver service is started without any arguments, and there is no direct way to configure them, even though the chromedriver ServiceBuilder that Protractor uses actually has the ability to specify the verbosity and the log path:
// ServiceBuilder comes from the selenium-webdriver chrome module
var chrome = require('selenium-webdriver/chrome');

var service = new chrome.ServiceBuilder()
    .loggingTo('/my/log/file.txt')
    .enableVerboseLogging()
    .build();
Old (incorrect) answer:
You need to set the chrome arguments:
capabilities: {
  browserName: "chrome",
  chromeOptions: {
    args: [
      "verbose",
      "log-path=chromedriver.log"
    ]
  }
},
See also:
Viewing outstanding requests
Since the previous answer by @P.T. didn't work for me on Windows 7, I started with his suggestions and got it working on Windows. Here is a working solution for Windows 7 users.
STEP 1: Install BASH and JQ and confirm they are working on your Windows box
Download bash: for Windows 10 see https://itsfoss.com/install-bash-on-windows/ ; for Windows 7 download the latest build here: https://sourceforge.net/projects/win-bash/files/shell-complete/latest/ ; for Windows Server 2012, or any Windows OS that already has Git installed, you already have bash.exe and sh.exe at C:\Program Files\Git\usr\bin or C:\Program Files (x86)\Git\usr\bin.
Install bash: for Windows 7, extract the downloaded zip files to a directory.
Download jq (https://stedolan.github.io/jq/) and install it in the same directory as bash.
Make SURE that you add that directory (for Windows 7, the directory you extracted the bash zip files to; for the other OSes that have Git, the path it is installed at) to your PATH system environment variable.
Once the above is installed and added to your PATH, close ALL WebStorm and CMD windows you wish to run your work in, and reopen them.
Test that bash is actually installed by simply typing it at a Windows command prompt:
C:\git\> bash
Doing so should produce a bash prompt like this:
bash$
STEP 2: Add Custom Files to Redirect Chromedriver to Use Debug Logging
Add the following files to the top level of the project (wherever your protractor.conf.js is located). These files allow us to add custom debug switches to the chromedriver.exe execution.
Note that this is necessary because these switches are not exposed through Protractor and cannot be set directly in the protractor.conf.js file via the chromeOptions/args flags as you would normally expect.
chromedriver.cmd -- exact source shown below:
bash protractor-chromedriver.sh %*
protractor-chromedriver.sh -- exact source shown below:
TMPDIR="$(dirname $0)/tmp"
NODE_MODULES="$(dirname $0)/node_modules"
SELENIUM="${NODE_MODULES}/protractor/node_modules/webdriver-manager/selenium"
UPDATECONFIG="${SELENIUM}/update-config.json"
EXEFILENAME="$(cat ${UPDATECONFIG} | jq .chrome.last | tr -d '""')"
CHROMEDRIVER="${SELENIUM}/${EXEFILENAME##*'\\'}"
LOG="${TMPDIR}/chromedriver.$$.log"
fatal() {
# Dump to stderr because that seems reasonable
echo >&2 "$0: ERROR: $*"
# Dump to a logfile because webdriver redirects stderr to /dev/null (?!)
echo >"${LOG}" "$0: ERROR: $*"
exit 11
}
[ ! -x "$CHROMEDRIVER" ] && fatal "Cannot find chromedriver: $CHROMEDRIVER"
exec "${CHROMEDRIVER}" --verbose --log-path="${LOG}" "$#"
/tmp -- create this directory at the top level of your project (same location as the protractor.conf.js file).
STEP 3: Update protractor.conf.js file.
In the protractor.conf.js file, add the following line as a property in the exports.config object. As in:
exports.config = {
.. ..
chromeDriver: 'chromedriver.cmd',
.. ..
STEP 4: Launch your tests
Your tests should now run, and if the chrome driver outputs any log information it will appear in a file called chromedriver.???.log in the tmp directory under your project.
Important caveats
This script setup assumes you install and run Protractor (and the chrome driver under it) from the local node_modules directory inside your project. That is how I run my code, because I want it completely self-contained and regenerated in the build process/cycle. If you have protractor/chromedriver installed globally, you should change the CHROMEDRIVER variable within the protractor-chromedriver.sh file to match your installation of protractor/chromedriver.
Hope that helps.
If you're using the seleniumServerJar, in protractor.conf.js set the logfile path to wherever you want it to write the file:
seleniumArgs: [
'-Dwebdriver.chrome.logfile=/home/myUsername/tmp/chromedriver.log',
]
If you're using webdriver-manager start to run a local selenium server, you'll need to edit the webdriver-manager file:
// insert this line
args.push('-Dwebdriver.chrome.logfile=/home/myUsername/tmp/chromedriver.log');
// this line already exists in webdriver-manager, add the push to args before this line
var seleniumProcess = spawnCommand('java', args);
In case you use webdriver-manager: webdriver-manager has the chrome_logs option (you can find it in its source code, in opts.ts, or in opts.js in the compiled code), so you can use it like this:
webdriver-manager start --chrome_logs /path/to/logfile.txt
I'm using this as a global afterEach hook (mocha); note that it needs util required at the top of the file:

const util = require('util');

afterEach(() => {
  browser.manage().logs().get('browser').then(function(browserLog) {
    if (browserLog && browserLog.length) {
      console.log('\nlog: ' + util.inspect(browserLog) + '\n');
    }
  });
});
My Node app is running fine locally, but has run into an error when deploying to Heroku. The app uses Sequelize in a /models folder, which contains index.js, Company.js and Users.js. Locally, I am able to import the models using the following code in /models/index.js:
// load models
var models = [
  'Company',
  'User'
];

models.forEach(function(model) {
  module.exports[model] = sequelize.import(__dirname + '/' + model);
});
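For context, each file loaded through sequelize.import is expected to export a factory function that receives the Sequelize instance and the DataTypes object; a minimal, hypothetical Company.js of that shape might look like this (the question doesn't show the actual model definitions):

// models/Company.js -- hypothetical sketch of the shape sequelize.import expects.
module.exports = function(sequelize, DataTypes) {
  return sequelize.define('Company', {
    name: DataTypes.STRING
  });
};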
The loop above works fine locally; however, when I deploy to Heroku the app crashes with the following error:
Error: Cannot find module '/app/models/Company'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at module.exports.Sequelize.import (/app/node_modules/sequelize/lib/sequelize.js:219:24)
at module.exports.sequelize (/app/models/index.js:60:43)
at Array.forEach (native)
at Object.<anonymous> (/app/models/index.js:59:8)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
Process exited with status 8
Initially I thought it was due to case sensitivity (local Mac vs Heroku Linux), but I moved the file, made a git commit, and then moved it back and committed again to ensure Company.js is capitalized in the git repository. This didn't solve the problem and I'm not sure what the issue could be.
The problem was due to case sensitivity and file naming. Mac OS X is case insensitive (but aware) whereas Heroku is based on Linux and is case sensitive. By running heroku run bash from my terminal, I was able to see how the /models folder appeared on Heroku's file system. The solution was to rename User.js and Company.js on my local system to new temporary files, commit the changes to git, then rename back to User.js and Company.js being mindful of the capitalized first letter and then committing the changes again via git. Previously I had tried to rename the files directly from user.js to User.js and company.js to Company.js but the git commit and case-sensitive file name changes did not reflect on Heroku.
I can't see the exact fix, but you can figure it out yourself by running heroku run bash to log into a Heroku instance, then run node to enter a REPL, and try requiring the paths directly.
For me, it was caused by a folder that I had accidentally included in .gitignore!
I've run into an error like this one; the cause was that I renamed a module from module.js to Module.js, and the Heroku build cache still held the old name, which conflicted with the new one. You must disable module caching to avoid this kind of error:
$ heroku config:set NODE_MODULES_CACHE=false
Source: https://help.heroku.com/TO64O3OG/cannot-find-module-in-node-js-at-runtime
One of my files had a lowercase name locally and was required with the lowercase name:
const Product = require('../models/product');
On the git repo it was capitalized.
'../models/Product'
The server was trying to require a file which did not exist. I had to use git mv to rename the file locally, then push it again to fix the issue.
Not sure if this is the same issue as described here, but for me my require("dotenv").config() call was not conditioned on the environment the code was running in, so Heroku could not find dotenv, since it was installed as a devDependency.
Fix:
if (process.env.NODE_ENV !== "production") {
require("dotenv").config();
}
For me, I just deleted the old app from Heroku, created a new one via the Heroku web interface, pushed the code to the new one, and then it worked.
For me, what I changed was:
The file name was CheckPermissions and I changed it to checkPermissions and then deployed; the error occurred.
Then I reverted the change and deployed again. This time it worked well.
I faced the same issue and resolved it by dockerizing my application:
Create a Dockerfile based on a Node image
Set the Heroku stack to container
Deploy
Ref: https://devcenter.heroku.com/categories/deploying-with-docker