Heroku + Node: Cannot find module error - javascript

My Node app runs fine locally, but crashes when deployed to Heroku. The app uses Sequelize with a /models folder containing index.js, Company.js and User.js. Locally, I can import the models using the following code in /models/index.js:
// load models
var models = [
'Company',
'User'
];
models.forEach(function(model) {
module.exports[model] = sequelize.import(__dirname + '/' + model);
});
This works fine, however, when I deploy to Heroku the app crashes with the following error:
Error: Cannot find module '/app/models/Company'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at module.exports.Sequelize.import (/app/node_modules/sequelize/lib/sequelize.js:219:24)
at module.exports.sequelize (/app/models/index.js:60:43)
at Array.forEach (native)
at Object.<anonymous> (/app/models/index.js:59:8)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
Process exited with status 8
Initially I thought it was due to case sensitivity (local macOS vs. Heroku's Linux), so I moved the file, made a git commit, then moved it back and committed again to ensure Company.js is capitalized in the git repository. This didn't solve the problem, and I'm not sure what the issue could be.

The problem was due to case sensitivity in file naming. Mac OS X is case-insensitive (but case-preserving), whereas Heroku runs on Linux, which is case-sensitive. By running heroku run bash from my terminal, I was able to see how the /models folder appeared on Heroku's file system. The solution was to rename User.js and Company.js on my local system to temporary names, commit the changes to git, then rename them back to User.js and Company.js (being mindful of the capitalized first letter) and commit again. Previously I had tried to rename the files directly from user.js to User.js and company.js to Company.js, but the case-only change never reflected on Heroku.

I can't tell you the exact fix, but you can diagnose it yourself: run heroku run bash to get a shell on a Heroku dyno, then run node to open a REPL and try requiring the paths directly.

For me, it was caused by a folder that I had accidentally included in .gitignore!

I've run into an error like this one; the cause was that I had renamed a module from module.js to Module.js, and Heroku's build cache kept the old name. You can disable module caching to avoid this kind of error:
$ heroku config:set NODE_MODULES_CACHE=false
Source: https://help.heroku.com/TO64O3OG/cannot-find-module-in-node-js-at-runtime

One of my files had a lowercase name locally and was required as lowercase:
const Product = require('../models/product');
On the git repo it was capitalized:
'../models/Product'
The server was trying to require a file that did not exist. I had to use git mv to rename the file locally, then push it again to fix the issue.
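A minimal sketch of such a case-only rename (the file name is an example; the first block just sets up a throwaway repo for illustration):

```shell
# Demo setup: a throwaway repo with a lowercase model file (names are examples).
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "module.exports = {};" > product.js
git add product.js
git commit -qm "initial"

# The fix: `git mv` records the case-only rename even on a case-insensitive
# filesystem, where a plain `mv product.js Product.js` can go unnoticed by git.
git mv product.js Product.js
git commit -qm "rename product.js to Product.js"
git ls-files   # now lists Product.js with the capital P
```

After pushing, the Linux filesystem on the server sees the capitalized name, so the require() path matches.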

Not sure if this is the same issue as described here, but in my case require("dotenv").config() was not conditioned on the environment the code was running in, so Heroku could not find dotenv, since it was installed as a devDependency.
Fix:
if (process.env.NODE_ENV !== "production") {
require("dotenv").config();
}

For me, I just deleted the old app on Heroku, created a new one via the Heroku web dashboard, pushed the code to the new app, and then it worked.

In my case, the file was named CheckPermissions and I changed it to checkPermissions, then deployed, and the error occurred.
I reverted the change and deployed again, and this time it worked well.

I faced the same issue and resolved it by dockerizing my application:
Create a Dockerfile based on a Node image
Set the Heroku stack to container
Deploy
Ref: https://devcenter.heroku.com/categories/deploying-with-docker
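A minimal sketch of those steps, assuming a Node app whose entry point is index.js (the base image tag and entry file name are assumptions, not from the original answer):

```dockerfile
# Minimal Dockerfile sketch for a Node app on Heroku's container stack
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Heroku injects $PORT at runtime; the app must listen on it
CMD ["node", "index.js"]
```

Then switch the stack with heroku stack:set container and push as usual. Because the image's filesystem is built from the repo contents on Linux, file-name casing issues like the one in this question surface at build time rather than only after deploy.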

Related

Streamr node install on PuTTY (issues)

The guide I am using is https://www.youtube.com/watch?v=9cdsQWRDNM4; the time marker (7:27) is where it all goes wrong.
This is where I am having an issue with my Streamr node install. It's a very basic install, but I'm very new to PuTTY, and this section doesn't match what the install is supposed to produce: the guide shows /root/.streamr/config/default.json, but when I change the path I get another error. Instead, the setup generates the following, which also ends in an error and prevents me from starting my node on my VPS: "Select a path to store the generated config in /home/streamr/.streamr/config/default.json".
Either way I am getting an error.
This is a screenshot of what I am dealing with; any help with PuTTY and setting up the VPS would be great (I can provide the actual screenshot if needed):
: Welcome to the Streamr Network
: Your node's generated name is Leopard Wrist Pond.
: View your node in the Network Explorer:
:
: You can start the broker now with
: streamr-broker /home/streamr/.streamr/config/default.json
root@ubuntu-s-1vcpu-1gb-tor1-01:~# docker run -it -p 7170:7170 -p 7171:7171 -p 1883:1883 -v $(cd ~/.streamr; pwd):/root/.streamr streamr/broker-node:latest
Error: Config file not found in the default location. You can run "streamr-broker-init" to generate a config file interactively, or specify the config file as argument: "streamr-broker path-to-config/file.json"
at readConfigAndMigrateIfNeeded (/home/streamr/network/packages/broker/dist/src/config/migration.js:155:19)
at Command.<anonymous> (/home/streamr/network/packages/broker/dist/bin/broker.js:20:69)
at Command.listener [as _actionHandler] (/home/streamr/network/node_modules/commander/lib/command.js:488:17)
at /home/streamr/network/node_modules/commander/lib/command.js:1227:65
at Command._chainOrCall (/home/streamr/network/node_modules/commander/lib/command.js:1144:12)
at Command._parseCommand (/home/streamr/network/node_modules/commander/lib/command.js:1227:27)
at Command.parse (/home/streamr/network/node_modules/commander/lib/command.js:897:10)
at Object.<anonymous> (/home/streamr/network/packages/broker/dist/bin/broker.js:41:6)
at Module._compile (node:internal/modules/cjs/loader:1218:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1272:10)
npm notice
npm notice New major version of npm available! 8.19.3 -> 9.4.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v9.4.2
npm notice Run npm install -g npm@9.4.2 to update!
npm notice
I've tried both ways for the file name, but both end in errors. I'm not sure why it's not generating /root/.streamr/config/default.json; instead it gives me this other path, streamr-broker /home/streamr/.streamr/config/default.json, which doesn't work either. That one lets me finish the Docker setup, but it won't let me start the node and generates the error shown in the output above.

Expo app throwing file-missing error though files are actually present there

I am using pnpm to create an Expo app. I just ran pnpm create expo-app, installed dependencies, and then ran yarn android (equivalent to expo start --android).
I tried shamefully hoisting (pnpm's shamefully-hoist setting) as well as moving the project to a location with no spaces in the path, but each time I get the same error.
I checked manually: all the files reported missing in the error are actually present.
How do I fix this?
Error: Unable to resolve module ./node_modules\expo\AppEntry from C:\Users\HariC\AndroidStudioProjects\chat/.:
None of these files exist:
* node_modules\expo\AppEntry(.native|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx|.android.js|.native.js|.js|.android.jsx|.native.jsx|.jsx|.android.json|.native.json|.json)
* node_modules\expo\AppEntry\index(.native|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx|.android.js|.native.js|.js|.android.jsx|.native.jsx|.jsx|.android.json|.native.json|.json)
at ModuleResolver.resolveDependency (C:\Users\HariC\AndroidStudioProjects\chat\node_modules\.pnpm\metro@0.72.3\node_modules\metro\src\node-haste\DependencyGraph\ModuleResolution.js:152:15)
at DependencyGraph.resolveDependency (C:\Users\HariC\AndroidStudioProjects\chat\node_modules\.pnpm\metro@0.72.3\node_modules\metro\src\node-haste\DependencyGraph.js:264:43)
at C:\Users\HariC\AndroidStudioProjects\chat\node_modules\.pnpm\metro@0.72.3\node_modules\metro\src\lib\transformHelpers.js:170:21
at Server._resolveRelativePath (C:\Users\HariC\AndroidStudioProjects\chat\node_modules\.pnpm\metro@0.72.3\node_modules\metro\src\Server.js:1196:12)
at async Server.requestProcessor [as _processBundleRequest] (C:\Users\HariC\AndroidStudioProjects\chat\node_modules\.pnpm\metro@0.72.3\node_modules\metro\src\Server.js:484:37)
at async Server._processRequest (C:\Users\HariC\AndroidStudioProjects\chat\node_modules\.pnpm\metro@0.72.3\node_modules\metro\src\Server.js:435:9)
Since Android didn't work, I tried running web and was prompted to install @expo/webpack-config. I installed it using expo install @expo/webpack-config@^0.17.2 and wow… it worked!

Cannot find module '@algolia/cache-common'

Note (tl;dr): everything works locally but not in Lambda.
I have a Lambda function in AWS, and when I run the server locally everything works perfectly. Algolia is used inside a service, which backs an endpoint in my server.
I tried installing @algolia/cache-common and it didn't help either.
Every call made to the Lambda crashes the entire app because of this error.
Is there any way to fix it?
The error is the following:
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module '@algolia/cache-common'
Require stack:
/opt/nodejs/node_modules/algoliasearch/dist/algoliasearch.cjs.js
/opt/nodejs/node_modules/algoliasearch/index.js
/var/task/dist/api/v1/services/algolia.service.js
/var/task/dist/api/v1/handlers/jobs.handler.js
/var/task/dist/api/v1/controllers/jobs.controller.js
/var/task/dist/api/v1/v1.routes.js
/var/task/dist/api/routes.js
/var/task/dist/serverless.js
/var/runtime/UserFunction.js
/var/runtime/index.js",
"stack": [
"Runtime.ImportModuleError: Error: Cannot find module '@algolia/cache-common'",
"Require stack:",
"- /opt/nodejs/node_modules/algoliasearch/dist/algoliasearch.cjs.js",
"- /opt/nodejs/node_modules/algoliasearch/index.js",
"- /var/task/dist/api/v1/services/algolia.service.js",
"- /var/task/dist/api/v1/handlers/jobs.handler.js",
"- /var/task/dist/api/v1/controllers/jobs.controller.js",
"- /var/task/dist/api/v1/v1.routes.js",
"- /var/task/dist/api/routes.js",
"- /var/task/dist/serverless.js",
"- /var/runtime/UserFunction.js",
"- /var/runtime/index.js",
" at _loadUserApp (/var/runtime/UserFunction.js:202:13)",
" at Object.module.exports.load (/var/runtime/UserFunction.js:242:17)",
" at Object.<anonymous> (/var/runtime/index.js:43:30)",
" at Module._compile (internal/modules/cjs/loader.js:1085:14)",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)",
" at Module.load (internal/modules/cjs/loader.js:950:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:790:12)",
" at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:76:12)",
" at internal/main/run_main_module.js:17:47"
]
The way I use Algolia is the following:
const applicationId: any = config.get("ALGOLIA.APPLICATION_ID");
const apiKey: any = config.get("ALGOLIA.ADMIN_API_KEY");
const client = algoliasearch(applicationId, apiKey);
const index = client.initIndex("my-actual-index");
My Lambda setup is one Lambda plus 3 module layers, which has worked for every library I've used, but doesn't work for Algolia in particular. When I inspect the Lambda's packages, I can see the Algolia-related packages.
I tried installing the exact package (@algolia/cache-common) and it didn't do anything; I also tried installing @types/algolia, and that didn't work either.
Is there anything I missed?
This was fixed: the problem was that, while splitting into layers, I was skipping the first index in the loop.
When you "install" dependencies like @algolia/cache-common, you are installing them locally.
Your installed dependencies are not automatically available on AWS Lambda. Like your application code, your dependencies need to be deployed as well.
That's why it works on your local machine but not in Lambda.
You did not write anything about how you deploy your code. Tools like AWS SAM or the Serverless Framework usually take care of deploying not only your application code but also its dependencies.
So I imagine you are deploying by hand. That means you will most likely have to also deploy your node_modules folder to AWS Lambda.
Your deployment ZIP archive should look like this:
node_modules/
index.js
The node_modules folder will have a lot of sub-folders, and obviously you can have more than one .js file, but for the purposes of this post we'll leave it at that.
It would definitely be helpful to see how you are building your service. Are the Algolia modules in your core service Lambda or in one of the layers? I don't know much about layers, but I'm curious whether you'd have the same issue if all dependencies were bundled into the service itself.

Google Cloud Functions deploy: EROFS: read-only file system

I'm trying to deploy my api to Google Cloud Functions, and I'm getting this:
EROFS: read-only file system, mkdir '/user_code/uploads'
⚠ functions[post]: Deployment error. Function load error:
Code in file index.js can't be loaded. Is there a syntax error in your code?
Detailed stack trace: Error: EROFS: read-only file system, mkdir '/user_code/uploads'
at Error (native)
at Object.fs.mkdirSync (fs.js:932:18)
at Function.sync (/user_code/node_modules/multer/node_modules/mkdirp/index.js:71:13)
at new DiskStorage (/user_code/node_modules/multer/storage/disk.js:21:12)
at module.exports (/user_code/node_modules/multer/storage/disk.js:65:10)
at new Multer (/user_code/node_modules/multer/index.js:15:20)
at multer (/user_code/node_modules/multer/index.js:95:12)
at Object.<anonymous> (/user_code/api/user.js:105:46)
at Module._compile (module.js:577:32)
at Object.Module._extensions..js (module.js:586:10)
Everything in the Cloud Functions runtime is read-only except os.tmpdir() (which is likely /tmp, but you shouldn't assume that). If any code (in api/user.js, for example) attempts to write anywhere else, it will error.
Same issue in Python, but posting this for clarity. My error:
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/blob.py", line 753, in download_to_filename
with open(filename, "wb") as file_obj:
OSError: [Errno 30] Read-only file system: 'testFile.zip'
Get the temp directory as follows (usually /tmp):
import tempfile
tmpdir = tempfile.gettempdir()
Google's documentation can be found here.
While Cloud Storage is the recommended solution for reading and writing files in App Engine, if your app only needs to write temporary files, you can use standard Python 3.7 methods to write files to a directory named /tmp.
All files in this directory are stored in the instance's RAM, therefore writing to /tmp takes up system memory. In addition, files in the /tmp directory are only available to the app instance that created the files. When the instance is deleted, the temporary files are deleted.
Gen1 Cloud Functions have a read-only file system, but Gen2 functions don't. I'd recommend changing your function to Gen2
(be careful: this might interfere with other configuration, since a Gen2 function is effectively a Cloud Run service).

Windows 10, existing binding.node file cannot be found when running jest

When running Windows 10, both on AppVeyor and in a VirtualBox VM, I'm getting the same error when running jest tests for my Electron app.
The specified module could not be found.
\\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
Error: The specified module could not be found.
\\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
at Runtime.requireModule (node_modules/jest-runtime/build/index.js:263:31)
at bindings (app/node_modules/bindings/bindings.js:76:44)
at Object.<anonymous> (app/node_modules/ref/lib/ref.js:5:47)
(https://ci.appveyor.com/project/joshuef/peruse/build/1.0.733/job/fwflo19to9rvt085#L4664)
The thing is... the file itself exists, as confirmed by running:
dir \\?\C:\Users\User\peruse\app\node_modules\ref\build\Release\binding.node
which results in:
-a---- 4/9/2018 1:44 AM 157696 binding.node
And the application itself runs fine (it's an Electron app, compiled via webpack). Only when running tests against the native libs do I get this error, and only on Windows (macOS/Linux tests run fine).
I've tried rebuilding, using npm instead of yarn, and re-installing the VS2017 tools via the command line. I'm consistently getting this error on both systems (which is something), but I'm stumped as to what to try next...
Jest is being run from the command line.
Jest is configured thus:
module.exports = {
verbose : true,
moduleFileExtensions : ['js', 'jsx', 'json'],
setupFiles : ['raf/polyfill','<rootDir>/test/setup.js'],
testPathIgnorePatterns : ['node_modules'],
moduleDirectories : ['app', 'test', 'node_modules', 'app/node_modules'],
moduleNameMapper : {
"electron": "<rootDir>/mocks/electron.js",
"\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$":
"<rootDir>/mocks/fileMock.js",
"\\.(css|scss)$": "<rootDir>/mocks/fileMock.js",
'^appPackage$' : '<rootDir>/package.json',
'^@actions(.*)$' : '<rootDir>/app/actions$1',
'^@components(.*)$' : '<rootDir>/app/components$1',
'^@containers(.*)$' : '<rootDir>/app/containers$1',
'^appConstants$' : '<rootDir>/app/constants.js',
'^@extensions(.*)$' : '<rootDir>/app/extensions$1',
'^@logger$' : '<rootDir>/app/logger.js',
'^@reducers(.*)$' : '<rootDir>/app/reducers$1',
'^@store(.*)$' : '<rootDir>/app/store',
'^@utils(.*)$' : '<rootDir>/app/utils$1'
}
};
the appveyor config file is here.
Any pointers/ideas/things to check super appreciated. If more code clarification could be needed, just let me know.
Thanks in advance!
I had a similar error with the ibm_db module on Windows 10 (but not in a Linux Docker container):
The specified module could not be found.
\\?\C:\_projects\projectName\node_modules\ibm_db\build\Release\odbc_bindings.node
at Runtime._loadModule (node_modules/jest-runtime/build/index.js:572:29)
at bindings (node_modules/bindings/bindings.js:112:48)
Found the solution here:
Download the ODBC driver from the IBM site. You might need to register at IBM for this.
Install the ODBC driver by simply extracting the contents of the downloaded package to some folder, for example "C:\IBMDB2\CLIDRIVER\".
Set the IBM_DB_HOME environment variable to point to the folder with the drivers ("C:\IBMDB2\CLIDRIVER\" if you used that path).
Add "%IBM_DB_HOME%\bin" to the PATH environment variable.
Restart, or sign out/sign in, so the changes to PATH take effect.
Reinstall packages using "npm install", since all those environment variables are only used during the package installation phase.
