Google Cloud Functions deploy: EROFS: read-only file system - javascript

I'm trying to deploy my api to Google Cloud Functions, and I'm getting this:
EROFS: read-only file system, mkdir '/user_code/uploads'
⚠ functions[post]: Deployment error. Function load error:
Code in file index.js can't be loaded. Is there a syntax error in your code?
Detailed stack trace: Error: EROFS: read-only file system, mkdir '/user_code/uploads'
at Error (native)
at Object.fs.mkdirSync (fs.js:932:18)
at Function.sync (/user_code/node_modules/multer/node_modules/mkdirp/index.js:71:13)
at new DiskStorage (/user_code/node_modules/multer/storage/disk.js:21:12)
at module.exports (/user_code/node_modules/multer/storage/disk.js:65:10)
at new Multer (/user_code/node_modules/multer/index.js:15:20)
at multer (/user_code/node_modules/multer/index.js:95:12)
at Object.<anonymous> (/user_code/api/user.js:105:46)
at Module._compile (module.js:577:32)
at Object.Module._extensions..js (module.js:586:10)

Everything in the Cloud Functions runtime is read-only except for os.tmpdir() (which is likely going to be /tmp, but you shouldn't assume that). If you have any code (in api/user.js, for example) that attempts to write anywhere else, it'll error.
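Here multer is trying to mkdir a local uploads folder at module load time. A minimal sketch of the workaround, assuming a plain multer setup (not the asker's actual code): point the upload destination at os.tmpdir() instead.
const os = require('os');
const multer = require('multer');

// '/user_code/uploads' fails because the deployed bundle is read-only;
// os.tmpdir() is the one writable location (typically /tmp).
const upload = multer({ dest: os.tmpdir() });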

Same issue in Python, but posting this for clarity. My error:
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/blob.py", line 753, in download_to_filename
with open(filename, "wb") as file_obj:
OSError: [Errno 30] Read-only file system: 'testFile.zip'
Get the temp directory as follows (usually /tmp) and write the file there:
import os
import tempfile
tmpdir = tempfile.gettempdir()
blob.download_to_filename(os.path.join(tmpdir, "testFile.zip"))
Google's documentation can be found here:
While Cloud Storage is the recommended solution for reading and writing files in App Engine, if your app only needs to write temporary files, you can use standard Python 3.7 methods to write files to a directory named /tmp.
All files in this directory are stored in the instance's RAM, therefore writing to /tmp takes up system memory. In addition, files in the /tmp directory are only available to the app instance that created the files. When the instance is deleted, the temporary files are deleted.

Gen1 Cloud Functions have a read-only file system, but Gen2 Cloud Functions don't. I'd recommend changing your function to Gen2
(be careful, this might interfere with other configuration, since a Gen2 function is essentially a Cloud Run service).
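If you go that route, the switch happens at deploy time; a hedged sketch of the command (function name, runtime and region are placeholders, not taken from the question):
gcloud functions deploy my-function --gen2 --runtime=nodejs20 --region=us-central1 --trigger-http --source=.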

Related

AWS Lambda read-only file system error failed to create directory with Docker image

Problem
Docker image builds successfully, however it fails when run from Lambda because of its read-only file system.
Summary
Luminati-proxy has a Docker integration for their proxy manager. I copied over their Dockerfile and appended it to my own Dockerfile for pushing a script out to AWS Lambda. Building the Docker image was successful, but when pushed to Lambda it failed because of a read-only file system error:
Failed to create directory /home/sbx_user1051/proxy_manager: [code=EROFS] Error: EROFS: read-only file system, mkdir '/home/sbx_user1051'
2022-02-28 19:37:22.049 FILE (8): Failed to create directory /home/sbx_user1051/proxy_manager: [code=EROFS] Error: EROFS: read-only file system, mkdir '/home/sbx_user1051'
Analysis
Upon examining the traceback, the error centers on the proxy_manager installation and fails on directory changes (mkdir, mk_work_dir ...). These changes are made within the .js files of the GitHub repository that the Dockerfile pulls in as the proxy_manager installation. Obviously the only mutable directory on Lambda is /tmp, but is there a workaround for getting this set up without resorting to putting everything under /tmp, given that it is wiped at runtime? Reinstalling the proxy manager on each run is not at all ideal...
Answer?
Could this be as simple as setting environment variables such as:
ENV PATH=...
ENV LD_LIBRARY_PATH=...
If so, how should they be configured? I am adding the Dockerfile below for quick reference:
FROM node:14.18.1
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
USER root
RUN npm config set user root
RUN npm install -g npm@8.1.3
RUN npm install -g @luminati-io/luminati-proxy
ENV DOCKER 1
CMD ["luminati", "--help"]
I appreciate the insight!
TL;DR:
You should instead leverage an S3 bucket to store, read and modify any files. All Lambdas and microservices, in general, should always be treated as stateless
All Luminati-proxy functionality comes prebuilt within Amazon Lambda and API Gateway
Lambda functions are not meant to run long-running processes, as they are limited to 15 minutes maximum, so the container you are trying to run in Lambda has to be designed with AWS serverless architecture considerations in mind
Explanation:
According to the documentation of AWS
Lambda functions:
The container image must be able to run on a read-only file system. Your function code can access a writable /tmp directory with 512 MB of storage.
Since containers based on Linux images already have a /tmp folder, your code can read from and write to that folder at any time (remember, the rest of the file system is read-only).
If you are looking to store content, Amazon's solution is for you to create and manage it in an S3 bucket. Buckets are nearly as easy to use as local files, and the content remains accessible after the Lambda instance finishes its workload.
Please refer to Read file from aws s3 bucket using node fs and Upload a file to Amazon S3 with NodeJS for more details on how to use an S3 bucket. There are plenty of ways to achieve it regardless of the language being used.
This is all based on a best practice promoted by AWS for their platform: Lambdas should remain stateless.
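As a rough illustration only (the bucket and key names are hypothetical, not from the question), reading and writing an object from Node with the aws-sdk looks something like this:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function saveAndLoadState() {
  // Write the state to S3 instead of a local file...
  await s3.putObject({
    Bucket: 'my-bucket',                 // hypothetical bucket name
    Key: 'proxy_manager/state.json',     // hypothetical key
    Body: JSON.stringify({ started: Date.now() }),
  }).promise();

  // ...and read it back on a later invocation.
  const obj = await s3.getObject({ Bucket: 'my-bucket', Key: 'proxy_manager/state.json' }).promise();
  return JSON.parse(obj.Body.toString());
}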
AWS Lambda provides a /tmp folder for writing files in a Lambda; I don't know your full question context, but hope this helps.
You can write files in AWS Lambda to the /tmp folder.
e.g. if I want to create a file demo.txt at runtime/programmatically using AWS Lambda, I can write the file to /tmp/demo.txt.
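A minimal sketch of what that looks like inside a handler (the file name is just the example from the sentence above):
const fs = require('fs');
const os = require('os');
const path = require('path');

exports.handler = async () => {
  const file = path.join(os.tmpdir(), 'demo.txt'); // resolves to /tmp on Lambda
  fs.writeFileSync(file, 'created at runtime');
  return fs.readFileSync(file, 'utf8');
};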

Problem running same script in different platforms(Termux and Windows)

I'm trying to run my server script with Node.js through Termux on my phone, and normally on Windows. On Windows it all runs perfectly without errors, but on Termux there's an error "Cannot find module gameserver.js", even though that is the main file (I'm running "sudo node gameserver.js" inside the folder it's located in), and no other file tries to require it.
The error points to .js files from Node itself (loader.js, run_main.js and run_main_module.js). I've given Termux root access, and I run Node.js using sudo, so I have no clue what could be happening. I have no .json file since I'm just trying to run a .js file through Node on my phone. Both Windows and my phone are using the same Node.js version.
Did you try giving an absolute path to your file as the parameter? This can happen if the working directory has changed (because of sudo?).
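One quick way to test that hypothesis from the project folder (just a sketch, using whatever the real path is on the phone):
sudo node "$(pwd)/gameserver.js"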

Error: EROFS: read-only file system while streaming the xlsx content in Lambda

I am using the xlsx library to parse an Excel document to get the data as
sheet per file, row per file, column per file, etc.
While processing inside AWS Lambda I am getting the below error stack:
{"errorType":"Runtime.UnhandledPromiseRejection","errorMessage":"Error: EROFS: read-only file system, open 'Dist Share Summary.xlsx'","reason":{"errorType":"Error","errorMessage":"EROFS: read-only file system, open 'Dist Share Summary.xlsx'","code":"EROFS","errno":-30,"syscall":"open","path":"Dist Share Summary.xlsx","stack":["Error: EROFS: read-only file system, open 'Dist Share Summary.xlsx'"," at Object.openSync (fs.js:443:3)"," at Object.writeFileSync (fs.js:1194:35)"," at write_dl (/var/task/node_modules/xlsx/xlsx.js:2593:112)"," at write_zip_type (/var/task/node_modules/xlsx/xlsx.js:20730:31)"," at writeSync (/var/task/node_modules/xlsx/xlsx.js:20818:22)"," at Object.writeFileSync (/var/task/node_modules/xlsx/xlsx.js:20841:9)"," at workbook.SheetNames.forEach.element (/var/task/index.js:31:26)"," at Array.forEach ()"," at getParsedData (/var/task/index.js:27:32)"," at Parsing (/var/task/index.js:20:32)"]},"promise":{},"stack":["Runtime.UnhandledPromiseRejection: Error: EROFS: read-only file system, open 'Dist Share Summary.xlsx'"," at process.on (/var/runtime/index.js:37:15)"," at process.emit (events.js:198:13)"," at process.EventEmitter.emit (domain.js:448:20)"," at emitPromiseRejectionWarnings (internal/process/promises.js:140:18)"," at process._tickCallback (internal/process/next_tick.js:69:34)"]}
The Lambda file system is read-only aside from /tmp. You have up to 512 MB to use there; don't forget to remove the file when you're done, because if the container is reused the file will still be there and you'll run out of space over time.
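A sketch of that change applied to the question's write step (workbook and the output name come from the stack trace; everything else is assumed):
const os = require('os');
const fs = require('fs');
const path = require('path');
const XLSX = require('xlsx');

// Stand-in workbook; in the question this comes from the parsed upload.
const workbook = XLSX.utils.book_new();
XLSX.utils.book_append_sheet(workbook, XLSX.utils.aoa_to_sheet([['example']]), 'Sheet1');

// Write under os.tmpdir() instead of the read-only task root...
const outPath = path.join(os.tmpdir(), 'Dist Share Summary.xlsx');
XLSX.writeFile(workbook, outPath);

// ...and remove it so a reused container doesn't slowly fill /tmp.
fs.unlinkSync(outPath);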

node js web app on AWS failed to lookup view "pages/home" in views directory "/var/app/views"

I'm trying to put my simple Node.js web app on AWS EB, but it seems like it has a problem with paths. I'm running Windows on my machine and it works, but when I deploy it on EB it gives me the following error:
Error: Failed to lookup view "pages/home" in views directory "/var/app/views"
at EventEmitter.render (/var/app/current/node_modules/express/lib/application.js:579:17)
at ServerResponse.render (/var/app/current/node_modules/express/lib/response.js:961:7)
at null.<anonymous> (/var/app/current/controller/app.js:113:14)
at tryCatcher (/var/app/current/node_modules/bluebird/js/release/util.js:16:23)
at Promise.successAdapter [as _fulfillmentHandler0] (/var/app/current/node_modules/bluebird/js/release/nodeify.js:23:30)
at Promise._settlePromise (/var/app/current/node_modules/bluebird/js/release/promise.js:557:21)
at Promise._settlePromise0 (/var/app/current/node_modules/bluebird/js/release/promise.js:605:10)
at Promise._settlePromises (/var/app/current/node_modules/bluebird/js/release/promise.js:684:18)
at Async._drainQueue (/var/app/current/node_modules/bluebird/js/release/async.js:126:16)
at Async._drainQueues (/var/app/current/node_modules/bluebird/js/release/async.js:136:10)
at Immediate.Async.drainQueues [as _onImmediate] (/var/app/current/node_modules/bluebird/js/release/async.js:16:14)
at processImmediate [as _immediateCallback] (timers.js:383:17)
My code for the paths:
var path = require('path');
var app = express();
app.use(express.static(path.resolve('../public')));
app.set('views',path.resolve('../views'));
app.set('view engine', 'ejs');
I'm assuming you're using the path module to resolve directory and file names (to avoid issues with the differences between Windows and Linux filesystems).
It could be a file permissions or ownership issue. Check that the user under which the node process is running has read permissions on the relevant directories and files.
It could also be a path resolution issue. It looks like your app is in /var/app/current/, but you're trying to find files in /var/app/views/. Should work if that's how your app is structured, but it would be problematic if you intended to look for files in /var/app/current/views/.
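One way to take the working directory out of the equation is to anchor the paths to __dirname; a sketch assuming the same ../ layout as the code in the question:
const express = require('express');
const path = require('path');
const app = express();

// Resolve relative to this file's directory rather than process.cwd(),
// which can differ between a local run and Elastic Beanstalk.
app.use(express.static(path.join(__dirname, '../public')));
app.set('views', path.join(__dirname, '../views'));
app.set('view engine', 'ejs');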
I had a similar issue; my problem was that I forgot to zip the views subdirectory in my root directory recursively, resulting in an empty views directory when deployed to AWS Elastic Beanstalk. So when packaging your code you should use something like zip -r example.zip . .
Just concentrate on case sensitivity. Ensure that the filename passed to render is exactly the same as the file on the server ...
eg.
res.render("Challenge", ... with a view file named challenge.hbs ... WILL FAIL
res.render("challenge", ... with a view file named challenge.hbs ... WILL PASS

Heroku + Node: Cannot find module error

My Node app is running fine locally, but has run into an error when deploying to Heroku. The app uses Sequelize in a /models folder, which contains index.js, Company.js and Users.js. Locally, I am able to import the models using the following code in /models/index.js:
// load models
var models = [
'Company',
'User'
];
models.forEach(function(model) {
module.exports[model] = sequelize.import(__dirname + '/' + model);
});
This works fine, however, when I deploy to Heroku the app crashes with the following error:
Error: Cannot find module '/app/models/Company'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at module.exports.Sequelize.import (/app/node_modules/sequelize/lib/sequelize.js:219:24)
at module.exports.sequelize (/app/models/index.js:60:43)
at Array.forEach (native)
at Object.<anonymous> (/app/models/index.js:59:8)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
Process exited with status 8
Initially I thought it was due to case sensitivity (local Mac vs Heroku Linux), but I moved the file, made a git commit, and then moved it back and committed again to ensure Company.js is capitalized in the git repository. This didn't solve the problem and I'm not sure what the issue could be.
The problem was due to case sensitivity and file naming. Mac OS X is case insensitive (but aware) whereas Heroku is based on Linux and is case sensitive. By running heroku run bash from my terminal, I was able to see how the /models folder appeared on Heroku's file system. The solution was to rename User.js and Company.js on my local system to new temporary files, commit the changes to git, then rename back to User.js and Company.js being mindful of the capitalized first letter and then committing the changes again via git. Previously I had tried to rename the files directly from user.js to User.js and company.js to Company.js but the git commit and case-sensitive file name changes did not reflect on Heroku.
I can't see the exact fix, but you can figure it out yourself by running heroku run bash to log into a Heroku instance, then run node to enter a REPL, and try requiring the paths directly.
For me, it was caused by a folder that I had accidentally included in .gitignore!
I've been through an error like this one, and the cause was that I renamed a module from module.js to Module.js and the Heroku cache was conflicting the names. You must disable module caching to avoid this kind of error:
$ heroku config:set NODE_MODULES_CACHE=false
Source: https://help.heroku.com/TO64O3OG/cannot-find-module-in-node-js-at-runtime
One of my files had a lowercase name locally and it was required in lowercase:
const Product = require('../models/product');
On the git repo it was capitalized:
'../models/Product'
The server was trying to require a file which did not exist. I had to use git mv to rename the file locally, then push it again to fix the issue.
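git can record a case-only rename directly; a sketch using the file from this answer (the branch name is assumed):
git mv models/product.js models/Product.js
git commit -m "Fix model file name casing"
git push heroku main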
Not sure if this is the same issue as described here, but for me, my require("dotenv").config() was not conditioned on the environment the code was running in, so Heroku could not find the module, since dotenv was installed as a devDependency.
Fix:
if (process.env.NODE_ENV !== "production") {
require("dotenv").config();
}
For me, I just deleted the old app from Heroku, created a new one via the Heroku web console, pushed the code to the new app, and then it worked.
For me, what I changed was:
The file name was CheckPermissions and I changed it to checkPermissions and then hosted. The error occurred.
Then I reverted the change and hosted again. This time it worked well.
I faced the same issue and resolved it by dockerizing my application:
Create a Dockerfile from a Node base image
Set the Heroku stack to container
Deploy
Ref: https://devcenter.heroku.com/categories/deploying-with-docker
