vercel read-only file system, chmod - javascript

Hi everyone,
I use vercel to deploy my project. One of my NextJS project dependencies, located inside node_modules, reads and writes files in its own folder. When I import this dependency, I get the following error:
"errorType":"Runtime.UnhandledPromiseRejection","errorMessage":"Error: EROFS: read-only file system, chmod '/var/task/node_modules/MY_DEPENDENCY/src/FILE'"
Is there a solution?

Check out the following thread - https://github.com/vercel/community/discussions/314?sort=new
If you need to store something temporarily, you can try the /tmp directory.
Limit: 512 MB, and no guarantee of persistence - https://github.com/vercel/vercel/discussions/5320

You did not mention what the project dependency does; however, it's generally an anti-pattern to write files from serverless functions, as there is no guarantee the files will still exist on the next invocation.
Potential solutions
Depending on the type of file written (assets or flat files), it might be better to write to a bucket like S3 or to a database. There are multiple ways to share state between function invocations.
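As one hedged sketch of the bucket approach: instead of writing into node_modules, upload the generated output to object storage through a presigned URL (the URL is hypothetical; you would mint it with the AWS SDK or your own API).

```javascript
// Sketch: persist generated output to an S3 bucket via a presigned
// PUT URL instead of the read-only function filesystem.
// The presigned URL itself is an assumption, generated elsewhere.
async function uploadToBucket(presignedUrl, data) {
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.status;
}
```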

Related

How to share code between client and cloud functions [duplicate]

I have a Node server and multiple controllers that perform DB operations and helpers (For e-mail, for example) within that directory.
I'd like to use source from that directory within my functions. Assuming the following directory structure:
src/
  server/
    app/controllers/email_helper.js
  fns/
    send-confirm/
What's the best way to use email_helper within the send-confirm function?
I've tried:
Symbolically linking the 'server' directory
Adding a local repo to send-confirm/package.json
Neither of the above work.
In principle, your Cloud Functions can use any other Node.js module, the same way any standard Node.js server would. However, since Cloud Functions needs to build your module in the cloud, it needs to be able to locate those dependency modules from the cloud. This is where the issue lies.
Cloud Functions can load modules from any one of these places:
Any public npm repository.
Any web-visible URL.
Anywhere in the functions/ directory that firebase init generates for you, and which gets uploaded on firebase deploy.
In your case, from the perspective of functions/package.json, the ../server/ directory doesn't fall under any of those categories, and so Cloud Functions can't use your module. Unfortunately, firebase deploy doesn't follow symlinks, which is why that solution doesn't work.
I see two possible immediate fixes:
Move your server/ directory to be under functions/. I realize this isn't the prettiest directory layout, but it's the easiest fix while hacking. In functions/package.json you can then have a local dependency on ./server.
Expose your code behind a URL somewhere. For example, you could package up a .tar and put that on Google Drive, or on Firebase Cloud Storage. Alternatively, you can use a public git repository.
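For fix 1, the local dependency in functions/package.json might look like this (a sketch; the dependency name is illustrative):

```json
{
  "name": "functions",
  "dependencies": {
    "server": "file:./server"
  }
}
```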
In the future, I'd love it if firebase deploy followed symlinks. I've filed a feature request for that in Firebase's internal bug tracker.

Is it best practice to hide API keys in .env file in React

I am new to React and was learning how to hide API keys obtained from the GitHub API and from other APIs. I found out that it is possible to keep keys in .env files, access them through variables prefixed with REACT_APP, and add the .env file to .gitignore so it is not committed. The question is: is the way of hiding keys described above considered best practice? Secondly, is the .env file deployed to a server even if we add it to .gitignore?
If you are going to be passing the contents of the environment data file to React, which is client-side code, then it isn't likely to be very useful for keeping things secret.
Mostly this will be useful for keeping your various environments separate (e.g. so you don't accidentally use the URL for your test API server in the production deployment of your app).
If you were using this for server-side code, then it would be useful for keeping your secrets secret rather than publishing them in a git repository (that you might want to allow other people access to).
Whether or not the environment data file would be deployed to your server would depend on your deployment process. If your deployment process consisted of nothing more than checking out your git repository to the live server, then no, it wouldn't be deployed.
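As a concrete sketch of the convention being discussed (Create React App; the variable name is illustrative): values prefixed with REACT_APP_ are inlined into the client bundle at build time, so they are visible to anyone who loads your site.

```javascript
// .env (listed in .gitignore):
//   REACT_APP_API_URL=https://api.github.com
//
// In a component, CRA substitutes the value at build time:
const apiUrl = process.env.REACT_APP_API_URL;
// The resulting string is baked into the public JS bundle; it is not a secret.
```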

.env vs config.json

I just started learning JavaScript / node.js (a gentle introduction to back-end webdev), and thus I am completely green in the subject.
A few days ago I was reading a tutorial from which I learned to keep my confidential data (like passwords) in a config.json file.
Today I have discovered (by a chance) .env file, and the more I learn about it, it seems, the more people are using it to actually store passwords.
So when should I use .env and when should I use config.json?
The Question
When should .env be used over config.json and for what?
The answer
This is a rather difficult answer. On the one hand, you should only really ever use either of these tools while in development. This means that when in a production or prod-like environment, you would add these variables directly to the environment:
NODE_ENV=development node ./sample.js --mongodb:host "dharma.mongohq.com" --mongodb:port 10065
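In Node, those production values are then read straight from process.env; a minimal sketch with illustrative variable names and defaults:

```javascript
// Sketch: read configuration from the environment in production,
// falling back to defaults for local development.
const dbHost = process.env.MONGODB_HOST || 'dharma.mongohq.com';
const dbPort = Number(process.env.MONGODB_PORT || 10065);
console.log(`Connecting to ${dbHost}:${dbPort}`);
```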
There is no real clear winner per se, as they are both helpful in different ways. You can have nested data with config.json; on the other hand, you can have a cleaner, flatter data structure with .env.
Something also to note is that you never want to commit these files to source control (Git, SVN, etc.).
On the other hand, these tools make it very easy for beginners to get started quickly without having to worry about how to set environment variables and the differences between a Windows environment and a Linux one.
All in all, I'd say it's really up to the developer.
.env files are generally used to store information related to the particular deployment environment, while config.json files might be used to store data particular to the application as a whole.
Either approach works, and whether or not your config files are stored in your repository is more a function of whether the data needs to be confidential.
This largely comes down to personal preference and the conventions of the frameworks you're using. They're just different formats for storing the same kind of information.
Some example config file formats:
.env
*.yml (YAML files)
*.ini (normally Windows-only)
*.json
At the end of the day, they all accomplish the same purpose: providing your application with environment-specific information (credentials, file paths, connection strings, etc.). Choose the format which best fits your choice of framework.
I think it's really up to you, the important thing to remember is why you're using this approach. The idea is to save your sensitive data in a file that doesn't get pushed to source control or any other place other than your local environment - this keeps the data safer. Then when you're ready to deploy to a remote server somewhere, you need to manually insert those values into that environment.
I generally use .env because the syntax for getting data from a .env file is supported in many remote environments - like heroku. When I deploy an app to heroku, I can go into the settings of the app and put in the environment variables using the heroku dashboard UI - I don't have to figure out how to get a json file manually created, etc... (maybe there are other workarounds). After the variables are in place, I just use process.env.variableName to access the data.
Statistically comparing the two NPM packages (along with other similar solutions) might be the best way to decide for yourself.
At the time of this writing, dotenv is a much smaller package with greater support (actual contributors aside, only deduced by the number of remaining issues and immense popularity). It's also newer by 2.5 years and, if fanfare is important to you, has twice as many stars.
If you are targeting deployment of your application in Docker, then .env is 100% the way to go. The ability to use nested data in config.json is great, but you'll be looking at some PITA when you need to migrate that data to a .env to make deployment with Docker work. docker build and docker-compose both are designed to use .env natively, so if you start with that then it will facilitate a smooth path to "Dockerizing" your application.
I am currently porting an application to run in Docker which was written with no such forethought, and it is quite painful... lots of refactoring, and only for the nested stuff. The basic key-value properties are easy to migrate to a .env:
~$ cat config.json
{
  "PROTOCOL": "https",
  "HOST": "example.com",
  "PORT": 443
}
...
~$ cat .env
PROTOCOL="https"
HOST="example.com"
PORT=443
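To illustrate the difference in what the two formats hand your program, here is a deliberately naive .env parser sketch; real libraries like dotenv handle many more edge cases:

```javascript
// Naive .env parser sketch: every value comes back as a string,
// unlike JSON, which preserves numbers and nesting.
const parseEnv = (text) =>
  Object.fromEntries(
    text
      .split('\n')
      .filter((line) => line && !line.startsWith('#'))
      .map((line) => {
        const i = line.indexOf('=');
        return [line.slice(0, i), line.slice(i + 1).replace(/^"|"$/g, '')];
      })
  );

const env = parseEnv('PROTOCOL="https"\nHOST="example.com"\nPORT=443');
// env.PORT is the string '443', whereas config.json gave the number 443.
```

Note how PORT comes back as a string, which is exactly the typing gap raised below about JSON versus .env.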
I prefer JSON because TypeScript can infer types from it; this is not possible with a .env file.

How do I obtain the path of a file in a Meteor package?

I know how to get the current directory from a Meteor package, but how do I get the path of a specific file in the project?
node's __dirname and __filename don't work in Meteor.
It is complicated.
meteor run copies your project files to a tree of directories inside <project-dir>/.meteor/local/build, reorganizes them in non-obvious ways (e.g., the private subdirectory in the original tree becomes the assets subdirectory), and mixes them in with various npm modules to create a bundle that can be executed as a Node.js project. Indeed, to avoid duplication, a .gitignore file is automatically set up in the .meteor directory that tells git, if you use it for version control, not to copy the .meteor/local directory.
The original project directory gets watched in case you change a file. The change then gets copied into the current project build directory and the project rebuilt.
If you deploy to a remote system, the build gets copied to a server and then run.
process is usually a defined global server-side object, and works according to the node.js API, because the meteor server code is ultimately running in node.js.
So you can run console.log(process.cwd()); in your server-side to obtain the current working directory for the server process, usually something like:
~/<meteor project directory>/.meteor/local/build/programs/server
This suggests that when meteor run is done locally, original project files are in ../../../../../, but don't use that as it may change in the future.
Instead, for the directory containing the original project files, you could use:
baseDir = process.cwd().replace(/\/\.meteor.*$/, '');
This will get the working directory, and truncate everything beginning with /.meteor
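As a sketch with a hypothetical working directory:

```javascript
// Sketch: truncate everything from /.meteor onward to recover the
// original project root (the cwd value here is hypothetical).
const cwd = '/home/user/myapp/.meteor/local/build/programs/server';
const baseDir = cwd.replace(/\/\.meteor.*$/, '');
console.log(baseDir); // → /home/user/myapp
```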
This won't work for a server deploy, though, because the original project tree is not needed on the server, only the build. Files that aren't intended to be client or server code can be placed in the private subdir, which, as I mentioned, becomes the assets subdir in the build. Current ways to find files in the build are either manual inspection of .meteor/local in a local run, or use of a JS library that calls or imitates GNU find.
Since you mentioned packages, I note that in the build, server-side package code finally ends up in:
~/<project-dir>/.meteor/local/build/programs/server/packages
and client side in:
~/<project-dir>/.meteor/local/build/programs/web.browser/packages

Concatenate NPM package into one JS file

I am trying to get Swig (the template language) working on Parse Cloud Code with Express. Parse Cloud Code is a Node/Express host that doesn't allow NPM. Ridiculous, I know. I can still load external files into code with require statements, however, so I think there's hope I can get this working.
So my question is how do I get the whole entire Swig package into a single JS file that I can include from my Parse Express app like so:
var swig = require("./cloud/swig.js");
Worth noting that Parse breaks normal require statements, so the NPM package as-is doesn't work without modifying every single file in the node_modules folder to have cloud in its path (which is why my path above has cloud in it). Parse also chokes while uploading lots of small files. Concatenation is a need on this platform.
I have tried playing with browserify for hours, but nothing I do exposes the Swig object when I load the browserified file with the require statement. I think it may be the right option, since the browserified file includes all the files from Swig, but it doesn't expose them externally.
My question is: can this be done with browserify, and if so, how? Or is there another way to concatenate an NPM package down to one file so it can be more easily included from this platform?
Thanks so much.
Browserify is not the right tool for the job.
As the name implies, browserify is intended to be used to generate files you want to execute in the browser. It walks the require calls from an entrypoint (i.e. some JS file you pass to browserify) and bundles them in an object that maps their names to functions wrapping the modules. It does not expect a require function to already exist and doesn't make any use of it. It substitutes its own implementation of require that only does one thing: look up names from the bundle, execute the matching function and return its exports.
You could theoretically require a browserify bundle, but it would just return an empty object (although it might mess with globals). And in all likelihood it might break because the bundled modules think they are being executed in a browser. This won't do any good.
The only sane option, if you want to stick with the host, is to copy over the node_modules folder from your local project folder. This may not work if your computer and the server are not 100% compatible (e.g. 32-bit vs 64-bit, Debian vs RedHat, OSX/Windows vs Linux), but this mostly depends on your exact dependencies (basically anything that is built with node-gyp can be a problem).
Node.js uses the node_modules folder when looking up dependencies in require calls automagically. If you can somehow get a node_modules folder with the right contents on the server, require("foo") will work as long as node_modules contains a module foo.
Ultimately, you are trying to use npm modules in Parse Cloud code and currently it's not possible:
https://parse.com/questions/using-npm-modules-in-cloud-code
But if you are only trying to use Swig, then as a work-around, you can consider using underscore template instead. Parse already includes underscore:
https://parse.com/docs/cloud_modules_guide#underscore
