knexfile.js does not read Dotenv variables - javascript

So I am trying out Knex.js and the first setup worked like a charm. I set up my connection, created a data structure, and in my terminal ran $ knex migrate:latest.
It all worked fine: the migrated tables showed up in my database, and running the migrate again returned "Already up to date".
Now here is where I get an issue: using Dotenv. Here is my code:
require('dotenv').config();

module.exports = {
  development: {
    client: process.env.DB_CLIENT,
    connection: {
      host: process.env.DB_HOST,
      user: process.env.DB_ROOT,
      password: process.env.DB_PASS,
      database: process.env.DB_NAME,
      charset: process.env.DB_CHARSET
    }
  }
};
As far as I can see there is nothing wrong with it, and when I run the script through Node no errors show up.
Then I wanted to check if I could still do a migrate, and I get the following error:
Error: ER_ACCESS_DENIED_ERROR: Access denied for user ''#'[MY IP]'
(using password: YES)
I am using the same vars, only this time from my .env file. But looking at the error, nothing is loaded from it; and yes, both knexfile.js and .env are in the root of my project :) Among the things I tried was setting the path in different ways within require('dotenv').config(), but then dotenv would throw an error, meaning the file was already being loaded correctly.
Can anyone help me figure this out?

So after some trial and error I finally figured out what was wrong. I don't know what caused it, but somehow the install of Knex wasn't done properly.
I uninstalled and reinstalled Knex (both local and global): first I installed it at the global level and then as a project dependency. After that I initialized Knex again ($ knex init) and started from the ground up.
I think the order of installing Knex matters (or mattered in my case), but I am still not sure why, because I could not find any info about it, and I am not even sure what I did wrong the first time.
As a side note
If you are new to Knex and just blindly follow a random tutorial/article and create a new file for Knex by hand (i.e. knexfile.js), Knex will still work, but other packages could fail to execute properly. This is what I don't see in most articles I found: read the documentation on how to generate the files needed (migrations and seeds). Most articles don't cover these steps properly.
Hope this is worth anything

Related

Strapi CMS Get Bad Request when starting Admin Panel

I have some really strange behavior. I dockerized my Strapi CMS app and everything worked fine until a few days ago. Now I cannot start the application without getting the following error on loading:
In console:
[2022-04-25 11:40:11.922] error: Malicious Path BadRequestError: Malicious Path
at resolvePath (MY_PATH/backend/node_modules/resolve-path/index.js:78:11)
In Browser:
It calls the following URL:
http://0.0.0.0:1337//admin/init
I noticed the // after the port. When I curl http://0.0.0.0:1337/admin/init I get the following response:
{"data":{"uuid":"SOME_UUID","hasAdmin":true}}
But that doesn't help me.
This is my server.js:
module.exports = ({ env }) => ({
  host: env('HOST'),
  port: env.int('BACKEND_PORT'),
  //url: env("PUBLIC_URL"),
  app: {
    keys: env.array('APP_KEYS'),
  },
});
In my .env file I set the following:
HOST=localhost
BACKEND_PORT=1337
It would be awesome if anyone could help me; I have been stuck with this problem for a few days. :/
Thank you!
After testing a lot of stuff I was, as we Germans like to say, "at the end of my Latin", meaning "I have absolutely no idea what is going on here". So at some point I thought this was a bug within Strapi. After a lot of weird hooking into my node_modules and editing URLs there, I realized that when I use docker-compose to spin up my containers there is some kind of cache, even if I rebuild the images.
So I started Docker Compose with:
docker-compose up --build -d
It turned out that a few days ago I had added a trailing slash to the STRAPI_ADMIN_BACKEND_URL and/or PUBLIC_URL environment variable, and it had been cached the whole time.
So if you have the same problem, make sure that the .env file inside your Docker container is correct, and run compose with the --build flag so that everything is rebuilt fresh until you have fixed your bug.
You can also exec -it into the container and check with nano what the .env file looks like.
Hope this saves someone some time.
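The double slash in //admin/init above is exactly what plain string concatenation produces when the configured base URL already ends in a slash. A small sketch (joinUrl is a hypothetical helper, not Strapi code):

```javascript
// Naive concatenation reproduces the bug: a base URL configured with a
// trailing slash, joined to an absolute path, yields "//".
const misconfigured = 'http://0.0.0.0:1337/' + '/admin/init';

// Hypothetical helper that trims redundant slashes before joining.
function joinUrl(base, path) {
  return base.replace(/\/+$/, '') + '/' + path.replace(/^\/+/, '');
}
const fixed = joinUrl('http://0.0.0.0:1337/', '/admin/init');

console.log(misconfigured); // http://0.0.0.0:1337//admin/init
console.log(fixed);         // http://0.0.0.0:1337/admin/init
```

This is why dropping the trailing slash from the environment variable (and rebuilding so the change actually takes effect) fixes the request path.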

Firebase secrets not defined in process.env

I'm writing a Firebase function with a Cloud Storage trigger, like this:
const functions = require('firebase-functions')
const doSomethingWithSecrets = require('./doSomethingWithSecrets')

const doSomethingWhenUploaded = functions.runWith({
  secrets: ["MY_SECRET_1", "MY_SECRET_2", "MY_SECRET_3"]
}).storage.object().onFinalize(o => {
  functions.logger.debug([
    process.env.MY_SECRET_1, // undefined
    process.env.MY_SECRET_2, // undefined
    process.env.MY_SECRET_3  // undefined
  ])
  doSomethingWithSecrets(process.env.MY_SECRET_1, process.env.MY_SECRET_2, process.env.MY_SECRET_3)
  // Error: Invalid secret.
})
All three of them return undefined. I've made sure that they're properly set; they show up both when using firebase functions:secrets:access MY_SECRET_1 and in the Google Cloud Console.
What's wrong?
Additional info
I previously used this with only one secret and it worked. I don't know what happened; I'm using nvm and lost track of which Node version I used when it worked, so that may be a clue.
process.env returns all the other env vars as normal, but none of my secrets show up.
Update your firebase-tools and the issue will resolve itself. I was dealing with this issue all day today and found a GitHub issue for it that is fixed in the latest release, 10.9.2.
npm install -g firebase-tools
https://github.com/firebase/firebase-tools/issues/4540
https://github.com/firebase/firebase-tools/issues/4459
I encountered the exact same issue. After much suffering I finally got it to work: when you start using secrets in your functions, you need to redeploy all functions, NOT just the individual function.
E.g.
firebase deploy --only functions:myFunc
results in my secret values coming through as 'undefined' when accessed from my function using runWith(). However, after doing a full functions deploy
firebase deploy --only functions
everything worked.

FOSJSRouting callback=fos.Router.setData & Route not found in production only

I am developing a page within Symfony 4 that requires the FOSJsRouting bundle. In my dev environment, using Docker, I got it working fine using the steps below.
However, in my prod environment I keep getting the errors:
- http://url/js/routing?callback=fos.Router.setData 500 (Internal Server Error)
- router.min.js:1 Uncaught Error: The route "get_coinTicker_from_platform" does not exist.
My steps to get it working in DEV:
$ composer require friendsofsymfony/jsrouting-bundle
Adding the following to routes.yaml:
fos_js_routing:
    resource: "@FOSJsRoutingBundle/Resources/config/routing/routing.xml"
Adding the following to my base.html.twig
<script src="{{ asset('bundles/fosjsrouting/js/router.min.js') }}"></script>
<script src="{{ path('fos_js_routing_js', { callback: 'fos.Router.setData' }) }}"></script>
This was sufficient to get my exposed routes to work:
/**
 * @Route("/ticker/{coin}/{plat}", name="get_coinTicker_from_platform", options={"expose"=true})
 */
Then in my JavaScript I did:
$.ajax({
  method: 'POST',
  url: Routing.generate('get_coinTicker_from_platform', {coin: coin.val(), plat: exch.val()})
}).done(function(data) {
  $('.loader').hide();
});
I installed the routing bundle using Composer on my Linux server and even tried the steps included in the docs to publish assets as well as dump the routes, like so:
bin/console fos:js-routing:dump --format=json --target=public/js/fos_js_routes.json
I have checked the Symfony and Apache logs; nothing there hints at this issue. Everything else is running fine, just FOSJsRouting is causing trouble.
Also, I tried:
npm install fos-routing --save
This actually temporarily solved the issue, but the next day, after I did another rsync from my local repository, it was broken again.
I had the same problem, and it was solved by setting the right permissions (rwxrwxrwx) on the var/cache/prod folder.
I am a fair bit late to the party, but anyway, let's get into it.
I think your issue, as hinted at by @Samiul Amin Shanto, comes from cached content that is not wiped before dumping the routes.
In the prod environment, Symfony caches the router. So if you do not wipe the cache and you update your code with, for example, new actions in your controller, they won't be available at all, because the cached router does not reference them yet.
Hopefully you somehow managed to find this out, because at some point you did run cache:clear --env=prod.
I'm just putting this response here for any other internet user who might come across this question.
Take care.
Just in case someone has the same issue: I also spent about two hours on this. The actual issue is the permissions on the var folder inside your Symfony project. Just refer to the official Symfony docs for setting the right permissions.

Removing Migrations With Knex.js In My Node.js Application

I am trying to get Knex working in my Node.js application. I was following a tutorial and at some point created a table, but could not repeat the process. I removed the table and deleted all the migration folders. At this point I started over, but after creating a new migration and then running knex migrate:latest, I get an error saying the migration directory is corrupt because the original migration I had is missing.
I was under the impression that if the file is missing, Knex should not know it was ever there.
What is the proper way to remove a migration from my project?
knexfile.js
module.exports = {
  development: {
    client: 'pg',
    connection: {
      host: '127.0.0.1',
      user: 'postgres',
      password: 'password',
      database: 'myDatabase'
    },
    pool: {
      min: 10,
      max: 20
    },
    migrations: {
      directory: __dirname + '/db/migrations'
    },
    seeds: {
      directory: __dirname + '/db/seeds/development'
    }
  }
};
db.js
var config = require('../knexfile.js');
var env = 'development';
var knex = require('knex')(config[env]);
module.exports = knex;
console.log('Getting knex');
knex.migrate.latest([config]);
console.log('Applying migration...');
Running this gives the error:
knex migrate:latest
Using environment: development
Error: The migration directory is corrupt, the following files are missing: 20161110130954_auth_level.js
but this migration does not exist because I deleted it.
You had to roll back the migration (knex migrate:rollback) before deleting the file. I guess what you can do is:
touch [full_path_to_migrations_here]/migrations/20161110130954_auth_level.js
knex migrate:rollback
rm [full_path_to_migrations_here]/migrations/20161110130954_auth_level.js
Reference here
https://github.com/tgriesser/knex/issues/1569
Your migration files have been deleted, but they are still referenced in a table called "migrations" (see the update below), which was generated by Knex. You should be able to check it by connecting to your local database.
I faced the same problem and solved it by removing the records corresponding to my deleted migration files:
delete from migrations
where migrations."name" in ('20191027220145_your_migration_file.js', ...);
EDIT: The migrations table name might change according to the options or to the version you use. The constant is set here and used there.
To be sure of the name, you could list all tables as suggested by @MohamedAllal.
Hope it helps
To remove a migration, you can either roll back and then remove the file, or you can skip the rollback and follow the steps below.
(The steps below also answer the following error, which you may already have seen:)
Error: The migration directory is corrupt
This happens when the migration files are deleted while the records in the migration table created by Knex remain there. So simply clear those records (remove the files then clear the records, or clear the records then remove the files).
Note that the migration table is called knex_migrations (I don't know if it was different in the past, so it is better to list the database tables to make sure). I'm using Postgres, so in psql I list the tables with \d.
You can delete the records with raw SQL (using your database terminal client), using Knex itself, or by any other means (an editor client such as pgAdmin or MySQL Workbench).
Raw SQL:
DELETE FROM knex_migrations
WHERE knex_migrations."name" IN ('20200425190608_yourMigFile.ts', ...);
Note you can copy-paste the file names from the error message (you can trigger it by trying to migrate), e.g. 20200425190608_creazteUserTable.ts, 20200425193758_createTestTestTable.ts from:
Error: The migration directory is corrupt, the following files are missing: 20200425190608_creazteUserTable.ts, 20200425193758_createTestTestTable.ts
Copy-paste, and it's fast.
Using Knex itself:
knex('knex_migrations')
  .delete()
  .whereIn('name', ['20200425190608_yourMigFile.ts', ...]);
Create a script, call your Knex instance, and it's done.
After cleaning up, the migrations will run nicely and your directory is no longer corrupt.
Happy coding!
Removing a migration (bring it down, then remove it):
What is the right way to remove a migration? Bring that one migration down and then remove its file.
Illustration:
$ knex migrate:down "20200520092308_createUsersWorksTable.ts"
$ rm migrations/20200520092308_createUsersWorksTable.ts
You can list the migrations to check, as below:
$ knex migrate:list
(available from v0.19.3; if it is not available, update Knex (npm i -g knex))
Altering a migration in place (don't):
If you are like me and like to update the base migration directly, you may think about creating an alter migration, running it, then removing it: a fast flow where you make the change in the base creation migration, copy-paste it into the newly created alter migration, run it, and remove it.
If you're thinking that way: don't. You can't roll back, because those changes are exactly what you want; you can't cancel them. You can do it, but then you have to clear the records, or you'll get the error again, and that's just not clean.
It is better to create an alter script, not a migration file, and run it directly. My preference is to create an alter.ts (.js) file in the database folder, write the alter schema code there, and add an npm script to run it. Each time, you just modify it and run it.
Here is the base skeleton:
import knex from './db';

(async () => {
  try {
    const resp = await knex.schema.alterTable('transactions', (table) => {
      table.decimal('feeAmount', null).nullable();
    });
    console.log(resp);
  } catch (err) {
    console.log(err);
  }
})();
Even better, with VS Code I just use Run Code (if the Code Runner extension is installed, which is a must). If there are no errors, it ran well, and you can check the response.
You can look up the schema API in the docs.
Also, to alter existing columns you'll need to use the alter() method, as in the snippet below, which I took from the docs:
// ________________ alter fields
// drops previous default value from column, change type
// to string and add not nullable constraint
table.string('username', 35).notNullable().alter();
// drops both not null constraint and the default value
table.integer('age').alter();
Now happy coding!
Try the option below; it will do exactly what its name suggests:
migrations: {
  disableMigrationsListValidation: true,
}
knex.js migration API
Since you are in development, and your seed and migration files can recreate the tables and data, you can just drop and recreate the database and then re-run the available Knex migrations.
My development environment was using SQLite, so rm -rf dev.sqlite3 in the terminal fixed this.
With PostgreSQL it will be dropdb "db_name" in the terminal, or see this for more alternatives.
This is simple, and there is no need to do a rollback afterwards.
The record of your old migrations is gone, and you can recreate everything with a new knex migrate:latest.
If you are using PostgreSQL on Heroku, navigating to the database credentials and clicking the "Reset Database" button works.
You can also reset the database by running this command:
heroku pg:reset --confirm app-name
Hot fix
The issue happens because the deleted migrations are still in the knex_migrations table in the database, so the knex_migrations table no longer matches the migration files in the working directory. You have to delete those migrations from the knex_migrations table:
delete
from knex_migrations km
where km."name" = 'deleted_migration.js'
In the graphical user interface you use to view the data in your database, you will find, alongside the tables you made, two extra tables named knex_migrations and knex_migrations_lock. Delete both of these tables, as well as the other tables you created, then run:
knex migrate:latest
Caution: this will remove all data from the database (that's fine if you are using it for development).
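If you prefer raw SQL over a GUI, the same cleanup might look like the fragment below. This assumes the default Knex table names; as the caution above says, the remaining application tables have to go too before re-running the migrations:

```sql
-- Remove Knex's bookkeeping tables (drop your own tables as well
-- before re-running knex migrate:latest).
DROP TABLE IF EXISTS knex_migrations, knex_migrations_lock;
```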

MONGO_URL for running multiple Meteor apps on one server

I have one Meteor application running on my Ubuntu server (Digital Ocean). I use Meteor Up (MUP) to deploy and keep the app running. Everything works fine.
However, when I try to deploy a second app on the same server, something goes wrong in connecting to MongoDB. I get a long, unreadable error message that starts with "Invoking deployment process: FAILED" and ends with:
Waiting for MongoDB to initialize. (5 minutes)
connected
myapp start/running, process 25053
Waiting for 15 seconds while app is booting up
Checking is app booted or not?
myapp stop/waiting
myapp start/running, process 25114
And the app refuses to run. I have tried a number of things to fix this and will edit this post if more info is requested, but I'm not sure what's relevant. Essentially, I don't understand the error message, so I need to know what the heck is going on.
EDIT:
I want to add that my app runs fine if I go into the project folder and use the "meteor" command. Everything runs as expected. It is only when I try to deploy it for long-term production mode with MUP that I get this error.
EDIT:
I moved on to trying mupx instead of mup. This time I can't even get past the installation process; I get the following error message:
[Neal] x Installing MongoDB: FAILED
-----------------------------------STDERR-----------------------------------
Error response from daemon: no such id: mongodb
Error: failed to remove containers: [mongodb]
Error response from daemon: Cannot start container c2c538d34c15103d1d07bcc60b56a54bd3d23e50ae7a8e4f9f7831df0d77dc56: failed to create endpoint mongodb on network bridge: Error starting userland proxy: listen tcp 127.0.0.1:27017: bind: address already in use
But I don't understand why! mongod is clearly already running on port 27017, and a second application should just add a new database to that instance, correct? I don't know what I'm missing here or why MUP can't access MongoDB.
It's tricky to see what's going on here without your mup.json. Given what you said, it looks like your second app's deployment tries to boot MongoDB over the first one, which is locked, so the MongoDB environment fails to boot and the deployment fails. You could tackle this in different ways:
If your objective is to share your MongoDB, point the MONGO_URL in your second mup.json at your first MongoDB instance. It's generally on one of the 2701X ports. As it's a shared DB, changes in one database could affect the other.
meteor-up oversees the deployment of your app from a meteor-nice-to-test thing to a node+mongodb environment. You can spawn another mongod instance with:
mongod --port 2701X --dbpath /your/dbpath --fork --logpath /log/path
on your DO server and then point MONGO_URL there.
Last but not least, mupx has Docker under the hood, so using mupx for your deployments should isolate both apps from each other.
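For the first two options, the second app's deployment config would set MONGO_URL explicitly instead of letting MUP install its own MongoDB. A hypothetical excerpt in the legacy mup.json format (the database name, port, and URL are placeholders; check the mup docs for your version's exact keys):

```json
{
  "setupMongo": false,
  "env": {
    "ROOT_URL": "http://example.com",
    "MONGO_URL": "mongodb://127.0.0.1:27017/secondapp"
  }
}
```

Pointing both apps at one mongod with different database names keeps them from fighting over who owns the MongoDB installation.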
