I am trying to get knex working in my Node.js application. I was following a tutorial and at some point created a table, but could not repeat the process. I removed the table and deleted all the migrations folders. At this point I started over, but after creating a new migration and then running knex migrate:latest, I get an error saying the migration directory is corrupt because the original migration I had is missing.
I was under the impression that if the file is missing, knex should not know it was ever there.
What is the proper way to remove a migration from my project?
knexfile.js
module.exports = {
  development: {
    client: 'pg',
    connection: {
      host: '127.0.0.1',
      user: 'postgres',
      password: 'password',
      database: 'myDatabase'
    },
    pool: {
      min: 10,
      max: 20
    },
    migrations: {
      directory: __dirname + '/db/migrations'
    },
    seeds: {
      directory: __dirname + '/db/seeds/development'
    }
  }
};
db.js
var config = require('../knexfile.js');
var env = 'development';
var knex = require('knex')(config[env]);
module.exports = knex;
console.log('Getting knex');
knex.migrate.latest(); // the config argument is optional here
console.log('Applying migration...');
Running this gives the error:
knex migrate:latest
Using environment: development
Error: The migration directory is corrupt, the following files are missing: 20161110130954_auth_level.js
but this migration does not exist because I deleted it.
You should have rolled back the migration (knex migrate:rollback) before deleting the file. What you can do now is recreate an empty file with the same name, roll back, and then delete it:
touch [full_path_to_migrations_here]/migrations/20161110130954_auth_level.js
knex migrate:rollback
rm [full_path_to_migrations_here]/migrations/20161110130954_auth_level.js
Reference: https://github.com/tgriesser/knex/issues/1569
Your migration files have been deleted, but they are still referenced in a table called "migrations" (see the update below), which was generated by knex.
You should be able to check it by connecting to your local database.
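For example, with a quick query (the table may be named knex_migrations depending on your setup; see the EDIT below):
select * from migrations;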
I faced the same problem and solved it by removing the records corresponding to my deleted migration files.
delete from migrations
where migrations."name" in ('20191027220145_your_migration_file.js', ...);
EDIT: The migrations table name might change according to the options you pass or to the knex version you use; the default name is a constant defined in the knex source. To be sure of the name, you could list all tables as suggested by @MohamedAllal.
Hope it helps
To remove a migration, you can either roll back first (roll back, then remove the file), or skip the rollback and clean up the migration records directly, as described below. (The steps below also answer the following error, which you may already be seeing:)
Error: The migration directory is corrupt
This happens when the migration files are deleted while the records in the migration table created by knex remain there. So simply clear them up (remove the files then clear the records, or clear the records then remove the files).
Important to note: the migration table is named knex_migrations by default (I don't know if it was different in the past, so better list the database tables to make sure). I'm using Postgres; in psql, \d lists the tables, and the knex migration tables show up in the output.
You can do it with raw SQL using your database terminal client, with knex itself, or by any other means (a GUI client such as pgAdmin or MySQL Workbench).
Raw SQL
DELETE FROM knex_migrations
WHERE knex_migrations."name" IN ('20200425190608_yourMigFile.ts', ...);
Note that you can copy the file names straight from the error message (if you get it), e.g. 20200425190608_creazteUserTable.ts, 20200425193758_createTestTestTable.ts, from:
Error: The migration directory is corrupt, the following files are missing: 20200425190608_creazteUserTable.ts, 20200425193758_createTestTestTable.ts
Copy, paste, and it's fast. (You can trigger the error by trying to migrate.)
Using knex itself
await knex('knex_migrations')
  .whereIn('name', ['20200425190608_yourMigFile.ts', ...])
  .del();
Create a script that calls your knex instance, and you're done.
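For instance, a minimal cleanup script (assuming ./db exports your configured knex instance, as in the question's db.js):
// cleanup.js
const knex = require('./db');

(async () => {
  // delete the orphaned record(s) for the file(s) you removed
  await knex('knex_migrations')
    .whereIn('name', ['20200425190608_yourMigFile.ts'])
    .del();
  await knex.destroy(); // close the pool so the script exits
})();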
After cleaning up, the migrations will run nicely and your directory will no longer be reported as corrupt. Happy coding!
Removing a migration (bring it down, then remove it)
What is the right way to remove a migration? Bring that one migration down, then remove its file:
Illustration
$ knex migrate:down "20200520092308_createUsersWorksTable.ts"
$ rm migrations/20200520092308_createUsersWorksTable.ts
You can list the migrations to check, as below:
$ knex migrate:list
(available from v0.19.3; if it isn't available, update knex: npm i -g knex)
Altering a table with a temporary alter migration (no)
If you are like me and like to update the table definition directly in the base migration, you may think about creating an alter migration, running it, then removing it: a fast flow where you make the update in the base creation migration, copy it into the newly created alter migration, run it, and remove it.
If you're thinking that way, don't! You can't roll it back, because the changes are exactly what you want; you can't cancel them. You can do it, but then you have to clear the records, or you'll get the error above, and that's just not cool.
Better to create an alter script, not a migration file, and run it directly. My preference is to create an alter.ts (.js) file in the database folder, write the alter-schema code there, and create an npm script to run it. Each time, you just modify it and run it.
Here is the base skeleton:
import knex from './db';

(async () => {
  try {
    const resp = await knex.schema.alterTable('transactions', (table) => {
      table.decimal('feeAmount', null).nullable();
    });
    console.log(resp);
  } catch (err) {
    console.log(err);
  } finally {
    await knex.destroy(); // close the pool so the script exits
  }
})();
With VS Code, I just use Run Code (if the Code Runner extension is installed, which is a must). If there are no errors, it ran well, and you can check the response. You can look up the schema API in the knex docs.
Also, to alter a column you'll need to use the alter() method, as in the snippet below that I took from the docs:
// ________________ alter fields
// drops previous default value from column, changes type
// to string and adds not nullable constraint
table.string('username', 35).notNullable().alter();
// drops both the not null constraint and the default value
table.integer('age').alter();
Now happy coding!
Try the option below; it does exactly what its name suggests:
migrations: {
  disableMigrationsListValidation: true
}
See the knex.js migration API documentation.
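For context, a sketch of where this option sits in a knexfile.js (the directory value is taken from the question's config):
module.exports = {
  development: {
    client: 'pg',
    connection: { /* ... */ },
    migrations: {
      directory: __dirname + '/db/migrations',
      disableMigrationsListValidation: true // skip the missing-files check
    }
  }
};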
Since you are in development, and your seeds and migration files can recreate the tables and data, you can simply drop the database and recreate it using the available knex migrations.
My development environment was using SQLite, so rm -rf dev.sqlite3 in the terminal fixed this.
With PostgreSQL it is dropdb "db_name" in the terminal; see the PostgreSQL docs for more alternatives.
This is simple and there is no need to do a rollback after that.
The record of your old migrations is gone, and you can recreate everything with a fresh knex migrate:latest.
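For example, with PostgreSQL and the database name from the question's knexfile, the full cycle is roughly:
dropdb myDatabase
createdb myDatabase
knex migrate:latest
knex seed:run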
If using PostgreSQL on Heroku, navigating to the database credentials and clicking the "Reset Database" button works.
You can reset the database by running this command.
heroku pg:reset --confirm app-name
Hot fix
The issue happens because the deleted migrations are still in the knex_migrations table in the database, so the knex_migrations table no longer matches the migration files in the working directory. You have to delete those migrations from the knex_migrations table:
delete
from knex_migrations km
where km."name" = 'deleted_migration.js'
Go to the graphical user interface you use to inspect the database. Alongside the tables you made, there will be two other tables named knex_migrations and knex_migrations_lock. Delete both of these tables, along with the other tables you created, then run:
knex migrate:latest
Caution: this will remove all data from the database (that's fine if you are using it for development).
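If you prefer raw SQL to a GUI, a sketch assuming the default knex table names (add your own tables to the list as needed):
drop table if exists knex_migrations, knex_migrations_lock cascade;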
Related
I have a project that I work on locally and then deploy to the cloud. When we work locally we need to change some files for things to work, and then restore those files before pushing with git.
For example, one file:
'use strict';
const mysql = require('mysql');

/* const dbConn = mysql.createPool({
  connectionLimit: 5,
  host: process.env.MYSQL_CONNECTION_STRING.split(':')[0],
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_CONNECTION_STRING.split('/')[1],
  charset: 'utf8mb4'
});

module.exports = dbConn; */

const dbConn = mysql.createPool({
  connectionLimit: 5,
  host: 'localhost',
  user: 'root',
  password: 'Mraixa2015L',
  database: 'tool_bbdd',
  charset: 'utf8mb4'
});

module.exports = dbConn;
Locally I use my local MySQL with my credentials; in the cloud, the other ones, and every time I have to comment and uncomment code to make it work. This is one example, but I need to do things like this in other files too. I thought it might be possible to list these files in the .gitignore file.
Inside my .gitignore
config/db.config.js
Then when I git push, these changes would not be uploaded and would not overwrite the data used in the cloud. Is this possible? Thanks.
The point is not about pushing. What will be pushed is what is in the commits, whatever that is. Git can't just remove a file from a commit when pushing. What you should care about is not committing the change in the first place.
If the file is already tracked, .gitignore makes no difference. You can ask git to ignore a tracked file with git update-index --assume-unchanged.... Or, what I do sometimes, is keep those changes in a private branch, so you can "easily" apply and unapply them:
git show X/some-private-branch | git apply # boom! I have my changes there
# when I want to remove them
git show X/some-private-branch | git apply -r # The change is gone
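For the --assume-unchanged route mentioned above, using the file from the question, that would look like:
git update-index --assume-unchanged config/db.config.js
# and to track changes again later:
git update-index --no-assume-unchanged config/db.config.js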
The situation here is that you want to ignore some changes to tracked files with Git. Unfortunately, as the Git FAQ mentions, there's no way to do that with Git. Specifically, using git update-index for this purpose doesn't work properly:
It’s tempting to try to use certain features of git update-index, namely the assume-unchanged and skip-worktree bits, but these don’t work properly for this purpose and shouldn’t be used this way.
The easiest way to solve this problem is to include two separate files, one with the production values and one with the development values, both with names different from the actual desired location. Then use a script to copy the correct file into the desired location, which is ignored (and not tracked). You can even have that script adjust the development or production values to include additional data based on things like environment variables.
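A minimal sketch of such a script in Node; the file names are assumptions, and only the copy target config/db.config.js would be listed in .gitignore:
// select-config.js
const fs = require('fs');
const env = process.env.NODE_ENV === 'production' ? 'production' : 'development';

// db.config.production.js and db.config.development.js are both tracked;
// the copy target config/db.config.js is ignored and never committed.
fs.copyFileSync(`config/db.config.${env}.js`, 'config/db.config.js');
console.log(`Copied config/db.config.${env}.js -> config/db.config.js`);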
I uploaded my repo, and it contains a database string named 'dbstring' which I do not want to share with anyone.
I created a repository secret on GitHub named DBSTRING and gave it the value, but I don't know how to access it.
This is my uploaded code, which reveals my dbstring:
const dbstring = 'mongodb+srv:/***********b.net';

mongoose.connect(dbstring, { useUnifiedTopology: true, useNewUrlParser: true });
const db = mongoose.connection;
db.once('open', () => {
  console.log('Database connected:', dbstring);
});
How can I replace dbstring with the secret value I created on my GitHub repo?
What you need to do is use environment variables, where you have a .env file (if you use dotenv) for each environment. You keep your database credentials safe on your computer and on the server, and this also makes it possible to target different environments like production, dev, and test databases. Make sure the .env file is added to your .gitignore.
It's also important that this code is executed on the server side; otherwise anyone with the dev tools open will be able to see the credentials as well. On your client side, you then make a request (e.g. with axios) to the URL backed by that database connection.
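A minimal sketch, assuming a DBSTRING entry in a git-ignored .env file:
// .env (never committed): DBSTRING=mongodb+srv://...
require('dotenv').config();
const mongoose = require('mongoose');

mongoose.connect(process.env.DBSTRING, { useUnifiedTopology: true, useNewUrlParser: true });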
If the .env file works for you, you can also encrypt it before uploading it to GitHub: create an env-production file, encrypt it, and decrypt it when you use the repo. You can also add that step to your CI/CD pipeline.
I'm using Knex.js to manage migrations and seeds in my project, with connection options managed by the --env switch to individual commands.
How can I ensure that the seed commands like knex seed:run are never run against the production environment?
I solved this by giving it a seeds directory that doesn't exist. If seeds are then run in production, the command crashes with a "no such file or directory" error.
As an example here is my knexfile.js:
module.exports = {
  ...,
  production: {
    client: 'pg',
    connection: ...,
    seeds: {
      directory: 'you-are-not-able-to-run-seeds-in-production'
    }
  }
}
This is how I do it:
In development, make sure to create an env variable like "APP_ENV=development"
In every seed file, I put:
if (process.env.APP_ENV !== "development") {
  console.error("Error: seeds can only be used in development");
  process.exit(1);
}
Done!
There is no built-in functionality in knex to prevent that. You can do it, for example, by not using seeds at all, or by adding code to the start of every seed which checks that the knex client configuration points to the dev database, and throws an error otherwise.
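A sketch of such a guard at the top of a seed file (the database name is a placeholder, and this assumes the connection is configured as an object rather than a string):
exports.seed = async (knex) => {
  const { database } = knex.client.config.connection;
  if (database !== 'my_dev_database') {
    throw new Error('Refusing to seed: this is not the development database');
  }
  // ...actual seed logic here
};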
So I am trying out knex.js, and the first setup worked like a charm. I set up my connection, created a data structure, and in my terminal ran $ knex migrate:latest.
It all worked fine: the migrated tables showed up in my database, and running the migrate again got Already up to date.
Now here is where I get an issue: Using Dotenv... Here is my code:
require('dotenv').config();

module.exports = {
  development: {
    client: process.env.DB_CLIENT,
    connection: {
      host: process.env.DB_HOST,
      user: process.env.DB_ROOT,
      password: process.env.DB_PASS,
      database: process.env.DB_NAME,
      charset: process.env.DB_CHARSET
    }
  }
};
As far as I can see there is nothing wrong with it, and when I run the script through node, no errors show up.
Then I wanted to check if I could still run a migration, and I get the following error:
Error: ER_ACCESS_DENIED_ERROR: Access denied for user ''#'[MY IP]'
(using password: YES)
I am using the same vars, only this time from my .env file, but looking at the error, nothing is loaded from it. And yes, both knexfile.js and .env are in the root of my project :) Among the things I tried was setting the path in different ways within require('dotenv').config(), but then it would throw an error from dotenv, meaning the file was already being loaded correctly.
Can anyone help me figure this out?
So after some trial and error I finally figured out what was wrong. I don't know what caused it, but somehow the install of Knex wasn't done properly.
I uninstalled and reinstalled Knex (local and global): first I installed it at the global level, then as a project dependency. After that I initialized Knex again ($ knex init) and started from the ground up.
I think, but I am still not sure why, because I could not find any info about it, that the order of installing Knex matters (or mattered in my case; I am not even sure what I did wrong the first time).
On the side
If you are new to Knex and blindly follow a random tutorial/article and just create the Knex files by hand (i.e. knexfile.js), Knex will still work, but other packages could fail to execute properly. This is what I don't see in most articles I found: read the documentation on how to generate the files needed (migrations and seeds), as shown below. Most articles don't cover these steps properly.
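For reference, the generator commands look like this (the names are just examples):
$ knex init                      # generates a knexfile.js
$ knex migrate:make create_users # generates a timestamped migration file
$ knex seed:make initial_data    # generates a seed file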
Hope this is worth anything
I'm using DynamoDB Local for my app, and it keeps deleting all the sample data every time I shut down the instance. Does anyone know why this happens?
I've tried to look it up, but I don't see anyone else has this issue.
I've used the downloadable version of DynamoDB, and I run this command: java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -inMemory to start an instance.
Am I missing anything? Thank you.
The DynamoDB Local usage documentation clearly states that if you use the -inMemory option, data is kept in memory and lost when you terminate the process. Take the -inMemory option out of your command.
If you use the -inMemory option, DynamoDB does not write any database files at all. Instead, all data is written to memory, and the data is not saved when you terminate DynamoDB.
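Without -inMemory, the command from the question would look something like this (-dbPath sets where the database file is written; using the current directory here is just an assumption):
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath .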
For Docker, this command worked for me:
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar