Is it possible to revert a specific migration in TypeORM? I want to revert only one particular migration, not every migration after it until I reach the one I want to revert.
Normally, if you want to revert multiple migrations, you just call typeorm migration:revert repeatedly; each call reverts the most recently executed migration and removes its entry from the database.
If you are really sure you want to revert a specific migration before some others, you might try tweaking its id value in the migrations table.
If you want to change something in a table and that change is not related to the last committed migration, you should write a new migration to make the change.
Reverting any migration is a last-resort operation available to you when things don't go as planned, but I find that most problems can be solved by moving forward with new migrations rather than by reverting.
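To illustrate the fix-forward approach, here is a minimal sketch of a new migration that undoes an earlier change instead of reverting it. The class name, table, and column are hypothetical examples, not anything from the original question:

```typescript
import { MigrationInterface, QueryRunner } from "typeorm";

// Hypothetical fix-forward migration: instead of reverting an earlier migration,
// a new migration applies the state we actually want going forward.
export class FixUserEmailColumn1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Example only: re-tighten a column that an earlier migration loosened.
    await queryRunner.query(
      `ALTER TABLE "user" ALTER COLUMN "email" SET NOT NULL`
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "user" ALTER COLUMN "email" DROP NOT NULL`
    );
  }
}
```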
Also, if you find you have too many migrations, rebase them: remove all existing migrations and generate a single base migration that creates the database in its current state. We find this useful to do after a long period of time, as migrations become redundant over time.
I have a very large table with hundreds of thousands of rows, each with a key and a timestamp.
I currently use a batch of AWS servers running Node.js and pg that query for the rows with the oldest last_updated timestamp, perform the work, and then update each row's timestamp with NOW().
My issue is that I am trying to find a way to scale to more processes and optimize how rows are checked out. I started with each process selecting 1000 rows at a time, each using a different multiple of 1000 as its OFFSET, doing the operation, and then updating the rows.
I started reading about SELECT using FOR UPDATE and SKIP LOCKED, but it seems like this could have some performance impact. I also can't find a clear way to do the SELECT and UPDATE in the same query; do I keep doing single updates at a time, similar to this post, which seems like it may not be good for a larger operation like this?
implementing an UPDATE on SELECT in Postgres
Has anyone approached this type of setup? I have also been debating whether I need to build my own middleware that manages a pool of work items in a table from which the workers select and delete.
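For reference, the single-statement checkout pattern the question asks about is usually written as an UPDATE whose WHERE clause wraps a SELECT ... FOR UPDATE SKIP LOCKED subquery. Below is a minimal sketch with node-postgres; the table name, column names, and batch size are assumptions, not anything from the question:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* environment variables

// Hypothetical worker: atomically claims a batch of the stalest rows.
// Rows already locked by other workers are skipped, so concurrent workers
// never grab the same rows and no OFFSET bookkeeping is needed.
async function claimBatch(batchSize = 1000) {
  const { rows } = await pool.query(
    `UPDATE items
        SET last_updated = NOW()
      WHERE id IN (
            SELECT id
              FROM items
             ORDER BY last_updated ASC
             LIMIT $1
               FOR UPDATE SKIP LOCKED)
      RETURNING id, payload`,
    [batchSize]
  );
  return rows; // process the claimed rows outside the query
}
```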
I'm wondering what is the best way to do an "on cascade copy/insert" of related elements in PostgreSQL.
I made a simple example to illustrate my use case:
Database Definition
Three entities: Version, Element, ElementEffect.
A version has many elements.
An element belongs to one version.
An Element has many element effects.
An ElementEffect belongs to one Element.
The Problem
Let's say that we have 1 version, with 1 element that has 1 effect, in the database.
For my use case, I need to be able to create a new version by copying all elements and element effects of the previous version, and then update some of them.
For example, if a new version, Version 2, is created:
The database should copy the existing element into a new one referencing the new version.
The existing element effect should also be copied into a new one referencing the new element.
A new version arrives. The new version has the same elements and effects as the previous version, plus one change: the element effect text changes from null to "lorem ipsum".
The operations that we need to do are:
Create a copy of all elements and their relationships related to the version (Version -> Element -> ElementEffect).
Apply the data updates that the new version introduces to the newly copied elements.
Question
What is the best way to achieve requisite 1 in PostgreSQL with/without Sequelize ORM and Node.js?
Does PostgreSQL have any built-in feature to make this possible?
Should I solve this at the database level (maybe with psql rules and triggers) or a code level with a node script making queries and transactions?
Solution Requisites
I'm building this on PostgreSQL with Sequelize as the ORM, driven by Node.js, so if I can build this using Sequelize, even better.
My use case is much more complex than the example. I have more than 15 entities, including many-to-many relationships, so the solution needs to scale over time; I do not want to have to re-test it every time I add a new entity or modify an existing one.
The best idea that comes to my mind is to create a stored procedure in PostgreSQL, something like
make_new_version(my_table, oldid, newid). This procedure must copy the rows in table my_table that reference the old id and replace oldid with newid. The function also needs to return the ids of the newly inserted rows, so the same function can be called again to copy the rows for the next entity.
Once you have tested and confirmed this stored procedure, you can call it from other procedures (almost) without further testing.
This way you solve the problem at the database layer.
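Here is a minimal sketch of what such a copy procedure could look like for the Version -> Element -> ElementEffect example, created and invoked from Node.js with node-postgres. The table and column names are assumptions, and a fully generic make_new_version(my_table, ...) would need dynamic SQL (EXECUTE format(...)) instead of the hard-coded tables used here:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// Hypothetical PL/pgSQL function that deep-copies one version's elements and
// their effects to a new version. Table and column names are assumptions.
const createCopyVersionFn = `
CREATE OR REPLACE FUNCTION copy_version(old_version_id integer, new_version_id integer)
RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
  old_element    record;
  new_element_id integer;
BEGIN
  FOR old_element IN
    SELECT id, name FROM element WHERE version_id = old_version_id
  LOOP
    -- Copy the element, pointing it at the new version.
    INSERT INTO element (version_id, name)
    VALUES (new_version_id, old_element.name)
    RETURNING id INTO new_element_id;

    -- Copy that element's effects, pointing them at the new element.
    INSERT INTO element_effect (element_id, text)
    SELECT new_element_id, text
    FROM element_effect
    WHERE element_id = old_element.id;
  END LOOP;
END;
$$;
`;

async function copyVersion(oldVersionId: number, newVersionId: number) {
  await pool.query(createCopyVersionFn); // safe to re-run thanks to OR REPLACE
  await pool.query("SELECT copy_version($1, $2)", [oldVersionId, newVersionId]);
}
```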
Is there a way of deleting a record after a specific time period in Sequelize.js?
I didn't manage to find anything like cron jobs in the docs; however, I stumbled across hooks, which I guess may be the solution, but I'm not sure...
My goal is to have records that are older than x minutes/hours/etc. deleted automatically.
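Since the question is what such an automatic deletion could look like, here is a minimal sketch of the periodic-cleanup approach using Sequelize's destroy with a timestamp condition. The model, column, and interval are assumptions; a cron job or a database-side scheduler such as pg_cron would be alternatives:

```typescript
import { Sequelize, DataTypes, Op } from "sequelize";

// Sketch only: Sequelize has no built-in TTL, so a common approach is a
// periodic cleanup job. Model name, column, and interval are assumptions.
const sequelize = new Sequelize(process.env.DATABASE_URL!);

const Session = sequelize.define("Session", {
  token: DataTypes.STRING,
});

const MAX_AGE_MS = 60 * 60 * 1000; // delete rows older than one hour

async function deleteExpiredRows() {
  await Session.destroy({
    where: {
      createdAt: { [Op.lt]: new Date(Date.now() - MAX_AGE_MS) },
    },
  });
}

// Run the cleanup every 10 minutes.
setInterval(deleteExpiredRows, 10 * 60 * 1000);
```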
I have been wondering how to write the down function in a migration file. Ideally, it should do exactly the opposite of what the up method does. Now suppose I wrote an up function to drop a unique constraint on a column, then added some new rows (containing duplicate data) to the table, and now I want to roll back the migration. Ideally, I would write a down method that adds the unique constraint back on the column, but the migration would not roll back because the table now contains duplicate data.
So my questions are -
What to do in such a situation?
How should I write the down function in migrations?
Can I keep the down function blank in such situations?
Thanks.
I usually don't write down functions at all and just leave them empty.
I never roll back migrations, and if I want to get to an earlier DB state I just restore the whole DB from a backup.
If I just want to put the unique constraint back, I write another up migration which fixes the duplicate rows and then adds the unique constraint back.
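As an illustration of that fix-forward style, here is a minimal sketch of such a migration (written with TypeORM and PostgreSQL SQL; the table, column, and constraint names are assumptions):

```typescript
import { MigrationInterface, QueryRunner } from "typeorm";

// Hypothetical fix-forward migration: remove duplicate rows, then restore the
// unique constraint that an earlier migration dropped. Names are examples only.
export class RestoreUniqueEmail1700000000001 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Keep the row with the lowest id for each email, delete the rest.
    await queryRunner.query(`
      DELETE FROM "user" a
      USING "user" b
      WHERE a.id > b.id AND a.email = b.email
    `);
    await queryRunner.query(
      `ALTER TABLE "user" ADD CONSTRAINT "UQ_user_email" UNIQUE ("email")`
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    // Kept simple: just drop the constraint again.
    await queryRunner.query(
      `ALTER TABLE "user" DROP CONSTRAINT "UQ_user_email"`
    );
  }
}
```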
I know that many people use rollback between tests to reset the DB, but that is a really slow way to do it.
I am a newbie with MongoDB.
Please help me with the following query:
I am using the oplog to add triggers for MongoDB operations. For insert/update operations, I receive complete information about all fields added/updated in the collection.
My problem is:
When I do delete operations in MongoDB, the oplog entry I receive contains ONLY the ObjectID.
Can somebody please point me to an example where I can receive complete information about all fields of the deleted document in the trigger?
Thanks
You have to fetch that document by its ObjectID, which will not be possible on the node you are tailing the oplog from, because by the time you receive the delete operation from the oplog, the document is already gone. I believe that means you have two choices:
Make sure that all deletes are preceded by an update operation that lets you see the document fields you require prior to deletion (this will make deletes more expensive, of course); see the sketch below.
Run a secondary with a slave delay and then query that node for the document that has been deleted (either directly or by using tags).
For option 2, the issue is choosing a delay that is long enough to guarantee that you can fetch the document, yet short enough to make sure you are getting an up-to-date version of it. Unless you add versioning to the document as a check (which starts to resemble option 1, since you would likely want to update the version before deleting), this is essentially an optimistic, best-effort solution.
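To make option 1 concrete, here is a minimal sketch using the Node.js MongoDB driver; the database, collection, and marker field are assumptions:

```typescript
import { MongoClient } from "mongodb";

// Sketch of option 1: mark the document with an update right before deleting it,
// so the oplog tailer gets a chance to read the document (or the fields set here)
// before the delete lands. Collection and field names are assumptions.
async function deleteWithSnapshot(uri: string, id: unknown) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const users = client.db("app").collection("users");

    // This update appears in the oplog first; the full document still exists
    // at this point, so the trigger code can fetch it.
    await users.updateOne({ _id: id }, { $set: { pendingDelete: new Date() } });

    // The actual delete follows; its oplog entry will still contain only the _id.
    await users.deleteOne({ _id: id });
  } finally {
    await client.close();
  }
}
```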