I'm currently seeding data with Sequelize.js and using hard-coded values for association IDs. This is not ideal, because I really should be able to do this dynamically, right? For example, associating users and profiles with a "has one" and "belongs to" association: I don't necessarily want to seed users with a hard-coded profileId. I'd rather do that in the profiles seeds after I create profiles, adding the profileId to a user dynamically once the profiles have been created. Is this possible, and is it the normal convention when working with Sequelize.js? Or is it more common to just hard-code association IDs when seeding with Sequelize?
Perhaps I'm going about seeding wrong? Should I have a one-to-one correspondence between seed files and migration files in Sequelize? In Rails, there is usually a single seeds file, which you have the option of breaking out into multiple files if you want.
In general, just looking for guidance and advice here. These are my files:
users.js
// User seeds
'use strict';

module.exports = {
  up: function (queryInterface, Sequelize) {
    /*
      Add altering commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkInsert('Person', [{
        name: 'John Doe',
        isBetaMember: false
      }], {});
    */
    var users = [];
    for (let i = 0; i < 10; i++) {
      users.push({
        fname: "Foo",
        lname: "Bar",
        username: `foobar${i}`,
        email: `foobar${i}@gmail.com`,
        profileId: i + 1
      });
    }
    return queryInterface.bulkInsert('Users', users);
  },

  down: function (queryInterface, Sequelize) {
    /*
      Add reverting commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkDelete('Person', null, {});
    */
    return queryInterface.bulkDelete('Users', null, {});
  }
};
profiles.js
// Profile seeds
'use strict';

var models = require('./../models');
var User = models.User;
var Profile = models.Profile;

module.exports = {
  up: function (queryInterface, Sequelize) {
    /*
      Add altering commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkInsert('Person', [{
        name: 'John Doe',
        isBetaMember: false
      }], {});
    */
    var profiles = [];
    var genders = ['m', 'f'];
    for (let i = 0; i < 10; i++) {
      profiles.push({
        birthday: new Date(),
        gender: genders[Math.round(Math.random())],
        occupation: 'Dev',
        description: 'Cool yo',
        userId: i + 1
      });
    }
    return queryInterface.bulkInsert('Profiles', profiles);
  },

  down: function (queryInterface, Sequelize) {
    /*
      Add reverting commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkDelete('Person', null, {});
    */
    return queryInterface.bulkDelete('Profiles', null, {});
  }
};
As you can see, I'm just using a hard-coded for loop in both files (not ideal).
WARNING: after working with Sequelize for over a year, I've come to realize that my suggestion is a very bad practice. I'll explain at the bottom.
tl;dr:
never use seeders, only use migrations
never use your sequelize models in migrations, only write explicit SQL
My other suggestion still holds up that you use some "configuration" to drive the generation of seed data. (But that seed data should be inserted via migration.)
vv DO NOT DO THIS vv
Here's another pattern, which I prefer, because I believe it is more flexible and more readily understood. I offer it here as an alternative to the accepted answer (which seems fine to me, btw), in case others find it a better fit for their circumstances.
The strategy is to leverage the sqlz models you've already defined to fetch data that was created by other seeders, use that data to generate whatever new associations you want, and then use bulkInsert to insert the new rows.
In this example, I'm tracking a set of people and the cars they own. My models/tables:
Driver: a real person, who may own one or more real cars
Car: not a specific car, but a type of car that could be owned by someone (i.e. make + model)
DriverCar: a real car owned by a real person, with a color and a year they bought it
We will assume a previous seeder has stocked the database with all known Car types: that information is already available and we don't want to burden users with unnecessary data entry when we can bundle that data in the system. We will also assume there are already Driver rows in there, either through seeding or because the system is in-use.
The goal is to generate a whole bunch of fake-but-plausible DriverCar relationships from those two data sources, in an automated way.
const {
  Driver,
  Car
} = require('models')

module.exports = {
  up: async (queryInterface, Sequelize) => {
    // fetch base entities that were created by previous seeders;
    // these will be used to create seed relationships
    const [drivers, cars] = await Promise.all([
      Driver.findAll({ /* limit ? */ order: Sequelize.fn('RANDOM') }),
      Car.findAll({ /* limit ? */ order: Sequelize.fn('RANDOM') })
    ])

    const fakeDriverCars = Array(30).fill().map((_, i) => {
      // create new tuples that reference drivers & cars,
      // and which reflect the schema of the DriverCar table
    })

    return queryInterface.bulkInsert('DriverCar', fakeDriverCars)
  },

  down: (queryInterface, Sequelize) => {
    return queryInterface.bulkDelete('DriverCar', null, {})
  }
}
That's a partial implementation; it omits some key details, because there are a million ways to skin that cat. Those pieces can all be gathered under the heading of "configuration," so let's talk about that now.
When you generate seed data, you usually have requirements like:
I want to create at least a hundred of them, or
I want their properties determined randomly from an acceptable set, or
I want to create a web of relationships shaped exactly like this
You could try to hard-code that stuff into your algorithm, but that's the hard way. What I like to do is declare "configuration" at the top of the seeder, to capture the skeleton of the desired seed data. Then, within the tuple-generation function, I use that config to procedurally generate real rows. That configuration can obviously be expressed however you like. I try to put it all into a single CONFIG object so it all stays together and so I can easily locate all the references within the seeder implementation.
Your configuration will probably imply reasonable limit values for your findAll calls. It will also probably specify all the factors that should be used to calculate the number of seed rows to generate (either by explicitly stating quantity: 30, or through a combinatoric algorithm).
As food for thought, here is an example of a very simple config that I used with this DriverCar system to ensure that I had 2 drivers who each owned one overlapping car (with the specific cars to be chosen randomly at runtime):
const CONFIG = {
  ownership: [
    [ 'a', 'b', 'c', 'd' ], // driver 1 linked to cars a, b, c, and d
    [ 'b' ],                // driver 2 linked to car b
    [ 'b', 'b' ]            // driver 3 has two of the same kind of car
  ]
};
I actually used those letters, too. At runtime, the seeder implementation would determine that only 3 unique Driver rows and 4 unique Car rows were needed, and apply limit: 3 to Driver.findAll, and limit: 4 to Car.findAll. Then it would assign a real, randomly-chosen Car instance to each unique string. Finally, when generating association tuples, it uses the string to look up the chosen Car from which to pull foreign keys and other values.
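To make that concrete, here is a minimal sketch (not the author's exact implementation) of how CONFIG.ownership could drive the tuple generation; the DriverCar column names (driverId, carId, color, purchasedYear) are assumptions:

// map each unique label ('a', 'b', ...) to one of the randomly-fetched Car rows
const labels = [...new Set(CONFIG.ownership.reduce((all, list) => all.concat(list), []))];
const carsByLabel = {};
labels.forEach((label, i) => { carsByLabel[label] = cars[i]; });

// one DriverCar row per (driver, label) pair in the config
const fakeDriverCars = CONFIG.ownership.reduce((rows, carLabels, driverIdx) => {
  carLabels.forEach(label => rows.push({
    driverId: drivers[driverIdx].id,
    carId: carsByLabel[label].id,
    color: 'red',          // illustrative values
    purchasedYear: 2015,
    createdAt: new Date(),
    updatedAt: new Date()
  }));
  return rows;
}, []);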
There are undoubtedly fancier ways of specifying a template for seed data. Skin that cat however you like. Hopefully this makes it clear how you'd marry your chosen algorithm to your actual sqlz implementation to generate coherent seed data.
Why the above is bad
If you use your sequelize models in migration or seeder files, you will inevitably create a situation in which the application will not build successfully from a clean slate.
How to avoid madness:
Never use seeders, only use migrations
(Anything you can do in a seeder, you can do in a migration. Bear that in mind as I enumerate the problems with seeders, because that means none of these problems gain you anything.)
By default, sequelize does not keep records of which seeders have been run. Yes, you can configure it to keep records, but if the app has already been deployed without that setting, then when you deploy your app with the new setting, it'll still re-run all your seeders one last time. If that's not safe, your app will blow up. My experience is that seed data can't and shouldn't be duplicated: if it doesn't immediately violate uniqueness constraints, it'll create duplicate rows.
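For reference, that record-keeping setting lives in the sequelize-cli config; something like the following sketch (option names per the CLI docs, values illustrative):

// config/config.js
module.exports = {
  development: {
    username: 'root',
    password: null,
    database: 'my_app_dev',
    host: '127.0.0.1',
    dialect: 'postgres',
    seederStorage: 'sequelize',             // default is 'none': no records kept
    seederStorageTableName: 'SequelizeData' // analogous to SequelizeMeta for migrations
  }
};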
Running seeders is a separate command, which you then need to integrate into your startup scripts. It's easy for that to lead to a proliferation of npm scripts that make app startup harder to follow. In one project, I converted the only 2 seeders into migrations, and reduced the number of startup-related npm scripts from 13 to 5.
It can be hard to make sense of the order in which seeders run. Remember also that migrations and seeders are run by separate commands, which means you can't interleave them: you have to run all migrations first, then all seeders. As the database changes over time, you'll run into the problem I describe next:
Never use your sequelize models in your migrations
When you use a sequelize model to fetch records, it explicitly fetches every column it knows about. So, imagine a migration sequence like this:
M1: create tables Car & Driver
M2: use Car & Driver models to generate seed data
That will work. Fast-forward to a date when you add a new column to Car (say, isElectric). That involves: (1) creating a migration to add the column, and (2) declaring the new column on the sequelize model. Now your migration process looks like this:
M1: create tables Car & Driver
M2: use Car & Driver models to generate seed data
M3: add isElectric to Car
The problem is that your sequelize models always reflect the final schema, without acknowledging the fact that the actual database is built by ordered accretion of mutations. So, in our example, M2 will fail because any built-in selection method (e.g. Car.findOne) will execute a SQL query like:
SELECT
  "Car"."make" AS "Car.make",
  "Car"."isElectric" AS "Car.isElectric"
FROM
  "Car"
Your DB will throw because Car doesn't have an isElectric column when M2 executes.
The problem won't occur in environments that are only one migration behind, but you're boned if you hire a new developer or nuke the database on your local workstation and build the app from scratch.
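Following the tl;dr above, a seed-written-as-migration that avoids models entirely might look like this sketch (table and column names are illustrative):

// migrations/20200101000000-seed-drivers.js (hypothetical file)
'use strict';

module.exports = {
  up: (queryInterface, Sequelize) => {
    // explicit SQL with an explicit column list: this migration only touches
    // columns that exist at this point in the migration sequence, so a later
    // "add isElectric" migration can never break it
    return queryInterface.sequelize.query(
      `INSERT INTO "Driver" ("name", "createdAt", "updatedAt")
       VALUES ('Jane Doe', NOW(), NOW())`
    );
  },

  down: (queryInterface, Sequelize) => {
    return queryInterface.sequelize.query(
      `DELETE FROM "Driver" WHERE "name" = 'Jane Doe'`
    );
  }
};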
Instead of using different seed files for Users and Profiles, you could seed them together in one file using Sequelize's create-with-association feature.
Additionally, when using a series of create() calls you must wrap them in a Promise.all(), because the seeding interface expects a Promise as its return value.
up: function (queryInterface, Sequelize) {
  return Promise.all([
    models.Profile.create({
      data: 'profile stuff',
      users: [{
        name: 'name'
        // ...
      }, {
        name: 'another user'
        // ...
      }]
    }, {
      include: [{ model: models.User, as: 'users' }]
    }),
    models.Profile.create({
      data: 'another profile',
      users: [{
        name: 'more users'
        // ...
      }, {
        name: 'another user'
        // ...
      }]
    }, {
      include: [{ model: models.User, as: 'users' }]
    })
  ])
}
Not sure if this is really the best solution, but that's how I got around maintaining foreign keys myself in seed files.
Related
I'm new to Sequelize and am trying to achieve the following:
Assume I have a very simple database with 3 Models/Tables:
Person, Group and Category.
Person has a Many-To-One relation to Group (1 Person can be in 1 Group, 1 Group holds multiple people), and Group has a Many-To-One relation to Category (1 Group has 1 Category, 1 Category can be applied to multiple Groups).
Because I don't want to save the whole Category in my database, but only a short string, I have a mapper in the backend of my app.
Let's say my Category-Mapper looks like this:
//category.mapper.js
module.exports = Object.freeze({
  cat1: "Here is the String that should be sent to and displayed by the FrontEnd",
  cat2: ".....",
});
So basically, in my database I save "cat1" as the category and every time I get one or more Categories via Sequelize from the database, I want to go into my mapper, resolve the short string to the long string and send it to the Frontend, so I wrote the following code:
//category.model.js
const categoryMapper = require("../mapper/category.mapper");

Category.afterFind((models) => {
  if (!Array.isArray(models)) {
    models = [models];
  }
  models.forEach(model => {
    model.name = categoryMapper[model.name];
  });
});
This works great when I call Category.findAll()..., but does not trigger when I include the Category as in this example:
Group.findAll({
  include: [Category]
})
There is this rather old GitHub Issue referencing this behavior, where someone published some code to make sure the hooks run on include. See here.
I tried implementing the referenced code into my project, but when I do, the hook for Category runs twice in my following code:
Person.findAll({
  include: [{
    model: Group,
    include: [Category]
  }]
})
My assumption is that, with the code from the GitHub issue comment, my hook gets triggered every time the relationship is detected. The hook therefore runs once after including Group (because Group has a relationship to Category) and a second time when Category is actually included. This breaks my mapping function, because the second time it tries to resolve the long string, which doesn't work.
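For illustration, a guard like the following (my own sketch, not from the linked issue) would at least make the second run a no-op, though it masks the double trigger rather than fixing it:

Category.afterFind((models) => {
  if (!models) return;
  const list = Array.isArray(models) ? models : [models];
  list.forEach(model => {
    // only translate if the value is still a known short key,
    // so a second invocation becomes a no-op instead of mapping to undefined
    if (model && categoryMapper[model.name] !== undefined) {
      model.name = categoryMapper[model.name];
    }
  });
});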
I'm looking for a solution that basically runs my hooks once and only once, namely when the actual include for my model triggers, regardless of on what level the include happens.
Sorry for the lengthy post, but I did not find any solution to my problem online, but don't believe what I am trying to achieve is very exotic or specific to my project only.
If there is a better solution I am not seeing, I'm open to suggestions and new approaches.
Thanx in advance!
It seems I have misunderstood Sequelize's .hasMany() and .belongsTo() associations and how to use them in practice. I have two models:
const User = db.sequelize.define("user", {
  uid: { /*...*/ },
  createdQuestions: {
    type: db.DataTypes.ARRAY(db.DataTypes.UUID),
    unique: true,
    allowNull: true,
  },
});

const Question = db.sequelize.define("question", {
  qid: { /*...*/ },
  uid: {
    type: db.DataTypes.TEXT,
  },
});
Given that one user can have many questions and each question belongs to only one user, I have the following associations:
User.hasMany(Question, {
  sourceKey: "createdQuestions",
  foreignKey: "uid",
  constraints: false,
});

Question.belongsTo(User, {
  foreignKey: "uid",
  targetKey: "createdQuestions",
  constraints: false,
});
What I want to achieve is this: after creation of a question object, the qid should reside in the user object under createdQuestions, just as the uid resides in the question object under uid. What I thought Sequelize associations would do for me is spare me individually fetching and updating the user object. Is there a corresponding method? What I have so far is:
const create_question = async (question_data) => {
  const question = { /*... question body containing uid and so forth */ };
  return new Promise((resolve, rejected) => {
    Question.sync({ alter: true }).then(
      async () =>
        await db.sequelize
          .transaction(async (t) => {
            const created_question = await Question.create(question, {
              transaction: t,
            });
          })
          .then(() => resolve())
          .catch((e) => rejected(e))
    );
  });
};
This however only creates a question object but does not update the user. What am I missing here?
Modelling a One-to-many relationship in SQL
SQL vs NoSQL
In SQL, unlike in NoSQL, every attribute has a fixed data type with a fixed size. You can see this in the SQL command for creating a new table:
CREATE TABLE teachers (
  name VARCHAR(32),
  department VARCHAR(64),
  age INTEGER
);
The reason behind this is to allow us to easily access any attribute from the database by knowing the length of each row. In our case, each row will need the space needed to store:
32 bytes (name) + 64 bytes (department) + 4 bytes (age) = 100 bytes
This is a very powerful feature of relational databases, as it reduces data retrieval to constant time: since we know where each piece of data is located, we can jump straight to it.
One-to-Many Relationship: Case Study
Now, let's consider we have a few tables, among them teachers and classes.
Let's say we want to create a one-to-many relation between classes and teachers, where a teacher can give many classes.
We might think of storing a list of class IDs directly in a column on the teachers table. But this model is not possible for 2 main reasons:
It will make us lose our constant-time retrieval since we don't know the size of the list anymore
We fear that the amount of space given to the list attribute won't be enough for future data. Let's say we allocate space needed for 10 classes and we end up with a teacher giving 11 classes. This will push us to recreate our database to increase the column size.
Another way would be to duplicate the teacher's data into every class row. While this approach would fix the limited column size problem, we would no longer have a single source of truth: the same data would be duplicated and stored multiple times.
That's why, for this one-to-many relationship, we need to store the id of the teacher inside the classes table.
This way, we can still find all the classes a teacher teaches by running:
SELECT *
FROM classes
WHERE teacherId = :teacherId
And we'll avoid all the problems discussed earlier.
Your relation is a one-to-many relation: one User can have multiple Questions. In SQL, this kind of relation is modelled by adding an attribute to Question called userId (or uid, as you did). In Sequelize, this is achieved through hasMany and belongsTo, like this:
User.hasMany(Question)
Question.belongsTo(User, {
  foreignKey: 'userId',
  constraints: false
})
In other words, I don't think you need the createdQuestions attribute on User. Only one foreign key is needed to model the one-to-many relation.
Now, when creating a new question, you just need to supply the userId, this way:
createNewQuestion = async (userId, title, body) => {
  const question = await Question.create({
    userId: userId, // or just: userId
    title: title,   // or just: title
    body: body      // or just: body
  })
  return question
}
Remember, we do not store arrays in SQL. Even if we can find a way to do it, it is not what we need; there is always a better way.
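As a usage sketch (with someUserId as a placeholder, and assuming the default association aliases), that single foreign key is enough for Sequelize's association helpers to give you back a user's questions:

// eager-load via include:
const userWithQuestions = await User.findByPk(someUserId, { include: Question });
console.log(userWithQuestions.questions); // all questions whose userId matches

// or lazily, via the mixin that User.hasMany(Question) adds:
const user = await User.findByPk(someUserId);
const questions = await user.getQuestions();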
How do I return a plain JavaScript object from TypeORM Repository functions like find, create, update or delete?
const user = await User.findOne({ where: { id: 1 } });
Actual:
User | undefined
{
  User: {
    id: 1
  }
}
Instead, I would like to receive a plain object, independent from any library or framework.
Expected:
{
  id: 1
}
This way it would be easier to stay framework-independent: repository calls could be interchanged at any point in time without rewriting the entire application, and without depending on third-party scripts that heavily.
Sequelize, for example, has an option { plain: true }.
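For comparison, that Sequelize behavior looks roughly like this (a sketch; UserModel is a placeholder name):

// either ask for raw rows up front...
const row = await UserModel.findOne({ where: { id: 1 }, raw: true });

// ...or convert a fetched instance afterwards:
const instance = await UserModel.findOne({ where: { id: 1 } });
const plain = instance.get({ plain: true }); // -> { id: 1, ... }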
You can easily use the class-transformer library to produce plain objects from entity instances:
https://github.com/typestack/class-transformer
const user = await repository.findOne({ where: { id: 1 } });
const plain = classToPlain(user);
Another way is to use the queryBuilder API and return raw results from the database:
const rawResult = await repository.createQueryBuilder("user")
  .select("user")
  .where("user.id = :id", { id: 1 })
  .getRawOne();
The point is: why would you want to do so? Entities are first-class citizens in TypeORM, so you want any change to their structure to generate compile errors when something is not backwards compatible. Using plain objects would hide contract changes, and you would discover errors only at runtime. I'm not sure this is an optimal way to work with TypeORM, to my mind.
I'm trying to get an answer on whether or not the way my database is set up will cause too many recursive reads and therefore exponentially increase the number of read operations.
I currently have a collection of users. Inside each user doc, I have 3 other catalogs: goods, bundles, and parts. Each user has a list of parts and a list of bundles, for example.
Each doc in the bundles catalog has an array of maps with a reference to a doc in the parts catalog in each map.
When I query the bundles, I want to also get the details of each part in the bundle. Does this require that I run another onSnapshot?
Here's an example:
database:
users (catalog)
  userID
    parts
      partID1
      partID2
      partID3
    bundles
      bundleID1
        title: "string",
        parts: [
          { part: "/users/userID/parts/partID1", qty: 1 },
          { part: "/users/userID/parts/partID2", qty: 1 },
          { part: "/users/userID/parts/partID3", qty: 1 }
        ]
getting the bundles
initBundle(bid) {
  const path = this.database.collection('users').doc('userID').collection('bundles').doc(bid);
  path.ref.onSnapshot(bundle => {
    const partsArr = [];
    bundle.data().parts.forEach(part => {
      part.part.onSnapshot(partRef => {
        const partObj = {
          data: partRef.data(),
          qty: part.qty
        };
        partsArr.push(partObj);
      });
    });
    const bundleObj = {
      title: bundle.data().title,
      parts: partsArr
    };
    this.bundle.next(bundleObj);
  });
  return this.bundle;
}
I'm using Ionic/Angular for this, so when I return the item, it needs to be an array of objects. I'm sort of recreating the object to include each part in this init. As you can see, for each part in the bundle's return, I am doing another onSnapshot. This seems incorrect to me.
Something that is dawning on me is that I should probably be making a single call to the user, which in turn returns everything. But how do I get the sub-catalogs at that point? I'm not sure how to proceed without racking up a bill!
If you do a nested part.part.onSnapshot(partRef => { ... }) listener, be sure to manage those listeners. I know of three common approaches:
Once your outer onSnapshot listener disappears, the nested ones should probably also be stopped (as their data likely isn't needed anymore). This is a fairly simple approach, since you just need one list of listeners for the entire bundle.
Alternatively, you can manage each "part" listener based on the status of that part in the outer listener, removing the listener for "part1" when that disappears from the bundle. This can be made into a highly efficient solution, but does require (quite some) additional code.
Many developers use get()s for the nested document reads, as that means there is nothing to manage; a sketch of this approach follows below.
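A minimal sketch of that third, get()-based option, applied to the question's initBundle and assuming the same data shape; collecting the reads with Promise.all also removes the race where the bundle is emitted before the nested part reads have finished:

initBundle(bid) {
  const path = this.database.collection('users').doc('userID').collection('bundles').doc(bid);
  path.ref.onSnapshot(async bundle => {
    // get() is a one-time read, so there is nothing to detach
    // when the outer bundle listener goes away
    const partsArr = await Promise.all(
      bundle.data().parts.map(async part => {
        const partRef = await part.part.get();
        return { data: partRef.data(), qty: part.qty };
      })
    );
    this.bundle.next({ title: bundle.data().title, parts: partsArr });
  });
  return this.bundle;
}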
I'm, in the process of learning BookshelfJS/KnexJS (switching from SequelizeJS), and I'm running into an issue with importing data into multiple tables that were created via the migrations feature within KnexJS. There's 4 tables:
servers
operating_systems
applications
applications_servers
With the following constraints:
servers.operating_system_id references operating_systems.id
applications_servers.server_id references servers.id
applications_servers.application_id references applications.id
The tables get created just fine when I run knex migrate:latest --env development servers; it's when I import seeded data into the tables that I get an error.
Originally, I organized the seed data for the 4 tables into 4 different files within the directory ./seeds/dev, named ${table_name}.js:
operating_systems.js
servers.js
applications.js
applications_servers.js
After a bit of debugging, I realized the error is generated by the seed data within applications_servers.js: when I take that one out, the other 3 run just fine. Likewise, when I remove the other 3 seed files, leaving only applications_servers.js in ./seeds/dev/, and execute knex seed:run, the applications_servers table gets populated just fine. However, when I try to import the seeded data of all 4 files at once, I receive the following error:
# knex seed:run
Using environment: development
Error: ER_NO_REFERENCED_ROW_2: Cannot add or update a child row: a foreign key constraint fails (`bookshelf_knex_lessons`.`applications_servers`, CONSTRAINT `applications_servers_server_id_foreign` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE)
at Query.Sequence._packetToError (/Users/me/Documents/scripts/js/node/bookshelf_knex/node_modules/mysql/lib/protocol/sequences/Sequence.js:48:14)
at Query.ErrorPacket (/Users/me/Documents/scripts/js/node/bookshelf_knex/node_modules/mysql/lib/protocol/sequences/Query.js:83:18)
And not a single row gets inserted, into any table.
I thought that maybe it had something to do with the order in which they were being imported (since they have to be imported in the order the files were listed above). Thinking that maybe they were executed in alphanumeric order, I renamed them to:
1-operating_systems.js
2-servers.js
3-applications.js
4-applications_servers.js
However, nothing changed: no rows were inserted into any table. Just to be sure, I reversed the sequence of the number prefixes on the files, and again no changes, not a single row was inserted, and the same error was returned.
Note: The migration script for creating the tables, as well as all of the seed data, was copied and pasted from a JS file I created that builds the same tables and imports the same data using BookshelfJS/KnexJS, but instead of using the migration features, it just does it manually when executed via node. You can view this file here
Any help would be appreciated!
Edit: When I combine all of the seed files within ./seeds/dev into one file, ./seeds/dev/servers.js, everything gets imported just fine. This makes me think the problem may be caused by the seed files being executed asynchronously, so the IDs inserted into the pivot table and into servers.operating_system_id may reference rows that haven't yet been inserted into the associated tables... If this is the case, is there a way to set up dependencies in the seed files?
Knex.js's seed functionality does not provide any order-of-execution guarantees. Each seed should be written such that it can be executed in isolation, i.e. your single-file approach is correct.
If you want to break your individual seed files into submodules, then you might try the following:
// initial-data.js
var operatingSystems = require('./initial-data/operating-systems.js');
var servers = require('./initial-data/servers.js');

exports.seed = function (knex, Promise) {
  return operatingSystems.seed(knex, Promise)
    .then(function () {
      return servers.seed(knex, Promise);
    }).then(function () {
      // next ordered seed...
    });
}
I use the sql-fixtures module to handle FK relation dependencies in my seed file(s).
Imaginary implementation:
const dataSpec = {
  applications_servers: [{
    name: 'My ASP.Net thingie',
    application_id: 'applications:0',
    server_id: 'servers:0'
  }],
  servers: [{
    name: 'My Windows server',
    operating_system_id: 'operating_systems:0'
  }],
  operating_systems: [{
    name: 'Windows Server 2k10'
  }],
  applications: [{
    name: 'My fab web guestbook',
    description: '...'
  }]
}
but it might be superfluous if you declare your dependencies by means of the ORM.
Sometimes, however, it is desirable to avoid coupling the DB seeds to your exact current model implementation.
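For completeness, loading that spec might look roughly like this; a hypothetical sketch, so check sql-fixtures' docs for the exact create() signature and connection config:

const sqlFixtures = require('sql-fixtures');

// knex-style connection config (placeholder values)
const dbConfig = {
  client: 'mysql',
  connection: { host: 'localhost', user: 'root', password: '', database: 'bookshelf_knex_lessons' }
};

sqlFixtures.create(dbConfig, dataSpec).then(function (result) {
  // sql-fixtures resolves the 'servers:0'-style placeholders in dependency
  // order, so result.servers[0].id etc. hold the real generated keys
  console.log('seed data inserted');
});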