I'm new to Sequelize and am trying to achieve the following:
Assume I have a very simple database with 3 Models/Tables:
Person, Group and Category.
Person has a many-to-one relation to Group (one Person is in one Group; one Group holds multiple people), and Group has a many-to-one relation to Category (one Group has one Category; one Category can apply to multiple Groups).
Because I don't want to save the whole Category in my database, but only a short string, I have a mapper in the backend of my app.
Let's say my Category-Mapper looks like this:
//category.mapper.js
module.exports = Object.freeze({
  cat1: "Here is the String that should be sent to and displayed by the FrontEnd",
  cat2: ".....",
});
So basically, I save "cat1" as the category in my database. Every time I fetch one or more Categories via Sequelize, I want to go into my mapper, resolve the short string to the long string, and send the long string to the Frontend. So I wrote the following code:
//category.model.js
const categoryMapper = require("../mapper/category.mapper");

Category.afterFind((models) => {
  if (!models) return; // findOne may pass null when nothing matches
  if (!Array.isArray(models)) {
    models = [models];
  }
  models.forEach((model) => {
    model.name = categoryMapper[model.name];
  });
});
This works great when I call Category.findAll()..., but the hook does not trigger when I include the Category, as in this example:
Group.findAll({
  include: [Category]
})
There is a rather old GitHub issue describing this behavior, where someone published some code to make sure the hooks run on include. See here.
I tried implementing the referenced code in my project, but when I do, the hook for Category runs twice with the following code:
Person.findAll({
  include: [{
    model: Group,
    include: [Category]
  }]
})
My assumption is that, with the code from the GitHub issue comment, my hook gets triggered every time the relationship is detected. The hook therefore runs once when Group is included (because Group has a relationship to Category), and a second time when Category itself is included. That breaks my mapping function, because on the second run it tries to resolve the long string, which doesn't work.
I'm looking for a solution that runs my hooks once and only once, namely when the actual include for my model triggers, regardless of the level at which the include happens.
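One workaround I can think of is to make the mapping idempotent, so that a second hook run becomes harmless. An untested sketch:

Category.afterFind((models) => {
  if (!models) return;
  const list = Array.isArray(models) ? models : [models];
  list.forEach((model) => {
    // only map while the value is still a known short key;
    // an already-resolved long string is left untouched
    if (model.name in categoryMapper) {
      model.name = categoryMapper[model.name];
    }
  });
});

But that feels like treating the symptom rather than the cause.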
Sorry for the lengthy post, but I did not find any solution to my problem online, and I don't believe what I am trying to achieve is very exotic or specific to my project.
If there is a better solution I am not seeing, I'm open to suggestions and new approaches.
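For instance, I've been wondering whether an attribute getter would avoid hooks entirely, since it runs on every read no matter how the instance was loaded. A sketch of that idea, assuming the model is defined via Category.init and that sequelize and DataTypes are in scope:

// category.model.js
Category.init({
  name: {
    type: DataTypes.STRING,
    get() {
      const raw = this.getDataValue("name"); // stored short key, e.g. "cat1"
      return categoryMapper[raw] ?? raw;     // resolve to the long string on read
    }
  }
}, { sequelize, modelName: "Category" });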
Thanks in advance!
I am working on an application in which a ship can be configured using rudders and other stuff. The database structure is sort of nested, and so far I have been keeping my GraphQL queries in correspondence with the database.
That means: I could fetch a ship using some query ship(projectId, shipId), but instead I am using a nested query:
query {
  project(id: 1) {
    id
    title
    ship(id: 1) {
      id
      name
      rudders {
        id
        position
      }
    }
  }
}
Such a structure of course leads to a lot of nested arrays. For example, if I have just added a new rudder, I would have to retrieve it using cache.readQuery, which gives me the project object rather than the rudder list. To add the rudder to the cache, I'd end up with a long line of nested, destructured objects, making the code hard to read.
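For illustration, here is roughly what the readQuery-based update I'm avoiding would look like (a hypothetical sketch, assuming the nested query above is exported as GET_PROJECT and the mutation result is available as createRudder):

const data = cache.readQuery({ query: GET_PROJECT, variables: { id: projectId } });
cache.writeQuery({
  query: GET_PROJECT,
  variables: { id: projectId },
  data: {
    project: {
      ...data.project,
      ship: {
        ...data.project.ship,
        rudders: [...data.project.ship.rudders, createRudder.rudder]
      }
    }
  }
});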
So I thought of using GraphQL fragments. On the internet, I see them used a lot to avoid re-typing the same fields on large objects (which I personally find very useful as well!). However, there are not many examples where a fragment is used for an array.
Fragments for arrays could save all the object destructuring when appending some data to an array that is nested in some cached query. Using Apollo's readFragment and writeFragment, I managed to get something working.
The fragment:
export const FRAGMENT_RUDDER_ARRAY = gql`
  fragment rudderArray on ShipObject {
    rudders {
      id
      position
    }
  }
`
Used in the main ship query:
gql`
  query {
    project(id: ...) {
      id
      title
      ship(id: ...) {
        id
        name
        ...rudderArray
      }
    }
  }
  ${FRAGMENT_RUDDER_ARRAY}
`
Using this, I can write a much clearer update() function to update Apollo's cache after a mutation. See below:
const [createRudder] = useMutation(CREATE_RUDDER_MUTATION, {
  onError: (error) => { console.log(JSON.stringify(error)) },
  update(cache, { data: { createRudder } }) {
    const { rudders } = cache.readFragment({
      id: `ShipObject:${shipId}`,
      fragment: FRAGMENT_RUDDER_ARRAY,
      fragmentName: 'rudderArray'
    });
    cache.writeFragment({
      id: `ShipObject:${shipId}`,
      fragment: FRAGMENT_RUDDER_ARRAY,
      fragmentName: 'rudderArray',
      data: { rudders: rudders.concat(createRudder.rudder) }
    });
  }
});
Now what is my question? Well, since I almost never see fragments used for this purpose: this works well for me, but I am wondering if there are any drawbacks to it.
On the other hand, I also decided to share this because I could not find any examples. So if this is a good idea, feel free to use the pattern!
I am using node.js with bookshelf as an ORM. I am a serious novice with this technology.
I have a situation where I have several columns in a database table. For the sake of this question, these columns shall be named 'sold_by_id', 'signed_off_by_id' and 'lead_developer_id'; all of them reference a User table by ID.
In other words, at any point different Users in the system can be associated with these three different roles, not necessarily uniquely.
Going forward, I would need to be able to retrieve information in such ways as:
let soldByLastName = JobTicket.soldBy.get('last_name');
I've tried searching around and reading the documentation, but I'm still very uncertain about how to achieve this. Obviously the below doesn't work, and I'm aware that the second parameter is meant for the target table, but it illustrates the concept of what I'm trying to achieve.
// JobTicket.js
soldBy: function() {
  return this.belongsTo(User, 'sold_by_id');
},
signedOffBy: function() {
  return this.belongsTo(User, 'signed_off_by_id');
},
leadDeveloper: function() {
  return this.belongsTo(User, 'lead_developer_id');
}
Obviously I would need a corresponding set of methods in User.js
I'm not sure where to start, can anyone point me in the right direction??
Or am I just a total idiot? ^_^
Your definitions look right. Using them will look something like this:
new JobTicket({ id: 33 })
  .fetch({ withRelated: ['soldBy', 'signedOffBy'] })
  .then(jobTicket => {
    console.log(jobTicket.related('soldBy').get('last_name'));
  });
Besides that, I would recommend using the Registry plugin for referencing other models. It eases the pain of referencing models that are not yet loaded.
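A minimal sketch of the Registry plugin setup (the table names here are assumptions; on Bookshelf >= 1.0 the plugin is built in and does not need to be loaded):

// bookshelf.js
const knex = require('knex')(require('./knexfile').development);
const bookshelf = require('bookshelf')(knex);
bookshelf.plugin('registry'); // only needed on older Bookshelf versions

// User.js
module.exports = bookshelf.model('User', {
  tableName: 'users'
});

// JobTicket.js — the relation references 'User' by name,
// so the two model files can be loaded in any order
module.exports = bookshelf.model('JobTicket', {
  tableName: 'job_tickets',
  soldBy: function () {
    return this.belongsTo('User', 'sold_by_id');
  }
});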
I'm currently seeding data with Sequelize.js and using hard-coded values for association IDs. This is not ideal, because I really should be able to do this dynamically, right? For example, associating users and profiles with a "has one" and "belongs to" association: I don't necessarily want to seed users with a hard-coded profileId. I'd rather do that in the profile seeds after I create profiles, adding the profileId to a user dynamically once the profiles exist. Is this possible, and is it the normal convention when working with Sequelize.js? Or is it more common to just hard-code association IDs when seeding?
Perhaps I'm going about seeding wrong? Should I have a one-to-one correspondence between seed files and migration files when using Sequelize? In Rails, there is usually a single seeds file, which you have the option of breaking into multiple files if you want.
In general, just looking for guidance and advice here. These are my files:
users.js
// User seeds
'use strict';

module.exports = {
  up: function (queryInterface, Sequelize) {
    /*
      Add altering commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkInsert('Person', [{
        name: 'John Doe',
        isBetaMember: false
      }], {});
    */
    var users = [];
    for (let i = 0; i < 10; i++) {
      users.push({
        fname: "Foo",
        lname: "Bar",
        username: `foobar${i}`,
        email: `foobar${i}@gmail.com`,
        profileId: i + 1
      });
    }
    return queryInterface.bulkInsert('Users', users);
  },
  down: function (queryInterface, Sequelize) {
    /*
      Add reverting commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkDelete('Person', null, {});
    */
    return queryInterface.bulkDelete('Users', null, {});
  }
};
profiles.js
// Profile seeds
'use strict';
var models = require('./../models');
var User = models.User;
var Profile = models.Profile;

module.exports = {
  up: function (queryInterface, Sequelize) {
    /*
      Add altering commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkInsert('Person', [{
        name: 'John Doe',
        isBetaMember: false
      }], {});
    */
    var profiles = [];
    var genders = ['m', 'f'];
    for (let i = 0; i < 10; i++) {
      profiles.push({
        birthday: new Date(),
        gender: genders[Math.round(Math.random())],
        occupation: 'Dev',
        description: 'Cool yo',
        userId: i + 1
      });
    }
    return queryInterface.bulkInsert('Profiles', profiles);
  },
  down: function (queryInterface, Sequelize) {
    /*
      Add reverting commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkDelete('Person', null, {});
    */
    return queryInterface.bulkDelete('Profiles', null, {});
  }
};
As you can see, I'm just using a hard-coded for loop in both (not ideal).
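What I imagine is something like the following (an untested sketch, assuming the Profiles are seeded first and a database that accepts the quoted table name "Profiles"): read the generated profile IDs back and use them when inserting users, instead of hard-coding i + 1:

// users.js — hypothetical sketch
up: function (queryInterface, Sequelize) {
  return queryInterface.sequelize.query(
    'SELECT id FROM "Profiles" ORDER BY id;',
    { type: Sequelize.QueryTypes.SELECT }
  ).then(function (profiles) {
    var users = profiles.map(function (profile, i) {
      return {
        fname: "Foo",
        lname: "Bar",
        username: `foobar${i}`,
        email: `foobar${i}@gmail.com`,
        profileId: profile.id // dynamic, not hard-coded
      };
    });
    return queryInterface.bulkInsert('Users', users);
  });
}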
WARNING: after working with sequelize for over a year, I've come to realize that my suggestion is a very bad practice. I'll explain at the bottom.
tl;dr:
never use seeders, only use migrations
never use your sequelize models in migrations, only write explicit SQL
My other suggestion still holds up: use some "configuration" to drive the generation of seed data. (But that seed data should be inserted via a migration.)
vv DO NOT DO THIS vv
Here's another pattern, which I prefer, because I believe it is more flexible and more readily understood. I offer it here as an alternative to the accepted answer (which seems fine to me, btw), in case others find it a better fit for their circumstances.
The strategy is to leverage the sqlz models you've already defined to fetch data that was created by other seeders, use that data to generate whatever new associations you want, and then use bulkInsert to insert the new rows.
In this example, I'm tracking a set of people and the cars they own. My models/tables:
Driver: a real person, who may own one or more real cars
Car: not a specific car, but a type of car that could be owned by someone (i.e. make + model)
DriverCar: a real car owned by a real person, with a color and a year they bought it
We will assume a previous seeder has stocked the database with all known Car types: that information is already available and we don't want to burden users with unnecessary data entry when we can bundle that data in the system. We will also assume there are already Driver rows in there, either through seeding or because the system is in-use.
The goal is to generate a whole bunch of fake-but-plausible DriverCar relationships from those two data sources, in an automated way.
const {
  Driver,
  Car
} = require('models')

module.exports = {
  up: async (queryInterface, Sequelize) => {
    // fetch base entities that were created by previous seeders
    // these will be used to create seed relationships
    const [drivers, cars] = await Promise.all([
      Driver.findAll({ /* limit ? */ order: Sequelize.fn('RANDOM') }),
      Car.findAll({ /* limit ? */ order: Sequelize.fn('RANDOM') })
    ])

    const fakeDriverCars = Array(30).fill().map((_, i) => {
      // create new tuples that reference drivers & cars,
      // and which reflect the schema of the DriverCar table
    })

    return queryInterface.bulkInsert('DriverCar', fakeDriverCars);
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.bulkDelete('DriverCar');
  }
}
That's a partial implementation. However, it omits some key details, because there are a million ways to skin that cat. Those pieces can all be gathered under the heading "configuration," and we should talk about it now.
When you generate seed data, you usually have requirements like:
I want to create at least a hundred of them, or
I want their properties determined randomly from an acceptable set, or
I want to create a web of relationships shaped exactly like this
You could try to hard-code that stuff into your algorithm, but that's the hard way. What I like to do is declare "configuration" at the top of the seeder, to capture the skeleton of the desired seed data. Then, within the tuple-generation function, I use that config to procedurally generate real rows. That configuration can obviously be expressed however you like. I try to put it all into a single CONFIG object so it all stays together and so I can easily locate all the references within the seeder implementation.
Your configuration will probably imply reasonable limit values for your findAll calls. It will also probably specify all the factors that should be used to calculate the number of seed rows to generate (either by explicitly stating quantity: 30, or through a combinatoric algorithm).
As food for thought, here is an example of a very simple config that I used with this DriverCar system to ensure that I had 2 drivers who each owned one overlapping car (with the specific cars to be chosen randomly at runtime):
const CONFIG = {
  ownership: [
    [ 'a', 'b', 'c', 'd' ], // driver 1 linked to cars a, b, c, and d
    [ 'b' ],                // driver 2 linked to car b
    [ 'b', 'b' ]            // driver 3 has two of the same kind of car
  ]
};
I actually used those letters, too. At runtime, the seeder implementation would determine that only 3 unique Driver rows and 4 unique Car rows were needed, and apply limit: 3 to Driver.findAll, and limit: 4 to Car.findAll. Then it would assign a real, randomly-chosen Car instance to each unique string. Finally, when generating association tuples, it uses the string to look up the chosen Car from which to pull foreign keys and other values.
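A hypothetical sketch of that runtime logic (the DriverCar columns driverId, carId, color, and purchasedAt are assumptions about the schema):

// unique car keys referenced by the config: 'a', 'b', 'c', 'd'
const letters = [...new Set(CONFIG.ownership.flat())];

const [drivers, cars] = await Promise.all([
  Driver.findAll({ limit: CONFIG.ownership.length, order: Sequelize.fn('RANDOM') }),
  Car.findAll({ limit: letters.length, order: Sequelize.fn('RANDOM') })
]);

// assign a real, randomly-chosen Car to each unique string
const carByLetter = Object.fromEntries(letters.map((letter, i) => [letter, cars[i]]));

// expand the config into DriverCar tuples
const fakeDriverCars = CONFIG.ownership.flatMap((ownedLetters, i) =>
  ownedLetters.map((letter) => ({
    driverId: drivers[i].id,
    carId: carByLetter[letter].id,
    color: 'red',            // hypothetical column
    purchasedAt: new Date()  // hypothetical column
  }))
);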
There are undoubtedly fancier ways of specifying a template for seed data. Skin that cat however you like. Hopefully this makes it clear how you'd marry your chosen algorithm to your actual sqlz implementation to generate coherent seed data.
Why the above is bad
If you use your sequelize models in migration or seeder files, you will inevitably create a situation in which the application will not build successfully from a clean slate.
How to avoid madness:
Never use seeders, only use migrations
(Anything you can do in a seeder, you can do in a migration. Bear that in mind as I enumerate the problems with seeders, because that means none of these problems gain you anything.)
By default, sequelize does not keep records of which seeders have been run. Yes, you can configure it to keep records, but if the app has already been deployed without that setting, then when you deploy your app with the new setting, it'll still re-run all your seeders one last time. If that's not safe, your app will blow up. My experience is that seed data can't and shouldn't be duplicated: if it doesn't immediately violate uniqueness constraints, it'll create duplicate rows.
Running seeders is a separate command, which you then need to integrate into your startup scripts. It's easy for that to lead to a proliferation of npm scripts that make app startup harder to follow. In one project, I converted the only 2 seeders into migrations, and reduced the number of startup-related npm scripts from 13 to 5.
It can be hard to make sense of the order in which seeders are run. Remember also that migrations and seeders are run by separate commands, which means you can't interleave them: you have to run all migrations first, then all seeders. As the database changes over time, you'll run into the problem I describe next:
Never use your sequelize models in your migrations
When you use a sequelize model to fetch records, it explicitly fetches every column it knows about. So, imagine a migration sequence like this:
M1: create tables Car & Driver
M2: use Car & Driver models to generate seed data
That will work. Fast-forward to a date when you add a new column to Car (say, isElectric). That involves: (1) creating a migration to add the column, and (2) declaring the new column on the sequelize model. Now your migration process looks like this:
M1: create tables Car & Driver
M2: use Car & Driver models to generate seed data
M3: add isElectric to Car
The problem is that your sequelize models always reflect the final schema, without acknowledging the fact that the actual database is built by ordered accretion of mutations. So, in our example, M2 will fail because any built-in selection method (e.g. Car.findOne) will execute a SQL query like:
SELECT
  "Car"."make" AS "Car.make",
  "Car"."isElectric" AS "Car.isElectric"
FROM
  "Car"
Your DB will throw because Car doesn't have an isElectric column when M2 executes.
The problem won't occur in environments that are only one migration behind, but you're boned if you hire a new developer or nuke the database on your local workstation and build the app from scratch.
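For completeness, here is a minimal sketch of the "explicit SQL in a migration" approach from the tl;dr (the table and column names are assumptions):

module.exports = {
  up: (queryInterface) =>
    queryInterface.sequelize.query(
      `INSERT INTO "Car" ("make", "createdAt", "updatedAt")
       VALUES ('Toyota', NOW(), NOW()), ('Honda', NOW(), NOW());`
    ),
  down: (queryInterface) =>
    queryInterface.sequelize.query(
      `DELETE FROM "Car" WHERE "make" IN ('Toyota', 'Honda');`
    )
};

Because the SQL is frozen in time, it stays valid no matter how the sequelize models evolve later.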
Instead of using different seeds for Users and Profiles, you could seed them together in one file using Sequelize's create-with-association feature.
Additionally, when using a series of create() calls, you must wrap them in a Promise.all(), because the seeding interface expects a Promise as the return value.
up: function (queryInterface, Sequelize) {
  return Promise.all([
    models.Profile.create({
      data: 'profile stuff',
      users: [{
        name: "name",
        ...
      }, {
        name: 'another user',
        ...
      }]
    }, {
      // assumes the association Profile.hasMany(models.User, { as: 'users' })
      include: [{ model: models.User, as: 'users' }]
    }),
    models.Profile.create({
      data: 'another profile',
      users: [{
        name: "more users",
        ...
      }, {
        name: 'another user',
        ...
      }]
    }, {
      include: [{ model: models.User, as: 'users' }]
    })
  ])
}
Not sure if this is really the best solution, but that's how I got around maintaining foreign keys myself in seed files.
I'm in the process of learning BookshelfJS/KnexJS (switching from SequelizeJS), and I'm running into an issue with importing data into multiple tables that were created via the migrations feature within KnexJS. There are 4 tables:
servers
operating_systems
applications
applications_servers
With the following constraints:
servers.operating_system_id references operating_systems.id
applications_servers.server_id references servers.id
applications_servers.application_id references applications.id
The tables get created just fine when I run knex migrate:latest --env development servers; it's when I import the seed data into the tables that I get an error.
Originally, I organized the seed data for the 4 tables into 4 different files within the directory ./seeds/dev, each named ${table_name}.js:
operating_systems.js
servers.js
applications.js
applications_servers.js
After a bit of debugging, I came to the realization that the error is being generated by the seed data within the file applications_servers.js, since when I take that one out, the other 3 run just fine. And when I remove the other 3 seed files, leaving only applications_servers.js in the ./seeds/dev/ directory, and execute knex seed:run, the applications_servers table gets populated just fine. However, when I try to import the seed data of all 4 files at once, I receive the following error:
# knex seed:run
Using environment: development
Error: ER_NO_REFERENCED_ROW_2: Cannot add or update a child row: a foreign key constraint fails (`bookshelf_knex_lessons`.`applications_servers`, CONSTRAINT `applications_servers_server_id_foreign` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE)
at Query.Sequence._packetToError (/Users/me/Documents/scripts/js/node/bookshelf_knex/node_modules/mysql/lib/protocol/sequences/Sequence.js:48:14)
at Query.ErrorPacket (/Users/me/Documents/scripts/js/node/bookshelf_knex/node_modules/mysql/lib/protocol/sequences/Query.js:83:18)
And not a single row gets inserted into any table.
I thought that maybe it had something to do with the order in which they were being imported (since they have to be imported in the order the files are listed above). Thinking they might be processed in alphanumeric order, I renamed them to:
1-operating_systems.js
2-servers.js
3-applications.js
4-applications_servers.js
However, nothing changed: no rows were inserted into any table. Just to be sure, I reversed the sequence of the number prefixes on the files, and again no changes, not a single row inserted, and the same error returned.
Note: The migration script for creating the tables, as well as all of the seed data, was copied and pasted from a JS file I created which creates the same tables and imports the same data using BookshelfJS/KnexJS, but instead of using the migration features, it just does everything manually when executed via node. You can view this file here
Any help would be appreciated!
Edit: When I combine all of the seed files within ./seeds/dev into one file, ./seeds/dev/servers.js, everything gets imported just fine. This makes me think it may be caused by the knex seed files being executed asynchronously, so the IDs referenced by the pivot table and by servers.operating_system_id may not yet be inserted into the associated tables... If this is the case, is there a way to set up dependencies in the seed files?
Knex.js's seed functionality does not provide any order-of-execution guarantees. Each seed should be written such that it can be executed in isolation, i.e. your single-file approach is correct.
If you want to break your individual seed files into submodules, then you might try the following:
// initial-data.js
var operatingSystems = require('./initial-data/operating-systems.js');
var servers = require('./initial-data/servers.js');

exports.seed = function (knex, Promise) {
  return operatingSystems.seed(knex, Promise)
    .then(function () {
      return servers.seed(knex, Promise);
    }).then(function () {
      // next ordered seed...
    });
}
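A hypothetical submodule to go with that (the table and column names are assumptions):

// initial-data/operating-systems.js
exports.seed = function (knex, Promise) {
  return knex('operating_systems').insert([
    { name: 'Ubuntu 14.04' },
    { name: 'Windows Server 2k10' }
  ]);
};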
I use the sql-fixtures module to handle FK relation dependencies in my seed file(s).
Imaginary implementation:
const dataSpec = {
  applications_servers: [{
    name: 'My ASP.Net thingie',
    application_id: 'applications:0',
    server_id: 'servers:0'
  }],
  servers: [{
    name: 'My Windows server',
    operating_system_id: 'operating_systems:0'
  }],
  operating_systems: [{
    name: 'Windows Server 2k10'
  }],
  applications: [{
    name: 'My fab web guestbook',
    description: '...'
  }]
}
This might be superfluous, though, if you declare your dependencies by means of the ORM. Sometimes, however, it is desirable to avoid coupling the DB seeds to your exact current model implementation.
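For reference, a minimal sketch of resolving such a spec with sql-fixtures inside a knex seed file (the knexfile location and environment are assumptions):

// seeds/dev/initial-data.js
const sqlFixtures = require('sql-fixtures');
const dbConfig = require('../../knexfile').development;
// dataSpec as defined above

exports.seed = function (knex) {
  // sql-fixtures inserts the tables in dependency order and resolves
  // the 'servers:0'-style placeholders into real foreign keys
  return sqlFixtures.create(dbConfig, dataSpec).then(function (result) {
    console.log(result.applications_servers[0].id);
  });
};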