First of all, excuse me since I don't know what this is called in computer science:
For each of my document types in my Mongo app I want to define a structure, with every field defined with its constraints and validation patterns and, generally, the roles that can view, modify and delete the document.
For example: Book:
{
  name: "Book",
  viewRoles: ["Admin", "User"],
  createRoles: ["Admin"],
  modifyRoles: ["Admin", "User"],
  fields: [
    {
      id: "title",
      name: "Book Title",
      validation: "",
      maxLength: 50,
      minLength: 3,
      required: true
    },
    {
      id: "authorEmail",
      name: "Email of the Author",
      validation: "email",
      maxLength: 50,
      minLength: 3,
      required: false
    }
  ]
}
Then, if I have this "schema" for all of my documents, I can have one generic view for creating, modifying and showing these "entities", roughly as sketched below.
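For example, a single view builder could be driven entirely by that metadata; the following is just a sketch, and the buildForm function and the bookType variable (holding the Book definition above) are hypothetical names:
// Sketch: one generic form view driven by the document-type metadata above.
// buildForm and bookType are hypothetical names used only for illustration.
function buildForm(docType, role) {
  if (!docType.createRoles.includes(role)) {
    throw new Error(`Role ${role} may not create ${docType.name}`);
  }
  // Turn each field definition into a descriptor that one generic form view can render.
  return docType.fields.map(field => ({
    name: field.id,
    label: field.name,
    required: field.required,
    minLength: field.minLength,
    maxLength: field.maxLength,
    pattern: field.validation // e.g. "email"
  }));
}

// buildForm(bookType, "Admin") -> field descriptors for the Book document type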
I also want the ability to create new document types and to modify their fields through the admin panel of my application.
When I google "mongo dynamic schema" or "mongo document meta design" I get useless information.
My question is what this is called -- having a predefined schema for my documents together with the ability to modify it. Where can I get more information about how to design such systems?
Since you tagged this with Meteor, I'll point you to Simple Schema: https://github.com/aldeed/meteor-simple-schema/. I use it along with the related collection2 package and find it a nice way to document and enforce schema design. When used with the autoform package, it also provides a way to create validated forms directly from your schema.
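For instance, the Book fields above could be expressed roughly like this (a sketch assuming the aldeed:simple-schema and aldeed:collection2 packages; the Books collection name is just an example):
// Sketch using aldeed:simple-schema and aldeed:collection2; the Books collection
// name is an example, not taken from the question.
Books = new Mongo.Collection('books');

Books.attachSchema(new SimpleSchema({
  title: {
    type: String,
    label: 'Book Title',
    min: 3,          // for String fields, min/max constrain the length
    max: 50
  },
  authorEmail: {
    type: String,
    label: 'Email of the Author',
    regEx: SimpleSchema.RegEx.Email,
    max: 50,
    optional: true   // fields are required unless marked optional
  }
}));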
I think you are looking for how to model your data. The link below might be helpful:
http://docs.mongodb.org/manual/data-modeling/
I also want to have the ability to create new document types, modify their fields through admin panel of my application.
For administrative activities you may look into the options given at:
http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
And once you are done, you might want to read this as a kick-off:
https://blog.serverdensity.com/mongodb-schema-design-pitfalls/
In MongoDB you don't create collections explicitly; you just start using them, so you can't define schemas beforehand. The collection is created on the first insert you make into it. Just make sure to create an index on the collection before inserting documents into it:
db.collection.ensureIndex({keyField: 1})
So it all depends on maintaining the structure of the documents inserted into the collection, rather than on defining the collection up front.
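For example (a minimal mongo-shell sketch; the books collection and its fields are made up for illustration):
// Minimal mongo-shell sketch; the "books" collection and its fields are made up.
db.books.ensureIndex({ title: 1 });                      // index the key field up front
db.books.insert({ title: "Moby Dick", authorEmail: "author@example.com" });
// The "books" collection now exists -- no createCollection call was needed.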
Related
I am trying to model a database for a fitness app. Currently the 4 main entities are as follows:
Exercise: id, name, body_part, category
User: id, email, name, password, age
Workout: id, name, description, level, exerciseIds (fk)
UserWorkout: id, userId (fk), workoutId (fk), date, time_taken
The app will have default workouts as well as default exercises.
I would like the user to be able to add their own custom workouts/exercises that only they can see (in addition to the default ones), but I'm not sure how best to structure the data.
Kris, MongoDB is a schemaless database, which makes it really flexible when it comes to data modelling. There are different ways of achieving what you described; the one I would recommend is adding nested documents to the user document if they belong to it. You would have something like this:
User {
  firstName: ...,
  lastName: ...,
  age: ...,
  weight: ...,
  exercises: [
    // User's exercise objects
  ],
  workout: [
    // User's workout objects
  ]
}
This way you can easily access information related to the user and avoid expensive operations like $lookup or querying the database multiple times.
To handle the default exercises/workouts you can have a property in the respective objects like isDefault: true.
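Reading both could then look roughly like this (a sketch assuming the Node.js MongoDB driver, a separate exercises collection for the defaults, and the nested user document shown above; all names are hypothetical):
// Sketch (Node.js MongoDB driver); the collection names and the userId variable are hypothetical.
const defaults = await db.collection('exercises').find({ isDefault: true }).toArray();
const user = await db.collection('users').findOne({ _id: userId });
const visible = [...defaults, ...(user.exercises || [])]; // defaults plus the user's own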
What is the best way to structure many-to-many models in a mongoose schema?
I have two models that have a many-to-many relationship with each other: users can belong to many organisations, and organisations can have many users.
Options:
Define the relationship in each model by referencing the other model
Define the relationship in one model by referencing the other model
Option 1
const mongoose = require("mongoose");
const { Schema } = mongoose;

const UserSchema = new Schema({
  // array of references to the organisations the user belongs to
  organisations: [{ type: Schema.Types.ObjectId, ref: "Organisation" }],
});
mongoose.model("User", UserSchema);

const OrganisationSchema = new Schema({
  // array of references to the organisation's users
  users: [{ type: Schema.Types.ObjectId, ref: "User" }],
});
mongoose.model("Organisation", OrganisationSchema);
This seems like a good idea at first: I can query the Organisation model to get all its users, and I can also query the User model to get all of a user's organisations.
The only problem is that I have to maintain two sources of truth. If I create an organisation, I must update the user with the orgs it belongs to, and I must update the organisation with the users it has.
This leads me to option 2, which is to have one source of truth by only defining the relationship in one model.
Option 2:
const UserSchema = new Schema({
  // array of references to the organisations the user belongs to
  organisations: [{ type: Schema.Types.ObjectId, ref: "Organisation" }],
});
mongoose.model("User", UserSchema);

const OrganisationSchema = new Schema({}); // no references
mongoose.model("Organisation", OrganisationSchema);
This means when I create a new organisation I only need to update the user with the organisations they belong to; there is no risk of the two sources getting out of sync. However, it makes querying the data trickier. If I want to get all users that belong to an organisation, I have to query the User model, which means my organisation controller has to be aware of both the User and Organisation models. As I start adding more relationships and models, I get tight coupling between all of these modules, which I want to avoid.
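For reference, the option 2 query might look roughly like this (a sketch; the organisationId variable is hypothetical):
// Sketch for option 2: all users of one organisation come from the User model.
// organisationId is a hypothetical variable holding the organisation's _id.
const users = await mongoose
  .model("User")
  .find({ organisations: organisationId }) // matches when the array contains the id
  .exec();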
How would you recommend handling many-to-many relationship in a mongoose schema?
There is no fixed solution to this.
If an organization can have orders of magnitude more users than a user can have organizations, option 2 might be the better solution.
Performance-wise, populating the referenced data would be about the same as long as the referenced ids are indexed.
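For example (a sketch using the schemas from the question):
// Sketch: index the reference fields used for lookups (schemas as defined in the question).
UserSchema.index({ organisations: 1 });   // multikey index over the array of ObjectIds
OrganisationSchema.index({ users: 1 });   // only relevant for option 1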
Having said that, you might still go for option 1 even if your organization collection has the potential to hold "huge" arrays, especially if you want to make simple computations such as the number of an organization's users, or feed an organization's current userIds into some other collection. In those cases option 1 would be way better.
But if you opt for option 1 and your array has the potential to become very large, consider the bucket design pattern: you limit the maximum length of the nested array, and once it reaches that length you create another document that holds the newly added ids (or nested documents). Think of it as pagination.
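A minimal sketch of the bucket pattern (the userBuckets collection, the 1000-id cap, and the variables are hypothetical):
// Sketch of the bucket pattern: each bucket document holds at most 1000 user ids
// for one organisation. Collection name, cap and variables are hypothetical.
await db.collection("userBuckets").updateOne(
  { organisationId, count: { $lt: 1000 } },               // find a bucket that still has room
  { $push: { userIds: newUserId }, $inc: { count: 1 } },  // append the id and bump the counter
  { upsert: true }                                        // start a new bucket when all are full
);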
So I have a problem understanding how Mongo's .create and .findOneAndUpdate operations work. I have mongoose 5.4.2X and a model whose schema has a lot of key:value pairs (without any nested objects) in an exact order (in the code below I use 1., 2., 3., etc. to show the intended order), like this:
let schema = new mongoose.Schema({
  1.code: {
    type: String,
    required: true
  },
  2.realm: {
    type: String,
    required: true,
  },
  3.type: {
    type: String,
    required: true,
    enum: ['D', 'M'],
  },
  4.open: Number,
  5.open_size: Number,
  6.key: typeof value,..
  7. ...another key: value like the previous one,
  8.VaR_size: Number,
  9.date: {
    type: Date,
    default: Date.now,
    required: true
  }
});
and a class object which has absolutely the same properties, in the same order as the schema above.
When I build data for Mongo via const contract = new Class_name(data) and call console.log(contract), I get the expected object with its properties in exactly the right order, like:
Contract {1.code: XXX, 2.realm: YYY, 3.type: D, .... 8.VaR_size: 12, 9.date: 327438}
but when I try to create/update the document in the DB via findOneAndUpdate (or findById), it is written in alphabetical order rather than the necessary 1->9, for example:
_id:5ca3ed3f4a547d4e88ee55ec
1.code:"RVBD-02.J"
7.VaR:0.9
(not the 1->9)...:...
8.VaR_size:0.22
__v:0
5.avg:169921
The full code snippet for writing is:
let test = await contracts.findOneAndUpdate(
  {
    code: `${item_ticker}-${moment().subtract(1, 'day').format('DD.MMM')}` // how to find
  },
  contract, // document for writing, and the options below
  {
    upsert: true,
    new: true,
    setDefaultsOnInsert: true,
    runValidators: true,
    lean: true
  }
).exec();
Yes, I have read the mongoose docs and couldn't find any option param that solves my problem; or perhaps some optional params do matter, but there is no description of this.
It's not my first project; the funny thing is that when I insert tons of docs via .insertMany, the docs are inserted according to the schema order (not alphabetical).
My only question is: how do I fix it? Is there an option param, or is this simply part of how the findAnd... operations work? If there is no solution, what should I do, given that I need the right ordering and need to check the existence of a document before inserting it?
Update #1: after some time I rephrased my Google search query and found a relevant question here on SO: MongoDB field order and document position change after update
I guess I found the right answer via the link I posted above. So yes, it's part of MongoDB:
MongoDB allocates space for a new document based on a certain padding factor. If your update increases the size of the document beyond the size originally allocated, the document will be moved to the end of the collection. The same concept applies to fields in a document.
-- Bernie Hackett
But this useful comment still offers no solution, right? Wrong. It seems the only way to avoid this situation is to use an additional optional param at the Model.find stage -- the same kind of ordering used in the $project stage of .aggregate. It looks like this:
Model.find({ query }, {
  "field_one": 1,
  "field_two": 1,
  // .............
  "field_last": 1
});
I'm trying to make a real-time vote application using SailsJS, and am currently having trouble with MongoDB. I am completely new to this, and have just been using SailsJS's easy calls to access MongoDB.
module.exports = {
  attributes: {
    selectOptions: {
      type: 'Object'
    },
    question: {
      type: 'string',
      required: true
    },
    password: {
      type: 'string',
      required: true
    }
  }
};
The above code is for the model that I have. The selectOptions attribute should hold an array of objects, like [{id: 1, result: 0}, {id: 2, result: 0}, ...], and I would like to know how to do this, as I cannot seem to find any documentation about arrays of objects. The only thing I found was something about collections, or making another model and linking it to the original model, but when I tried that, Sails gave me a foreign key error that I had never faced before. I really appreciate your time and look forward to a response.
P.S. - I tried making the type either JSON or Object or nothing (i.e. not putting any type under selectOptions) and changed the model as well to see if it works; both JSON and Object didn't work, but selectOptions did. However, I think it was returning a string, as its length was longer than what the array was supposed to be.
I successfully implemented loading and showing relations with 'Backbone Relational' from an API I created. I got things working by trial and error. I do think the docs are lacking some clarity, though, since it took a lot of time to figure out how things work; especially on how to map things to the API, the docs are lacking a bit.
Problem
Adding a bookmark works; it's the editing and deletion that don't work. The PUT becomes a POST, and the DELETE simply doesn't fire at all. When I hardcode an id on the model it does work, so the id is missing, which makes sense for the PUT becoming a POST.
The problem seems to be that the id doesn't hold an actual id but a collection. The view where the problem occurs does not require the BookmarkBinding; it's used somewhere else. Simply the fact that it has Bookmark as a relation makes the DELETE and PUT break.
BookmarkBinding model:
App.Model.BookmarkBinding = Backbone.RelationalModel.extend({
  defaults: {
    set_id: null,
    bookmark_id: null
  },
  relations: [{
    type: Backbone.HasOne,
    key: 'bookmark',
    relatedModel: 'App.Model.Bookmark',
    reverseRelation: {
      type: Backbone.HasOne,
      key: 'id'
    }
  }],
  urlRoot: 'http://api.testapi.com/api/v1/bookmark-bindings'
});
Bookmark model:
App.Model.Bookmark = Backbone.RelationalModel.extend({
  defaults: {
    url: 'undefined',
    description: 'undefined',
    visits: 0
  },
  relations: [{
    type: Backbone.HasMany,
    key: 'termbindings',
    relatedModel: 'App.Model.TermBinding',
    reverseRelation: {
      key: 'bookmark_id'
    }
  }],
  urlRoot: 'http://api.testapi.com/api/v1/bookmarks'
});
From Backbonejs.org
The default sync handler maps CRUD to RESTful HTTP methods like so:
create → POST /collection
read → GET /collection[/id]
update → PUT /collection/id
delete → DELETE /collection/id
Your question suggests that you're making an HTTP PUT request, and therefore a Backbone update. If you want to make an HTTP POST, use Backbone create. The PUT request maps onto update and requires that an id be sent in the URL, which isn't happening according to your server log. If you're creating a new object, then most server-side frameworks such as Rails / Sinatra / Zend will create an id for the object.
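As a rough sketch of how Backbone picks the verb with your BookmarkBinding model (the attribute values are made up):
// Sketch of how Backbone.sync picks the HTTP verb; the attribute values are made up.
var binding = new App.Model.BookmarkBinding({ set_id: 1, bookmark_id: 2 });
binding.save();      // no id yet -> isNew() -> POST /api/v1/bookmark-bindings

binding.set('id', 42);
binding.save();      // has an id -> PUT /api/v1/bookmark-bindings/42
binding.destroy();   // DELETE /api/v1/bookmark-bindings/42 (never sent for a model without an id)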
Another possible source of error is the keys that you chose for the relations, as you suspected.
A Bookmark has many BookmarkBindings, and it seems that Backbone-relational will store them in the field that you specify in BookmarkBindings.relations.reverseRelation.key, which is currently defined as 'id'.
So the collection of related BookmarkBinding ids will be stored on the same attribute as Bookmark.id, creating a collision. Backbone.sync will send an undefined value to the server (which you see in your logs), because it finds a collection there instead of an integer.
First suggestion - You may not need a bidirectional relation, in which case drop it from the BookmarkBinding model.
Second suggestion - define the reverse relation on another key, so that it doesn't collide with Bookmark.id, such as BookmarkBindings.relations.reverseRelation.key: 'binding_ids'.
Due disclosure - I've never used Backbone-relational.js, only Backbone.js.
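The second suggestion would look roughly like this (a sketch only; 'binding_ids' is an example key name, and as noted above this is unverified against Backbone-relational):
// Sketch of the second suggestion; 'binding_ids' is an example key name only.
App.Model.BookmarkBinding = Backbone.RelationalModel.extend({
  relations: [{
    type: Backbone.HasOne,
    key: 'bookmark',
    relatedModel: 'App.Model.Bookmark',
    reverseRelation: {
      type: Backbone.HasMany,
      key: 'binding_ids'   // no longer collides with Bookmark.id
    }
  }],
  urlRoot: 'http://api.testapi.com/api/v1/bookmark-bindings'
});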
The problem was that on editing or deleting the bookmark model, the bookmark binding model wanted to do its work too, since it is related to the bookmark from its side. I had already tried removing the reverse relation, which didn't prove to be a solution, since in the other part of my application where I use the bookmark bindings things wouldn't work anymore.
Solution
I did end up removing the reverse relation (#jarede +1 for that!), but the crux was how to implement the foreign key to fetch relations from the API without a reverse relation. I ended up adding the keySource and keyDestination which made everything work out.
Sidenote
Backbone Relational cannot handle identical foreign keys either; this gave me some problems too. The lastly declared foreign key will overwrite all the previous ones, which can be quite impractical, since within an API it's not uncommon that models are related through a column named id. The idAttribute can be set with idAttribute: '_id', for example, but the foreign key has to be unique across your application.
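For instance (a sketch; which model needs the custom idAttribute depends on your API):
// Sketch: remap the model's own id so it doesn't clash with a foreign key named 'id'.
App.Model.Bookmark = Backbone.RelationalModel.extend({
  idAttribute: '_id'
});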
BookmarkBinding model:
App.Model.BookmarkBinding = Backbone.RelationalModel.extend({
  defaults: {
    set_id: null,
    bookmark_id: null
  },
  relations: [{
    type: Backbone.HasOne,
    key: 'bookmark',
    keySource: 'id',
    keyDestination: 'bookmark',
    relatedModel: 'App.Model.Bookmark'
  }],
  urlRoot: 'http://api.testapi.com/api/v1/bookmark-bindings'
});