Manage relations among users in db - javascript

I am creating a mock app with user creation, auth, and friends as a Node.js learning exercise. Having spent my time mostly at the front end of things, I am a n00b as far as DBs are concerned. I want to create a user database where I keep track of user profiles and their connections/friends.
The primary objective is to load/store users' connections in the database, then fetch this information and give it to the user as efficiently as possible, in the fewest queries.
I'd really appreciate some help with a DB structure that can accomplish this. I am using MongoDB and Node.
Off the top of my head: I can store the user's connections in an object in a "connections" field. But this will involve making a lot of queries to fetch connections' details, like their "about me" information - which I could also store in the same object.
Confused. Would really appreciate some pointers.

Take a look at the Mongoose ODM. It has a populate method that grabs foreign documents, and lots of other great stuff too.
You could say
Users.find({}).populate('connections').exec(function(err,users) { ... });
Before populate, each user's connections field is an array of IDs; after, it's an array of user documents.
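A minimal sketch of this pattern, assuming a User model whose connections field holds ObjectId references (the schema and field names here are illustrative, not taken from the question):

// Hypothetical Mongoose setup; schema and field names are assumptions.
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  name: String,
  aboutMe: String,
  // Store only the IDs of connected users; populate() swaps them for documents.
  connections: [{ type: mongoose.Schema.Types.ObjectId, ref: 'User' }]
});

const User = mongoose.model('User', userSchema);

// Fetch one user and replace the connection IDs with the referenced profiles,
// pulling only the fields the UI needs - a single extra query under the hood.
function getUserWithConnections(userId) {
  return User.findById(userId)
    .populate('connections', 'name aboutMe')
    .exec();
}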

Related

Auto generate sub database firestore

I have a Firestore collection with a bunch of documents, each with plenty of subfields. On a web page I need a list of specific subfields from each document.
Currently I load the entire collection when the page loads and then loop through it to get the values I want. This uses way too many reads to get very little data.
Is there a way to solve this, e.g. an auto-generated collection that contains the fields from the other collection in an array or something?
Many thanks in advance
Auto-creating such a subcollection with just the fields you need is a great way to reduce the bandwidth needed to load the data.
There is nothing built into Firestore to create those derived documents, but it's fairly easy to build something using Cloud Functions. Create a function that responds to a Firestore onWrite trigger, and write the subset of the data to its destination there. It's common to have a separate Cloud Function for each such use-case, and I regularly see projects with 100+ such functions.
I expect we'll also start seeing Firebase Extensions for this type of thing, but right now no-one seems to have built one.
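A minimal sketch of such a function, assuming a source collection named items and a derived collection named itemSummaries (both names and the copied fields are my own illustrations, not from the question):

// Illustrative Cloud Function (firebase-functions v1 API); names are assumptions.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.mirrorItemSummary = functions.firestore
  .document('items/{itemId}')
  .onWrite((change, context) => {
    const summaryRef = admin.firestore()
      .collection('itemSummaries')
      .doc(context.params.itemId);

    // If the source document was deleted, remove the derived one too.
    if (!change.after.exists) {
      return summaryRef.delete();
    }

    // Copy only the small subset of fields the list page needs.
    const data = change.after.data();
    return summaryRef.set({ title: data.title, status: data.status });
  });

The list page can then read from itemSummaries and download only the small documents it actually needs.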

firestore: arrays vs sub collection of documents performance

I would like to ask whether there is a best practice for Firestore when developing a chat app, specifically for storing the messages of chat rooms.
The assumption here is that every chatroom has its own document.
I started using an array to store the messages from the users. The problem with that approach is that there is no way to simply insert (append) a new entry every time a new message is submitted to the chat room; one has to save a new copy of the array with the new message appended. This seems like something that would scale really badly, unless the chat history is split into sub-arrays, etc.
In the official documentation, they suggest a structure where one should store the messages of a specific chat room as separate documents in a subcollection of that chat room. I wonder if this approach is the best, what some drawbacks would be, or if there is another preferred way to do this.
I would generally go with the approach of "every chat room has a subcollection of messages, and every new message is a separate document in this subcollection." This has several advantages: it's easy to add or edit individual messages, and you can perform a number of different queries (like "grab the 20 most recent messages").
The biggest drawback, I suppose, is that if you find that new users are frequently going to be entering your chat and will want to see the entire chat history of the room up until they joined, that would result in a large number of database reads. Realistically, though, I don't know how often that would happen in real life, and you could mitigate this by using pagination to grab your historical chat in batches.
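A minimal sketch of that structure with the Firebase web SDK (v8-style namespaced API), assuming collections named chatRooms and messages and a sentAt field - all names here are illustrative:

// Illustrative example; collection and field names are assumptions.
const db = firebase.firestore();

// Append a new message as its own document in the room's subcollection.
function sendMessage(roomId, userId, text) {
  return db.collection('chatRooms').doc(roomId)
    .collection('messages')
    .add({
      userId,
      text,
      sentAt: firebase.firestore.FieldValue.serverTimestamp()
    });
}

// Grab the 20 most recent messages for a room.
function loadRecentMessages(roomId) {
  return db.collection('chatRooms').doc(roomId)
    .collection('messages')
    .orderBy('sentAt', 'desc')
    .limit(20)
    .get();
}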
To add to what Todd said:
You can't use the server timestamp (FieldValue.serverTimestamp()) inside array elements - a big downside for your case, as you'll want to record the time each message was sent.

How to store user actions in MeteorJs using MongoDB?

I'm using Meteor JS for a project, so inherently I'm using MongoDB. I'm storing a user's check-in and check-out actions. I'm currently storing them as individual docs in the collection. Each action contains 3 fields: in or out, time of action, and userId. Is that the best way to go, though? Should I just have one doc per member and then store each action in an array? Is there another way? I anticipate several hundred members now, and hopefully several thousand in the future. Thanks.
From experience, I can say that storing records instead of arrays is a better choice in the long run.
As far as Meteor is concerned, its reactivity handles collection records, but not individual fields in arrays. In other words, if one element gets added to the checkins array of a user object, the entire user object needs to be synchronized with the clients. If you store records instead, only the newly added record will be sent by the publication.
As far as MongoDB is concerned, there is a document size limit of 16MB. Not sure how frequent your checkins and checkouts are, but if you store them in an array, you might run into that limitation at some point.
Records are also easier to access than arrays.
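A minimal sketch of the per-record approach in Meteor; the collection name, field names, and publication name are my own assumptions:

// Illustrative Meteor sketch; names are assumptions, not from the question.
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const CheckIns = new Mongo.Collection('checkIns');

// Each action is its own small document.
export function recordAction(userId, direction) {
  CheckIns.insert({
    userId,
    direction,       // 'in' or 'out'
    at: new Date()
  });
}

// Publish only one user's actions; new inserts are sent to clients incrementally.
if (Meteor.isServer) {
  Meteor.publish('checkIns.forUser', function (userId) {
    return CheckIns.find({ userId }, { sort: { at: -1 } });
  });
}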
For more details, see MongoDB data modeling and Database modeling in Bulletproof Meteor.

Meteor.js - Should you denormalize data?

This question has been driving me crazy and I can't get my head around it. I come from a MySQL relational background and have been using Meteorjs and Mongo. For the purposes of this question take the example of posts and authors. One Author to Many Posts. I have come up with two ways in which to do this:
Have a single collection of posts - each post has the author information embedded in the document. This of course leads to denormalization and issues such as: if the author's name changes, how do you keep the data correct?
Have two collections: posts and authors - each post has an author ID which references the authors collection. I then attempt to do a "join" on a non-relational database while trying to maintain reactivity.
It seems to me that with MongoDB some degree of denormalization is acceptable, and I am tempted to embed, as implementing joins really does feel like going against the ideals of Mongo.
Can anyone shed any light on what is the right approach especially in terms of wanting my app data to scale well and be manageable?
Thanks
Denormalisation is useful when you're scaling your application and you notice that some queries are taking too much time to complete. I've also noticed that most MongoDB developers tend to forget about data normalisation, but that's another topic.
Some developers say things like "Don't use observe and observeChanges because they're slow." We're building real-time applications, so that's a normal thing to happen; it's a CPU-intensive app design by nature.
In my opinion, you should always aim for a normalised database design first, and then decide, try, and test which fields, once duplicated/denormalised, could improve your app's performance. Examples: you remove one query per user, or the UI needs an extra field and it's faster to duplicate it, etc.
Denormalisation comes with an extra price to pay: you have to keep the denormalised fields in sync with the main collection.
Example:
Let's say you have Authors and Articles collections, and each article stores the author's name. The author might change their name. In a normalised scenario, this just works. In a denormalised scenario you have to update the name on the Author document AND on every single article owned by this author.
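A minimal sketch of that denormalised update, assuming an authorId and an embedded authorName field on each article (collection and field names are illustrative):

// Illustrative Meteor/MongoDB sketch; names are assumptions.
import { Mongo } from 'meteor/mongo';

const Authors = new Mongo.Collection('authors');
const Articles = new Mongo.Collection('articles');

function renameAuthor(authorId, newName) {
  // Update the canonical author record.
  Authors.update(authorId, { $set: { name: newName } });

  // Propagate the new name to every denormalised copy.
  Articles.update(
    { authorId },
    { $set: { authorName: newName } },
    { multi: true }
  );
}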
Keeping a normalised design makes your life easier, but denormalisation eventually becomes necessary.
From a MeteorJs perspective: With the normalised scenario you're sending data from 2 Collections to the client. With the denormalised scenario, you only send 1 collection. You can also reactively join on the server and send 1 collection to the client, although it increases the RAM usage because of MergeBox on the server.
Denormalisation is something that is very specific to your application's needs. You can use Kadira to find ways of making your application faster; database design is only one factor out of many that you can play with when trying to improve performance.

MongoDB create a new collection from the client side?

Intro:
I have a collection of players in my app. This is built using a schema and then passing some stuff into my Node.js configuration; perhaps it's a bit more complex than that, but this is the basic idea of how MongoDB for Node.js works with models and collections.
So now in my database I have my DB name, then collections, then a collection of players. Great, now I can build an API and start making GET and PUT requests against that collection.
Question:
My problem is that in my stats app, after week one's game has been tracked, I need to clear all the player attributes so that week 2 starts fresh, but I would like to keep all the stats for the players from week 1.
Given my basic knowledge of how a collection is built, I am thinking that in order to create a new collection I need to define a schema, but perhaps there is another way. If not, the schema would have to be dynamic, right? How do I build a dynamic schema to suit this problem?
Ideas:
Idea: When the user clicks a button saying "complete week 1", the server takes all the data from the players collection and builds a new collection named "week 1 players" (a sketch of this idea follows below).
Question: If the above is a good idea, how do I build a function to create a new MongoDB collection and save players_week1 into its own branch?
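A hedged sketch of that idea, assuming the live data is exposed as a Mongoose Player model; the archive model, its strict: false schema, and the stats field being reset are all illustrative assumptions, not an established answer from this thread:

// Illustrative sketch only; model, collection, and field names are assumptions.
const mongoose = require('mongoose');

// A schemaless ("loose") model lets us archive whatever shape the player docs have.
const looseSchema = new mongoose.Schema({}, { strict: false });

async function archiveWeek(Player, weekNumber) {
  const name = `PlayersWeek${weekNumber}`;
  // The third argument pins the collection name, e.g. "players_week1".
  const Archive = mongoose.models[name] ||
    mongoose.model(name, looseSchema, `players_week${weekNumber}`);

  const players = await Player.find({}).lean();
  if (players.length) {
    await Archive.insertMany(players);
  }

  // Reset the per-week attributes on the live collection for the next week.
  await Player.updateMany({}, { $set: { stats: {} } });
}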
Final Word:
So hopefully someone has an optimal solution and understands what I am talking about; in the meantime I am going to try to follow my idea and search the docs for answers. Thanks, guys.
