EnsureIndex for likes in MongoDB - javascript

Well, I am creating a network that allows users to create posts and like them.
Asking on Stack Overflow, I've come to understand how to structure my database:
A collection which includes a document for each post.
A collection which includes a document for each like; each of these documents holds a reference to the post it belongs to.
When I want to get ALL the likes for a post, I can query the likes collection looking for references to that post.
Up to here I'm fine. But assuming I'll have millions of documents in the likes collection, I wondered how I could query and search among them without it taking too long.
I was advised to use ensureIndex; in this case, I have to index the field which contains the reference to a post.
But when do I have to create this index? Is it enough to create it once (for example, when I set up my database) so that it persists in MongoDB by default, or do I have to create it during the application's lifetime? Thank you.

But assuming i'll have millions documents in like collection, i wondered how could i query and search among them in not too long time.
I assume you would most likely want to do a count on the likes, as an example?
You can't do that cheaply; instead, you use optimizations to combat it. A count over millions of documents can get a bit slow.
A typical scenario is the counter pattern in SQL techs, where you amend the parent row with a sum figure of its children.
Same applies to MongoDB.
You would aggregate important data to the top.
If you actually need to query the likes to show who has liked a post, then you limit those likes. Google+ and other networks tend to limit the number of likes they show to about 1,000.
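As a hedged sketch of that aggregate-to-the-top idea (the collection and field names, such as likesCount and post_id, are assumptions, not from the question):
// Record the like, then atomically bump the cached counter on the post,
// so displaying the count never requires counting millions of documents.
db.likes.insert({ post_id: postId, user_id: userId, createdAt: new Date() });
db.posts.update({ _id: postId }, { $inc: { likesCount: 1 } });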
And i was advised of ensureIndex,
Adding indexes to a database does help with actually searching for documents.
But when do i have to create this index? is enough to create it once
Yes, MongoDB will manage the index itself. You only need to ensure it once.
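From the shell that is a one-time operation (assuming the reference field is called post_id; newer MongoDB versions call the same method createIndex):
// Build the index once; MongoDB maintains it automatically from then on.
db.likes.ensureIndex({ post_id: 1 });
// Equivalent in modern versions:
// db.likes.createIndex({ post_id: 1 });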

Related

What is the best way to reduce firestore document reads in chat app on page load?

I have a chat app that is charging me a large number of reads for each page load, one for each message shown. I'm trying to figure out a way to reduce that number and optimize for cost, as refreshing the page a few times costs hundreds of reads.
The Firestore pricing documentation says:
For queries other than document reads, such as a request for a list of collection IDs, you are billed for one document read. If fetching the complete set of results requires more than one request (for example, if you are using pagination), you are billed once per request.
I considered that maybe if I fetch an entire collection without a query like shown here in the docs, a cost difference might be remotely possible. I'm sure that's probably wrong, but I can't find anything specifying what the exceptions are that only cost 1 read. It also crossed my mind to create an array to hold the most recent messages in the parent document of the collection, but the security rules for updating that array seem overly complex and not practical. I also read about using the firebase cache, but that doesn't seem useful here.
Here is code to demonstrate how I'm currently loading messages. I'm using the react-firebase-hooks library to snapshot this data with useCollectionData:
import { query, orderBy, limit } from "firebase/firestore";
import { useCollectionData } from "react-firebase-hooks/firestore";
const q = query(messagesRef, orderBy("createdAt", "desc"), limit(100));
const [messages] = useCollectionData(q);
In researching, I found this question where I'm pretty sure the accepted answer is wrong. It did make me question the rules. Are there any strategies to reduce the number of reads for this common use case?
Pagination still incurs charges on a per-document read, right?
Yes, it does, but only when you load more pages.
I'm not trying to load the entire collection, but rather wondering if loading the collection without a query has a different cost than with.
Loading a collection without a query that limits the results means that you're reading the entire collection. And yes, the cost will be much higher if you're not using a query. Remember that the cost of reading a collection/query in Firestore is equal to the number of documents that are actually returned. For example, if you have a collection of 1 million documents and your query returns 100, you'll pay for only 100 document reads.
I'm overall trying to figure out if there's a strategy that can improve the read cost of the example query I gave.
No. If you need to get the newest 100 messages, that's the best query you can have. The only change you could make to decrease the number of reads is the value you pass to the limit() function. And maybe that makes sense, since a user might not be interested in reading 100 messages at once. Always try to display only as much data as fits on a screen, and load the rest progressively.
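If you do paginate, a minimal sketch could look like this (the page size of 25 and the one-shot getDocs fetch are assumptions; the asker's code uses a live hook instead):
import { query, orderBy, limit, startAfter, getDocs } from "firebase/firestore";

// First page: only as many messages as fit on one screen.
const firstPage = query(messagesRef, orderBy("createdAt", "desc"), limit(25));
const snapshot = await getDocs(firstPage);

// When the user scrolls back, fetch the next page after the last document
// of the previous one; each page is billed per document actually returned.
const last = snapshot.docs[snapshot.docs.length - 1];
const nextPage = query(messagesRef, orderBy("createdAt", "desc"), startAfter(last), limit(25));
const older = await getDocs(nextPage);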

How to validate read and secure documents in CouchDB?

In 10 Common Misconceptions about CouchDB, Joan Touzet is asked (30:16) if CouchDB will have a way to secure/validate reads on specific documents and/or specific fields of a document.
Joan says that if someone has access to the database, he/she can access all documents in that database.
So she says that there are a few ways to accomplish that:
(30:55) Cloudant was working on field level security access. Have they implemented it yet? Is it open-sourced?
(32:10) You should create separate document in a separate database.
(32:20) Filtered replications. She mentions that it slows 'things' down. She means that the filter slows the replication, correct?
Also, according to the rcouch wiki (https://github.com/rcouch/rcouch/wiki/Validate-documents-on-read), rcouch implements a validate_doc_read function (I haven't tested it, though). Does CouchDB have anything like it?
As far as I can see, the best approach is to model the database according to my problem (one database for this, another for that, one for this person, another for that person) and do filtered replications when necessary. Any suggestions?
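For what it's worth, a filter for that kind of replication might look like this (the doc.owner field and the design-document layout are illustrative assumptions, not CouchDB built-ins):
// In a design document, e.g. _design/auth, under "filters":
function (doc, req) {
  // Replicate only documents owned by the requesting user.
  return doc.owner === req.query.user;
}
The replication request would then name the filter (e.g. "auth/by_owner") and pass query_params such as { "user": "alice" }, so the target database only ever receives that user's documents.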

Followers - mongodb database design

So I'm using mongodb and I'm unsure if I've got the correct / best database collection design for what I'm trying to do.
There can be many items, and a user can create new groups with these items in them. Any user may follow any group!
I have not just added the followers and items to the group collection because there could be 5 items in a group or there could be 10,000 (and the same for followers), and from research I believe you should not use unbound arrays (where the limit is unknown), due to performance issues when the document has to be moved because of its expanding size. (Is there a recommended maximum for array lengths before hitting performance issues anyway?)
I think that with the following design a real performance issue could be when I want to get all of the groups that a user is following for a specific item (based off of the user_id and item_id), because then I have to find all of the groups the user is following, and from that find all of the item_groups with the group_id $in and the item id. (But I can't actually see any other way of doing this.)
// Find all groups the user follows, then find the item_groups
// that link any of those groups to the given item.
Follower
  .find({ user_id: "54c93d61596b62c316134d2e" })
  .exec(function (err, following) {
    if (err) throw err;
    var groups = [];
    for (var i = 0; i < following.length; i++) {
      groups.push(following[i].group_id);
    }
    item_groups
      .find({
        group_id: { $in: groups },
        item_id: '54ca9a2a6508ff7c9ecd7810'
      })
      .exec(function (err, itemGroups) {
        if (err) throw err;
        res.json(itemGroups);
      });
  });
Are there any better DB patterns for dealing with this type of setup?
UPDATE: Example use case added in comment below.
Any help / advice will be really appreciated.
Many Thanks,
Mac
I agree with the general notion of other answers that this is a borderline relational problem.
The key to MongoDB data models is write-heaviness, but that can be tricky for this use case, mostly because of the bookkeeping that would be required if you wanted to link users to items directly (a change to a group that is followed by lots of users would incur a huge number of writes, and you need some worker to do this).
Let's investigate whether the read-heavy model is inapplicable here, or whether we're doing premature optimization.
The Read Heavy Approach
Your key concern is the following use case:
a real performance issue could be when I want to get all of the groups that a user is following for a specific item [...] because then I have to find all of the groups the user is following, and from that find all of the item_groups with the group_id $in and the item id.
Let's dissect this:
Get all groups that the user is following
That's a simple query: db.followers.find({userId : userId}). We're going to need an index on userId which will make the runtime of this operation O(log n), or blazing fast even for large n.
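For instance (assuming the field is literally called userId, as in the query above):
// One-time setup; MongoDB keeps the index current on every write.
db.followers.createIndex({ userId: 1 });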
from that find all of the item_groups with the group_id $in and the item id
Now this is the trickier part. Let's assume for a moment that it's unlikely for items to be part of a large number of groups. Then a compound index { itemId, groupId } would work best, because we can reduce the candidate set dramatically through the first criterion: if an item is shared in only 800 groups and the user is following 220 groups, MongoDB only needs to find the intersection of these, which is comparatively easy because both sets are small.
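As a sketch, that index could be declared like this (field names follow the answer's convention, not the asker's schema):
// itemId first, because it narrows the candidate set the most;
// groupId then resolves the $in against the groups the user follows.
db.item_groups.createIndex({ itemId: 1, groupId: 1 });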
We'll need to go deeper than this, though:
The structure of your data is probably that of a complex network. Complex networks come in many flavors, but it makes sense to assume your follower graph is nearly scale-free, which is also pretty much the worst case. In a scale-free network, a very small number of nodes (celebrities, the Super Bowl, Wikipedia) attract a whole lot of 'attention' (i.e. have many connections), while a much larger number of nodes have trouble getting the same amount of attention combined.
The small nodes are no reason for concern: the queries above, including round-trips to the database, are in the 2ms range on my development machine on a dataset with tens of millions of connections and > 5GB of data. Now that data set isn't huge, but no matter what technology you choose, you will be RAM-bound, because the indices must be in RAM in any case (data locality and separability in networks is generally poor), and the set intersection size is small by definition. In other words: this regime is dominated by hardware bottlenecks.
What about the supernodes though?
Since that would be guesswork and I'm interested in network models a lot, I took the liberty of implementing a dramatically simplified network tool based on your data model to make some measurements. (Sorry it's in C#, but generating well-structured networks is hard enough in the language I'm most fluent in...).
When querying the supernodes, I get results in the range of 7ms tops (that's on 12M entries in a 1.3GB db, with the largest group having 133,000 items in it and a user that follows 143 groups.)
The assumption in this code is that the number of groups followed by a user isn't huge, but that seems reasonable here. If it's not, I'd go for the write-heavy approach.
Feel free to play with the code. Unfortunately, it will need a bit of optimization if you want to try this with more than a couple of GB of data, because it's simply not optimized and does some very inefficient calculations here and there (especially the beta-weighted random shuffle could be improved).
In other words: I wouldn't worry about the performance of the read-heavy approach yet. The problem is often not so much that the number of users grows, but that users use the system in unexpected ways.
The Write Heavy Approach
The alternative approach is probably to reverse the order of linking:
UserItemLinker
{
  userId,
  itemId,
  groupIds[]  // for faster retrieval of the linker; it's unlikely that this grows large
}
This is probably the most scalable data model, but I wouldn't go for it unless we're talking about HUGE amounts of data where sharding is a key requirement. The key difference here is that we can now efficiently compartmentalize the data by using the userId as part of the shard key. That helps to parallelize queries, shard efficiently and improve data locality in multi-datacenter-scenarios.
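If sharding ever did become a requirement, a sketch might look like this (the namespace and shard key are assumptions, not a recommendation from the question):
// All linkers of a user land on the same shard; itemId is included
// to keep the shard key from being low-cardinality.
sh.enableSharding("app");
sh.shardCollection("app.user_item_linkers", { userId: 1, itemId: 1 });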
This could be tested with a more elaborate version of the testbed, but I didn't find the time yet, and frankly, I think it's overkill for most applications.
I read your comment/use case, so I've updated my answer.
I suggest changing the design as per this article: MongoDB Many-To-Many.
The design approach is different, and you might want to remodel your approach accordingly. I'll try to give you an idea to start with.
I make the assumption that a User and a Follower are basically the same entities here.
I think the point you might find interesting is that in MongoDB you can store array fields and this is what I will use to simplify/correct your design for MongoDB.
The two entities I would omit are Followers and ItemGroups:
Followers: a follower is simply a User who can follow Groups. So instead of having a Follower entity, I would only have User, with an array field holding the ids of the Groups the user follows.
ItemGroups: I would remove this entity too. Instead, I would use an array of Item ids in the Group entity and an array of Group ids in the Item entity.
This is basically it. You will be able to do what you described in your use case. The design is simpler and more accurate in the sense that it reflects the design decisions of a document based database.
Notes:
You can define indexes on array fields in MongoDB. See Multikey Indexes for example.
Be wary about using indexes on array fields though. You need to understand your use case in order to decide whether it is reasonable or not. See this article. Since you only reference ObjectIds I thought you could try it, but there might be other cases where it is better to change the design.
Also note that the _id field is a MongoDB-specific primary key of type ObjectId. To access the ids you can refer to them as, e.g., user.id, group.id, etc. You can use an index to ensure uniqueness as per this question.
Your schema design could look like this:
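A rough sketch of the three remaining documents (all field names here are illustrative assumptions):
// User: follows groups via an array of Group ids.
{ _id: ObjectId("..."), name: "Mac", followedGroups: [ groupId1, groupId2 ] }

// Group: holds an array of the Item ids it contains.
{ _id: ObjectId("..."), title: "some group", items: [ itemId1, itemId2 ] }

// Item: points back at the Group ids it belongs to.
{ _id: ObjectId("..."), title: "some item", groups: [ groupId1 ] }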
As to your other question/concerns
Is there a recommended maximum for array lengths before hitting performance issues anyway?
The answer is that in MongoDB the document size is limited to 16 MB, and there is no way to work around that. However, 16 MB is considered sufficient; if you hit the 16 MB limit, your design has to be improved. See here for info, section Document Size Limit.
I think with the following design a real performance issue could be when I want to get all of the groups that a user is following for a specific item (based off of the user_id and item_id)...
I would do it this way. Note how much "easier" it sounds when using MongoDB:
get the item of the user
get groups that reference that item
I would be rather concerned if the arrays got very large and you were using indexes on them. This could slow down write operations on the respective document(s) overall. Maybe not so much in your case, but I'm not entirely sure.
You're on the right track to creating a performant NoSQL schema design, and I think you're asking the right questions as to how to properly lay things out.
Here's my understanding of your application:
It looks like Groups can both have many Followers (mapping users to groups) and many Items, but Items may not necessarily be in many Groups (although it is possible). And from your given use-case example, it sounds like retrieving all the Groups an Item is in and all the Items in a Group will be some common read operations.
In your current schema design, you've implemented a model between mapping users to groups as followers and items to groups as item_groups. This works alright until you mention the problem with more complex queries:
I think with the following design a real performance issue could be when I want to get all of the groups that a user is following for a specific item (based off of the user_id and item_id)
I think a few things could help you out in this situation:
Take advantage of MongoDB's powerful indexing capabilities. In particular, I think you should consider creating compound indexes on your Follower objects covering your Group and User, and your Item_Groups on Item and Group, respectively. You'll also want to make sure this kind of relationship is unique, in that a user can only follow a group once and an item can only be added to a group once. This would best be achieved in some pre-save hooks defined in your schema, or using a plugin to check for validity.
FollowerSchema.index({ group: 1, user: 1 }, { unique: true });
Item_GroupsSchema.index({ group: 1, item: 1 }, { unique: true });
Using an index on these fields will create some overhead when writing to the collection, but it sounds like reading from the collection will be a more common interaction so it'll be worth it (I'd suggest reading more up on index performance).
Since a User probably won't be following thousands of groups, I think it'd be worthwhile to include in the user model an array of groups the user is following. This will help you out with that complex query when you want to find all instances of an item in groups that a user is currently following, since you'll have the list of groups right there. You'll still have the implementation where you're using $in: groups, but it'll be with one less query to the collection.
As I mentioned before, it seems like items may not necessarily be in that many groups (just like users won't necessarily be following thousands of groups). If the case may commonly be that an item is in maybe a couple hundred groups, I'd consider just adding an array to the item model for each group that it gets added to. This would increase your performance when reading all the groups an item is in, a query you mentioned would be a common one. Note: You'd still use the Item_Groups model to retrieve all the items in a group by querying on the (now indexed) group_id.
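A sketch of those last two suggestions in Mongoose (schema shapes and field names are assumptions, not the asker's actual models):
const mongoose = require('mongoose');

// The array of followed groups lives on the user; MongoDB indexes
// array fields like this as multikey indexes.
const UserSchema = new mongoose.Schema({
  name: String,
  following: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Group', index: true }]
});

// Likewise, each item records the groups it has been added to.
const ItemSchema = new mongoose.Schema({
  title: String,
  groups: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Group', index: true }]
});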
Unfortunately, NoSQL databases aren't a good fit in this case; your data model seems strictly relational. According to the MongoDB documentation, there are only certain relationships we can model and certain queries we can perform efficiently.
There are some established practices. MongoDB advises using a Followers collection to record which user follows which group (and vice versa) with good performance. You may find the closest case to your situation on slide 14 of this page. But I think the slides only apply if you want to show each result on a different page. For instance: you are a Twitter user, and when you click the followers button you see all your followers; then you click on a follower's name and see that follower's messages, and so on. As you can see, all of that works step by step; no relational query is needed.
I believe that you should not use unbound arrays (where the limit is unknown) due to performance issues when the document has to be moved because of its expanding size. (Is there a recommended maximum for array lengths before hitting performance issues anyway?)
Yes, you're right. http://askasya.com/post/largeembeddedarrays .
But if you have about a hundred items in your array there is no problem.
If some of your data is of a fixed size, up to a few thousand entries, you may embed it into your collections as an array, and then you can query those indexed embedded fields rapidly.
In my humble opinion, you should create hundreds of thousands of test records and check the performance of embedded documents and arrays against your case. Don't forget to create indexes appropriate to your queries. You may also try using document references in your tests. If you like the performance you see after those tests, go ahead.
You're trying to find the group_ids followed by a specific user, and then to find a specific item within those group_ids. Doesn't that make the Item_Groups and Followers collections a many-to-many relation?
If so, many-to-many relations aren't supported well by NoSQL databases.
Is there any chance you can change your database to MySQL?
If so, you should check this out.
Briefly, MongoDB pros versus MySQL:
- Better write performance
Briefly, MongoDB cons versus MySQL:
- Worse read performance
If you work with Node.js, you may check https://www.npmjs.com/package/mysql and https://github.com/felixge/node-mysql/
Good luck...

How to store user actions in MeteorJs using MongoDB?

I'm using Meteor JS for a project, so inherently I'm using MongoDB. I'm storing a user's check-in and check-out actions. I'm currently storing them as individual docs in the collection. Each action contains 3 fields: in or out, time of action, and userid. Is this the best way to go, though? Should I just have one doc per member and then store each action in an array? Is there another way? I anticipate several hundred members, and hopefully several thousand in the future. Thanks.
From experience, I can say that storing records instead of arrays is a better choice in the long run.
As far as Meteor is concerned, its reactivity handles collection records, but not individual fields in arrays. In other words, if one element gets added to the checkins array of a user object, the entire user object needs to be synchronized with the clients. If you store records instead, only the newly added record will be sent by the publication.
As far as MongoDB is concerned, there is a document size limit of 16MB. Not sure how frequent your checkins and checkouts are, but if you store them in an array, you might run into that limitation at some point.
Records are also easier to access than arrays.
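A minimal sketch of the one-document-per-action approach (collection, method, and field names are assumptions):
// One document per check-in/out; reactivity then ships only new records
// to clients instead of re-sending a whole user object.
const Checkins = new Mongo.Collection('checkins');

Meteor.methods({
  checkIn(direction) {
    check(direction, Match.OneOf('in', 'out'));
    Checkins.insert({
      userId: this.userId,
      direction,
      at: new Date()
    });
  }
});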
For more details, see MongoDB data modeling and Database modeling in Bulletproof Meteor.

Easiest, Most Efficient way of associating data from different tables Without Associations

This is something that has intrigued me a lot recently. It's a general SQL/Relational Database problem coming from a guy who prefers Mongo.
What I want is to associate data from different tables in the most efficient, easiest way, without using associations and assuming I can't restructure or remodel the db.
So, for example, with FQL (which doesn't have associations), if I asked for the name and eid of all the events my current user has been invited to, I'd also like to know whether my current user is going, but that info is in the 'event_member' table.
In this instance I've an interest in another column (rsvp_status) in event_member, one that I'd like to be associated with the columns from event, i.e eid and name.
In this case the instinct may be to say that, since every event has a name, an eid, and an rsvp_status, we could sort by eid and then match each nth item (for n = 1 to whatever), because there's guaranteed to be the same number of rows, but there are many cases where we can't do that.
And I know I could do separate queries and then iterate through and match them by eid, but basically I'm looking for a generic, simple, efficient solution for the associations idea, if one exists. Preferably in JavaScript.
What you are looking for here is a simple JOIN of two or more tables. http://www.w3schools.com/sql/sql_join.asp
You do not have to have any relations between tables in order to perform JOINs. The relations are just a constraint to ensure that bad/invalid data can't propagate to the tables, for example an event_member whose eid points to a nonexistent event. Anyway, you are free to JOIN tables as you like :)
Here is a way to connect to SQL Server using JavaScript: How to connect to SQL Server database from JavaScript in the browser?
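As a hedged sketch using the mysql package mentioned elsewhere on this page (table and column names mirror the FQL example; the connection details and currentUserId are placeholders):
const mysql = require('mysql');
const conn = mysql.createConnection({ host: 'localhost', user: 'me', database: 'events' });

// A single JOIN returns the event's eid and name together with the
// current user's rsvp_status; no declared relations are required.
conn.query(
  'SELECT e.eid, e.name, m.rsvp_status ' +
  'FROM event e JOIN event_member m ON m.eid = e.eid ' +
  'WHERE m.uid = ?',
  [currentUserId],
  function (err, rows) {
    if (err) throw err;
    console.log(rows);
  }
);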
