Meteor & Backbone Sync - javascript

I'm searching for the best way to sync my Backbone.Collection with a Meteor MongoDB collection. What is the best way to do it? I want to make POST/GET requests through the client.

Meteor already keeps your server collections and client-side cached collections in sync. If you want to use Backbone.js collections on top of that, you can do it in two ways.
First way: use observe on a MongoDB cursor, watch for the 'added', 'changed' and 'removed' events, and handle them however you want. There is more in the docs.
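A rough sketch of the first way, assuming a Meteor collection named Items and a client-side Backbone collection (both names are illustrative):

// Backbone models use 'id' by default; Mongo documents use '_id'.
var ItemModel = Backbone.Model.extend({ idAttribute: '_id' });
var items = new Backbone.Collection([], { model: ItemModel });

// Mirror every change on the Minimongo cursor into the Backbone collection.
Items.find().observe({
  added: function (doc) {
    items.add(doc);
  },
  changed: function (newDoc, oldDoc) {
    items.get(newDoc._id).set(newDoc);
  },
  removed: function (oldDoc) {
    items.remove(oldDoc._id);
  }
});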
Second way: to keep memory consumption lower and avoid keeping two copies of every object in both Minimongo and Backbone collections, you can go one level lower and use registerStore, the client-side DDP API.
For API docs, look at the source code. The DDP specification can also help; see the "Managing Data" section.

Related

React-Flux stores: collections vs specialized singletons

I've found surprisingly little information about how to define your stores when building an app with React and Flux (specifically Fluxxor). My co-workers, most of whom have experience with Backbone and other MVC frameworks, tend to create stores that are basically collections (i.e. BlogPostStore, CommentStore). My gut feeling is that stores are very intentionally not called collections or models. The Flux docs describe stores both as collections and as singletons representing a logical domain, without really explaining when you should use one form or the other.
Are there best practices or rules of thumb for deciding when you should be defining stores that are basically collections or more specific singletons?
The best way to divide stores when using flux is, in my opinion, around domain concepts. Need to manage blog posts and comments? A blog post store and a comment store (which each act as containers for a collection of their type of data) makes sense. Have a session with authentication data that you need to manage via client side actions? A session store — which wouldn't manage a collection at all, just a bunch of session-related data — is perfectly legitimate. The TimeStore mentioned in the documentation you linked to is another great example.
Basically, any time you're dealing with a new "type" of information that doesn't fit in with an existing store, create a new one. (There was a talk a couple years ago where someone from Yahoo! said they had an app with more than a hundred stores.) Then, in the stores themselves, implement the action handlers for whatever type of data management you want; if it's a store that handles a collection of something, then the action types and handlers you define will reflect that. If not, that's fine too.
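For illustration, a non-collection store in Fluxxor might look something like this (a minimal sketch; the action names and fields are invented for the example):

var SessionStore = Fluxxor.createStore({
  initialize: function () {
    this.user = null;                 // plain session state, not a collection
    this.bindActions(
      "LOGIN_SUCCESS", this.onLogin,
      "LOGOUT", this.onLogout
    );
  },
  onLogin: function (payload) {
    this.user = payload.user;
    this.emit("change");              // notify subscribed views
  },
  onLogout: function () {
    this.user = null;
    this.emit("change");
  }
});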

Best practice for node - mongo - angular

I have an app I am designing with Node/Mongo/Angular, and what I am not getting is the best way to get my data from Mongo into my pages. I can use Node and, through my routes, send back data from Mongo with my template (Hogan in this case) and bind using Mustache.js. That works fine for most things. But I have one screen with a decent number of drop-down lists, and binding them for an edit scenario now seems a challenge. I would like to get them bound to an Angular model and go about it that way. Is it better to get the data through the route in Node and then use something like ng-init to get it into Angular? Or would I be better off skipping the route in Node and having Angular perform a GET request and bind that way?
From the documentation of ng-init, more precisely from the red warning alert at the top of the page:
The only appropriate use of ngInit is for aliasing special properties of ngRepeat, as seen in the demo below. Besides this case, you should use controllers rather than ngInit to initialize values on a scope.
So no, do not use ng-init. While that can be a good strategy for lazy migrations from regular applications to single page applications, it's a bad idea from an architectural point of view.
Most importantly, you lose two things:
An API. The benefit of SPAs is that you have an API that you're constantly developing and maintaining, even before it has external users.
A clean separation of concerns. Views are strictly limited to presentation, can be cached by the client and all data is transferred through JSON API endpoints.
I would say that the best way to get data from Mongo into your page, is as mnemosyn said, using an API.
Basically, you can have your API route, e.g. '/api/data', configured so that it can be used by an Angular service (which can use ngResource to make things easier). Any controller that wants this data can use the Angular service to get it, do some work with it, and then update it through the same service.
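To make that concrete, here is a minimal sketch; the '/api/data' route, the DataService name, and the db handle are illustrative assumptions:

// Node/Express route serving JSON from Mongo
app.get('/api/data', function (req, res) {
  db.collection('data').find().toArray(function (err, docs) {
    if (err) return res.status(500).json(err);
    res.json(docs);
  });
});

// Angular service wrapping the endpoint
angular.module('app').factory('DataService', ['$http', function ($http) {
  return {
    getAll: function () {
      return $http.get('/api/data').then(function (res) {
        return res.data;
      });
    }
  };
}]);

// Any controller can inject the service and bind the result
angular.module('app').controller('EditCtrl', ['$scope', 'DataService',
  function ($scope, DataService) {
    DataService.getAll().then(function (items) {
      $scope.dropdownItems = items;   // drop-down options now live on the scope
    });
  }
]);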

Triggers (or something similar) in MongoDB?

Suppose I have a MongoDB and I am storing data in it.
Is there any way to get notified by MongoDB when data is inserted, updated or deleted? I know there are tailable cursors, but they only work with capped collections.
Anything else?
Basically, is there some kind of "event" in the JavaScript API I could listen to?
MongoDB has no concept of "triggers"; instead it leaves it to you to build your own API layer for the tasks you would typically handle with SQL database triggers. The general premise is that the typical tasks, such as updating related collections, are best handled by changing your schema design to embed lists and documents within the documents you are dealing with.
Beyond that, the preferred design is to wrap your "trigger" logic into your own API from the application's point of view and give that layer control of such functions.
In the event that you really need to hook into every update/insert/delete, you can track the oplog, a special capped collection containing all operations from all sources processed by your MongoDB replica set.
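A rough sketch of tailing the oplog with the native Node.js driver; this assumes a replica set (the oplog only exists on replica set members), and the namespace filter is illustrative:

var MongoClient = require('mongodb').MongoClient;

// The oplog lives in the 'local' database of a replica set member.
MongoClient.connect('mongodb://localhost:27017/local', function (err, db) {
  if (err) throw err;

  var cursor = db.collection('oplog.rs').find(
    { ns: 'mydb.mycollection' },   // watch a single namespace
    { tailable: true, awaitdata: true, numberOfRetries: -1 }
  );

  cursor.stream().on('data', function (op) {
    // op.op is 'i' (insert), 'u' (update) or 'd' (delete);
    // op.o holds the document (or the update spec for updates).
    console.log('operation:', op.op, op.o);
  });
});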

Is Mongoose not scalable with document array editing and version control?

I am developing a web application with Node.js and MongoDB/Mongoose. Our most used Model, Record, has many subdocument arrays. Some of these, for instance, include "Comment", "Bookings", and "Subscribers".
In the client side application, whenever the user hits the "delete" button it fires off an AJAX request to the delete route for that specific comment. The problem I am running into is that, when many of these AJAX calls come in at once, Mongoose fails with a "Document not found" error on some (but not all) of the calls.
This only happens when the calls come in rapidly, many at a time. I think this is due to Mongoose's document versioning causing conflicts. Our current process for a delete is:
Fetch the document using Record.findById()
Remove the subdocument from the appropriate array (using, say, comment.remove())
Call record.save()
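In code, that flow looks roughly like this (route and variable names are illustrative):

Record.findById(recordId, function (err, record) {
  if (err) return next(err);

  record.comments.id(commentId).remove();   // remove the subdocument in memory

  // save() checks and increments __v; a concurrent save based on the same
  // version can fail here, which matches the errors described above.
  record.save(function (err) {
    if (err) return next(err);
    res.send(200);
  });
});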
I have found a workaround: manually update the collection using Record.findByIdAndUpdate with the $pull operator. However, this means we can't use any of Mongoose's middleware and lose the version control entirely. The more I think about it, the more situations I see where this could happen and where I would have to use Mongoose's wrapper functions like findByIdAndUpdate or findOneAndRemove. The only other solution I can think of is to put the removal attempt in a retry loop and hope it works, which seems like a very poor fix.
Using the Mongoose wrappers doesn't really solve my problem, as it means I can't use any middleware or hooks at all, which is one of the huge benefits of Mongoose.
Does this mean that Mongoose is essentially useless for anything involving rapid editing, and I might as well use the native MongoDB driver? Or am I misunderstanding Mongoose's limitations?
How could I solve this problem?
Mongoose's versioned document array editing is not scalable for the simple reason that it's not an atomic operation. As a result, the more array edit activity you have, the more likely it is that two edits will collide and you'll suffer the overhead of retry/recovery from that in your code.
For scalable document array manipulation, you have to use update with the atomic array update operators: $pull[All], $push[All], $pop, $addToSet, and $. Of course, you can also use these operators with the atomic findAndModify-based methods of findByIdAndUpdate and findOneAndUpdate if you also need the original or resulting doc.
As you mentioned, the big downside of using update instead of findOne+save is that none of your Mongoose middleware and validation is executed during an update. But I don't see that you have any choice if you want a scalable system. I'd much rather manually duplicate some middleware and validation logic for the update case than have to suffer the scalability penalties of using Mongoose's versioned document array editing. Hey, at least you still get the benefits of Mongoose's schema-based type casting on updates!
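For example, the delete described in the question becomes a single atomic $pull (a sketch; names are illustrative):

Record.findByIdAndUpdate(
  recordId,
  { $pull: { comments: { _id: commentId } } },   // one atomic server-side operation
  function (err, record) {
    if (err) return next(err);
    res.send(200);   // no fetch/save cycle, so no version conflict
  }
);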
I think, from our own experiences, the answer to your question is "yes". Mongoose is not scalable for rapid array-based updates.
Background
We're experiencing the same issue at HabitRPG. After a recent surge in user growth (bringing our DB to 6 GB), we started seeing VersionError on many array-based updates (background on VersionError). ensureIndex({_id:1,__v:1}) helped a bit, but that tapered off as more users joined. It would appear to me that Mongoose is indeed not scalable for array-based updates. You can see our whole investigation here.
Solution
If you can afford to move from an array to an object, do that. E.g., comments: Schema.Types.Array => comments: Schema.Types.Mixed, sorting by post.comments.{ID}.date, or even a manual post.comments.{ID}.position if necessary.
If you're stuck with arrays:
db.collection.ensureIndex({_id:1,__v:1})
Use your methods described above. You won't benefit from hooks and validations, but there are worse things.
I would strongly suggest pulling those arrays out into new collections. For example, a Comments collection where each document has a record ID to indicate where it belongs. This is a much more scalable solution.
You are correct, Mongoose's array operations are not atomic and therefore do not scale well.
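A sketch of that split, with illustrative names; each comment points back at its record:

var commentSchema = new mongoose.Schema({
  record: { type: mongoose.Schema.Types.ObjectId, ref: 'Record', index: true },
  author: String,
  body:   String,
  date:   { type: Date, default: Date.now }
});
var Comment = mongoose.model('Comment', commentSchema);

// Deleting a comment is now a single-document operation with no array
// versioning involved, and fetch-then-remove still runs 'remove' hooks.
Comment.findById(commentId, function (err, comment) {
  if (err) return next(err);
  comment.remove(function (err) {
    if (err) return next(err);
    res.send(200);
  });
});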
I thought of another idea, which I'm not certain about but seems worth offering: soft-delete.
Mongoose is very concerned about array-structure changes because they make future changes ambiguous. But if you just tag a comment subdocument with comment.deleted = true, you might be able to perform more such operations without running into conflicts. A cron task could then go through periodically and actually remove the flagged comments.
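A sketch of that soft-delete; the deleted flag is hypothetical:

// $set on a subdocument field leaves the array structure untouched,
// so concurrent edits are less likely to collide.
Record.update(
  { _id: recordId, 'comments._id': commentId },
  { $set: { 'comments.$.deleted': true } },   // positional $ targets the matched comment
  function (err) { /* handle error */ }
);

// A periodic job can purge the flagged comments later:
Record.update(
  {},
  { $pull: { comments: { deleted: true } } },
  { multi: true },
  function (err) { /* handle error */ }
);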
Oh, an additional idea is to use some sort of in-memory cache, so that if a record has been accessed or edited in the last few minutes it is served from the cache instead of being re-read from the database. Two requests coming in at the same time would then modify the same in-memory object rather than two conflicting copies.
Note: I'm not actually sure that either of these are good ideas in general or that they'll solve your problem, so go ahead and edit/comment/downvote if they're bad :)

Ember.js Data Conflict Resolution / Failing on Conflict

If using Ember.js with the ember-data REST adapter, is there some sort of conflict resolution strategy for persisting data to the server?
At the very least, for my case, failing and rolling back would be sufficient in the case of conflicts, as long as the user can be informed of this. So, what sort of data/structure would be required for this? Some sort of "version" ID on the models, so the server can check the submitted version and make sure the client had the most recent data? Is there anything in Ember.js to make this a bit less manual? And if so, what?
Edit: Also, is there anything that helps with conflicts in bulk commits of models? Say we have a parent model with a "hasMany" relationship to several child models, and all of them are to be persisted at the same time. With server-side code alone, I could wrap this in a database transaction and fail if something is out of date. How does this translate to Ember.js transactions?
I see a flag bulkCommit in the Adapter class. This seems to be able to bulk commit objects of the same type, in one request. However, if I'm persisting records of more than one type, then this would result in multiple requests to the server. Is there a way to either a) make this happen in one request to the server, or b) match up ember-data's transactions with transactions on the server, so if the transaction on the server fails, and needs to be rolled back, the ember-data transaction fails as well?
[I'm evaluating Ember.js for an upcoming project, and testing a few features and what it's like to develop in. I'm actually considering more real-time updates using socket.io or similar. I see derby.js has made some movements towards automatic conflict resolution]
As you can see in the Ember Data source code here, you can return a 422 HTTP status code with the errors as a dictionary. The errors will be added to the model by Ember Data, keyed by property name, and the model itself will be considered invalid. The model automatically leaves this state once every property that has errors on it changes.
You could watch for errors on a version property and reload the record once a concurrency error appears.
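A sketch of that flow; the version attribute, the error payload shape, and a promise-returning save() are assumptions about your server and ember-data version, not guarantees:

// Server replies to a stale write with HTTP 422 and a body like:
//   { "errors": { "version": ["Record was modified by someone else"] } }

var post = this.get('model');
post.save().then(null, function () {
  if (post.get('errors.version')) {   // concurrency error reported by the server
    post.rollback();                  // discard the local changes...
    post.reload();                    // ...and fetch the latest copy from the server
  }
});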
