Can't find any docs or posts for this, which may indicate I'm trying to do something incorrect.
Is it possible to use a Mongoose schema that is entirely virtual, i.e. not persisted to the db?
I have a number of models, most of which are persisted to the db, but I would like to consistently include models that are only retained in memory, never persisted.
The closest I can come up with is along these lines, but it will still persist objects (with only an id attribute) to the database. Simplified here:
// access_token.js
var mongoose = require('mongoose');

var schema = new mongoose.Schema({});
schema.virtual('token').get(function () {
  return 'abcde12345';
});
module.exports = mongoose.model('AccessToken', schema);
The idea in doing this is to abstract models so that the consuming part of the app does not need to be aware of whether a model is persisted to the database or only held in memory. Of course this could be achieved by creating the same object and methods as a plain object, but that approach would quickly become repetitive.
You could override (monkey patch) the Mongoose methods which save data (e.g. .save), but I suspect what you are trying to do is difficult/impossible.
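For illustration, a minimal sketch of that monkey-patch idea (untested; it simply shadows the inherited save with a no-op):

var AccessToken = mongoose.model('AccessToken', schema);
// Shadow the inherited save so nothing is ever persisted (sketch only).
AccessToken.prototype.save = function (cb) {
  if (cb) cb(null, this);
  return this;
};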
You could take a look at sift.js, a library for doing in-memory querying.
https://github.com/crcn/sift.js
You can set a pre middleware for this model which always fails.
schema.pre('save', function (next) {
  next(new Error("This can't be saved!"));
});
That way you'll know when you're doing something wrong.
UPDATE 1: 5 votes have been received, so I have submitted a feature request: https://github.com/LearnBoost/mongoose/issues/2637
Please cast your +1 votes there to let the core team know you want this feature.
UPDATE 2: See answer below...
ORIGINAL POST:
Let's say I do a "lean" query on a collection OR receive some data from a REST service and I get an array of objects (not mongoose documents).
These objects already exist in the database, but I need to convert some/all of those objects to mongoose documents for individual editing/saving.
I have read through the source and there is a lot going on once mongoose has data from the database (populating, casting, initializing, etc), but there doesn't seem to be a method for 'exposing' this to the outside world.
I am using the following, but it just seems hacky ($data is a plain object):
// What other properties am I not setting? Is this enough?
var doc = new MyModel( $data );
doc.isNew = false;
// mimicking mongoose internals
// "init" is called internally after a document is loaded from the database
// This method is not documented, but seems like the most "proper" way to do this.
var doc = new MyModel( undefined );
doc.init( $data );
UPDATE: After more searching I don't think there is a way to do this yet, and the first method above is your best bet (mongoose v3.8.8). If anybody else is interested in this, I will make a feature request for something like this (leave a comment or upvote please):
var doc = MyModel.hydrate( $data );
Posting my own answer so this doesn't stay open:
Mongoose version 4 (stable, released 2015-03-25) now exposes a hydrate() method on models. None of the fields will be marked dirty initially, meaning a call to save() will do nothing until a field is mutated.
https://github.com/LearnBoost/mongoose/blob/41ea6010c4a84716aec7a5798c7c35ef21aa294f/lib/model.js#L1639-1657
It is very important to note that this is intended to be used to convert a plain JS object loaded from the database into a mongoose document. If you are receiving a document from a REST service or something like that, you should use findById() and update().
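As a hedged sketch of that intended lean-query use (MyModel and the name field are illustrative):

// Re-inflate a lean result into a full mongoose document (mongoose >= 4.0)
MyModel.findOne().lean().exec(function (err, plainObj) {
  var doc = MyModel.hydrate(plainObj); // full document; no paths marked dirty
  doc.name = 'updated';                // now 'name' is marked modified
  doc.save();                          // writes only the modified path
});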
For those who live dangerously:
If you really want to update an existing document without touching the database, I suppose you could call hydrate(), mark fields as dirty, and then call save(). This is not too different from the method of setting doc.isNew = false; as I suggested in my original question. However, Valeri (from the mongoose team) suggested not doing this. It could cause validation errors and other edge-case issues and generally isn't good practice. findById is really fast and will not be your bottleneck.
If you are getting a response from a REST service and, say, you have a User mongoose model:
var User = mongoose.model('User');
var fields = res.body; //Response JSON
var newUser = new User(fields);
newUser.save(function (err, resource) {
  console.log(resource);
});
In the other case, say you have an array of user JSON objects from User.find() that you want to query or populate further:
var query = User.find({});
query.exec(function (err, users) {
  // deep-populate referenced docs (e.g. with the mongoose-deep-populate plugin)
  User.deepPopulate(users, 'email_id phone_number', function (err, populatedUsers) {
    // query through the populated user objects here
  });
});
MongoDB doesn't support joins or transactions, so for now you can't cast values to an object directly, although you can work around it with forEach.
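For example, a rough sketch of that workaround, wrapping each plain object in a model instance (names are illustrative):

usersJson.forEach(function (fields) {
  var user = new User(fields); // cast the plain JSON to a mongoose document
  // work with 'user' here; note mongoose treats it as a new document on save()
});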
I'm building a node.js app and I'm evaluating Sequelize.js for persistent objects. One thing I need to do is publish new values when objects are modified. The most sensible place to do this would seem to be using the afterUpdate hook.
It almost works perfectly, but when I save an object the hook is passed ALL the values of the saved object. Normally this is desirable, but to keep the publish/subscribe chatter down, I would rather not republish fields that weren't saved.
So for instance, running the following
tasks[0].updateAttributes({assignee: 10}, ['assignee']);
Would automagically publish the new value for the assignee for that task on the appropriate channel, but not republish any of the other fields, which didn't change.
The closest I've come is with an afterUpdate hook:
Task.hook('afterUpdate', function (task, fn) {
  Object.keys(task).forEach(function publishValue(key) {
    pubSub.publish('Task:' + task.id + '#' + key, task[key]);
  });
  return fn();
});
which is pretty straightforward, but since the 'task' object has all the fields, I'm being unnecessarily noisy. (The pubSub system is ignorant of previous values and I'd like to keep it that way.)
I could override the setters in the task object (and all my other objects), but I would prefer not to publish until the object is saved. The object to be saved doesn't seem to have the old values (that I can find), so I can't base my publish on that.
So far the best answer I've come up with from a design standpoint is to tweak one line of dao.js to add the saved values to the returned object, and use that in the hook:
self.__factory.runHooks('after' + hook, _.extend({}, result.values, {savedVals: args[2]} ), function(err, newValues) {
Task.hook('afterUpdate', function (task, fn) {
  Object.keys(task.savedVals).forEach(function publishValue(key) {
    pubSub.publish('Task:' + task.id + '#' + key, task[key]);
  });
  return fn();
});
Obviously changing the Sequelize library is not ideal from a maintenance standpoint.
So my question is twofold: is there a better way to get the needed information to my hook without modifying dao.js, or is there a better way to attack my fundamental requirement?
Thanks in advance!
There is not, currently. When implementing exactly what you describe, we simply had to add logic to compare old and new values and, if they differed, assume the field had changed.
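A sketch of that compare-old-vs-new approach, snapshotting values at the call site before updating (names and the old .success() callback style are illustrative):

function updateAndPublish(task, changes) {
  var previous = {};
  Object.keys(changes).forEach(function (key) {
    previous[key] = task[key]; // snapshot before the update
  });
  return task.updateAttributes(changes, Object.keys(changes)).success(function (updated) {
    Object.keys(changes).forEach(function (key) {
      if (previous[key] !== updated[key]) {
        pubSub.publish('Task:' + updated.id + '#' + key, updated[key]);
      }
    });
  });
}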
I have a Breeze web api controller, with methods that accept parameters and do some work, filtering, sorting, etc, on the server.
In querySucceeded, I'd like to do further querying on data.results. Is there a way to accomplish this? I got this working by exporting/importing data.results to a local manager and doing the projection from there. The projection is needed in order to use the observable collection in a vendor grid control.
var query = datacontext.EntityQuery.from("GetActiveCustomers")
    .withParameters({ organizationID: "1" })
    .toType("Customer")
    .expand("Organization")
    .orderBy('name');
var queryProjection = query
    .select("customerID, organizationID, name, isActive, organization.name");
return manager.executeQuery(query)
    .then(querySucceeded)
    .fail(queryFailed);
function querySucceeded(data) {
  var exportData = manager.exportEntities(data.results);
  var localManager = breeze.EntityManager.importEntities(exportData);
  var resultProjection = localManager.executeQueryLocally(queryProjection);
  // This is the way I came up with to query data.results (exporting/importing the result entities to a local manager).
  // Is there a better way to do this? Querying data.results directly, e.g. data.results.where(...).select("customerID, organizationID...")
  if (collectionObservable) {
    collectionObservable(resultProjection);
  }
  log('Retrieved Data from remote data source', data, true);
}
You've taken an interesting approach. Normally a projection returns uncacheable objects, not entities. But you cast these to Customer (with the toType clause), which means you've created PARTIAL Customer entities with missing data.
I must hope you know what you are doing and have no intention of saving changes to these customer entities while they remain partial else calamity may ensue.
Note that when you imported the selected Customers to the "localManager" you did not bring along their related Organization entities. That means an expression such as resultProjection[0].organization will return null. That doesn't seem correct.
I understand that you want to hold on to a subset of the Customer partial entities and that there is no local query that could select that subset from cache because the selection criteria are only fully known on the server.
I think I would handle this need differently.
First, I would bury all of this logic inside the DataContext itself; the purpose of a DataContext is to encapsulate the details of data access so that callers (such as ViewModels) don't have to know internals. The DataContext is an example of the UnitOfWork (UoW) pattern, an abstraction that helps isolate the data access/manipulation concerns from ViewModel concerns.
Then I would store it either in a named array of the DataContext (DC) or of the ViewModel (VM), depending upon whether this subset was of narrow or broad interest in the application.
If only the VM instance cares about this subset, then the DC should return the data.results and let the VM hold them.
I do not understand why you are re-querying a local EntityManager for this set, nor why your local query is ALSO applying a projection ... which would return non-entity data objects to the caller. What is wrong with returning the (partial) Customer entities?
It seems you intend to further filter the subset on the client. Hey ... it's a JavaScript array. You can call stuffArray.filter(filterFunction).
Sure that doesn't give you the Breeze LINQ-like query syntax ... but do you really need that? Why do you need ".select" over that set?
If that REALLY is your need, then I guess I understand why you're dumping the results into a separate EntityManager for local use. In that case, I believe you'll need more code in your query callback method to import the related Organization entities into that local EM so that someCustomer.organization returns a value. The ever-increasing trickiness of this approach makes me uncomfortable but it is your application.
If you continue down this road, I strongly encourage you to encapsulate it either in the DC or in some kind of service class. I wouldn't want my VMs to know about any of these shenanigans.
Best of luck.
Update 3 Oct 2013: Local cache query filtering on unmapped property
After sleeping on it, I have another idea for you that eliminates your need for a second EM in this use case.
You can add an unmapped property to the client-side Customer entity and set that property with a subset marker after querying the "GetActiveCustomers" endpoint on the server; you'd set the marker in the query callback.
Then you can compose a local query that filters on the marker value to ensure you only consider Customer objects from that subset.
Reference the marker value only in local queries. I don't know if a remote query filtering on the marker value will fail or simply ignore that criterion.
You won't need a separate local EntityManager; the Customer entities in your main manager carry the evidence of the server-side filtering. Of course the server will never have to deal with your unmapped property value.
Yes, a breeze local query can target unmapped properties as well as mapped properties.
Here's a small demonstration. Register a custom constructor like this:
function Customer() { /* Custom constructor ... which you register with the metadataStore*/
// Add unmapped 'subset' property to be queried locally.
this.subset = Math.ceil(Math.random() * 3); // simulate values {1..3}
}
Later you query it locally. Here are examples of queries that do and do not reference that property:
// All customers in cache
var x = breeze.EntityQuery.from("Customers").using(manager).executeLocally();
// All customers in cache whose unmapped 'subset' property === 1.
var y = breeze.EntityQuery.from("Customers")
    .where("subset", 'eq', 1) // more criteria; knock yourself out
    .using(manager).executeLocally();
I trust you'll know how to set the subset property appropriately in your callback to your "GetActiveCustomers" query.
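For completeness, one hedged way that callback might set the marker (this assumes Knockout as the model library, so the unmapped property is an observable):

return manager.executeQuery(query).then(function (data) {
  data.results.forEach(function (customer) {
    customer.subset(1); // tag the entity as part of the server-filtered subset
  });
  return data.results;
});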
HTH
Once you have queried for some data, Breeze stores those entities in local memory.
All you have to do is query locally when you need to filter the data some more.
You do this by telling the manager to query locally:
manager.executeQueryLocally(query);
Because querying from the database is done asynchronously you have to make sure that you retrieve from the local memory only if there is something there. Follow the promises.
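A small sketch of that flow (entity and property names are illustrative):

var query = breeze.EntityQuery.from('Customers');
manager.executeQuery(query).then(function () {
  // entities are now cached; refine the filter without another server round trip
  var active = manager.executeQueryLocally(query.where('isActive', 'eq', true));
});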
Y'all, I have a bit of a structural/procedural question for you.
So I have a pretty simple Ember app, trying to use ember-data, and I'm just not sure if I'm 'doing it right'. The user hits my index template, I grab their location coordinates and encode a hash of them (that part works). Then on my server I have a db that stores 'tiles' named after their hash'd coords (if I hit my #/tiles/H1A2S3H4E5D route I get back properly formatted JSON).
What I would like to happen next is to display each of the returned tiles to the user at the bottom of the first page (like in a partial, maybe? if Handlebars does that).
I have a DS.Model for the tiles; if I hard-code the hash'd coords into an App.find(H1A2S3H4E5D); I can see my server properly responding to the query. However, I cannot seem to figure out how to access the returned JSON object, or how to display it to the user.
I did watch a few tutorial videos but they all seem to be outdated with the old router.
Mainly I would like to know:
1. Where does the information returned by App.find() live, and how do I access it?
2. What is the 'correct' way to structure my templates/views to handle this?
3. How should I pass that id (the hash'd coords) to App.find()? As a global variable? Or is there a better way?
The biggest problem (to me) seems to be that the id I search by doesn't exist until the user hits the page the first time (since it's dynamically generated), so I can't just grab it when the page loads.
I can post a fiddle if required, but I'm looking for more of a conceptual/instructional answer rather than someone to just write my code for me.
I'm still learning a lot with Ember as well, but this is my understanding. When you follow the guides and the tutorials out there, you'll have something like this:
App.TileController = Ember.ObjectController.extend();

App.TileRoute = Ember.Route.extend({
  setupController: function(controller) {
    controller.set('content', App.Tile.find(MYHASH));
  }
});
What it does is set the special content object to the result. Since we're declaring an object controller and calling find with a parameter, Ember knows that a single result is expected, so a view & template that follow the naming convention of Tile will be loaded. And in there you can access properties on the Tile object:
<p>{{lat}}</p><p>{{lng}}</p>
I have to admit that this feels a bit mystical at times. The core to it is all in the naming convention. You need to be pretty specific in how you name all your various controllers, routes, etc. Once that's nailed down, it's a matter of binding what data you want to the controller's content.
1) Aside from the generic answer of "in memory", the .find() results live wherever you return them to. Generally speaking, this is meant to be set on a 'content' property of a controller.
2) I more or less answered this, but generally speaking you take the name of your route, and base it off that. So for a route TileRoute, you have:
TileController = Ember.ObjectController.extend
Tile = DS.Model.extend
TileView = Ember.View.extend
tile.handlebars
I generally store all my handlebars files in a templates/ folder. If you nest them deeper, just specify the path in your view object:
App.TileView = Ember.View.extend({
  templateName: "tiles/show"
});
3) This really depends on your app. Generally speaking it's better for the id to be either obtained from the URL or constructed locally in a function. Since you are encoding a hash, I imagine you're doing this in a function and then calling find. I do something a bit similar for an ArrayController.
I don't know at what point you are generating the hash, so let's say it's on load. You should be able to generate the hash right in the setupController function.
App.TileRoute = Ember.Route.extend({
  generateHashBasedOnCoords: function() {
    // ...
  },
  setupController: function(controller) {
    var MYHASH = this.generateHashBasedOnCoords();
    controller.set('content', App.Tile.find(MYHASH));
  }
});
I hope that helps.
I believe you can make use of the data binding in Ember: basically, have an array controller for tiles and set its content initially to an empty array. Then, when you get back your response, do an App.find() and set the content of the tiles controller with the data that is returned. This should update the view through the data binding. (Very high level response; a rough sketch follows below.)
The data itself is stored in a store that is set up with ember-data. You access it with the same model methods you are already using (App.Tile.find(), etc.). It checks whether the needed data is in the store; if so it returns the data, otherwise it makes a call to the API to get it.
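A rough sketch of that binding approach, reusing the find pattern from the other answers (names are illustrative):

App.TilesController = Ember.ArrayController.extend();

// start with an empty array, then replace it once the hash is known;
// the template updates automatically through data binding
controller.set('content', []);
// ... later, when the hash has been computed:
controller.set('content', App.Tile.find(hash));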
I've run into a headache with Backbone. I have a collection of specified records, which have subrecords. For example: surgeons have scheduled procedures, procedures have equipment, and some equipment has consumable needs (gasses, liquids, etc). If I have a Backbone collection of surgeons, then each surgeon has a model -- but his procedures and equipment and consumables will all be plain ol' JavaScript arrays and objects after being unpacked from JSON.
I suppose I could, in the SurgeonsCollection, use the parse() to make new ProcedureCollections, and in turn make new EquipmentCollections, but after a while this is turning into a hairball. To make it sensible server-side there's a single point of contact that takes one surgeon and his stuff as a POST-- so propagating the 'set' on a ConsumableModel automagically to trigger a 'save' down the hierarchy also makes the whole hierarchical approach fuzzy.
Has anyone else encountered a problem like this? How did you solve it?
This can be helpful in your case: https://github.com/PaulUithol/Backbone-relational
You specify the relations (1:1, 1:n, n:n) and it will parse the JSON accordingly. It also creates a global store to keep track of all records.
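For example, a sketch of how a relation might be declared for this case (model names are illustrative):

var Surgeon = Backbone.RelationalModel.extend({
  relations: [{
    type: Backbone.HasMany,
    key: 'procedures',                    // the attribute holding the raw array
    relatedModel: 'Procedure',            // each element becomes a Procedure model
    collectionType: 'ProcedureCollection' // the array becomes a real collection
  }]
});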
So, one way I solved this problem is by doing the following:
Have all models inherit from a custom BaseModel and put the following function in BaseModel:
convertToModel: function(dataType, modelType) {
  // Replace the raw attribute (a plain object or array) with a proper model/collection
  if (this.get(dataType)) {
    var map = {};
    map[dataType] = new modelType(this.get(dataType));
    this.set(map);
  }
}
Override Backbone.sync and at first let the Model serialize as it normally would:
model.set(response, { silent: true });
Then check to see if the model has an onUpdate function:
if (model.onUpdate) {
model.onUpdate();
}
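Pieced together, that override might look something like this simplified sketch:

var originalSync = Backbone.sync;
Backbone.sync = function (method, model, options) {
  var success = options.success;
  options.success = function (response) {
    if (success) success(response);        // let Backbone finish as it normally would
    model.set(response, { silent: true }); // serialize without firing change events
    if (model.onUpdate) model.onUpdate();  // then build submodels/subcollections
  };
  return originalSync(method, model, options);
};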
Then, whenever you have a model that you want to generate submodels and subcollections, implement onUpdate in the model with something like this:
onUpdate: function() {
  this.convertToModel('nameOfAttribute1', SomeCustomModel1);
  this.convertToModel('nameOfAttribute2', SomeCustomModel2);
}
I would separate out the different surgeons, procedures, equipment, etc. as different resources in your web service. If you only need to update the equipment for a particular procedure, you can update that one procedure.
Also, if you don't always need all the information, I would lazy-load data as needed, but send down fully-populated objects where needed to increase performance.