Accessing a sub-document ID after save - mongoose - javascript

When a user is created in my app, their details are saved to MongoDB using mongoose. The user schema contains sub-documents, and I am trying to access the _id of a sub-document after calling the user.save function.
The schema is below:
{
  name: String,
  email: String,
  address: String,
  phone: [
    {
      landLine: Number,
      mobile: Number
    }
  ]
}
I can access the name, email and address easily like so:
console.log(user.name + user.email + user.address)
I tried user.phone._id but it returns undefined, I think because phone is an array of objects.
user.save(function(err) {
  if (err)
    throw err;
  else {
    console.log("user ID " + user._id); // SUCCESS!!
    console.log("user sub-document ID " + user.phone._id); // UNDEFINED!!
    return (null, user);
  }
});
How can I access the _id of the sub-document inside the save function right after the user is created and saved into mongoDB?

There are a couple of approaches to getting this information, but personally I prefer the "atomic" modification method using $push.
The implementation here is helped by mongoose automatically assigning each new array member an ObjectId, which is "monotonic" and therefore always increasing in value. This means the method for handling this even works with a $sort modifier applied to the $push.
For example:
// Array of objects to add
var newNumbers = [
  { "landline": 55555555, "mobile": 999999999 },
  { "landline": 44455555, "mobile": 888888888 }
];

User.findOneAndUpdate(
  { "email": email },
  { "$push": { "phone": { "$each": newNumbers } } },
  { "new": true },
  function(err, user) {
    // The trick is to sort() a copy on `_id` and just take the
    // last added items, equal to the length of the input
    var lastIds = user.phone.concat().sort(function(a, b) {
      return a._id > b._id ? 1 : -1;
    }).slice(-newNumbers.length);
  }
)
And even if you used a $sort modifier:
User.findOneAndUpdate(
  { "email": email },
  { "$push": { "phone": { "$each": newNumbers, "$sort": { "landline": 1 } } } },
  { "new": true },
  function(err, user) {
    var lastIds = user.phone.concat().sort(function(a, b) {
      return a._id > b._id ? 1 : -1;
    }).slice(-newNumbers.length);
  }
)
That little trick of "sorting" a temporary copy on the _id value means that the "newest" items are always at the end. And you just need to take as many off the end as you added in the update.
The arguable point here is that it's actually mongoose that is inserting the _id values in the first place. So in fact those are being submitted in the request made to the server for each array item.
You "could" get fancy and use "hooks" to record those ObjectId values that were actually added to the new array members in the update statement. But it's really just a simple process of returning the last n "greatest" _id values from the array items anyway, so the more complex approach is not needed.

Related

Mongoose - Deleting documents is unresponsive

I'm trying to use Mongoose (MongoDB JS library) to create a basic database, but I can't figure out how to delete the documents / items, I'm not sure what the technical term for them is.
Everything seems to work fine, when I use Item.findById(result[i].id), it returns a valid id of the item, but when I use Item.findByIdAndDelete(result[i].id), the function doesn't seem to start at all.
This is a snippet of the code that I have (sorry in advance for the bad indentation):
const testSchema = new schema({
  item: {
    type: String,
    required: true
  },
  detail: {
    type: String,
    required: true
  },
  quantity: {
    type: String,
    required: true
  }
})

const Item = mongoose.model("testitems", testSchema)

Item.find()
  .then((result) => {
    for (i in result) {
      Item.findByIdAndDelete(result[i].id, function(err, result) {
        if (err) {
          console.log(err)
        }
        else {
          console.log("Deleted " + result)
        }
      })
    }
    mongoose.connection.close()
  })
  .catch((err) => {
    console.log(err)
  })
I'm not sure what I'm doing wrong, and I haven't been able to find anything on the internet.
Any help is appreciated, thanks.
_id is a special field on MongoDB documents that by default is of type ObjectId. Mongoose creates this field for you automatically. So a sample document in your testitems collection might look like:
{
  _id: ObjectId("..."),
  item: "xxx",
  detail: "yyy",
  quantity: "zzz"
}
However, you are retrieving this value with id. The reason you get a value back even though the field is called _id is that Mongoose creates a virtual getter for id:
Mongoose assigns each of your schemas an id virtual getter by default which returns the document's _id field cast to a string, or in the case of ObjectIds, its hexString. If you don't want an id getter added to your schema, you may disable it by passing this option at schema construction time.
The key takeaway is that when you get this value with id it is a string, not an ObjectId. Because the types don't match, MongoDB will not delete anything.
To make sure the values and types match, you should use result[i]._id.
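For illustration, a minimal sketch of the loop with that change applied, and with the deletes awaited before the connection is closed (assuming a Node/mongoose version that supports async/await):
Item.find()
  .then(async (result) => {
    for (const doc of result) {
      // _id is the actual ObjectId, so the types match for the delete
      const deleted = await Item.findByIdAndDelete(doc._id);
      console.log("Deleted " + deleted);
    }
    mongoose.connection.close();
  })
  .catch((err) => {
    console.log(err);
  });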

Model.updateOne() correct syntax [duplicate]

Consider I have this document in my MongoDB collection, Workout:
{
  _id: ObjectId("60383b491e2a11272c845749"),   <--- Workout ID
  user: ObjectId("5fc7d6b9bbd9473a24d3ab3e"),  <--- User ID
  exercises: [
    {
      _id: ObjectId("..."),          <--- Exercise ID
      exerciseName: "Bench Press",
      sets: [
        {
          _id: ObjectId("...")       <--- Set ID
        },
        {
          _id: ObjectId("...")       <--- Set ID
        }
      ]
    }
  ]
}
The Workout object can include many exercise objects in the exercises array and each exercise object can have many set objects in the sets array. I am trying to implement a delete functionality for a certain set. I need to retrieve the workout that the set I want to delete is stored in. I have access to the user's ID (stored in a context), exercise ID and the set ID that I want to delete as parameters for the .findOne() function. However, I'm not sure whether I can traverse through the different levels of arrays and objects within the workout object. This is what I have tried:
const user = checkAuth(context) // Gets logged in user details (id, username)
const exerciseID, setID // Both of these are passed in already and are set to the appropriate values
const workoutLog = Workout.findOne({
  user: user.id,
  exercises: { _id: exerciseID }
});
This returns an empty array but I am expecting the whole Workout object that contains the set that I want to delete. I would like to omit the exerciseID from this function's parameters and just use the setID, but I'm not sure how to traverse through the array of objects to access its value. Is this possible or should I be going about this another way? Thanks.
When matching against an array, if you specify the query like this:
{ exercises: { _id: exerciseID } }
MongoDB tries to do an exact match on the document. So in this case, MongoDB would only match documents in the exercises array of the exact form { _id: ObjectId("...") }. Because documents in the exercises array have other fields, this will never produce a match, even if the _ids are the same.
What you want to do instead is query a field of the documents in the array. The complete query document would then look like this:
{
  user: user.id,
  "exercises._id": exerciseID
}
You can perform both find and update in one step. Try this:
db.Workout.updateOne(
  {
    "user": ObjectId("5fc7d6b9bbd9473a24d3ab3e"),
  },
  {
    $pull: {
      "exercises.$[exercise].sets": {
        "_id": ObjectId("6039709fe0c7d52970d3fa30") // <--- Set ID
      }
    }
  },
  {
    arrayFilters: [
      {
        "exercise._id": ObjectId("6039709fe0c7d52970d3fa2e") // <--- Exercise ID
      }
    ]
  }
);
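Roughly the same operation through Mongoose, as a sketch using the user.id, exerciseID and setID variables from the question in place of the hard-coded ObjectIds (assuming Workout is the compiled model):
Workout.updateOne(
  { user: user.id },
  { $pull: { "exercises.$[exercise].sets": { _id: setID } } },
  { arrayFilters: [{ "exercise._id": exerciseID }] }
)
  .then(() => console.log("Set removed"))
  .catch((err) => console.log(err));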

What is the best way to keep track of changes of a document's property in MongoDB?

I would like to know how to keep track of the values of a document in MongoDB.
It's a MongoDB Database with a Node and Express backend.
Say I have a document, which is part of the Patients collection.
{
  "_id": "4k2lK49938d82kL",
  "firstName": "John",
  "objective": "Burn fat"
}
Then I edit the "objective" property, so the document results like this:
{
  "_id": "4k2lK49938d82kL",
  "firstName": "John",
  "objective": "Gain muscle"
}
What's the best/most efficient way to keep track of that change? In other words, I would like to know that the "objective" property had the value "Burn fat" in the past, and access it in the future.
Thanks a lot!
Maintaining/tracking history in the same document is not recommended at all. The document size will keep on increasing, leading to:
- hitting the 16 MB document size limit if there are too many updates
- degraded performance
Instead, you should maintain a separate collection for history. You might have used Hibernate's Javers or Envers for auditing in relational databases; if not, you can check how they work. A separate table (xyz_AUD) is maintained for each table (xyz). For each row (with primary key abc) in the xyz table, there exist multiple rows in the xyz_AUD table, where each row is a version of that row.
Moreover, Javers also supports MongoDB auditing. If you are using Java you can use it directly; no need to write your own logic.
Refer - https://nullbeans.com/auditing-using-spring-boot-mongodb-and-javers/
One more thing: Javers, Envers and Hibernate are Java libraries, but similar libraries exist for other programming languages as well.
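As a rough sketch of that xyz/xyz_AUD idea in Mongoose terms (the schema and field names here are illustrative assumptions, not a library API):
// assumes: const mongoose = require('mongoose'); const { Schema } = mongoose;
const patientAuditSchema = new Schema({
  patientId: { type: Schema.Types.ObjectId, ref: "Patient", required: true },
  field:     { type: String, required: true },  // e.g. "objective"
  oldValue:  String,
  newValue:  String,
  changedAt: { type: Date, default: Date.now }
});
const PatientAudit = mongoose.model("PatientAudit", patientAuditSchema);

// whenever the patient document is updated, also record a version row
PatientAudit.create({
  patientId: patient._id,        // patient is the document being updated
  field: "objective",
  oldValue: "Burn fat",
  newValue: "Gain muscle"
}).catch(console.log);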
There are mongoose plugins as well:
https://www.npmjs.com/package/mongoose-audit (quite outdated, around 4 years old)
https://github.com/nassor/mongoose-history#readme (better)
Maybe you can change the type of "objective" to an array and track the changes in it; the last element of the array is the latest value.
Maintain it as a sub-document like below
{
  "_id": "4k2lK49938d82kL",
  "firstName": "John",
  "objective": {
    obj1: "Gain muscle",
    obj2: "Burn fat"
  }
}
You can also maintain it as an array field, but remember that MongoDB doesn't allow you to maintain uniqueness in an array field, and if you plan to index the "objective" field you'll have to create a multikey index.
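For reference, a tiny sketch of that index, assuming a Mongoose schema named patientSchema in which "objective" is an array field; MongoDB builds a multikey index automatically when the indexed field holds arrays:
// one index entry is created per array element (multikey index)
patientSchema.index({ objective: 1 });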
I think the simplest solution would be to use and update an array:
const patientSchema = new Schema({
  firstName: { type: String, required: true },
  lastName: { type: String, required: true },
  objective: { type: String, required: true },
  notes: [{
    date: { type: Date, default: Date.now },
    note: { type: String, required: true }
  }],
});
Then when you want to update the objective...
const updatePatientObjective = async (req, res) => {
  try {
    // check if _id and new objective exist in req.body
    const { _id, objective, date } = req.body;
    if (!_id || !objective) throw "Unable to update patient's objective.";

    // make sure provided _id is valid
    const existingPatient = await Patient.findOne({ _id });
    if (!existingPatient) throw "Unable to locate that patient.";

    // pull out objective as previousObjective
    const { objective: previousObjective } = existingPatient;

    // update patient's objective while pushing
    // the previous objective into the notes sub document
    await existingPatient.updateOne({
      // update current objective
      $set: { objective },
      // push an object with a date and note (previousObjective)
      // into the notes array
      $push: {
        notes: {
          date,
          note: previousObjective
        },
      },
    });

    // send back response
    res
      .status(201)
      .json({ message: "Successfully updated your objective!" });
  } catch (err) {
    return res.status(400).json({ err: err.toString() });
  }
};
Document will look like:
{
  firstName: "John",
  lastName: "Smith",
  objective: "Lose body fat.",
  notes: [
    {
      date: 2019-07-19T17:45:43-07:00,
      note: "Gain muscle"
    },
    {
      date: 2019-08-09T12:00:38-07:00,
      note: "Work on cardio."
    },
    {
      date: 2019-08-29T19:00:38-07:00,
      note: "Become a fullstack web developer."
    }
    ...etc
  ]
}
Alternatively, if you're worried about document size, then create a separate schema for patient history and reference the user's id (or just store the patient's _id as a string instead of referencing an ObjectId, whichever you prefer):
const patientHistorySchema = new Schema({
  _id: { type: Schema.Types.ObjectId, ref: "Patient", required: true },
  objective: { type: String, required: true }
});
Then create a new patient history document when the objective is updated...
PatientHistory.create({ _id, objective: previousObjective });
And if you need to access to the patient history documents...
PatientHistory.find({ _id });

Push "programmatic" array to an Object's array property

I'm building a Thesaurus app, and for this question the key point is that I'm adding a list of synonyms (words that have the same meaning) for a particular word (e.g. "feline", "tomcat", and "puss" are synonyms of "cat").
I have a Word object with a property, "synonyms", which is an array.
I'm going to add an array of synonyms to the Word's synonyms property.
According to the MongoDB documentation (see here), the only way to append all the elements of an array to a document's array property at once is the following:
db.students.update(
  { _id: 5 },
  {
    $push: {
      quizzes: {
        $each: [ { wk: 5, score: 8 }, { wk: 6, score: 7 }, { wk: 7, score: 6 } ],
      }
    }
  }
)
Let's re-write that solution to suit my data, before we venture further.
db.words.update(
  { baseWord: 'cat' },
  {
    $push: {
      synonyms: {
        $each: [ { _id: 'someValue', synonym: 'feline' }, { _id: 'someValue', synonym: 'puss' }, { _id: 'someValue', synonym: 'tomcat' } ],
      }
    }
  }
)
Nice and concise, but not what I'm trying to do.
What if you don't know your data beforehand and have a dynamic array which you'd like to feed in?
My current solution is to split up the array and run a forEach() loop, resulting in an array being appended to the Word object's synonyms array property like so:
//req.body.synonym = 'feline,tomcat,puss';
var individualSynonyms = req.body.synonym.split(',');

individualSynonyms.forEach(function(synonym) {
  db.words.update(
    { "_id": 5 },
    { $push: // this is the Word.synonyms
      { synonyms:
        {
          $each: [{ // pushing each synonym as a Synonym object
            uuid : uuid.v4(),
            synonym: synonym,
          }]
        }
      }
    },
    { upsert : true },
    function(err, result) {
      if (err) {
        res.json({ success:false, message:'Error adding base word and synonym, try again or come back later.' });
        console.log("Error updating word and synonym document");
      }
      // using an 'else' clause here will flag a "multiple headers" error due to multiple JSON messages being returned
      // because of the forEach loop
      /*
      else {
        res.json({ success:true, message:'Word and synonyms added!' });
        console.log("Update of Word document successful, check document list");
      }
      */
    });

  // if each insert happens, we reach here
  if (!err) {
    res.json({ success:true, message:'Word and synonyms added!.' });
    console.log("Update of Word document successful, check document list");
  }
});
This works as intended, but you may notice an issue at the bottom, where there's a commented-out ELSE clause and a check for 'if(!err)'.
If the ELSE clause is executed, we get a "multiple headers" error because the loop causes multiple JSON responses for a single request.
As well as that, 'if(!err)' will throw an error, because it doesn't have access to the 'err' parameter of the callback from the .update() function.
If there were a way to avoid using a forEach loop and directly feed the array of synonyms into a single update() call, then I could make use of if(!err) inside the callback.
You might be thinking: "Just remove the 'if(!err)' clause", but it seems unclean to send a JSON response without some sort of final error check beforehand, whether an if, else, or else if.
I could not find this particular approach in the documentation or on this site, and to me it seems like best practice if it can be done, as it allows you to perform a final error check before sending the response.
I'm curious about whether this can actually be done.
I'm not using the console, but I included a namespace prefix before calling each object for easier reading.
There is no need to "iterate" since $each takes an "array" as the argument. Simply .map() the array produced by .split() with the additional data:
db.words.update(
  { "_id": 5 },
  { $push: {
      synonyms: {
        $each: req.body.synonym.split(',').map(synonym =>
          ({ uuid: uuid.v4(), synonym })
        )
      }
  }},
  { upsert : true },
  function(err, result) {
    if (!err) {
      res.json({ success:true, message:'Word and synonyms added!.' });
      console.log("Update of Word document successful, check document list");
    }
  }
);
So .split() produces an "array" from the string, which you "transform" using .map() into an array of objects containing the uuid value and the "synonym" from the elements of .split(). This is then a direct "array" to be applied with $each to the $push operation.
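For illustration, feeding the question's example string through that pipeline produces an array ready to hand to $each (the uuid values will of course differ on every run; uuid here is the same module used in the question):
var docs = 'feline,tomcat,puss'.split(',').map(synonym =>
  ({ uuid: uuid.v4(), synonym })
);
// -> [ { uuid: '...', synonym: 'feline' },
//      { uuid: '...', synonym: 'tomcat' },
//      { uuid: '...', synonym: 'puss' } ]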
One request.

Is there a way to find a document matching two different populates and get that document with findOne()?

I'm using mongoose with the MongoDB/Node.js combo. I would like to findOne() a doc with some conditions.
Here is my Schema:
var prognosticSchema = new Schema({
  userRef : { type : Schema.Types.ObjectId, ref : 'users'},
  matchRef : { type : Schema.Types.ObjectId, ref : 'match'},
  ...
});
The 'users' model schema contains a String 'email' and the 'match' model contains a Number 'id_match', like this:
var userSchema = new Schema({
  email: String,
  ...
});
then
var matchSchema = new Schema({
  id_match: {type: Number, min: 1, max: 51},
  ...
});
My goal is to findOne() the one doc whose matchRef has the given id_match and whose userRef has email equal to req.headers['x-key'].
I tried this:
var prognoSchema = require('../db_schema/prognostic'); // require prognostics
require('../db_schema/match'); // require match to be able to populate

var prognoQuery = prognoSchema.find()
  .populate({
    path: 'userRef', // populate userRef
    match : {
      'email' : req.headers['x-key'] // populate where email matches the email in the request headers (I'm using Express as node module)
    },
    select : 'email pseudo'
  });

prognoQuery.findOne() // search for only one doc
  .populate({
    path: 'matchRef', // populate match
    match: {
      'id_match': id_match // populate match where id_match is correct
    }})
  .exec(function(err, data) {
    ... // Return of value as response ...
  });
When I run this code and try to get the right document, knowing that there are many other prognostic documents for other users and matches in my database, I get userRef as null and the correct matchRef in my data document.
In my database there are other users and other id_match values, but I would like to get the right document in findOne(), helped by these two ObjectIds in my schema.
Is there a way to findOne() a document matching two different populates and get that document from findOne()?
Well you can include "both" populate expressions in the same query, but of course since you actually want to "match" on the properties contained in "referenced" collections this does mean that the actual data returned from the "parent" would need to look at "all parents" first in order to populate the data:
prognoSchema.find()
  .populate([
    {
      "path": "userRef",
      "match": { "email": req.headers['x-key'] }
    },
    {
      "path": "matchRef",
      "match": { "id_match": id_match }
    }
  ]).exec(function(err, data) {
    /*
      data contains the whole collection since there was no
      condition there. But populated references that did not
      match are now null. So .filter() them:
    */
    data = data.filter(function(doc) {
      return ( doc.userRef != null && doc.matchRef != null );
    });
    // data now contains only those item(s) that matched
  })
That is not ideal, but it's just how using "referenced" data works.
A better approach would be to search the other collections "individually" for their single match, and then supply the found _id values to the "parent" collection. A little help from async.parallel here facilitates waiting on the results of the other queries before executing on the parent with the matched values. It can be done in various ways, but this looks relatively clean:
// assumes async has been required, e.g. var async = require('async');
async.parallel(
  {
    "userRef": function(callback) {
      User.findOne({ "email": req.headers['x-key'] }, callback);
    },
    "id_match": function(callback) {
      Match.findOne({ "id_match": id_match }, callback);
    }
  },
  function(err, result) {
    prognoSchema.findOne({
      "userRef": result.userRef._id,
      "matchRef": result.id_match._id
    }).populate([
      { "path": "userRef", "match": { "email": req.headers['x-key'] } },
      { "path": "matchRef", "match": { "id_match": id_match } }
    ]).exec(function(err, progno) {
      // Matched and populated data only
    })
  }
)
As an alternate, in modern MongoDB releases from 3.2 and onwards you could use the $lookup aggregation operator instead:
prognoSchema.aggregate(
  [
    // $lookup the userRef data
    { "$lookup": {
      "from": "users",
      "localField": "userRef",
      "foreignField": "_id",
      "as": "userRef"
    }},
    // target is an array always so $unwind
    { "$unwind": "$userRef" },
    // Then filter out anything that does not match
    { "$match": {
      "userRef.email": req.headers['x-key']
    }},
    // $lookup the matchRef data
    { "$lookup": {
      "from": "matches",
      "localField": "matchRef",
      "foreignField": "_id",
      "as": "matchRef"
    }},
    // target is an array always so $unwind
    { "$unwind": "$matchRef" },
    // Then filter out anything that does not match
    { "$match": {
      "matchRef.id_match": id_match
    }}
  ],
  function(err, prognos) {
    // prognos now contains only the matching documents
  }
)
But again similarly ugly since the "source" is still selecting everything and you are only gradually filtering out results after each $lookup operation.
The basic premise here is "MongoDB does not 'really' perform joins", and neither is .populate() a "JOIN", but just additional queries on the related collections. Since this is "not" a "join" there is no way to filter out the "parent" until the actual related data is retrieved. Even if it's done on the "server" via $lookup rather than on the "client" via .populate()
So if you "must" query this way, it's generally better to query the other collections for results "first" and then match the "parent" based on the matching _id property values as references.
But the other case here is that you "should" consider "embedding" the data instead, where it is your intent to "query" on those properties. Only when that data resides in a "single collection" is it possible for MongoDB to query and match those conditions with a single query and a performant operation.
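As a rough sketch of that embedding suggestion, you could denormalize the two fields you query on into the prognostic document itself; the field and model names below are illustrative assumptions, not something from the question:
var prognosticSchema = new Schema({
  userRef  : { type : Schema.Types.ObjectId, ref : 'users'},
  matchRef : { type : Schema.Types.ObjectId, ref : 'match'},
  userEmail: String,   // denormalized copy of users.email
  matchId  : Number    // denormalized copy of match.id_match
});

var Prognostic = mongoose.model('prognostics', prognosticSchema);

// a single query now matches both conditions without populate or $lookup
Prognostic.findOne(
  { userEmail: req.headers['x-key'], matchId: id_match },
  function(err, progno) {
    // one document, one round trip
  }
);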
