Is it possible to remove elements from multiple subarrays inside of one big array? My structure looks something like this:
{
"_id": {
"$oid": ""
},
"users": [
{
"friends": [
"751573404103999569"
]
},
{
"friends": [
"220799458408005633"
]
}
]
}
I have a friend id and I need to remove it from all of the "friends" arrays inside the "users" array.
You can do this with the all-positional operator $[] as follows:
db.no_more_friends.update({ "users.friends":"the_friend_id" },{ $pull:{"users.$[].friends":"the_friend_id"}} ,{multi:true})
Just take into consideration that with {multi:true} the removal will be applied to all "friends" sub-arrays in every document where "the_friend_id" is found.
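In plain JavaScript terms, the update walks every element of users and pulls the id from that element's friends array. A minimal sketch of the effect on one document (the shape is taken from the question; this is only an illustration, not what the server executes):

```javascript
// Sketch of what $pull combined with the all-positional operator $[] does,
// expressed as a plain JavaScript transformation of a single document.
function pullFriend(doc, friendId) {
  doc.users.forEach(function (user) {
    // $pull removes every occurrence of friendId from the array
    user.friends = user.friends.filter(function (id) {
      return id !== friendId;
    });
  });
  return doc;
}

var doc = {
  users: [
    { friends: ["751573404103999569"] },
    { friends: ["220799458408005633", "751573404103999569"] }
  ]
};

pullFriend(doc, "751573404103999569");
// doc.users[0].friends is now [] and
// doc.users[1].friends is ["220799458408005633"]
```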
I have a document that holds lists containing nested objects. The document simplified looks like this:
{
"username": "user",
"listOne": [
{
"name": "foo",
"qnty": 5
},
{
"name": "bar",
"qnty": 3
}
],
"listTwo": [
{
"id": 1,
"qnty": 13
},
{
"id": 2,
"qnty": 9
}
]
}
And I need to update the quantity in both lists based on an identifier. For list one it was easy. I was doing something like this:
db.collection.findOneAndUpdate(
{
"username": "user",
"listOne.name": name
},
{
$inc: {
"listOne.$.qnty": qntyChange,
}
}
)
Then, whenever the find failed because no object in the list had that name and nothing was updated, I would catch it and run a second operation with $push. Since that is the rarer case, making two queries against the collection didn't bother me.
But now I had to also add list two to the document. And since the identifiers are not the same I would have to query them individually. Meaning four searches in the database collection, in the worst case scenario, if using the same strategy I was using before.
So, to avoid this, I wrote an update using an aggregation pipeline. What it does is:
1) Look if there is an object in list one with the queried identifier.
2) If true, map over the entire array and:
2.1) Return the object unchanged when the identifier is different.
2.2) Return the object with the quantity changed when the identifier matches.
3) If false, push a new object with this identifier to the list.
4) Repeat for list two.
This is the pipeline for list one:
db.coll1.updateOne(
{
"username": "user"
},
[{
"$set": {
"listOne": {
"$cond": {
"if": {
"$in": [
name,
"$listOne.name"
]
},
"then": {
"$map": {
"input": "$listOne",
"as": "one",
"in": {
"$cond": {
"if": {
"$eq": [
"$$one.name",
name
]
},
"then": {
"$mergeObjects": [
"$$one",
{
"qnty": {
"$add": [
"$$one.qnty",
qntyChange
]
}
}
]
},
"else": "$$one"
}
}
}
},
"else": {
"$concatArrays": [
"$listOne",
[
{
"name": name,
"qnty": qntyChange
}
]
]
}
}
}
}
}]
);
The entire pipeline can be found on this Mongo Playground.
So my question is about how efficient this is. As I am paying for server time, I would like an efficient solution to this problem. Querying the collection four times, or even just twice on every call, seems like a bad idea, as the collection will have thousands of entries. The two lists, on the other hand, are not that big and should not exceed a thousand elements each. But the way it's written, it looks like it will iterate over each list about two times.
And besides, what worries me the most: when I use $map to change the list and return the same object in the cases where the identifier does not match, does MongoDB rewrite those elements too? Not only would that increase my time on the server rewriting the entire list with the same objects, it would also count towards the byte size of my write operation, which MongoDB also charges for.
So if anyone has a better solution to this, I'm all ears.
According to this SO answer,
What you actually do inside of the document (push around an array, add a field) should not have any significant impact on the total cost of the operation
So, in your case, your array operations should not be causing a heavy impact on the total cost.
I'm writing a query, but I've reached a point where I don't know how to proceed. I have an array with, for example, two items:
//filter array
const filterArray = r.expr(['parking', 'pool'])
and I also have a table with the following records:
[
{
"properties": {
"facilities": [
"parking"
],
"name": "Suba"
}
},
{
"properties": {
"facilities": [
"parking",
"pool",
"pet friendly"
],
"name": "Kennedy",
}
},
{
"properties": {
"facilities": [
"parking",
"pool"
],
"name": "Soacha"
}
},
{
"properties": {
"facilities": [
"parking",
"pet friendly",
"GYM"
],
"name": "Sta Librada"
}
}
]
I need to filter the records using that array, but a record must contain all the items of the filter array. It's not a problem if the record has more items than the filter; as long as it contains all of them, I want that record. In this case I need all records whose facilities include both "pool" and "parking".
Current query (but it also returns records that contain only one of the items in the filter array):
r.db('aucroom').table('hosts')
.filter(host=>
host('properties')('facilities').contains(val=>{
return filterArray.contains(val2=>val2.eq(val))
})
)
.orderBy('properties')
.pluck(['properties'])
These are the results I desire, as in the image example:
If you want a strict match of two arrays (same number of elements, same order), then use .eq()
array1.eq(array2)
If you want the first array to contain all elements of the second array, then use .setIntersection(), just note array2 should contain distinct elements (a set):
array1.setIntersection(array2).eq(array2)
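In plain JavaScript terms, the setIntersection test amounts to checking that every element of the filter array appears in the record's facilities. A small illustrative sketch of that logic (not RethinkDB code, just the equivalent check):

```javascript
// Does `facilities` contain every element of `filter`?
// Mirrors array1.setIntersection(array2).eq(array2) for a distinct filter.
function containsAll(facilities, filter) {
  return filter.every(function (item) {
    return facilities.indexOf(item) !== -1;
  });
}

var filterArray = ["parking", "pool"];
containsAll(["parking", "pool", "pet friendly"], filterArray); // true
containsAll(["parking", "pet friendly", "GYM"], filterArray);  // false
```

Applied to the sample records above, only "Kennedy" and "Soacha" pass, since they are the only ones with both "parking" and "pool".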
I'm trying to write a way to update an entire MongoDB document, including subdocuments, using Mongoose and a supplied update object. I want to supply an object from my client in the shape of the schema, then iterate over it to update each property in the document, including those in nested subdocuments in arrays.
So if my schema looked like this:
const Person = new Schema({
name: String,
age: Number,
addresses: [
{
label: String,
fullAddress: String,
},
],
bodyMeasurements: {
height: Number,
weight: Number,
clothingSizes: {
jacket: Number,
pants: Number
},
},
})
I would want to supply an object to update an existing document that looked something like this:
{
"_id": "217a7f84685f49642635dff0",
"name": "Dan",
"addresses": [
{
"_id": "5f49647f84f02635df217a68",
"label": "Home 2"
},
{
"label": "Work",
"fullAddress": "6 Elm Street"
}
],
"bodyMeasurements": {
"_id": "2635df217a685f49647f84f0",
"weight": 90,
"clothingSizes": {
"_id": "217a685f4962635df49647f84f0",
"pants": 32
}
}
}
The code would need to iterate through all the keys and values of the object entries. Where it found a Mongo ID it would know to update that specific item (like the "Home 2" address label); where it didn't, it would know to add it (like the second "Work" address here), or replace it if it was a top-level property (like the "name" property with "Dan").
For a one dimensional Schema without taking into account _id's this would work:
for (const [key, value] of Object.entries(req.body)) {
person[key] = value;
}
But not for any nested subdocuments or documents in arrays. I don't want to have to specify the name of the subdocuments for each case. I am trying to find a way to update any Schema's documents and subdocuments generically.
I imagine there might be some recursion and the Mongoose .set() method needed to handle deeply nested documents.
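One way that recursion could look, as a plain JavaScript sketch (the function name and the _id-matching rules are illustrative, not a Mongoose API; a real implementation would likely call doc.set() instead of plain assignment and then save()):

```javascript
// Recursively apply `update` onto `target`.
// - Plain values are overwritten.
// - Nested objects are merged recursively.
// - Arrays of subdocuments: entries with a matching _id are merged in place,
//   entries without one (or with an unknown _id) are pushed as new items.
function deepApply(target, update) {
  for (var key in update) {
    if (key === "_id") continue; // never overwrite ids
    var value = update[key];
    if (Array.isArray(value)) {
      target[key] = target[key] || [];
      value.forEach(function (item) {
        var existing = item._id && target[key].find(function (t) {
          return String(t._id) === String(item._id);
        });
        if (existing) {
          deepApply(existing, item);  // update the matched subdocument
        } else {
          target[key].push(item);     // add as a new subdocument
        }
      });
    } else if (value !== null && typeof value === "object") {
      target[key] = target[key] || {};
      deepApply(target[key], value);  // recurse into nested objects
    } else {
      target[key] = value;            // overwrite scalar values
    }
  }
  return target;
}
```

With the example update from the question, this would change "name" to "Dan", merge "Home 2" into the address with the matching _id (keeping its fullAddress), and push the new "Work" address.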
I have an array of strings
users: ['user1', 'user2']
If I run a search looking for exactly ['user1', 'user2'] in that order, it will find that entry. However if they are back to front, the query returns nothing.
What's the best way to compare an input array against the list in the database to determine if it is a unique entry?
You can identify a unique array in a collection with the query below.
db.getCollection('mycollection').find({users: { $size: 2, $all: [ "user1" , "user2" ] }})
You need to specify the number of elements in the array with $size, and check that all the elements are present with the $all operator.
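In plain JavaScript, the same check (right length, plus every required element present) would look roughly like this illustrative sketch:

```javascript
// Plain-JS equivalent of { $size: 2, $all: ["user1", "user2"] }:
// the stored array must have exactly as many elements as required,
// and contain every required user, in any order.
function matchesUsers(users, required) {
  return users.length === required.length &&
         required.every(function (u) {
           return users.indexOf(u) !== -1;
         });
}

matchesUsers(["user1", "user2"], ["user1", "user2"]); // true
matchesUsers(["user2", "user1"], ["user1", "user2"]); // true  (order ignored)
matchesUsers(["user1", "user3"], ["user1", "user2"]); // false
```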
Using the aggregation framework with the $redact pipeline operator allows you to process the logical condition with the $cond operator and use the special operations $$KEEP to "keep" the document where the logical condition is true, or $$PRUNE to "remove" the document where the condition is false.
This operation is similar to having a $project pipeline that selects the fields in the collection and creates a new field that holds the result from the logical condition query and then a subsequent $match, except that $redact uses a single pipeline stage which is more efficient.
As for the logical condition, there are Set Operators that you can use, since they allow expressions that perform set operations on arrays, treating arrays as sets. Set expressions ignore duplicate entries in each input array and the order of the elements, which is a suitable property in your case since you want to disregard the order of the elements.
There are a couple of these operators that you can use to perform the logical condition, namely $setEquals and $setDifference.
Consider the following examples which demonstrate the above concept:
Populate Test Collection
db.collection.insert([
{ users: ['user1', 'user2'] },
{ users: ['user1', 'user2', 'user2'] },
{ users: ['user1', 'user2', 'user3'] },
{ users: ['user1', 'user3'] },
])
Example 1: $redact with $setEquals
var arr = [ "user2", "user1" ];
db.collection.aggregate([
{
"$redact": {
"$cond": [
{ "$setEquals": [ "$users", arr ] },
"$$KEEP",
"$$PRUNE"
]
}
}
])
Sample Output
/* 1 */
{
"_id" : ObjectId("5804902900ce8cbd028523d1"),
"users" : [
"user1",
"user2"
]
}
/* 2 */
{
"_id" : ObjectId("5804902900ce8cbd028523d2"),
"users" : [
"user1",
"user2",
"user2"
]
}
Example 2: $redact with $setDifference
var arr = [ "user2", "user1" ];
db.collection.aggregate([
{
"$redact": {
"$cond": [
{
"$eq": [
{ "$setDifference": [ "$users", arr ] },
[]
]
},
"$$KEEP",
"$$PRUNE"
]
}
}
])
Sample Output
/* 1 */
{
"_id" : ObjectId("5804902900ce8cbd028523d1"),
"users" : [
"user1",
"user2"
]
}
/* 2 */
{
"_id" : ObjectId("5804902900ce8cbd028523d2"),
"users" : [
"user1",
"user2",
"user2"
]
}
Another approach, though only recommended when $redact is not available, would be to use the $where operator as:
db.collection.find({
    "$where": function() {
        var arr = ["user2", "user1"];
        // compare sorted copies so that sort() does not mutate this.users,
        // and compare element strings joined in a well-defined order
        return this.users.slice().sort().join() === arr.slice().sort().join();
    }
})
However, bear in mind that this won't perform very well, since a query with the $where operator calls the JavaScript engine to evaluate JavaScript code on every document and checks the condition for each.
This is very slow as MongoDB evaluates non-$where query operations before $where expressions and non-$where query statements may use an index.
It is advisable to combine this with indexed queries if you can, so that the query may be faster. In general, use JavaScript expressions and the $where operator as a last resort, when you can't structure the data in any other way or when you are dealing with a small subset of data.
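The order-insensitive comparison that the $where function performs can be written as a small standalone helper in plain JavaScript (a sketch; slice() avoids mutating the input arrays):

```javascript
// True when the two arrays contain the same elements, ignoring order.
function sameMembers(a, b) {
  if (a.length !== b.length) return false;
  var sa = a.slice().sort();
  var sb = b.slice().sort();
  return sa.every(function (item, i) {
    return item === sb[i];
  });
}

sameMembers(["user1", "user2"], ["user2", "user1"]); // true
sameMembers(["user1", "user2"], ["user1", "user3"]); // false
```

Note that, unlike the set operators above, this also distinguishes duplicate entries: ['user1', 'user2', 'user2'] would not match ['user1', 'user2'].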
I am currently developing a website to do with cooking and will store recipes.
At the moment I am planning to store the recipes in a JS nested array. The array will contain all the recipes and then within each of the recipes will be another array containing all the ingredients for that recipe.
What would be the best way to structure this nested array?
Currently I have the following but I'm not entirely sure this is the best/correct way to do it...
Any help is much appreciated.
var recipes = [
{
name:"pizza",
ingredients: [
"tomato",
"cheese",
"meat"
]
}
]
I agree with @smakateer regarding an associative array. However, I would improve it a bit:
var recipes = {
"pizza": {
"ingredients": ["tomato", "cheese", "meat" ], //or "ingredients": [ {"name":"tomato", "howMany": 3} ]
//thanks to this it will be easier extendable, i.e.
"description": "Some description",
imageUrl: URL
}
}
EDIT:
You'll probably have many recipes for pizza, so you could store them in an array of objects under one key.
...
"pizza": [ {...}, {...} ],
"dumplings": []
...
I would suggest moving the name to the top level and switching to an associative array mapping recipe names to ingredient arrays:
var recipes = {
"pizza": [
"tomato",
"cheese",
"meat"
]
}
Then you can look up each recipe by name and get an iterable list of ingredients.
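For example, with that shape a lookup is a plain property access (the second recipe is made up for illustration):

```javascript
var recipes = {
  "pizza": ["tomato", "cheese", "meat"],
  "salad": ["lettuce", "tomato"]
};

// Look up one recipe by name and iterate over its ingredients.
recipes["pizza"].forEach(function (ingredient) {
  console.log(ingredient); // logs "tomato", then "cheese", then "meat"
});

// All recipe names:
Object.keys(recipes); // ["pizza", "salad"]
```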