I just started learning MongoDB, so my grasp of queries is not really good at the moment.
So I will get straight to the problem. The following is my document for every user:
{
id:"14198959",
user_name:"kikStart2X"
friends:[
{
friend_id:"1419897878",
friend_name:"nitpick",
profile_picture:"some image data",
},
{
friend_id:"14198848418",
friend_name:"applePie",
profile_picture:"some image data",
}, //etc
],
games:[
{
game_id:"1" ,
game_name:"Bunny Hop"
},
{
game_id:"2" ,
game_name:"Racing cars",
},
],
}
Now the collection has all documents with the same structure.
1) The friends array represents the users that are my friends.
2) The games array represents the games that I have played.
My friends have the same document structure, with their games array containing the games they have played.
What I want is to list the most common games between me and my friends, in ascending/descending or any order.
The result should look like the following
{
result:
[
{
game_id:"1" ,
game_name:"Bunny Hop",
friends:
[
{
friend_id:"1419897878",
friend_name:"nitpick",
profile_picture:"some image data",
},
{
friend_id:"14198848418",
friend_name:"applePie",
profile_picture:"some image data",
},
]
},
{
game_id:"2" ,
game_name:"Racing cars",
friends:
[
{
friend_id:"71615343",
friend_name:"samuel",
profile_picture:"some image data",
},
]
}
]
}
I know this is a bit tough to achieve, but I don't know how to do it and have searched the internet for hours.
Thanks in advance to all you MongoDB champs.
You can try the aggregation query below.
The query will $unwind the friends array, followed by a $lookup for each friend's games.
The next step is to $unwind friendsgames, followed by a comparison using $setIntersection in a $project stage to find the common games between the input document's games and each friend's friendsgames.
The final step is to $group by game to collect the friends who share the same games.
db.collection.aggregate( [
{ $unwind:"$friends" },
{
$lookup: {
from: "collection", // self-lookup: the friends are users in this same collection
localField: "friends.friend_id",
foreignField: "id",
as: "friendsgames"
}
},
{ $unwind:"$friendsgames" },
{ $project:{commongames:{$setIntersection:["$games", "$friendsgames.games"]}, friends:1 }},
{ $unwind:"$commongames" },
{ $group:{_id:"$commongames", friends:{$push:"$friends"} } }
] )
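If you want the output shaped more like the result in the question, you could presumably append one more $project after the $group to pull the game fields out of _id (a sketch, not tested against your data):

// Reshape { _id: { game_id, game_name }, friends: [...] } into
// { game_id, game_name, friends: [...] }.
{ $project: {
    _id: 0,
    game_id: "$_id.game_id",
    game_name: "$_id.game_name",
    friends: 1
} }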
I'm trying to fetch time-series data from PostgreSQL and, after successful queries and parsing of the data, I have some problem indexing it. The mistake is probably quite small, but I just can't find it.
After I get data from PostgreSQL, it looks like this:
[
{ id: 2,
time: 2019-09-12T03:36:04.433Z,
value: 0.311303124694538
},
{ id: 2,
time: 2019-09-12T03:36:03.434Z,
value: 0.13233108292117
},
{ id: 3,
time: 2019-09-12T03:36:03.434Z,
value: 0.13233108292117 }
]
After this step I'm reducing the data by id:
let results = sqlresult.rows.reduce(function(results, row) {
(results[row.id] = results[row.id] || []).push([row.time,row.value]);
return results;
}, {})
let clonedObj = { ...results };
After this step the data is formatted like below:
{ '2':
[ [ 2019-09-12T03:36:04.433Z, 0.311303124694538 ],
[ 2019-09-12T03:36:03.434Z, 0.13233108292117 ],
[ 2019-09-12T03:36:02.432Z, 0.171794173529729 ]
]
}
But once I'm about to drop it into Highcharts, it won't work. My problem is probably that I didn't fully understand how that reduce function works, and now I'm trying to copy it. If some of you could show me how to avoid this step and do it all in the reduce step, I'd be thankful.
for(let i=0; i< Object.keys(clonedObj).length; i++){
highchart[i] = {
name: Object.keys(clonedObj)[i],
data: clonedObj[i]
}
}
I'm expecting a result like the one below:
[{"name":1,"data":[["2019-09-12T03:36:00.433Z",20],["2019-09-12T03:35:38.433Z",-20]]},{"name":2,"data":[["2019-09-12T03:36:04.433Z",0.311303124694538]}]]
From your nicely formatted data listings, it looks like you're using Postgres to package rows of data already. This is something I do all the time, but within some pretty narrow limits. I'd like to get better at this, so I figured I'd give your question a bit of time. To start with, I created a table named "reading" with your data:
CREATE TABLE IF NOT EXISTS reading (
id integer,
"time" text,
"value" real
);
I get back a listing like your top one with this query:
select array_to_json(array_agg(row_to_json(reading_row))) as reading_object
from (select id, time, value from reading) as reading_row
Your target output example doesn't parse right for me; I think you're after this:
[
{
"name":1,
"data":[
[
"2019-09-12T03:36:00.433Z",
20
],
[
"2019-09-12T03:35:38.433Z",
-20
]
]
},
{
"name":2,
"data":[
"2019-09-12T03:36:04.433Z",
0.311303124694538
]
}
]
Fair warning: Yeah, I don't really know how to do that, and I'm hoping someone answers with a simple script to generate exactly the format you want on the Postgres side. But I made a start. Check this out:
select id, json_object_agg(time, value order by time)
from reading
group by id
Here's what I get:
2 "{ ""2019-09-12T03:36:03.434Z"" : 0.132331, ""2019-09-12T03:36:04.433Z"" : 0.311303 }"
3 "{ ""2019-09-12T03:36:03.434Z"" : 0.132331 }"
Here's something that's... not right... but getting closer:
select array_to_json(array_agg(row_to_json(reading_row))) as reading_object
from (
select id, json_object_agg(time, value order by time) as data
from reading
group by id
) as reading_row
Which returns:
[
{
"id":2,
"data":{
"2019-09-12T03:36:03.434Z":0.132331,
"2019-09-12T03:36:04.433Z":0.311303
}
},
{
"id":3,
"data":{
"2019-09-12T03:36:03.434Z":0.132331
}
}
]
I took another crack at it here; this might be what you're after, or close. I noticed you're renaming 'id' as 'name', so that's in the final query:
select array_to_json(array_agg(row_to_json(subquery)))
from (
select id as name,
array_to_json(array_agg(json_build_object('time', time, 'value', value))) as data
from reading
group by id
) subquery
The output, pretty-printed, looks like this:
[
{
"name":2,
"data":[
{
"time":"2019-09-12T03:36:04.433Z",
"value":0.311303
},
{
"time":"2019-09-12T03:36:03.434Z",
"value":0.132331
}
]
},
{
"name":3,
"data":[
{
"time":"2019-09-12T03:36:03.434Z",
"value":0.132331
}
]
}
]
This variant has the same structure, but without labels on the elements within the array:
select array_to_json(array_agg(row_to_json(subquery)))
from (
select id as name,
array_to_json(array_agg(array[time, value::text])) as data
from reading
group by id
) subquery
Apart from the numeric value being cast as text, I think this is what you asked for:
[
{
"name":2,
"data":[
[
"2019-09-12T03:36:04.433Z",
"0.311303"
],
[
"2019-09-12T03:36:03.434Z",
"0.132331"
]
]
},
{
"name":3,
"data":[
[
"2019-09-12T03:36:03.434Z",
"0.132331"
]
]
}
]
Note: I don't see where you're getting your output of 20, -20 in your example.
Between array_to_json(), row(), array_agg(), and json_build_object(), it looks like you can get most any format you need.
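If it helps on the Node side, that last query should come back as a single row with one JSON column, so feeding Highcharts might look roughly like this (a sketch only; pool and seriesSql are placeholders for your connection and the query text above):

// Sketch only (inside an async function, using node-postgres): the final query above
// returns a single row whose one JSON column is already the Highcharts series array.
const { rows } = await pool.query(seriesSql); // pool and seriesSql are placeholders
const highchartSeries = rows[0].array_to_json; // default column name when no alias is given in the SQL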
Here's hoping that someone who actually knows what they're doing chimes in.
I am trying to compare a large number of documents in two collections. To give you an estimate, I have around 1300 documents in each of the two collections.
I want to generate a diff comparison report after comparing the two collections. I do not need to point out exactly what is missing or what new content has been added; I just need to be able to identify that there is in fact some difference between the two documents. Yes, I do have a unique identifier for each document other than Mongo's ObjectId ("_id").
Note: I have implemented the database using the denormalized data model, which means I have embedded documents (documents within documents).
What would you say is the best way to go about implementing a solution for the same?
Thank you in advance for your time, samaritans!
You should use $lookup and $eq on all the fields you care about.
db.collection1.aggregate([
{
$lookup:
{
from: "collection2",
let: { unique_id: "$unique_id", field1: "$field", field2: "$field", ... },
pipeline: [
{ $match:
{ $expr:
{ $and:
[
{ $eq: [ "$unique_id_in_2", "$$unique_id" ] }
{ $eq: [ "$field_to_match", "$$field1" ] },
{ $eq: [ "$field_to_match.2", "$$field2" ] }
]
}
}
},
],
as: "matches"
}
},
{
$match: {
'matches.0': {$exists: false}
}
}
])
** Mongo 3.6+ syntax for $lookup.
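Note that this pipeline only reports documents from collection1 that have no identical counterpart in collection2. To also catch documents that exist only in collection2, you would presumably run the mirror-image aggregation as well, along these lines (a sketch, reusing the placeholder field names above):

// Sketch: the same idea in the other direction, so documents that only exist in
// collection2 also show up in the diff report.
db.collection2.aggregate([
  {
    $lookup: {
      from: "collection1",
      let: { unique_id: "$unique_id_in_2" },
      pipeline: [
        { $match: { $expr: { $eq: [ "$unique_id", "$$unique_id" ] } } }
      ],
      as: "matches"
    }
  },
  {
    $match: {
      'matches.0': { $exists: false }
    }
  }
])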
Scenario: Members can choose (yes/no) from 4 different activities available.
Based on the following input,
[
{
name:"member1",
activity:"activity1",
selected:true
},
{
name:"member1",
activity: "activity3",
selected:false
},
{
name:"member2",
activity:"activity2",
selected:true
},
{
name:"member2",
activity: "activity4",
selected:false
}
]
I need a result as follows, showing each member's choice on all 4 activities in the order of activity 1 to 4 (including the activities on which the user has not yet made a decision):
[
{
name:"member1",
activities:[true,null,false,null]
},
{
name:"member2",
activities:[null,true,null,false]
}
]
I tried the following code,
db.collection("MemberActivities").aggregate(
[
{
$group:
{
_id: "$MemberName",
activities: { $push: "$selected"}
}
}
])
but it contains only the activities on which the user has made a decision (yes/no):
[
{
_id:"member1",
activities:[true,false]
},
{
_id:"member2",
activities:[true,false]
} ]
Please guide me on how to get the desired result.
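I was wondering whether something along these lines could work (an untested sketch, assuming MongoDB 3.4+, the field names from the sample input above, and activity names that are exactly activity1 through activity4), but I'm not sure it is the right approach:

// Untested sketch: group each member's choices, then map a fixed list of the
// four activities onto [true/false/null] by position.
db.collection("MemberActivities").aggregate([
  { $group: {
      _id: "$name",
      choices: { $push: { activity: "$activity", selected: "$selected" } }
  } },
  { $project: {
      _id: 0,
      name: "$_id",
      activities: {
        $map: {
          input: [ "activity1", "activity2", "activity3", "activity4" ],
          as: "act",
          in: {
            $let: {
              vars: { idx: { $indexOfArray: [ "$choices.activity", "$$act" ] } },
              in: {
                $cond: [
                  { $eq: [ "$$idx", -1 ] },
                  null,
                  { $arrayElemAt: [ "$choices.selected", "$$idx" ] }
                ]
              }
            }
          }
        }
      }
  } }
])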
I'm creating a recipe-database (commonly known as a cookbook) where I need to have a many-to-many relationship between ingredients and recipes and I'm using sequelize.js in combination with postgresql.
When an ingredient is added to a recipe I need to declare the correct amount of that ingredient that goes into the recipe.
I've declared (reduced example)
var Ingredient = sequelize.define('Ingredient', {
name: Sequelize.STRING
}, {
freezeTableName: true
});
var Recipe = sequelize.define('Recipe', {
name: Sequelize.STRING
}, {
freezeTableName: true
});
var RecipeIngredient = sequelize.define('RecipeIngredient', {
amount: Sequelize.DOUBLE
});
Ingredient.belongsToMany(Recipe, { through: RecipeIngredient });
Recipe.belongsToMany(Ingredient, {
through: RecipeIngredient,
as: 'ingredients'
});
My problem is with how data is returned when one of my REST endpoints does
router.get('/recipes', function(req, res) {
Recipe.findAll({
include: [{
model: Ingredient,
as: 'ingredients'
}]
}).then(function(r) {
return res.status(200).json(r[0].toJSON());
})
});
The resulting JSON that gets sent to the client looks like this (timestamps omitted):
{
"id": 1,
"name": "Carrots",
"ingredients": [
{
"id": 1,
"name": "carrot",
"RecipeIngredient": {
"amount": 12,
"RecipeId": 1,
"IngredientId": 1
}
}
]
}
While all I wanted was
{
"id": 1,
"name": "Carrots",
"ingredients": [
{
"id": 1,
"name": "carrot",
"amount": 12,
}
]
}
That is, I want the amount field from the relation-table to be included in the result instead of the entire RecipeIngredient object.
The database generated by sequelize looks like this:
Ingredients
id | name
1  | carrot

Recipes
id | name
1  | Carrots

RecipeIngredients
amount | RecipeId | IngredientId
12     | 1        | 1
I've tried to provide an attributes array as a property to the include like this:
include: [{
model: Ingredient,
as: 'ingredients',
attributes: []
}]
But setting either ['amount'] or ['RecipeIngredient.amount'] as the attributes-value throws errors like
Unhandled rejection SequelizeDatabaseError: column ingredients.RecipeIngredient.amount does not exist
Obviously I can fix this in JS using .map but surely there must be a way to make sequelize do the work for me?
I am way late to this one, but I see it has been viewed quite a bit, so here is my answer on how to merge the attributes.
There are some random examples in this one.
router.get('/recipes', function(req, res) {
    Recipe.findAll({
        include: [{
            model: Ingredient,
            as: 'ingredients',
            through: {
                attributes: ['amount']
            }
        }]
    })
    .then(docs => {
        // The field names below (recipe1, spice1, ...) are just random example mappings.
        const response = {
            Deal: docs.map(doc => {
                return {
                    cakeRecipe: doc.recipe1,
                    CokkieRecipe: doc.recipe2,
                    Apples: doc.ingredients.recipe1ingredient,
                    spices: [
                        {
                            sugar: doc.ingredients.spice1,
                            salt: doc.ingredients.spice2
                        }
                    ]
                };
            })
        };
        res.status(200).json(response);
    });
});
You can use sequelize.literal. Using the Ingredient alias of Recipe, you can write as follows. I do not know if this is the right way. :)
[sequelize.literal('`TheAlias->RecipeIngredient`.amount'), 'amount'],
I tested with sqlite3. The result received with the alias "ir" is:
{ id: 1,
name: 'Carrots',
created_at: 2018-03-18T04:00:54.478Z,
updated_at: 2018-03-18T04:00:54.478Z,
ir: [ { amount: 10, RecipeIngredient: [Object] } ] }
See the full code here:
https://github.com/eseom/sequelize-cookbook
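To show where that literal would sit against the question's models, a rough sketch might look like the following (untested; the alias and quoting depend on the dialect: backticks as above for MySQL/sqlite, double quotes for Postgres):

// Sketch only: lift `amount` from the join table into each ingredient, using the
// 'ingredients' alias from the question. Quoting assumes MySQL/sqlite; Postgres
// would need double quotes around the alias instead of backticks.
Recipe.findAll({
  include: [{
    model: Ingredient,
    as: 'ingredients',
    attributes: [
      'id',
      'name',
      [sequelize.literal('`ingredients->RecipeIngredient`.`amount`'), 'amount']
    ],
    through: { attributes: [] } // drop the nested RecipeIngredient object from the output
  }]
});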
I've gone over the documentation, but I couldn't find anything that seems like it would let me merge the attributes of the join table into the result, so it looks like I'm stuck with doing something like this:
router.get('/recipes', function(req, res) {
Recipe.findAll({
include: [{
model: Ingredient,
as: 'ingredients',
through: {
attributes: ['amount']
}
}]
}).then(function(recipes) {
return recipes[0].toJSON();
}).then(function(recipe) {
recipe.ingredients = recipe.ingredients.map(function(i) {
i.amount = i.RecipeIngredient.amount;
delete i.RecipeIngredient;
return i;
});
return recipe;
}).then(function(recipe) {
return res.status(200).json(recipe);
});
});
Passing through to include lets me filter which attributes I want to include from the join table, but for the life of me I could not find a way to make sequelize merge it for me.
The above code returns the output I wanted, but with the added overhead of looping over the list of ingredients, which is not quite what I wanted; unless someone comes up with a better solution, I can't see another way of doing this.
I am trying to implement a function that collects unread messages from an articles collection. Each article in the collection has a "discussions" entry with discussion comment subdocuments. An example of such a subdocument is:
{
"id": NumberLong(7534),
"user": DBRef("users", ObjectId("...")),
"dt_create": ISODate("2015-01-26T00:10:44Z"),
"content": "The discussion comment content"
}
The parent document has the following (partial) structure:
{
model: {
id: 17676,
title: "Article title",
author: DBRef("users", ObjectId(...)),
// a bunch of other fields here
},
statistics: {
// Statistics will be stored here (pageviews, etc)
},
discussions: [
// Array of discussion subdocuments, like the one above
]
}
Each user also has a last_viewed entry, which is a document; an example is as follows:
{
"17676" : "2015-01-10T00:00:00.000Z",
"18038" : "2015-01-10T00:00:00.000Z",
"18242" : "2015-01-20T00:00:00.000Z",
"18325" : "2015-01-20T00:00:00.000Z"
}
This means that the user has looked at discussion comments for the last time on January 10th 2015 for articles with IDs 17676 and 18038, and on January 20th 2015 for articles with IDs 18242 and 18325.
So I want to collect discussion entries from the article documents, and for article with ID 17676, I want to collect the discussion entries that were created after 2015-01-10, and for article with ID 18242, I want to show the discussion entries created after 2015-01-20.
UPDATED
Based on Neil Lunn's reply, the function I have created so far is:
function getUnreadDiscussions(userid) {
user = db.users.findOne({ 'model.id': userid });
last_viewed = [];
for(var i in user.last_viewed) {
last_viewed.push({
'id': parseInt(i),
'dt': user.last_viewed[i]
});
}
result = db.articles.aggregate([
// For now, collect just articles the user has written
{ $match: { 'model.author': DBRef('users', user._id) } },
{ $unwind: '$discussions' },
{ $project: {
'model': '$model',
'discussions': '$discussions',
'last_viewed': {
'$let': {
'vars': { 'last_viewed': last_viewed },
'in': {
'$setDifference': [
{ '$map': {
'input': '$$last_viewed',
'as': 'last_viewed',
'in': {
'$cond': [
{ '$eq': [ '$$last_viewed.id', '$model.id' ] },
'$$last_viewed.dt',
false
]
}
} },
[ false ]
]
}
}
}
}
},
// To get a scalar instead of a 1-element array:
{ $unwind: '$last_viewed' },
// Match only those that were created after last_viewed
{ $match: { 'discussions.dt_create': { $gt: '$last_viewed' } } },
{ $project: {
'model.id': 1,
'model.title': 1,
'discussions': 1,
'last_viewed': 1
} }
]);
return result.toArray();
}
The whole $let thing, and the $unwind after that, transforms the data into the following partial projection (with the last $match commented out):
{
"_id" : ObjectId("54d9af1dca71d8054c8d0ee3"),
"model" : {
"id" : NumberLong(18325),
"title" : "Article title"
},
"discussions" : {
"id" : NumberLong(7543),
"user" : DBRef("users", ObjectId("54d9ae24ca71d8054c8b4567")),
"dt_create" : ISODate("2015-01-26T00:10:44Z"),
"content" : "Some comment here"
},
"last_viewed" : ISODate("2015-01-20T00:00:00Z")
},
{
"_id" : ObjectId("54d9af1dca71d8054c8d0ee3"),
"model" : {
"id" : NumberLong(18325),
"title" : "Article title"
},
"discussions" : {
"id" : NumberLong(7554),
"user" : DBRef("users", ObjectId("54d9ae24ca71d8054c8b4567")),
"dt_create" : ISODate("2015-01-26T02:03:22Z"),
"content" : "Another comment here"
},
"last_viewed" : ISODate("2015-01-20T00:00:00Z")
}
So far so good here. But the problem now is that the $match to select only the discussions created after the last_viewed date is not working. I am getting an empty array response. However, if I hard-code the date and put in $match: { 'discussions.dt_create': { $gt: ISODate("2015-01-20 00:00:00") } }, it works. But I want it to take it from last_viewed.
I found another SO thread where this issue has been resolved by using the $cmp operator.
The final part of the aggregation would be:
[
{ /* $match, $unwind, $project, $unwind as before */ },
{ $project: {
'model': 1,
'discussions': 1,
'last_viewed': 1,
'compare': {
$cmp: [ '$discussions.dt_create', '$last_viewed' ]
}
} },
{ $match: { 'compare': { $gt: 0 } } }
]
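As a side note: on MongoDB 3.6+ the extra $cmp/$project pair should presumably be unnecessary, because $expr allows $match to compare two fields of the same document directly, along these lines:

// Sketch (MongoDB 3.6+): compare the two date fields directly in $match.
{ $match: { $expr: { $gt: [ "$discussions.dt_create", "$last_viewed" ] } } }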
The aggregation framework is great, but it takes quite a different approach to problem-solving. Hope this helps someone!
I'll keep the question unanswered in case anyone else has a better answer/method. If this answer has been upvoted enough times, I'll accept this one.