I am trying to check if a field exists in a sub-document of an array and, if it does, to have only those documents provided in the callback. But every time I log the callback document, it gives me all values in my array instead of only the ones matching the query.
I am following this tutorial
The only difference is that I am using the findOne function instead of the find function, but it still gives me back all values. I tried using find and it does the same thing.
I am also using the same collection style as the example in the link above.
Example
In the image above you can see I have a document with a uid field and a contacts array. What I am trying to do is first select a document based on the inputted uid. Then, after selecting that document, I want to display the values from the contacts array where the contacts.uid field exists. So from the image above, the only values that would be displayed are contacts[0] and contacts[3], because contacts[1] doesn't have a uid field.
Contact.contactModel.findOne({$and: [
    {uid: self.uid},
    {contacts: {
        $elemMatch: {
            uid: {
                $exists: true,
                $ne: undefined
            }
        }
    }}
]})
Your problems come from a misconception about data modeling in MongoDB, not uncommon for developers coming from other DBMSs. Let me illustrate this with an example of how data modeling works with an RDBMS vs MongoDB (and a lot of other NoSQL databases as well).
With an RDBMS, you identify your entities and their properties. Next, you identify the relations, normalize the data model and bang your head against the wall for a while to get the UPPER LEFT ABOVE AND BEYOND JOIN™ that will answer the questions arising from use case A. Then, you pretty much do the same for use case B.
With MongoDB, you would turn this upside down. Looking at your use cases, you would try to find out what information you need to answer the questions arising from the use case and then model your data so that those questions can get answered in the most efficient way.
Let us stick with your example of a contacts database. A few assumptions to be made here:
Each user can have an arbitrary number of contacts.
Each contact and each user need to be uniquely identified by something other than a name, because names can change and whatnot.
Redundancy is not a bad thing.
With the first assumption, embedding contacts into a user document is out of the question, since there is a document size limit. Regarding our second assumption: the uid field becomes not redundant, but simply useless, as the _id field already uniquely identifies each document.
The use cases
Let us look at some use cases, which are simplified for the sake of the example, but they will give you the picture.
Given a user, I want to find a single contact.
Given a user, I want to find all of his contacts.
Given a user, I want to find the details of his contact "John Doe"
Given a contact, I want to edit it.
Given a contact, I want to delete it.
The data models
User
{
    "_id": new ObjectId(),
    "name": new String(),
    "whatever": {}
}
Contact
{
    "_id": new ObjectId(),
    "contactOf": ObjectId(),
    "name": new String(),
    "phone": new String()
}
Obviously, contactOf refers to an ObjectId which must exist in the User collection.
The implementations
Given a user, I want to find a single contact.
If I have the user object, I have its _id, and the query for a single contact becomes as easy as
db.contacts.findOne({"contactOf":self._id})
Given a user, I want to find all of his contacts.
Equally easy:
db.contacts.find({"contactOf":self._id})
Given a user, I want to find the details of his contact "John Doe"
db.contacts.find({"contactOf":self._id,"name":"John Doe"})
Now that we have the contact one way or the other, including his/her/undecided/choose not to say _id, we can easily edit or delete it:
Given a contact, I want to edit it.
db.contacts.update({"_id":contact._id},{$set:{"name":"John F Doe"}})
I trust that by now you get an idea on how to delete John from the contacts of our user.
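For completeness, a minimal sketch of that delete, in the same shell syntax as above:
db.contacts.remove({"_id": contact._id})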
Notes
Indices
With your data model, you would have needed to add additional indices for the uid fields, which serve no purpose, as we found out. Furthermore, _id is indexed by default, so we make good use of this index. An additional index should be created on the contacts collection, however:
db.contacts.ensureIndex({"contactOf": 1, "name": 1})
Normalization
Not done here at all. The reasons for this are manifold, but the most important is that while John Doe might only have the mobile number of "Mallory H Ousefriend", his wife Jane Doe might also have the email address "janes_naughty_boy@censored.com", which at least Mallory surely would not want to pop up in John's contact list. So even if we had identity between contacts, you most likely would not want to reflect that.
Conclusion
With a little bit of data remodeling, we reduced the number of additional indices we need to one, made the queries much simpler and circumvented the BSON document size limit. As for the performance, I guess we are talking of at least one order of magnitude.
In the tutorial you mentioned above, they pass two parameters to the method, one for the filter and one for the projection, but you passed just one; that's the difference. You can change your query to look like this:
Contact.contactModel.findOne(
    {uid: self.uid},
    {contacts: {
        $elemMatch: {
            uid: {
                $exists: true,
                $ne: undefined
            }
        }
    }}
)
The aggregation framework makes filtering for the existence of a field a little tricky. I believe the OP wants all docs where a field exists in an array of subdocs, and then to return ONLY those subdocs where the field exists. The following should do the trick:
var inputtedUID = "0"; // the actual value doesn't matter here

var c = db.foo.aggregate([
    // This $match finds the docs with our input UID:
    {$match: {"uid": inputtedUID}},

    // ... and the $addFields/$filter will strip out those entries in contacts
    // where contacts.uid does NOT exist. We wish we could use
    // {cond: {"$$zz.uid": {$exists: true}}} but we cannot use $exists here,
    // so we need the convoluted $ifNull treatment. Note we overwrite the
    // original contacts with the filtered contacts:
    {$addFields: {contacts: {$filter: {
        input: "$contacts",
        as: "zz",
        cond: {$ne: [{$ifNull: ["$$zz.uid", null]}, null]}
    }}}},

    {$limit: 1} // just get 1, like findOne()
]);
c.forEach(printjson);
{
    "_id" : 0,
    "uid" : 0,
    "contacts" : [
        {
            "uid" : "buzz",
            "n" : 1
        },
        {
            "uid" : "dave",
            "n" : 2
        }
    ]
}
I will start off by saying that while I am not new to CouchDB, I am new to querying views using JavaScript and the web.
I have looked at multiple other questions on here, including CouchDB - Queries with params, couchDB queries, Couchdb query with AND operator, CouchDB Querying Dates, and Basic CouchDB Queries, just to list a few.
While all have good information in them, I haven't found one that has my particular problem in it.
I have a view set up like so:
function (docu) {
    if (docu.status && docu.doc && docu.orgId.toString() && !docu.deleted) {
        switch (docu.status) {
            case "BASE":
                emit(docu.name, docu);
                break;
            case "AIR":
                emit(docu.eta, docu);
                break;
            case "CHECK":
                emit(docu.checkTime, docu);
                break;
        }
    }
}
with all documents having a status, doc, orgId, deleted, name, eta, and checkTime. (I changed the function parameter from doc to docu because of my custom doc key.)
I am trying to query and emit based on a set of keys (status, doc, and orgId), where orgId is an integer.
My jQuery to do this looks like so:
$.couch.db("myDB").view("designDoc/viewName", {
keys : ["status","doc",orgId],
success: function(data) {
console.log(data);
},
error: function(status) {
console.log(status);
}
});
I receive
{"total_rows":59,"offset":59,"rows":[
]}
Sometimes the offset is 0, sometimes it is 59. I feel I must be doing something wrong for this not to be working correctly.
So for my questions:
1. I did not mention this, but I had to use docu.orgId.toString() because I guess it parses the URL as a string. Is there a way to use this number as a numeric value?
2. How do I correctly view multiple documents based on multiple keys, i.e. if (key1 && key2) emit(doc.name, doc)?
3. Am I doing something obviously wrong that I lack the knowledge to notice?
Thank you all.
You're so very close. To answer your questions:
1. When you're using docu.orgId.toString() in that if-statement, you're basically saying: this value must be truthy. If you didn't convert to a string, any number other than 0 would be true. Since you are converting to a string, any value other than an empty string will be true. Also, since you do not use orgId as the first argument of an emit call, at least not in the example above, you cannot query by it at all.
2. I'll get to this.
3. A little.
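A quick note on the first point: to make orgId queryable at all, the view has to emit it as a key. CouchDB keys keep their JSON types, so emitting the number (not its string form) keeps it numeric. A minimal, hypothetical sketch of such an emit:
// hypothetical extra emit inside your map function:
emit(docu.orgId, null);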
The thing to remember is emit creates a key-value table (that's really all a view is) that you can use to query. Let's say we have the following documents
{type:'student', dept:'psych', name:'josh'},
{type:'student', dept:'compsci', name:'anish'},
{type:'professor', dept:'compsci', name:'kender'},
{type:'professor', dept:'psych', name:'josh'},
{type:'mascot', name:'owly'}
Now let's say we know that for this one view 1) we want to query everything but mascots, and 2) we want to query by type, dept, and name, all of the available fields in this example. We would write a map function like this:
function(doc) {
    if (doc.type === 'mascot') { return; } // don't do anything
    // allow for queries by type
    emit(doc.type, null); // the use of null is explained below
    // allow for queries by dept
    emit(doc.dept, null);
    // allow for queries by name
    emit(doc.name, null);
}
Then, we would query like this:
// look for all joshs
$.couch.db("myDB").view("designDoc/viewName", {
    keys: ["josh"],
    // ...
});

// look for everyone in the psych department
$.couch.db("myDB").view("designDoc/viewName", {
    keys: ["psych"],
    // ...
});

// look for everyone that's a professor and everyone named josh
$.couch.db("myDB").view("designDoc/viewName", {
    keys: ["professor", "josh"],
    // ...
});
Notice the last query isn't "and" in the sense of a logical conjunction; it's "and" in the sense of a union. If you wanted to restrict what was returned to documents that were only professors and also joshs, there are a few options. The most basic would be to concatenate the key when you emit. Like
emit('type-' + doc.type + '_name-' + doc.name, null);
You would then query like this: key : ["type-professor_name-josh"]
It doesn't feel very proper to rely on strings like this, at least it didn't to me when I first started doing it, but it is a quite common method for querying key-value stores. The characters - and _ have no special meaning in this example; I simply use them as delimiters.
Another option would be what you mentioned in your comment, to emit an array like
emit([ doc.type, doc.name ], null);
Then you would query like
key: ["professor", "josh"]
This is perfectly fine, but generally the use case for emitting arrays as keys is for aggregating returned rows. For example, you could emit([year, month, day]). If you had a simple reduce function that basically passed the records through:
function(keys, values, rereduce) {
    if (rereduce) {
        return [].concat.apply([], values);
    } else {
        return values;
    }
}
you could query with the URL parameter group_level set to 1 or 2 and start querying by year and month, or just year, on the exact same view using arrays as keys. Compared to SQL or Mongo it's mad complicated and convoluted, but hey, it's there.
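For illustration, a query against such a date-keyed view might look like this (a sketch; the view name byDate is hypothetical and assumes the [year, month, day] emit and the reduce above):
$.couch.db("myDB").view("designDoc/byDate", {
    group_level: 1, // group rows by year only; 2 would group by year and month
    success: function(data) {
        console.log(data.rows);
    }
});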
The use of null as the emitted value is really for resource saving: emitting the whole document would just duplicate it inside the view index. When you query a view, each row contains an id that you can use in a second ajax call to fetch the full documents, for example from _all_docs.
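That second call might look like this (a sketch, assuming jquery.couch.js, whose allDocs method queries _all_docs; include_docs attaches each full document to its row):
$.couch.db("myDB").allDocs({
    keys: data.rows.map(function(row) { return row.id; }),
    include_docs: true,
    success: function(result) {
        console.log(result.rows); // each row now carries a .doc property
    }
});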
I hope that makes sense. If you need any clarification you can use the comments and I'll try my best.
I need to format an SQL query with a default option for missing object fields. I can do it with an external call to pgp.as.format:
let formattedQuery = pgp.as.format('INSERT INTO some_table (a,b,c) VALUES ($(a), $(b), $(c))', object, {default: null});
db.none(formattedQuery);
Is it possible to pass the default option directly, without pre-formatting the query? Basically, I would like to do something like this:
db.none('INSERT INTO some_table (a,b,c) VALUES ($(a), $(b), $(c))', object, {default: null})
I'm the author of pg-promise.
All query methods in pg-promise rely on the default query formatting, for better reliability, i.e. when a query template refers to a property, the property must exist, or else an error is thrown. It is logical to keep it that way, because a query cannot execute correctly while having properties in it that haven't been replaced with values.
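For example (a sketch; the exact error wording may differ between versions):
// property b is missing from the data object, so formatting fails
// before anything is sent to the server:
db.none('INSERT INTO some_table(a, b) VALUES($(a), $(b))', {a: 1})
    .catch(function (error) {
        // error.message reads along the lines of: Property 'b' doesn't exist.
    });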
Internally, the query engine does support advanced query formatting options, via method as.format, such as partial and default. And there are several objects in the library that make use of those options.
One in particular that you should use for generating inserts is helpers.insert, which can generate both single-insert and multi-insert queries. That method, along with the even more useful helpers.update, makes use of type ColumnSet, which is highly configurable, supporting default values for missing properties (among other things), via type Column.
Using ColumnSet, you can specify a default value either for selective columns or for all of them.
For example, let's assume that column c may be missing, in which case we want to set it to null:
var pgp = require('pg-promise')({
    capSQL: true // to capitalize all generated SQL
});

// declaring a reusable ColumnSet object:
var csInsert = new pgp.helpers.ColumnSet(['a', 'b',
    {
        name: 'c',
        def: null
    }
], {table: 'some_table'});
var data = {a:1, b:'text'};
// generating our insert query:
var insert = pgp.helpers.insert(data, csInsert);
//=> INSERT INTO "some_table"("a","b","c") VALUES(1,'text',null)
This makes it possible to generate multi-insert queries automatically:
var data = [{a:1, b:'text'}, {a:2, b:'hello'}];
// generating a multi-insert query:
var insert = pgp.helpers.insert(data, csInsert);
//=> INSERT INTO "some_table"("a","b","c") VALUES(1,'text',null),(2,'hello',null)
The same approach works nicely for single-update and multi-update queries.
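For instance, a sketch of the multi-update counterpart (the ? prefix marks a as a condition column used only for the WHERE clause, and v/t are the aliases the generated query uses for the value set and the target table):
// a reusable ColumnSet for updates:
var csUpdate = new pgp.helpers.ColumnSet(['?a', 'b', {name: 'c', def: null}],
    {table: 'some_table'});

var update = pgp.helpers.update(data, csUpdate) + ' WHERE v.a = t.a';
//=> roughly: UPDATE "some_table" AS t SET "b"=v."b","c"=v."c"
//   FROM (VALUES(1,'text',null),(2,'hello',null)) AS v("a","b","c")
//   WHERE v.a = t.a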
In all, to your original question:
Is it possible to pass default option directly without pre-formatting the query?
No, and neither should it. Instead, you should use the aforementioned methods within the helpers namespace to generate correct queries. They are way more powerful and flexible ;)
I've run into a bit of an issue with some data that I'm storing in my MongoDB (Note: I'm using mongoose as an ODM). I have two schemas:
mongoose.model('Buyer', {
    credit: Number
})
and
mongoose.model('Item', {
    bid: Number,
    location: { type: [Number], index: '2d' }
})
Buyer/Item will have a parent/child association, with a one-to-many relationship. I know that I can set up Items to be embedded subdocs to the Buyer document or I can create two separate documents with object id references to each other.
The problem I am facing is that I need to query Items whose bid is lower than the Buyer's credit, but also whose location is near a certain geo coordinate.
To satisfy the first criterion, it seems I should embed Items as subdocs so that I can compare the two numbers. But in order to compare locations with a geoNear query, it seems it would be better to separate the documents; otherwise, I can't perform geoNear on each subdocument.
Is there any way that I can perform both tasks on this data? If so, how should I structure my data? If not, is there a way that I can perform one query and then a second query on the result from the first query?
Thanks for your help!
There is another option (besides embedding and normalizing) for storing hierarchies in MongoDB, namely storing them as tree structures. In this case you would store Buyers and Items in separate documents but in the same collection. Each Item document would need a field pointing to its Buyer (parent) document, and each Buyer document's parent field would be set to null. The docs I linked to explain several implementations you could choose from.
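A quick sketch of what the two document shapes might look like in one collection (the kind and parent field names are purely illustrative):
// a Buyer (root) document:
{ "_id": ObjectId("..."), "kind": "buyer", "parent": null, "credit": 500 }

// an Item (child) document pointing to its Buyer:
{ "_id": ObjectId("..."), "kind": "item", "parent": ObjectId("<buyer _id>"),
  "bid": 100, "location": [-73.9, 40.7] }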
If your items are stored in two separate collections, then the best option will be to write your own function and call it using mongoose.connection.db.eval('some code...'). That way you can execute your advanced logic on the server side.
You can write something like this:
var allNearItems = db.Items.find({
    location: {
        $near: {
            $geometry: {
                type: "Point",
                coordinates: [<longitude>, <latitude>]
            },
            $maxDistance: 100
        }
    }
});
var res = [];
allNearItems.forEach(function(item){
    var buyer = db.Buyers.findOne({ id: item.buyerId });
    if (!buyer) return; // `continue` is not valid inside a callback
    if (item.bid < buyer.credit) {
        res.push(item.id);
    }
});
return res;
After evaluation (place it in a mongoose.connection.db.eval("...") call) you will get the array of item ids.
Use it with caution. If your allNearItems result is too large, or you run this query very often, you may face performance problems. The MongoDB team has actually deprecated direct JS code execution, but it is still available in the current stable release.
Can you suggest an algorithm for filtering data?
I am using JavaScript and trying to write a filter function which filters an array of data. I have an array of data and an array of filters, so in order to apply each filter to every data item, I have written two for loops:
data.forEach(function (item) {
    filters.forEach(function (filter) {
        // check item against filter
    });
});
This is not the actual code, but in short that is what my function does. The problem is that this takes a huge amount of time; can someone suggest a better method?
I am using the MooTools library, and the array of data is a JSON array.
Details of data and Filter
The data is a JSON array of, let's say, users:
data = [{"name" : "first", "email" : "first#first", "age" : "20"}.
{"name" : "second", "email" : "second#second", "age" : "21"}
{"name" : "third", "email" : "third#third", "age" : "22"}]
The array of filters is basically a self-defined class for the different fields of the data:
alFilter[0] = filterName;
alFilter[1] = filterEmail;
alFilter[2] = filterAge;
So when I enter the first for loop, I get a single JSON object (the first row in the above case).
When I enter the second for loop (the filters loop), I have a filter class which extracts the exact field on which the current filter works and checks the filter against the appropriate field of the data.
So in my example
data.forEach(function (item) {
    filters.forEach(function (filter) {
        // pass one   - filter name
        // pass two   - filter email
        // pass three - filter age
    });
});
When the second loop ends, I set a flag denoting whether the data item has been filtered out or not, and depending on it the item is displayed.
You're going to have to give us some more detail about the exact structure of your data and filters to really be able to help you out. Are the filters being used to select a subset of data, or to modify the data? What are the filters doing?
That said, there are a few general suggestions:
Do less work. Is there some way you can limit the amount of data you're working on? Some pre-filter that can run quickly and cut it down before you do your main loop?
Break out of the inner loop as soon as possible. If one of the filters rejects a datum, then break out of the inner loop and move on to the next datum (see the sketch at the end of this answer). If this is possible, then you should also try to make the most selective filters come first. (This is assuming that your filters are being used to reject items out of the list, rather than modify them.)
Check for redundancy in the computation the filters perform. If each of them performs some complicated calculations that share some subroutines, then perhaps memoization or dynamic programming may be used to avoid redundant computation.
Really, it all boils down to the first point, do less work, at all three levels of your code. Can you do less work by limiting the items in the outer loop? Do less work by stopping after a particular filter and doing the most selective filters first? Do less work by not doing any redundant computation inside of each filter?
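To put the second point into code, here is a sketch of an early-exiting pass, assuming each entry of your filter array (alFilter in the question) can be called as a predicate returning true or false:
var filtered = data.filter(function (item) {
    // `every` short-circuits: it stops at the first filter that rejects the item
    return alFilter.every(function (filter) {
        return filter(item); // assumption: each filter is a function of one item
    });
});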
That's pretty much how you should do it. The trick is to optimize that "check data with filter" part. You need to traverse all your data and check against all your filters; you're not going to get any faster than that.
Avoid string comparisons, use data models as native as possible, try to reduce the data set on each filter pass, etc.
Without further knowledge, it's hard to optimize this for you.
You should sort the application of your filters, so that two things are optimized: expensive checks should come last, and checks that eliminate a lot of data should come first. Then, you should make sure that checking is cut short as soon as an "out" result occurs.
If your filters are looking for specific values, a range, or the start of a text, then jOrder (http://github.com/danstocker/jorder) will fit your problem.
All you need to do is create a jOrder table like this:
var table = jOrder(data)
    .index('name', ['name'], { grouped: true, ordered: true })
    .index('email', ['email'])
    .index('age', ['age'], { grouped: true, ordered: true, type: jOrder.number });
And then call table.where() to filter the table.
When you're looking for exact matches:
filtered = table.where([{name: 'first'}, {name: 'second'}]);
When you're looking for a certain range of one field:
filtered = table.where([{age: {lower: 20, upper: 21}}], {mode: jOrder.range});
Or, when you're looking for values starting with a given string:
filtered = table.where([{name: 'fir'}], {mode: jOrder.startof});
Filtering will be magnitudes faster this way than with nested loops.
Supposing that a filter removes the data if it doesn't match, I suggest that you switch the two loops like so:
filters.forEach(function (filter) {
    data.forEach(function (item) {
        // check item against filter
    });
});
By doing so, the second filter doesn't have to process all the data, but only the data that passed the first filter, and so on. Of course the tips above (like doing expensive checks last) are still true and should additionally be considered.
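In code, that successive narrowing could look like this (same assumption as above: each filter can be called as a predicate):
var remaining = data;
filters.forEach(function (filter) {
    // each pass only touches what survived the previous filters
    remaining = remaining.filter(filter);
});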