Fetch issues in backbone collection with id - javascript

Following is my JSON:
[
    {
        "id" : "1",
        "type" : "report"
    },
    {
        "id" : "2",
        "type" : "report"
    },
    {
        "id" : "1",
        "type" : "email"
    }
]
This JSON is returned through a Backbone collection from a service call.
When I use the response to render an HTML table with a Backbone view and a Handlebars template, only 2 rows are displayed, whereas there should be 3.
Note:
The collection's parse returns the correct JSON (i.e. 3 rows).
When I overwrite the id in parse with a randomly generated unique number, all 3 rows are displayed.
That is not acceptable, because I don't want to change the id.
I want the rows to be displayed as follows:
1 reports
2 reports
1 email

From the documentation for Collection.add:
Note that adding the same model (a model with the same id) to a collection more than once is a no-op.
While I cannot see why two different objects should have the same id, you may have a valid reason. One suggestion would be to add another property, _dummyId, to each object in the JSON response and set it to an auto-incrementing value on the server side. On the client side, in your model definition, set idAttribute to _dummyId.
JSON response,
[
    {
        "id" : "1",
        "_dummyId" : "1",
        "type" : "report"
    },
    {
        "id" : "2",
        "_dummyId" : "2",
        "type" : "report"
    },
    {
        "id" : "1",
        "_dummyId" : "3",
        "type" : "email"
    }
]
Your model definition, from http://backbonejs.org/#Model-idAttribute,
var Meal = Backbone.Model.extend({
    idAttribute: "_dummyId"
});
That said, I do hope there is a more elegant setting in Backbone, something that makes a collection act as a list instead of a set.

If you want to solve this, you have to set a new unique id for each model you add to the collection.
Try something like this in your model definition:
initialize: function () {
    this.set("id", this.generateID());
},
generateID: function () {
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
        var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
        return v.toString(16);
    });
}
If you need the original id, save it first, and after creating the new one, store the original value in another model attribute.
Backbone ignored the idAttribute change when I tried the other answer's suggestion of pointing idAttribute at a dummy id.
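For example, a minimal sketch of that approach (the originalId attribute name is illustrative, not anything Backbone requires):
var Row = Backbone.Model.extend({
    initialize: function () {
        // keep the server-side id around before replacing it
        this.set("originalId", this.get("id"));
        this.set("id", this.generateID());
    },
    generateID: function () {
        // random UUID-style string, so no two models ever share an id
        return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
            var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
            return v.toString(16);
        });
    }
});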

Related

Firebase filter with pagination

I'm building an open-source Yelp clone with Firebase + Angular.
My database:
{
    "reviews" : {
        "-L0f3Bdjk9aVFtVZYteC" : {
            "comment" : "my comment",
            "ownerID" : "Kug2pR1z3LMcZbusqfyNPCqlyHI2",
            "ownerName" : "MyName",
            "rating" : 2,
            "storeID" : "-L0e8Ua03XFG9k0zPmz-"
        },
        "-L0f7eUGqenqAPC1liYj" : {
            "comment" : "me second comment",
            "ownerID" : "Kug2pR1z3LMcZbusqfyNPCqlyHI2",
            "ownerName" : "MyName",
            "rating" : 3,
            "storeID" : "-L0e8Ua03XFG9k0zPmz-"
        }
    },
    "stores" : {
        "-L0e8Ua03XFG9k0zPmz-" : {
            "description" : "My good Store",
            "name" : "GoodStore",
            "ownerID" : "39UApyo0HIXmKPrTOi8D0nWLi6n2",
            "tags" : [ "good", "health", "cheap" ]
        }
    },
    "users" : {
        "39UApyo0HIXmKPrTOi8D0nWLi6n2" : {
            "name" : "First User"
        },
        "Kug2pR1z3LMcZbusqfyNPCqlyHI2" : {
            "name" : "MyName",
            "reviews" : {
                "-L0f3Bdjk9aVFtVZYteC" : true,
                "-L0f7eUGqenqAPC1liYj" : true
            }
        }
    }
}
I use the code below to get all of a store's reviews (using AngularFire2):
getReviews(storeID: string) {
    return this.db.list('/reviews', ref => {
        return ref.orderByChild('storeID').equalTo(storeID);
    });
}
Now I want to paginate reviews server-side, but I think I cannot do it with this database structure. Am I right? I tried:
getReviews(storeID: string) {
    return this.db.list('/reviews', ref => {
        return ref.orderByChild('storeID').equalTo(storeID).limitToLast(10); // How do I paginate without retrieving all the data?
    });
}
I thought that I could put all reviews inside stores, but (i) I don't want to retrieve all reviews at once when someone asks for a store, and (ii) each review carries a username, which I want to be easy to change (that's why I keep the data denormalized).
For the second page you need to know two things:
the store ID that you want to filter on
the key of the review you want to start at
You already have the store ID, so that's easy. As the key to start at, use the key of the last item on the previous page, and then just request one extra item. Finally, you'll need to use startAt() (and possibly endAt()) for this:
return this.db.list('/reviews', ref => {
    return ref.orderByChild('storeID')
        .startAt(storeID, lastKeyOnPreviousPage)
        .limitToLast(11);
});
Refer to the Firebase Realtime Database query documentation for the details.
For the first page:
snapshot = await ref.orderByChild('storeID')
    .equalTo(store_id) // store_id is the variable name
    .limitToLast(10)
    .once("value")
Store the firstKey (NOT the lastKey) from the above query, since you are using limitToLast():
firstKey = null
snapshot.forEach(snap => {
    if (!firstKey)
        firstKey = snap.key
    // code
})
For the next page:
snapshot = await ref.orderByChild('storeID') // storeID is the field name in the database
    .startAt(store_id)                       // store_id is the variable which has the desired store ID
    .endAt(store_id, firstKey)
    .limitToLast(10 + 1)                     // 1 is added because you will also get the value for firstKey
    .once("value")
The above query will fetch 11 items, including one redundant item that was already on the first page.
How it works:
startAt(value: number | string | boolean | null, key?: string): Query
The starting point is inclusive, so children with exactly the specified value will be included in the query. The optional key argument can be used to further limit the range of the query. If it is specified, then children that have exactly the specified value must also have a key name greater than or equal to the specified key.
endAt(value: number | string | boolean | null, key?: string): Query
The ending point is inclusive, so children with exactly the specified value will be included in the query. The optional key argument can be used to further limit the range of the query. If it is specified, then children that have exactly the specified value must also have a key name less than or equal to the specified key.
So the query will try to fetch:
storeID >= store_id && storeID <= store_id (lexicographically)
which is equivalent to
storeID == store_id
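Putting both steps together, here is a rough sketch of the whole flow with the plain Firebase JS SDK reference API (the helper names getFirstPage/getNextPage and the pageSize constant are illustrative, not from the question):
const pageSize = 10;
const ref = firebase.database().ref('/reviews');

async function getFirstPage(storeID) {
    const snapshot = await ref.orderByChild('storeID')
        .equalTo(storeID)
        .limitToLast(pageSize)
        .once('value');
    const reviews = [];
    let firstKey = null;
    snapshot.forEach(snap => {
        if (!firstKey) firstKey = snap.key; // oldest key on this page
        reviews.push({ key: snap.key, ...snap.val() });
    });
    return { reviews, firstKey };
}

async function getNextPage(storeID, firstKey) {
    const snapshot = await ref.orderByChild('storeID')
        .startAt(storeID)
        .endAt(storeID, firstKey)   // range stays pinned to this store
        .limitToLast(pageSize + 1)  // +1 because firstKey itself is included again
        .once('value');
    const reviews = [];
    snapshot.forEach(snap => {
        reviews.push({ key: snap.key, ...snap.val() });
    });
    reviews.pop(); // drop the duplicate entry for firstKey
    return { reviews, firstKey: reviews.length ? reviews[0].key : null };
}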

mongoose index already exists with different options

I am implementing a search results view in my app.
I found that Mongoose supports MongoDB's full-text search via $text.
I put the code below in Post.js:
PostSchema.index({desc: 'text'}); //for example
Here's the code I put in my routing file route/posts.js
Post.find({$text: {$search : 'please work!'}}).exec(function (err, posts) {...})
The error message I get is below:
Index with pattern: { _fts: "text", _ftsx: 1 } already exists with different options
Does anybody know how to deal with this error?
Thank you.
Check which field your text index is defined on. Right now MongoDB allows only one text index per collection, so if you have defined a text index on the desc field and then try to define one on some other field, you are bound to get this error.
Query your indexes to see which field it was created on. To get the indexes you can do
db.collection.getIndexes()
and it will return something like this
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "some.ns"
    },
    {
        "v" : 1,
        "key" : {
            "_fts" : "text",
            "_ftsx" : 1
        },
        "name" : "desc_text",
        "ns" : "some.ns",
        "weights" : {
            "title" : 1
        },
        "default_language" : "english",
        "language_override" : "language",
        "textIndexVersion" : 2
    }
]
Now, if you also want other fields to be covered by this index, simply drop it
db.collection.dropIndex('desc_text');
and then recreate it, including all the fields you want covered by the text index:
db.collection.createIndex({
    title: 'text',
    body: 'text',
    desc: 'text'
    // ...and so on
});
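In Mongoose terms, that means declaring one compound text index covering every field you want searchable instead of the single-field one from the question (a sketch; the title field is illustrative):
// Post.js -- a single text index spanning all searchable fields
PostSchema.index({ title: 'text', desc: 'text' });
// Drop the old single-field index first (db.posts.dropIndex('desc_text') in the
// mongo shell); otherwise building this one fails with the same
// "already exists with different options" error.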

Meteor: Return only single object in nested array within collection

I'm attempting to filter the data sets returned by Meteor's find().fetch() down to a single object. It doesn't seem very useful if I query for a single subdocument but instead receive several, some not even containing any of the matched terms.
I have a simple mixed data collection that looks like this:
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e7"),
    "name" : "Entertainment",
    "items" : [
        {
            "_id" : ObjectId("57a38b5f2bd9ac8225caff06"),
            "slug" : "this-is-a-long-slug",
            "title" : "This is a title"
        },
        {
            "_id" : ObjectId("57a38b835ac9e2efc0fa09c6"),
            "slug" : "mc",
            "title" : "Technology"
        }
    ]
}
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e8"),
    "name" : "Sitewide",
    "items" : [
        {
            "_id" : ObjectId("57a38bc75ac9e2efc0fa09c9"),
            "slug" : "example",
            "name" : "Single Example"
        }
    ]
}
I can easily query for a specific object in the nested items array with the MongoDB shell as this:
db.categories.find( { "items.slug": "mc" }, { "items.$": 1 } );
This returns good data, it contains just the single object I want to work with:
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e7"),
    "items" : [
        {
            "_id" : ObjectId("57a38b985ac9e2efc0fa09c8"),
            "slug" : "mc",
            "name" : "Single Example"
        }
    ]
}
However, if a similar query within Meteor is directly attempted:
/* server/publications.js */
Meteor.publish('categories.all', function () {
    return Categories.find({}, { sort: { position: 1 } });
});

/* imports/ui/page.js */
Template.page.onCreated(function () {
    this.subscribe('categories.all');
});

Template.page.helpers({
    items: function () {
        var item = Categories.find(
            { "items.slug": "mc" },
            { "items.$": 1 }
        ).fetch();
        console.log('item: %o', item);
    }
});
The outcome isn't ideal as it returns the entire matched block, as well as every object in the nested items array:
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e7"),
    "name" : "Entertainment",
    "boards" : [
        {
            "_id" : ObjectId("57a38b5f2bd9ac8225caff06"),
            "slug" : "this-is-a-long-slug",
            "name" : "This is a title"
        },
        {
            "_id" : ObjectId("57a38b835ac9e2efc0fa09c6"),
            "slug" : "mc",
            "name" : "Technology"
        }
    ]
}
I can then of course filter the returned cursor even further with a for loop to get just the needed object, but this seems unscalable and terribly inefficient when dealing with larger data sets.
I can't grasp why Meteor's find returns a completely different set of data than MongoDB's shell find; the only reasonable explanation is that the two function signatures are different.
Should I break up my nested collections into smaller collections and take a more relational database approach (i.e. store references to ObjectIDs) and query data from collection-to-collection, or is there a more powerful means available to efficiently filter large data sets into single objects that contain just the matched objects as demonstrated above?
The client side implementation of Mongo used by Meteor is called minimongo. It currently only implements a subset of available Mongo functionality. Minimongo does not currently support $ based projections. From the Field Specifiers section of the Meteor API:
Field operators such as $ and $elemMatch are not available on the client side yet.
This is one of the reasons why you're getting different results between the client and the Mongo shell. The closest you can get with your original query is the result you'll get by changing "items.$" to "items":
Categories.find(
    { "items.slug": "mc" },
    { "items": 1 }
).fetch();
This query still isn't quite right though. Minimongo expects your second find parameter to be one of the allowed option parameters outlined in the docs. To filter fields for example, you have to do something like:
Categories.find(
    { "items.slug": "mc" },
    {
        fields: {
            "items": 1
        }
    }
).fetch();
On the client side (with Minimongo) you'll then need to filter the result further yourself.
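For example, one rough way to do that filtering inside the helper from the question (a sketch; it assumes the underscore package that Meteor bundles by default):
Template.page.helpers({
    items: function () {
        var category = Categories.findOne({ "items.slug": "mc" });
        if (!category) return [];
        // keep only the nested items whose slug actually matches
        return _.filter(category.items, function (item) {
            return item.slug === "mc";
        });
    }
});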
There is another way of doing this though. If you run your Mongo query on the server, you won't be using Minimongo, which means projections are supported. As a quick example, try the following:
/server/main.js
const filteredCategories = Categories.find(
    { "items.slug": "mc" },
    {
        fields: {
            "items.$": 1
        }
    }
).fetch();
console.log(filteredCategories);
The projection will work, and the logged results will match the results you see when using the Mongo console directly. Instead of running your Categories.find on the client side, you could instead create a Meteor Method that calls your Categories.find on the server, and returns the results back to the client.
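A rough sketch of that Method-based approach (the method name categories.bySlug is made up for illustration):
/* server/methods.js */
Meteor.methods({
    'categories.bySlug': function (slug) {
        // runs on the server, so the $ projection is supported
        return Categories.find(
            { "items.slug": slug },
            { fields: { "items.$": 1 } }
        ).fetch();
    }
});

/* client */
Meteor.call('categories.bySlug', 'mc', function (error, result) {
    if (!error) console.log('filtered categories: %o', result);
});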

MongoDB aggregation: How to extract the field in the results

Hi all!
I'm new to MongoDB aggregation. After aggregating, I finally get this result:
"result" : [
{
"_id" : "531d84734031c76f06b853f0"
},
{
"_id" : "5316739f4031c76f06b85399"
},
{
"_id" : "53171a7f4031c76f06b853e5"
},
{
"_id" : "531687024031c76f06b853db"
},
{
"_id" : "5321135cf5fcb31a051e911a"
},
{
"_id" : "5315b2564031c76f06b8538f"
}
],
"ok" : 1
The data is just what I'm looking for, but I want to take it one step further; I'd like it displayed like this:
"result" : [
"531d84734031c76f06b853f0",
"5316739f4031c76f06b85399",
"53171a7f4031c76f06b853e5",
"531687024031c76f06b853db",
"5321135cf5fcb31a051e911a",
"5315b2564031c76f06b8538f"
],
"ok" : 1
Yes, I just want to get all the unique ids in a plain string array. Is there anything I can do? Any help would be appreciated!
All MongoDB queries produce "key/value" pairs in the result document. All MongoDB content is basically a BSON document in this form, which is just "translated" back into native types by the driver for the language it is implemented in.
So the aggregation framework alone is never going to produce a bare array of just the values you want. But you can always transform the array of results afterwards; after all, it is only an array:
var result = db.collection.aggregate(pipeline);
var response = result.result.map(function(x) { return x._id } );
Also note that from MongoDB 2.6 onwards the default behavior in the shell (and the preferred option) is for the aggregation result to be returned as a cursor. Since this comes back as a list rather than a single document, you would process it differently:
var response = db.collection.aggregate(pipeline).map(function(x) {
    return x._id;
});
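The same idea applies when running the aggregation from the Node.js driver; a quick sketch (the collection name is a placeholder):
const docs = await db.collection('mycollection').aggregate(pipeline).toArray();
const ids = docs.map(function (doc) {
    return doc._id; // or doc._id.toString() for plain strings
});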

copyValue from another collection in MongoDb

Can I copy some fields from one collection to another collection?
I want to copy the values from bar into the foo collection, but I don't want the type field, and I want foo to get a new _id and an extra field (userId). (I'm using Node.js.)
collection bar
{
    "_id" : ObjectId("77777777ffffff9999999999"),
    "type" : 0,
    "name" : "Default",
    "index" : 1,
    "layout" : "1"
}
collection foo
{
    "_id" : NEW OBJECT ID,
    // "type" : 0, NOT IN THIS COLLECTION
    "userId" : ObjectId("77777777ffffff9999999911"),
    "name" : "Default",
    "index" : 1,
    "layout" : "1"
}
I tried db.bar.copyTo("foo"), but that copies the entire collection.
Actually, that is probably your best option. Since you don't want the type field in your new collection, just remove it using $unset:
db.foo.update({ },{ "$unset": { "type": 1 } },false,true)
That will remove the field from all documents in your new collection in one statement.
From release 2.6 upwards you can also do this using aggregate:
db.bar.aggregate([
    { "$project": {
        "userId" : 1,
        "name" : 1,
        "index" : 1,
        "layout" : 1
    }},
    { "$out": "foo" }
])
The new $out method sends the output of the aggregation statement to a collection.
You can copy fields from one collection to another by using:
db.copyFromCollection.find().forEach(function(x) {
    db.copyToCollection.update(
        { "_id" : ObjectId("53205a4a14952bee39f3376e") }, // some condition for the update
        { $set: { parameterOfCopyToCollection: x.parameterOfCopyFromCollection } }
    );
});
Here we iterate over the collection we want to copy data from, and inside the callback we update the document in the target collection that should receive that data.
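Since the question mentions Node.js, here is a rough sketch of the same copy done with the Node.js MongoDB driver (the userId value and the connected db handle are assumed to already exist):
async function copyBarToFoo(db, userId) {
    const docs = await db.collection('bar').find({}).toArray();
    const copies = docs.map(function (doc) {
        // drop _id (a new one is generated on insert) and the unwanted type field
        const { _id, type, ...rest } = doc;
        return { ...rest, userId: userId };
    });
    if (copies.length) {
        await db.collection('foo').insertMany(copies);
    }
}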
