Reading/writing nested data in a Firebase Database (web) - javascript

This seemingly simple question is surely a duplicate of many, so I apologize for that; I won't be offended when it's marked as such. But I have yet to find a clear, straightforward, and modern answer that works as I'd expect.
Given "Item One", how can I return the "subItems" array for the object that matches that title?
Additionally, how do I push new sub-items to that array? (Or is there some different way I should target that array when writing rather than reading?)
For context, "items" is the top-level db.ref('items').

To find an item by a property, you need to run a query that orders and filters on that property. So in your case:
var ref = firebase.database().ref("items");
var query = ref.orderByChild("title").equalTo("Item One");
query.once("value", function(snapshot) {
  snapshot.forEach(function(item) {
    console.log(item.key); // -L8...EMj7P
    console.log(item.child("subItems").val()); // logs the sub items
    // console.log(item.ref.child("subItems").push().set(...)); // adds a sub item
  });
});
Note that nesting data like this is an anti-pattern in the Firebase Database, since it typically leads to problems later on. A more idiomatic approach is to have two top-level lists (e.g. items and subItems) that use the same keys:
items: {
  -L8...EMj7P: {
    title: "Item One"
  }
},
subItems: {
  -L8...EMj7P: {
    ...
  }
}
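With that flattened structure, reading becomes two lookups on the same key. A minimal sketch, assuming the same web SDK style as above and a subItems list that mirrors the structure shown:
var db = firebase.database();

// Find the item by title, then read its sub-items from the parallel top-level list.
db.ref("items").orderByChild("title").equalTo("Item One").once("value", function(snapshot) {
  snapshot.forEach(function(item) {
    db.ref("subItems").child(item.key).once("value", function(subSnapshot) {
      console.log(subSnapshot.val()); // the sub-items stored under the same key
    });
  });
});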

Related

CouchDB find paired documents and list remaining unpaired documents

I'm relatively new to NoSQL, but I have been enjoying the journey very much! I am however finding the map-reduce way of life a bit tricky! I need some help with a problem!
I have a database with two types of documents: opening transactions and closing transactions. For replication and offline-functionality reasons I cannot merge the data into one document. The opening transaction document looks something like:
{
  _id: "transaction-open-randomgeneratedstring",
  type: "transactions-open",
  vehicle: "vehicle-id",
  created: "date string"
}
The closing document looks something like:
{
  _id: "transaction-close-randomgeneratedstring",
  type: "transactions-close",
  openid: "transaction-open-randomgeneratedstring",
  created: "date string"
}
The randomgeneratedstring of a closing transaction matches the randomgeneratedstring of the corresponding opening transaction.
I need a map-reduce that gives me the list of open transactions that do not have a corresponding closing transaction. This will basically give me a list of outstanding transactions.
This is the map-reduce I have thus far, but it is not doing the job.
{
  "map": function(doc) {
    if (doc.type == "transactions-open") {
      emit([doc._id, 0], "OPEN");
    }
    if (doc.type == "transactions-close") {
      emit([doc.openid, 1], "CLOSE");
    }
  },
  "reduce": function(keys, values, rereduce) {
    var unique_labels = {};
    var open = {};
    keys.forEach(function(label) {
      if (!unique_labels[label[0]]) {
        unique_labels[label[0]] = true;
      } else {
        open[label[0]] = true;
      }
    });
    return open;
  }
}
I am open to changes in the _id naming / structure, but I cannot combine the two documents into one.
Thanks!
EDIT
Based on the response from Hod, I changed the reduce to look like:
function(keys, values, rereduce) {
  if (values.length == 1)
    return true;
}
This is certainly a step in the right direction, but the unwanted transactions are still in the result set; their value is just null. Is there no way to get those out of the result set?
As described: what you would do with a join in SQL, you do with a reduce in CouchDB. Something like this (not tested):
{
  "map": function(doc) {
    if (doc.type == "transactions-open") {
      emit([doc._id], 1);
    }
    if (doc.type == "transactions-close") {
      emit([doc.openid], -1);
    }
  },
  "reduce": "_sum"
}
So we emit a 1 for an open transaction under an ID and a -1 for a close under the same ID. Now when you reduce you will get a result for each ID of:
-1 = Closed with no record of an open (error condition).
0 = Opened and Closed
1 = Open and not yet closed.
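To read that result back, you would query the view with grouping enabled and keep only the ids whose sum is 1. A rough sketch against CouchDB's HTTP API; the database name transactions and design document tx/open_state are placeholders:
// group=true makes the _sum reduce run once per emitted key (i.e. per transaction id).
fetch("http://localhost:5984/transactions/_design/tx/_view/open_state?group=true")
  .then(function(response) { return response.json(); })
  .then(function(body) {
    var stillOpen = body.rows
      .filter(function(row) { return row.value === 1; }) // 1 = opened, never closed
      .map(function(row) { return row.key[0]; });        // the map emits [doc._id]
    console.log(stillOpen);
  });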
The problem is with the keys parameter in your reduce function. The reduce phase is not called once with all possible keys. It's called per distinct key, and based on the group_level you specify.
Looking at your code, if you haven't specified any group_level, your reduce function is going to get called for every document separately.
Because you're emitting the id of the open transaction doc for both open and close markers, if you grouped at the first level, you'd get open or open/close pairs. You're still only getting a reduction on a limited set of docs at a time.
You could fix this either in the logic calling the query, or by emitting a key that lets you reduce over the entire set at once. (I imagine there are other ways too; these are the ones that come to mind.)
If you use the key approach, you'd need to emit something that looked like ["transaction", doc._id, 0]. Then a first-level grouping would give you the whole transaction set, as your current code expects.
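For illustration, the map function with that key shape might look roughly like this (an untested sketch of the suggestion above):
function(doc) {
  if (doc.type == "transactions-open") {
    // The constant first component groups every transaction under one level-1 key.
    emit(["transaction", doc._id, 0], "OPEN");
  }
  if (doc.type == "transactions-close") {
    emit(["transaction", doc.openid, 1], "CLOSE");
  }
}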
EDIT (Adding information based on edit of question.)
The reduce function is going to get called with whatever grouping you set up. It's always going to return something, even if it's just no results emitted (i.e. null).
If you don't want to handle that in the logic that's running the queries and processing the results, you need to use an approach that will allow you to group all the transaction documents together, instead of just the documents for a single transaction.
Based on what you've done so far, another approach would be to forgo the reduce phase and just look at the number of results returned by a query that's limited to the unique doc id.
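That last approach might look like the sketch below: skip the reduce and count the rows emitted for a single transaction id. The database and view names are placeholders, and the keys assumed are the [id, 0] / [id, 1] pairs from the question's map:
// One row back means only the "open" emit exists for this id, i.e. it was never closed.
var id = "transaction-open-randomgeneratedstring";
var url = "http://localhost:5984/transactions/_design/tx/_view/open_state" +
  "?reduce=false" +
  "&startkey=" + encodeURIComponent(JSON.stringify([id])) +
  "&endkey=" + encodeURIComponent(JSON.stringify([id, {}]));

fetch(url)
  .then(function(response) { return response.json(); })
  .then(function(body) {
    console.log(body.rows.length === 1 ? "still open" : "closed");
  });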

Idempotency in MongoDB nested array, possible?

I am writing a REST api which I want to make idempotent. I am kind of struggling right now with nested arrays and idempotency. I want to update an item in product_notes array in one atomic operation. Is that possible in MongoDB? Or do I have to store arrays as objects instead (see my example at the end of this post)? Is it for example possible to mimic the upsert behaviour but for arrays?
{
  username: "test01",
  product_notes: [
    { product_id: ObjectID("123"), note: "My comment!" },
    { product_id: ObjectID("124"), note: "My other comment" }
  ]
}
If I want to update the note for an existing product_note I just use the update command and $set, but what if the product_id isn't in the array yet? Then I would like to do an upsert, but that (as far as I know) isn't supported by the embedded document/array operators.
One way to solve this, and make it idempotent, would be to just add a new collection product_notes to relate between product_id and username.
This feels like violating the purpose of document-based databases.
Another solution:
{
  username: "test01",
  product_notes: {
    "123": { product_id: ObjectID("123"), note: "My comment!" },
    "124": { product_id: ObjectID("124"), note: "My other comment" }
  }
}
Anyone a bit more experienced than me who have anything to share regarding this?
My understanding of your requirement is that you would like to store unique product ids (an array) for a user.
You could create a composite unique index on "username" and "product_notes.product_id". Then, when the same product id is inserted into the array, you would get an exception which you can catch and handle in the code, since you want the service to be idempotent.
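In the mongo shell that index might be created roughly like this (the collection name users is an assumption):
// Compound unique index over username and the embedded product_id, as described above.
db.users.createIndex(
  { username: 1, "product_notes.product_id": 1 },
  { unique: true }
);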
In terms of adding a new element to the array (i.e. product_notes), I have used Spring Data, in which you get the document by its primary key (i.e. the top-level attribute "_id"), add the new element to the array, and update the document.
In terms of updating an attribute in an existing array element (see the sketch below):
1. Again, get the document by its primary key (i.e. the top-level attribute "_id")
2. Find the correct product_id occurrence by iterating the array data
3. Replace the "[]" in product_notes.[].note with that array index
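A rough shell sketch of the last two steps, with the matched array index hard-coded for brevity (the users collection name is again an assumption):
// Suppose iterating product_notes found the matching product_id at index 1:
var index = 1;
var update = { $set: {} };
update.$set["product_notes." + index + ".note"] = "My updated comment";

db.users.updateOne({ username: "test01" }, update);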

Fastest way to add to and get values from a list of objects

I'm just getting started with JavaScript objects. I'm trying to store catalog inventory by id and locations. I think a single object would look something like this:
var item = {
  id: number,
  locations: ["location1", "location2"]
};
I've started to read a bit about it but am still trying to wrap my head around it. I'm not sure of the fastest way to add new items to the list with a location, or to add a new location to an existing item, all while checking for dupes. Performance of getting the locations later isn't as critical. This is part of a process that runs thousands of checks to eventually get items by id and location, so performance is key.
Final question: I'm not even sure whether it's possible to store this in local storage; from another similar question, it wasn't clear to me.
Using lodash, something like this should work to determine if an item id exists and append either a new item to the array, or just add a new location:
var item = [{
  id: 1,
  locations: ["location1", "location2"]
}, {
  id: 2,
  locations: ["location2", "location4"]
}];

function findItem(id) {
  return _.findIndex(item, function(chr) {
    return chr.id == id;
  });
}

function addItem(id, locations) {
  var position = findItem(id);
  if (position < 0) {
    item.push({
      id: id,
      locations: locations
    });
  } else {
    item[position].locations = _.uniq(item[position].locations.concat(locations));
  }
}

addItem(2, ['location292']);
addItem(3, ['location23']);
console.log(item);
What it basically does is search the array of objects (item) for the id we pass to the addItem() function; if it is found, the new locations array is merged into the existing item, and if not, a new object with the new id and locations is created.
You've asked a question that contains some tradeoffs:
The simplest and fastest way to retrieve a list of locations is to store them in an array.
The fastest way to check something for duplicates is not an array, but rather a map object that maintains an index of the key.
So, you'd have to decide which set of tradeoffs you want: do you want to optimize for the performance of adding a non-duplicate, or for the performance of retrieving the list of locations? Pick one or the other.
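For the duplicate-check side of that tradeoff, a keyed object makes the lookup constant-time. A small sketch, with a shape that is my own illustration rather than the asker's final structure:
// Items keyed by id; locations keyed by name, so duplicate checks are property lookups.
var itemsById = {};

function addItem(id, locations) {
  var entry = itemsById[id];
  if (!entry) {
    entry = itemsById[id] = { id: id, locations: {} };
  }
  locations.forEach(function(loc) {
    entry.locations[loc] = true; // adding the same location twice is a no-op
  });
}

addItem(2, ["location292"]);
addItem(2, ["location292"]); // duplicate, silently ignored
console.log(Object.keys(itemsById[2].locations)); // ["location292"]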
As for localStorage, you can store any string in localStorage, and you can simply convert plain objects (without circular references) to a string with JSON.stringify(), so yes, this type of structure can be stored in localStorage.
For example, if you want to use the array for optimized retrieval, then you can check for duplicates like this before adding:
function addLocation(item, newLocation) {
  if (item.locations.indexOf(newLocation) === -1) {
    item.locations.push(newLocation);
  }
}
Also, you can store an array of items in LocalStorage like this:
localStorage.setItem("someKey", JSON.stringify(arrayOfItems));
And, then some time later, you can retrieve it like this:
var arrayOfItems = JSON.parse(localStorage.getItem("someKey"));

MongoDB - Query conundrum - Document refs or subdocument

I've run into a bit of an issue with some data that I'm storing in my MongoDB (Note: I'm using mongoose as an ODM). I have two schemas:
mongoose.model('Buyer', {
  credit: Number,
})
and
mongoose.model('Item', {
  bid: Number,
  location: { type: [Number], index: '2d' }
})
Buyer/Item will have a parent/child association, with a one-to-many relationship. I know that I can set up Items to be embedded subdocs to the Buyer document or I can create two separate documents with object id references to each other.
The problem I am facing is that I need to query Items where the bid is lower than the Buyer's credit, but also where the location is near a certain geo coordinate.
To satisfy the first criterion, it seems I should embed Items as subdocs so that I can compare the two numbers. But in order to compare locations with a geoNear query, it seems it would be better to separate the documents; otherwise, I can't perform geoNear on each subdocument.
Is there any way that I can perform both tasks on this data? If so, how should I structure my data? If not, is there a way that I can perform one query and then a second query on the result from the first query?
Thanks for your help!
There is another option (besides embedding and normalizing) for storing hierarchies in MongoDB: storing them as tree structures. In this case you would store Buyers and Items in separate documents but in the same collection. Each Item document would need a field pointing to its Buyer (parent) document, and each Buyer document's parent field would be set to null. The docs I linked to explain several implementations you could choose from.
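In that layout the two document types might look roughly like this (the type and parent field names are illustrative, not part of the mongoose schemas above):
// Both document types live in one collection; "parent" encodes the hierarchy.
{ _id: ObjectID("b1"), type: "buyer", credit: 500, parent: null }
{ _id: ObjectID("i1"), type: "item", bid: 300, location: [40.7, -74.0], parent: ObjectID("b1") }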
If your items are stored in two separate collections then the best option will be to write your own function and call it using mongoose.connection.db.eval('some code...');. That way you can execute your advanced logic on the server side.
You can write something like this:
var allNearItems = db.Items.find({
  location: {
    $near: {
      $geometry: {
        type: "Point",
        coordinates: [ <longitude>, <latitude> ]
      },
      $maxDistance: 100
    }
  }
});

var res = [];
allNearItems.forEach(function(item) {
  var buyer = db.Buyers.find({ id: item.buyerId })[0];
  if (!buyer) return; // skip items whose buyer cannot be found
  if (item.bid < buyer.credit) {
    res.push(item.id);
  }
});
return res;
After evaluation (place it in a mongoose.connection.db.eval("...") call) you will get the array of item ids.
Use it with caution. If the allNearItems result is too large, or you query it very often, you can run into performance problems. The MongoDB team has actually deprecated direct JS code execution, but it is still available in the current stable release.

BackboneJS: Sorting collection of linked items

I have some trouble figuring out how to sort a BackboneJS collection containing linked items.
Is it at all possible to do efficiently? (I am thinking of making a comparator that returns the count of previous elements, but that is really inefficient.)
What should the comparator be? And is a doubly linked list required?
My items look like
[
  {
    id: 1,
    name: 'name',
    previousItem: 2
  },
  {
    id: 2,
    name: 'othername',
    previousItem: null
  }
]
Here is the basic code to build the collection. I'm assuming you are using a Backbone Model here. In the loop, you need to add your models to the front of the collection (unshift) since you only know the previous item.
The key here is knowing what the last item is though. If you don't know it, then this will not work.
var model = frontItem; // frontItem must be the last model in the ordered list
while (model != null) {
  collection.unshift(model);
  // Backbone models use get(); previousItem holds an id, so it still has to be
  // resolved back to a model (e.g. from a lookup of all fetched models by id).
  model = allModelsById[model.get('previousItem')];
}
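If the front item isn't known up front, one way to locate it is to pick the item whose id never appears as another item's previousItem. A plain-JS sketch over the raw objects from the question:
// The front (last) item is the one that no other item points back to.
function findFrontItem(items) {
  var referenced = {};
  items.forEach(function(item) {
    if (item.previousItem != null) {
      referenced[item.previousItem] = true;
    }
  });
  return items.filter(function(item) {
    return !referenced[item.id];
  })[0];
}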
There is a discussion about this on GitHub; you can also use a comparator. If you want to use a comparator you need Underscore:
var PhotoCollection = Backbone.Collection.extend({
  model: Photo,
  comparator: function(item) {
    return item.get('pid');
  }
});
