I have a hard time getting my data right for Firebase and need some guidance. I'm confident with relational databases and find that I'm just replicating that way of thinking in Firebase.
I'm creating a fitness app. The structure I have is as follows: Exercises, Workouts and Programs.
Exercises is quite simple, it's a name, description and a photo of a simple movement, like squats or benchpressing.
exercise
  e1
    name: "Squats"
    description: "..."
  e2
    name: "Benchpress"
    description: "..."
Then I have a list of workouts. A workout is a set of exercises performed in a specific order with specific reps and sets. One exercise could be used in many workouts with different parameters (reps, sets, rest).
workouts
  w1
    name: "Easy workout"
    description: "Design for beginners"
    exercises:
      we1:
        exercise: e1
        reps: 12
        rest: 60
        order: 1
      we2:
        exercise: e2
        reps: 6
        rest: 30
        order: 2
  w2
    name: "Hard exercise"
    ...
So here I have w1, a workout that uses exercises e1 and e2: first you perform 12 reps of exercise 1, then rest for 60 seconds, then perform 6 reps of exercise 2 and rest for 30 seconds.
Finally I have a program/plan. A program consists of multiple workouts that you can follow. For example, your program could consist of 3 workouts that you should do on Monday, Wednesday and Friday.
programs
  p1
    name: "The Stack Overflow Fitness Plan"
    description: "I am a fancy text"
    workouts:
      w1: true
      w2: true
So as stated above, I'm just putting relational data up in Firebase, which isn't the correct way. I just have trouble figuring out how I should manage this data in a flat way. Are duplicates, for example, ok?
At the moment I have a really expensive read (not best practice) since I query the program location, map it, query the workouts, map them, then query the exercises. It becomes quite cumbersome fast.
Any thoughts and help are greatly appreciated.
You could take a look at the normalizr library. It was originally designed to help when working with REST APIs and Redux: the same problem of duplication in a non-relational environment occurs there.
Basically, what I would do is store each different kind of entity as a map in Firebase.
const tree = {
  exercises: { 1: { name: ... }, ... },
  workouts: { 1: ... },
  programs: { 1: ... },
}
Then you can have each of your components subscribe to its corresponding entity. With promises and/or streams it's pretty straightforward.
Each of your relationships is simply an id in an array, and you can use the normalizr library to normalize / denormalize when you want to read / write in the db.
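To make that concrete, here is a minimal normalizr sketch; the entity names are mine, I'm assuming each nested object carries an id field, and I'm ignoring the per-workout reps/rest/order for brevity, so treat it as an illustration rather than a drop-in:

import { normalize, schema } from 'normalizr';

// One schema per kind of entity, nested the way your API / UI sees the data.
const exercise = new schema.Entity('exercises');
const workout = new schema.Entity('workouts', { exercises: [exercise] });
const program = new schema.Entity('programs', { workouts: [workout] });

// `rawProgram` would be a fully nested program, the shape your UI wants to consume.
const { result, entities } = normalize(rawProgram, program);
// entities.exercises / entities.workouts / entities.programs are now flat maps keyed
// by id, ready to be written under the matching locations in Firebase.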
Note that with this pattern you can easily use the Firebase update API to update several entities at once using a query such as:
const updateQuery = {
  "workouts/id": ...,
  "exercises/id": ...,
  "programs/id": ...,
}
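As a hedged illustration of such a multi-path update with the Realtime Database web SDK (the concrete paths and values below are just examples, not from the original post):

// One update() call writes all of these paths atomically.
const updates = {
  "programs/p1/workouts/w2": true,
  "workouts/w2/name": "Hard workout",
  "exercises/e2/description": "Updated description",
};

firebase.database().ref().update(updates);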
I'm working on a POC to pull data from various liquidity pools (paired tokens, i.e. WEI/USDT) from various exchanges.
In trying to create something like the DAI chart seen here:
I am trying to come up with a query and data model in JavaScript to contain this data.
The given would be "DAI". First, get Uniswap results with DAI pools (any pool pairs containing "DAI"). Then get a list of results from SushiSwap matching "WETH". Since both sources will likely not have all matching pools, with these two lists in memory, create a list of all items that match, i.e. USDT/WETH (matched in green in the image above).
I initially was going to create an associative array with a list of tokens to match:
poolList["Uniswap"] = { collection of pool objects }
poolList["Sushiswap"] = { collection of pool objects }
Where the collection data would look something like:
{
  "data": {
    "pools": [
      {
        "token0": {
          "id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
          "name": "Wrapped Ether",
          "symbol": "WETH"
        },
        "token1": {
          "id": "0xd1063ee5ec2891991a29fefb52bcc448cd386844",
          "name": "BanDogge Mastiff",
          "symbol": "DOGGE"
        }
      },
      {
        "token0": {
          "id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
          "name": "Wrapped Ether",
          "symbol": "WETH"
        },
How would one store the data from various exchanges so that either a filtered list of common pairs exists, or some sort of 2D array can be created reflecting how the chart above appears?
I went about looking at this at the ground level: What kind of data problem is this?
In addressing this, I was able to work through a suggested response to my question, "What can be used to identify the source collection for common elements from n-number of collections?", and came up with a solution that renders the data in a workable format:
The table does not exactly reflect the initial question; however, (index) represents the pool name or token pair, i.e. USDT/ETH. Subsequent columns represent exchanges and their properties that contain these token pairs or liquidity pools.
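To illustrate the idea, here is a rough sketch of that grouping in plain JavaScript; the exchange names and pool shape come from the question, but the helper itself is mine rather than the linked solution:

// Group pools from every exchange by their "token0/token1" pair name.
const bySymbolPair = new Map();

function addPools(exchange, pools) {
  for (const pool of pools) {
    const pair = `${pool.token0.symbol}/${pool.token1.symbol}`;
    if (!bySymbolPair.has(pair)) bySymbolPair.set(pair, {});
    bySymbolPair.get(pair)[exchange] = pool;
  }
}

// Assuming uniswapData / sushiswapData hold responses shaped like the JSON above.
addPools("Uniswap", uniswapData.data.pools);
addPools("Sushiswap", sushiswapData.data.pools);

// Keep only the pairs that exist on every exchange we loaded.
const common = Object.fromEntries(
  [...bySymbolPair.entries()].filter(([, perExchange]) =>
    ["Uniswap", "Sushiswap"].every((ex) => ex in perExchange)
  )
);

console.table(common); // (index) = token pair, columns = exchanges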
To maintain the context of the original question and to assist others, the full code solution can be found here.
Hi everyone, I am having an issue with Code by Zapier (JavaScript)... (I know, I know).
The issue I am having is I am bringing in my data from airtable. The data comes in as three distinct arrays. They are:
specCategory[]
specName[]
specDescription[]
I iterate over the arrays and split them on the commas. I then form the values at each index across the arrays into their own object, and push that object into an array.
My end goal is to push a JSON payload into PDFMonkey in the form:
{
  "payload": [
    {
      specName: "specName data",
      specCategory: "specCategory data",
      specDescription: "specDescription data"
    },
    {
      specName: "specName data",
      specCategory: "specCategory data",
      specDescription: "specDescription data"
    },
    {
      specName: "specName data",
      specCategory: "specCategory data",
      specDescription: "specDescription data"
    }
  ]
}
Zapier seems to return the correct payload for me, that is, an array of objects. However, when I go to access the data in a subsequent step, the data is split back into three distinct arrays again.
Here is what the output from the zapier code looks like.
specArray
  1
    specCategory: Kitchen Appliances
    specName: Gas Hob
    specDescription: Westinghouse WHG 643 SA Gas Hob
  2
    specCategory: Kitchen Appliances
    specName: Range Hood
    specDescription: Westinghouse WRR 614 SA Range Hood
  3
    specCategory: Kitchen Appliances
    specName: Oven
    specDescription: Westinghouse WVE 613 S Oven
  4
    specCategory: Doors and Windows (Internal)
    specName: Architraves
    specDescription: 42X12min Splayed Profile F/Joint Pine painted gloss
  5
    specCategory: External Stairs
    specName: External Stair Balustrade
    specDescription: Painted pre-primed ladies waist handrail with slats and bottom rails (not included if stair is under 1m in height)
Instead, when accessing it in subsequent steps, I receive three distinct arrays like:
specArraySpecName: specName[1],specName[2],specName[...],
specArraySpecCategory: specCategory[1],specCategory[2],specCategory[...],
specArraySpecDescription: specDescription[1],specDescription[2],specDescription[...]
Here is my code so you can have a look and see if what I am doing is wrong. When I try to output just the array of objects (instead of first wrapping the array in an object), it outputs the single value of each object, but the problem is that makes Zapier loop the subsequent steps, using each object as an input.
Is there a way to flatten or stringify the JSON object I am trying to create?
My code below for reference:
// this is wrapped in an async function
// you can use await throughout the function
let categories = inputData.specCategories.split(/\s*,\s*/);
let names = inputData.specName.split(/\s*,\s*/);
let descriptions = inputData.specDescriptions.split(/\s*,\s*/);

let specArray = [];

// for loop to push each of the discrete items into the array as an object
for (let i = 0; i < categories.length; i++) {
  let spec = {
    specCategory: categories[i],
    specName: names[i],
    specDescription: descriptions[i]
  };
  specArray.push(spec);
}

output = { specArray };
I apologise in advance for the formatting, but Stack Overflow would not let me post code blocks due to some not properly formatted code (tried Ctrl + K, 4 spaces, triple backticks, etc.) and I could not figure it out.
Thanks for your help!
Great question! It's worth mentioning that this is Zapier "working as expected"... for use cases that don't match yours. This behavior supports line items, a common structure in accounting and e-commerce. But that doesn't help you; you want Zapier to stop messing with your nicely structured values.
The best way to handle this is probably to stringify the whole JSON. Zapier only mangles arrays, so it'll leave a JSON string unharmed.
That would be something like this in the code:
// ...
output = { result: JSON.stringify(specArray) };
With any luck, you'll be able to use that payload in a body field for PDFMonkey and it'll process correctly!
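If you later need the objects again in another Code step, here is a small sketch of the reverse direction (the key names below are only examples):

// In a later Code by Zapier step, `inputData.result` is the string produced above.
const specArray = JSON.parse(inputData.result);

// Now it's a normal array of objects again, e.g. to build the PDFMonkey payload:
output = { body: JSON.stringify({ payload: specArray }) };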
Either way, that's how I share JSON between Zapier steps to keep it from being mangled.
I am trying to check if a field exists in a sub-document of an array, and if it does, it should only provide those documents in the callback. But every time I log the callback document, it gives me all the values in my array instead of only the ones matching the query.
I am following this tutorial
The only difference is that I am using the findOne function instead of the find function, but it still gives me back all the values. I tried using find and it does the same thing.
I am also using the same collection style as the example in the link above.
Example
In the image above you can see I have a document with a uid field and a contacts array. What I am trying to do is first select a document based on the inputted uid. Then, after selecting that document, I want to display the values from the contacts array where the contacts.uid field exists. So from the image above, the only values that would be displayed are contacts[0] and contacts[3], because contacts[1] doesn't have a uid field.
Contact.contactModel.findOne({$and: [
  {uid: self.uid},
  {contacts: {
    $elemMatch: {
      uid: {
        $exists: true,
        $ne: undefined,
      }
    }
  }}
]})
Your problems come from a misconception about data modeling in MongoDB, which is not uncommon for developers coming from other DBMSs. Let me illustrate this with the example of how data modeling works with an RDBMS vs MongoDB (and a lot of the other NoSQL databases as well).
With an RDBMS, you identify your entities and their properties. Next, you identify the relations, normalize the data model and bang your head against the wall for a while to get the UPPER LEFT ABOVE AND BEYOND JOIN™ that will answer the questions arising from use case A. Then you pretty much do the same for use case B.
With MongoDB, you would turn this upside down. Looking at your use cases, you would try to find out what information you need to answer the questions arising from the use case and then model your data so that those questions can get answered in the most efficient way.
Let us stick with your example of a contacts database. A few assumptions to be made here:
Each user can have an arbitrary number of contacts.
Each contact and each user need to be uniquely identified by something other than a name, because names can change and whatnot.
Redundancy is not a bad thing.
With the first assumption, embedding contacts into a user document is out of the question, since there is a document size limit. Regarding our second assumption: the uid field becomes not redundant, but simply useless, as there already is the _id field uniquely identifying the data set in question.
The use cases
Let us look at some use cases, which are simplified for the sake of the example, but it will give you the picture.
Given a user, I want to find a single contact.
Given a user, I want to find all of his contacts.
Given a user, I want to find the details of his contact "John Doe"
Given a contact, I want to edit it.
Given a contact, I want to delete it.
The data models
User
{
  "_id": new ObjectId(),
  "name": new String(),
  "whatever": {}
}

Contact

{
  "_id": new ObjectId(),
  "contactOf": ObjectId(),
  "name": new String(),
  "phone": new String()
}
Obviously, contactOf refers to an ObjectId which must exist in the User collection.
The implementations
Given a user, I want to find a single contact.
If I have the user object, I have its _id, and the query for a single contact becomes as easy as
db.contacts.findOne({"contactOf":self._id})
Given a user, I want to find all of his contacts.
Equally easy:
db.contacts.find({"contactOf":self._id})
Given a user, I want to find the details of his contact "John Doe"
db.contacts.find({"contactOf":self._id,"name":"John Doe"})
Now that we have the contact one way or the other, including his/her/undecided/chose not to say _id, we can easily edit/delete it:
Given a contact, I want to edit it.
db.contacts.update({"_id":contact._id},{$set:{"name":"John F Doe"}})
I trust that by now you get an idea on how to delete John from the contacts of our user.
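Since the question uses Mongoose, here is a minimal sketch of the same model and queries there; the model names, field types and the user variable are my assumptions for illustration, not part of the original code:

const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  name: String,
  // whatever else you need
});

const contactSchema = new mongoose.Schema({
  contactOf: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  name: String,
  phone: String,
});

const User = mongoose.model('User', userSchema);
const Contact = mongoose.model('Contact', contactSchema);

// (inside an async function)
// Given a user, find all of his contacts:
const contacts = await Contact.find({ contactOf: user._id });

// Given a user, find the details of his contact "John Doe":
const john = await Contact.findOne({ contactOf: user._id, name: 'John Doe' });

// Given a contact, edit it:
await Contact.updateOne({ _id: john._id }, { $set: { name: 'John F Doe' } });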
Notes
Indices
With your data model, you would have needed to add additional indices for the uid fields, which serves no purpose, as we found out. Furthermore, _id is indexed by default, so we make good use of this index. An additional index should be created on the contact collection, however:
db.contact.ensureIndex({"contactOf":1,"name":1})
Normalization
Not done here at all. The reasons for this are manifold, but the most important is that while John Doe might only have the mobile number of "Mallory H Ousefriend", his wife Jane Doe might also have the email address "janes_naughty_boy#censored.com" - which at least Mallory surely would not want to pop up in John's contact list. So even if we had the identity of a contact, you most likely would not want to reflect that.
Conclusion
With a little bit of data remodeling, we reduced the number of additional indices we need to 1, made the queries much simpler and circumvented the BSON document size limit. As for the performance, I guess we are talking about at least one order of magnitude.
In the tutorial you mentioned above, they pass 2 parameters to the method, one for the filter and one for the projection, but you passed just one; that's the difference. You can change your query to be like this:
Contact.contactModel.findOne(
  {uid: self.uid},
  {contacts: {
    $elemMatch: {
      uid: {
        $exists: true,
        $ne: undefined,
      }
    }
  }}
)
The agg framework makes filtering for the existence of a field a little tricky. I believe the OP wants all docs where a field exists in an array of subdocs, and then to return ONLY those subdocs where the field exists. The following should do the trick:
var inputtedUID = "0"; // doesn't matter

var c = db.foo.aggregate([
    // This $match finds the docs with our input UID:
    {$match: {"uid": inputtedUID }},

    // ... and the $addFields/$filter will strip out those entries in contacts where
    // contacts.uid does NOT exist. We wish we could use {cond: {"$$zz.uid": {$exists: true}}},
    // but we cannot use $exists here, so we need the convoluted $ifNull treatment. Note we
    // overwrite the original contacts with the filtered contacts:
    {$addFields: {contacts: {$filter: {
        input: "$contacts",
        as: "zz",
        cond: {$ne: [ {$ifNull: ["$$zz.uid", null]}, null ]}
    }}}},

    {$limit: 1} // just get 1 like findOne()
]);

c.forEach(printjson);
{
  "_id" : 0,
  "uid" : 0,
  "contacts" : [
    {
      "uid" : "buzz",
      "n" : 1
    },
    {
      "uid" : "dave",
      "n" : 2
    }
  ]
}
I've run into a bit of an issue with some data that I'm storing in my MongoDB (Note: I'm using mongoose as an ODM). I have two schemas:
mongoose.model('Buyer', {
  credit: Number,
})

and

mongoose.model('Item', {
  bid: Number,
  location: { type: [Number], index: '2d' }
})
Buyer/Item will have a parent/child association, with a one-to-many relationship. I know that I can set up Items to be embedded subdocs to the Buyer document or I can create two separate documents with object id references to each other.
The problem I am facing is that I need to query Items where the bid is lower than the Buyer's credit, but also where the location is near a certain geo coordinate.
To satisfy the first criterion, it seems I should embed Items as subdocs so that I can compare the two numbers. But in order to compare locations with a geoNear query, it seems it would be better to separate the documents; otherwise, I can't perform geoNear on each subdocument.
Is there any way that I can perform both tasks on this data? If so, how should I structure my data? If not, is there a way that I can perform one query and then a second query on the result from the first query?
Thanks for your help!
There is another option (besides embedding and normalizing) for storing hierarchies in MongoDB: storing them as tree structures. In this case you would store Buyers and Items in separate documents but in the same collection. Each Item document would need a field pointing to its Buyer (parent) document, and each Buyer document's parent field would be set to null. The docs I linked to explain several implementations you could choose from.
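A minimal shell sketch of that parent-reference layout (the collection name, the type field and the buyerId/itemId variables are assumptions for illustration only):

// Both kinds of documents live in the same collection; Items point at their Buyer.
db.parties.insertOne({ _id: buyerId, type: "buyer", credit: 1000, parent: null });
db.parties.insertOne({
  _id: itemId,
  type: "item",
  bid: 700,
  location: [151.20, -33.86],
  parent: buyerId   // reference to the Buyer (parent) document
});

// All items that belong to a given buyer:
db.parties.find({ type: "item", parent: buyerId });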
If your items are stored in two separate collections, then the best option will be to write your own function and call it using mongoose.connection.db.eval('some code...');. In such a case you can execute your advanced logic on the server side.
You can write something like this:
var allNearItems = db.Items.find({
  location: {
    $near: {
      $geometry: {
        type: "Point",
        coordinates: [ <longitude>, <latitude> ]
      },
      $maxDistance: 100
    }
  }
});
var res = [];
allNearItems.forEach(function (item) {
  var buyer = db.Buyers.find({ id: item.buyerId })[0];
  if (!buyer) return; // skip items without a matching buyer
  if (item.bid < buyer.credit) {
    res.push(item.id);
  }
});
return res;
After evaluation (place it in a mongoose.connection.db.eval("...") call) you will get the array of item ids.
Use it with caution. If your allNearItems array is too large, or you query it very often, you may face performance problems. The MongoDB team has actually deprecated direct js code execution, but it is still available in the current stable release.
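If you would rather avoid db.eval entirely, here is a rough application-side sketch of the same two-step logic; the Item/Buyer models and the buyerId field follow the question and the snippet above, the rest is assumption:

// Inside an async function, using the Mongoose models from the question.
// With a legacy '2d' index, $near takes a [longitude, latitude] pair.
const items = await Item.find({
  location: { $near: [longitude, latitude], $maxDistance: 100 },
});

const buyerIds = [...new Set(items.map((i) => String(i.buyerId)))];
const buyers = await Buyer.find({ _id: { $in: buyerIds } });
const creditById = new Map(buyers.map((b) => [String(b._id), b.credit]));

// Keep only items whose bid is below their buyer's credit.
const res = items
  .filter((i) => creditById.has(String(i.buyerId)) && i.bid < creditById.get(String(i.buyerId)))
  .map((i) => i._id);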
Well, I am struggling with an aggregation problem. I thought the easiest way to solve it would be to use map reduce, or to make separate find queries and then loop through the results with the help of the async library.
The schema is here:
db.keyword
  keyword: String
  start: Date
  source: String (only one of 'google', 'yahoo', 'bing', 'duckduckgo')
  job: ref db.job
  results: [
    {
      title: String
      url: String
      position: Number
    }
  ]
db.job
  name: String
  keywords: [ String ]
  urls: [ String ]
  sources: [ String ('google', 'yahoo', 'bing', 'duckduckgo') ]
Now I need to get the data into this form:
data = {
  categories: [ 'keyword1', 'keyword2', 'keyword3' ],
  series: [
    {
      name: 'google',
      data: [33, 43, 22]
    },
    {
      name: 'yahoo',
      data: [12, 5, 3]
    }
  ]
}
Well, the biggest problem is that the series[0].data array requires a really difficult find: matching db.job.urls against db.keyword.results.url and then getting the position.
Is there any way to simplify the query? I have looked through many of the map reduce examples, but I can't figure out what data to map and what to reduce.
It looks as though you are trying to combine data from two separate collections (keyword and job).
Map Reduce as well as the new Aggregation Framework can only operate on a single collection at a time.
Your best bet is probably to query each collection separately and programmatically combine the results, saving them in whichever form is best suited to your application.
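For example, here is a rough sketch of that programmatic combination; the Job/Keyword model names and the exact matching rule are assumptions based only on the schemas above, so adjust to your actual setup:

// Inside an async function, using Mongoose-style models for db.job and db.keyword.
const job = await Job.findOne({ name: jobName });
const keywords = await Keyword.find({ job: job._id });

const data = {
  categories: job.keywords,
  series: job.sources.map((source) => ({
    name: source,
    data: job.keywords.map((kw) => {
      const doc = keywords.find((k) => k.keyword === kw && k.source === source);
      if (!doc) return null;
      // position of the first result whose url is one of the job's urls:
      const hit = doc.results.find((r) => job.urls.includes(r.url));
      return hit ? hit.position : null;
    }),
  })),
};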
If you would like to experiment with Map Reduce, here is a link to a blog post written by a user who used an incremental Map Reduce operation to combine values from two collections.
http://tebros.com/2011/07/using-mongodb-mapreduce-to-join-2-collections/
For more information on using Map Reduce with MongoDB, please see the Mongo Documentation:
http://www.mongodb.org/display/DOCS/MapReduce
(The section on incremental Map Reduce is here: http://www.mongodb.org/display/DOCS/MapReduce#MapReduce-IncrementalMapreduce)
There are some additional Map Reduce examples in the MongoDB Cookbook:
http://cookbook.mongodb.org/
For a step-by-step walkthrough of how a Map Reduce operation is run, please see the "Extras" section of the MongoDB Cookbook recipe "Finding Max And Min Values with Versioned Documents" http://cookbook.mongodb.org/patterns/finding_max_and_min/
Hopefully the above will give you some ideas for how to achieve your desired results. As I mentioned, I believe that the most straightforward solution is simply to combine the results programmatically. However, if you are successful writing a Map Reduce operation that does this, please post your solution, so that the Community may gain the benefit of your experience.