How to load multiple collections from Firestore that have related fields?

Let's say I have 2 models:
Post
Author
The Author looks like below:
{
  authorId: 'ghr4334t',
  fullName: 'Avola Smith',
  nickname: 'Avola'
}
The Post looks like below, with a reference to the author:
{
  postId: '12fdc24',
  authorId: 'ghr4334t',
  content: 'This is a post!'
}
Currently, when a user clicks on a post, in order to show all the relevant information, I load the data as follows:
getPost(postId).then(post => {
  getAuthor(post.authorId).then(authorDoc => {
    // update state so I have the post object and author object.
  });
});
So in the above, I load the post, then I load the author. Once I've loaded them both, I can finally construct a custom object:
const finalPost = {
  author: { ...this.state.authorData },
  post: { ...this.state.postData }
}
Naturally, if I have a couple more fields that reference other collections, there will be a nest of get and .then() calls like below:
getPost(postId).then(post => {
  getAuthor(post.authorId).then(authorDoc => {
    getSomethingElse(post.authorId).then(somethingDoc => {
      getAnother(post.authorId).then(anotherDoc => {
        // finally update state with everything.
      });
    });
  });
});
Is there a better way to load related information together without having to stack .then() calls?

Unfortunately, there isn't a better way to achieve what you want with queries directly. Firestore queries don't give you many options for how to query and return data, particularly when you need any kind of JOIN across collections to search via references, which makes this work not very easy. I believe the way you are doing it is the best option you have for now.
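That said, since the extra reads are independent of each other once the post document is loaded, you can at least flatten the nesting with Promise.all and async/await. A minimal sketch, assuming the same getPost/getAuthor/getSomethingElse/getAnother helpers from the question, each returning a Promise:
async function loadPostWithRelations(postId) {
  const post = await getPost(postId);
  // These reads only depend on fields of `post`, so they can run in parallel:
  const [author, somethingElse, another] = await Promise.all([
    getAuthor(post.authorId),
    getSomethingElse(post.authorId),
    getAnother(post.authorId),
  ]);
  return { post, author, somethingElse, another };
}
This performs the same number of reads; it only tidies the control flow.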
An alternative you can try is to use subcollections, where you would have a subcollection Author inside your collection Post. This way you only deal with the reference to the Post, since the Author lives within each specific Post's document, and the queries become simpler, like the one below. Of course, this would require you to modify your database.
var authorRef = db.collection('Post').doc('Post1')
    .collection('Author').doc('Author1');
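And to read it back, a hedged sketch using the same placeholder document ids as above:
// Fetch the post and its author subdocument in parallel (two reads, no join).
var postRef = db.collection('Post').doc('Post1');
Promise.all([postRef.get(), postRef.collection('Author').doc('Author1').get()])
  .then(([postSnap, authorSnap]) => {
    console.log(postSnap.data(), authorSnap.data());
  });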
In case you still think this is not enough, I would recommend you raise a Feature Request with Google, where the Google developers will be able to check whether a new way of getting data can be implemented.
Let me know if the information clarified your doubts!

Related

How to get and set a ref for a newly cached related object in Apollo client InMemoryCache?

I have a set of related items like so:
book {
  id
  ...
  related_entity {
    id
    ...
  }
}
which Apollo caches as two separate cache objects, where the related_entity field on book is a ref to an EntityNode object. This is fine: the related entity data is also used elsewhere outside the context of a book, so having it separate works, and everything updates as expected... except in the case where the related entity does not exist on the initial fetch (and thus the ref on the book object is null) and I create one later on.
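(For context, a purely illustrative sketch of what the normalized cache holds here; the ids are made up:)
// Illustrative only: roughly what Apollo's normalized cache looks like.
const cacheSnapshot = {
  'BookNode:1': {
    __typename: 'BookNode',
    id: '1',
    related_entity: { __ref: 'EntityNode:7' } // null in the problem case above
  },
  'EntityNode:7': { __typename: 'EntityNode', id: '7' }
};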
I've tried adding an update function to the useMutation hook that creates the aforementioned related_entity per their documentation: https://www.apollographql.com/docs/react/caching/cache-interaction/#example-adding-an-item-to-a-list like this:
const [mutateEntity, _i] = useMutation(CREATE_OR_UPDATE_ENTITY, {
  update(cache, { data }) {
    cache.modify({
      id: `BookNode:${bookId}`,
      fields: {
        relatedEntity(_i) {
          const newEntityRef = cache.writeFragment({
            fragment: gql`
              fragment NewEntity on EntityNode {
                id
                ...someOtherAttr
              }`,
            data: data.entityData
          });
          return newEntityRef;
        }
      }
    });
  }
});
but no matter what I seem to try, newEntityRef is always undefined, even though the new EntityNode is definitely in the cache and can be read just fine using the exact same fragment. I could give up and just force a refetch of the Book object, but the data is already right there.
Am I doing something wrong/is there a better way?
Barring that is there another way to get a ref for a cached object given you have its identifier?
It looks like this is actually an issue with apollo-cache-persist - I removed it and the code above functions as expected per the docs. It also looks like I could instead update to the new version under a different package name apollo3-cache-persist, but I ended up not needing cache persistence anyway.
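As an aside, on the last sub-question (getting a ref for a cached object given its identifier): modifier functions in cache.modify receive a toReference helper, so you can build the ref directly instead of re-writing a fragment. A hedged sketch, where entityId is an illustrative variable holding the EntityNode's id:
cache.modify({
  id: `BookNode:${bookId}`,
  fields: {
    relatedEntity(_existing, { toReference }) {
      // Returns a ref like { __ref: 'EntityNode:<id>' }; the entity must
      // already exist in the cache for the ref to resolve to data.
      return toReference({ __typename: 'EntityNode', id: entityId });
    }
  }
});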

Using Apollo's writeFragment to update nested list

I am working on an application in which a ship can be configured using rudders and other parts. The database structure is somewhat nested, and so far I have been keeping my GraphQL queries in correspondence with the database.
That means: I could fetch a ship using some query ship(projectId, shipId), but instead I am using a nested query:
query {
  project(id: 1) {
    id
    title
    ship(id: 1) {
      id
      name
      rudders {
        id
        position
      }
    }
  }
}
Such a structure of course leads to a lot of nested arrays. For example, if I have just added a new rudder, I would have to retrieve the data using cache.readQuery, which gives me the project object rather than the rudder list. To then add the rudder to the cache, I'd end up with a long line of nested, destructured objects, making the code hard to read.
So I thought of using GraphQL fragments. On the internet, I see them being used a lot to prevent having to re-type several fields on extensive objects (which I personally find very useful as well!). However, there are not so many examples where a fragment is used for an array.
Fragments for arrays could save all the object destructuring when appending some data to an array that is nested in some cached query. Using Apollo's readFragment and writeFragment, I managed to get something working.
The fragment:
export const FRAGMENT_RUDDER_ARRAY = gql`
  fragment rudderArray on ShipObject {
    rudders {
      id
      position
    }
  }
`
Used in the main ship query:
query {
  project(id: ...) {
    id
    title
    ship(id: ...) {
      id
      name
      ...rudderArray
    }
  }
}
${FRAGMENT_RUDDER_ARRAY}
Using this, I can write a much clearer update() function to update Apollo's cache after a mutation. See below:
const [createRudder] = useMutation(CREATE_RUDDER_MUTATION, {
  onError: (error) => { console.log(JSON.stringify(error)) },
  update(cache, { data: { createRudder } }) {
    const { rudders } = cache.readFragment({
      id: `ShipObject:${shipId}`,
      fragment: FRAGMENT_RUDDER_ARRAY,
      fragmentName: 'rudderArray'
    });
    cache.writeFragment({
      id: `ShipObject:${shipId}`,
      fragment: FRAGMENT_RUDDER_ARRAY,
      fragmentName: 'rudderArray',
      data: { rudders: rudders.concat(createRudder.rudder) }
    });
  }
});
Now, what is my question? Well, since I almost never see fragments being used this way: it works well for me, but I am wondering if there are any drawbacks to this.
On the other hand, I also decided to share this because I could not find any examples. So if this is a good idea, feel free to use the pattern!
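A related note: Apollo Client 3.5+ also exposes cache.updateFragment, which folds the readFragment/writeFragment pair above into a single call. A sketch under that assumption, using the same fragment and ids:
cache.updateFragment(
  {
    id: `ShipObject:${shipId}`,
    fragment: FRAGMENT_RUDDER_ARRAY,
    fragmentName: 'rudderArray'
  },
  // Receives the current fragment data (or null) and returns the new data.
  (data) => ({ rudders: [...(data?.rudders ?? []), createRudder.rudder] })
);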

Deleting data from associated tables using knex.js

I want to delete from an articles table using knex by article_id. This already exists in a comments table as a foreign key.
How can I test that data has been deleted, and how can I send that to the user?
I decided to approach this by writing a function that deletes from both tables with a .then. Does this look like I am on the right lines?
exports.deleteArticleById = function (req, res, next) {
  const { article_id } = req.params;
  return connection('comments')
    .where('comments.article_id', article_id)
    .del()
    .returning('*')
    .then((deleted) => {
      console.log(deleted);
      return connection('articles')
        .where('articles.article_id', article_id)
        .del()
        .returning('*');
    })
    .then((article) => {
      console.log(article);
      return res.status(204).send('article deleted');
    })
    .catch(err => next(err));
};
At the moment I am getting the correct data in the logs, but I am getting a 500 status when I think I should be getting a 204.
Any help would be much appreciated.
What you're trying to do is called a cascading deletion.
These are better (and almost always) handled at the database level instead of the application level.
It's the job of the DBMS to enforce this kind of referential integrity, assuming you define your schema correctly so that entities are properly linked together via foreign keys.
In short, you should define your database schema such that when you delete an Article, its associated Comments also get deleted for you.
Here's how I would do it using knex.js migrations:
// Define Article.
db.schema.createTableIfNotExists('article', t => {
  t.increments('article_id').primary()
  t.text('content')
})

// Define Comment.
// Each Comment is associated with an Article (1 - many).
db.schema.createTableIfNotExists('comment', t => {
  t.increments('comment_id').primary() // Add an autoincrement primary key (PK).
  t.integer('article_id').unsigned()   // Add a foreign key (FK)...
    .references('article.article_id')  // ...which references Article PK.
    .onUpdate('CASCADE')               // If Article PK is changed, update FK as well.
    .onDelete('CASCADE')               // If Article is deleted, delete Comment as well.
  t.text('content')
})
So when you run this to delete an Article:
await db('article').where({ article_id: 1 }).del()
All Comments associated with that Article also get deleted, automatically.
Don't try to perform cascading deletions yourself by writing application code. The DBMS is specifically designed with intricate mechanisms to ensure that deletions always happen in a consistent manner; its purpose is to handle these operations for you. It would be wasteful, complicated and quite error-prone to attempt to replicate this functionality yourself.
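With the CASCADE in place, the route handler only needs to delete the article itself. A minimal sketch, assuming the same connection and route shape as in the question:
exports.deleteArticleById = function (req, res, next) {
  const { article_id } = req.params;
  return connection('articles')
    .where({ article_id })
    .del() // resolves to the number of deleted rows
    .then((rowCount) => {
      if (rowCount === 0) return res.status(404).send({ msg: 'article not found' });
      // 204 means "No Content", so send no body with it.
      return res.sendStatus(204);
    })
    .catch(next);
};
This also explains the 204 confusion in the question: a 204 response must not carry a body.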

angular angularfire2 simple join

I'm trying to create a post-comment relationship where a user can write a post and other users can comment on it.
I can show the posts, but when I try to do the join to display the comments that belong to a post, I can't.
Below is my db schema.
I was thinking that first I need to get the key from the posts node, then move to comments and somehow get the comments of each post, and use that in an *ngFor inside the *ngFor of the posts?
I was trying something like:
findAllComments() {
  this.db.list('posts', { preserveSnapshot: true })
    .subscribe(snapshots => {
      snapshots.forEach(snapshot => {
        return this.db.list(`comments/${snapshot.key}`)
      });
    });
}
but this returns void, of course. When I console.log:
findAllComments() {
  this.db.list('/posts', { preserveSnapshot: true })
    .subscribe(snapshots => {
      snapshots.forEach(snapshot => {
        const kapa = this.db.list(`comments/${snapshot.key}`).do(console.log)
        kapa.subscribe();
      });
    });
}
I get the comment data logged in the console.
I'm not sure if my thinking on this is right. I'm confused because I am new to Angular and Firebase.
You aren't returning a subset of posts (you're querying on all posts) so there's no need to have a join of any sort here. You can just query for all comments:
findAllComments() {
  // {preserveSnapshot: true} is deprecated
  return this.db.list('/comments').snapshotChanges();
}
Assuming you actually want to retrieve a subset of comments (not what your example depicts), you could do something like this:
this.replies = db.list('AngularFire/joins/messages').snapshotChanges().map(snapshots => {
  console.log('snapshots', snapshots);
  return snapshots.map(ss => {
    return db.list(`AngularFire/joins/replies/${ss.key}`).valueChanges();
  });
});
There is a complete working example of the latter here.
I guess in the first part you are not subscribing to the comments list. As there is no subscription to the comments, the request to get the list of comments from Firebase will not be fired, and hence you don't see any comments.
In the second part, as you are subscribing to the comments list, you are seeing them.
In cases like these, where you want to fetch something based on a previous request, you could use the switchMap/concatMap/mergeMap operators, as sketched below. Hope this helps.
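A rough sketch of that idea with switchMap plus combineLatest; the paths are from the question, and it assumes the same snapshotChanges/valueChanges API used above:
import { combineLatest } from 'rxjs';
import { switchMap } from 'rxjs/operators';

findAllComments() {
  return this.db.list('/posts').snapshotChanges().pipe(
    // Each time the posts change, switch to an observable combining the
    // comment list of every post (an array of arrays of comments).
    switchMap(posts =>
      combineLatest(posts.map(p => this.db.list(`comments/${p.key}`).valueChanges()))
    )
  );
}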

Pushing new item to a mongoDB document array

I've looked through a bunch of other SO posts and have found different ways to do this, so I'm wondering which is most preferred. I'm teaching this to students, so I want to give them best practices.
If I have the following BlogPost object (Simplified):
var BlogPostSchema = new mongoose.Schema({
  body: String,
  comments: [String]
});
and I want to add a new comment to the array of comments for this blog, I can think of at least 3 main ways to accomplish this:
1) Push the comment to the blog object in Angular and submit a PUT request to the /blogs/:blogID endpoint, updating the whole blog object with the new comment included.
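(For symmetry with the code below, a hedged sketch of what option 1's endpoint might look like on the server; the Express route and handler are illustrative:)
app.put('/blogs/:blogID', function (req, res) {
  // Replace the whole blog document with the body sent by the client.
  BlogPost.findByIdAndUpdate(req.params.blogID, req.body, { new: true },
    function (err, blogPost) {
      if (err) return res.status(500).send(err);
      res.send(blogPost);
    });
});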
2) Submit a POST request to a /blogs/:blogID/comments endpoint where the request body is just the new comment, find the blog, push the comment to the array in vanilla js, and save it:
BlogPost.findById(req.params.blogID, function (err, blogPost) {
  blogPost.comments.push(req.body);
  blogPost.save(function (err) {
    if (err) return res.status(500).send(err);
    res.send(blogPost);
  });
});
OR
3) Submit the POST to a /blogs/:blogID/comments endpoint with the request body of the new comment, then use MongoDB's $push or $addToSet to add the comment to the array of comments:
BlogPost.findByIdAndUpdate(
  req.params.blogID,
  { $push: { comments: req.body } },
  { safe: true, new: true },
  function (err, blogPost) {
    if (err) return res.status(500).send(err);
    res.send(blogPost);
  }
);
I did find this stackoverflow post where the answerer talks about option 2 vs. option 3 and basically says to use option 2 whenever you can, which does seem simpler to me. (And I usually try to avoid methods that stop me from being able to use hooks and other mongoose goodies.)
What do you think? Any advice?
From an application point of view, option 3 is better. The reasons, I think, are:
1) The query itself specifies what we are trying to achieve; it's easily readable.
2) save() is a wildcard, so we don't know what it's going to change.
3) If you fetch the document, manipulate it, and then call save(), there is a small but real chance that you unintentionally mess up some other field of the document in the process of manipulation; that is not the case with option 3.
4) In the case of $addToSet, the previous point is even more visible.
5) Think about concurrency: if multiple calls come in with different comments for the same blog and you try option 2, there is a chance that you override changes made between when you fetched the document and when you saved it. Option 3 is better in that sense.
Performance-wise they both do the same thing, so there might not be much or any visible difference. But option 3 is a bit safer and cleaner.
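For completeness, a hedged sketch of the $addToSet variant mentioned in point 4, with the same model and route shape as the question; unlike $push, it skips the comment if an identical one is already in the array:
BlogPost.findByIdAndUpdate(
  req.params.blogID,
  { $addToSet: { comments: req.body } }, // no-op if this exact comment already exists
  { new: true },
  function (err, blogPost) {
    if (err) return res.status(500).send(err);
    res.send(blogPost);
  }
);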
