I want to delete from an articles table using knex, by article_id. The article_id also exists in a comments table as a foreign key.
How can I test that the data has been deleted, and how can I send that to the user?
I decided to approach this by writing a function that deletes from both tables with a .then. Does this look like I am on the right lines?
exports.deleteArticleById = function (req, res, next) {
  const { article_id } = req.params;
  return connection('comments')
    .where('comments.article_id', article_id)
    .del()
    .returning('*')
    .then((deleted) => {
      console.log(deleted);
      return connection('articles')
        .where('articles.article_id', article_id)
        .del()
        .returning('*');
    })
    .then((article) => {
      console.log(article);
      return res.status(204).send('article deleted');
    })
    .catch(err => next(err));
};
At the moment I am getting the correct data in the logs, but I am getting a status 500 when I think I should be getting a 204?
Any help would be much appreciated.
What you're trying to do is called a cascading deletion.
These are better handled (and almost always are) at the database level rather than the application level.
It's the job of the DBMS to enforce this kind of referential integrity, assuming you define your schema correctly so that entities are properly linked together via foreign keys.
In short, you should define your database schema such that when you delete an Article, its associated Comments also get deleted for you.
Here's how I would do it using knex.js migrations:
// Define Article.
db.schema.createTableIfNotExists('article', t => {
  t.increments('article_id').primary()
  t.text('content')
})

// Define Comment.
// Each Comment is associated with an Article (1 - many).
db.schema.createTableIfNotExists('comment', t => {
  t.increments('comment_id').primary() // Add an autoincrement primary key (PK).
  t.integer('article_id').unsigned()   // Add a foreign key (FK)...
    .references('article.article_id')  // ...which references Article PK.
    .onUpdate('CASCADE')               // If Article PK is changed, update FK as well.
    .onDelete('CASCADE')               // If Article is deleted, delete Comment as well.
  t.text('content')
})
So when you run this to delete an Article:
await db('article').where({ article_id: 1 }).del()
All Comments associated with that Article also get deleted, automatically.
Don't try to perform cascading deletions yourself by writing application code. The DBMS is specifically designed with intricate mechanisms to ensure that deletions always happen in a consistent manner; its purpose is to handle these operations for you. It would be wasteful, complicated, and quite error-prone to attempt to replicate this functionality yourself.
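With the cascade in place, the route handler from the question collapses to a single delete against the articles table. Here's a rough sketch (assuming the same connection instance and Express wiring as in the question); the row count that del() resolves with also answers the original question of how to tell the client whether anything was actually deleted:
// Sketch only: assumes the same `connection` (knex instance) and routing as in the question.
exports.deleteArticleById = function (req, res, next) {
  const { article_id } = req.params;
  return connection('articles')
    .where('articles.article_id', article_id)
    .del() // resolves with the number of rows deleted; comments go with them via the cascade
    .then((rowCount) => {
      if (rowCount === 0) {
        // Nothing matched that id, so report it instead of silently succeeding.
        return res.status(404).send({ msg: 'article not found' });
      }
      // 204 No Content: the delete succeeded and there is nothing to send back.
      return res.sendStatus(204);
    })
    .catch(err => next(err));
};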
I have the following database setup:
A Post model that has a commentCount field, defined as an integer, corresponding to the number of comments.
A Comment model that has a postId field defined as an ObjectId.
In order to ensure I don't make excess database calls I store the commentCount on the post model.
Currently, I have everything working but for clarity and data consistency, I am trying to determine the best way of setting up the following interaction:
I delete a comment with the following code:
const comment = await Comment.findById(commentId)
const deletedComment = await comment.deleteOne()
This correspondingly triggers the following pre('deleteOne') middleware hook:
CommentSchema.pre('deleteOne', { document: true, query: false }, async function() {
  // Update related post information
  await Post.findByIdAndUpdate(
    this.postId,
    { $inc: { commentCount: -1 } },
    { new: true }
  )
})
which is meant to keep the commentCount field synchronized with the number of comments corresponding to the post.
However, I'm curious how errors work in Mongoose middleware hooks. For example, if the middleware update on the post fails and I throw an error, will that also cancel the original comment.deleteOne() operation? I could handle this in the API route using transactions to ensure everything succeeds together (roughly sketched below), but that seems a bit overkill for this scenario. Leveraging the middleware hook seems cleaner.
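For what it's worth, the transaction-based version I have in mind would look roughly like this (just a sketch: names like commentId are illustrative, MongoDB would need to run as a replica set, and the pre('deleteOne') hook would have to be removed so the count isn't decremented twice):
// Rough sketch, inside an async route handler.
const session = await mongoose.startSession();
try {
  await session.withTransaction(async () => {
    const comment = await Comment.findById(commentId).session(session);
    await comment.deleteOne(); // the document picked up the session from the find above
    await Post.findByIdAndUpdate(
      comment.postId,
      { $inc: { commentCount: -1 } },
      { session }
    );
  });
} finally {
  session.endSession();
}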
Suggestions?
I have a fully functioning CRUD app that I'm building some additional functionality for. The new functionality allows users to make changes to a list of vendors. They can add new vendors, update them and delete them. The add and delete seem to be working just fine, but updating doesn't seem to be working even though it follows a similar method I use in the existing CRUD functionality elsewhere in the app. Here's my code:
// async function from AXIOS request
const { original, updatedVendor } = req.body;
let list = await Vendor.findOne({ id: 1 });
if (!list) return res.status(500).json({ msg: 'Vendors not found' });
let indexOfUpdate = list.vendors.findIndex(
  (element) => element.id === original.id
);
list.vendors[indexOfUpdate].id = updatedVendor.id;
list.vendors[indexOfUpdate].name = updatedVendor.name;
const updated = await list.save();
res.json(updated);
The save() isn't updating the existing document on the DB side. I've console logged that the list.vendors array of objects is, indeed, being changed, but save() isn't doing the saving.
EDIT:
A note on the way save is being used: this format doesn't work either:
list.save().then(res.json(list));
EDIT 2:
To answer the questions about seeing the logs: I cannot post the full console.log(list.vendors) as it contains private information; however, I can confirm that the change made to the list is showing up when I run the following in the VendorSchema:
VendorSchema.post('save', function () {
  console.log(util.inspect(this, { maxArrayLength: null }));
});
However, the save still isn't changing the DB side.
Since you are updating nested objects, Mongoose will not be able to detect the changes made. You need to mark the path as modified before the save:
list.markModified('vendors');
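In context, with the handler from the question, that looks something like this:
// Same handler as in the question, with markModified added before the save.
list.vendors[indexOfUpdate].id = updatedVendor.id;
list.vendors[indexOfUpdate].name = updatedVendor.name;
list.markModified('vendors'); // tell Mongoose the nested array was changed
const updated = await list.save(); // the change is now persisted
res.json(updated);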
I'm trying to create a post-comment relationship where a user can write a post and other users can comment on it.
I can show the posts, but when I try to do the join to display the comments that belong to each post, I can't.
Below is my DB schema.
I was thinking that first I need to get the key from the posts node, then move to comments and somehow get the comments of each post,
and use that in an *ngFor inside the *ngFor for the posts?
I was trying something like
findAllComments(){
  this.db.list('posts', { preserveSnapshot: true })
    .subscribe(snapshots => {
      snapshots.forEach(snapshot => {
        return this.db.list(`comments/${snapshot.key}`)
      });
    });
}
but this returns void of course:
When I console.log:
findAllComments(){
  this.db.list('/posts', { preserveSnapshot: true })
    .subscribe(snapshots => {
      snapshots.forEach(snapshot => {
        const kapa = this.db.list(`comments/${snapshot.key}`).do(console.log)
        kapa.subscribe();
      });
    });
}
I get this in the console.
I'm not sure if my thinking on this is right.
I'm confused because I am new to Angular and Firebase.
You aren't returning a subset of posts (you're querying on all posts) so there's no need to have a join of any sort here. You can just query for all comments:
findAllComments(){
  // {preserveSnapshot: true} is deprecated
  return this.db.list('/comments').snapshotChanges();
}
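If you also need each comment's key in the component, here's a small sketch of how those snapshots could be unwrapped (assuming AngularFire's SnapshotAction API and the same operator style used elsewhere in this question):
// Sketch: unwrap each snapshot into its key plus its value.
this.comments = this.findAllComments().map(snapshots =>
  snapshots.map(snap => ({ key: snap.key, ...snap.payload.val() }))
);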
Assuming you actually want to retrieve a subset of comments (not what your example depicts), you could do something like this:
this.replies = db.list('AngularFire/joins/messages').snapshotChanges().map(snapshots => {
  console.log('snapshots', snapshots);
  return snapshots.map(ss => {
    return db.list(`AngularFire/joins/replies/${ss.key}`).valueChanges();
  });
});
There is a complete working example of the latter here.
I guess in the first part you are not subscribing to the comments list. Since there is no subscription to the comments, the request to get the list of comments from Firebase is never fired, and hence you don't see any comments.
In the second part, since you are subscribing to the comments list, you do see them.
In cases like these, where you want to fetch something based on a previous request, you could use switchMap / concatMap / mergeMap, e.g. the sketch below. Hope this helps.
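For example, a switchMap sketch in the same style as the code above (selectedPostKey$ is just an illustrative Observable<string> of the post key you care about):
// Sketch: once a post key arrives, switch over to that post's comment list.
// Requires the switchMap patch operator (import 'rxjs/add/operator/switchMap').
this.comments = this.selectedPostKey$
  .switchMap(key => this.db.list(`comments/${key}`).valueChanges());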
I've looked through a bunch of other SO posts and have found different ways to do this, so I'm wondering which is most preferred. I'm teaching this to students, so I want to give them best practices.
If I have the following BlogPost object (Simplified):
var BlogPostSchema = new mongoose.Schema({
body: String,
comments: [String]
});
and I want to add a new comment to the array of comments for this blog, I can think of at least 3 main ways to accomplish this:
1) Push the comment to the blog object in Angular and submit a PUT request to the /blogs/:blogID endpoint, updating the whole blog object with the new comment included.
2) Submit a POST request to a /blogs/:blogID/comments endpoint where the request body is just the new comment, find the blog, push the comment to the array in vanilla js, and save it:
BlogPost.findById(req.params.blogID, function(err, blogPost) {
  blogPost.comments.push(req.body);
  blogPost.save(function(err) {
    if (err) return res.status(500).send(err);
    res.send(blogPost);
  });
});
OR
3) Submit the POST to a /blogs/:blogID/comments endpoint with the request body of the new comment, then use MongoDB's $push or $addToSet to add the comment to the array of comments:
BlogPost.findByIdAndUpdate(
  req.params.blogID,
  { $push: { comments: req.body } },
  { safe: true, new: true },
  function(err, blogPost) {
    if (err) return res.status(500).send(err);
    res.send(blogPost);
  }
);
I did find this stackoverflow post where the answerer talks about option 2 vs. option 3 and basically says to use option 2 whenever you can, which does seem simpler to me. (And I usually try to avoid methods that stop me from being able to use hooks and other mongoose goodies.)
What do you think? Any advice?
From an application point of view, option 3 is better. The reasons, I think, are:
1) The query itself specifies what we are trying to achieve; it's easily readable.
2) The save function is a wildcard, so we don't know what it's going to change.
3) If you fetch the document, manipulate it, and then call save, there is a small but real chance that you unintentionally mess up some other field of the document in the process of manipulation; that's not the case with option 3.
4) In the case of $addToSet, the previous point is even more visible.
5) Think about concurrency: if multiple calls come in with different comments for the same blog and you are using option 2, there is a chance that you override changes made between when you fetched the document and when you saved it. Option 3 is better in that sense.
Performance-wise they both do the same thing, so there might not be much or any visible difference. But option 3 is a bit safer and cleaner.
Fairly simple problem; I just can't find a good/clean way to do this without making another find call.
I've got my Node app rigged up with Angular-Resource, and I'm just making some round-trip-like data calls on new or changed data.
So ngResource makes the $save() call to my /api/users/:id and such, and Node reacts to this call by creating or finding the user, making the updates, and saving them.
Whether through create() or save(), it returns the created record, and for right now I use res.json(user) to spill the created/returned record for my Angular side to handle populating my view with the updated information.
Now, I know that with Sequelize's find() and findAll() methods I can use findAll({ include: [{ all: true }] }) or specify my models individually.
What I want to know is: what is the best way to get my record's associations on save/create?
and unfortunately, this just doesn't work:
models.User.create(newuser, {include:[{ all: true }]}).then(function(user) {
  res.json(user);
});
Do I really have to perform another find() just to get my managed model's associations?
To better illustrate the solution opted for in RedactedProfile's comment, here's the code.
models.User
  .create(newuser, { include: [{ all: true }] })
  .then(user => {
    user.reload().then(user => { res.json(user); });
  });
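For reference, an equivalent async/await sketch of the same chain (assuming it sits inside an async route handler):
// Equivalent sketch with async/await.
const user = await models.User.create(newuser, { include: [{ all: true }] });
await user.reload(); // re-fetch the instance so the response includes its associations
res.json(user);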