I have a fully functioning CRUD app that I'm building some additional functionality for. The new functionality allows users to make changes to a list of vendors. They can add new vendors, update them and delete them. The add and delete seem to be working just fine, but updating doesn't seem to be working even though it follows a similar method I use in the existing CRUD functionality elsewhere in the app. Here's my code:
// async function from an Axios request
const { original, updatedVendor } = req.body;

let list = await Vendor.findOne({ id: 1 });
if (!list) return res.status(500).json({ msg: 'Vendors not found' });

let indexOfUpdate = list.vendors.findIndex(
  (element) => element.id === original.id
);

list.vendors[indexOfUpdate].id = updatedVendor.id;
list.vendors[indexOfUpdate].name = updatedVendor.name;

const updated = await list.save();
res.json(updated);
The save() isn't updating the existing document on the DB side. I've console logged that the list.vendors array of objects is, indeed, being changed, but save() isn't doing the saving.
EDIT:
A note on the manner of using save: this format doesn't work either:
list.save().then(res.json(list));
EDIT 2:
To answer the questions about seeing the logs: I cannot post the full console.log(list.vendors) output as it contains private information; however, I can confirm that the change made to the list shows up when I run the following in the VendorSchema:
VendorSchema.post('save', function () {
  console.log(util.inspect(this, { maxArrayLength: null }));
});
However, the save still isn't changing the DB side.
Since you are mutating nested objects, Mongoose cannot detect the changes on its own. You need to mark the path as modified before the save:
list.markModified('vendors');
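For context, here is how the handler from the question might look with that one line added (a sketch, assuming the same Express/Mongoose setup as in the question):

const { original, updatedVendor } = req.body;

let list = await Vendor.findOne({ id: 1 });
if (!list) return res.status(500).json({ msg: 'Vendors not found' });

let indexOfUpdate = list.vendors.findIndex(
  (element) => element.id === original.id
);
list.vendors[indexOfUpdate].id = updatedVendor.id;
list.vendors[indexOfUpdate].name = updatedVendor.name;

// Tell Mongoose the nested array changed so save() knows to persist it.
list.markModified('vendors');
const updated = await list.save();
res.json(updated);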
First time poster here!
While I was trying to build a little exercise organizer application with ReactJS and Firebase realtime database I encountered a problem with the Firebase push() method.
I have a couple elements on my page that push data to the database once they are clicked, which looks like this:
const planRef = firebase.database().ref("plan");
const currentExName = e.currentTarget.firstChild.textContent;
const exercise = {
  name: currentExName,
  type: e.currentTarget.children[1].textContent,
  user: this.state.user.displayName
};
planRef.push(exercise);
Also, if the element is clicked again, then it gets removed from the database like this:
planRef.orderByKey().on("value", snapshot => {
  let exercises = snapshot.val();
  for (let ex in exercises) {
    if (exercises[ex].name === currentExName) {
      planRef.child(ex).set(null);
    }
  }
});
This works fine as long as I don't push something to the database right after deleting the last bit of matching data from it. In that case the newly pushed data gets removed right away.
(Screenshot: the data getting removed right after being written.)
Summary:
Write data to the realtime database using ref.push()
Delete data using ref.child(child).set(null) (I tried remove() before, same problem)
Try to push the same data to the database again, which leads to it getting deleted right after being written.
I couldn't find anything about this kind of problem so far so I guess I might have made a mistake somewhere. Let me know if the information provided is not sufficient.
Thanks in advance.
Removing a child is an asynchronous operation. My guess is that the removal here is completing after the new write does. You will need to await it if you want to write to the same data again.
planRef.child(ex).set(null).then(() => {
  // push the new entry only after the delete has completed
  planRef.push(newExercise);
});
Or using async/await:
await planRef.child(ex).set(null);
planRef.push(newExercise);
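For a fuller picture, here is a sketch of the whole toggle flow with async/await (the handler name toggleExercise, and reading with once() instead of keeping a persistent on() listener attached, are my assumptions, not code from the question):

async function toggleExercise(exercise) {
  const planRef = firebase.database().ref("plan");
  // Read the plan once; a persistent on("value") listener would keep
  // firing and could delete matching data pushed later.
  const snapshot = await planRef.orderByKey().once("value");
  const exercises = snapshot.val() || {};
  const existingKey = Object.keys(exercises).find(
    (key) => exercises[key].name === exercise.name
  );
  if (existingKey) {
    // Wait for the delete to finish before doing anything else.
    await planRef.child(existingKey).set(null);
  } else {
    await planRef.push(exercise);
  }
}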
Let me know if it worked.
I am using Firebase Realtime Database. I have an object which has all the posts created by all our users. This object is huge.
In order to display the posts in a fast way, we have given each user an object with relevant post IDs.
The structure looks like this:
/allPosts/$postID/
: { $postID: {id: $postID, details: '', title: '', timestamp: ''} }
/user/$userID/postsRelevantToThisUser/
: { $postID: {id: $postID} }
'postsRelevantToThisUser' only contains the IDs of the posts. I need to iterate over each of these IDs and retrieve the entire post information from /allPosts/
As a result, the client won't have to download the entire allPosts object and the app will be much faster.
To do this, I've written the below code. It is successfully retrieving and rendering only the relevant posts. Whenever a new postID is added or removed from /postsRelevantToThisUser/ in Firebase Realtime Database, React Native correctly re-renders the list.
However, when anything in /allPosts/$postID changes, for example if the title parameter changes, it is not reflected in the view.
What's a good way to solve this problem?
let userPostRef = firebase.database().ref(`/users/${uid}/postsRelevantToThisUser`)
userPostRef.on('value', (snapshot) => {
  let relPostIds = [];
  let posts = [];
  snapshot.forEach(function(childSnapshot) {
    const { id } = childSnapshot.val();
    relPostIds.push(id);
  })
  relPostIds.map(postId => {
    firebase.database().ref(`allPosts/${postId}`).on('value', (postSnapshot) => {
      let post = postSnapshot.val()
      posts.push(post);
      this.setState({ postsToRender: posts });
    })
  })
})
Since you've spread the data that you need to show the posts to the user over multiple places, you will need to keep listeners attached to multiple places if you want to get realtime updates about that data.
So to listen for title updates, you'll need to keep a listener on each /allPosts/$postID that the user can currently see. While it can be a bit finicky in code to keep track of all those listeners, they are actually quite efficient for Firebase itself, so performance should be fine up to a few dozen listeners at least (and it seems unlikely a user will be actively reading many more post titles than that at once).
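A minimal sketch of what that bookkeeping could look like (assuming the same refs and component as in the question; postRefs is an illustrative name):

const postRefs = {};

userPostRef.on('value', (snapshot) => {
  // Detach the listeners from the previous set of relevant posts.
  Object.keys(postRefs).forEach((id) => {
    postRefs[id].off('value');
    delete postRefs[id];
  });

  const posts = {};
  snapshot.forEach((childSnapshot) => {
    const { id } = childSnapshot.val();
    const postRef = firebase.database().ref(`allPosts/${id}`);
    postRefs[id] = postRef;
    // Re-fires whenever anything under /allPosts/$postID changes
    // (e.g. the title), keeping the view up to date.
    postRef.on('value', (postSnapshot) => {
      posts[id] = postSnapshot.val();
      this.setState({ postsToRender: Object.values(posts) });
    });
  });
});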
Alternatively, you can duplicate the information that you want to show in the list view, under each user's /user/$userID/postsRelevantToThisUser nodes. That way you're duplicating more data, but won't need the additional listeners.
Either approach is fine, but I have a personal preference for the latter, as it keeps the code that reads the data (which is the most critical for scalability) simpler.
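If you go the duplication route, a multi-path update keeps the copies in sync. A sketch (updatePostTitle and relevantUserIds are illustrative; you'd need to track elsewhere which users have each post):

function updatePostTitle(postId, newTitle, relevantUserIds) {
  const updates = {};
  updates[`/allPosts/${postId}/title`] = newTitle;
  relevantUserIds.forEach((uid) => {
    updates[`/users/${uid}/postsRelevantToThisUser/${postId}/title`] = newTitle;
  });
  // A single multi-path update is applied atomically.
  return firebase.database().ref().update(updates);
}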
I want to delete from an articles table using knex by article_id, which also exists in a comments table as a foreign key.
How can I test that the data has been deleted, and how can I report that to the user?
I decided to approach this by writing a function that deletes from both tables with a .then. Does it look like I am on the right lines?
exports.deleteArticleById = function (req, res, next) {
  const { article_id } = req.params;
  return connection('comments')
    .where('comments.article_id', article_id)
    .del()
    .returning('*')
    .then((deleted) => {
      console.log(deleted);
      return connection('articles')
        .where('articles.article_id', article_id)
        .del()
        .returning('*');
    })
    .then((article) => {
      console.log(article);
      return res.status(204).send('article deleted');
    })
    .catch(err => next(err));
};
At the moment the logs show the correct data, but I am getting a status 500 when I think I should be getting a 204?
Any help would be much appreciated.
What you're trying to do is called a cascading deletion.
These are better handled, and almost always are, at the database level rather than the application level.
It's the job of the DBMS to enforce this kind of referential integrity, assuming you define your schema correctly so that entities are properly linked together via foreign keys.
In short, you should define your database schema such that when you delete an Article, its associated Comments also get deleted for you.
Here's how I would do it using knex.js migrations:
// Define Article.
db.schema.createTableIfNotExists('article', t => {
  t.increments('article_id').primary()
  t.text('content')
})

// Define Comment.
// Each Comment is associated with an Article (1 - many).
db.schema.createTableIfNotExists('comment', t => {
  t.increments('comment_id').primary() // Add an autoincrement primary key (PK).
  t.integer('article_id').unsigned()   // Add a foreign key (FK)...
    .references('article.article_id')  // ...which references the Article PK.
    .onUpdate('CASCADE')               // If the Article PK is changed, update the FK as well.
    .onDelete('CASCADE')               // If an Article is deleted, delete its Comments as well.
  t.text('content')
})
So when you run this to delete an Article:
await db('article').where({ article_id: 1 }).del()
All Comments associated with that Article also get deleted, automatically.
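With the cascade in place, the route handler from the question shrinks to a single delete. A sketch, assuming the same Express setup and connection object as in the question:

exports.deleteArticleById = function (req, res, next) {
  const { article_id } = req.params;
  return connection('articles')
    .where({ article_id })
    .del()
    .then((rowCount) => {
      // del() resolves with the number of deleted rows.
      if (rowCount === 0) return res.status(404).send('article not found');
      // 204 No Content must not carry a body.
      return res.sendStatus(204);
    })
    .catch((err) => next(err));
};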
Don't try to perform cascading deletions yourself by writing application code. The DBMS is specifically designed with intricate mechanisms to ensure that deletions always happen in a consistent manner; its purpose is to handle these operations for you. It would be wasteful, complicated and quite error-prone to attempt to replicate this functionality yourself.
My use case is the following:
I have a list of comments that I fetch using a GraphQL query. When the user writes a new comment, it gets submitted using a GraphQL mutation. Then I'm using updateQueries to append the new comment to the list.
In the UI, I want to highlight the newly created comments. I tried to add a property isNew: true on the new comment in mutationResult, but Apollo removes the property before saving it to the store (I assume that's because the isNew field isn't requested in the gql query).
Is there any way to achieve this?
It depends on what you mean by "newly created objects". If it is an authentication-based application with users that can log in, you can compare the create_date of a comment with some last_online date of the user. If the user is not forced to create an account, you can store that information in local storage or cookies (when he/she last visited the website).
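For illustration, the check itself is a one-line comparison (create_date and last_online are assumed field names, not from any specific schema):

const isNew = new Date(comment.create_date) > new Date(user.last_online);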
On the other hand, if you are thinking about real-time updates of the comments list, I would recommend you take a look at graphql-subscriptions used with websockets. It provides reactivity in your user interface through a pub-sub mechanism. A simple use case: whenever a new comment is added to a post, every user/viewer is notified about it, and the comment can be appended to the comments list and highlighted in whatever way you want.
In order to achieve this, you could create a subscription called newCommentAdded, which clients would subscribe to, and every time a new comment is created, the server side of the application would notify (publish) about it.
A simple implementation of such a case could look like this:
const Subscription = new GraphQLObjectType({
  name: 'Subscription',
  fields: {
    newCommentAdded: {
      type: Comment, // this would be your GraphQLObjectType for Comment
      resolve: (root, args, context) => {
        return root.comment;
      }
    }
  }
});
// then create the graphql schema using the subscription defined above
const graphQLSchema = new GraphQLSchema({
  query: Query,       // your query object
  mutation: Mutation, // your mutation object
  subscription: Subscription
});
The above covers only the graphql-js part; it is also necessary to create a SubscriptionManager, which uses the PubSub mechanism.
import { SubscriptionManager, PubSub } from 'graphql-subscriptions';
const pubSub = new PubSub();
const subscriptionManagerOptions = {
  schema: graphQLSchema,
  setupFunctions: {
    // Note the parentheses: the arrow function must return the
    // channel-options object rather than open a block.
    newCommentAdded: (options, args) => ({
      newCommentAdded: {
        filter: (payload) => {
          // Returning true means the subscription will be published to the
          // client side every single time you call the 'publish' method.
          // You can provide conditions here for when to publish the result,
          // like the IDs of currently logged-in users to whom you would
          // publish the newly created comment.
          return true;
        }
      }
    })
  },
  pubsub: pubSub
};
const subscriptionManager = new SubscriptionManager(subscriptionManagerOptions);
export { subscriptionManager, pubSub };
And the final step is to publish the newly created comment to the client side when necessary, via the SubscriptionManager instance created above. You could do that in the mutation method that creates the new comment, or wherever you need to:
// here newComment is your comment instance
subscriptionManager.publish('newCommentAdded', { comment: newComment });
To run the pub-sub mechanism over websockets, it is necessary to create such a server alongside your main one. You can use the subscriptions-transport-ws module.
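A rough sketch of that wiring, based on the subscriptions-transport-ws API from the same era as SubscriptionManager (the port and file path are illustrative assumptions):

import { createServer } from 'http';
import { SubscriptionServer } from 'subscriptions-transport-ws';
import { subscriptionManager } from './subscriptions';

const WS_PORT = 8090; // illustrative port for the websocket server

// A bare HTTP server that only exists to carry the websocket upgrade.
const websocketServer = createServer((req, res) => {
  res.writeHead(404);
  res.end();
});

websocketServer.listen(WS_PORT);

new SubscriptionServer(
  { subscriptionManager },
  { server: websocketServer }
);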
The biggest advantage of such a solution is that it provides reactivity in your application (real-time changes applied to the comments list below a post, etc.). I hope this might be a good choice for your use case.
I could see this being done a couple of ways. You are right that Apollo will strip the isNew value because it is not part of your schema and is not listed in the query's selection set. I like to separate the concerns of the server data that is managed by Apollo from the front-end application state, which lends itself to redux/flux or, even more simply, to managing it in your component's state.
Apollo gives you the option to supply your own redux store. You can allow apollo to manage its data fetching logic and then manage your own front-end state alongside it. Here is a write up discussing how you can do this: http://dev.apollodata.com/react/redux.html.
If you are using React, you might be able to use component lifecycle hooks to detect when new comments appear. This might be a bit of a hack but you could use componentWillReceiveProps to compare the new list of comments with the old list of comments, identify which are new, store that in the component state, and then invalidate them after a period of time using setTimeout.
componentWillReceiveProps(newProps) {
  // Compute a diff.
  const oldCommentIds = new Set(this.props.data.allComments.map(comment => comment.id));
  const nextCommentIds = new Set(newProps.data.allComments.map(comment => comment.id));
  const newCommentIds = new Set(
    [...nextCommentIds].filter(commentId => !oldCommentIds.has(commentId))
  );
  this.setState({ newCommentIds });

  // Invalidate after 1 second (the arrow function keeps `this` bound).
  setTimeout(() => {
    this.setState({ newCommentIds: new Set() });
  }, 1000);
}
// Then somewhere in your render function have something like this.
render() {
  ...
  {
    this.props.data.allComments.map(comment => {
      const isNew = this.state.newCommentIds.has(comment.id);
      // A key prop is needed when rendering a list in React.
      return <CommentComponent key={comment.id} isNew={isNew} comment={comment} />;
    })
  }
  ...
}
The code above was right off the cuff so you might need to play around a bit. Hope this helps :)
I want to publish and subscribe to subsets of the same collection based on different routes. Here is what I have.
In /server/publish.js
Meteor.publish("questions", function() {
return Questions.find({});
});
Meteor.publish("questionSummaryByUser", function(userId) {
var q = Questions.find({userId : userId});
return q;
});
In /client/main.js
Deps.autorun(function() {
  Meteor.subscribe("questions");
});

Deps.autorun(function () {
  Meteor.subscribe("questionSummaryByUser", Session.get("selectedUserId"));
});
I am using the router package (https://github.com/tmeasday/meteor-router). The way I want the app to work is: when I go to "/questions" I want to list all the questions by all the users, and when I visit "/users/:user_id/questions" I want to list questions only by that specific user. For this I have set up the "/users/:user_id/questions" route to store the user id in the "selectedUserId" session variable, which I am also using in the "questionSummaryByUser" publish method.
However, when I view the list of questions at "/users/:user_id/questions", I get all the questions irrespective of the user_id.
I read here that the collections are merged at client side, but still could not figure a solution for the above mentioned scenario.
Note that I just started with Meteor, so I don't know the ins and outs of it.
Thanks in advance.
Good practice is to filter the collection data in the place where you use it, rather than relying on the subset you get by subscribing. That way you can be sure that the data you get is the data you want to display, even when you add further subscriptions to the same collection. Imagine that later you'd like to display, for example, a sidebar with the top 10 questions from all users. You'd have to fetch those as well, and if you have a place where you display all subscribed data, every such view would turn into a mess.
So, in the template where you want to display the user's questions, do:
Template.mine.questions = function() {
  return Questions.find({userId: Meteor.userId()});
};
Then you won't even need the separate questionSummaryByUser channel.
To filter data in the subscription, you have several options. Whichever you choose, keep in mind that the subscription is not the place where you choose the data to be displayed; that should always be filtered as above.
Option 1
Keep everything in a single parametrized channel.
Meteor.publish('questions', function(options) {
  if (options.filterByUser) {
    return Questions.find({userId: options.userId});
  } else {
    return Questions.find({});
  }
});
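On the client, the route can then drive the parameter (a sketch using the same Session approach as the question):

Deps.autorun(function () {
  var selectedUserId = Session.get('selectedUserId');
  Meteor.subscribe('questions', {
    filterByUser: !!selectedUserId,
    userId: selectedUserId
  });
});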
Option 2
Make the all-questions channel return data only when it's needed.
Meteor.publish('allQuestions', function(necessary) {
  if (!necessary) return [];
  return Questions.find({});
});

Meteor.publish('questionSummaryByUser', function(userId) {
  return Questions.find({userId: userId});
});
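Client side, you would then pass the flag based on the current route (a sketch; deriving it from selectedUserId is just one way to do it):

Deps.autorun(function () {
  // Only pull down all questions when no specific user is selected.
  Meteor.subscribe('allQuestions', !Session.get('selectedUserId'));
  Meteor.subscribe('questionSummaryByUser', Session.get('selectedUserId'));
});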
Option 3
Manually turn off subscriptions on the client. This is probably overkill in this case, as it requires some unnecessary work.
var allQuestionsHandle = Meteor.subscribe('allQuestions');
...
allQuestionsHandle.stop();