I do not understand the difference between a resolver and a service in a NestJS application using GraphQL and MongoDB.
I found examples like the one below, where the resolver just calls a service, so the resolver functions are always small: they simply call a service function. But with this usage I don't understand the purpose of the resolver at all...
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    return this.taskService.deleteTask(id);
  }
}
@Injectable()
export class TasksService {
  async deleteTask(id: string) {
    // Define the collection, fetch the document for some checking, then update it
    const Tasks = this.db.collection('tasks')
    const data = await Tasks.findOne({ _id: id })
    let res
    if (data.checkSomething) res = await Tasks.updateOne({ _id: id }, { $set: { delete: true } })
    return res
  }
}
On the other hand, I could put all the logic into the resolver and leave only the MongoDB calls in the service, but then the service methods are tiny wrappers around single MongoDB calls. So why shouldn't I move those into the resolver as well?
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    const data = await this.taskService.findOne(id)
    let res
    if (data.checkSomething) {
      const update = { $set: { delete: true } }
      res = await this.taskService.updateOne(id, update)
    }
    return res
  }
}
@Injectable()
export class TasksService {
  findOne(id: string) {
    const Tasks = this.db.collection('tasks')
    return Tasks.findOne({ _id: id })
  }

  updateOne(id: string, update) {
    const Tasks = this.db.collection('tasks')
    return Tasks.updateOne({ _id: id }, update)
  }
}
What is the correct usage of the resolver and the service? In both cases one of the two parts ends up as little more than a one-liner per function, so why split them at all?
You're right that it's a pretty linear call and there isn't much logic behind it, but the idea is to separate the concerns of each class. The resolver, much like a REST or RPC controller, should act as a gateway to your business logic, so that the logic can be easily re-used or re-called in other parts of the server. If you have a hybrid server with RPC or a REST + GQL combo, you could re-use the service to ensure both REST and GQL get the same return.
In the end, it comes down to your choice on what you want to do, but separating the resolver from the service (having thin gateways and fat logic classes) is Nest's opinion on the right design.
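To make that concrete, here is a minimal sketch of the same TasksService being reused by a hypothetical REST controller alongside the GraphQL resolver (the controller and its route are illustrative, not from your code):

// tasks.controller.ts — a hypothetical REST gateway reusing the same service
@Controller('tasks')
export class TasksController {
  constructor(private readonly taskService: TasksService) {}

  @Delete(':id')
  deleteTask(@Param('id') id: string) {
    // Same business rule and same return value as the GraphQL mutation
    return this.taskService.deleteTask(id);
  }
}

Both gateways stay thin; the deletion rule (the checkSomething guard) lives in exactly one place.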
Your service fetches the data from the database. Your resolver delivers that data to the user. Sometimes the data you deliver is not the same as the data in the database, so the resolver shapes it according to the user's needs before sending it back.
In terms of best practice, the resolver or controller should be thought of as the manager. As is stereotypical, the manager shouldn't do any of the actual work except for telling the workers what to do. The manager determines who (which worker/service) should do the work; sometimes it might be two or more workers/services. Managers specialize in deciding who does what.
The workers, on the other hand, execute the actual task. In your case, another option would be to have a database "repository" for database commands like findOne, findOneByX, updateOne, and a service to handle the actual logic of the task. The service worker takes instructions from the manager (resolver) and uses its own logic to tell its database-fetching repository buddies what to fetch.
In this way, the manager manages who should do the task. The service contains the logic and tells the repository methods, which focus on database fetches, what to fetch.
// So you would have...
task.resolver.ts
task.service.ts
task.repository.ts
The task.resolver will contain one line that calls the task.service method
The task.service will contain the logic to manage the task
The task.repository will contain methods like the ones in your suggested task.service: essentially database-only methods (see the sketch below).
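A rough sketch of that three-file split, reusing deleteTask from the question (the repository method names and the this.db handle are illustrative assumptions):

// task.resolver.ts — the "manager": a one-liner that delegates
@Resolver('Tasks')
export class TasksResolver {
  constructor(private readonly taskService: TasksService) {}

  @Mutation(type => WriteResult)
  async deleteTask(@Args('id') id: string) {
    return this.taskService.deleteTask(id);
  }
}

// task.service.ts — the "worker": owns the business rule
@Injectable()
export class TasksService {
  constructor(private readonly taskRepository: TasksRepository) {}

  async deleteTask(id: string) {
    const task = await this.taskRepository.findOne(id);
    if (!task || !task.checkSomething) return null;
    return this.taskRepository.markDeleted(id);
  }
}

// task.repository.ts — database-only methods (names are illustrative)
@Injectable()
export class TasksRepository {
  findOne(id: string) {
    return this.db.collection('tasks').findOne({ _id: id });
  }

  markDeleted(id: string) {
    return this.db.collection('tasks').updateOne({ _id: id }, { $set: { delete: true } });
  }
}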
I have a set of related items like so:
book {
id
...
related_entity {
id
...
}
}
which Apollo caches as two separate cache objects, where the related_entity field on book is a ref to an EntityNode object. This is fine: the related entity data is also used elsewhere, outside the context of a book, so having it separate works, and everything updates as expected... except in the case where the related entity does not exist on the initial fetch (so the ref on the book object is null) and I create one later on.
I've tried adding an update function to the useMutation hook that creates the aforementioned related_entity per their documentation: https://www.apollographql.com/docs/react/caching/cache-interaction/#example-adding-an-item-to-a-list like this:
const [mutateEntity, _i] = useMutation(CREATE_OR_UPDATE_ENTITY, {
  update(cache, { data }) {
    cache.modify({
      id: `BookNode:${bookId}`,
      fields: {
        relatedEntity(_i) {
          const newEntityRef = cache.writeFragment({
            fragment: gql`
              fragment NewEntity on EntityNode {
                id
                ...someOtherAttr
              }`,
            data: data.entityData
          });
          return newEntityRef;
        }
      }
    })
  }
});
but no matter what I seem to try, newEntityRef is always undefined, even though the new EntityNode is definitely in the cache and can be read just fine using the exact same fragment. I could give up and just force a refetch of the Book object, but the data is already right there.
Am I doing something wrong/is there a better way?
Barring that, is there another way to get a ref for a cached object, given that you have its identifier?
It looks like this is actually an issue with apollo-cache-persist - I removed it and the code above functions as expected per the docs. It also looks like I could instead update to the new version under a different package name apollo3-cache-persist, but I ended up not needing cache persistence anyway.
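As for the side question (getting a ref for an object you know is already cached): in Apollo Client 3 the modifier functions passed to cache.modify also receive a toReference helper, so you can build the ref from the identifier instead of re-writing a fragment. A sketch, assuming EntityNode is normalized by id and reusing the variable names from the question:

const [mutateEntity] = useMutation(CREATE_OR_UPDATE_ENTITY, {
  update(cache, { data }) {
    cache.modify({
      id: cache.identify({ __typename: 'BookNode', id: bookId }),
      fields: {
        relatedEntity(_existing, { toReference }) {
          // Returns a Reference to the normalized EntityNode already in the cache
          return toReference({
            __typename: 'EntityNode',
            id: data.entityData.id,
          });
        },
      },
    });
  },
});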
I have a fully functioning CRUD app that I'm building some additional functionality for. The new functionality allows users to make changes to a list of vendors. They can add new vendors, update them and delete them. The add and delete seem to be working just fine, but updating doesn't seem to be working even though it follows a similar method I use in the existing CRUD functionality elsewhere in the app. Here's my code:
// async function from AXIOS request
const { original, updatedVendor } = req.body;
let list = await Vendor.findOne({ id: 1 });
if (!list) return res.status(500).json({ msg: 'Vendors not found' });
let indexOfUpdate = list.vendors.findIndex(
  (element) => element.id === original.id
);
list.vendors[indexOfUpdate].id = updatedVendor.id;
list.vendors[indexOfUpdate].name = updatedVendor.name;
const updated = await list.save();
res.json(updated);
The save() isn't updating the existing document on the DB side. I've console logged that the list.vendors array of objects is, indeed, being changed, but save() isn't doing the saving.
EDIT:
A note on the manner of using save, this format doesn't work either:
list.save().then(res.json(list));
EDIT 2:
To answer the questions about seeing the logs: I cannot post the full console.log(list.vendors) as it contains private information; however, I can confirm that the change made to the list is showing up when I run the following in the VendorSchema:
VendorSchema.post('save', function () {
  console.log(util.inspect(this, { maxArrayLength: null }));
});
However, the save still isn't changing the DB side.
Since you are updating nested objects, Mongoose cannot detect the changes on its own. You need to mark the nested path as modified before the save:
list.markModified('vendors');
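In context, a minimal sketch of the handler from the question with that one line added before the save:

const { original, updatedVendor } = req.body;

let list = await Vendor.findOne({ id: 1 });
if (!list) return res.status(500).json({ msg: 'Vendors not found' });

const indexOfUpdate = list.vendors.findIndex(
  (element) => element.id === original.id
);
list.vendors[indexOfUpdate].id = updatedVendor.id;
list.vendors[indexOfUpdate].name = updatedVendor.name;

// Tell Mongoose the nested array changed so save() persists it
list.markModified('vendors');

const updated = await list.save();
res.json(updated);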
In my application I have two MobX stores: store_A for handling user information (who is currently logged in, etc.), and store_B for handling events for all users.
After a user logs in, I want to display all events regarding that user.
How can I access the logged-in user's info (from store_A) from within store_B so that I can filter events correctly?
At this point I have to store the loggedUserName data inside my store_B to retrieve it...
Code from my events store:
class ObservableEventsStore {
  ...

  // after logIn, save the userName:
  @action setUser(userName) {
    this.givenUser = userName
  }

  ...

  @computed get filteredByUser() {
    let filteredByUser = this.wholeList
      .filter((event) => this.givenUser === event.user)
    // this.givenUser is what I want to get from store_A
    return filteredByUser
  }
}
I want to get the loggedUser data from the UserStore; I have it stored there as well ...
There is no single idiomatic approach; any means of obtaining a reference to the UserStore is valid. I think in general you can take three approaches to achieve this:
Construct the UserStore before the EventsStore and pass the reference to the EventsStore in its constructor (or set it afterwards); see the sketch after this list
If the UserStore is a singleton, just import and use it
Use a dependency injection system like InversifyJS
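A minimal sketch of the first option, a root store that constructs the UserStore first and hands it to the events store (class and property names are illustrative, based on the code in the question):

class RootStore {
  constructor() {
    this.userStore = new UserStore();
    // the events store receives the user store it depends on
    this.eventsStore = new ObservableEventsStore(this.userStore);
  }
}

class ObservableEventsStore {
  constructor(userStore) {
    this.userStore = userStore;
  }

  @computed get filteredByUser() {
    // read the logged-in user straight from the injected UserStore
    return this.wholeList.filter(
      (event) => event.user === this.userStore.loggedUser
    );
  }
}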
I'm reading about Flux but the example Todo app is too simplistic for me to understand some key points.
Imagine a single-page app like Facebook that has user profile pages. On each user profile page, we want to show some user info and their last posts, with infinite scroll. We can navigate from one user profile to another one.
In Flux architecture, how would this correspond to Stores and Dispatchers?
Would we use one PostStore per user, or would we have some kind of a global store? What about dispatchers, would we create a new Dispatcher for each “user page”, or would we use a singleton? Finally, what part of the architecture is responsible for managing the lifecycle of “page-specific” Stores in response to route change?
Moreover, a single pseudo-page may have several lists of data of the same type. For example, on a profile page, I want to show both Followers and Follows. How can a singleton UserStore work in this case? Would UserPageStore manage followedBy: UserStore and follows: UserStore?
In a Flux app there should only be one Dispatcher. All data flows through this central hub. Having a singleton Dispatcher allows it to manage all Stores. This becomes important when you need Store #1 to update itself, and then have Store #2 update itself based on both the Action and on the state of Store #1. Flux assumes this situation is an eventuality in a large application. Ideally this situation would not need to happen, and developers should strive to avoid this complexity, if possible. But the singleton Dispatcher is ready to handle it when the time comes.
Stores are singletons as well. They should remain as independent and decoupled as possible -- a self-contained universe that one can query from a Controller-View. The only road into the Store is through the callback it registers with the Dispatcher. The only road out is through getter functions. Stores also publish an event when their state has changed, so Controller-Views can know when to query for the new state, using the getters.
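A sketch of that shape (store and action names are illustrative; Dispatcher.register and waitFor are the standard Flux Dispatcher API, and merge/EventEmitter are assumed to be imported as in the code further down):

var AppDispatcher = new Dispatcher();   // the single dispatcher

var _posts = {};                        // the Store's private state

var PostStore = merge(EventEmitter.prototype, {
  // the only road out: getters
  get(id) {
    return _posts[id];
  },
  emitChange() {
    this.emit('change');
  }
});

// the only road in: the callback registered with the Dispatcher
PostStore.dispatchToken = AppDispatcher.register(function (action) {
  switch (action.type) {
    case 'RECEIVE_POSTS':
      // if this update depends on UserStore's state, let it update first
      AppDispatcher.waitFor([UserStore.dispatchToken]);
      _posts = action.posts;
      PostStore.emitChange();
      break;
  }
});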
In your example app, there would be a single PostStore. This same store could manage the posts on a "page" (pseudo-page) that is more like FB's Newsfeed, where posts appear from different users. Its logical domain is the list of posts, and it can handle any list of posts. When we move from pseudo-page to pseudo-page, we want to reinitialize the state of the store to reflect the new state. We might also want to cache the previous state in localStorage as an optimization for moving back and forth between pseudo-pages, but my inclination would be to set up a PageStore that waits for all other stores, manages the relationship with localStorage for all the stores on the pseudo-page, and then updates its own state. Note that this PageStore would store nothing about the posts -- that's the domain of the PostStore. It would simply know whether a particular pseudo-page has been cached or not, because pseudo-pages are its domain.
The PostStore would have an initialize() method. This method would always clear the old state, even if this is the first initialization, and then create the state based on the data it received through the Action, via the Dispatcher. Moving from one pseudo-page to another would probably involve a PAGE_UPDATE action, which would trigger the invocation of initialize(). There are details to work out around retrieving data from the local cache, retrieving data from the server, optimistic rendering and XHR error states, but this is the general idea.
If a particular pseudo-page does not need all the Stores in the application, I'm not entirely sure there is any reason to destroy the unused ones, other than memory constraints. But stores don't typically consume a great deal of memory. You just need to make sure to remove the event listeners in the Controller-Views you are destroying. This is done in React's componentWillUnmount() method.
(Note: I have used ES6 syntax using JSX Harmony option.)
As an exercise, I wrote a sample Flux app that allows you to browse Github users and repos.
It is based on fisherwebdev's answer but also reflects an approach I use for normalizing API responses.
I made it to document a few approaches I have tried while learning Flux.
I tried to keep it close to real world (pagination, no fake localStorage APIs).
There are a few bits here I was especially interested in:
It uses Flux architecture and react-router;
It can show user page with partial known info and load details on the go;
It supports pagination both for users and repos;
It parses Github's nested JSON responses with normalizr;
Content Stores don't need to contain a giant switch with actions;
“Back” is immediate (because all data is in Stores).
How I Classify Stores
I tried to avoid some of the duplication I've seen in other Flux examples, specifically in Stores.
I found it useful to logically divide Stores into three categories:
Content Stores hold all app entities. Everything that has an ID needs its own Content Store. Components that render individual items ask Content Stores for the fresh data.
Content Stores harvest their objects from all server actions. For example, UserStore looks into action.response.entities.users if it exists, regardless of which action fired. There is no need for a switch. Normalizr makes it easy to flatten any API responses to this format.
// Content Stores keep their data like this
{
  7: {
    id: 7,
    name: 'Dan'
  },
  ...
}
List Stores keep track of IDs of entities that appear in some global list (e.g. “feed”, “your notifications”). In this project, I don't have such Stores, but I thought I'd mention them anyway. They handle pagination.
They normally respond to just a few actions (e.g. REQUEST_FEED, REQUEST_FEED_SUCCESS, REQUEST_FEED_ERROR).
// Paginated Stores keep their data like this
[7, 10, 5, ...]
Indexed List Stores are like List Stores but they define one-to-many relationship. For example, “user's subscribers”, “repository's stargazers”, “user's repositories”. They also handle pagination.
They also normally respond to just a few actions (e.g. REQUEST_USER_REPOS, REQUEST_USER_REPOS_SUCCESS, REQUEST_USER_REPOS_ERROR).
In most social apps, you'll have lots of these and you want to be able to quickly create one more of them.
// Indexed Paginated Stores keep their data like this
{
  2: [7, 10, 5, ...],
  6: [7, 1, 2, ...],
  ...
}
Note: these are not actual classes or something; it's just how I like to think about Stores.
I made a few helpers though.
StoreUtils
createStore
This method gives you the most basic Store:
createStore(spec) {
  var store = merge(EventEmitter.prototype, merge(spec, {
    emitChange() {
      this.emit(CHANGE_EVENT);
    },

    addChangeListener(callback) {
      this.on(CHANGE_EVENT, callback);
    },

    removeChangeListener(callback) {
      this.removeListener(CHANGE_EVENT, callback);
    }
  }));

  _.each(store, function (val, key) {
    if (_.isFunction(val)) {
      store[key] = store[key].bind(store);
    }
  });

  store.setMaxListeners(0);
  return store;
}
I use it to create all Stores.
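For example, a store created with it might look like this (a simplified sketch, not the actual code from the repo):

var _users = {};   // private state lives outside the store object

var UserStore = createStore({
  get(id) {
    return _users[id];
  },
  getAll() {
    return _users;
  }
});

// Controller-Views subscribe with the generated helpers:
function onChange() { /* re-read state through the getters */ }
UserStore.addChangeListener(onChange);
UserStore.removeChangeListener(onChange);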
isInBag, mergeIntoBag
Small helpers useful for Content Stores.
isInBag(bag, id, fields) {
  var item = bag[id];
  if (!bag[id]) {
    return false;
  }

  if (fields) {
    return fields.every(field => item.hasOwnProperty(field));
  } else {
    return true;
  }
},

mergeIntoBag(bag, entities, transform) {
  if (!transform) {
    transform = (x) => x;
  }

  for (var key in entities) {
    if (!entities.hasOwnProperty(key)) {
      continue;
    }

    if (!bag.hasOwnProperty(key)) {
      bag[key] = transform(entities[key]);
    } else if (!shallowEqual(bag[key], entities[key])) {
      bag[key] = transform(merge(bag[key], entities[key]));
    }
  }
}
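As a usage sketch, a Content Store's dispatch handler can merge whatever entities a server action carried, and a view can check whether it already has the fields it needs (the AppDispatcher/UserStore names, the action shape, and the field names are assumptions consistent with the description above):

var _users = {};

UserStore.dispatchToken = AppDispatcher.register(function (action) {
  var entities = action.response && action.response.entities;
  if (entities && entities.users) {
    // merge users from any server action, no switch needed
    mergeIntoBag(_users, entities.users);
    UserStore.emitChange();
  }
});

// Later, before rendering a profile header:
if (!isInBag(_users, 7, ['name', 'avatarUrl'])) {
  // kick off a request for the missing details of user 7
}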
PaginatedList
Stores pagination state and enforces certain assertions (can't fetch page while fetching, etc).
class PaginatedList {
  constructor(ids) {
    this._ids = ids || [];
    this._pageCount = 0;
    this._nextPageUrl = null;
    this._isExpectingPage = false;
  }

  getIds() {
    return this._ids;
  }

  getPageCount() {
    return this._pageCount;
  }

  isExpectingPage() {
    return this._isExpectingPage;
  }

  getNextPageUrl() {
    return this._nextPageUrl;
  }

  isLastPage() {
    return this.getNextPageUrl() === null && this.getPageCount() > 0;
  }

  prepend(id) {
    this._ids = _.union([id], this._ids);
  }

  remove(id) {
    this._ids = _.without(this._ids, id);
  }

  expectPage() {
    invariant(!this._isExpectingPage, 'Cannot call expectPage twice without prior cancelPage or receivePage call.');
    this._isExpectingPage = true;
  }

  cancelPage() {
    invariant(this._isExpectingPage, 'Cannot call cancelPage without prior expectPage call.');
    this._isExpectingPage = false;
  }

  receivePage(newIds, nextPageUrl) {
    invariant(this._isExpectingPage, 'Cannot call receivePage without prior expectPage call.');

    if (newIds.length) {
      this._ids = _.union(this._ids, newIds);
    }

    this._isExpectingPage = false;
    this._nextPageUrl = nextPageUrl || null;
    this._pageCount++;
  }
}
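The intended request/success cycle then looks roughly like this (a usage sketch; the ids and URL are example values):

var list = new PaginatedList();

list.expectPage();                 // e.g. on REQUEST_USER_REPOS
list.isExpectingPage();            // true

list.receivePage(                  // e.g. on REQUEST_USER_REPOS_SUCCESS
  [21, 34, 55],                    // normalized ids from the response
  '/users/2/repos?page=2'          // next page URL, or null on the last page
);

list.getIds();                     // [21, 34, 55]
list.isLastPage();                 // false, since a next page URL was given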
PaginatedStoreUtils
createListStore, createIndexedListStore, createListActionHandler
Makes creation of Indexed List Stores as simple as possible by providing boilerplate methods and action handling:
var PROXIED_PAGINATED_LIST_METHODS = [
  'getIds', 'getPageCount', 'getNextPageUrl',
  'isExpectingPage', 'isLastPage'
];

function createListStoreSpec({ getList, callListMethod }) {
  var spec = {
    getList: getList
  };

  PROXIED_PAGINATED_LIST_METHODS.forEach(method => {
    spec[method] = function (...args) {
      return callListMethod(method, args);
    };
  });

  return spec;
}

/**
 * Creates a simple paginated store that represents a global list (e.g. feed).
 */
function createListStore(spec) {
  var list = new PaginatedList();

  function getList() {
    return list;
  }

  function callListMethod(method, args) {
    return list[method].call(list, args);
  }

  return createStore(
    merge(spec, createListStoreSpec({
      getList: getList,
      callListMethod: callListMethod
    }))
  );
}

/**
 * Creates an indexed paginated store that represents a one-many relationship
 * (e.g. user's posts). Expects foreign key ID to be passed as first parameter
 * to store methods.
 */
function createIndexedListStore(spec) {
  var lists = {};

  function getList(id) {
    if (!lists[id]) {
      lists[id] = new PaginatedList();
    }
    return lists[id];
  }

  function callListMethod(method, args) {
    var id = args.shift();
    if (typeof id === 'undefined') {
      throw new Error('Indexed pagination store methods expect ID as first parameter.');
    }

    var list = getList(id);
    return list[method].call(list, args);
  }

  return createStore(
    merge(spec, createListStoreSpec({
      getList: getList,
      callListMethod: callListMethod
    }))
  );
}

/**
 * Creates a handler that responds to list store pagination actions.
 */
function createListActionHandler(actions) {
  var {
    request: requestAction,
    error: errorAction,
    success: successAction,
    preload: preloadAction
  } = actions;

  invariant(requestAction, 'Pass a valid request action.');
  invariant(errorAction, 'Pass a valid error action.');
  invariant(successAction, 'Pass a valid success action.');

  return function (action, list, emitChange) {
    switch (action.type) {
      case requestAction:
        list.expectPage();
        emitChange();
        break;

      case errorAction:
        list.cancelPage();
        emitChange();
        break;

      case successAction:
        list.receivePage(
          action.response.result,
          action.response.nextPageUrl
        );
        emitChange();
        break;
    }
  };
}

var PaginatedStoreUtils = {
  createListStore: createListStore,
  createIndexedListStore: createIndexedListStore,
  createListActionHandler: createListActionHandler
};
createStoreMixin
A mixin that allows components to tune in to Stores they're interested in, e.g. mixins: [createStoreMixin(UserStore)].
function createStoreMixin(...stores) {
  var StoreMixin = {
    getInitialState() {
      return this.getStateFromStores(this.props);
    },

    componentDidMount() {
      stores.forEach(store =>
        store.addChangeListener(this.handleStoresChanged)
      );

      this.setState(this.getStateFromStores(this.props));
    },

    componentWillUnmount() {
      stores.forEach(store =>
        store.removeChangeListener(this.handleStoresChanged)
      );
    },

    handleStoresChanged() {
      if (this.isMounted()) {
        this.setState(this.getStateFromStores(this.props));
      }
    }
  };

  return StoreMixin;
}
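A component using the mixin then only has to define getStateFromStores, e.g. (a sketch; the login prop, the UserStore.get getter, and the UserProfile child are assumptions):

var UserPage = React.createClass({
  mixins: [createStoreMixin(UserStore)],

  // called by the mixin on mount and whenever UserStore emits a change
  getStateFromStores(props) {
    return {
      user: UserStore.get(props.login)
    };
  },

  render() {
    return <UserProfile user={this.state.user} />;
  }
});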
So in Reflux the concept of the Dispatcher is removed and you only need to think in terms of data flow through actions and stores. I.e.
Actions <-- Store { <-- Another Store } <-- Components
Each arrow here models how the data flow is listened to, which in turn means that the data flows in the opposite direction. The actual figure for data flow is this:
Actions --> Stores --> Components
   ^           |            |
   +----------+------------+
In your use case, if I understood correctly, we need an openUserProfile action that initiates loading the user profile and switching the page, and also some posts-loading actions that load posts when the user profile page is opened and during the infinite scroll event. So I'd imagine we have the following data stores in the application:
A page data store that handles switching pages
A user profile data store that loads the user profile when the page is opened
A posts list data store that loads and handles the visible posts
In Reflux you'd set it up like this:
The actions
// Set up the two actions we need for this use case.
var Actions = Reflux.createActions(['openUserProfile', 'loadUserProfile', 'loadInitialPosts', 'loadMorePosts']);
The page store
var currentPageStore = Reflux.createStore({
  init: function() {
    this.listenTo(Actions.openUserProfile, this.openUserProfileCallback);
  },

  // We are assuming that the action is invoked with a profile id
  openUserProfileCallback: function(userProfileId) {
    // Trigger to the page handling component to open the user profile
    this.trigger('user profile');

    // Invoke the following action to load the user profile
    Actions.loadUserProfile(userProfileId);
  }
});
The user profile store
var currentUserProfileStore = Reflux.createStore({
  init: function() {
    this.listenTo(Actions.loadUserProfile, this.switchToUser);
  },

  switchToUser: function(userProfileId) {
    // Do some ajaxy stuff, then with the loaded user profile
    // trigger the store's internal change event with it
    this.trigger(userProfile);
  }
});
The posts store
var currentPostsStore = Reflux.createStore({
  init: function() {
    // for initial posts loading, listen to when the
    // user profile store changes
    this.listenTo(currentUserProfileStore, this.loadInitialPostsFor);

    // for infinite posts loading
    this.listenTo(Actions.loadMorePosts, this.loadMorePosts);
  },

  loadInitialPostsFor: function(userProfile) {
    this.currentUserProfile = userProfile;

    // Do some ajax stuff here to fetch the initial posts, then send
    // them through the change event
    this.trigger(postData, 'initial');
  },

  loadMorePosts: function() {
    // Do some ajaxy stuff to fetch more posts, then send them through
    // the change event
    this.trigger(postData, 'more');
  }
});
The components
I'm assuming you have a component for the whole page view, the user profile page and the posts list. The following needs to be wired up:
The buttons that open the user profile need to invoke Actions.openUserProfile with the correct id in their click event.
The page component should be listening to the currentPageStore so it knows which page to switch to.
The user profile page component needs to listen to the currentUserProfileStore so it knows what user profile data to show
The posts list needs to listen to the currentPostsStore to receive the loaded posts
The infinite scroll event needs to call Actions.loadMorePosts.
And that should be pretty much it.
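For illustration, the posts list component could be wired up with Reflux's ListenerMixin roughly like this (the component, prop, and handler names are made up for the sketch; how you merge pages depends on what your store triggers with):

var PostsList = React.createClass({
  mixins: [Reflux.ListenerMixin],

  getInitialState: function() {
    return { posts: [] };
  },

  componentDidMount: function() {
    // re-render whenever the posts store triggers; the mixin
    // removes the listener automatically on unmount
    this.listenTo(currentPostsStore, this.onPostsChange);
  },

  onPostsChange: function(postData, mode) {
    this.setState({
      posts: mode === 'more' ? this.state.posts.concat(postData) : postData
    });
  },

  onScrolledToBottom: function() {
    Actions.loadMorePosts();
  },

  render: function() {
    var items = this.state.posts.map(function(post) {
      return <li key={post.id}>{post.text}</li>;
    });
    // call onScrolledToBottom from your infinite scroll handler
    return <ul>{items}</ul>;
  }
});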