I am using reactjs and the flux architecture in a project I'm working on. I am a bit puzzled by how to break up nested data correctly into stores and why I should split up my data into multiple stores.
To explain the problem I'll use this example:
Imagine a Todo application where you have Projects. Each project has tasks and each task can have notes.
The application uses a REST api to retrieve the data, returning the following response:
{
projects: [
{
id: 1,
name: "Action Required",
tasks: [
{
id: 1,
name: "Go grocery shopping",
notes: [
{
id: 1,
name: "Check shop 1"
},
{
id: 2,
name: "Also check shop 2"
}
]
}
]
},
]
}
The fictive application's interface displays a list of projects on the left and when you select a project, that project becomes active and its tasks are displayed on the right. When you click a task you can see its notes in a popup.
What I would do is use a single store, the "Project Store". An action does the request to the server, fetches the data and instructs the store to fill itself with the new data. The store internally saves this tree of entities (Projects -> Tasks -> Notes).
To be able to show and hide tasks based on which project is selected I'd also keep a variable in the store, "activeProjectId". Based on that the view can get the active project, its tasks and render them.
Problem solved.
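Roughly, the single store I have in mind would look like this (just a sketch; the action names are made up and it assumes the standard Flux Dispatcher plus Node's EventEmitter):

var EventEmitter = require('events').EventEmitter;
var AppDispatcher = require('./AppDispatcher'); // a Flux Dispatcher instance

var _projects = [];          // the whole Projects -> Tasks -> Notes tree
var _activeProjectId = null;

var ProjectStore = Object.assign({}, EventEmitter.prototype, {
  getActiveProject: function () {
    return _projects.filter(function (p) {
      return p.id === _activeProjectId;
    })[0] || null;
  },
  emitChange: function () { this.emit('change'); }
});

AppDispatcher.register(function (action) {
  switch (action.type) {
    case 'RECEIVE_PROJECTS':   // fired after the REST call succeeds
      _projects = action.projects;
      ProjectStore.emitChange();
      break;
    case 'SELECT_PROJECT':
      _activeProjectId = action.projectId;
      ProjectStore.emitChange();
      break;
  }
});

module.exports = ProjectStore;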
However: after searching a bit online to see if this is a good solution I see a lot of people stating that you should use a separate store per entity.
This would mean:
A ProjectStore, TaskStore and NoteStore. To be able to manage associations I would possibly also need a "TasksByProjectStore" and a "NotesByTaskStore".
Can someone please explain why this would be better? The only thing I see is a lot of overhead in managing the stores and the data flow.
There are pros and cons to using one store or multiple stores. Some implementations of flux specifically favour one store to rule them all, so to speak, while others facilitate multiple stores.
Whether one store or multiple stores suits your needs depends on a) what your app does, and b) which future developments or changes you expect. In a nutshell:
One store is better if your key concern is the dependencies between your nested entities, and you are less worried about the server-store-component relation of each single entity. One store is great if, e.g., you want to manage stats at project level about the underlying tasks and notes. Many parent-child-like relationships and all-in-one data fetching from the server favour a one-store solution.
Multiple stores are better if your key concern is the server-store-component connection of each single entity. Weak entity-to-entity relationships and independent single-entity server fetches and updates favour multiple stores.
In your case: my bet would be that one store is better. You have evident parent-child relationships, and get all project data at once from the server.
The somewhat longer answer:
One store:
Great to minimize overhead of managing multiple stores.
It works well if your top view component is the only stateful component, which gets all data from the store and distributes details to stateless children.
However, the need to manage dependencies between your entities does not simply go away: instead of managing them between different stores, you need to manage them inside the single store. Which therefore gets bigger (more lines of code).
Also, in a typical flux setup, each store emits a single 'I have changed' event, and leaves it up to the component(s) to figure out what changed and whether they need to re-render. So if you have many nested entities, and one of the entities receives many updates from the server, then your superstore emits many changes, which might trigger a lot of unnecessary updates of the entire structure and all components. Flux-react can handle a lot, and the detail-changed-update-everything approach is what it handles well, but it may not suit everyone's needs (I had to abandon this approach when it screwed up my transitions between states in one project).
Multiple stores:
More overhead, yes, but for some projects you get returns as well.
If you have a close coupling between server data and components, with the flux store in between, it is easier to separate concerns into separate stores.
If, e.g., you are expecting many changes and updates to your notes data structure, then it is easier to have a stateful component for notes, which listens to the notes store, which in turn handles notes data updates from the server. When processing changes in your notes structure, you can focus on the notes store only, without figuring out how notes are handled inside some big superstore.
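To make that concrete, here is a minimal sketch of such a dedicated notes store (the names are invented; it assumes the same Flux Dispatcher / EventEmitter setup as usual):

var EventEmitter = require('events').EventEmitter;
var AppDispatcher = require('./AppDispatcher'); // a Flux Dispatcher instance

var _notesByTaskId = {}; // taskId -> array of notes

var NoteStore = Object.assign({}, EventEmitter.prototype, {
  getNotesByTask: function (taskId) {
    return _notesByTaskId[taskId] || [];
  },
  emitChange: function () { this.emit('change'); }
});

AppDispatcher.register(function (action) {
  switch (action.type) {
    case 'RECEIVE_NOTES':   // notes fetched for a single task
      _notesByTaskId[action.taskId] = action.notes;
      NoteStore.emitChange();
      break;
    case 'NOTE_UPDATED':    // one note changed on the server
      _notesByTaskId[action.taskId] = NoteStore.getNotesByTask(action.taskId)
        .map(function (n) { return n.id === action.note.id ? action.note : n; });
      NoteStore.emitChange();
      break;
  }
});

module.exports = NoteStore;

Only the notes component listens to this store, so a flood of note updates never touches the project or task views.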
Related
I have a Next.js application with Redux and a Node.js REST API. You can view items, and you can save items as favorites.
Now the view component of the items and the favorites is obviously the same.
I now have two options for how I can do it, and I want to ask what is best practice with Next.js:
Option 1:
Have two routes. One called "search" and one called "favorites".
Pros:
Clean approach as everything is clearly separated from the root
Cons:
Have to remove the full DOM and add the full DOM just to show favorites, which is essentially the exact same view
Option 2: One route called "search" with a prop for the section
Cons:
Unclean IMO since I need to add a prop to many components
Pro:
Seems way faster to me
The Redux store is organized as follows:
{
search:{
results:[],
total:0,
},
favorites:{
results:[],
total:0,
}
}
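With that shape, option 2 would roughly look like this (just a sketch; ItemList and the section prop are names I made up, and it assumes react-redux):

import { connect } from 'react-redux';
import ItemList from '../components/ItemList'; // the shared view component

// The same page reads a different slice of the store depending on the
// `section` prop ("search" or "favorites").
const mapStateToProps = (state, ownProps) => ({
  results: state[ownProps.section].results,
  total: state[ownProps.section].total,
});

const SectionPage = connect(mapStateToProps)(ItemList);
export default SectionPage;

// usage: <SectionPage section="search" /> or <SectionPage section="favorites" />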
It's hard to provide a concrete answer to this question. Do you want one page for users to browse items, allowing them to filter by which ones they've favorited, or do you want one page for searching items regardless of which ones they have favorited and a separate page for finding favorites? This doesn't seem like a best-practices problem; it's totally your choice. That said, I would probably prefer option 1 because that is how most applications do it. The fact that it is slow is odd; Next.js tends to be very good about preloading. Give your page(s) a try in production mode. Often Next.js is just slow to build them in development mode, and the real page your users will see will be much faster.
Problem
I am trying to design a web app with fairly complex state, where many single actions should trigger multiple changes and updates across numerous components, including fetching and displaying data asynchronously from external endpoints.
Some context and background:
I am building a hybrid cytoscape.js / redux app for modeling protein interactions using graphs.
My Redux store needs to hold a representation of the graph (a collection of node and edge objects), as well as numerous filtering parameters that can be set by the user (e.g. only display nodes that contain a certain value, etc.).
The current implementation uses React.js to manage all the state and as the app grew it became quite monolithic, hard to reason about and debug.
Considerations and questions
Having never used Redux before, I'm struggling a bit when trying to conceptually design the new implementation. Specifically, I have the following questions / concerns:
Cytoscape.js is an isolated component since it's directly manipulating the DOM. It expects the state (specifically the node and edge collections) to be of a certain shape, which is nested and a little hard to handle. Since every update to any node or edge object should be reflected graphically in cytoscape, should I mirror the shape it expects in my Redux store, or should I transform it every time I make an update? If so, what would be a good place to do it? mapStateToProps or inside a reducer?
Certain events, such as selecting nodes and/or edges, create multiple side effects across the entire app (data is fetched asynchronously from the server, other data is extracted from the selection and is transformed / aggregated, some of it derived and some of it from external API calls). I'm having trouble wrapping my head around how I should handle these changes. More specifically, let's say a SELECTION_CHANGE action is fired. Should it contain the selected objects, or just their IDs? I'm guessing IDs would be less taxing from a performance point of view. More importantly, how should I handle the cascade of updates the SELECTION_CHANGE action requires? A single SELECTION_CHANGE action should trigger changes in multiple parts of the UI and state tree, meaning triggering multiple actions across different reducers. What would be a good way to batch / queue / trigger multiple actions depending on SELECTION_CHANGE?
The user needs to be able to filter and manipulate the graph according to arbitrary predicates. More specifically, he should be able to permanently delete/add nodes and edges, and also restrict the view to a particular subset of the graph. In other words, some changes are permanent (deleting/adding or otherwise editing the graph) while others relate only to what is shown (for example, showing only nodes with expression levels above a certain threshold, etc.). Should I keep a separate, "filtered" copy of the graph in my state tree, or should I calculate it on the fly for every change in the filtering parameters? And as before, if so, where would be a good place to perform these filtering actions: mapStateToProps, reducers, or someplace else I haven't thought of?
I'm hoping these high-level and abstract questions are descriptive enough of what I'm trying to achieve, and if not I'll be happy to elaborate.
The recommended approach to Redux state shape is to keep your state as minimal as possible, and derive data from that as needed (usually in selector functions, which can be called in a component's mapState and in other locations such as thunk action creators or sagas). For nested/relational data, it works best if you store it in a normalized structure, and denormalize it as needed.
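As a rough illustration of what "normalized, plus derived on demand" can look like (the shapes and names here are invented, not taken from your app):

// Normalized slices: entities stored by ID, order kept as arrays of IDs.
const initialState = {
  nodes: {
    byId: { n1: { id: 'n1', label: 'Protein A' }, n2: { id: 'n2', label: 'Protein B' } },
    allIds: ['n1', 'n2'],
  },
  edges: {
    byId: { e1: { id: 'e1', source: 'n1', target: 'n2' } },
    allIds: ['e1'],
  },
  selection: { nodeIds: ['n1'] },
};

// Selector: denormalize only what a component (or cytoscape) needs.
// This could be wrapped in reselect's createSelector to memoize it.
function selectSelectedNodes(state) {
  return state.selection.nodeIds.map(id => state.nodes.byId[id]);
}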
While what you put into your actions is up to you, I generally prefer to keep them fairly minimal. That means doing lookups of necessary items and their IDs in an action creator, and then looking up the data and doing necessary work in a reducer. As for the reducer handling, there's several ways to approach it. If you're going with a "combined slice reducers" approach, the combineReducers utility will give each slice reducer a chance to respond to a given action, and update its own slice as needed. You can also write more complex reducers that operate at a higher level in the state tree, and do all the nested update logic yourself as needed (this is more common if you're using a "feature folder"-type project structure). Either way, you should be able to do all your updating for a single logical operation with one dispatched action, although at times you may want to dispatch multiple consecutive actions in a row to perform a higher-level operation (such as UPDATE_ITEM -> APPLY_EDITS -> CLOSE_MODAL to handle clicking the "OK" button on an editing popup window).
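For example, with combineReducers a single dispatched action can be handled by several slice reducers at once (the action type and slices here are made up for illustration):

import { combineReducers } from 'redux';

// Both slice reducers respond to the same SELECTION_CHANGE action.
function selection(state = { nodeIds: [] }, action) {
  switch (action.type) {
    case 'SELECTION_CHANGE':
      return Object.assign({}, state, { nodeIds: action.nodeIds });
    default:
      return state;
  }
}

function detailsPanel(state = { visible: false }, action) {
  switch (action.type) {
    case 'SELECTION_CHANGE':
      return Object.assign({}, state, { visible: action.nodeIds.length > 0 });
    default:
      return state;
  }
}

export default combineReducers({ selection, detailsPanel });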
I'd encourage you to read through the Redux docs, as they address many of these topics, and point to other relevant information. In particular, you should read the new Structuring Reducers section. Be sure to read through the articles linked in the "Prerequisite Concepts" page. The Redux FAQ also points to a great deal of relevant info, particularly the Reducers, Organizing State, Code Structure, and Performance categories.
Finally, a couple other relevant links. I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Lots of useful info linked from there. I also am a big fan of the Redux-ORM library, which provides a very nice abstraction layer over managing normalized data in your Redux store without trying to change what makes Redux special.
I'm getting started with angular2 and wanted to create a todo-list web app. The UI should consist of two pages (= components) which slide in/out via JavaScript.
The first one shows all todos in a vertical list and the other one shows additional details when a todo item from the list is clicked.
I'm now asking myself, what's the right way in angular2 to declare the page components?
Should I build a generic component like this?
<page type="list"></page>
<page type="detail"></page>
Or should I create a new component for each page?
<listpage></listpage>
<detailpage></detailpage>
In general, without knowing more details, my gut sense would be that the latter would be more appropriate, i.e. create a new component for each page.
You require two fundamentally distinct types of entities:
a collection and
a single item from that collection.
The first solution that you propose, i.e. a generic component, would be more suitable to multiple entities that all share some basic underlying structure but differ in some (but not all) details, e.g., two different collection views that both list all items but format those items in two different ways. To force that strategy onto your use case would require your generic page component to have no universally shared structure: what template or logic would you share between a todo-list collection and a single todo-item from that collection? Everything would depend on the value of the type attribute. Then what meaning would page have? Essentially nothing.
Your latter suggestion, i.e. having two distinct components for these two different entities, seems more in the true spirit of how angular components are meant to be used.
I suppose one could argue that a more generic page component/view could have some valuable structure that persists for both the list view and the detail view, e.g. main title, navigation links, user info, etc. However, even if you implemented that, I think you'd want to eventually create separate (more deeply nested?) components for the whole list versus for an individual item, which eventually comes back around to implementing your latter suggestion.
I think a useful model is the example code provided on the official Angular2 web site. I could point to several different examples, but I think the clearest parallel to your situation is in the "Advanced Documentation" section, under the "Routing & Navigation" heading. There they provide code that separates components as follows:
hero-list.component: This would seem to parallel your listpage.
hero-detail.component: This would seem to parallel your detailpage.
Clearly they've separated out these two parts of the app into distinct components.
This sort of strategy decision may also depend on the size/complexity of your "entities". If your "list" and "detail" views were both extremely simple, I suppose you could distinguish between them within a single component (e.g. page), just using an attribute (e.g. type). However, in a todo app, I can't imagine either a list view or a detail view being extremely simple. Thus trying to squash both into a single page component would make the component too complex.
I have an app in React which, at a basic level, is a document that displays data from an arbitrary number of 2nd-level "child" resources, and each child displays data from an arbitrary number of 3rd-level "grandchild" resources, and each grandchild displays data from an arbitrary number of 4th-level "great-grandchild" resources.
This is the basic JSON structure retrieved from my API server:
{ id: 1,
children: [
{ id: 1,
grandchildren: [
{ id: 1,
greatgrandchildren: [{ id: 1 }, { id: 2 }, ...]
},
...
]
},
...
]
}
The objects at each level have a bunch of additional properties I haven't shown here for simplicity.
As per the recommended React way of doing things, this object is retrieved and set as state at the top-level component of my app when it loads, then the relevant data is passed down as props to the children in order to build out the component tree, which mirrors this structure.
This is fine; however, I need to be able to do CRUD (create/read/update/delete) operations on each resource, and it's turning out to be a pain because of the need to pass the entire data object to setState() when I'm just trying to modify a small part of it. It's not so bad at the top or 2nd levels, but anything past that and things get unwieldy quickly, due to the need to iterate, fetch the object I want based on id, change it, then build a copy of the entire structure with just that bit changed. The React addons provide an update() function, which is useful, but really only helps with the last step: I still have to deal with nested iteration and rebuilding the relevant array outside that.
Additional complexity is added because my goal is to optimistically update the view with temp data, then either update it again with the proper data that gets returned from the server (on success) or revert (on fail). This means I need to be careful to keep a copy of the old data/state around without mutating it (i.e. sharing refs).
Finally, I currently have a callback method for each CRUD action for each level defined on my top-level component (12 methods altogether). This seems excessive, but they all need the ability to call this.setState(), and I'm finding it hard to work out how to refactor the commonality among them. I already have a separate API object that does the actual Ajax calls.
So my question is: is there a better React-suitable approach for dealing with CRUD operations and manipulating the data for them with a nested data structure like I have, in a way that permits optimistic updates?
On the surface it looks like the flux architecture might be the answer you are looking for. In this case you would have a Store for each type of resource (or maybe one Store for all of it depending on the structure of the data) and have the views listen for the changes in the data that they care about.
After a bit of research I've come to understand that the state-changing problem is one of the reasons why immutability libraries (like immutable.js and mori) seem to be a hot topic in the React community, as they facilitate modifying small parts of deeply nested structures while keeping the original copy untouched. After some refactoring I was able to make better use of the React.addons.update() helper and shrink things down a bit, but I think using one of these libraries would be the next logical step.
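For instance, an immutable update of a single grandchild with the update() helper ends up looking roughly like this (a sketch; the helper function and IDs are mine, not from the real app):

var update = require('react-addons-update'); // or React.addons.update

// Change one grandchild while keeping every untouched reference in the
// tree identical (indices are looked up by id first).
function updateGrandchild(doc, childId, grandchildId, changes) {
  var ci = doc.children.findIndex(function (c) { return c.id === childId; });
  var gi = doc.children[ci].grandchildren.findIndex(function (g) { return g.id === grandchildId; });

  var spec = { children: {} };
  spec.children[ci] = { grandchildren: {} };
  spec.children[ci].grandchildren[gi] = { $merge: changes };

  return update(doc, spec);
}

// usage, e.g. in the top-level component:
// this.setState({ doc: updateGrandchild(this.state.doc, 1, 2, { name: 'Edited' }) });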
As for the structure/organization stuff, it seems Flux (or another architecture like MV*) exists to solve that problem, which makes sense because React is just a view library.
I am experimenting with a few best practices in AngularJS, especially on designing the model. One true power of AngularJS, in my opinion, is
'When model changes view gets updated & vice versa'
. That leads to the obvious fact
'At any given time the model is the single source of truth for
application state'
Now, after reading various blog posts on designing the right model structure, I decided to use something like a 'Single Object' approach, meaning the whole app state is maintained in a single JavaScript object.
For example, for a to-do application:
$scope.appState = {
name: "toDoApp",
auth: {
userName: "John Doe",
email: "john#doe.com",
token: "DFRED%$%ErDEFedfereRWE2324deE$%^$%#423",
},
toDoLists: [
{ name: "Personal List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 1},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0}]
},
{ name: "Work List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : -1},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0}]
},
{ name: "Family List",
items: [
{ id: 1, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 2},
{ id: 2, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 0},
{ id: 3, task: "Do something ", dueDate: "2013-01-10 11:56:05", status:"Open", priority : 5}]
}
]
};
This object will grow HUGE depending on the application's complexity. Regarding this, I have the below worries, which I've marked as questions.
Is such an approach advisable? What are the downsides and pitfalls I will face when the application starts to scale?
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render only the delta, or will it consider the whole object changed and re-render the whole screen? (This would lead to poor performance.) If so, what are the workarounds?
Now, since the whole DOM got smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was the king of DOM manipulation?
Despite the above doubts, I strongly see the below advantages.
Data is neatly abstracted & well organized, so that at any time it can be serialized to the server, Firebase, or exported locally for the user.
Implementing crash recovery will be easy. Think of this feature as the 'Hibernate' option on desktops.
Model & View are totally decoupled. For example, company A can write the Model to maintain state, a few obvious Controllers to change the model, and some basic views to interact with users. Company A can then invite other developers to openly write their own views, requesting more controllers and REST methods from company A. This will empower LEAN development.
What if I start versioning this object to the server? Then I can play the application back to the user in the SAME way he saw the website, and he can continue to work without hassle. This would work as a true back button for single-page apps.
In my day job we use the "state in a single object" pattern in a big enterprise AngularJS application. So far I can only see benefits. Let me address your questions:
Is such an approach advisable? What are the downsides and pitfalls I will face when the application starts to scale?
I see two main benefits:
1) DEBUGGING. When the application scales, the toughest question to answer is what is happening to my application right now? When I have all the application state in one object, I can just print it on the console at any point in time while the application is running.
That means it's much easier to understand what is happening when an error occurs, and you can even do it with the application on production, without the need to use debugging tools.
Write your code using mostly pure functions that process the state object (or parts of it), inject those functions and the state in the console, and you'll have the best debugging tool available. :)
2) SIMPLICITY. When you use a single object, you know very clearly in your application what changes state and what reads state. And they are completely separate pieces of code. Let me illustrate with an example:
Suppose you have a "checkout" screen, with a summary of the checkout, the freight options, and the payment options. If we implement Summary, Freight and PaymentOptions with separate and internal state, that means that every time the user changes one of the components, you have to explicitly change the others.
So, if the user changes the Freight option, you have to call the Summary in some way, to tell it to update its values. The same has to happen if the user selects a PaymentOption with a discount. Can you see the spaghetti code building?
When you use a centralized state object things get easier, because each component only interacts with the state object. Summary is just a pure function of the state. When the user selects a new freight or payment option, the state is updated. Then, the Summary is automatically updated, because the state has just changed.
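In code, that can be as simple as this (a sketch; the state shape and field names are invented):

// The summary is just a pure function of the state object.
function summarize(state) {
  var itemsTotal = state.cart.items.reduce(function (sum, item) {
    return sum + item.price * item.quantity;
  }, 0);
  var freight = state.checkout.selectedFreight.price;
  var discount = state.checkout.selectedPayment.discount || 0;
  return {
    itemsTotal: itemsTotal,
    freight: freight,
    total: (itemsTotal + freight) * (1 - discount)
  };
}

// When the user picks a new freight or payment option, only the state is
// updated; the Summary component simply re-renders summarize(state).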
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render only the delta, or will it consider the whole object changed and re-render the whole screen? (This would lead to poor performance.) If so, what are the workarounds?
We encountered some performance problems when using this architecture with Angular. Angular dirty checking works very well when you use watchers on objects, and not so well when you use expensive functions. So, what we usually do when finding a performance bottleneck is save a function result in a "cache" object set on $scope. Every time the state changes we calculate the function again and save it to the cache. Then we reference this cache in the view.
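Roughly like this (a sketch; the module, service and computeSummary function are placeholders, not code from our app):

var app = angular.module('demo', []);

// The single state object, registered as an injectable value for the sketch.
app.value('AppState', {
  checkout: { items: [{ price: 10 }, { price: 15 }] }
});

// Placeholder for an expensive pure function over the state.
function computeSummary(state) {
  return {
    total: state.checkout.items.reduce(function (sum, i) { return sum + i.price; }, 0)
  };
}

app.controller('CheckoutCtrl', function ($scope, AppState) {
  $scope.state = AppState;
  $scope.cache = {};

  // Recompute only when the relevant sub-tree changes; the view binds to
  // the cheap cached value ({{cache.summary.total}}) instead of calling
  // the expensive function on every digest.
  $scope.$watch('state.checkout', function () {
    $scope.cache.summary = computeSummary($scope.state);
  }, true);
});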
Now, since the whole DOM got smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was the king of DOM manipulation?
Yes! :) Since we have a big object as our state, each and every library written to manipulate objects can be used here.
All the benefits you mentioned are true: it makes it easier to take snapshots of the application, serialize it, implement undos...
I've written some more implementation details of the centralized state architecture in my blog.
Also look for information regarding frameworks that are based on the centralized state idea, like Om, Mercury and Morearty.
Is such an approach advisable? What are the downsides and pitfalls I will face when the application starts to scale?
I'd say overall this is probably not a great idea, since it creates major problems with the maintainability of this massive object and introduces the possibility of side effects throughout the entire application, making it hard to properly test.
Essentially you get no sort of encapsulation since any method anywhere in the app might have changed any part of the data model.
When a small portion of the object is updated, say a priority is increased, will Angular smartly re-render only the delta, or will it consider the whole object changed and re-render the whole screen? (This would lead to poor performance.) If so, what are the workarounds?
When a digest occurs in Angular, all the watches are processed to determine what has changed, and all changes cause the handlers for those watches to be called. So long as you are conscious of how many DOM elements you're creating and do a good job of managing that number, performance isn't a huge issue. There are also options like bind-once to avoid having too many watchers if that becomes an issue. (Chrome's profiling tools are great for figuring out whether you need to work on these problems and for finding the correct targets for performance.)
Now, since the whole DOM got smoothly translated into one JavaScript object, the application has to keep manipulating this object. Do we have the right tools for complex JavaScript object manipulation, the way jQuery was the king of DOM manipulation?
You can still use jQuery, but you would want to do any DOM manipulation within directives. This allows you to apply it within the view, which is really the driver of the DOM. This keeps your controllers from being tied to the views and keeps everything testable. Angular includes jqLite, a smaller derivative of jQuery, but if you include jQuery before Angular it will use that instead, and you can use the full power of jQuery within Angular.
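For example, a small directive is the natural home for that kind of DOM work (a sketch; the directive name and behaviour are made up, and `app` is assumed to be your existing module):

// Keep jQuery/jqLite DOM work inside a directive's link function,
// not in a controller.
app.directive('highlightOnChange', function () {
  return {
    restrict: 'A',
    scope: { value: '=highlightOnChange' },
    link: function (scope, element) {
      scope.$watch('value', function (newVal, oldVal) {
        if (newVal !== oldVal) {
          // element is a jqLite wrapper, or a full jQuery wrapper if jQuery
          // was loaded before Angular
          element.addClass('changed');
        }
      });
    }
  };
});

// usage: <span highlight-on-change="appState.toDoLists.length">...</span>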
Data is neatly abstracted & well organized, so that at any time it can be serialized to the server, Firebase, or exported locally for the user.
Implementing crash recovery will be easy. Think of this feature as the 'Hibernate' option on desktops.
This is true but I think it's better if you come up with a well defined save object that persists the information you need to restore state rather than storing the entire data model for all the parts of the app in one place. Changes to the model over time will cause problems with the saved versions.
Model & View are totally decoupled. For example, company A can write the Model to maintain state, a few obvious Controllers to change the model, and some basic views to interact with users. Company A can then invite other developers to openly write their own views, requesting more controllers and REST methods from company A. This will empower LEAN development.
So long as you write your models in your controllers this still remains true and is an advantage. You can also write custom directives to encapsulate functionality and templates so you can greatly reduce the complexity of your code.
What if I start versioning this object to the server? Then I can play the application back to the user in the SAME way he saw the website, and he can continue to work without hassle. This would work as a true back button for single-page apps.
I think ngRoute or ui-router really already cover this for you and hooking into them to save the state is really a better route (no pun intended).
Also just my extended opinion here, but I think binding from the view to the model is one of the great things using Angular gives us. In terms of the most powerful part I think it's really in directives which allow you to extend the vocabulary of HTML and allow for massive code re-use. Dependency injection also deserves an honorable mention, but yeah all great features no matter how you rank 'em.
I don't see any advantages in this approach. How is one huge object more abstracted than, say, an application model (app name, version, etc.), a user model (credentials, tokens, other auth stuff), and a toDoList model (a single object with a name and a collection of tasks)?
Now, regarding the decoupling of view and model: let's say you need a widget to display the current user's name. In the Single Object approach, your view would look something like this:
<div class="user" ng-controller="UserWidgetCtrl">
<span>{{appState.auth.userName}}</span>
</div>
Compare with this:
<div class="user" ng-controller="UserWidgetCtrl">
<span>{{user.userName}}</span>
</div>
Of course, you might argue that it is up to UserWidgetCtrl to provide access to the user object, thus hiding the structure of appState. But then again, the controller must be coupled to the appState structure:
function UserWidgetCtrl($scope) {
$scope.user = $scope.appState.auth;
}
Compare with this:
function UserWidgetCtrl($scope, UserService) {
UserService.getUser().then(function(user) {
$scope.user = user;
})
}
In the latter case, the controller does not get to decide where the user data comes from. In the former case, any change in the appState hierarchy means that either the controllers or the views will have to be updated. You could, however, keep a single object behind the scenes, but access to its separate parts (the user, in this case) should be abstracted by dedicated services.
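Such a service can still be backed by the single object internally, for example (a sketch; `app` is your module and AppState is an assumed injectable holding that object):

// UserService hides where the user data lives; today it reads from the
// in-memory appState, tomorrow it could call the server instead without
// touching any controller that consumes it.
app.factory('UserService', function ($q, AppState) {
  return {
    getUser: function () {
      return $q.when(AppState.auth);
      // later this could become: return $http.get('/api/user').then(...)
    }
  };
});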
And don't forget: as your object structure grows, your $watches will get slower and consume more memory, especially with the object-equality flag turned on.