I have an app in React which, at a basic level, is a document that displays data from an arbitrary number of 2nd-level "child" resources, and each child displays data from an arbitrary number of 3rd-level "grandchild" resources, and each grandchild displays data from an arbitrary number of 4th-level "great-grandchild" resources.
This is the basic JSON structure retrieved from my API server:
{
  id: 1,
  children: [
    {
      id: 1,
      grandchildren: [
        {
          id: 1,
          greatgrandchildren: [{ id: 1 }, { id: 2 }, ...]
        },
        ...
      ]
    },
    ...
  ]
}
The objects at each level have a bunch of additional properties I haven't shown here for simplicity.
As per the recommended React way of doing things, this object is retrieved and set as state at the top-level component of my app when it loads, then the relevant data is passed down as props to the children in order to build out the component tree, which mirrors this structure.
This is fine, however I need to be able to do CRUD (create/read/update/delete) operations on each resource, and it's turning out to be a pain because of the need to pass the entire data object to setState() when I'm just trying to modify a small part of it. It's not so bad at the top or 2nd levels, but anything past that and things get unwieldy quickly due to the need to iterate, fetch the object I want based on id, change it, then build a copy of the entire structure with just that bit changed. The React addons provide an update() function, which is useful, but really only helps with the last step - I still have to deal with nested iteration and rebuilding the relevant array outside that.
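To illustrate the boilerplate, here is roughly what a single grandchild update ends up looking like as a method on the top-level component (a simplified sketch; updateGrandchild and its arguments are placeholder names, and the real methods have more going on):

// Simplified sketch: only the objects along the matching path are replaced;
// everything else is reused by reference.
updateGrandchild(childId, grandchildId, changes) {
  this.setState(prevState => ({
    children: prevState.children.map(child =>
      child.id !== childId ? child : {
        ...child,
        grandchildren: child.grandchildren.map(grandchild =>
          grandchild.id !== grandchildId
            ? grandchild
            : { ...grandchild, ...changes }
        )
      }
    )
  }));
}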
Additional complexity is added because my goal is to optimistically update the view with temp data, then either update it again with the proper data that gets returned from the server (on success) or revert (on fail). This means I need to be careful to keep a copy of the old data/state around without mutating it (i.e. sharing refs).
Finally, I currently have a callback method for each CRUD action for each level defined on my top-level component (12 methods altogether). This seems excessive, but they all need the ability to call this.setState(), and I'm finding it hard to work out how to refactor the commonality among them. I already have a separate API object that does the actual Ajax calls.
So my question is: is there a better React-suitable approach for dealing with CRUD operations and manipulating the data for them with a nested data structure like I have, in a way that permits optimistic updates?
On the surface it looks like the flux architecture might be the answer you are looking for. In this case you would have a Store for each type of resource (or maybe one Store for all of it depending on the structure of the data) and have the views listen for the changes in the data that they care about.
After a bit of research I've come to understand that the state-changing problem is one of the reasons why immutability libraries (like immutable.js and mori) seem to be a hot topic in the React community, as they facilitate modifying small parts of deeply nested structures while keeping the original copy untouched. After some refactoring I was able to make better use of the React.addons.update() helper and shrink things down a bit, but I think using one of these libraries would be the next logical step.
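For example, with Immutable.js a deep update becomes a single call that leaves the original untouched (a minimal sketch, assuming the data has been converted with fromJS and the list indexes are already known):

import { fromJS } from 'immutable';

const doc = fromJS({
  id: 1,
  children: [
    { id: 1, grandchildren: [{ id: 1, greatgrandchildren: [{ id: 1 }] }] }
  ]
});

// Set a value deep in the tree; only the maps/lists along the path are copied.
const updated = doc.setIn(
  ['children', 0, 'grandchildren', 0, 'greatgrandchildren', 0, 'label'],
  'optimistic value'
);

console.log(doc === updated); // false, but `doc` itself is untouched and can be kept around for reverts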
As for the structure/organization stuff, it seems Flux (or another architecture like MV*) exists to solve that problem, which makes sense because React is just a view library.
Related
In a state-managing JavaScript framework (e.g. React), if you have a collection of objects to store in state, which is the more useful and/or performant dataset type to hold them all: an object or an array? Here are a few of the differences I can think of that might come up in using them in state:
Referencing entries:
With objects you can reference an entry directly by its key, whereas with an array you would have to use a function like dataset.find(). The performance difference might be negligible when doing a single lookup on a small dataset, but I imagine it gets larger if the find function has to pore over a large set, or if you need to reference many entries at once.
Updating dataset:
With objects you can add new entries with {...dataset, [newId]: newEntry}, edit old entries with {...dataset, [id]: alteredEntry} and even edit multiple entries in one swoop with {...dataset, [id1]: alteredEntry1, [id2]: alteredEntry2}. Whereas with arrays, adding is easy [...dataset, newEntry1, newEntry2], but to edit you have to use find(), and then probably write a few lines of code cloning the dataset and/or the entry for immutability's sake. And then for editing multiple entries it's going to either require a loop of find() functions (which sounds bad for large lists) or use filter() and then have to deal with adding them back into the dataset afterwards.
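For example, the array edit described above usually ends up as a map() with a per-entry clone (a sketch with a made-up dataset):

const dataset = [{ id: 'a1', done: false }, { id: 'b2', done: false }];

// Edit one entry immutably: clone only the matching entry, reuse the rest by reference.
const edited = dataset.map(entry =>
  entry.id === 'a1' ? { ...entry, done: true } : entry
);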
Deleting entries:
To delete a single entry from the object dataset you would do delete dataset[id], and for multiple entries you would either use a loop, or a lodash function like _.omit(). To remove entries from an array (and keep it dense) you'd have to either use findIndex() and then splice(index, 1) on a copy, or just use filter(), which works nicely for single or multiple deletes. I'm not sure about the performance implications of any of these options.
Looping/Rendering: For an array you can use dataset.map() or even easily render a specialized set on the fly with dataset.filter() or dataset.sort(). For the object to render in React you would have to use Object.values(dataset) before running one of the other iteration functions on it, which I suppose might create a performance hit depending on dataset size.
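A quick sketch of that rendering difference (ItemView is a hypothetical component, the datasets are made up, and this would live inside a component's render):

const arrDataset = [{ id: 'a1', text: 'foo' }, { id: 'b2', text: 'bar' }];
const objDataset = { a1: { id: 'a1', text: 'foo' }, b2: { id: 'b2', text: 'bar' } };

// Array: map directly.
const fromArray = arrDataset.map(item => <ItemView key={item.id} item={item} />);

// Object: convert to an array of values first, then map.
const fromObject = Object.values(objDataset).map(item => (
  <ItemView key={item.id} item={item} />
));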
Are there any points I'm missing here? Does the usability of either one depend perhaps on how large the dataset is, or on how frequently "look up" operations are needed? Just trying to pin down what circumstances might dictate the superiority of one or the other.
There's no one real answer; the only valid answer is It depends™.
Different use-cases require different solutions, and it all boils down to how the data is going to be used.
A single array of objects
Best used when the order matters and when the data is likely rendered as a whole list, where each item is passed down directly from the list loop and where items are rarely accessed individually.
This is the quickest (least developer-time consuming) way of storing received data, if the data is already using this structure to begin with, which is often the case.
Pros of array state
Item order can be tracked easily,
Easy looping, where the individual items are passed down from the list.
It's often the original structure returned from API endpoints,
Cons of an array state
Updating an item would trigger a render of the full list.
Needs a little more code to find/edit individual items.
A single object by id
Best used when the order doesn't matter, and it's mostly used to render individual items, like on an edit item page. It's a step in the direction of a normalized state, explained in the next section.
Pros of an object state
Quick and easy to access/update by id
Cons of an object state
Can't re-order items easily
Looping requires an extra step (e.g. Object.keys(dataset).map())
Updating an item would trigger a render of the full list,
Likely needs to be parsed into the target state object structure
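The parsing mentioned in the last point is typically a one-liner (a sketch, assuming the API returns an array of objects with an id field):

const apiItems = [{ id: 'a1', text: 'foo' }, { id: 'b2', text: 'bar' }];

// Key the array by id to get the object-shaped state.
const byId = apiItems.reduce((acc, item) => ({ ...acc, [item.id]: item }), {});
// => { a1: { id: 'a1', text: 'foo' }, b2: { id: 'b2', text: 'bar' } }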
Normalized state
Implemented using both an object of all items by id, and an array of all the id strings.
{
items: {
byId: { /**/ },
allIds: ['abc123', 'zxy456', /* etc. */],
}
}
This becomes necessary when:
all use-cases are equally likely,
performance is a concern (e.g. a huge list),
the data is nested a lot and/or duplicated at different levels,
re-rendering the list has undesirable side-effects.
An example of an undesirable side-effect: updating an item, which triggers a full list re-render, could lose a modal open/close state.
Pros
Item order can be tracked,
Referencing individual items is quick,
Updating an item:
Requires minimal code
Doesn't trigger a full list render since the full list loops over allIds strings,
Changing the order is quick and clear, with minimal rendering,
Adding an item is simple but requires adding it to both datasets
Avoids duplicated objects in nested data structures
Cons
Individual removal is the worst-case scenario, though it's not a huge deal either.
A little more code needed to manage the state overall.
Might be confusing to keep both state datasets in sync.
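A minimal sketch of the add/remove bookkeeping both datasets need (plain spreads; the helper names are made up, and in practice this would live in a reducer):

// Adding: write to byId and append the id to allIds.
function addItem(state, item) {
  return {
    byId: { ...state.byId, [item.id]: item },
    allIds: [...state.allIds, item.id]
  };
}

// Removing: drop the key from byId and filter the id out of allIds.
function removeItem(state, id) {
  const { [id]: removed, ...byId } = state.byId; // `removed` is simply discarded
  return {
    byId,
    allIds: state.allIds.filter(existingId => existingId !== id)
  };
}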
This approach is a common normalization process used in a lot of places; here are some additional references:
Redux's state normalization is a strongly recommended best practice,
The normalizr lib.
I am using react-redux for one of my apps. The design is quite complex and the performance required is very high; it's actually a WYSIWYG builder.
We have been using React for the last 2 months, then we moved to react-redux for separation of code, better maintainability and code readability, and of course to get away from the parent-child prop-passing headache.
So, I have an array which has quite a complex structure.
This is what my state looks like:
const initialState = {
  builder: {},
  CurrentElementName: "",
  CurrentSlideName: "",
  .......
}
As Redux recommends keeping as little data in each object as possible, I have another 8-9 reducers which are split off from the main state.
My problem: the builder object is very complex, going 3-4 levels deep with objects and arrays, and all of this is managed at runtime.
So, in componentDidMount my application calls the server, gets the data and fills builder:{}
builder:{
slidedata:[{index:0,controName:'button',....},{index:0,controName:'slide',....}],
currentSlideName:'slide1',
currentElementName:'button1'
}
This builder object is quite complex, and depending on user actions like drag and drop, changing a property, changing events, or changing position, it is modified by the reducer:
let clonedState = JSON.parse(JSON.stringify(state)); // deep clone of the entire state
// ...doing other operations
Every time something changes, certain complex operations have to be performed on this object; for example, adding a new slide does some work and then adds the slide to the slidedata array.
What is the best practice to speed these things up? Am I doing something wrong?
I am not sure what is wrong in this code, and despite Redux's recommendation I cannot use a flat structure for this particular array, as it's managed at runtime.
I am sure that each component has the data it needs.
Is there any way to handle this big data? These iterations and state changes are killing my performance.
Based on my experience with the React-Redux stack, Reselect and Immutable.js make a perfect combination for your requirement.
Reselect uses memoization on JavaScript objects and has a list of APIs specifically for dealing with these kinds of large JavaScript objects, thus improving performance. Read the docs.
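A minimal sketch of what that looks like (the selector names are hypothetical; only the state shape is taken from the question):

import { createSelector } from 'reselect';

// Cheap input selectors that just read slices of the store.
const selectSlides = state => state.builder.slidedata;
const selectCurrentSlideName = state => state.builder.currentSlideName;

// The expensive derivation is memoized: it only re-runs when slidedata or
// currentSlideName actually change (by reference), not on every dispatch.
const selectCurrentSlideElements = createSelector(
  [selectSlides, selectCurrentSlideName],
  (slides, currentSlideName) =>
    slides.filter(item => item.slideName === currentSlideName) // hypothetical lookup
);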
Note: One should read the documentation carefully before using this setup, in order to get the best out of these tools.
You can either create your own boilerplate code using the above libraries or use the one I am currently using in my project:
https://www.reactboilerplate.com/
This boilerplate is specifically designed for performance. You can customize it based on your needs.
Hope this helps!
Problem
I am trying to design webapp with a fairly complex state, where many single actions should trigger multiple changes and updates across numerous components, including fetching and displaying data asynchronously from external endpoints.
Some context and background:
I am building a hybrid cytoscape.js / redux app for modeling protein interactions using graphs.
My Redux store needs to hold a representation of the graph (a collection of node and edge objects), as well as numerous filtering parameters that can be set by the user (i.e only display nodes that contain a certain value, etc).
The current implementation uses React.js to manage all the state and as the app grew it became quite monolithic, hard to reason about and debug.
Considerations and questions
Having never used Redux before, I'm struggling a bit when trying to conceptually design the new implementation. Specifically, I have the following questions / concerns:
Cytoscape.js is an isolated component since it's directly manipulating the DOM. It expects the state (specifically the node and edge collections) to be of a certain shape, which is nested and a little hard to handle. Since every update to any node or edge object should be reflected graphically in cytoscape, should I mirror the shape it expects in my Redux store, or should I transform it every time I make an update? If so, what would be a good place to do it? mapStateToProps or inside a reducer?
Certain events, such as selecting nodes and/or edges, create multiple side-effects across the entire app (data is fetched asynchronously from the server, other data is extracted from the selection and is transformed / aggregated, some of it derived and some of it from external API calls). I'm having trouble wrapping my head around how I should handle these changes. More specifically, let's say a SELECTION_CHANGE action is fired. Should it contain the selected objects, or just their IDs? I'm guessing IDs would be less taxing from a performance standpoint. More importantly, how should I handle the cascade of updates a SELECTION_CHANGE action requires? A single SELECTION_CHANGE action should trigger changes in multiple parts of the UI and state tree, meaning changes across different reducers. What would be a good way to batch / queue / trigger multiple actions depending on SELECTION_CHANGE?
The user needs to be able to filter and manipulate the graph according to arbitrary predicates. More specifically, he should be able to permanently delete/add nodes and edges, and also restrict the view to a particular subset of the graph. In other words, some changes are permanent (deleting/adding or otherwise editing the graph) while others relate only to what is shown (for example, showing only nodes with expression levels above a certain threshold, etc.). Should I keep a separate, "filtered" copy of the graph in my state tree, or should I calculate it on the fly for every change in the filtering parameters? And as before, if so, where would be a good place to perform these filtering actions: mapStateToProps, reducers or someplace else I haven't thought of?
I'm hoping these high-level and abstract questions are descriptive enough of what I'm trying to achieve, and if not I'll be happy to elaborate.
The recommended approach to Redux state shape is to keep your state as minimal as possible, and derive data from that as needed (usually in selector functions, which can be called in a component's mapState and in other locations such as thunk action creators or sagas). For nested/relational data, it works best if you store it in a normalized structure, and denormalize it as needed.
While what you put into your actions is up to you, I generally prefer to keep them fairly minimal. That means doing lookups of necessary items and their IDs in an action creator, and then looking up the data and doing necessary work in a reducer. As for the reducer handling, there's several ways to approach it. If you're going with a "combined slice reducers" approach, the combineReducers utility will give each slice reducer a chance to respond to a given action, and update its own slice as needed. You can also write more complex reducers that operate at a higher level in the state tree, and do all the nested update logic yourself as needed (this is more common if you're using a "feature folder"-type project structure). Either way, you should be able to do all your updating for a single logical operation with one dispatched action, although at times you may want to dispatch multiple consecutive actions in a row to perform a higher-level operation (such as UPDATE_ITEM -> APPLY_EDITS -> CLOSE_MODAL to handle clicking the "OK" button on an editing popup window).
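For example, a single SELECTION_CHANGE dispatch can be handled by several slice reducers at once (a rough sketch; the action and state shapes here are illustrative, not prescribed):

import { combineReducers } from 'redux';

// Each slice reducer gets a chance to respond to the same action.
function selection(state = { nodeIds: [], edgeIds: [] }, action) {
  switch (action.type) {
    case 'SELECTION_CHANGE':
      return { nodeIds: action.nodeIds, edgeIds: action.edgeIds };
    default:
      return state;
  }
}

function detailsPanel(state = { visible: false }, action) {
  switch (action.type) {
    case 'SELECTION_CHANGE':
      return { ...state, visible: action.nodeIds.length > 0 };
    default:
      return state;
  }
}

const rootReducer = combineReducers({ selection, detailsPanel });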
I'd encourage you to read through the Redux docs, as they address many of these topics, and point to other relevant information. In particular, you should read the new Structuring Reducers section. Be sure to read through the articles linked in the "Prerequisite Concepts" page. The Redux FAQ also points to a great deal of relevant info, particularly the Reducers, Organizing State, Code Structure, and Performance categories.
Finally, a couple other relevant links. I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Lots of useful info linked from there. I also am a big fan of the Redux-ORM library, which provides a very nice abstraction layer over managing normalized data in your Redux store without trying to change what makes Redux special.
So I started learning React a week ago and I inevitably got to the problem of state and how components are supposed to communicate with the rest of the app. I searched around and Redux seems to be the flavor of the month. I read through all the documentation and I think it's actually a pretty revolutionary idea. Here are my thoughts on it:
State is generally agreed to be pretty evil and a large source of bugs in programming. Instead of scattering it all throughout your app Redux says why not just have it all concentrated in a global state tree that you have to emit actions to change? Sounds interesting. All programs need state so let's stick it in one impure space and only modify it from within there so bugs are easy to track down. Then we can also declaratively bind individual state pieces to React components and have them auto-redraw and everything is beautiful.
However, I have two questions about this whole design. For one, why does the state tree need to be immutable? Say I don't care about time travel debugging, hot reload, and have already implemented undo/redo in my app. It just seems so cumbersome to have to do this:
case COMPLETE_TODO:
  return [
    ...state.slice(0, action.index),
    Object.assign({}, state[action.index], {
      completed: true
    }),
    ...state.slice(action.index + 1)
  ];
Instead of this:
case COMPLETE_TODO:
state[action.index].completed = true;
Not to mention I am making an online whiteboard just to learn and every state change might be as simple as adding a brush stroke to the command list. After a while (hundreds of brush strokes) duplicating this entire array might start becoming extremely expensive and time-consuming.
I'm ok with a global state tree that is independent from the UI and mutated via actions, but does it really need to be immutable? What's wrong with a simple implementation like this (very rough draft, written in a minute)?
var store = { items: [] };

export function getState() {
  return store;
}

export function addTodo(text) {
  store.items.push({ "text": text, "completed": false });
}

export function completeTodo(index) {
  store.items[index].completed = true;
}
It's still a global state tree mutated via emitted actions, but it's extremely simple and efficient.
Isn't Redux just glorified global state?
Of course it is. But the same holds for every database you have ever used. It is better to treat Redux as an in-memory database - which your components can reactively depend upon.
Immutability makes checking whether any sub-tree has been altered very efficient, because it simplifies down to an identity check.
Yes, your implementation is efficient, but the entire virtual DOM will have to be re-rendered each time the tree is manipulated in some way.
If you are using React, it will eventually do a diff against the actual DOM and perform minimal batch-optimized manipulations, but the full top-down re-rendering is still inefficient.
With an immutable tree, stateless components just have to check whether the subtree(s) they depend on are identical to the previous value(s), and if so, the rendering can be avoided entirely.
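A sketch of that identity check in a component (illustrative names, here with shouldComponentUpdate in a class component; with an immutable tree a reference comparison replaces any deep equality check):

import React from 'react';

class NodeList extends React.Component {
  shouldComponentUpdate(nextProps) {
    // "Did my subtree change?" collapses to a reference check.
    return nextProps.nodes !== this.props.nodes;
  }

  render() {
    return (
      <ul>
        {this.props.nodes.map(node => <li key={node.id}>{node.label}</li>)}
      </ul>
    );
  }
}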
Yes it is!!!
Since there is no governance of who is allowed to write a specific property/variable/entry to the store, and since you can practically dispatch any action from anywhere, the code tends to become harder to maintain and even spaghetti-like as your code base grows and/or is managed by more than one person.
I had the same questions and issues with Redux when I started using it, so I created a library that fixes these issues.
It is called Yassi:
Yassi solves the problems you mentioned by defining a globally readable and privately writable store. It means that anyone can read a property from the store (as in Redux, but simpler).
However, only the owner of the property, meaning the object that declares the property, can write/update that property in the store.
In addition, Yassi has other perks, such as zero boilerplate to declare an entry in the store by using annotations (use #yassit('someName')).
Updating the value of that entry does not require actions/reducers or other such cumbersome code snippets; instead, you just update the variable as you would on a regular object.
I am using React and the Flux architecture in a project I'm working on. I am a bit puzzled by how to break up nested data correctly into stores and why I should split my data into multiple stores at all.
To explain the problem I'll use this example:
Imagine a Todo application where you have Projects. Each project has tasks and each task can have notes.
The application uses a REST api to retrieve the data, returning the following response:
{
  projects: [
    {
      id: 1,
      name: "Action Required",
      tasks: [
        {
          id: 1,
          name: "Go grocery shopping",
          notes: [
            {
              id: 1,
              name: "Check shop 1"
            },
            {
              id: 2,
              name: "Also check shop 2"
            }
          ]
        }
      ]
    }
  ]
}
The fictional application's interface displays a list of projects on the left, and when you select a project, that project becomes active and its tasks are displayed on the right. When you click a task you can see its notes in a popup.
What I would do is use 1 single store, the "Project Store". An action does the request to the server, fetches the data and instructs the store to fill itself with the new data. The store internally saves this tree of entities (Projects -> Tasks -> Notes).
To be able to show and hide tasks based on which project is selected I'd also keep a variable in the store, "activeProjectId". Based on that the view can get the active project, its tasks and render them.
Problem solved.
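A rough sketch of that single store (EventEmitter-style Flux; names like activeProjectId come from the description above, and the dispatcher wiring is omitted):

const { EventEmitter } = require('events');

// One store holding the whole Projects -> Tasks -> Notes tree.
const ProjectStore = Object.assign({}, EventEmitter.prototype, {
  projects: [],
  activeProjectId: null,

  fill(projects) {            // called by the "data received" action
    this.projects = projects;
    this.emit('change');
  },

  setActiveProject(id) {      // called by the "project selected" action
    this.activeProjectId = id;
    this.emit('change');
  },

  getActiveProject() {
    return this.projects.find(project => project.id === this.activeProjectId);
  }
});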
However: after searching a bit online to see if this is a good solution I see a lot of people stating that you should use a separate store per entity.
This would mean:
A ProjectStore, TaskStore and NoteStore. To be able to manage associations I would possibly also need a "TasksByProjectStore" and a "NotesByTaskStore".
Can someone please explain why this would be better? The only thing I see is a lot of overhead in managing the stores and the data flow.
There are pros and cons to using one store or multiple stores. Some implementations of Flux specifically favour one store to rule them all, so to speak, while others also facilitate multiple stores.
Whether one store or multiple stores suit your needs depends on a) what your app does, and b) which future developments or changes you expect. In a nutshell:
One store is better if your key concern is the dependencies between your nested entities, and if you are less worried about the server-store-component relationship of each single entity. One store is great if e.g. you want to manage stats at project level about the underlying tasks and notes. Many parent-child-like relationships and all-in-one data fetching from the server favour a one-store solution.
Multiple stores are better if your key concern is the server-store-component connection of each single entity. Weak entity-to-entity relationships and independent single-entity server fetches and updates favour multiple stores.
In your case: my bet would be that one store is better. You have evident parent-child relationship, and get all project data at once from server.
The somewhat longer answer:
One store:
Great to minimize overhead of managing multiple stores.
It works well if your top view component is the only stateful component, and gets all data from the store, and distributes details to stateless children.
However, the need to manage dependencies between your entities does not simply go away: instead of managing them between different stores, you need to manage them inside the single store. Which therefore gets bigger (more lines of code).
Also, in a typical flux setup, each store emits a single 'I have changed' event, and leaves it up to the component(s) to figure out what changed and whether they need to re-render. So if you have many nested entities, and one of the entities receives many updates from the server, then your superstore emits many changes, which might trigger a lot of unnecessary updates of the entire structure and all components. Flux-react can handle a lot, and the detail-changed-update-everything approach is what it handles well, but it may not suit everyone's needs (I had to abandon this approach when it screwed up my transitions between states in one project).
Multiple stores:
more overhead, yes, but for some projects you get returns as well
if you have a close coupling between server data and components, with the flux store in between, it is easier to separate concerns in separate stores
if e.g. you are expecting many changes and updates to your notes data structure, then it is easier to have a stateful component for notes that listens to the notes store, which in turn handles notes data updates from the server. When processing changes in your notes structure, you can focus on the notes store only, without figuring out how notes are handled inside some big superstore.