I'm working on a project that brings a ton of data from one endpoint into a single reducer. I'd like to convert that data into ES6 classes, so I can give them helper methods, define relations between the data, and not have to work with plain JavaScript objects all the time. Also, computing relations between the data currently requires O(n²) work, and that's slowing down the frontend.
Here are the options I'm seeing:
1) Create a selector that connects to the Redux store. The selector would read the data from the reducer and convert it into the multiple ES6 classes I've defined; if the reducer receives new data, the selector recreates the class instances (see the sketch after this list).
2) Use redux-orm (https://github.com/tommikaikkonen/redux-orm), which seems fantastic as well.
3) Create multiple selectors over the data set, each computing a specified relation, so I can just call a selector whenever I need a relation that would otherwise take an O(n²) computation to derive.
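For illustration, a rough sketch of what options 1 and 3 might look like using reselect (the Item class and state shape are invented for the example):

import { createSelector } from 'reselect';

// hypothetical class wrapping a plain item from the store
class Item {
    constructor(data) {
        this.data = data;
    }
    // example helper method
    get label() {
        return this.data.name.toUpperCase();
    }
}

// memoized: recomputes only when state.items changes by reference,
// so the class instances aren't rebuilt on every render
const selectItems = createSelector(
    state => state.items,
    items => items.map(data => new Item(data))
);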
My question is: which of the three routes should I take? Is there an alternative besides these three? Or do people mostly just work with plain JavaScript objects on the frontend and not bother with ES6 classes?
Update:
Two months later, and I'm still using Redux-ORM in production and it is fantastic! Highly recommend.
It's certainly entirely possible to do all that handling with "plain" functions and selectors. There's info on normalization in the Redux FAQ, and I have some articles on selectors and normalization as part of my React/Redux links list.
That said, I am a huge proponent of Redux-ORM. It's a fantastic tool for helping manage normalized/relational data in your Redux store. I use it for normalizing nested data, querying data, and updating that data immutably.
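For a taste of the API, here's a minimal sketch of two related models (this uses the ORM class from the 0.9 beta mentioned below; the Book/Author models and the state slice name are invented):

import { ORM, Model, attr, fk } from 'redux-orm';

class Author extends Model {}
Author.modelName = 'Author';
Author.fields = {
    id: attr(),
    name: attr()
};

class Book extends Model {}
Book.modelName = 'Book';
Book.fields = {
    id: attr(),
    title: attr(),
    author: fk('Author', 'books') // relation, traversable from both sides
};

const orm = new ORM();
orm.register(Author, Book);

// inside a selector: query relations without hand-rolled joins
// const session = orm.session(state.entities);
// session.Author.withId(1).books.toRefArray();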
My Practical Redux blog post series includes two articles talking about Redux-ORM specifically: Redux-ORM Basics and Redux-ORM Concepts and Techniques. The latest post, Practical Redux Part 5: Loading and Displaying Data, shows Redux-ORM in action as well.
The author of Redux-ORM, Tommi Kaikkonen, actually just put up a beta of a major update to Redux-ORM that improves the API and behavior, which I'm looking forward to playing with.
I definitely recommend it!
I am using react-redux for one of my apps. The design is quite complex and the required performance is very high; it's actually a WYSIWYG builder.
We had been using React for the last two months, then moved to react-redux for better separation of code, maintainability, and code readability, and of course to escape the parent-child prop-passing headache.
So, I have an array with quite a complex structure. This is what my state looks like:
const initialState = {
    builder: {},
    CurrentElementName: "",
    CurrentSlideName: "",
    .......
}
As Redux recommends keeping as little data in each object as possible, I have another 8-9 reducers split off from the main state.
My problem: the builder object is very complex, going almost 3-4 levels deep with objects and arrays, and all of it is managed at runtime.
So, in componentDidMount my application calls the server, gets the data, and fills builder: {}
builder: {
    slidedata: [{ index: 0, controName: 'button', .... }, { index: 0, controName: 'slide', .... }],
    currentSlideName: 'slide1',
    currentElementName: 'button1'
}
This builder object is quite complex, and user actions like drag and drop, changing a property, changing events, or changing position all cause the reducer to modify it:
let clonedState = JSON.parse(JSON.stringify(state)); // deep-clones the entire state on every action
// doing other operations
Every time something changes, certain complex operations need to run against this object; for example, adding a new slide performs some work and then appends the slide to the slidedata array.
What is the best practice to speed this up? Am I doing something wrong?
I am not sure what is wrong in this code, and despite Redux's recommendation I cannot use a flat structure for this particular array, since it's managed at runtime.
I am sure that each component has the data it needs.
Is there any way to handle this much data? All this iterating and state cloning is killing my performance.
Based on my experience with the react-redux stack, Reselect and Immutable.js make a perfect combination for your requirement.
Reselect uses memoization on JavaScript objects and provides an API designed for deriving data from large sets of objects efficiently: a selector recomputes only when its inputs change, which improves performance. Read the docs.
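As a hedged illustration using the slidedata shape from your question, a memoized selector might look like:

import { createSelector } from 'reselect';

const selectSlides = state => state.builder.slidedata;

// recomputed only when state.builder.slidedata changes by reference;
// repeated renders reuse the cached result instead of re-iterating
const selectSlideNames = createSelector(
    [selectSlides],
    slides => slides.map(slide => slide.controName)
);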
Note: one should read the documentation carefully before using this setup, to get the best out of these tools.
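It's also worth noting that much of the cost in your snippet comes from JSON.parse(JSON.stringify(state)) deep-cloning the whole tree on every action. A sketch of a targeted immutable update instead (the action shape is assumed):

function builderReducer(state, action) {
    switch (action.type) {
        case 'ADD_SLIDE':
            return {
                ...state,
                // new array, but the existing slide objects are reused by reference
                slidedata: [...state.slidedata, action.payload]
            };
        default:
            return state;
    }
}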
You can either build your own boilerplate on top of the above libraries or use the one I am currently using in my project:
https://www.reactboilerplate.com/
This boilerplate is specifically designed for performance. You can customize it based on your needs.
Hope this helps!
I'm dealing with a handful of JavaScript objects that I get from an external API library. I want to store the incoming objects in my React application using Redux.
These objects are ES2015 classes that also come with two handy methods called fromJSON and toJSON. As I want my Redux store to be serializable (as it should be), I need a way to translate them to plain objects (toJSON does that by giving me back a dict). In my application I need to use these objects as they come from the API, since I need the methods attached to them and the API client also expects these specific objects.
Is this a common need (I can't find much about it online), or am I going down the wrong path entirely? How would I implement such a transformation? I'm currently thinking about attaching the ES2015 class instances to my actions and calling toJSON in my reducers. I could then create specific selectors that take the JSON from Redux and convert it back into the classes using the fromJSON functionality (would I have to memoize them?). These selectors could then be used in mapStateToProps to finally map them to a prop.
Let me know what you think about this and how/if i could improve this process.
Redux says the state SHOULD be serializable. It should be, because at some point you might need to store the state locally (via some form of localStorage). But this does not mean the state has to consist of plain objects.
For example, a lot of projects use Immutable.js objects for their state. There is an overhead to serializing Immutable.js objects, but it can be done using transit-immutable-js.
From what I understand, this principle is similar to your question.
What I take from your question is that your objects (whatever you mean by ES2015 classes) are serializable/deserializable via their toJSON and fromJSON methods, which is all you need.
Thus you can use your "API" objects in state and serialize them only when you need to store the state locally.
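To make that concrete, here is a minimal sketch of the flow you describe (ApiThing stands in for whatever class your API library exports, and the id field is an assumption):

import { createSelector } from 'reselect';

// the reducer keeps only the serializable form in the store
function thingsReducer(state = {}, action) {
    switch (action.type) {
        case 'THING_RECEIVED':
            // action.thing is the class instance; store its plain form
            return { ...state, [action.thing.id]: action.thing.toJSON() };
        default:
            return state;
    }
}

// a memoized selector rebuilds the instances only when the slice changes,
// so mapStateToProps gets stable references between unrelated updates
const selectThings = createSelector(
    state => state.things,
    things => Object.values(things).map(json => ApiThing.fromJSON(json))
);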
Problem
I am trying to design a web app with a fairly complex state, where many single actions should trigger multiple changes and updates across numerous components, including fetching and displaying data asynchronously from external endpoints.
Some context and background:
I am building a hybrid cytoscape.js / redux app for modeling protein interactions using graphs.
My Redux store needs to hold a representation of the graph (a collection of node and edge objects), as well as numerous filtering parameters that can be set by the user (i.e only display nodes that contain a certain value, etc).
The current implementation uses React.js to manage all the state and as the app grew it became quite monolithic, hard to reason about and debug.
Considerations and questions
Having never used Redux before, I'm struggling a bit when trying to conceptually design the new implementation. Specifically, I have the following questions / concerns:
Cytoscape.js is an isolated component since it's directly manipulating the DOM. It expects the state (specifically the node and edge collections) to be of a certain shape, which is nested and a little hard to handle. Since every update to any node or edge object should be reflected graphically in cytoscape, should I mirror the shape it expects in my Redux store, or should I transform it every time I make an update? If so, what would be a good place to do it? mapStateToProps or inside a reducer?
Certain events, such as selecting nodes and/or edges, create multiple side effects across the entire app (data is fetched asynchronously from the server, other data is extracted from the selection and transformed / aggregated, some of it derived and some of it from external API calls). I'm having trouble wrapping my head around how I should handle these changes. More specifically, let's say a SELECTION_CHANGE action is fired. Should it contain the selected objects, or just their IDs? I'm guessing IDs would be less taxing from a performance standpoint. More importantly, how should I handle the cascade of updates the SELECTION_CHANGE action requires? A single SELECTION_CHANGE action should trigger changes in multiple parts of the UI and state tree, meaning it triggers multiple actions across different reducers. What would be a good way to batch / queue / trigger multiple actions depending on SELECTION_CHANGE?
The user needs to be able to filter and manipulate the graph according to arbitrary predicates. More specifically, he should be able to permanently delete / add nodes and edges, and also restrict the view to a particular subset of the graph. In other words, some changes are permanent (deleting / adding or otherwise editing the graph) while others relate only to what is shown (for example, showing only nodes with expression levels above a certain threshold). Should I keep a separate, "filtered" copy of the graph in my state tree, or should I calculate it on the fly for every change in the filtering parameters? And as before, if so, where would be a good place to perform these filtering actions: mapStateToProps, reducers, or someplace else I haven't thought of?
I'm hoping these high-level and abstract questions are descriptive enough of what I'm trying to achieve, and if not I'll be happy to elaborate.
The recommended approach to Redux state shape is to keep your state as minimal as possible, and derive data from that as needed (usually in selector functions, which can be called in a component's mapState and in other locations such as thunk action creators or sagas). For nested/relational data, it works best if you store it in a normalized structure, and denormalize it as needed.
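For the graph case, a normalized slice plus a deriving selector might look roughly like this (the shape is a sketch, and createSelector comes from reselect):

const exampleState = {
    nodes: {
        byId: { n1: { id: 'n1', label: 'TP53' }, n2: { id: 'n2', label: 'MDM2' } },
        allIds: ['n1', 'n2']
    },
    edges: {
        byId: { e1: { id: 'e1', source: 'n1', target: 'n2' } },
        allIds: ['e1']
    },
    filters: { minExpression: 0.5 }
};

// derive the cytoscape-shaped elements on demand instead of mirroring
// cytoscape's nested format inside the store itself
const selectCytoscapeElements = createSelector(
    state => state.nodes,
    state => state.edges,
    (nodes, edges) => [
        ...nodes.allIds.map(id => ({ data: nodes.byId[id] })),
        ...edges.allIds.map(id => ({ data: edges.byId[id] }))
    ]
);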
While what you put into your actions is up to you, I generally prefer to keep them fairly minimal. That means doing lookups of necessary items and their IDs in an action creator, and then looking up the data and doing necessary work in a reducer. As for the reducer handling, there's several ways to approach it. If you're going with a "combined slice reducers" approach, the combineReducers utility will give each slice reducer a chance to respond to a given action, and update its own slice as needed. You can also write more complex reducers that operate at a higher level in the state tree, and do all the nested update logic yourself as needed (this is more common if you're using a "feature folder"-type project structure). Either way, you should be able to do all your updating for a single logical operation with one dispatched action, although at times you may want to dispatch multiple consecutive actions in a row to perform a higher-level operation (such as UPDATE_ITEM -> APPLY_EDITS -> CLOSE_MODAL to handle clicking the "OK" button on an editing popup window).
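As a sketch of the "one dispatched action, several slices respond" idea (the action and slice names are invented):

import { combineReducers } from 'redux';

function selectionReducer(state = [], action) {
    // records which node/edge IDs are currently selected
    return action.type === 'SELECTION_CHANGED' ? action.ids : state;
}

function detailsReducer(state = { loading: false }, action) {
    // the same action also flips the details panel into a loading state
    return action.type === 'SELECTION_CHANGED' ? { ...state, loading: true } : state;
}

const rootReducer = combineReducers({
    selection: selectionReducer,
    details: detailsReducer
});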
I'd encourage you to read through the Redux docs, as they address many of these topics, and point to other relevant information. In particular, you should read the new Structuring Reducers section. Be sure to read through the articles linked in the "Prerequisite Concepts" page. The Redux FAQ also points to a great deal of relevant info, particularly the Reducers, Organizing State, Code Structure, and Performance categories.
Finally, a couple other relevant links. I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Lots of useful info linked from there. I also am a big fan of the Redux-ORM library, which provides a very nice abstraction layer over managing normalized data in your Redux store without trying to change what makes Redux special.
I'm trying to wrap my head around Redux and how to implement it in a React Native app.
I get the general idea and I like it. But I'm not quite sure how to structure my store. I'll try to give an example with two scenes from the app.
ProjectListScreen:
A list of projects built with a ListView component. Each row exposes about 5 fields from each project object.
ProjectScreen:
A ScrollView showing all fields for a specific project. The project object can be quite large and not entirely flat. For example it holds an array of UUIDs pointing to images.
So, should I have a single reducer to handle the complete "Projects" or should I have one reducer for ProjectList and one for Projects? I.e. should I think in terms of the real domain or in terms of views/screens in the app?
I suspect that the answer will be to mimic the domain. But what if there are 1000 projects in the list? I'd need to load 1000 projects into the store, including all fields for each project, yet I only need five of those fields to render the ListView. Only a couple of projects will ever be loaded fully, because the user won't open all 1000 projects in a ProjectScreen. And a change in one project forces a copy of the whole array in order to stay immutable.
I don't want to get stuck in premature optimization, but I'd like to get the structure of the store somewhat right from the start. I know I could use Immutable.js to optimize updating items in the list, but that would give me non-plain JS objects to work with, which feels kind of cumbersome.
I'd rather use seamless-immutable, but I don't think this kind of partial update of a large list will be faster with SI than copying the list.
I'd love to hear that performance will be a non-issue compared with the UI rendering and other tasks. This would make it a no-brainer to go with a domain structure.
Domain, absolutely. Your store structure should totally reflect the actual data you're working with. Your UI layer should then do any transformations needed on a per-component basis, primarily in your mapStateToProps functions. Focus on the actions and behavior that make up your application at its core. (This should also lead to a better testing setup as well.)
The best practice for organizing data that is nested or relational is to normalize it, similar to tables in a database. I've written some prior answers that describe this concept to some extent, and link to some further articles and discussions ( [0], [1], [2] ).
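For the projects example, that might come out looking something like this (the field names are invented):

// store shape: domain-driven and normalized
// projects: {
//     byId: { p1: { id: 'p1', name: '...', client: '...', status: '...', /* all other fields */ } },
//     allIds: ['p1', ...]
// }

// view shape: mapStateToProps picks just the handful of fields a list row needs
const mapStateToProps = state => ({
    rows: state.projects.allIds.map(id => {
        const p = state.projects.byId[id];
        return { id: p.id, name: p.name, client: p.client, status: p.status };
    })
});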
I generally advise against using Immutable.js, at least to start with. It has its uses, and can offer some performance improvements if your app is configured correctly, but also has a number of potential performance pitfalls as well ( [3] ). I would suggest sticking with plain JS objects. There's a variety of ways to use those in an immutable fashion. I have a list of articles describing those approaches ( [4] ), as well as a list of libraries that can make immutable use of plain JS objects easier ( [5] ).
Finally, I'm actually just starting to work on a new page for the Redux docs that describes best practices for structuring reducers. It'll take some time to get it written and polished, but you might want to keep an eye on it. The issue is at [6], and that has a link to my current WIP article branch.
[0] https://stackoverflow.com/a/38105182/62937
[1] https://stackoverflow.com/a/38017227/62937
[2] https://stackoverflow.com/a/37997192/62937
[3] https://github.com/markerikson/react-redux-links/blob/master/react-performance.md#immutable-data
[4] https://github.com/markerikson/react-redux-links/blob/master/immutable-data.md
[5] https://github.com/markerikson/redux-ecosystem-links/blob/master/immutable-data.md
[6] https://github.com/reactjs/redux/issues/1784
edit:
As a follow-up, the Redux docs now include a "Structuring Reducers" section, which has a variety of information on this topic.
Overall I'm happy with using Backbone.js for my company's frontend application. However I've noticed a lot of foundational problems that I wonder if anyone else encountered.
The biggest issue is that the frontend team does not control the API that powers our application. The objects passed are fairly complex in structure: nested arrays, sub-objects, and so on. This in itself is expected. The API serves a different purpose than the frontend; what each considers an "object" are completely different things.
In practice this leads to issues. Namely, one API endpoint may be broken into multiple frontend models. This is a common problem when dealing with APIs. It's typically addressed through Data Access Objects or a Data Access Layer that translates API objects into internal objects. Backbone, by contrast, expects models to be tightly coupled with API endpoints. Sync operations on a model (i.e. save, fetch) immediately reach out to the API.
Adding to the issues, I seriously believe that toJSON in Backbone does too much. It's used to reproduce the model in a format that can be consumed internally; it defines how the model should get posted to the API; and it's used for equality checks between models in many internal operations. Any of the three could be broken out into its own method.
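For instance, a split along those lines might look like this (apiFields is a placeholder for whatever your API actually accepts):

var ApiModel = Backbone.Model.extend({
    // 1) internal representation, e.g. for templates
    toJSON: function () {
        return _.clone(this.attributes);
    },
    // 2) what actually gets sent to the API
    toApiJSON: function () {
        return _.pick(this.attributes, this.apiFields);
    },
    // 3) explicit equality check, decoupled from serialization
    isEqualTo: function (other) {
        return _.isEqual(this.toApiJSON(), other.toApiJSON());
    }
});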
Has anyone else dealt with this? What strategies did you use? Implement DAOs? Is there a fork of Backbone that accounts for these issues?
[edit]
A closer-to-real-world case that I've encountered: when we query the API for results, there are about a hundred filters we can pass along with the request. The overall filter structure is pretty simple on that end, an array of filter objects like so:
{
    filterName: '',
    operator: '',   // one of '~', '=', '<=', '>='
    value: ''
}
For one particularly problematic filter, depending on the 'operator' we either allow the user to select a single option or we allow the user to construct a "sentence" from the available options. The former option renders as an HTML select, the latter we implemented with a lextree parser. Obviously the two necessitate wildly different code so we split this into two different classes, but to the API the filter is the same regardless of our implementation.
This is straightforward enough, but the issues with Backbone come down to how the classes are defined. We may want to make use of the built-in get/set functions, but that dumps properties into attributes, which affects the default toJSON, which in turn is used both to build the API representation of this filter and to check whether this filter is equal to another filter of the same type.
With a Data Access pattern there'd be another layer that knew how to translate that filter to the API and vice versa. Any CRUD operation would get picked up by the specific DAO and processed by proxy.
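A rough sketch of such a layer (SelectFilter and SentenceFilter stand in for the two filter classes described above):

var FilterDAO = {
    // internal filter model -> API wire format
    toApi: function (filter) {
        return {
            filterName: filter.get('filterName'),
            operator: filter.get('operator'),
            value: filter.get('value')
        };
    },
    // API wire format -> whichever internal class applies
    fromApi: function (json) {
        var Ctor = json.operator === '~' ? SentenceFilter : SelectFilter;
        return new Ctor(json);
    }
};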