What's the difference between a store and a model? - javascript

In Flux-based applications there is a concept called a store. I've been trying to determine what features a store has and how it differs from a model. Does server communication happen in a store? Where does that occur? Are stores always singletons?

Stores are domain models, not ORM models.
They manage application state for a logical domain. They can manage state using collections, single values or a combination of both.
But they have a number of specific features that set them apart from normal models:
They have no setters. No one changes the stores from the outside.
The only way data gets into stores is through the callback they register with the dispatcher. They receive every action that goes through the system through this callback. They define which actions they will respond to, ignoring most of them.
The only methods they publicly expose are a set of getters and methods to register/unregister listeners.
When the state they manage changes, they emit a change event. This alerts the view layer that the state of the store has changed, so the views can query for the new data they need, using the getters.
They can call for new data, but when that data is returned it should be in the form of a new action so that all the stores may respond to it. Doing this ensures the resiliency of the application -- you always have all new data throughout the entire system.
Some people prefer to call for new data in the action creators, rather than the stores, which enforces that the new data will originate with an action. This is perfectly acceptable and actually is more common, I believe. But really either style is fine.
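To make this concrete, here is a minimal sketch of such a store using Facebook's Dispatcher and Node's EventEmitter (the store name, event name and action types are just placeholders):

var Dispatcher = require('flux').Dispatcher;
var EventEmitter = require('events').EventEmitter;

var AppDispatcher = new Dispatcher();
var CHANGE_EVENT = 'change';

var _todos = [];  // private state -- no setters exposed

var TodoStore = Object.assign({}, EventEmitter.prototype, {
  // public getters only
  getAll: function() {
    return _todos;
  },
  emitChange: function() {
    this.emit(CHANGE_EVENT);
  },
  addChangeListener: function(callback) {
    this.on(CHANGE_EVENT, callback);
  },
  removeChangeListener: function(callback) {
    this.removeListener(CHANGE_EVENT, callback);
  }
});

// the only way data gets in: the callback registered with the dispatcher
TodoStore.dispatchToken = AppDispatcher.register(function(action) {
  switch (action.type) {
    case 'TODO_CREATE':
      _todos.push(action.text);
      TodoStore.emitChange();
      break;
    default:
      // every other action is ignored
  }
});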

Related

Three questions about Store in Flux

Following the Flux concepts, I've come across the following assertions, for which I couldn't find explanations.
Every store will receive every action.
Why? My suggestion: since a store contains some business logic, we have to provide it with all possible changes and data so the store can decide what to do with them on its own.
The data in a store must only be mutated by responding to an action.
Why? My suggestion: mutating the data any other way would violate the unidirectional data flow.
Every time a store's data changes it must emit a "change" event.
Why? I can't get this point.
Flux is just a way of managing the data flow of your application, so it is up to the developer to make sure this actually happens. But I'll try to paint a picture of why these concepts are a part of Flux.
Every store will receive every action.
If you have only one dispatcher in your application, every store will listen to actions dispatched through that dispatcher. It is up to you whether or not the store should act on the action dispatched, but to be able to react on it the store has to know of it.
Not all actions should lead to changes in a store, though. But the dispatcher simply doesn't care, because it won't know anything about the store implementation. It's just telling all stores that this action happened, do what you want with it or go on with your life without caring.
The data in a store must only be mutated by responding to an action.
You're right that doing it with a different approach could be a violation of the unidirectional data flow. Doing things this way makes sure all parts of your application have the correct state based on the actions that happen.
By not doing it this way you would let go of one of Flux's strengths. Update your store based on dispatched actions, and other stores will also be aware that the action happened, and can react to it if they want to. If you update the store directly, you will end up with no clear picture of which parts of your application are altering its state.
Every time a store's data changes it must emit a "change" event.
People often describe the stores in a Flux application as the source of truth. When a store's data changes, the basis for the visualization of your data changes. You want to be confident that if your store holds a certain value, this is what your application uses as its data.
It's related to the first quote here. The store doesn't know whether a listener depends on its data. By emitting a change, it lets all listeners know: "Hey, I changed. Make sure you have all my latest changes." If you don't emit a change, a listener could end up displaying something based on old data.
All of these statements are related to the same thing: if an action happens in your application, don't make any assumptions about which part of your application wants to know the details of it. Make sure everyone can act on it if they want to.
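As a rough sketch of the listener side, here is a hypothetical view component (using the createClass-era React API) that subscribes to a store's change event and re-queries it via its getters:

var React = require('react');

var TodoList = React.createClass({
  getInitialState: function() {
    return { todos: TodoStore.getAll() };
  },
  componentDidMount: function() {
    // re-query the store whenever it announces a change
    TodoStore.addChangeListener(this._onChange);
  },
  componentWillUnmount: function() {
    TodoStore.removeChangeListener(this._onChange);
  },
  _onChange: function() {
    this.setState({ todos: TodoStore.getAll() });
  },
  render: function() {
    return React.createElement('ul', null,
      this.state.todos.map(function(text, i) {
        return React.createElement('li', { key: i }, text);
      })
    );
  }
});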

Should I use a single Model object for the whole SPA or many small models for each component?

I am thinking about how to organize Model in my webapp. I have two options, as I see it:
Create a single globally available Model object, then on instantiating new components they must register themselves in the Model object via methods, then update it when their state changes. Others would listen to changes from the Model by some key.
Create a model object for each component and let others know via some Event dispatcher about their state change.
In the first case, I feel it will be easier to manage all the state changes in one place and have a more standardized way of doing it, while with the latter scenario I think it will be harder to maintain consistency across the system.
What are the factors I should base the decision on?
I would go with the first option, but borrow an idea from the second: instead of the components constantly checking keys for changes, have the single model signal the components when needed, after they register. You can pass the signal/callback along with the registration itself.
That way you don't need to create an army of models that update themselves (which might also mean having to avoid cyclic propagation). In addition, if I understand correctly, the second option would violate the single source of truth. It also looks worse both memory- and performance-wise.
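A minimal sketch of that idea, with all names hypothetical:

// single, globally available model
var AppModel = {
  _state: {},
  _listeners: {},   // key -> array of callbacks

  register: function(key, callback) {
    (this._listeners[key] = this._listeners[key] || []).push(callback);
  },

  set: function(key, value) {
    this._state[key] = value;
    // signal only the components registered for this key
    (this._listeners[key] || []).forEach(function(cb) { cb(value); });
  },

  get: function(key) {
    return this._state[key];
  }
};

// a component registers once and is signalled on change
AppModel.register('cart.items', function(items) {
  console.log('cart now has', items.length, 'items');
});
AppModel.set('cart.items', [{ id: 1 }]);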

Partial state changes for Vaadin's AbstractJavascriptComponent

I'm implementing a JavaScript-based Vaadin component that will need to show and update a relatively large data set. I'm doing this by extending AbstractJavaScriptComponent.
I'm trying to keep the JS side as "dumb" as possible, delegating user interactions to the server using RPC; the server then updates the shared state. The JS connector wrapper's onStateChange function is then called with the new state, which causes the DOM to be updated accordingly.
I have 2 problems:
I don't want to transfer the whole data set each time a small part gets updated.
I don't want to entirely rebuild the UI each time either.
I can solve the second problem by keeping the previous state and comparing parts of it to find out what changed and only make the necessary DOM changes.
But that still leaves the first problem.
Do I have to stop using Vaadin's shared state mechanism and instead only use RPC for communicating the changes to the state?
Update:
I've been doing some testing, and it certainly appears that Vaadin's shared state mechanism is horrible in terms of efficiency:
Whenever the component calls getState() in order to update some property in the state object (or even without updating anything), the whole state object is transferred. The only way to avoid this, as far as I can see, is to not use the shared state mechanism and instead use RPC calls to communicate specific state changes to client.
There are some issues with the RPC approach that will need to be resolved, for example: if you change a value multiple times within a single request/response cycle, you don't want to make the RPC call multiple times. Instead, you want only the last value to be sent just like the shared state mechanism only sends the final state in the response. You can keep dirty flags for each part of the state that you want to send separately (or just keep a copy of the previous state and compare), but then you need to somehow trigger the RPC call at the end of the request handling. How can this be done?
Any ideas on this are welcome!
Update 2:
Vaadin 8 fixes the root issue: it sends only the changed state properties. Also, it doesn't call onStateChange() on the JS connector anymore when only doing an RPC call (and not changing any state).
The OP is correct in stating that shared state synchronisation is inefficient for AbstractJavaScriptComponent-based components. The entire state object is serialised and made available to the JavaScript connector's onStateChange method whenever the connector is marked as dirty. Other, non-JavaScript components handle state updates more intelligently, sending only the changes. The exact place in the code where this happens is line 97 in com.vaadin.server.LegacyCommunicationManager.java:
boolean supportsDiffState = !JavaScriptConnectorState.class
        .isAssignableFrom(stateType);
I'm not sure why state updates are handled differently for AbstractJavaScriptComponent-based components. Maybe it's to simplify the JavaScript connector and remove the need to assemble a complete state object from deltas. It would be great if this could be addressed in a future version.
As you suggest, you could dispense with JavaScriptComponentState completely and rely on server->client RPC for updates. Keep dirty flags in your server-side component, or compare old state and new state by any mechanism you want.
To coalesce the changes and send the RPC calls only once per request/response cycle, you could override beforeClientResponse(boolean initial) in your server-side component. This is called just before sending a response to the client and is your chance to add a set of RPC calls to update the client-side component.
Alternatively, you could override encodeState, where you have free rein to send exactly whatever JSON you like to the client. You could choose to add a list of changes to the base JSON object returned by super.encodeState. Your JavaScript connector could then interpret it as appropriate in its onStateChange method.
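As a rough sketch, the connector side could then look something like this (the connector name, the changes array and the applyChange helper are all hypothetical; the changes array is whatever your overridden encodeState appends):

// connector function, name derived from the server-side Java class
window.com_example_MyComponent = function() {
  var self = this;
  var element = this.getElement();

  this.onStateChange = function() {
    var state = self.getState();
    // 'changes' is the hypothetical array appended in encodeState
    var changes = state.changes || [];
    changes.forEach(function(change) {
      // apply only the delta to the DOM; applyChange is a hypothetical helper
      applyChange(element, change.path, change.value);
    });
  };
};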
Edited to add: calling getState() in your server-side component will mark the connector as dirty. If you want to get state without marking it as dirty then use getState(false) instead.
Following our discussion about this, I've created a drop-in replacement for AbstractJavaScriptComponent that transmits state deltas and includes some extra enhancements. It's in the very early stages but should be useful.
https://github.com/emuanalytics/vaadin-enhancedjavascript
The solution is deceptively simple: basically re-enabling state difference calculation by bypassing this line of code in com.vaadin.server.LegacyCommunicationManager.java:
boolean supportsDiffState = !JavaScriptConnectorState.class
        .isAssignableFrom(stateType);
The implementation of the solution is complicated by the fact that the Vaadin classes are not easily extended so I've had to copy and re-implement 6 classes.
Vaadin's shared state works exactly like you want out of the box: when a component is added to the DOM the first time, the whole shared state is transferred from server to client so that it's possible to display the component. After that, only changes are transferred. For example, if one changes the caption of a visible component by calling component.setCaption("new caption"), Vaadin only transfers that new caption text to the client and "merges" it into the client-side shared state instance of the component.

How to integrate Redux with very large data-sets and IndexedDB

I have an app that uses a sync API to get its data, and requires to store all the data locally.
The data set itself is very large, and I am reluctant to store it in memory, since it can contain thousands of records. Since I don't think the actual data structure is relevant, let's assume I am building an email client that needs to be accessible offline, and that I want my storage mechanism to be IndexedDB (which is async).
I know that a simple solution would be to not have the data structure as part of my state object and only populate the state with the required data (e.g. store email content in the state when an EMAIL_OPEN action is triggered). This is quite simple, especially with redux-thunk.
However, this would mean I need to compromise 2 things:
The user data is no longer part of the "application state", although in truth it is. The sync behavior is complex, and removing it from the app state machine hurts the elegance of the Redux concepts (the way I understand them).
I really like the redux architecture and would like all of my logic to go through it, not just the view state.
Are there any best practices on how to use Redux with not-in-memory state properties? The thing I find hardest to wrap my head around is that Redux relies on synchronous APIs, so I cannot replace my state object with an async one (unless I remove Redux completely and replace it with my own async implementation and connector).
I couldn't find an answer using Google, but if there are already good resources on the subject, I would love to be pointed to them.
UPDATE:
The question was answered, but I wanted to give a better explanation of how I implemented it, in case someone runs into this:
The main idea is to maintain change lists for both client and server using plain Redux reducers, and to use a connector that listens to these change lists to update IDB and to push client changes to the server:
When client makes changes, use reducers to update client change list.
When server sends updates, use reducers to update server change list.
A connector listens to the store and, on state change, updates IDB (a rough sketch follows this list). It also maintains an internal list of items that were modified.
When updating the server, use list of modified items to pull delta from IDB and send to server.
When accessing the data, use normal actions to pull from IDB (eg using redux-thunk)
The only caveat with this approach is that, since the real state is stored in IDB, we lose some of the value of having one state object (and it is also harder to rewind/fast-forward the state).
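A rough sketch of such a connector, assuming a hypothetical clientChanges slice maintained by the reducers and an already-opened IndexedDB database db with an emails object store using an in-line key:

// subscribe once, right after creating the Redux store
var lastChangeList = [];

store.subscribe(function() {
  // hypothetical slice maintained by the reducers
  var changeList = store.getState().clientChanges;
  if (changeList === lastChangeList) {
    return;   // nothing new since the last notification
  }
  lastChangeList = changeList;

  // persist only the changed records to IndexedDB
  var tx = db.transaction('emails', 'readwrite');
  var emails = tx.objectStore('emails');
  changeList.forEach(function(change) {
    emails.put(change.record);
  });
});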
I think your first hunch is correct. If(!) you can't store everything in the store, you have to store less in the store. But I believe I can make that solution sound much better:
IndexedDB just becomes another endpoint, much like any server API you consume. When you fetch data from the server, you forward it to IndexedDB, from where your store is then populated. The store gets just what it needs and caches it as long as it doesn't get too big or stale.
It's really not different than, say, Facebook consuming their API. There's never all the data for a user in the store. References are implemented with IDs and these are loaded when required.
You can keep all your logic in Redux. Just create actions as usual for user actions and data changes, get the data you need and process it. The interface is still completely defined by the user data because you always have the information in the store that is needed to GET TO the rest of it when needed. It's just somewhat condensed, i.e. you only save the total number of messages or the IDs of a mailbox until the user navigates to it.
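For example, a redux-thunk action creator that treats IndexedDB as just another endpoint might look roughly like this (the action types, the emails object store and the already-opened db handle are all hypothetical; assumes the thunk middleware is applied):

function openEmail(id) {
  return function(dispatch) {
    dispatch({ type: 'EMAIL_OPEN_REQUEST', id: id });

    var request = db.transaction('emails', 'readonly')
                    .objectStore('emails')
                    .get(id);

    request.onsuccess = function() {
      dispatch({ type: 'EMAIL_OPEN_SUCCESS', email: request.result });
    };
    request.onerror = function() {
      dispatch({ type: 'EMAIL_OPEN_FAILURE', id: id });
    };
  };
}

// usage: store.dispatch(openEmail(42));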

ExtJS - How to use Proxy, Model? How are they related?

I've been trying to learn to work with Models and Stores. But the proxy bit is confusing me a lot. So I'm going to list out my understanding here - please point out the gaps in my understanding.
My understanding
Models are used to represent domain objects.
Models can be created by ModelManager, or by simply using the constructor
Models are saved in Stores
Stores may be in-memory stores, or server-backed stores. This is configured using a Proxy.
A Proxy tells the store how to talk to a backing store - be that a JSON array, a REST resource, or a simple URL accessed via Ajax.
Stores are responsible for storing models, and Proxies are responsible for controlling/helping with that task.
When a model's values are changed, its dirty flag gets set. It gets automatically cleared when the model is saved. (more on this later)
The part that confuses me
Why is there a proxy config and save method on Model? I understand that models can only be stored in Stores.
Why is the dirty flag not cleared simply when I add a model object to a store?
When I add a model object to a store, why does the model not acquire the proxy configured with that store?
proxy is a static configuration for a Model. Does that mean that we cannot use objects of a particular model with multiple data sources? By extension, does this mean having multiple stores for a single model is essentially useless?
When we define a Store, are we defining a class (store-type, if we may call it that), or is it an instance of a store? The reason I ask is that when we declare a grid, we simply pass it a store config as store: 'MyApp.store.MyStore' - does the grid instantiate a store of that type, or is it simply using the store we've already instantiated?
Thanks!
The docs say:
A Model represents some object that your application manages.
A Store is just a collection of Model instances - usually loaded from a server somewhere.
Models are saved in Stores
Not only. Models can also be used separately (e.g. for populating forms with data; check out Ext.form.Panel.loadRecord for more info).
Why is there a proxy config and save method on Model? I understand that models can only be stored in Stores.
As I've said, not only.
Why is the dirty flag not cleared simply when I add a model object to a store?
Why should it be? The Record becomes clean only when it gets synchronized with the corresponding record on the server side.
When I add a model object to a store, why does the model not acquire the proxy configured with that store?
This is, again, because a model can be used without a store.
proxy is a static configuration for a Model. Does that mean that we cannot use objects of a particular model with multiple data sources?
We cannot use instances of a particular model with multiple data sources at once, but we can use one model definition for multiple stores. For example:
Ext.define('MyModel', {
    extend: 'Ext.data.Model',
    // ... config
});

var store1 = Ext.create('Ext.data.Store', {
    model: 'MyModel',
    // ... config
    proxy: {
        // ...
        // this proxy will be used when store1.sync() and store1.load() are called
    }
    // ...
});

// ...

var storeN = Ext.create('Ext.data.Store', {
    model: 'MyModel',
    // ... config
    proxy: {
        // ...
        // this proxy will be used when storeN.sync() and storeN.load() are called
    }
    // ...
});
Since store1 and storeN use different proxies, the records contained by these stores may be completely different.
When we define a Store, are we defining a class (store-type, if we may call it that), or is it an instance of a store?
Yes, when we define a Store, we are defining a class.
The reason I ask is that when we declare a grid, we simply pass it a store config as store: 'MyApp.store.MyStore' - does the grid instantiate a store of that type, or is it simply using the store we've already instantiated?
There are a couple of ways to set the store config for a grid:
store: existingStore,
store: 'someStoresId',
store: 'MyApp.store.MyStore',
In the first and second cases, existing store instances are used. In the third case, a newly created instance of 'MyApp.store.MyStore' is used. So store: 'MyApp.store.MyStore' is equivalent to:
var myStore = Ext.create('MyApp.store.MyStore', {});
// ...
// the following - is in a grid's config:
store: myStore,
UPDATE
When a model is added to a store and then the store's sync() is called, why is the model's dirty flag not cleared?
It should be cleared, unless the reader, writer, or server response is not set up properly. Check out the writer example; there the dirty flag is cleared automatically on the store's sync().
If a model class is not given any proxy at all, why does it track its dirty status?
To let you know whether the record was changed since the moment it was created.
What is achieved by introducing the concept of Models syncing themselves to the server?
Let's assume you are creating a widget which contains a plain set of inputs. The values for these inputs can be loaded from the db (this set of inputs represents one row in a db table). And when the user changes the inputs' values, the data should be sent to the server. This data can be stored in one record.
So what would you use for this interface: one record or a store with only one record?
A standalone model is for widgets that represent one row in a db table.
A store is for widgets that represent a set of rows in a db table.
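A small sketch of the two cases, assuming a Person model with a configured proxy, plus a hypothetical formPanel:

// one row in a db table -> a standalone record loaded into a form
Person.load(42, {
    success: function(record) {
        formPanel.loadRecord(record);   // Ext.form.Panel.loadRecord
    }
});

// a set of rows -> a store backing a grid
var peopleStore = Ext.create('Ext.data.Store', {
    model: 'Person',
    autoLoad: true
});

var grid = Ext.create('Ext.grid.Panel', {
    store: peopleStore,
    columns: [{ text: 'Name', dataIndex: 'name' }]
});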
I'm coming from ExtJS 2.2 [sic] to 4, and the Model behavior and terminology threw me for a loop too.
The best quick explanation I can find is from this post in the "Countdown to ExtJS 4" series on Sencha's blog. Turns out a Model acts like it does because it's "really" a Record.
The centerpiece of the data package is Ext.data.Model. A Model represents some type of data in an application - for example an e-commerce app might have models for Users, Products and Orders. At its simplest a Model is just a set of fields and their data. Anyone familiar with Ext JS 3 will have used Ext.data.Record, which was the precursor to Ext.data.Model.
Here's the confusing part: A Model is both a model for the data you're using and a single instance of an object following that model. Let's call its two uses "Model qua model" and "Model qua Record".
This is why its load method requires a unique id (full stop). Model qua Record uses that id to create RESTful URLs for retrieving (and saving, etc.) a single Record's worth of data. The RESTful URL convention is described here and is linked to in this post on Sencha's blog that talks specifically about Model usage.
Here are a few RESTful URLs formed to that standard to get you familiar with the format, which it appears ExtJS' Model does use:
Retrieve the record with id of 1:
GET /people/1 <<< that's what's used to retrieve a single record into a Model
Destroy the record with id of 2:
DELETE /people/2
Destroy the record with id of 7 (method override for clients that can't send DELETE):
POST /people/7?_method=DELETE
Etc.
This is also why Models have their own proxies, so that they can run RESTful operations via that URL modified to follow the conventions described in that link. You might pull 100s of records from one URL, which you'd use as your Store's proxy source, but when you want to save what's in the single Model (again, think "Model qua Record" here), you might perform those Record-specific operations through another URL whose backend messes with one record at a time.
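A sketch of what that looks like in ExtJS 4, with the model name, fields and URL made up for illustration:

Ext.define('Person', {
    extend: 'Ext.data.Model',
    fields: ['id', 'name'],
    proxy: {
        type: 'rest',
        url: '/people'
    }
});

// Model qua Record: single-record operations go through RESTful URLs
Person.load(1, {                        // GET /people/1
    success: function(person) {
        person.set('name', 'New name');
        person.save();                  // PUT /people/1
    }
});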
So When Do I use Stores?
To store more than one instance of that Model, you'd slap them into a Store. You add lots of Models qua Records into Stores and then access those "Models" to pull data out. So if you have a grid you'll naturally want to have all that data locally, without having to re-query the server for each row's display.
From the first post:
Models are typically used with a Store, which is basically a collection of Model instances.
And, of course, the Stores apparently pull info from Model qua Model here to know what they're carrying.
I found this in the Sencha documentation, App Architecture Part 2:
Use proxies for models:
It is generally good practice to do this as it allows you to load and save instances of this model without needing a store. Also, when multiple stores use this same model, you don’t have to redefine your proxy on each one of them.
Use proxies for stores:
In Ext JS 4, multiple stores can use the same data model, even if the stores will load their data from different sources. In our example, the Station model will be used by the SearchResults and the Stations store, both loading the data from a different location. One returns search results, the other returns the user’s favorite stations. To achieve this, one of our stores will need to override the proxy defined on the model.
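A condensed sketch of what the quoted example describes (the URLs are made up):

Ext.define('MyApp.model.Station', {
    extend: 'Ext.data.Model',
    fields: ['id', 'name'],
    // default proxy: used when loading/saving a Station without a store
    proxy: { type: 'ajax', url: '/stations/favorites' }
});

Ext.define('MyApp.store.Stations', {
    extend: 'Ext.data.Store',
    model: 'MyApp.model.Station'
    // no proxy here, so the model's proxy is used
});

Ext.define('MyApp.store.SearchResults', {
    extend: 'Ext.data.Store',
    model: 'MyApp.model.Station',
    // overrides the proxy defined on the model
    proxy: { type: 'ajax', url: '/stations/search' }
});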
