Reflux actions that commit multiple local changes to a remote source - javascript

What's a sustainable, consistent and logical way to deal with Reflux actions that commit a number of changes that have only been made locally to a remote source?
The unidirectional dataflow makes perfect sense if user interaction always (or never, as in most tutorial examples) gives rise to remote changes. A typical case:
The user enters a value into a field, which triggers an action (e.g. CommentActions.setTitle(title))
The action makes a remote call, and upon success triggers the CommentActions.setTitle.completed(title) child action
The store listens through e.g. CommentStore#onSetTitleCompleted(title), and changes its state
A component is re-rendered reflecting the new state (e.g. state.title).
Another way to put this might be that every modification to the local store is mirrored by a change in the remote store. However, for various reasons, this is not always desirable. In a lot of situations, it makes sense to only call the server once editing is complete, e.g. upon pressing a "Save" or "Post" button.
I can see this happening in a multitude of situations. For example, without knowing the implementation details, it seems likely that when creating events on Facebook, all data (date, title, description, guest list etc.) is stored locally until the user hits the "Create Event" button.
It seems logical to introduce an action like CommentActions.postComment(), but that creates a difficulty: there is no obvious way for the action to access all the data that has been supplied and edited to that point, unlike the first case where each action is fed the data subject to change (e.g. the title).
The best way I've come up with so far is to make the component responsible for feeding the right data to the action. The local values are held in the store, and are thus accessible to the component by mixing in its state. When it's time for the final save, it's triggered by e.g. CommentActions.postComment(this.state.title, this.state.body, this.state.email, ...).
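For illustration, a minimal sketch of that pattern, assuming Reflux's object-style createActions (so postComment gets .completed/.failed children) and a component that mixes in the store's state; api.saveComment and the field names are hypothetical:

// CommentActions.js (sketch; api.saveComment is a hypothetical remote call)
var Reflux = require('reflux');
var api = require('./api');

var CommentActions = Reflux.createActions({
    setTitle: {},                       // local-only edit
    setBody: {},                        // local-only edit
    postComment: { asyncResult: true }  // gets .completed / .failed child actions
});

CommentActions.postComment.listen(function (title, body) {
    api.saveComment({ title: title, body: body })
        .then(this.completed, this.failed);
});

// CommentStore.js (accumulates local edits; only the remote call's completion touches server data)
var CommentStore = Reflux.createStore({
    listenables: CommentActions,
    getInitialState: function () {
        this.state = { title: '', body: '' };
        return this.state;
    },
    onSetTitle: function (title) {
        this.state.title = title;
        this.trigger(this.state);
    },
    onSetBody: function (body) {
        this.state.body = body;
        this.trigger(this.state);
    },
    onPostCommentCompleted: function (saved) {
        this.state = Object.assign({}, this.state, saved); // e.g. merge the new id
        this.trigger(this.state);
    }
});

// In the component (state mixed in from CommentStore):
// onSave: function () { CommentActions.postComment(this.state.title, this.state.body); }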
I suppose there are many solutions involving eschewing the Flux flow (e.g. having the store call actions) but I'm primarily interested in solutions that don't break the very foundational principles of the library :)

Related

Web interface to allow clients to write and execute conditional statements

I'm writing a web-based system using flask with react and redux that needs to have web-based clients write conditional statements that can be saved to a configuration file, but also executed in real time without restarting the server or other services.
Obviously this can be done using eval(), but just as obviously we won't be using that.
Any safe ways to run user conditions that call on live variables to perform calculations?
As an example they might want to perform a standard conditional calculation like:
if (a === 1 && (b === 2 || c === 2)) {
    // do something
}
Where a, b, and c are values that are provided from the server to the client and change dynamically.
UPDATE based on question:
The server provides real-time updates on alarms monitored by the server. When an alarm changes state - say from no-alarm to in-alarm - it sends the new data to the client.
The client side renders this information as a list of alarms. The list can be filtered easily enough, but one issue is that you can have an alarm flood event where ~1000 alarms all come in simultaneously. You also have a few standard/common events where a particular series of alarms all change to a particular state at the same time and that indicates a particular issue/fault and hence a particular fix.
Each user is unique, so it can't be a one-size-fits-all approach, and it would be useful if each user could set some basic rules that determine what message to display based on the value of any combination of alarms and their alarm states. They would use a browser form to select these condition states, which they can submit to the server. This inserts a line into their personal configuration file held on the server, so that each time they log in they automatically have access to these calculations.
If an alarm changes state, it is sent to the client, which then automatically performs the calculation in the background to determine whether a message needs to be displayed.
It appears the best approach is to allow the strings, but to ensure that every string is passed through a carefully constructed sanitizing function that removes anything malicious.
I'll probably run this function several times: first on submission, and again just before executing.
If these functions are developed using an interactive GUI of dropdown buttons and auto-fill input fields that constructs the code for the user (i.e. the user doesn't actually write the code), it should be safe enough.
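One way to sidestep string execution entirely, if the GUI is building the condition anyway, is to have it emit a structured rule (plain JSON) and evaluate that with a tiny interpreter instead of eval(). A rough sketch; the rule shape and field names here are made up:

// A rule the GUI could produce and store in the user's config file (hypothetical shape)
var rule = {
    all: [
        { alarm: 'a', is: 1 },
        { any: [ { alarm: 'b', is: 2 }, { alarm: 'c', is: 2 } ] }
    ],
    message: 'Pump room fault'
};

// Evaluate the rule tree against the current alarm values, no eval() involved
function evaluate(node, values) {
    if (node.all) { return node.all.every(function (child) { return evaluate(child, values); }); }
    if (node.any) { return node.any.some(function (child) { return evaluate(child, values); }); }
    return values[node.alarm] === node.is;
}

// evaluate(rule, { a: 1, b: 2, c: 5 })  // -> true, so show rule.message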

Three questions about Store in Flux

Following the Flux concepts, we arrive at the following assertions, for which I couldn't find explanations.
Every store will receive every action.
Why? My suggestion: since a store contains some business logic, we have to provide it with all possible changes and data so the store can decide what to do with them on its own.
The data in a store must only be mutated by responding to an action.
Why? My suggestion: mutating data other than in response to an action would violate the unidirectional data flow.
Every time a store's data changes it must emit a "change" event.
Why? I can't get this point.
Flux is just a way of managing the data flow of your application, so it is up to the developer to make sure this actually happens. But I'll try to paint a picture of why these concepts are a part of Flux.
Every store will receive every action.
If you have only one dispatcher in your application, every store will listen to actions dispatched through that dispatcher. It is up to you whether or not the store should act on the action dispatched, but to be able to react to it the store has to know about it.
Not all actions should lead to changes in a store, though. But the dispatcher simply doesn't care, because it won't know anything about the store implementation. It's just telling all stores that this action happened, do what you want with it or go on with your life without caring.
The data in a store must only be mutated by responding to an action.
You're right that taking a different approach could be a violation of unidirectional data flow. Doing things this way makes sure all parts of your application have the correct state based on the actions that happen.
By not doing it this way you would let go of one of Flux's strengths. Update your store based on dispatched actions, and other stores will also be aware that the action happened and can react to it if they want to. If you update the store directly, you will end up with no clear picture of which parts of your application are altering its state.
Every time a store's data changes it must emit a "change" event.
People often describe the stores in a Flux application as the source of truth. When a store's data changes, the basis for the visualization of your data changes. You want to be confident that if your store holds a certain value, that is what your application uses as its data.
It's related to the first quote here. The store doesn't know whether a listener depends on its data. By emitting a change event, it lets all listeners know: hey, I changed, make sure you have my latest data. If you don't emit a change, a listener could end up displaying something based on stale data.
All of these statements are related to the same thing: if an action happens in your application, don't make any assumptions about which parts of your application want to know the details of it. Make sure every part can act on it if it wants to.
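To make the three points concrete, here is a minimal sketch using Facebook's flux Dispatcher and Node's EventEmitter; the store and action names are invented:

var Dispatcher = require('flux').Dispatcher;
var EventEmitter = require('events').EventEmitter;

var dispatcher = new Dispatcher();
var _comments = [];

var CommentStore = Object.assign({}, EventEmitter.prototype, {
    getAll: function () { return _comments; },
    emitChange: function () { this.emit('change'); },          // point 3
    addChangeListener: function (cb) { this.on('change', cb); }
});

// Point 1: the registered callback receives *every* dispatched action.
// Point 2: this callback is the only place the store's data is mutated.
dispatcher.register(function (action) {
    switch (action.type) {
        case 'COMMENT_ADDED':
            _comments.push(action.comment);
            CommentStore.emitChange();   // tell every listener the data changed
            break;
        default:
            // an action this store doesn't care about; ignore it
            break;
    }
});

// somewhere in an action creator:
// dispatcher.dispatch({ type: 'COMMENT_ADDED', comment: comment });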

Partial state changes for Vaadin's AbstractJavascriptComponent

I'm implementing a JavaScript-based Vaadin component that will need to show and update a relatively large data set. I'm doing this by extending AbstractJavaScriptComponent.
I'm trying to keep the JS side as "dumb" as possible, delegating user interactions to the server using RPC, which then updates the shared state. The JS connector wrapper's onStateChange function is then called with the new state, which causes the DOM to be updated accordingly.
I have 2 problems:
I don't want to transfer the whole data set each time a small part gets updated.
I don't want to entirely rebuild the UI each time either.
I can solve the second problem by keeping the previous state and comparing parts of it to find out what changed and only make the necessary DOM changes.
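For what it's worth, a rough sketch of that diffing inside the JavaScript connector; getState() and onStateChange are the standard AbstractJavaScriptComponent connector hooks, while the rows property and updateRowElement helper are made up:

// connector.js, inside the window.com_example_MyComponent = function () { ... } wrapper
var previous = null;

this.onStateChange = function () {
    var state = this.getState();                  // full shared state from the server
    var rows = state.rows || [];                  // hypothetical data property
    var oldRows = (previous && previous.rows) || [];

    rows.forEach(function (row, i) {
        // only touch the DOM for rows that actually changed
        if (!oldRows[i] || oldRows[i].value !== row.value) {
            updateRowElement(i, row);             // hypothetical DOM update helper
        }
    });

    previous = JSON.parse(JSON.stringify(state)); // keep a copy for the next diff
};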
But that still leaves the first problem.
Do I have to stop using Vaadin's shared state mechanism and instead only use RPC for communicating the changes to the state?
Update:
I've been doing some testing, and it certainly appears that Vaadin's shared state mechanism is horribly inefficient:
Whenever the component calls getState() in order to update some property in the state object (or even without updating anything), the whole state object is transferred. The only way to avoid this, as far as I can see, is to not use the shared state mechanism and instead use RPC calls to communicate specific state changes to client.
There are some issues with the RPC approach that will need to be resolved, for example: if you change a value multiple times within a single request/response cycle, you don't want to make the RPC call multiple times. Instead, you want only the last value to be sent just like the shared state mechanism only sends the final state in the response. You can keep dirty flags for each part of the state that you want to send separately (or just keep a copy of the previous state and compare), but then you need to somehow trigger the RPC call at the end of the request handling. How can this be done?
Any ideas on this are welcome!
Update 2:
Vaadin 8 fixes the root issue: it sends only the changed state properties. Also, it doesn't call onStateChange() on the JS connector anymore when only doing an RPC call (and not changing any state).
OP is correct in stating that shared state synchronisation is inefficient for AbstractJavaScriptComponent-based components. The entire state object is serialised and made available to the Javascript connector's onStateChange method whenever the connector is marked as dirty. Other non-javascript components handle state updates more intelligently by only sending changes. The exact place in the code where this happens is line 97 in com.vaadin.server.LegacyCommunicationManager.java
boolean supportsDiffState = !JavaScriptConnectorState.class
        .isAssignableFrom(stateType);
I'm not sure why state update is handled differently for AbstractJavaScriptComponent-based components. Maybe it's to simplify the javascript connector and remove the need to assemble a complete state object from deltas. It would be great if this could be addressed in a future version.
As you suggest, you could dispense with JavaScriptComponentState completely and rely on server->client RPC for updates. Keep dirty flags in your server-side component, or compare old state and new state by any mechanism you want.
To coalesce the changes and send only one RPC call for each change, you could override beforeClientResponse(boolean initial) in your server-side component. This is called just before sending a response to the client and is your chance to add a set of RPC calls to update the client-side component.
Alternatively, you could override encodeState, where you have free rein to send exactly whatever JSON you like to the client. You could choose to add a list of changes to the base JSON object returned by super.encodeState. Your JavaScript connector could then interpret these as appropriate in its onStateChange method.
Edited to add: calling getState() in your server-side component will mark the connector as dirty. If you want to get state without marking it as dirty then use getState(false) instead.
Following our discussion about this, I've created a drop-in replacement for AbstractJavaScriptComponent that transmits state deltas and includes some extra enhancements. It's in the very early stages but should be useful.
https://github.com/emuanalytics/vaadin-enhancedjavascript
The solution is deceptively simple: basically re-enabling state difference calculation by bypassing this line of code in com.vaadin.server.LegacyCommunicationManager.java:
boolean supportsDiffState = !JavaScriptConnectorState.class
        .isAssignableFrom(stateType);
The implementation of the solution is complicated by the fact that the Vaadin classes are not easily extended so I've had to copy and re-implement 6 classes.
Vaadin's shared state works exactly like you want out of the box: when a component is added to the DOM for the first time, the whole shared state is transferred from server to client so that it's possible to display the component. After that, only changes are transferred. For example, if one changes the caption of a visible component by calling component.setCaption("new caption"), Vaadin only transfers that new caption text to the client and "merges" it into the client-side shared state instance of the component.

How to integrate Redux with very large data-sets and IndexedDB

I have an app that uses a sync API to get its data, and needs to store all the data locally.
The data set itself is very large, and I am reluctant to store it in memory, since it can contain thousands of records. Since I don't think the actual data structure is relevant, let's assume I am building an email client that needs to be accessible offline, and that I want my storage mechanism to be IndexedDB (which is async).
I know that a simple solution would be to not have the data structure as part of my state object and only populate the state with the required data (e.g. store email content in the state when the EMAIL_OPEN action is triggered). This is quite simple, especially with redux-thunk.
However, this would mean I need to compromise 2 things:
The user data is no longer part of the "application state", although in truth it is. The sync behavior is complex, and removing it from the app state machine hurts the elegance of the redux concepts (the way I understand them).
I really like the redux architecture and would like all of my logic to go through it, not just the view state.
Are there any best practices on how to use redux with not-in-memory state properties? The thing I find hardest to wrap my head around is that redux relies on synchronous APIs, and so I cannot replace my state object with an async state object (unless I remove redux completely and replace it with my own async implementation and connector).
I couldn't find an answer using Google, but if there are already good resources on the subject I would love to be pointed to them as well.
UPDATE:
The question was answered, but I wanted to give a better explanation of how I implemented it, in case someone runs into this:
The main idea is to maintain change lists for both client and server using plain redux reducers, and to use a connector that listens to these change lists to update IDB and to push client changes to the server (a sketch of the connector follows below):
When client makes changes, use reducers to update client change list.
When server sends updates, use reducers to update server change list.
A connector listens to store, and on state change updates IDB. Also maintain internal list of items that were modified.
When updating the server, use list of modified items to pull delta from IDB and send to server.
When accessing the data, use normal actions to pull from IDB (e.g. using redux-thunk)
The only caveat with this approach is that, since the real state is stored in IDB, we lose some of the value of having one state object (and it is also harder to rewind/fast-forward state)
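For illustration, a rough sketch of the connector described above, assuming the reducers keep an immutable clientChanges list and a promise-based db.put() wrapper around IndexedDB (both names are made up):

// storeConnector.js
function connectStoreToIDB(store, db) {
    var lastSeen = [];

    store.subscribe(function () {
        var changes = store.getState().clientChanges;
        if (changes === lastSeen) { return; }    // reducers are immutable, so an identity check is enough
        lastSeen = changes;

        changes.forEach(function (change) {
            db.put('emails', change.record);     // persist the delta locally
        });
        // the ids in `changes` are also what the server sync later uses to pull deltas from IDB
    });
}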
I think your first hunch is correct. If(!) you can't store everything in the store, you have to store less in the store. But I believe I can make that solution sound much better:
IndexedDB just becomes another endpoint, much like any server API you consume. When you fetch data from the server, you forward it to IndexedDB, from where your store is then populated. The store gets just what it needs and caches it as long as it doesn't get too big or stale.
It's really no different from, say, Facebook consuming their API. There's never all the data for a user in the store. References are implemented with IDs and these are loaded when required.
You can keep all your logic in redux. Just create actions as usual for user actions and data changes, get the data you need and process it. The interface is still completely defined by the user data because you always have the information in the store that is needed to GET TO the rest of it when needed. It's just somewhat condensed, i.e. you only save the total number of messages or the IDs of a mailbox until the user navigates to it.
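As a small illustration of treating IndexedDB as just another endpoint, a redux-thunk action that loads a single item on demand; db.get() is the same kind of hypothetical IndexedDB wrapper and the action types are made up:

// emailActions.js
function openEmail(id) {
    return function (dispatch) {
        dispatch({ type: 'EMAIL_OPEN_REQUESTED', id: id });
        return db.get('emails', id).then(function (email) {
            // only the opened email ends up in the in-memory store
            dispatch({ type: 'EMAIL_OPEN_SUCCEEDED', email: email });
        });
    };
}

// usage: store.dispatch(openEmail(42));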

What's the idiomatic way to keep UI in sync with ajax calls when using Flux?

The general problem: Let's say I have a button with an onClick handler calling an action creator. The action does an ajax call which dispatches a message when ajax responds, and this in some way affects the UI. Given this basic pattern there's nothing stopping the user from clicking this button multiple times, and thus running the ajax call multiple times.
This is something that doesn't seem to be touched upon in the React or Flux documentation (as far as I have seen), so I've tried to come up with some methods on my own.
Here are those methods:
Use lodash.throttle on a method which does an ajax call so that multiple clicks in quick succession don't create multiple calls.
Use lodash.debounce on a method so that ajax is only called once a user hasn't done any activity for a bit. This is how I'm doing semi-realtime updates of text fields on change.
Dispatch an "is updating" message to stores when the action is first called and then dispatch a "done" message when the ajax call returns. Do stuff like disabling input on the initial message and then re-enable on the second.
The third method seems to be the best in terms of functionality since it allows you to make the user interface reflect exactly what's going on, but it's also incredibly verbose. It clutters absolutely everything up with tons of extra state, handler methods, etc...
I don't feel like any of these methods are really idiomatic. What is?
Hal is pretty much correct. Dispatching multiple messages is the Fluxiest way to go.
However, I would be wary of dispatching an IS_UPDATING message. This makes reasoning about your code harder because for each AJAX action you're dispatching several actions at once.
The idiomatic solution is to split your AJAX "actions" (action-creator-actions) into three dispatched actions: MY_ACTION, MY_ACTION_SUCCESS, MY_ACTION_FAILURE, handling each instance appropriately, and tracking "pending-ness" along the way.
For example:
// MyActionCreator.js
// Because this is in a closure, you can even use the promise
// (or whatever you want) as a sort of "ID" to handle multiple
// requests at one time.
postMessage() {
    dispatch('POST_MESSAGE', { ... });
    api.slowMessagePostingAjaxThingy().then(
        (success) => { dispatch('POST_MESSAGE_SUCCESS', { ... }); },
        (failure) => { dispatch('POST_MESSAGE_FAILURE', { ... }); }
    );
}

// MyStore.js
on('POST_MESSAGE', (payload) => { /* do stuff */ });
on('POST_MESSAGE_SUCCESS', (payload) => { /* handle success */ });
on('POST_MESSAGE_FAILURE', (payload) => { /* handle failure */ });
This gives you several benefits over your alternate solutions:
Your store is exclusively in control of whether an item is pending or not. You don't have to worry about changing UI state on actions in your UI code: you can have your UI look exclusively to a pending property of your store for truth. This is probably the biggest reason for using Flux over MVC systems.
You have a clean interface for taking your actions. It's easy to reason about and easy to attach other stores to this data (if you have a LatestMessageStore or something, it's easy to subscribe to these events). This is the benefit over using IS_UPDATING as Hal suggested.
You save your lodash calls for when they semantically make sense, like when you may be inundated with legitimate data (a text field).
You can easily switch between optimistic updates (change the store when POST_MESSAGE is called) or pessimistic updates (change the store on POST_MESSAGE_SUCCESS).
I would argue that the third method is the correct way, but I don't find it to be verbose. A lot of React code that I see written sort of misses the spirit of React with its idea of very small, composable components. When large monolithic components are created, yes, things can get very messy.
But if the button in question is its own component, then it can take care of rendering based on its state. When a user clicks the button, the state of just that component changes, and it re-renders in a way that it can't be clicked again.
Once the store has notified that component that it has changed, the component can set its state back -- and with it, re-render itself.
It's a pretty straightforward process; it just requires thinking about pages as a collection of small, composable units.
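For what it's worth, a rough sketch of such a self-contained button in era-appropriate React.createClass style; MessageStore, MessageActions and isPending() are invented for the example:

// SaveButton.jsx
var SaveButton = React.createClass({
    getInitialState: function () {
        return { pending: false };
    },
    componentDidMount: function () {
        MessageStore.addChangeListener(this.onStoreChange);
    },
    componentWillUnmount: function () {
        MessageStore.removeChangeListener(this.onStoreChange);
    },
    onStoreChange: function () {
        // the store clears its pending flag once POST_MESSAGE_SUCCESS/FAILURE arrives
        this.setState({ pending: MessageStore.isPending() });
    },
    handleClick: function () {
        this.setState({ pending: true });             // can't be clicked again until the store says so
        MessageActions.postMessage(this.props.draft);
    },
    render: function () {
        return (
            <button disabled={this.state.pending} onClick={this.handleClick}>
                {this.state.pending ? 'Saving...' : 'Save'}
            </button>
        );
    }
});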
