I'm playing around with Cycle.js and I'm trying to figure out the idiomatic way to handle many sources/intents. I have a simple Cycle.js program below in TypeScript, with comments on the most relevant parts.
Are you supposed to model sources/intents as discrete events like you would in Elm or Redux, or are you supposed to be doing something a bit more clever with stream manipulation? I'm having a hard time seeing how you would avoid this event pattern when the application is large.
If this is the right way, wouldn't it just end up being a JS version of Elm with the added complexity of stream management?
import { div, DOMSource, h1, makeDOMDriver, VNode, input } from '@cycle/dom';
import { run } from '@cycle/xstream-run';
import xs, { Stream } from 'xstream';
import SearchBox, { SearchBoxProps } from './SearchBox';
export interface Sources {
DOM: DOMSource;
}
export interface Sinks {
DOM: Stream<VNode>
}
interface Model {
search: string
searchPending: {
[s: string]: boolean
}
}
interface SearchForUser {
type: 'SearchForUser'
}
interface SearchBoxUpdated {
type: 'SearchBoxUpdated',
value: string
}
type Actions = SearchForUser | SearchBoxUpdated;
/**
 * Should I be mapping these into discrete events like this?
 */
function intent(domSource: DOMSource): Stream<Actions> {
return xs.merge(
domSource.select('.search-box')
.events('input')
.map((event: Event) => ({
type: 'SearchBoxUpdated',
value: ((event.target as any).value as string)
} as SearchBoxUpdated)),
domSource.select('.search-box')
  .events('keypress')
  .filter(event => event.keyCode === 13)
  .map(() => ({ type: 'SearchForUser' } as SearchForUser))
)
}
function model(action$: Stream<Actions>): Stream<Model> {
const initialModel: Model = {
search: '',
searchPending: {}
};
/*
* Should I be attempting to handle events like this?
*/
return action$.fold((model, action) => {
switch (action.type) {
case 'SearchForUser':
return model;
case 'SearchBoxUpdated':
return Object.assign({}, model, { search: action.value })
}
}, initialModel)
}
function view(model$: Stream<Model>): Stream<VNode> {
return model$.map(model => {
return div([
h1('Github user search'),
input('.search-box', { value: model.search })
])
})
}
function main(sources: Sources): Sinks {
const action$ = intent(sources.DOM);
const state$ = model(action$);
return {
DOM: view(state$)
};
}
run(main, {
DOM: makeDOMDriver('#main-container')
});
In my opinion you shouldn't be multiplexing intent streams like you do (merging all the intents into a single stream).
Instead, you can try returning multiple streams from your intent function.
Something like:
interface SearchBoxIntents {
  updateSearchBox$: Stream<string>;
  searchForUser$: Stream<boolean>;
}
function intent(domSource: DOMSource): SearchBoxIntents {
  const input = domSource.select("...");
  const updateSearchBox$: Stream<string> = input
    .events("input")
    .map(/*...*/);
  const searchForUser$: Stream<boolean> = input
    .events("keypress")
    .filter(isEnterKey)
    .mapTo(true);
  return { updateSearchBox$, searchForUser$ };
}
You can then map those actions to reducers in the model function, merge those reducers, and finally fold them:
function model({ updateSearchBox$, searchForUser$ }: SearchBoxIntents): Stream<Model> {
  const initialModel: Model = { search: '', searchPending: {} }; // as defined in the question
  const updateSearchBoxReducer$ = updateSearchBox$
    .map((value: string) => (model: Model) => ({ ...model, search: value }));
  // v for the moment this stream doesn't update the model, so you can ignore it
  const searchForUserReducer$ = searchForUser$
    .mapTo((model: Model) => model);
  return xs.merge(updateSearchBoxReducer$, searchForUserReducer$)
    .fold((model, reducer) => reducer(model), initialModel);
}
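With these typed intents, the question's main function stays essentially unchanged (a minimal sketch, reusing view from the question):
function main(sources: Sources): Sinks {
  const intents = intent(sources.DOM);
  const state$ = model(intents);
  return { DOM: view(state$) };
}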
There are multiple advantages to this solution:
you can type the arguments of your functions and check that the right streams are passed along;
you don't need a huge switch if the number of actions increases;
you don't need action identifiers.
In my opinion, multiplexing/demultiplexing streams is good when there is a parent/child relationship between two components. This way, the parent can consume only the events it needs (this is more of an intuition than a general rule; it would need some more thinking :))
Looking at redux and ngrx, it looks like immer is the recommended library for producing a copy of the state before storing it. Following the immer example, I added the following code to my reducer:
on(exampleActions.updateExample, (state, { example }) => {
return produce((state: ExampleType, draft: ExampleType) => {
draft.push({ example });
draft[1].done = true;
});
})
And TypeScript complains about no-shadowed-variable, which conflicts with the example. Additionally, I am unable to return the value without return-type errors.
In cases where example is a multi-level object:
const example = {
a: {
b: { c: 1 }
}
};
draft needs to be fully de-referenced as well.
There aren't many examples of immer integrated with createReducer, as this is a recent change for 2019. Should I disable the no-shadowed-variable rule for immer, or is there a better pattern to confirm that both the state and example are properly de-referenced? example is an object with multiple levels.
Alternatively, I can avoid immer and use ramda's clone, or attempt to manually deep-copy everything.
This is what ngrx-etc solves with the mutableOn function (which uses Immer):
const entityReducer = createReducer<{ entities: Record<number, { id: number; name: string }> }>(
{
entities: {},
},
mutableOn(create, (state, { type, ...entity }) => {
state.entities[entity.id] = entity
}),
mutableOn(update, (state, { id, newName }) => {
const entity = state.entities[id]
if (entity) {
entity.name = newName
}
}),
mutableOn(remove, (state, { id }) => {
delete state.entities[id]
}),
)
The source code can be found here, which should take you in the right direction.
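For comparison, here is a minimal sketch of the same reducer written with plain immer inside createReducer (the action and the array-shaped state are assumptions taken from the question's draft.push). Passing state as the first argument to produce sidesteps the shadowing problem, because the recipe only ever names draft:
import { createReducer, on } from '@ngrx/store';
import produce from 'immer';
const exampleReducer = createReducer(
  initialExampleState, // assumed initial state, an array
  on(exampleActions.updateExample, (state, { example }) =>
    // produce(state, recipe) returns the next state; only the draft is mutated
    produce(state, (draft) => {
      draft.push({ example });
    })
  )
);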
I have a mutation like
mutation deleteRecord($id: ID) {
deleteRecord(id: $id) {
id
}
}
and in another location I have a list of elements.
Is there something better I could return from the server, and how should I update the list?
More generally, what is best practice for handling deletes in apollo/graphql?
I am not sure it is good practice, but here is how I handle the deletion of an item in react-apollo with updateQueries:
import { graphql, compose } from 'react-apollo';
import gql from 'graphql-tag';
import update from 'react-addons-update';
import _ from 'underscore';
const SceneCollectionsQuery = gql `
query SceneCollections {
myScenes: selectedScenes (excludeOwner: false, first: 24) {
edges {
node {
...SceneCollectionScene
}
}
}
}`;
const DeleteSceneMutation = gql `
mutation DeleteScene($sceneId: String!) {
deleteScene(sceneId: $sceneId) {
ok
scene {
id
active
}
}
}`;
const SceneModifierWithStateAndData = compose(
...,
graphql(DeleteSceneMutation, {
props: ({ mutate }) => ({
deleteScene: (sceneId) => mutate({
variables: { sceneId },
updateQueries: {
SceneCollections: (prev, { mutationResult }) => {
const myScenesList = prev.myScenes.edges.map((item) => item.node);
const deleteIndex = _.findIndex(myScenesList, (item) => item.id === sceneId);
if (deleteIndex < 0) {
return prev;
}
return update(prev, {
myScenes: {
edges: {
$splice: [[deleteIndex, 1]]
}
}
});
}
}
})
})
})
)(SceneModifierWithState);
Here is a similar solution that works without underscore.js. It is tested with react-apollo version 2.1.1 and creates a component for a delete button:
import React from "react";
import { Mutation } from "react-apollo";
import gql from "graphql-tag";
const GET_TODOS = gql`
{
allTodos {
id
name
}
}
`;
const DELETE_TODO = gql`
mutation deleteTodo(
$id: ID!
) {
deleteTodo(
id: $id
) {
id
}
}
`;
const DeleteTodo = ({id}) => {
return (
<Mutation
mutation={DELETE_TODO}
update={(cache, { data: { deleteTodo } }) => {
const { allTodos } = cache.readQuery({ query: GET_TODOS });
cache.writeQuery({
query: GET_TODOS,
data: { allTodos: allTodos.filter(e => e.id !== id)}
});
}}
>
{(deleteTodo, { data }) => (
<button
onClick={e => {
deleteTodo({
variables: {
id
}
});
}}
>Delete</button>
)}
</Mutation>
);
};
export default DeleteTodo;
All those answers assume query-oriented cache management.
What if I remove a user with id 1 who is referenced in 20 queries across the entire app? Reading the answers above, I'd have to write code to update the cache for all of them. This would be terrible for the long-term maintainability of the codebase and would make any refactoring a nightmare.
The best solution in my opinion would be something like apolloClient.removeItem({__typename: "User", id: "1"}) that would:
replace any direct reference to this object in the cache with null;
filter out this item from any [User] list in any query.
But it doesn't exist (yet)
It might be a great idea, or it could be even worse (e.g. it might break pagination).
There is interesting discussion about it: https://github.com/apollographql/apollo-client/issues/899
I would be careful with those manual query updates. They look appetizing at first, but they won't scale as your app grows. At least create a solid abstraction layer on top of them, e.g.:
next to every query you define (e.g. in the same file), define a function that cleans it properly, e.g.:
const MY_QUERY = gql``;
// its local 'cleaner' - relatively easy to maintain, as you can require
// proper cleaner updates during code review whenever the query changes
export function removeUserFromMyQuery(apolloClient, userId) {
  // clean here
}
and then collect all those updates and call them all in the final update:
function handleUserDeleted(userId, client) {
  removeUserFromMyQuery(client, userId)
  removeUserFromSearchQuery(client, userId)
  removeIdFrom20MoreQueries(client, userId)
}
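One of those cleaners could look like this (a sketch; MY_QUERY's { users: [...] } shape is an assumption, and it reuses the readQuery/writeQuery approach from the answers above):
export function removeUserFromMyQuery(apolloClient, userId) {
  // assumed query shape: { users: [{ id, ... }] }
  const data = apolloClient.readQuery({ query: MY_QUERY });
  apolloClient.writeQuery({
    query: MY_QUERY,
    data: { ...data, users: data.users.filter((user) => user.id !== userId) },
  });
}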
For Apollo v3 this works for me:
const [deleteExpressHelp] = useDeleteExpressHelpMutation({
update: (cache, {data}) => {
cache.evict({
id: cache.identify({
__typename: 'express_help',
id: data?.delete_express_help_by_pk?.id,
}),
});
},
});
From the new docs:
Filtering dangling references out of a cached array field (like the Deity.offspring example above) is so common that Apollo Client performs this filtering automatically for array fields that don't define a read function.
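One small addition of mine, not from the answer above: an evict can leave orphaned normalized objects in the cache, and Apollo Client 3's cache.gc() will collect them:
// inside the same update callback, after cache.evict(...)
cache.gc(); // removes objects no longer reachable from the cache root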
Personally, I return an int representing the number of items deleted. Then I use updateQueries to remove the document(s) from the cache.
I have faced the same issue choosing an appropriate return type for such mutations, where the REST API behind the mutation could return HTTP 204, 404 or 500.
Defining an arbitrary type and then returning null (types are nullable by default) does not seem right, because you don't know what happened, i.e. whether it was successful or not.
Returning a boolean solves that issue: you know if the mutation worked or not, but you lack information in case it didn't work, like a better error message you could show on the FE; for example, for a 404 we could return "Not found".
Returning a custom type feels a bit forced, because it is not actually a type of your schema or business logic; it just serves to fix a "communication issue" between REST and GraphQL.
I ended up returning a string. I can return the resource ID/UUID, or simply "ok" in case of success, and return an error message in case of error.
Not sure if this is good practice or GraphQL-idiomatic.
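As a sketch, the schema side of that approach would look something like this (deleteRecord is borrowed from the first question; the String carries an ID on success or an error message on failure):
import gql from 'graphql-tag';
const typeDefs = gql`
  type Mutation {
    deleteRecord(id: ID): String
  }
`;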
As the doc says:
Things you should never do inside a reducer:
Mutate its arguments;
Perform side effects like API calls and routing transitions;
Call non-pure functions, e.g. Date.now() or Math.random().
If I follow these principles, I have some questions about code organization (my app is a file manager).
For example, a default reducer like this:
export default function (state = initialState, action) {
const { path } = action
if (typeof path === 'undefined') {
return state
}
const ret = {
...state,
[path]: parentNode(state[path], action)
};
switch (action.type) {
case OPEN_NODE:
case GO_PATH:
ret['currentPath'] = path
break
default:
break
}
return ret
}
The data structure in state[path] looks like:
{
'open': false,
'path': '/tmp/some_folder',
'childNodes' : [ {'path':'/some/path', 'mode': '0755', 'isfolder': true}, ....],
'updateTime': Date.now()
}
Now I need several actions such as ADD_CHILD, DELETE_CHILD, RENAME_CHILD and MOVE_CHILD. There are two solutions (changing state in the actions or in the reducers):
1. All functional code in the actions:
actions:
export function updateChildNodes(path, nodes) {
return {
type: UPDATE_CHILD_NODES,
path: path,
loading: false,
loaded: true,
childNodes: nodes,
};
}
export function addChild(path, node) {
  return (dispatch, getState) => {
    const state = getState().tree[path]
    // clone so we don't mutate the current state
    var childNodes = state.childNodes ? state.childNodes.slice() : []
    childNodes.push(node)
    return dispatch(updateChildNodes(path, childNodes))
  }
}
export function deleteChild(parent_path, child_node) {
  return (dispatch, getState) => {
    const state = getState().tree[parent_path]
    var childNodes = state && state.childNodes ? state.childNodes.slice() : []
    for (var i = 0; i < childNodes.length; i++) {
      if (childNodes[i].path == child_node.path) {
        childNodes.splice(i, 1)
        return dispatch(updateChildNodes(parent_path, childNodes))
      }
    }
  }
}
}
export function deleteNode(node) {
return (dispatch, getState) => {
// ajax call
return api.deleteChild(node.path, () => {
dispatch(deleteChild(node.parent, node))
})
}
}
.....
parentNode reducer:
function parentNode(state, action) {
switch (action.type) {
case UPDATE_CHILD_NODES:
return {
...state,
childNodes: action.childNodes
}
default:
return state;
}
}
All variables are passed into parentNode from the actions; parentNode just assigns the change to the state and does nothing else.
All the logic of removing and adding nodes lives in the actions; the reducer only handles UPDATE_CHILD_NODES.
2. Actions just send data to the reducer, and the reducer does the processing
actions:
export function updateChildNodes(path, nodes) {
return {
type: UPDATE_CHILD_NODES,
path: path,
loading: false,
loaded: true,
childNodes: nodes,
};
}
export function addChild(path, node) {
return {
type: ADD_CHILD,
path: path,
node: node,
};
}
export function deleteChild(path, node) {
return {
type: DELETE_CHILD,
path: path,
node: node,
};
}
export function deleteNode(node) {
return (dispatch, getState) => {
// ajax call
return api.deleteChild(node.path, () => {
dispatch(deleteChild(node.parent, node))
})
}
}
.....
parentNode reducer:
function parentNode(state, action) {
  switch (action.type) {
    case DELETE_CHILD: {
      let childNodes = state.childNodes.slice() // have to clone obj
      for (var i = 0; i < childNodes.length; i++) {
        if (childNodes[i].path == action.node.path) {
          childNodes.splice(i, 1)
        }
      }
      return {
        ...state,
        childNodes: childNodes
      };
    }
    case ADD_CHILD: {
      let childNodes = state.childNodes.slice() // have to clone obj
      childNodes.push(action.node)
      return {
        ...state,
        childNodes: childNodes
      };
    }
    case UPDATE_CHILD_NODES:
      return {
        ...state,
        childNodes: action.childNodes
      }
    default:
      return state;
  }
}
In my opinion, solution 2 is more readable and prettier.
But is it good to change the state by mutating a cloned object? And when I need to set updateTime with Date.now(), I have to generate it in the actions and pass it to the reducer, so state values are generated in different places (but I'd like to keep them together...).
Any opinions on this?
From this redux discussion here:
It is best practice to place most of the logic in the action creators and leave the reducers as simple as possible (closer to your option 1), for the following reasons:
Business logic belongs in action-creators. Reducers should be stupid and simple. In many individual cases it does not matter- but consistency is good and so it's best to consistently do this. There are a couple of reasons why:
Action-creators can be asynchronous through the use of middleware like redux-thunk. Since your application will often require asynchronous updates to your store- some "business logic" will end up in your actions.
Action-creators (more accurately the thunks they return) can use shared selectors because they have access to the complete state. Reducers cannot because they only have access to their node.
Using redux-thunk, a single action-creator can dispatch multiple actions- which makes complicated state updates simpler and encourages better code reuse.
For small apps I usually put my logic in action creators. For more complex situations you may need to consider other options. Here is a summary of the pros and cons of the different approaches: https://medium.com/@jeffbski/where-do-i-put-my-business-logic-in-a-react-redux-application-9253ef91ce1#.k8zh31ng5
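Applied to the updateTime concern from the question, a minimal sketch (UPDATE_NODE is a hypothetical action type): the impure Date.now() call lives in the action creator, so the reducer stays pure:
export function touchNode(path) {
  return {
    type: UPDATE_NODE,
    path: path,
    updateTime: Date.now(), // impure value is generated here, not in the reducer
  };
}
function parentNode(state, action) {
  switch (action.type) {
    case UPDATE_NODE:
      return { ...state, updateTime: action.updateTime };
    default:
      return state;
  }
}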
Also, have a look at Redux middleware.
The middleware provides a third-party extension point between dispatching an action, and the moment it reaches the reducer.
This is an answer provided by Dan Abramov (author of Redux): Why do we need middleware for async flow in Redux?
And here are the official Redux docs: http://redux.js.org/docs/advanced/Middleware.html
Is this a reasonable solution for data sharing between two states/reducers?
//combineReducers
function coreReducer(state = {}, action){
let filtersState = filters(state.filters, action);
let eventsState = events(state.events, action, { filters: filtersState});
return { events: eventsState, filters : filtersState};
}
export const rootReducer = combineReducers(
{
core : coreReducer,
users
}
);
If so, how can one guarantee the order in which reducer functions are executed if both answer to the same dispatched event and the second reducing function depends on the new state of the first one?
Let's say that we dispatch a SET_FILTER action that appends to the activeFilters collection in the filters store and later changes the visibility of items in the events store with respect to the activeFilters values.
//ActiveFilters reducer
function filtersActions(state = {}, action){
switch (action.type) {
case SET_FILTER:
return Object.assign({}, state, {
[action.filterType]: action.filter
})
case REMOVE_FILTER:
var temp = Object.assign({}, state);
delete temp[action.filterType];
return temp;
case REMOVE_ALL_FILTERS:
return {};
default:
return state
}
}
I think I found the answer: Computing Derived Data (Reselect)
http://redux.js.org/docs/recipes/ComputingDerivedData.html
/* -------- container -------- */
import {getGroupsAndMembers} from '../reducers'
const mapStateToProps = (state) => {
return {
inputValue: state.router.location.pathname.substring(1),
initialState: getGroupsAndMembers(state) <-- this one
}
}
/* -------- reducers -------- */
export function getGroupsAndMembers(state) {
  const { groups, members } = JSON.parse(state)
  return { groups, members };
}
GroupsContainer.propTypes = {
//React Redux injection
pushState: PropTypes.func.isRequired,
// Injected by React Router
children: PropTypes.node,
initialState:PropTypes.object,
}
don't forget to follow the guidelines for 'connect'
export default connect(mapStateToProps,{ pushState })(GroupsContainer)
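For reference, a minimal sketch of the derived-data approach with reselect itself, applied to the filters/events shape from the question above (the selectors and the filtering rule are assumptions):
import { createSelector } from 'reselect';
const getEvents = (state) => state.core.events;
const getActiveFilters = (state) => state.core.filters;
// recomputed only when events or filters actually change
export const getVisibleEvents = createSelector(
  [getEvents, getActiveFilters],
  (events, filters) => events.filter((event) => !filters[event.type])
);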
If you have two reducers and one depends on a value from the other, you just have to update them carefully, and the best solution is a dedicated function that first sets the filtering and then queries the corresponding events. Also keep in mind that if fetching events is an asynchronous operation, you should also nest based on the filtering type; otherwise there is a chance of a race condition and you will end up with the wrong events.
I have created a library, redux-tiles, to deal with the verbosity of raw redux, so I will use it in this example:
import { createSyncTile, createTile } from 'redux-tiles';
const filtering = createSyncTile({
type: ['ui', 'filtering'],
fn: ({ params }) => params.type,
});
const events = createTile({
type: ['api', 'events'],
fn: ({ api, params }) => api.get('/events', { type: params.type }),
nesting: ({ type }) => [type],
});
// this function will just fetch events, but we will connect to apiEvents
// and filter by type
const fetchEvents = createTile({
type: ['api', 'fetchEvents'],
fn: ({ selectors, getState, dispatch, actions }) => {
const type = selectors.ui.filtering(getState());
return dispatch(actions.api.events({ type }));
},
});
The premise
I'm refactoring some Flux stores/actions/action creators to be more Fluxy (PUBLISH, PUBLISH_SUCCESS, PUBLISH_FAILURE instead of a weird IS_LOADING action), and I was wondering how to structure my actions: should my action creators dispatch single actions (PUBLISH_SUCCESS) or multiple ones (ADD_AUTHOR, ADD_BOOK, etc.)?
The example
Here's a more specific example:
I have a TasksStore that holds todo items for my innovative new task-management app, and I have a poorly named action creator TaskActions that lets me fetch my tasks from the server and add new ones through an API. Kinda like this:
const TasksStore = { ... };
const TaskActions = {
  fetchTasks() { /* ... */ },
  addTask() { /* ... */ }
};
What actions should I dispatch to communicate with TasksStore?
I see two options: action-creator-api-specific actions (FETCH_TASKS, FETCH_TASKS_SUCCESS, FETCH_TASKS_FAILURE, ADD_TASK, ADD_TASK_SUCCESS, & ADD_TASK_FAILURE) or reusable actions (ADD_TASK called over and over for fetch and called once for addTask()).
Basically, should my API look like this (verbose, perhaps redundant, dispatch-able actions for each action-creator-action):
const TasksStore = {
  on('FETCH_SUCCESS', (tasks) => { /* add tasks */ }),
  on('ADD_SUCCESS', (task) => { /* add task */ })
};
const TaskActions = {
fetchTasks() {
dispatch('FETCH');
myApi.fetchTasks(
(success_payload) => { dispatch('FETCH_SUCCESS', success_payload) },
(failure_payload) => { dispatch('FETCH_FAILURE', failure_payload) }
);
},
addTask() {
dispatch('ADD');
myApi.addTask(
(success_payload) => { dispatch('ADD_SUCCESS', success_payload) },
(failure_payload) => { dispatch('ADD_FAILURE', failure_payload) }
);
}
};
or like this (concise, reusable dispatch-able actions):
const TasksStore = {
  on('ADD', (task) => { /* add task */ })
};
const TaskActions = {
  fetchTasks() {
    dispatch('FETCH');
    myApi.fetchTasks(
      (success_payload) => {
        success_payload.forEach((task) => { dispatch('ADD', task); });
      },
      (failure_payload) => { dispatch('FETCH_FAILURE', failure_payload); }
    );
  },
  addTask() {
    dispatch('ADD');
    myApi.addTask(
      (success_payload) => { dispatch('ADD', success_payload); },
      (failure_payload) => { dispatch('ADD_FAILURE', failure_payload); }
    );
  }
};
or something in between?
Thanks!
We decided to go with the more verbose route:
For AJAX actions (like in the examples), we dispatch a specific "started" action, then a "success" or "failure" action:
dispatch('RETICULATE_SPLINES');
app.reticulateSplinesAndReturnPromise().then(
(success) => { dispatch('RETICULATE_SPLINES_SUCCESS'); },
(failure) => { dispatch('RETICULATE_SPLINES_FAILURE'); }
);
Why?
This decouples action creators from the actions a bit: action creators expose a uniform, grokkable API in the form of dispatched actions. ActionCreator.fetchMessages will dispatch a FETCH_MESSAGES action, not ADD or FETCH_OR_ADD or something weird. It's a lot easier to understand a decoupled system like actions and stores (decoupled because all actions travel through a dispatcher) when there are no unexpected dispatches.
It makes it a lot easier for multiple stores to listen for action-creator actions. Does the MessageThreadsStore need to update on .fetchMessages() but not on .postMessage()? Then listen for FETCH_MESSAGES_SUCCESS, not an ADD that might otherwise have been dispatched by both.
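A sketch of that second point (the handler body is an assumption):
MessageThreadsStore.on('FETCH_MESSAGES_SUCCESS', (messages) => {
  // rebuild thread summaries from the freshly fetched messages;
  // whatever .postMessage() dispatches is deliberately ignored here
});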
Anyway, hope that helps.