I'm currently building a Vue app that consumes data from the Contentful API. For each entry, I have a thumbnail (image) field from which I'd like to extract the prominent colours as hex values and store them in the state to be used elsewhere in the app.
I'm using a Vuex action (getAllProjects) to query the API, run Vibrant (node-vibrant), and commit the response to the state.
async getAllProjects({ commit }) {
  let {
    fields: { order: order }
  } = await api.getEntry("entry");
  let projects = order;
  projects.forEach(p =>
    Vibrant.from(`https:${p.fields.thumbnail.fields.file.url}`)
      .getPalette()
      .then(palette => (p.fields.accent = palette.Vibrant.hex))
  );
  console.log(projects);
  // Commit to state
  commit("setAllProjects", projects);
}
When I log the contents of projects right before I call commit, I can see the hex values I'm after are added under the accent key. However, when I inspect the mutation payload in devtools, the accent key is missing, and so doesn't end up in the state.
How do I structure these tasks so that commit only fires after the API call and Vibrant have run in sequence?
You cannot add a new property to an already-reactive object in Vue and have it be reactive; you must use the Vue.set method.
Please try replacing that forEach block with the following, which adds the new property using Vue.set:
for (let i = 0; i < projects.length; i++) {
  Vibrant.from(`https:${projects[i].fields.thumbnail.fields.file.url}`)
    .getPalette()
    .then(palette => Vue.set(projects[i].fields, "accent", palette.Vibrant.hex));
}
UPDATE: changing the format from forEach to a conventional for loop may be gratuitous in this case, since the assignment being made is to an object property of projects and not to a primitive.
I don't spend a lot of time on Stack Overflow, and if the above answer works for you, I am genuinely happy.
But I would expect that answer to give you console warnings telling you not to mutate state directly.
When this happens, it's because, while Vue.set() does in fact help Vue reactively pick up a change (even one deeply nested in an object),
the problem here is that since you are looping over the object and changing it as you go, the commit (the mutation call) is not the one changing state; Vue.set() is actually changing it on every iteration.
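Tying this back to the sequencing question: one way to make the commit the single change to state is to wait for every Vibrant palette before committing at all. A minimal sketch, reusing the api helper and Vibrant import from the question:
async getAllProjects({ commit }) {
  const { fields: { order: projects } } = await api.getEntry("entry");
  // wait for every palette before touching the store
  await Promise.all(
    projects.map(async p => {
      const palette = await Vibrant.from(`https:${p.fields.thumbnail.fields.file.url}`).getPalette();
      p.fields.accent = palette.Vibrant.hex;
    })
  );
  // commit once, after every accent is in place
  commit("setAllProjects", projects);
}
Because accent is added before the objects ever reach the store, the mutation is the only thing that touches state, and the property is already present when the committed payload becomes reactive.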
Related
I have started learning Relay by writing a new Next.js application. I have so far been following the with-relay-modern example in the Next.js repo, and it has been working just fine for fetching data from the server. However, I have now moved beyond that example by adding a mutation, and things immediately stopped working.
The mutation updater looks like this:
function updateStore(
store: RecordSourceSelectorProxy,
formInstanceUuid: string,
) {
const mutation = store.getRootField("updateFormValue");
const newFormValue = mutation?.getLinkedRecord("formValue");
if (!newFormValue) {
throw "Expected new form value from server";
}
const localFormInstance = store.get(formInstanceUuid)
const localValueRecords = localFormInstance?.getLinkedRecords("values") || [];
for (const record of localValueRecords) {
if (record.getValue("partUuid") == newFormValue.getValue("partUuid")) {
console.log("Copying fields from server provided value to local value");
record.copyFieldsFrom(newFormValue);
}
}
console.debug("New state of store:", initEnvironment().getStore().getSource())
}
All it does is inject a "value" object returned from the server into a list of value objects in a local "form" object. As you can see I am dumping the state of the store on the last line, so I can confirm that the mutation worked as expected, and the local value was modified as expected.
However, the UI doesn't refresh. I have to reload the window to see the new state.
I can't for the life of me figure out what I've done wrong, so I'm starting to wonder if the example I was following only works for fetching data. I assume it's the QueryRenderer object that is responsible for refreshing the UI when the underlying store changes, and the example doesn't use one. I also can't imagine how a QueryRenderer could be added to the example without ruining SSR.
TL;DR
Does the "with-relay-modern" example work when adding mutations, or is my issue somewhere else?
I also started a Discussion on the Next.js GitHub page about this
I am using react-apollo and have been for quite some time. One thing that has been a problem for me is the fact that refetch doesn't work when using a mutation. This has been a known issue for as long as I have been using the app.
I have got round this by using the refetch prop that is available on a query.
<Query query={query} fetchPolicy={fetchPolicy} {...props}>
  {({ loading, data, error, refetch }) => {
    // ... pass refetch down to the mutation
  }}
</Query>
However, I am now reading in the documentation that you receive an update method as part of a mutation, and that you should use this to update your application after a mutation.
Can you use the update function to update your UI's data and have it update after finishing a mutation? If you can, is this the standard way to do updates now?
* Using refetchQueries is not working either.
As you can see in the image, the console.info() displays data.status = "CREATED", but the response coming back from the mutation directly has data.status = "PICKED". PICKED is the correct and up-to-date information in the DB.
In order of preference, your options are:
Do nothing. For regular updates to an individual node, as long as the mutation returns the mutated result, Apollo will update the cache automatically for you. When this fails to work as expected, it's usually because the query is missing the id (or _id) field. When an id field is not available, a custom dataIdFromObject function should be provided to the InMemoryCache constructor. Automatic cache updates also fail when people set the addTypename option to false.
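For example, a custom dataIdFromObject might look roughly like this (a sketch assuming apollo-cache-inmemory and a hypothetical key field on your objects):
import { InMemoryCache } from "apollo-cache-inmemory";

const cache = new InMemoryCache({
  // tell Apollo how to identify objects that expose `key` instead of `id`/`_id`
  dataIdFromObject: object =>
    object.key ? `${object.__typename}:${object.key}` : null,
});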
Use update. The update function will run after your mutation completes, and lets you manipulate the cache directly. This is necessary if the mutation affects a field returning a list of nodes. Unlike simple updates, Apollo has no way to infer whether the list should be updated (and how) following the mutation, so we have to directly update the cache ourselves. This is typically necessary following create and delete mutations, but may also be needed after an update mutation if the updated node should be added to or removed from some field that returns a list. The docs go into a good deal of detail explaining how to do this.
<Mutation
mutation={ADD_TODO}
update={(cache, { data: { addTodo } }) => {
const { todos } = cache.readQuery({ query: GET_TODOS });
cache.writeQuery({
query: GET_TODOS,
data: { todos: todos.concat([addTodo]) },
});
}}
>
{(addTodo) =>(...)}
</Mutation>
Use refetchQueries. Instead of updating the cache, you may also provide a refetchQueries function, which should return an array of objects representing the queries to refetch. This is generally less desirable than using update since it requires one or more additional calls to the server. However, it may be necessary if the mutation does not return enough information to correctly update the cache manually. NOTE: The returned array may also be an array of strings representing operation names, though this is not well documented.
<Mutation
mutation={ADD_TODO}
refetchQueries={() => [
{ query: TODOS_QUERY, variables: { foo: 'BAR' } },
]}
>
{(addTodo) =>(...)}
</Mutation>
Use refetch. As you already showed in your question, it's possible to use the refetch function provided by a Query component inside your Mutation component to refetch that specific query. This is fine if your Mutation component is already nested inside the Query component, but generally using refetchQueries will be a cleaner solution, particularly if multiple queries need to be refetched.
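For completeness, that nesting might look roughly like this (a sketch only; onCompleted is just one way to trigger the refetch once the mutation resolves):
<Query query={TODOS_QUERY}>
  {({ loading, data, refetch }) => (
    <Mutation mutation={ADD_TODO} onCompleted={() => refetch()}>
      {(addTodo) => (...)}
    </Mutation>
  )}
</Query>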
Use updateQueries. This is a legacy option that's no longer well-documented, but provided similar functionality to update before update was added. It should not be used as it may be deprecated in the future.
UPDATE:
You may also set up your schema in such a way that queries can be refetched as part of your mutation. See this article for more details.
I have kind of a unique question. In my code I have a listener to a database that loads objects into an array.
All I do when I load it in is
AddObject(obj) {
  this.setState({
    Data: [...this.state.Data, obj]
  });
}
Pretty simple. However, with this listener there is no exact time when new data will be added. When I go to use the data stored in Data, I want to pull it out of the array, but I am worried that if I try copying data out of the array, or removing the "seen" data, I will get weird behavior if my listener function triggers and tries to add data to the array at the same time.
Is there some sort of way to do this? I guess you could call this a shared resource.
Ideally, I would have something like this:
loadDataIN() {
  var LengthToGrab = this.state.Data.length;
  // we need to remove this length, now any new data will be added to index 0
}
Does this make sense? Basically I am trying to figure out the best way to remove data from this array without having to worry about overwriting or losing data. Maybe some sort of processing queue?
From the official docs:
setState() enqueues changes to the component state and tells React
that this component and its children need to be re-rendered with the
updated state.
You don't need to worry that these two kinds of updates will conflict at the same time.
setState() enqueues the pending state changes before they are rendered.
In fact, no matter how the mechanism is implemented, React is a JavaScript framework, and JavaScript runs on a single-threaded event-loop model.
So if you want to pull out the data from this.state.Data:
loadDataIN() {
  this.setState((prevState, props) => {
    // this.fetchData = prevState.Data;
    return {
      Data: []
    };
  });
}
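If you also want to hand the drained items off for processing, something like this sketch might work (processItems is a hypothetical handler; the second argument to setState is a callback that runs after the update is applied):
loadDataIN() {
  let pending = [];
  this.setState(
    prevState => {
      pending = prevState.Data; // grab everything queued so far
      return { Data: [] };      // clear the queue in the same update
    },
    () => this.processItems(pending) // process the "seen" data after the state has been updated
  );
}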
I have an async api call where I get an array of objects and then I map that to dynamically registered modules in my store. Something like this:
dispatch
// before this dispatch some api call happens and inside the promise
// iterate over the array of data and dispatch this action
dispatch(`list/${doctor.id}/availabilities/load`, doctor.availabilities);
The list/${doctor.id} is the dynamic module
action in availabilities module
load({ commit }, availabilities) {
const payload = {
id: availabilities.id,
firstAvailable: availabilities.firstAvailable,
timeslots: [],
};
// then a bunch of code that maps the availabilities to a specific format changing the value of payload.timeslots
commit('SET_AVAILABILITIES', payload)
}
mutation
[types.SET_TIMESLOTS](state, payload) {
console.log(payload);
state.firstAvailable = payload.firstAvailable;
state.id = payload.id;
state.timeslots = payload.timeslots;
}
When I check my logs for the console.log above, each doctor has a different array of time slots, exactly the data I want. However, in the Vue developer tools, and in what is being rendered, it is just the last doctor's timeslots for all of the doctors. All of my business logic happens in the load action, and the payload in the mutation is the correct data after that business logic. Anyone have any ideas why I'm seeing the last doctor's availabilities for every doctor?
It looks like you are assigning the same array (timeslots) to all doctors.
When you add an element to the array for one doctor, you mutate the array that all doctors are sharing.
However, with the little code you show, it's difficult to know where the exact problem is.
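As a purely hypothetical illustration of that pitfall (doctors, format and the surrounding loop are made-up names), reusing one payload and timeslots array across iterations means every module ends up holding the same reference, so the last doctor's data wins:
// Broken: one shared payload and timeslots array, mutated on every iteration
const payload = { id: null, firstAvailable: null, timeslots: [] };
doctors.forEach(doctor => {
  payload.id = doctor.id;
  payload.timeslots.length = 0; // "resetting" still mutates the array every module references
  doctor.availabilities.forEach(a => payload.timeslots.push(format(a)));
  commit('SET_AVAILABILITIES', payload);
});

// Fixed: build a fresh payload (and a fresh timeslots array) for each doctor
doctors.forEach(doctor => {
  commit('SET_AVAILABILITIES', {
    id: doctor.id,
    timeslots: doctor.availabilities.map(a => format(a)),
  });
});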
Before reading:
This isn't a matter of non-working code but a question on architecture. Also, I am not currently using the React-Redux library, as I'm first trying to understand how the parts work on their own in this test app. It's as short as I could cut it but unfortunately still lengthy, please bear with me.
Short Intro
I've got an array of Bottle models. Using pseudocode, a bottle is defined like so:
class Bottle {
  // members
  filledLiters
  filledLitersCapacity
  otherMember1
  otherMember2

  // functions
  toPostableObject() {
    // strips functions by round-tripping through JSON.stringify/JSON.parse
    var cloneObj = JSON.parse(JSON.stringify(this));
    // removes all members we don't want to post
    delete cloneObj["otherMember1"];
    return cloneObj;
  }

  // other functions
}
I've also got a React component that displays all Bottle items. The component needs to store the previous state of all Bottle items as well (it's for animating, disregard this).
Redux usage
There are complex operations I need to perform on some of the Bottle items using a helper class, like so:
var updated_bottles = BottleHandler.performOperationsOnBottles(bottleIds)
mainStore.dispatch({type:"UPDATED_BOTTLES",updated_bottles:updated_bottles})
I don't want to update the store for every operation, as I would like the store to be updated all together at the end in one go. Therefore my BottleReducer looks something like this:
var nextState = Object.assign({}, currentState);
nextState.bottles = action.updated_bottles
Where action.updated_bottles is the final state of bottles after having performed the operations.
The issue
Even though everything works, I'm suspicious that this is the "wrong mindset" for approaching my architecture. One of the reasons is that, to avoid keeping references to the bottle objects and mutating the state as I'm performing the operations, I have to do this ugly thing:
var bottlesCloneArray = mainStore.getState().bottleReducer.bottles.map(
  a => {
    var l = Object.assign({}, a);
    Object.setPrototypeOf(l, Bottle.prototype);
    return l;
  }
);
This is because I need a cloned array of objects that still retain their original functions (meaning they're actual instance clones of the class).
If you can point out the flaw/flaws in my logic I'd be grateful.
P.S.: The reason I need to keep "deep clones" of the class instances is so that I can keep the previous state of the bottles in my React component, in order to animate between the two states when an update happens in render.
When dealing with redux architecture it can be extremely useful to keep serialisation and immutability at the forefront of every decision. This can be difficult at first, especially when you are very used to OOP.
As the store's state is just a JS object it can be tempting to use it to keep track of JS instances of more complex model classes, but it should instead be treated more like a DB, where you serialise a representation of your model to and from it in an immutable manner.
Storing the data representations of your bottles in their most primitive form makes things like persistence to localStorage and rehydration of the store possible for more advanced applications, which can then allow server-side rendering and maybe offline use; but more importantly it makes it much more predictable and obvious what is happening and changing in your application.
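As a rough sketch of what that enables (bottleApp here is a hypothetical root reducer), plain serialisable state can be persisted and rehydrated in a few lines:
import { createStore } from "redux";

// rehydrate: preloadedState works because the saved state is plain data
const saved = localStorage.getItem("bottleStore");
const store = createStore(bottleApp, saved ? JSON.parse(saved) : undefined);

// persist: serialising the whole state is trivial when it contains no class instances
store.subscribe(() => {
  localStorage.setItem("bottleStore", JSON.stringify(store.getState()));
});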
Most redux apps I've seen (mine included) go down the functional route of doing away with model classes altogether and simply performing operations in the reducers directly upon the data, potentially using helpers along the way. A downside to this is that it makes for large, complex reducers that lack some context.
However, there is a middle ground that is perfectly reasonable if you prefer to have such helpers encapsulated in a Bottle class, but you need to think in terms of a case class, which can be created from and serialised back to the data form, and acts immutably when operated upon.
Let's look at how this might work for your Bottle (TypeScript annotated to help show what's happening).
Bottle case class
interface IBottle {
name: string,
filledLitres: number
capacity: number
}
class Bottle implements IBottle {
// deserialisable
static fromJSON(json: IBottle): Bottle {
return new Bottle(json.name, json.filledLitres, json.capacity)
}
constructor(public readonly name: string,
public readonly filledLitres: number,
public readonly capacity: number) {}
// can still encapsulate computed properties so that it doesn't need to be done manually in the views
get nameAndSize() {
return `${this.name}: ${this.capacity} Litres`
}
// note that operations are immutable, they return a new instance with the new state
fill(litres: number): Bottle {
return new Bottle(this.name, Math.min(this.filledLitres + litres, this.capacity), this.capacity)
}
drink(litres: number): Bottle {
return new Bottle(this.name, Math.max(this.filledLitres - litres, 0), this.capacity)
}
// serialisable
toJSON(): IBottle {
return {
name: this.name,
filledLitres: this.filledLitres,
capacity: this.capacity
}
}
// instances can be considered equal if properties are the same, as all are immutable
equals(bottle: Bottle): boolean {
return bottle.name === this.name &&
bottle.filledLitres === this.filledLitres &&
bottle.capacity === this.capacity
}
// cloning is easy as it is immutable
copy(): Bottle {
return new Bottle(this.name, this.filledLitres, this.capacity)
}
}
Store state
Notice it contains an array of the data representation rather than class instances.
interface IBottleStore {
bottles: Array<IBottle>
}
Bottles selector
Here we use a selector to extract data from the store and transform it into class instances that you can pass to your React component as a prop.
If using a lib like reselect this result will be memoized, so your instance references will remain the same until their underlying data in the store has changed.
This is important for optimising React using PureComponent, which only compares props by reference.
const bottlesSelector = (state: IBottleStore): Array<Bottle> => state.bottles.map(v => Bottle.fromJSON(v))
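With reselect, for example, the same selector could be wrapped so that the Bottle instances are rebuilt only when state.bottles actually changes (sketch):
import { createSelector } from "reselect";

const bottlesSelector = createSelector(
  state => state.bottles,
  // runs only when state.bottles changes, keeping references stable for PureComponent
  bottles => bottles.map(v => Bottle.fromJSON(v))
);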
Bottles reducer
In your reducers you can use the Bottle class as a helper to perform operations, rather than doing everything right here in the reducer directly on the data itself
interface IDrinkAction {
type: 'drink'
name: string
litres: number
}
const bottlesReducer = (state: Array<IBottle>, action: IDrinkAction): Array<IBottle> => {
switch(action.type) {
case 'drink':
// immutably create an array of class instances from current state
return state.map(v => Bottle.fromJSON(v))
// find the correct bottle and drink from it (drink returns a new instance of Bottle so is immutable)
.map((b: Bottle): Bottle => b.name === action.name ? b.drink(action.litres) : b)
// serialise back to data form to put back in the store
.map((b: Bottle): IBottle => b.toJSON())
default:
return state
}
}
While this drink/fill example is fairly simplistic, and could be done in just as few lines directly on the data in the reducer, it illustrates that using case classes to represent the data in more real-world terms can still be done, and can make it easier to understand and keep code more organised than having a giant reducer and manually computing properties in views. As a bonus, the Bottle class is also easily testable.
By acting immutably throughout, if designed correctly, your React component's previous state will continue to hold a reference to your previous bottles (in their own previous state), so there is no need to somehow track that yourself for doing animations etc.
If the Bottle class is a React component (or inside a React component), I think you could play with componentWillUpdate(nextProps, nextState) so you can check the previous state (do not unmount your component, of course).
https://reactjs.org/docs/react-component.html#componentwillupdate
Deep cloning your class doesn't seem a good idea to me.
Edit:
"I've also got a React component that displays all Bottle items."
That's where you should keep and look for your previous state. Keep all your bottles in a bottles store, and get it in your components when you need to display bottles.
Inside componentWillUpdate you can check this.state (which is your state just before being updated, i.e. your previous state) and nextState, passed as a parameter, which is the state about to be applied.
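A rough sketch of that check (the component, its state shape and animateBetween are all hypothetical):
class BottleList extends React.Component {
  componentWillUpdate(nextProps, nextState) {
    // this.state here is still the previous state; nextState is what is about to be rendered
    if (this.state.bottles !== nextState.bottles) {
      this.animateBetween(this.state.bottles, nextState.bottles); // hypothetical animation hook
    }
  }

  render() {
    return null; // render this.state.bottles here
  }
}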
Edit2:
Why would you keep a complete class in your state? Just keep data in state. I mean, just keep an object that will be updated by your reducer. If you need some utility functions (parsers, ...), do not keep them in your state; treat your data in reducers before updating your state, or keep your utils/parser functions in some utils file.
Also, your state should stay immutable, so your reducer should return a copy of the updated state anyway.
I've got an array of Bottle models.
I think it makes more sense to have a model of BottleCollection.
Or maybe you have one Bottle model and multiple usages of it?
class Bottle {
  // members
  filledLiters
  filledLitersCapacity
  otherMember1
  otherMember2

  // functions
  toPostableObject() {}
}
Hm, it looks like your model represents multiple things:
a cache of persistent data (retrieved via AJAX?)
data object (dumb fields)
a temporary state for user input (data to be POSTed?)
I wouldn't call it a model. It's 3 things: API wrapper/cache, data and pending changes.
I would call it REST API wrapper, data object and application state.
There are complex operations i need to perform on some of the Bottle items using a helper class like so:
var updated_bottles =
BottleHandler.performOperationsOnBottles(bottleIds)
It looks to be the domain logic. I wouldn't place the core logic of the application under the name "helper class". I would call it "the model" or "business rules".
mainStore.dispatch({type:"UPDATED_BOTTLES", updated_bottles:updated_bottles})
That looks to be a change in application state. But I don't see the reason for it. I.e. who requested this change and why?
I dont want to update the store for every operation as i would like the store to be updated all together at the end in one go.
That's a good reasoning.
So you'll have a single action type:
mainStore.dispatch({ type: "UPDATED_DATA", updated_bottles })
However, in this case you might need to clean up old state like this:
mainStore.dispatch({ type: "UPDATED_DATA", updated_bottles: null })
The reason i need to keep "deep clones" of the class instances is so that i can keep the previous state of bottles
I think the reason is that you keep REST API cache and pending changes in a single object. If you keep cache and pending changes in separate objects you don't need clones.
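For instance, the state might be shaped like this (illustrative only; the field names match the snippet further down):
// Illustrative: server cache and pending user edits kept apart as plain data
const initialState = {
  bottlesCache: [],       // last data fetched from the REST API
  bottlesUserChanges: {}  // pending edits keyed by bottle id, waiting to be POSTed
};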
Another thing to note is that your state should be a plain JavaScript object, not an instance of a class. There's no reason to keep references to functions (instance methods) in a state if you know which type of data your state contains. You can just use temporary class instances:
const newBottlesState = new BottleCollection(state.bottlesCache, state.bottlesUserChanges).performOperationsOnBottles()