I have two TaskList components that use the same query GET_TASKS.
Each uses a different filter query variable, which is passed down to them in props as queryVars.
I defined a standard merge function in type policies to merge the incoming and existing data together.
The TaskList component uses
const { data, fetchMore } = useQuery<Response, Variables>(GET_TASKS, { variables: queryVars })
to retrieve the data.
A Fetch more button has () => fetchMore({ variables: queryVars }) in its onClick handler.
When I click the Fetch more button on the left, the tasks on the right get updated as well, but without their filter applied: data fetched with the "Assigned to me" filter also end up in the "Assigned by me" task list, and vice versa.
In effect, the merge function rewrites the cached result for every component that uses this query.
How do I tell Apollo to only update the data that is bound to the component where fetchMore is defined?
You should be able to add filter to the keyArgs array. This creates a separate cache entry per filter value, so each list is merged independently.
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        tasks: {
          keyArgs: ["filter"],
          merge(existing, incoming, { args: { offset = 0 } }) {
            // Custom merge
          },
        },
      },
    },
  },
});
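If you paginate with an offset, the merge body could look like the minimal sketch below. This is an assumption about your query shape, not a drop-in: it presumes existing and incoming are plain arrays and that the query passes an offset variable.

tasks: {
  keyArgs: ["filter"],
  merge(existing = [], incoming, { args: { offset = 0 } }) {
    const merged = existing.slice(0); // copy so cached data is never mutated
    for (let i = 0; i < incoming.length; ++i) {
      merged[offset + i] = incoming[i]; // place each item at its absolute position
    }
    return merged;
  },
},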
What is the best way to update many records with different data?
I'm doing it like this:
const updateBody = JSON.parse(req.body);

try {
  for (let object of updateBody) {
    await prisma.comissions.upsert({
      where: {
        producer: object.producer,
      },
      update: {
        rate: object.rate,
      },
      create: object,
    });
  }
} catch (error) {
  console.error(error); // handle the error as appropriate
}
I'm able to update the records, but it takes a really long time. I'm aware of $transaction, but I'm not sure how to use it.
In Prisma, the $transaction API can be used in two ways.
Sequential operations: pass an array of Prisma Client queries to be executed sequentially inside a transaction.
Interactive transactions: pass a function that can contain user code, including Prisma Client queries, non-Prisma code, and other control flow, to be executed in a transaction.
In our case we should use an interactive transaction, because it contains user code. To use the callback function in the Prisma transaction, we need to add a preview feature to the schema.prisma file:
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["interactiveTransactions"]
}
// any error thrown inside the callback rolls the whole transaction back
await prisma.$transaction(async (prisma) => {
  for (let object of updateBody) {
    await prisma.comissions.upsert({
      where: {
        producer: object.producer,
      },
      update: {
        rate: object.rate,
      },
      create: object,
    });
  }
});
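As a side note, for a batch of independent upserts like this one, the sequential-operations form may be the simpler fit, and it needs no preview flag: map each row to an un-awaited upsert and hand the whole array to $transaction. A sketch under those assumptions:

// sequential operations: all upserts run inside a single transaction
await prisma.$transaction(
  updateBody.map((object) =>
    prisma.comissions.upsert({
      where: { producer: object.producer },
      update: { rate: object.rate },
      create: object,
    })
  )
);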
Is this the correct way to store projects and tasks with localStorage? I also need to read from localStorage every time the page refreshes. How do I do that?
export function newProject(name) {
  allProjects.push({
    projectTitle: name,
    id: crypto.randomUUID(),
    tasks: []
  })
  getProjectId(name)
  save(name, project)
}

export function save(title, task) {
  localStorage.setItem(title, JSON.stringify(task))
}
project is undefined, so you need to define it first:
const project = {
  projectTitle: name,
  id: crypto.randomUUID(),
  tasks: []
}
allProjects.push(project)
getProjectId(name)
save(name, project)
To get all projects back on refresh, you need to maintain an array of keys in localStorage, or save all projects as an array under one key (a sketch of that single-key variant follows the EDIT code below).
EDIT:
So the question is not entirely well described, but I did my best. Add exports to the functions you need and use your getProjectId if needed. I used an approach with a separate array of IDs to maintain the list of projects.
function createNewProject(name) {
  // create and return a project object
  return {
    title: name,
    id: crypto.randomUUID(),
    tasks: []
  };
}

function saveProject(storageKey, projectObject) {
  // get the current list of project keys, or create a new list
  const allProjectKeys = JSON.parse(localStorage.getItem("allProjectKeys")) ?? [];
  // add the new key to the list
  allProjectKeys.push(storageKey);
  // save the current list of project keys
  localStorage.setItem("allProjectKeys", JSON.stringify(allProjectKeys));
  // save the project data
  localStorage.setItem(storageKey, JSON.stringify(projectObject));
}

function getProjectByKey(storageKey) {
  // get a single project by the given key
  return JSON.parse(localStorage.getItem(storageKey));
}

function getAllProjects() {
  // get the list of all project keys (empty on the first visit),
  // and map each key to the actual project it points to
  return (JSON.parse(localStorage.getItem("allProjectKeys")) ?? []).map(getProjectByKey);
}

const testProject = createNewProject("test");
saveProject(testProject.id, testProject);
console.log(getProjectByKey(testProject.id));
console.log(getAllProjects());
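For completeness, a minimal sketch of the single-key variant mentioned above: everything lives in one array under one key, which also makes reloading on refresh a one-liner. The key name is an assumption.

const STORAGE_KEY = "allProjects"; // one key holds the whole array

function saveAllProjects(projects) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(projects));
}

function loadAllProjects() {
  // returns [] on the first visit, so a page refresh is safe
  return JSON.parse(localStorage.getItem(STORAGE_KEY)) ?? [];
}

// usage: load on refresh, add a project, save back
const projects = loadAllProjects();
projects.push(createNewProject("another test"));
saveAllProjects(projects);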
I have this query in my code which builds a tag cloud for my blog's front page:
tagCloud: allContentfulBlogPost {
  group(field: tags, limit: 8) {
    fieldValue
  }
}
It passes data that I map in my component using {data.tagCloud.group.map(tag => (...))};. The code works nicely, but it isn't limited by the arguments I pass in group(field: tags, limit: 8) in my query: it renders all the tags, not only the first eight.
I've also tried the skip argument, unsuccessfully, just to see whether arguments work at all.
Is this the proper way to limit the count to my mapping component in Gatsby?
The Contentful source plugin doesn't define arguments on any of the nodes it creates, unfortunately. Instead you would need to create these yourself. The easiest way to do that is through the createResolvers API.
Here's a similar example from a project of mine:
// in gatsby-node.js
exports.createResolvers = ({ createResolvers }) => {
  createResolvers({
    SourceArticleCollection: {
      // Add articles from the selected section(s)
      articles: {
        type: ["SourceArticle"],
        args: {
          // here's where the `limit` argument is added
          limit: {
            type: "Int",
          },
        },
        resolve: async (source, args, context, info) => {
          // this function just needs to return the data for the field;
          // in this case, I'm able to fetch a list of the top-level
          // entries that match a particular condition, but in your case
          // you might want to instead use the existing data in your
          // `source` and just slice it in JS.
          const articles = await context.nodeModel.runQuery({
            query: {
              filter: {
                category: {
                  section: {
                    id: {
                      in: source.sections.map((s) => s._ref),
                    },
                  },
                },
              },
            },
            type: "SourceArticle",
          })
          return (articles || []).slice(0, args.limit || source.limit || 20)
        },
      },
    },
  })
}
Because resolvers run as part of the data-fetching routines that support the GraphQL API, this will run server-side at build-time and only the truncated/prepared data will be sent down to the client at request time.
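If wiring up resolvers feels heavy for a tag cloud, a simpler workaround (in the spirit of the "just slice it in JS" comment above) is to truncate in the component itself. A minimal sketch, assuming a React component with the query data in scope:

// sketch: query all groups, then keep only the first eight client-side
const TagCloud = ({ data }) => (
  <ul>
    {data.tagCloud.group.slice(0, 8).map((tag) => (
      <li key={tag.fieldValue}>{tag.fieldValue}</li>
    ))}
  </ul>
);

The trade-off is that the full tag list still ships in the page data; only the resolver approach truncates what is sent to the client.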
I am trying to follow the example of cursor-based pagination with React Apollo (https://www.apollographql.com/docs/react/data/pagination/#cursor-based) but am struggling with how my component that rendered the original data gets the new (appended) data.
This is how we get the original data and pass it to the component:
const { data: { comments, cursor }, loading, fetchMore } = useQuery(
  MORE_COMMENTS_QUERY
);

<Comments
  entries={comments || []}
  onLoadMore={...}
/>
What I'm unsure of is how the fetchMore function works.
onLoadMore={() =>
  fetchMore({
    query: MORE_COMMENTS_QUERY,
    variables: { cursor: cursor },
    updateQuery: (previousResult, { fetchMoreResult }) => {
      const previousEntry = previousResult.entry;
      const newComments = fetchMoreResult.moreComments.comments;
      const newCursor = fetchMoreResult.moreComments.cursor;
      return {
        // By returning `cursor` here, we update the `fetchMore` function
        // to the new cursor.
        cursor: newCursor,
        entry: {
          // Put the new comments in the front of the list
          comments: [...newComments, ...previousEntry.comments]
        },
        __typename: previousEntry.__typename
      };
    }
  })
}
From what I understand, yes, once my component calls this onLoadMore function (using a button's onClick, for example), it will fetch the data based on a new cursor.
My question is this. I'm sorry if this is too simple and I'm not understanding something basic.
How does the component get the new data?
I know the data is there, because I console logged the newComments (in my case it wasn't newComments, but you get the idea), and I saw the new data! But how do those new comments get returned to the component that needs the data? And if I click the button again, it is still stuck on the same cursor as before.
What am I missing here?
The updateQuery function lets you modify (override) the result of the current query. At the same time, your component is subscribed to the query and will get the new result. Let's play this through:
Your component is rendered for the first time; it subscribes to the query and receives the current result of the query from the cache, if there is one. If not, the query starts fetching from the GraphQL server and your component gets notified about the loading state.
Once the query has been fetched, your component gets the data as soon as the result comes in. It now shows the first x results. In the cache, an entry for your query field is created, which might look something like this:
{
  "Query": {
    "cursor": "cursor1",
    "entry": { "comments": [{ ... }, { ... }] }
  }
}

// normalised
{
  "Query": {
    "cursor": "cursor1",
    "entry": Ref("Entry:1"),
  },
  "Entry:1": {
    "comments": [Ref("Comment:1"), Ref("Comment:2")],
  },
  "Comment:1": { ... },
  "Comment:2": { ... }
}
The user clicks on load more and your query is fetched again, but with the cursor value. The cursor tells the API from which entry it should start returning values; in our example, after the Comment with id 2.
The query result comes in and you use the updateQuery function to manually update the result of the query in the cache. The idea here is that we want to merge the old result list with the new one: we already fetched two comments and now we want to add the two new ones. You have to return a result that is the combined result of the two queries. For this we need to update the cursor value (so that we can click "load more" again) and concat the lists of comments. The value is written to the cache, and our normalised cache now looks like this:
{
  "Query": {
    "cursor": "cursor2",
    "entry": { "comments": [{ ... }, { ... }, { ... }, { ... }] }
  }
}

// normalised
{
  "Query": {
    "cursor": "cursor2",
    "entry": Ref("Entry:1"),
  },
  "Entry:1": {
    "comments": [Ref("Comment:1"), Ref("Comment:2"), Ref("Comment:3"), Ref("Comment:4")],
  },
  "Comment:1": { ... },
  "Comment:2": { ... },
  "Comment:3": { ... },
  "Comment:4": { ... }
}
Since your component is subscribed to the query, it gets rerendered with the new query result from the cache! The data is displayed in the UI because we merged the query result, so the component gets the new data just as if the result had contained all four comments in the first place.
It depends on how you handle the offset. I'll try to simplify an example for you.
This is a simplified component that I use successfully:
const PlayerStats = () => {
  const { data, loading, fetchMore } = useQuery(CUMULATIVE_STATS, {
    variables: sortVars,
  })

  if (loading) return null // data is undefined until the first result arrives

  const players = data.GetCumulativeStats

  const loadMore = () => {
    fetchMore({
      variables: { offset: players.length },
      updateQuery: (prevResult, { fetchMoreResult }) => {
        if (!fetchMoreResult) return prevResult
        return {
          ...prevResult,
          GetCumulativeStats: [
            ...prevResult.GetCumulativeStats,
            ...fetchMoreResult.GetCumulativeStats,
          ],
        }
      },
    })
  }
  // ... render players and a button that calls loadMore
}
My CUMULATIVE_STATS query returns 50 rows by default. I pass the length of that result array to my fetchMore query as offset. So when I execute CUMULATIVE_STATS with fetchMore, the variables of the query are both sortVars and offset.
My resolver in the backend handles the offset so that if it is, for example, 50, it ignores the first 50 results of the query and returns the next 50 from there (ie. rows 51-100).
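For reference, that resolver can be as simple as the following hypothetical sketch; the db context helper, its getAllStats method, and the sortBy argument are all assumptions, not a real API:

// hypothetical backend resolver: `db.getAllStats` is an assumed helper
const resolvers = {
  Query: {
    GetCumulativeStats: async (_root, { offset = 0, sortBy }, { db }) => {
      const rows = await db.getAllStats(sortBy);
      // ignore the first `offset` rows and return the next 50
      return rows.slice(offset, offset + 50);
    },
  },
};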
Then in the updateQuery I have two objects available: prevResult and fetchMoreResult. At this point I just combine them using the spread operator. If no new results are returned, I return the previous results.
Once I have fetched more, players.length becomes 100 instead of 50. That is my new offset, and new data will be queried the next time I call fetchMore.
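And if you are on Apollo Client 3, the same concatenation can live in the cache's type policies instead of updateQuery. A sketch using my field names; everything else is an assumption:

import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        GetCumulativeStats: {
          // assumption: sort variables don't change the identity of the list
          keyArgs: false,
          merge(existing = [], incoming, { args }) {
            const offset = args?.offset ?? 0;
            const merged = existing.slice(0);
            incoming.forEach((item, i) => {
              merged[offset + i] = item; // place each row at its absolute offset
            });
            return merged;
          },
        },
      },
    },
  },
});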
I have one template, let's call it Template A, that prints JSON data into a table; one column includes a button which is conditionally rendered when has_violations equals true.
An example of the table:
[table screenshot]
What I want to accomplish is to take the driver_id that is associated with that particular row into the router link and have it passed onto a different template file let's call it Template B.
But how can I accomplish this using Vuex Store?
Sample JSON data:
{"driver_id":1,"driver_name":"{driver_first_name}, {driver_last_name}","driver_truck":"13","driver_trailer":"83","driver_status":"driving","has_violations":false},
{"driver_id":2,"driver_name":"{driver_first_name}, {driver_last_name}","driver_truck":"58","driver_trailer":"37","driver_status":"sleeping","has_violations":true},
{"driver_id":3,"driver_name":"{driver_first_name}, {driver_last_name}","driver_truck":"80","driver_trailer":"27","driver_status":"driving","has_violations":true},
Basic steps:
Get index of row on button click.
Get index of JSON data using value from Step 1.
Store the JSON data from Step 2 into Vuex.
Send user to Template B using router.
Retrieve the data from the Store when in Template B.
Because you did not show your exact structure, the code below is just a basic structure.
Here's the code:
/* VUEX demo */
new Vuex.Store({
  state: {
    driver_data: undefined
  },
  mutations: {
    // mutations receive the state object directly as the first argument
    recordDriver(state, payload) {
      state.driver_data = payload;
    }
  }
});

/* TEMPLATE A demo */
Vue.component('template-a', {
  data: function () {
    return {
      // Assume this is the JSON
      driverJSON: [
        { driver_id: 1, driver_name: 'John Smith' },
        { driver_id: 2, driver_name: 'Bob John' }
      ]
    };
  },
  methods: {
    onButtonClicked: function (e) {
      const button = e.target;
      const td = button.parentElement;
      const tr = td.parentElement;
      const indexOfTr = [...tr.parentElement.children].findIndex(row => row === tr);
      const dataToStore = this.driverJSON[indexOfTr];
      // Store data into $store
      this.$store.commit('recordDriver', dataToStore);
      // After storing, navigate using $router ($router.go expects a number
      // of history steps, so use $router.push for a route change)
      this.$router.push({ ... });
    }
  }
});

/* TEMPLATE B demo */
Vue.component('template-b', {
  data: function () {
    return {
      // Get driver data using $store
      driver: this.$store.state.driver_data
    };
  }
});
I like Yong's answer, but I would rather suggest you pass the driverID as a prop to your route and then use a Vuex getter to get the violations for that particular ID.
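A minimal sketch of that route-prop-plus-getter approach; the route path, store shape, and drivers state are assumptions, not code from the question:

// Template B: resolve the driver (and its violations) from the route prop
const TemplateB = {
  props: ["driverId"],
  computed: {
    driver() {
      return this.$store.getters.driverById(this.driverId);
    },
  },
};

// router: expose the id in the path and forward it to the component as a prop
const router = new VueRouter({
  routes: [{ path: "/driver/:driverId", component: TemplateB, props: true }],
});

// store: a getter that looks up one driver's row by id
const store = new Vuex.Store({
  state: {
    drivers: [], // assume the JSON rows from Template A live here
  },
  getters: {
    driverById: (state) => (id) =>
      state.drivers.find((d) => d.driver_id === Number(id)),
  },
});

// Template A then only needs to navigate with the id:
// this.$router.push(`/driver/${dataToStore.driver_id}`);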