How do I handle deletes in react-apollo?

I have a mutation like
mutation deleteRecord($id: ID) {
  deleteRecord(id: $id) {
    id
  }
}
and in another location I have a list of elements.
Is there something better I could return from the server, and how should I update the list?
More generally, what is best practice for handling deletes in apollo/graphql?

I am not sure whether it is good practice, but here is how I handle the deletion of an item in react-apollo with updateQueries:
import { graphql, compose } from 'react-apollo';
import gql from 'graphql-tag';
import update from 'react-addons-update';
import _ from 'underscore';

const SceneCollectionsQuery = gql`
  query SceneCollections {
    myScenes: selectedScenes (excludeOwner: false, first: 24) {
      edges {
        node {
          ...SceneCollectionScene
        }
      }
    }
  }`;

const DeleteSceneMutation = gql`
  mutation DeleteScene($sceneId: String!) {
    deleteScene(sceneId: $sceneId) {
      ok
      scene {
        id
        active
      }
    }
  }`;

const SceneModifierWithStateAndData = compose(
  ...,
  graphql(DeleteSceneMutation, {
    props: ({ mutate }) => ({
      deleteScene: (sceneId) => mutate({
        variables: { sceneId },
        updateQueries: {
          SceneCollections: (prev, { mutationResult }) => {
            const myScenesList = prev.myScenes.edges.map((item) => item.node);
            const deleteIndex = _.findIndex(myScenesList, (item) => item.id === sceneId);
            if (deleteIndex < 0) {
              return prev;
            }
            return update(prev, {
              myScenes: {
                edges: {
                  $splice: [[deleteIndex, 1]]
                }
              }
            });
          }
        }
      })
    })
  })
)(SceneModifierWithState);

Here is a similar solution that works without underscore.js. It is tested with react-apollo version 2.1.1 and creates a component for a delete button:
import React from "react";
import gql from "graphql-tag";
import { Mutation } from "react-apollo";

const GET_TODOS = gql`
  {
    allTodos {
      id
      name
    }
  }
`;

const DELETE_TODO = gql`
  mutation deleteTodo($id: ID!) {
    deleteTodo(id: $id) {
      id
    }
  }
`;

const DeleteTodo = ({ id }) => {
  return (
    <Mutation
      mutation={DELETE_TODO}
      update={(cache, { data: { deleteTodo } }) => {
        // Read the cached list and write it back without the deleted entry.
        const { allTodos } = cache.readQuery({ query: GET_TODOS });
        cache.writeQuery({
          query: GET_TODOS,
          data: { allTodos: allTodos.filter(e => e.id !== id) }
        });
      }}
    >
      {(deleteTodo, { data }) => (
        <button
          onClick={e => {
            deleteTodo({ variables: { id } });
          }}
        >
          Delete
        </button>
      )}
    </Mutation>
  );
};

export default DeleteTodo;
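For context, a hypothetical parent component could render one such button per todo; TodoItem and its prop shape here are assumptions, not part of the original answer:

import React from "react";
import DeleteTodo from "./DeleteTodo"; // path assumed

// Hypothetical list item: the todo's name next to its delete button.
const TodoItem = ({ todo }) => (
  <li>
    {todo.name} <DeleteTodo id={todo.id} />
  </li>
);

export default TodoItem;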

All those answers assume query-oriented cache management.
What if I remove a user with id 1 and this user is referenced in 20 queries across the entire app? Reading the answers above, I'd have to assume I will have to write code to update the cache for all of them. This would be terrible for the long-term maintainability of the codebase and would make any refactoring a nightmare.
The best solution in my opinion would be something like apolloClient.removeItem({__typename: "User", id: "1"}) that would:
- replace any direct reference to this object in the cache with null
- filter out this item from any [User] list in any query
But it doesn't exist (yet).
It might be a great idea, or it could be even worse (e.g. it might break pagination).
There is an interesting discussion about it: https://github.com/apollographql/apollo-client/issues/899
I would be careful with those manual query updates. They look appetizing at first, but they won't scale as your app grows. At least create a solid abstraction layer on top of them, e.g.:
next to every query you define (e.g. in the same file), define a function that cleans it properly, e.g.

const MY_QUERY = gql``;

// It's a local 'cleaner' - relatively easy to maintain, as you can require
// matching cleaner updates during code review whenever the query changes.
export function removeUserFromMyQuery(apolloClient, userId) {
  // clean here
}

and then collect all those updates and call them all in the final update:

function handleUserDeleted(userId, client) {
  removeUserFromMyQuery(userId, client)
  removeUserFromSearchQuery(userId, client)
  removeIdFrom20MoreQueries(userId, client)
}

For Apollo v3 this works for me:
const [deleteExpressHelp] = useDeleteExpressHelpMutation({
  update: (cache, { data }) => {
    cache.evict({
      id: cache.identify({
        __typename: 'express_help',
        id: data?.delete_express_help_by_pk?.id,
      }),
    });
  },
});
From the new docs:
Filtering dangling references out of a cached array field (like the Deity.offspring example above) is so common that Apollo Client performs this filtering automatically for array fields that don't define a read function.
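Worth noting: evicting an object can leave unreachable objects behind in the cache, and Apollo Client 3 provides cache.gc() to collect them. A minimal sketch extending the update callback above (typename and hook name taken from that answer):

const [deleteExpressHelp] = useDeleteExpressHelpMutation({
  update: (cache, { data }) => {
    cache.evict({
      id: cache.identify({
        __typename: 'express_help',
        id: data?.delete_express_help_by_pk?.id,
      }),
    });
    // Optional: garbage-collect objects that became unreachable after the eviction.
    cache.gc();
  },
});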

Personally, I return an int which represents the number of items deleted. Then I use updateQueries to remove the document(s) from the cache.

I have faced the same issue when choosing the appropriate return type for such mutations, where the REST API associated with the mutation could return HTTP 204, 404 or 500.
Defining an arbitrary type and then returning null (types are nullable by default) does not seem right, because you don't know what happened, i.e. whether the request was successful or not.
Returning a boolean solves that issue: you know whether the mutation worked or not, but you lack information in case it didn't, like a better error message that you could show on the front end. For example, if we got a 404 we could return "Not found".
Returning a custom type feels a bit forced, because it is not actually a type of your schema or business logic; it just serves to fix a "communication issue" between REST and GraphQL.
I ended up returning a string: I can return the resource ID/UUID, or simply "ok" in case of success, and an error message in case of error.
Not sure if this is good practice or idiomatic GraphQL.
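For comparison, here is a minimal sketch of the "custom type" option discussed above, written as SDL inside a gql tag; every name in it is made up for illustration:

import gql from "graphql-tag";

// Hypothetical payload type: a success flag plus an optional,
// human-readable message (e.g. "Not found" when the REST backend answers 404).
const typeDefs = gql`
  type DeleteRecordPayload {
    ok: Boolean!
    message: String
  }

  type Mutation {
    deleteRecord(id: ID!): DeleteRecordPayload!
  }
`;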

Related

How to remove unused #include's from GraphQl query?

I am currently using the Apollo GraphQl client - which sends POST requests by default - and would like to switch to the GET request version to enable caching the responses.
However, quite a few requests are failing to fetch because the GET query that is sent ends up larger than 8kb.
I have constructed the queries so that there is one query per endpoint, and have applied the concept of "withables" to opt into extra fields on demand, e.g.:
# example query
query TheEndpoint(
  $id: Int!,
  $withFieldA: Boolean = false,
  $withFieldB: Boolean = false
) {
  the_endpoint(id: $id, withFieldA: $withFieldA, withFieldB: $withFieldB) {
    id
    title
    field_a @include(if: $withFieldA) {
      id
      # more values
    }
    field_b @include(if: $withFieldB) {
      id
      # more values
    }
    # etc
  }
}
// example usage
import query from 'queries/the_endpoint.gql'
apolloClient.query({ query, variables: { id: 1, withFieldA: true } })
So the problem is that, while what's being requested is generally pretty small, the query sent across the network is massive, because all the withables that haven't been opted into are still included.
What I would like to do is make the query as lean as possible, and include only the fields that have been opted into before sending the query across the network.
Is there a way to scrub the query before sending it?
Thanks
Solved with the help of the graphql visitor tool.
import { visit, ASTNode } from "graphql";

function clearOmittedWithables(query: ASTNode, withables: string[] = []) {
  const shouldOmit = (variable: string, withables: string[]) => {
    return variable.startsWith("with") && !withables.includes(variable);
  };

  return visit(query, {
    // Drop the variable definitions of withables that were not opted into.
    VariableDefinition: {
      leave(node) {
        if (shouldOmit(node.variable.name.value, withables)) {
          return null;
        }
      },
    },
    // Drop any field whose @include(if: ...) points at an omitted withable.
    Field: {
      leave(node) {
        for (const directive of node?.directives || []) {
          if (directive.name.value === "include") {
            for (const argument of directive?.arguments || []) {
              if (shouldOmit(argument.value.name.value, withables)) {
                return null;
              }
            }
          }
        }
      },
    },
  });
}
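For completeness, a hypothetical call site, reusing the import style from the question (the opted-in withables list is whatever the caller actually needs):

import query from 'queries/the_endpoint.gql'

// Strip the withables that were not opted into, then send the slimmer document.
const leanQuery = clearOmittedWithables(query, ["withFieldA"]);
apolloClient.query({ query: leanQuery, variables: { id: 1, withFieldA: true } });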

How to create pages from non-serializable data (functions)

I have this JavaScript data file(src/test/test.js):
module.exports = {
  "title": "...",
  "Number": "number1",
  "Number2": ({ number1 }) => number1 / 2,
}
I want to pass this file verbatim (functions preserved) to a page, so that the page can use that data to build itself. I already have the page template and everything else sorted out; I just need to find a way to pass this data into the page.
The first approach I tried was requiring this file in gatsby-node.js and then passing it as pageContext.
gatsby-node.js
const path = require('path');

exports.createPages = ({ actions, graphql }) => {
  const { createPage } = actions;
  return graphql(`
    query loadQuery {
      allFile(filter: {sourceInstanceName: {eq: "test"}}) {
        edges {
          node {
            relativePath
            absolutePath
          }
        }
      }
    }
  `).then(result => {
    if (result.errors) {
      throw result.errors;
    }
    for (const node of result.data.allFile.edges.map(e => e.node)) {
      const data = require(node.absolutePath);
      createPage({
        path: node.relativePath,
        component: path.resolve('./src/templates/test.js'),
        context: data,
      });
    }
  });
};
gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `test`,
        path: `${__dirname}/src/test/`,
      },
    },
  ],
}
src/templates/test.js
import React from 'react';

const index = ({ pageContext }) => (
  <p>{pageContext.Number2()}</p>
);

export default index;
However, I get this warning when running the dev server:
warn Error persisting state: ({ number1 }) => number1 / 2 could not be cloned.
If I ignore it and try to use the function anyway, Gatsby crashes with this error:
WebpackError: TypeError: pageContext.Number2 is not a function
After searching for a while, I found this:
The pageContext was always serialized so it never worked to pass a function and hence this isn't a bug. We might have not failed before though.
- Gatsby#23675
which told me this approach wouldn't work.
How could I pass this data into a page? I've considered JSON instead, however, JSON can't contain functions.
I've also tried finding a way to register a JSX object directly, however I couldn't find a way.
Regarding the main topic: as you spotted, it can't be done that way, because the data is serialized.
How could I pass this data into a page? I've considered JSON instead,
however, JSON can't contain functions.
Well, this is only partially true. You can always do something like:

{"function": {"arguments": "a,b,c", "body": "return a*b+c;"}}

And then:

// `data` is the parsed JSON above. Note that a variable cannot be named
// `function` (reserved word), so the serialized function is accessed as a property.
const func = new Function(data.function.arguments, data.function.body);

In this case, you are (de)serializing the function, creating a callable function from the JSON parameters. This approach may work in your scenario.
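Applied to the question's data file, the idea could look roughly like this sketch; the serialized shape is an assumption for illustration, not a Gatsby API:

// gatsby-node.js: serialize the function before handing it to createPage.
const context = {
  ...data,
  Number2: { arguments: "{ number1 }", body: "return number1 / 2;" },
};

// src/templates/test.js: rebuild a callable function from the context.
const Number2 = new Function(
  pageContext.Number2.arguments,
  pageContext.Number2.body
);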
Regarding the JSX, I guess you can try something like:
for (const node of result.data.allFile.edges.map(e => e.node)) {
  const data = require(node.absolutePath);
  createPage({
    path: node.relativePath,
    component: path.resolve('./src/templates/test.js'),
    context: {
      someComponent: () => <h1>Hi!</h1>
    },
  });
}
And then:
import React from 'react';

const Index = ({ pageContext: { someComponent: SomeComponent } }) => (
  <div><SomeComponent /></div>
);

export default Index;
Note: I don't know if it's a typo from the question, but index should be capitalized as Index.
In this case, you are aliasing the someComponent as SomeComponent, which is a valid React component.

Issue with automatic UI updates in Apollo: `updateQuery` not working properly with `subscribeToMore`

I'm using GraphQL subscriptions for my chat application, but I have an issue with the UI updates.
Here are the query and the mutation I'm using:
const createMessage = gql`
  mutation createMessage($text: String!, $sentById: ID!) {
    createMessage(text: $text, sentById: $sentById) {
      id
      text
    }
  }
`

const allMessages = gql`
  query allMessages {
    allMessages {
      id
      text
      createdAt
      sentBy {
        name
        location {
          latitude
          longitude
        }
      }
    }
  }
`
Then, when exporting, I'm wrapping my Chat component like so:
export default graphql(createMessage, { name: 'createMessageMutation' })(
  graphql(allMessages, { name: 'allMessagesQuery' })(Chat)
)
I'm subscribing to the allMessagesQuery in componentDidMount:
componentDidMount() {
  // Subscribe to `CREATED`-mutations
  this.createMessageSubscription = this.props.allMessagesQuery.subscribeToMore({
    document: gql`
      subscription {
        Message(filter: {
          mutation_in: [CREATED]
        }) {
          node {
            id
            text
            createdAt
            sentBy {
              name
            }
          }
        }
      }
    `,
    updateQuery: (previousState, { subscriptionData }) => {
      console.log('Chat - received subscription: ', previousState, subscriptionData)
      const newMessage = subscriptionData.data.Message.node
      const messages = previousState.allMessages.concat([newMessage])
      console.log('Chat - new messages: ', messages.length, messages) // prints the correct array with the new message!!
      return {
        allMessages: messages
      }
    },
    onError: (err) => console.error(err),
  })
}
After I send a message through the chat, the subscription is triggered successfully and I see the two logging statements, which contain the expected data. So the contents of messages are definitely correct within updateQuery!
However, the UI doesn't update automatically; in fact, all the previously displayed messages disappear.
My render method looks as follows:
render() {
  console.log('Chat - render: ', this.props.allMessagesQuery)
  return (
    <div className='Chat'>
      <ChatMessages
        messages={this.props.allMessagesQuery.allMessages || []}
      />
      <ChatInput
        message={this.state.message}
        onTextInput={(message) => this.setState({message})}
        onResetText={() => this.setState({message: ''})}
        onSend={this._onSend}
      />
    </div>
  )
}
The logging statement in render shows that initially, this.props.allMessagesQuery has the array allMessages, so everything works after the initial loading.
After the subscription is received, allMessages disappears from this.props.allMessagesQuery which is why an empty array is given to ChatMessages and nothing is rendered.
(Screenshots omitted: before the subscription is triggered, allMessages is populated ✅; after it is triggered, it is gone ❌.)
I figured it out after digging through the docs one more time; it was a mistake on my end!
This part from the Apollo docs helped me solve the issue:
Remember: You'll need to ensure that you select IDs in every query where you need the results to be normalized.
So, adding the id fields to the returned payloads of my queries and subscriptions actually helped.
const allMessages = gql`
  query allMessages {
    allMessages {
      id
      text
      createdAt
      sentBy {
        id
        name
        location {
          id
          latitude
          longitude
        }
      }
    }
  }
`

subscription {
  Message(filter: {
    mutation_in: [CREATED]
  }) {
    node {
      id
      text
      createdAt
      sentBy {
        id
        name
      }
    }
  }
}
In your updateQuery, previousState is the result of the allMessages query that is saved in the Apollo store.
To update the store you have to make a deep copy, or use a helper such as React's immutability helper. The Apollo developer pages give some information about how to correctly update the store with updateQuery.
In your case it could look like this:
// import update from 'react-addons-update';
...
updateQuery: (previousState, { subscriptionData }) => {
  const newMessage = subscriptionData.data.Message.node;
  return update(previousState, {
    allMessages: { $push: [newMessage] },
  });
}
...

Apollo Client: Upsert mutation only modifies cache on update but not on create

I have an upsert query that gets triggered on either create or update. On update, Apollo integrates the result into the cache but on create it does not.
Here is the query:
export const UPSERT_NOTE_MUTATION = gql`
  mutation upsertNote($id: ID, $body: String) {
    upsertNote(id: $id, body: $body) {
      id
      body
    }
  }`
My client:
const graphqlClient = new ApolloClient({
  networkInterface,
  reduxRootSelector: 'apiStore',
  dataIdFromObject: ({ id }) => id
});
The response from the server is identical in both cases: both id and body are returned, but Apollo isn't adding new ids into the data cache object automatically.
Is it possible to have Apollo automatically add new objects to data without triggering a subsequent fetch?
(Screenshot of my data store omitted.)
UPDATE
According to the documentation, the updateQueries function is supposed to allow me to push a new element to my list of assets without having to trigger my original fetch query again.
The function gets executed, but whatever it returns is completely ignored and the cache is not modified.
Even if I do something like this:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    return {};
  }
}
Nothing changes.
UPDATE #2
Still can't get my assets list to update.
Inside updateQueries, here is what my previousQueryResult looks like:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    return {
      assets: []
        .concat(mutationResult.data.upsertAsset)
        .concat(previousQueryResult.assets)
    }
  }
}
But regardless of what I return, the data store does not refresh. (Screenshots of the resulting data store and of an individual asset omitted.)
Have you followed the example here?
I would write the updateQueries in the mutate call like this:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    const newAsset = mutationResult.data.upsertAsset;
    return update(previousQueryResult, {
      assets: {
        $unshift: [newAsset],
      },
    });
  },
}
Or with Object.assign instead of update from immutability-helper:

updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    const newAsset = mutationResult.data.upsertAsset;
    return Object.assign({}, previousQueryResult, {
      assets: [...previousQueryResult.assets, newAsset],
    });
  },
}
As you state in your update, you need to use updateQueries in order to update the queries associated with this mutation. Although your question does not state what kind of query is to be updated with the result of the mutation, I assume you have something like this:
query myMadeUpQuery {
  note {
    id
    body
  }
}
which should return the list of notes currently within your system, with the id and body of each note. With updateQueries, your callback receives the result of the mutation (i.e. information about the newly inserted note) and the previous result of this query (i.e. the list of notes), and your callback has to return the new result that should be assigned to the query above.
See here for an analogous example. Essentially, without the immutability-helper that the given example uses, you could write your updateQueries callback as follows:

updateQueries: {
  myMadeUpQuery: (previousQueryResult, { mutationResult }) => {
    return {
      note: previousQueryResult.note.concat(mutationResult.data.upsertNote),
    };
  }
}

Meteor - Publish just the count for a collection

Is it possible to publish just the count for a collection to the user? I want to display the total count on the homepage, but not pass all the data to the user. This is what I tried but it does not work:
Meteor.publish('task-count', function () {
  return Tasks.find().count();
});

this.route('home', {
  path: '/',
  waitOn: function () {
    return Meteor.subscribe('task-count');
  }
});
When I try this I get an endless loading animation.
Meteor.publish functions should return cursors, but here you're directly returning a Number: the total count of documents in your Tasks collection.
Counting documents in Meteor is a surprisingly difficult task if you want to do it the proper way, i.e. with a solution that is both elegant and effective.
The package ros:publish-counts (a fork of tmeasday:publish-counts) provides accurate counts for small collections (100-1000 documents), or "nearly accurate" counts for larger collections (tens of thousands) using the fastCount option.
You can use it this way:
// server-side publish (small collection)
Meteor.publish("tasks-count", function () {
  Counts.publish(this, "tasks-count", Tasks.find());
});

// server-side publish (large collection)
Meteor.publish("tasks-count", function () {
  Counts.publish(this, "tasks-count", Tasks.find(), { fastCount: true });
});

// client-side use
Template.myTemplate.helpers({
  tasksCount: function () {
    return Counts.get("tasks-count");
  }
});
You'll get a client-side reactive count as well as a reasonably performant server-side implementation.
This problem is discussed in a (paid) BulletProof Meteor lesson, which is recommended reading: https://bulletproofmeteor.com/
I would use a Meteor.call.
Client:

var count; // global client variable

Meteor.startup(function () {
  Meteor.call("count", function (error, result) {
    count = result;
  });
});

Then return count in some helper.
Server:

Meteor.methods({
  count: function () {
    return Tasks.find().count();
  }
});
*Note: this solution is not reactive. However, if reactivity is desired, it can be incorporated, as sketched below.
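For instance, a minimal sketch storing the method result in a ReactiveVar (assumes the reactive-var package is installed), so that any helper reading it re-runs once the result arrives:

import { ReactiveVar } from "meteor/reactive-var";

const count = new ReactiveVar(null);

Meteor.startup(function () {
  Meteor.call("count", function (error, result) {
    if (!error) count.set(result); // reactively re-runs helpers that read count
  });
});

Template.myTemplate.helpers({
  tasksCount: function () {
    return count.get(); // reactive read
  }
});

Note this still fetches the count once at startup; it does not update when the collection changes on the server.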
This is an old question, but I hope my answer might help others who need this info as I did.
I sometimes need some miscellaneous but reactive data to display indicators in the UI, and a document count is a good example.
Create a reusable (exported) client-side-only collection that won't be imported on the server (to avoid creating an unnecessary database collection). Note the name passed as argument ("misc" here).
import { Mongo } from "meteor/mongo";
const Misc = new Mongo.Collection("misc");
export default Misc;
Create a publication on the server that accepts a docId and the name of the key where the count will be saved (with a default value). The collection name to publish to is the one used to create the client-only collection ("misc"). The docId value does not matter much; it just needs to be unique among all Misc docs to avoid conflicts. See the Meteor docs for details about the publication behavior.
import { Meteor } from "meteor/meteor";
import { check } from "meteor/check";
import { Shifts } from "../../collections";

const COLL_NAME = "misc";

/* Publish the number of shifts that need revision in a 'misc' collection
 * to a document specified as `docId` and optionally to a specified `key`. */
Meteor.publish("shiftsToReviseCount", function ({ docId, key = "count" }) {
  check(docId, String);
  check(key, String);

  let initialized = false;
  let count = 0;

  const observer = Shifts.find(
    { needsRevision: true },
    { fields: { _id: 1 } }
  ).observeChanges({
    added: () => {
      count += 1;
      if (initialized) {
        this.changed(COLL_NAME, docId, { [key]: count });
      }
    },
    removed: () => {
      count -= 1;
      this.changed(COLL_NAME, docId, { [key]: count });
    },
  });

  if (!initialized) {
    this.added(COLL_NAME, docId, { [key]: count });
    initialized = true;
  }

  this.ready();
  this.onStop(() => {
    observer.stop();
  });
});
On the client, import the collection, decide on a docId string (it can be saved in a constant), subscribe to the publication, and fetch the appropriate document. Voilà!
import { Meteor } from "meteor/meteor";
import { withTracker } from "meteor/react-meteor-data";
import Misc from "/collections/client/Misc";

const REVISION_COUNT_ID = "REVISION_COUNT_ID";

export default withTracker(() => {
  Meteor.subscribe("shiftsToReviseCount", {
    docId: REVISION_COUNT_ID,
  }).ready();
  const { count } = Misc.findOne(REVISION_COUNT_ID) || {};
  return { count };
});
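For completeness, the exported wrapper is then applied to a presentational component; a hypothetical sketch, with the component name and file path made up:

import React from "react";
import withRevisionCount from "./withRevisionCount"; // the withTracker container above

// Receives `count` as a prop from the container.
const RevisionCount = ({ count }) => (
  <span>{count == null ? "-" : count}</span>
);

export default withRevisionCount(RevisionCount);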
