How to remove unused @include fields from a GraphQL query? - javascript

I am currently using the Apollo GraphQL client - which sends POST requests by default - and would like to switch to the GET request version to enable caching of responses.
However, quite a few requests fail because the GET query that is sent ends up larger than 8 KB.
I have constructed the queries so that there is one query per endpoint, and I have applied the concept of "withables" to opt into extra fields on demand, e.g.
# example query
query TheEndpoint(
  $id: Int!,
  $withFieldA: Boolean = false,
  $withFieldB: Boolean = false
) {
  the_endpoint(id: $id, withFieldA: $withFieldA, withFieldB: $withFieldB) {
    id
    title
    field_a @include(if: $withFieldA) {
      id
      # more fields
    }
    field_b @include(if: $withFieldB) {
      id
      # more fields
    }
    # etc
  }
}
// example usage
import query from 'queries/the_endpoint.gql'
apolloClient.query({ query, variables: { id: 1, withFieldA: true } })
So the problem is that while what's being requested is generally pretty small, the size of the query being sent across the network is massive because all the withables that haven't been opted into are also being included.
What I would like to do is make the query as lean as possible and only include the fields that have been opted into before sending the query across the network.
Is there a way to scrub the query before sending it?
Thanks

Solved with the help of the graphql package's visit function.
import { visit, ASTNode } from "graphql";

function clearOmittedWithables(query: ASTNode, withables: string[] = []) {
  // A "with" variable is omitted unless it was explicitly opted into.
  const shouldOmit = (variable: string) =>
    variable.startsWith("with") && !withables.includes(variable);

  return visit(query, {
    // Drop the variable definitions for omitted withables.
    VariableDefinition: {
      leave(node) {
        if (shouldOmit(node.variable.name.value)) {
          return null;
        }
      },
    },
    // Drop any field whose @include condition refers to an omitted withable.
    Field: {
      leave(node) {
        for (const directive of node.directives ?? []) {
          if (directive.name.value === "include") {
            for (const argument of directive.arguments ?? []) {
              if (
                argument.value.kind === "Variable" &&
                shouldOmit(argument.value.name.value)
              ) {
                return null;
              }
            }
          }
        }
      },
    },
  });
}
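For reference, a minimal usage sketch (assuming clearOmittedWithables is in scope; deriving the withables list from the variables object is just one way to track what was opted into):
import query from 'queries/the_endpoint.gql'

const variables = { id: 1, withFieldA: true }
// Keep only the "with*" variables that were actually opted into.
const withables = Object.keys(variables).filter((key) => key.startsWith('with'))

apolloClient.query({
  query: clearOmittedWithables(query, withables),
  variables,
})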


Apollo GraphQL updateQuery to typePolicy

I am beating my head against a wall. I have updated to Apollo 3 and cannot figure out how to migrate an updateQuery to a typePolicy. I am doing basic continuation-based pagination, and this is how I used to merge the results of fetchMore:
await fetchMore({
  query: MessagesByThreadIDQuery,
  variables: {
    threadId: threadId,
    limit: Configuration.MessagePageSize,
    continuation: token
  },
  updateQuery: (prev, curr) => {
    // Extract our updated message page.
    const last = prev.messagesByThreadId.messages ?? []
    const next = curr.fetchMoreResult?.messagesByThreadId.messages ?? []
    return {
      messagesByThreadId: {
        __typename: 'MessagesContinuation',
        messages: [...last, ...next],
        continuation: curr.fetchMoreResult?.messagesByThreadId.continuation
      }
    }
  }
})
I have made an attempt to write the merge typePolicy myself, but it just continually loads and throws errors about duplicate identifiers in the Apollo cache. Here is what my typePolicy looks like for my query.
typePolicies: {
  Query: {
    fields: {
      messagesByThreadId: {
        keyArgs: false,
        merge: (existing, incoming, args): IMessagesContinuation => {
          const typedExisting: IMessagesContinuation | undefined = existing
          const typedIncoming: IMessagesContinuation | undefined = incoming
          const existingMessages = (typedExisting?.messages ?? [])
          const incomingMessages = (typedIncoming?.messages ?? [])
          const result = existing ? {
            __typename: 'MessageContinuation',
            messages: [...existingMessages, ...incomingMessages],
            continuation: typedIncoming?.continuation
          } : incoming
          return result
        }
      }
    }
  }
}
So I was able to solve my use case, though it seems way harder than it needs to be. I essentially have to locate existing items matching the incoming ones and overwrite them, as well as add any new items that don't yet exist in the cache.
I also have to apply this logic only if a continuation token was provided; if it's null or undefined, I just use the incoming value, because that indicates an initial load.
My document is shaped like this:
{
  "items": [{ id: string, ...others }],
  "continuation": "some_token_value"
}
I created a generic type policy that I can use for all my documents that have a similar shape. It allows me to specify the name of the items property, what the key args are that I want to cache on, and the name of the graphql type.
export function ContinuationPolicy(keyArgs: Array<string>, itemPropertyKey: string, typeName: string) {
  return {
    keyArgs,
    merge(existing: any, incoming: any, options: any) {
      if (!!existing && !!options.args?.continuation) {
        const existingItems = existing[itemPropertyKey] ?? []
        const incomingItems = incoming ? incoming[itemPropertyKey] : []
        let items: Array<any> = [...existingItems]
        for (let i = 0; i < incomingItems.length; i++) {
          const current = incomingItems[i] as any
          const found = items.findIndex(m => m.__ref === current.__ref)
          if (found > -1) {
            // Overwrite the existing reference with the incoming one.
            items[found] = current
          } else {
            items = [...items, current]
          }
        }
        // This new data is a continuation of the last data.
        return {
          __typename: typeName,
          [itemPropertyKey]: items,
          continuation: incoming.continuation
        }
      } else {
        // When we have no existing data in the cache, we'll just use the incoming data.
        return incoming
      }
    }
  }
}
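A hypothetical registration for the thread query above might then look like this (the key args, property name, and type name are taken from the question; adjust them to your schema):
import { InMemoryCache } from '@apollo/client'

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        messagesByThreadId: ContinuationPolicy(
          ['threadId'],            // cache one list per thread
          'messages',              // name of the items property
          'MessagesContinuation'   // __typename of the container
        )
      }
    }
  }
})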

How to use equals in a FilterExpression for the DynamoDB API

I am having a lot of trouble scanning and then using FilterExpression to filter on a single value. I have looked at the API documentation and other Stack Overflow questions, but I am still having trouble figuring out the proper syntax. Since I am also using React and JavaScript for the first time, this may be a problem with my understanding of those.
Below is what I am trying to use as a filter expression. uploadId is the field name in the DynamoDB table and event.pathParameters.id is the variable that should resolve to the value that the scan results are filtered on.
FilterExpression: "uploadId = :event.pathParameters.id"
Below is the code within context:
import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context, callback) {
  const params = {
    TableName: "uploads",
    FilterExpression: "uploadId = :event.pathParameters.id"
  };
  try {
    const result = await dynamoDbLib.call("scan", params);
    if (result.Item) {
      // Return the retrieved item
      callback(null, success(result.Item));
    } else {
      callback(null, failure({ status: false, error: "Item not found." }));
    }
  } catch (e) {
    callback(null, failure({ status: false }));
  }
}
Thank you for your help!
Always use expressions together with ExpressionAttributeValues; you cannot interpolate a variable directly into the expression string. params should look like this:
const params = {
  TableName: "uploads",
  FilterExpression: "uploadId = :uid",
  ExpressionAttributeValues: {
    ":uid": { S: event.pathParameters.id } // DynamoDB attribute value structure: S for String, N for Number, etc.
  }
};
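One caveat, hedged because it depends on what dynamodb-lib wraps: if it wraps AWS.DynamoDB.DocumentClient (as the similarly named helper in the Serverless Stack tutorial does), attribute values are plain JavaScript values rather than typed structures. Note also that scan returns an Items array, so the handler should check result.Items rather than result.Item.
// Variant assuming dynamoDbLib wraps the DocumentClient:
const params = {
  TableName: "uploads",
  FilterExpression: "uploadId = :uid",
  ExpressionAttributeValues: {
    ":uid": event.pathParameters.id // plain string, no { S: ... } wrapper
  }
};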

Apollo Client: Upsert mutation only modifies cache on update but not on create

I have an upsert mutation that gets triggered on either create or update. On update, Apollo integrates the result into the cache, but on create it does not.
Here is the mutation:
export const UPSERT_NOTE_MUTATION = gql`
  mutation upsertNote($id: ID, $body: String) {
    upsertNote(id: $id, body: $body) {
      id
      body
    }
  }
`
My client:
const graphqlClient = new ApolloClient({
  networkInterface,
  reduxRootSelector: 'apiStore',
  dataIdFromObject: ({ id }) => id
});
The response from the server is identical in both cases: both id and body are returned, but Apollo isn't adding new ids into the data cache object automatically.
Is it possible to have Apollo automatically add new objects to data without triggering a subsequent fetch?
UPDATE
According to the documentation, the updateQueries function is supposed to allow me to push a new element onto my list of assets without having to trigger my original fetch query again.
The function gets executed, but whatever it returns is completely ignored and the cache is not modified.
Even if I do something like this:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    return {};
  }
}
Nothing changes.
UPDATE #2
Still can't get my assets list to update.
Inside updateQueries, here is what my latest attempt looks like:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    return {
      assets: []
        .concat(mutationResult.data.upsertAsset)
        .concat(previousQueryResult.assets)
    }
  }
}
But regardless of what I return, the data store does not refresh.
Have you followed the example here? I would write the updateQueries in the mutate call like this:
import update from 'immutability-helper';

updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    const newAsset = mutationResult.data.upsertAsset;
    return update(previousQueryResult, {
      assets: {
        $unshift: [newAsset],
      },
    });
  },
}
Or with Object.assign instead of update from immutability-helper:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    const newAsset = mutationResult.data.upsertAsset;
    return Object.assign({}, previousQueryResult, {
      assets: [...previousQueryResult.assets, newAsset]
    });
  },
}
As you state in your update, you need to use updateQueries in order to update the queries associated with this mutation. Although your question does not state what kind of query is to be updated with the result of the mutation, I assume you have something like this:
query myMadeUpQuery {
  note {
    id
    body
  }
}
which should return the list of notes currently within your system with the id and body of each of the notes. With updateQueries, your callback receives the result of the query (i.e. information about a newly inserted note) and the previous result of this query (i.e. a list of notes) and your callback has to return the new result that should be assigned to the query above.
See here for an analogous example. Essentially, without the immutability-helper that the given example uses, you could write your updateQueries callback as follows:
updateQueries: {
  myMadeUpQuery: (previousQueryResult, { mutationResult }) => {
    return {
      note: [...previousQueryResult.note, mutationResult.data.upsertNote],
    };
  }
}
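For completeness, a sketch of how this hooks into the mutate call (names borrowed from the question; on create, id is simply omitted from the variables):
graphqlClient.mutate({
  mutation: UPSERT_NOTE_MUTATION,
  variables: { body: 'A brand new note' }, // no id, so the server creates
  updateQueries: {
    myMadeUpQuery: (previousQueryResult, { mutationResult }) => ({
      note: [...previousQueryResult.note, mutationResult.data.upsertNote],
    }),
  },
});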

How do I handle deletes in react-apollo

I have a mutation like
mutation deleteRecord($id: ID) {
  deleteRecord(id: $id) {
    id
  }
}
and in another location I have a list of elements.
Is there something better I could return from the server, and how should I update the list?
More generally, what is best practice for handling deletes in apollo/graphql?
I am not sure whether it is good practice, but here is how I handle the deletion of an item in react-apollo with updateQueries:
import { graphql, compose } from 'react-apollo';
import gql from 'graphql-tag';
import update from 'react-addons-update';
import _ from 'underscore';

const SceneCollectionsQuery = gql`
  query SceneCollections {
    myScenes: selectedScenes(excludeOwner: false, first: 24) {
      edges {
        node {
          ...SceneCollectionScene
        }
      }
    }
  }
`;

const DeleteSceneMutation = gql`
  mutation DeleteScene($sceneId: String!) {
    deleteScene(sceneId: $sceneId) {
      ok
      scene {
        id
        active
      }
    }
  }
`;

const SceneModifierWithStateAndData = compose(
  ...,
  graphql(DeleteSceneMutation, {
    props: ({ mutate }) => ({
      deleteScene: (sceneId) => mutate({
        variables: { sceneId },
        updateQueries: {
          SceneCollections: (prev, { mutationResult }) => {
            const myScenesList = prev.myScenes.edges.map((item) => item.node);
            const deleteIndex = _.findIndex(myScenesList, (item) => item.id === sceneId);
            if (deleteIndex < 0) {
              return prev;
            }
            return update(prev, {
              myScenes: {
                edges: {
                  $splice: [[deleteIndex, 1]]
                }
              }
            });
          }
        }
      })
    })
  })
)(SceneModifierWithState);
Here is a similar solution that works without underscore.js. It is tested with react-apollo version 2.1.1 and creates a component for a delete button:
import React from "react";
import gql from "graphql-tag";
import { Mutation } from "react-apollo";

const GET_TODOS = gql`
  {
    allTodos {
      id
      name
    }
  }
`;

const DELETE_TODO = gql`
  mutation deleteTodo($id: ID!) {
    deleteTodo(id: $id) {
      id
    }
  }
`;

const DeleteTodo = ({ id }) => {
  return (
    <Mutation
      mutation={DELETE_TODO}
      update={(cache, { data: { deleteTodo } }) => {
        // Read the cached list, filter out the deleted todo, and write it back.
        const { allTodos } = cache.readQuery({ query: GET_TODOS });
        cache.writeQuery({
          query: GET_TODOS,
          data: { allTodos: allTodos.filter(e => e.id !== id) }
        });
      }}
    >
      {(deleteTodo, { data }) => (
        <button
          onClick={e => {
            deleteTodo({ variables: { id } });
          }}
        >
          Delete
        </button>
      )}
    </Mutation>
  );
};

export default DeleteTodo;
All those answers assume query-oriented cache management.
What if I remove a user with id 1 and this user is referenced in 20 queries across the entire app? Reading the answers above, I'd have to write code to update the cache for all of them. This would be terrible for the long-term maintainability of the codebase and would make any refactoring a nightmare.
The best solution in my opinion would be something like apolloClient.removeItem({ __typename: "User", id: "1" }) that would:
replace any direct reference to this object in the cache with null
filter out this item from any [User] list in any query
But it doesn't exist (yet). It might be a great idea, or it could be even worse (e.g. it might break pagination).
There is an interesting discussion about it: https://github.com/apollographql/apollo-client/issues/899
I would be careful with those manual query updates. It looks appetizing at first, but it won't be once your app grows. At least create a solid abstraction layer on top of it, e.g.:
next to every query you define (e.g. in the same file), define a function that cleans it properly, e.g.
const MY_QUERY = gql``;

// It's a local 'cleaner' - relatively easy to maintain, as you can require
// proper cleaner updates during code review when the query changes.
export function removeUserFromMyQuery(apolloClient, userId) {
  // clean here
}
and then collect all those updates and call them all in the final update:
function handleUserDeleted(userId, client) {
  removeUserFromMyQuery(client, userId)
  removeUserFromSearchQuery(client, userId)
  removeIdFrom20MoreQueries(client, userId)
}
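For illustration, a sketch of what one such cleaner could do internally, assuming MY_QUERY returns a { users: [...] } list (readQuery and writeQuery are the standard Apollo cache APIs; adjust the shape to your query):
export function removeUserFromMyQuery(apolloClient, userId) {
  const data = apolloClient.readQuery({ query: MY_QUERY });
  apolloClient.writeQuery({
    query: MY_QUERY,
    data: { ...data, users: data.users.filter((user) => user.id !== userId) },
  });
}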
For Apollo v3 this works for me:
const [deleteExpressHelp] = useDeleteExpressHelpMutation({
  update: (cache, { data }) => {
    cache.evict({
      id: cache.identify({
        __typename: 'express_help',
        id: data?.delete_express_help_by_pk?.id,
      }),
    });
  },
});
From the new docs:
Filtering dangling references out of a cached array field (like the Deity.offspring example above) is so common that Apollo Client performs this filtering automatically for array fields that don't define a read function.
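A small follow-up worth knowing: cache.evict can leave unreachable objects behind, and Apollo 3's InMemoryCache provides cache.gc() to collect them, so the two are often called together (same identify call as above; here id stands in for the deleted row's id):
cache.evict({ id: cache.identify({ __typename: 'express_help', id }) });
cache.gc(); // remove any objects no longer reachable from the cache root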
Personally, I return an int which represents the number of items deleted. Then I use the updateQueries to remove the document(s) from the cache.
I have faced the same issue when choosing the appropriate return type for such mutations, where the REST API behind the mutation could return HTTP 204, 404 or 500.
Defining an arbitrary type and then returning null (types are nullable by default) does not seem right, because you don't know what happened, i.e. whether it was successful or not.
Returning a boolean solves that issue: you know whether the mutation worked or not, but you lack information in case it didn't, like a better error message to show on the FE; for example, on a 404 we could return "Not found".
Returning a custom type feels a bit forced, because it is not actually a type of your schema or business logic; it just serves to patch a "communication issue" between REST and GraphQL.
I ended up returning a string. I can return the resource ID/UUID, or simply "ok", in case of success, and an error message in case of error.
Not sure if this is good practice or GraphQL-idiomatic.
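A minimal sketch of that last option in schema form (hypothetical field name, shown as a graphql-tag typeDefs literal just to illustrate the shape):
const typeDefs = gql`
  type Mutation {
    # Returns the deleted resource's ID/UUID (or "ok") on success,
    # or an error message such as "Not found" on failure.
    deleteRecord(id: ID!): String!
  }
`;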

Meteor - Publish just the count for a collection

Is it possible to publish just the count for a collection to the user? I want to display the total count on the homepage, but not send all the data to the user. This is what I tried, but it does not work:
Meteor.publish('task-count', function () {
  return Tasks.find().count();
});

this.route('home', {
  path: '/',
  waitOn: function () {
    return Meteor.subscribe('task-count');
  }
});
When I try this I get an endless loading animation.
Meteor.publish functions should return cursors, but here you're returning a Number: the total count of documents in your Tasks collection.
Counting documents in Meteor is a surprisingly more difficult task than it appears if you want to do it properly, with a solution that is both elegant and effective.
The package ros:publish-counts (a fork of tmeasday:publish-counts) provides accurate counts for small collections (100-1000 documents) or "nearly accurate" counts for larger collections (tens of thousands) using the fastCount option.
You can use it this way:
// server-side publish (small collection)
Meteor.publish("tasks-count", function () {
  Counts.publish(this, "tasks-count", Tasks.find());
});

// server-side publish (large collection)
Meteor.publish("tasks-count", function () {
  Counts.publish(this, "tasks-count", Tasks.find(), { fastCount: true });
});

// client-side use
Template.myTemplate.helpers({
  tasksCount: function () {
    return Counts.get("tasks-count");
  }
});
You'll get a client-side reactive count as well as a reasonably performant server-side implementation.
This problem is discussed in a (paid) Bulletproof Meteor lesson, which is recommended reading: https://bulletproofmeteor.com/
I would use a Meteor.call
Client:
var count; // global client variable

Meteor.startup(function () {
  Meteor.call("count", function (error, result) {
    count = result;
  });
});
Then return count from a helper.
Server:
Meteor.methods({
  count: function () {
    return Tasks.find().count();
  }
});
Note: this solution is not reactive. However, if reactivity is desired, it can be incorporated.
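For instance, a minimal sketch of one way to incorporate it, assuming a client-side ReactiveVar (from the reactive-var package) and simple polling; the 5-second interval is an arbitrary choice:
// Client: keep the count in a ReactiveVar and poll the method.
var count = new ReactiveVar(0);

Meteor.setInterval(function () {
  Meteor.call("count", function (error, result) {
    if (!error) count.set(result);
  });
}, 5000);

Template.myTemplate.helpers({
  tasksCount: function () {
    return count.get(); // reactive: helpers re-run when the var changes
  }
});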
This is an old question, but I hope my answer might help others who need this info as I did.
I sometimes need some miscellaneous but reactive data to display indicators in the UI and documents count is a good example.
Create a reusable (exported) client-side-only collection that won't be imported on the server (to avoid creating an unnecessary database collection). Note the name passed as an argument ("misc" here).
import { Mongo } from "meteor/mongo";
const Misc = new Mongo.Collection("misc");
export default Misc;
Create a publication on the server that accepts docId and the name of the key where the count will be saved (with a default value). The collection name to publish to is the one used to create the client-only collection ("misc"). The docId value does not matter much; it just needs to be unique among all Misc docs to avoid conflicts. See the Meteor docs for details about the publication behavior.
import { Meteor } from "meteor/meteor";
import { check } from "meteor/check";
import { Shifts } from "../../collections";

const COLL_NAME = "misc";

/* Publish the number of shifts that need revision in a 'misc' collection
 * to a document specified as `docId` and optionally to a specified `key`. */
Meteor.publish("shiftsToReviseCount", function ({ docId, key = "count" }) {
  check(docId, String);
  check(key, String);
  let initialized = false;
  let count = 0;

  const observer = Shifts.find(
    { needsRevision: true },
    { fields: { _id: 1 } }
  ).observeChanges({
    added: () => {
      count += 1;
      if (initialized) {
        this.changed(COLL_NAME, docId, { [key]: count });
      }
    },
    removed: () => {
      count -= 1;
      this.changed(COLL_NAME, docId, { [key]: count });
    },
  });

  // Publish the initial count once the observer has seen the existing docs.
  if (!initialized) {
    this.added(COLL_NAME, docId, { [key]: count });
    initialized = true;
  }
  this.ready();

  this.onStop(() => {
    observer.stop();
  });
});
On the client, import the collection, decide on a docId string (it can be saved in a constant), subscribe to the publication, and fetch the appropriate document. Voilà!
import { Meteor } from "meteor/meteor";
import { withTracker } from "meteor/react-meteor-data";
import Misc from "/collections/client/Misc";

const REVISION_COUNT_ID = "REVISION_COUNT_ID";

export default withTracker(() => {
  Meteor.subscribe("shiftsToReviseCount", {
    docId: REVISION_COUNT_ID,
  }).ready();
  const { count } = Misc.findOne(REVISION_COUNT_ID) || {};
  return { count };
});
