Meteor - Publish just the count for a collection - javascript

Is it possible to publish just the count for a collection to the user? I want to display the total count on the homepage, but not pass all the data to the user. This is what I tried but it does not work:
Meteor.publish('task-count', function () {
  return Tasks.find().count();
});

this.route('home', {
  path: '/',
  waitOn: function () {
    return Meteor.subscribe('task-count');
  }
});
When I try this I get an endless loading animation.

Meteor.publish functions should return cursors, but here you're returning a Number, the total count of documents in your Tasks collection.
Counting documents in Meteor is surprisingly more difficult than it appears if you want to do it properly, with a solution that is both elegant and efficient.
The package ros:publish-counts (a fork of tmeasday:publish-counts) provides accurate counts for small collections (100-1000 documents) or "nearly accurate" counts for larger collections (tens of thousands of documents) using the fastCount option.
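First add the package:
meteor add ros:publish-counts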
You can use it this way:
// server-side publish (small collection)
Meteor.publish("tasks-count", function () {
  Counts.publish(this, "tasks-count", Tasks.find());
});

// server-side publish (large collection)
Meteor.publish("tasks-count", function () {
  Counts.publish(this, "tasks-count", Tasks.find(), { fastCount: true });
});

// client-side use
Template.myTemplate.helpers({
  tasksCount: function () {
    return Counts.get("tasks-count");
  }
});
You'll get a client-side reactive count as well as a reasonably performant server-side implementation.
This problem is discussed in a (paid) BulletProof Meteor lesson, which is recommended reading: https://bulletproofmeteor.com/

I would use a Meteor.call
Client:
var count; // global client variable

Meteor.startup(function () {
  Meteor.call("count", function (error, result) {
    count = result;
  });
});
Then return count in some helper.
Server:
Meteor.methods({
  count: function () {
    return Tasks.find().count();
  }
});
*Note: this solution is not reactive. However, if reactivity is desired, it can be incorporated (see the sketch below).
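For example, a minimal reactive variant using a ReactiveVar and polling (the interval, the variable names, and the reactive-var package are assumptions, not part of the original answer):

// assumes: meteor add reactive-var
var count = new ReactiveVar(0);

// Re-fetch the count every 5 seconds; any helper calling count.get()
// reruns automatically when the value changes.
Meteor.startup(function () {
  Meteor.setInterval(function () {
    Meteor.call("count", function (error, result) {
      if (!error) count.set(result);
    });
  }, 5000);
});

Template.myTemplate.helpers({
  tasksCount: function () {
    return count.get();
  }
});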

This is an old question, but I hope my answer might help others who need this info as I did.
I sometimes need some miscellaneous but reactive data to display indicators in the UI, and a document count is a good example.
Create a reusable (exported) client-side only collection that won't be imported on the server (to avoid creating an unnecessary database collection). Note the name passed as an argument ("misc" here).
import { Mongo } from "meteor/mongo";
const Misc = new Mongo.Collection("misc");
export default Misc;
Create a publication on the server that accepts a docId and the name of the key under which the count will be saved (with a default value). The collection name to publish to is the one used to create the client-only collection ("misc" here). The docId value does not matter much; it just needs to be unique among all Misc docs to avoid conflicts. See the Meteor docs for details about the publication behavior.
import { Meteor } from "meteor/meteor";
import { check } from "meteor/check";
import { Shifts } from "../../collections";

const COLL_NAME = "misc";

/* Publish the number of shifts that need revision in a 'misc' collection
 * to a document specified as `docId` and optionally to a specified `key`. */
Meteor.publish("shiftsToReviseCount", function ({ docId, key = "count" }) {
  check(docId, String);
  check(key, String);
  let initialized = false;
  let count = 0;
  const observer = Shifts.find(
    { needsRevision: true },
    { fields: { _id: 1 } }
  ).observeChanges({
    added: () => {
      count += 1;
      if (initialized) {
        this.changed(COLL_NAME, docId, { [key]: count });
      }
    },
    removed: () => {
      count -= 1;
      this.changed(COLL_NAME, docId, { [key]: count });
    },
  });
  if (!initialized) {
    this.added(COLL_NAME, docId, { [key]: count });
    initialized = true;
  }
  this.ready();
  this.onStop(() => {
    observer.stop();
  });
});
On the client, import the collection, decide on a docId string (it can be saved in a constant), subscribe to the publication, and fetch the appropriate document. Voilà!
import { Meteor } from "meteor/meteor";
import { withTracker } from "meteor/react-meteor-data";
import Misc from "/collections/client/Misc";

const REVISION_COUNT_ID = "REVISION_COUNT_ID";

export default withTracker(() => {
  Meteor.subscribe("shiftsToReviseCount", {
    docId: REVISION_COUNT_ID,
  }).ready();
  const { count } = Misc.findOne(REVISION_COUNT_ID) || {};
  return { count };
});
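The exported withTracker HOC can then wrap a presentational component that receives count as a prop. A hypothetical example (the component name, markup, and import path are illustrative):

import React from "react";
// The withTracker container exported above (hypothetical path):
import withRevisionCount from "./withRevisionCount";

// `count` is undefined until the subscription delivers the Misc document.
const RevisionBadge = ({ count }) => <span>{count ?? 0}</span>;

export default withRevisionCount(RevisionBadge);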

Related

How can I write a function that extends the firestore-mock npm module, adding the Firebase startAfter function for query cursor pagination?

I am adding query pagination to my GCF query function and am writing unit tests for the pagination query. I am writing a mock using Node and the firestore-mock npm library. Note: I can't use Sinon, and no third-party modules or libraries are allowed aside from firestore-mock.
I am extending the firestore-mock library to add query pagination to our Firestore query. Right now, there are 3 methods extending firestore-mock that I need to write, two of which are completed:
orderBy
limit
startAfter
So far I have successfully written mock methods for limit and orderBy, but am stuck on how to implement startAfter.
Ideally this startAfter function would have the same functionality as the built-in Firebase startAfter function.
The request I am using for the query pagination has the following structure:
const paginatedQuery = await f.getQueryPagination(
  {
    customerId: ['62005'],
    startDate: '2021-11-04T00:00:00.000Z',
    endDate: '2021-11-05T00:00:00.000Z',
    next: '1657667147.865000000',
  },
  'wal-com',
  ['62005'],
  logger
);
I have looked at the docs for the startAfter cursor query, and in my request I want the startAfter method to set the cursor to the document that starts with the timestamp that next was converted from.
So for the above example, next: '1657667147.865000000' would be equivalent to the next document having an eventDateTime value of:
Timestamp {
  _seconds: 1657667147,
  _nanoseconds: 865000000
}
EDIT: Here is the code I have right now for startAfter, but it doesn't work as intended:
const FirebaseMock = require('firebase-mock');
const QueryMock = require('firestore-mock/mock_constructors/QueryMock');

QueryMock.prototype._startAfter = function (doc) {
  const startAfter = doc;
  const startAfterId = startAfter.id;
  const startAfterData = startAfter.data();
  const startFinder = function (doc) {
    const docId = doc.id;
    const docData = doc.data();
    if (docId === startAfterId) {
      return true;
    }
    if (docData === startAfterData) {
      return true;
    }
    return false;
  };
  const buildStartFinder = function () {
    return startFinder;
  };
  return buildStartFinder;
};

QueryMock.prototype.startAfter = function (value) {
  // if value is passed as a parameter,
  // set the cursor for the query to the document with the timestamp equal to value
  if (value) {
    if (value.constructor.name === 'DocumentSnapshot') {
      this._docs = this.firestore._startAfter(value, this._docs, this.id);
    } else if (value.constructor.name === 'Timestamp') {
      this._docs = this.firestore._startAfter(
        new TimestampMock(value.seconds, value.nanoseconds),
        this._docs,
        this.id
      );
    } else {
      throw new Error(
        'startAfter() only accepts a DocumentSnapshot or a Timestamp'
      );
    }
  }
};

module.exports = {
  FirestoreMock
};
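For reference, the intended startAfter semantics over a plain, already-sorted array of docs could be sketched as follows (getOrderByValue is a hypothetical accessor for the orderBy field's value, and returning an empty page when the cursor doc is missing is a simplification):

// Returns the docs strictly after the one matching the cursor timestamp.
function startAfterByTimestamp(docs, cursor, getOrderByValue) {
  const index = docs.findIndex(function (doc) {
    const ts = getOrderByValue(doc);
    return (
      ts._seconds === cursor.seconds &&
      ts._nanoseconds === cursor.nanoseconds
    );
  });
  // Cursor doc not found: treat the page as exhausted (a simplification).
  return index >= 0 ? docs.slice(index + 1) : [];
}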

How to remove unused @include directives from a GraphQL query?

I am currently using the Apollo GraphQl client - which sends POST requests by default - and would like to switch to using the GET request version to enable caching the response.
However, quite a few requests are failing because the GET query that is sent ends up larger than 8 KB.
I have constructed the queries so that there is 1 query per endpoint, and have applied the concept of "withables" to opt into extra fields on demand, e.g.:
# example query
query (
  $id: Int!
  $withFieldA: Boolean = false
  $withFieldB: Boolean = false
) {
  the_endpoint(id: $id, withFieldA: $withFieldA, withFieldB: $withFieldB) {
    id
    title
    field_a @include(if: $withFieldA) {
      id
      # more fields
    }
    field_b @include(if: $withFieldB) {
      id
      # more fields
    }
    # etc.
  }
}
// example usage
import query from 'queries/the_endpoint.gql'
apolloClient.query({ query, variables: { id: 1, withFieldA: true } })
So the problem is that while what's being requested is generally pretty small, the size of the query being sent across the network is massive because all the withables that haven't been opted into are also being included.
What I would like to do is make the query as lean as possible and only include the fields that have been opted into before sending the query across the network.
Is there a way to scrub the query before sending it?
Thanks
Solved with the help of the graphql visit function.
import { visit, ASTNode } from "graphql";

function clearOmittedWithables(query: ASTNode, withables: string[] = []) {
  const shouldOmit = (variable: string, withables: string[]) => {
    return variable.startsWith("with") && !withables.includes(variable);
  };
  return visit(query, {
    VariableDefinition: {
      leave(node) {
        if (shouldOmit(node.variable.name.value, withables)) {
          return null;
        }
      },
    },
    Field: {
      leave(node) {
        for (const directive of node?.directives || []) {
          if (directive.name.value === "include") {
            for (const argument of directive?.arguments || []) {
              if (shouldOmit(argument.value.name.value, withables)) {
                return null;
              }
            }
          }
        }
      },
    },
  });
}
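A hypothetical usage, where the withables list is derived from the variables actually set (this derivation is an assumption, not part of the original answer):

const variables = { id: 1, withFieldA: true };
// Keep only the "with..." variables that are actually enabled.
const withables = Object.keys(variables).filter(
  (name) => name.startsWith("with") && variables[name]
);
const leanQuery = clearOmittedWithables(query, withables);
apolloClient.query({ query: leanQuery, variables });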

Why do I see stale data even after invalidating my queries?

I have created a function which adds a specific item to my diary. 9/10 times everything works, which means that there is nothing wrong with the code?
However, rarely, I add the item to my diary but don't see the updated values, even though I called the queryClient.invalidateQueries() method. The value is updated on my server, because when I manually refresh I see the updated diary again.
Does this mean that by the time I call the invalidateQueries method, the update has not reached my server, and that is why I am seeing stale data? But what would I do in that case?
Here is the function:
const newAddItemFunction = () => {
  const day = newDiary?.[currentDay];
  if (day && selectedMealNumber && selectedItem) {
    setSavingItem(true);
    NewAddItemToDiary({
      day,
      selectedMealNumber,
      selectedItem,
    });
    queryClient.invalidateQueries(["currentDiary"]).then(() => {
      toast.success(`${selectedItem.product_name} has been added`);
    });
    router.push("/diary");
  }
};
Here is my custom hook (useFirestoreQuery is just a custom wrapper around the useQuery hook for Firebase):
export const useGetCollectionDiary = () => {
  const user = useAuthUser(["user"], auth);
  const ref = collection(
    firestore,
    "currentDiary",
    user.data?.uid ?? "_STUB_",
    "days"
  );
  return useFirestoreQuery(
    ["currentDiary"],
    ref,
    {
      subscribe: false,
    },
    {
      select: (data) => {
        let fullDaysArray = [] as Day[];
        data.docs.map((docSnapshot) => {
          const { id } = docSnapshot;
          let data = docSnapshot.data() as Day;
          data.documentId = id;
          fullDaysArray.push(data);
        });
        fullDaysArray.sort((a, b) => a.order - b.order);
        return fullDaysArray;
      },
      enabled: !!user.data?.uid,
    }
  );
};
The NewAddItemToDiary function is just a Firebase call to set a document:
// ...json calculations
setDoc(
  doc(
    firestore,
    "currentDiary",
    auth.currentUser.uid,
    "days",
    day.documentId
  ),
  newDiaryWithAddedItem
);
9/10 times everything works, which means that there is nothing wrong with the code?
It indicates to me that there is something wrong with the code that only manifests in edge cases like race conditions.
You haven't shared the code of what NewAddItemToDiary is doing, but I assume it's asynchronous code that fires off a mutation. If that is the case, it looks like you fire off the mutation and then invalidate the query without waiting for the mutation to finish:
NewAddItemToDiary({
  day,
  selectedMealNumber,
  selectedItem,
});
queryClient.invalidateQueries(["currentDiary"]).then(() => {
  toast.success(`${selectedItem.product_name} has been added`);
});
Mutations in react-query have callbacks like onSuccess or onSettled where you should be doing the invalidation, or, if you use mutateAsync, you can await the mutation and then invalidate. This is how all the examples in the docs are doing it:
// When this mutation succeeds, invalidate any queries with the `todos` or `reminders` query key
const mutation = useMutation(addTodo, {
  onSuccess: () => {
    queryClient.invalidateQueries('todos')
    queryClient.invalidateQueries('reminders')
  },
})
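Applied to the code from the question, a minimal sketch of the fix (assuming NewAddItemToDiary returns the promise from its setDoc call):

const newAddItemFunction = async () => {
  const day = newDiary?.[currentDay];
  if (day && selectedMealNumber && selectedItem) {
    setSavingItem(true);
    // Wait for the write to reach the server before invalidating.
    await NewAddItemToDiary({ day, selectedMealNumber, selectedItem });
    await queryClient.invalidateQueries(["currentDiary"]);
    toast.success(`${selectedItem.product_name} has been added`);
    router.push("/diary");
  }
};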

Apollo Client: Upsert mutation only modifies cache on update but not on create

I have an upsert mutation that gets triggered on either create or update. On update, Apollo integrates the result into the cache, but on create it does not.
Here is the mutation:
export const UPSERT_NOTE_MUTATION = gql`
  mutation upsertNote($id: ID, $body: String) {
    upsertNote(id: $id, body: $body) {
      id
      body
    }
  }
`
My client:
const graphqlClient = new ApolloClient({
  networkInterface,
  reduxRootSelector: 'apiStore',
  dataIdFromObject: ({ id }) => id
});
The response from the server is identical in both cases: both id and body are returned, but Apollo isn't adding new ids into the data cache object automatically.
Is it possible to have Apollo automatically add new objects to data without triggering a subsequent fetch?
UPDATE
According to the documentation, the updateQueries function is supposed to allow me to push a new element to my list of assets without having to trigger my original fetch query again.
The function gets executed, but whatever is returned by the function is completely ignored and the cache is not modified.
Even if I do something like this:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    return {};
  }
}
Nothing changes.
UPDATE #2
Still can't get my assets list to update.
Inside updateQueries, here is what my previousQueryResult looks like:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    return {
      assets: []
        .concat(mutationResult.data.upsertAsset)
        .concat(previousQueryResult.assets)
    };
  }
}
But regardless of what I return, the data store does not refresh.
Have you followed the example here?
I would write the updateQueries in the mutate options like this (update here comes from immutability-helper):
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    const newAsset = mutationResult.data.upsertAsset;
    return update(previousQueryResult, {
      assets: {
        $unshift: [newAsset],
      },
    });
  },
}
Or with Object.assign instead of update from immutability-helper:
updateQueries: {
  getUserAssets: (previousQueryResult, { mutationResult }) => {
    const newAsset = mutationResult.data.upsertAsset;
    return Object.assign({}, previousQueryResult, {
      assets: [...previousQueryResult.assets, newAsset],
    });
  },
}
As you state in your update, you need to use updateQueries in order to update the queries associated with this mutation. Although your question does not state what kind of query is to be updated with the result of the mutation, I assume you have something like this:
query myMadeUpQuery {
  note {
    id
    body
  }
}
which should return the list of notes currently within your system, with the id and body of each of the notes. With updateQueries, your callback receives the result of the mutation (i.e. information about a newly inserted note) and the previous result of this query (i.e. a list of notes), and your callback has to return the new result that should be assigned to the query above.
See here for an analogous example. Essentially, without the immutability-helper that the given example uses, you could write your updateQueries callback as follows:
updateQueries: {
  myMadeUpQuery: (previousQueryResult, { mutationResult }) => {
    return {
      note: [...previousQueryResult.note, mutationResult.data.upsertNote],
    };
  },
}
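For completeness, a sketch of how this callback might be wired into the mutate call from the question (the variable names are illustrative):

graphqlClient.mutate({
  mutation: UPSERT_NOTE_MUTATION,
  variables: { id, body },
  updateQueries: {
    myMadeUpQuery: (previousQueryResult, { mutationResult }) => ({
      note: [...previousQueryResult.note, mutationResult.data.upsertNote],
    }),
  },
});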

How do I handle deletes in react-apollo

I have a mutation like
mutation deleteRecord($id: ID) {
  deleteRecord(id: $id) {
    id
  }
}
and in another location I have a list of elements.
Is there something better I could return from the server, and how should I update the list?
More generally, what is best practice for handling deletes in apollo/graphql?
I am not sure it is good practice, but here is how I handle the deletion of an item in react-apollo with updateQueries:
import { graphql, compose } from 'react-apollo';
import gql from 'graphql-tag';
import update from 'react-addons-update';
import _ from 'underscore';

const SceneCollectionsQuery = gql`
  query SceneCollections {
    myScenes: selectedScenes (excludeOwner: false, first: 24) {
      edges {
        node {
          ...SceneCollectionScene
        }
      }
    }
  }
`;

const DeleteSceneMutation = gql`
  mutation DeleteScene($sceneId: String!) {
    deleteScene(sceneId: $sceneId) {
      ok
      scene {
        id
        active
      }
    }
  }
`;

const SceneModifierWithStateAndData = compose(
  ...,
  graphql(DeleteSceneMutation, {
    props: ({ mutate }) => ({
      deleteScene: (sceneId) => mutate({
        variables: { sceneId },
        updateQueries: {
          SceneCollections: (prev, { mutationResult }) => {
            const myScenesList = prev.myScenes.edges.map((item) => item.node);
            const deleteIndex = _.findIndex(myScenesList, (item) => item.id === sceneId);
            if (deleteIndex < 0) {
              return prev;
            }
            return update(prev, {
              myScenes: {
                edges: {
                  $splice: [[deleteIndex, 1]]
                }
              }
            });
          }
        }
      })
    })
  })
)(SceneModifierWithState);
Here is a similar solution that works without underscore.js. It is tested with react-apollo version 2.1.1 and creates a component for a delete button:
import React from "react";
import gql from "graphql-tag";
import { Mutation } from "react-apollo";

const GET_TODOS = gql`
  {
    allTodos {
      id
      name
    }
  }
`;

const DELETE_TODO = gql`
  mutation deleteTodo($id: ID!) {
    deleteTodo(id: $id) {
      id
    }
  }
`;

const DeleteTodo = ({ id }) => {
  return (
    <Mutation
      mutation={DELETE_TODO}
      update={(cache, { data: { deleteTodo } }) => {
        // Read the cached list, then write it back without the deleted todo.
        const { allTodos } = cache.readQuery({ query: GET_TODOS });
        cache.writeQuery({
          query: GET_TODOS,
          data: { allTodos: allTodos.filter(e => e.id !== id) }
        });
      }}
    >
      {(deleteTodo, { data }) => (
        <button
          onClick={e => {
            deleteTodo({
              variables: { id }
            });
          }}
        >
          Delete
        </button>
      )}
    </Mutation>
  );
};

export default DeleteTodo;
All those answers assume query-oriented cache management.
What if I remove a user with id 1 and this user is referenced in 20 queries across the entire app? Reading the answers above, I'd have to assume I will have to write code to update the cache for all of them. This would be terrible for the long-term maintainability of the codebase and would make any refactoring a nightmare.
The best solution in my opinion would be something like apolloClient.removeItem({ __typename: "User", id: "1" }) that would:
replace any direct reference to this object in the cache with null
filter out this item from any [User] list in any query
But it doesn't exist (yet).
It might be a great idea, or it could be even worse (e.g. it might break pagination).
There is interesting discussion about it: https://github.com/apollographql/apollo-client/issues/899
I would be careful with those manual query updates. It looks appetizing at first, but it won't be once your app grows. At least create a solid abstraction layer on top of it, e.g.:
next to every query you define (e.g. in the same file), define a function that cleans it properly, e.g.
const MY_QUERY = gql``;

// It's a local 'cleaner' - relatively easy to maintain, as you can require
// proper cleaner updates during code review when the query changes.
export function removeUserFromMyQuery(apolloClient, userId) {
  // clean here
}
and then collect all those updates and call them all in a final update:
function handleUserDeleted(userId, client) {
  removeUserFromMyQuery(client, userId);
  removeUserFromSearchQuery(client, userId);
  removeIdFrom20MoreQueries(client, userId);
}
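A sketch of what one such cleaner could look like, assuming MY_QUERY returns a users array (the field name is an assumption for illustration):

export function removeUserFromMyQuery(apolloClient, userId) {
  // readQuery throws if the query isn't in the cache yet, hence the try/catch.
  try {
    const data = apolloClient.readQuery({ query: MY_QUERY });
    apolloClient.writeQuery({
      query: MY_QUERY,
      data: { users: data.users.filter((user) => user.id !== userId) },
    });
  } catch (e) {
    // Query not cached yet; nothing to clean.
  }
}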
For Apollo v3 this works for me:
const [deleteExpressHelp] = useDeleteExpressHelpMutation({
  update: (cache, { data }) => {
    cache.evict({
      id: cache.identify({
        __typename: 'express_help',
        id: data?.delete_express_help_by_pk?.id,
      }),
    });
  },
});
From the new docs:
Filtering dangling references out of a cached array field (like the Deity.offspring example above) is so common that Apollo Client performs this filtering automatically for array fields that don't define a read function.
Personally, I return an int which represents the number of items deleted. Then I use updateQueries to remove the document(s) from the cache.
I have faced the same issue choosing the appropriate return type for such mutations when the REST API associated with the mutation could return HTTP 204, 404 or 500.
Defining an arbitrary type and then returning null (types are nullable by default) does not seem right, because you don't know what happened, i.e. whether the request was successful or not.
Returning a boolean solves that issue: you know if the mutation worked or not, but you lack some information in case it didn't work, like a better error message that you could show on the front end. For example, if we got a 404, we could return "Not found".
Returning a custom type feels a bit forced, because it is not actually a type of your schema or business logic; it just serves to fix a "communication issue" between REST and GraphQL.
I ended up returning a string. I can return the resource ID/UUID, or simply "ok" in case of success, and an error message in case of error.
Not sure if this is a good practice or GraphQL-idiomatic.
