I'm working on a web application that will visualize data from my Firebase database. But first, I want to be able to "count" the total number of users that share a given value so that I can then use that count in my graphs.
For reference, my database looks like this:
Because I expect separate totals for the required keys, I'm guessing that I'll need separate counters for each one. I've started writing a cloud function to keep track of when a new user is created:
import * as functions from 'firebase-functions'

export const onMessageCreate = functions.database
  .ref('/students/{studentID}')
  .onCreate((snapshot, context) => {
    const userData = snapshot.val()
    const afterGrad = userData.afterGrad
    const gender = userData.gender
    const gradDate = userData.gradDate
    const program = userData.program
    const race = userData.race
    const timeToComplete = userData.timeToComplete
  })
But now I'm extremely lost as to how I should go about creating the counters. Would something like this suffice, with an individual counter for each constant?
import * as functions from 'firebase-functions'

var counterAfterGrad;

export const onMessageCreate = functions.database
  .ref('/students/{studentID}')
  .onCreate((snapshot, context) => {
    const userData = snapshot.val()
    const afterGrad = userData.afterGrad
    counterAfterGrad++
  })
Or should I be thinking about using a transaction in this case? I'm really not sure of the best way, and would really appreciate some help.
Yes, you should use a transaction. See the documentation here: https://firebase.google.com/docs/database/web/read-and-write#save_data_as_transactions and https://firebase.google.com/docs/reference/js/firebase.database.Reference#transaction
For counting the overall number of users you could do as follows:
import * as functions from 'firebase-functions'
import * as admin from 'firebase-admin'

admin.initializeApp()

export const onMessageCreate = functions.database
  .ref('/students/{studentID}')
  .onCreate((snapshot, context) => {
    const userData = snapshot.val()
    const afterGrad = userData.afterGrad
    const allUsersCounterRef = admin
      .database()
      .ref('allUsersCounter');
    return allUsersCounterRef
      .transaction(counter_value => {
        // Initialise the counter to 0 on the first run, then increment it
        return (counter_value || 0) + 1;
      })
  })
Note that you may have to take into consideration the deletion of a user.
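For instance, a matching onDelete trigger could decrement the same counter. A minimal sketch, assuming the allUsersCounter node used above (the function name onMessageDelete is just an example):

export const onMessageDelete = functions.database
  .ref('/students/{studentID}')
  .onDelete((snapshot, context) => {
    const allUsersCounterRef = admin.database().ref('allUsersCounter');
    return allUsersCounterRef.transaction(counter_value => {
      // Decrement, but never let the counter go below zero
      return Math.max((counter_value || 0) - 1, 0);
    });
  });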
You could very well have several counters, for example by "gender" (male/female) and by "program". You would then use an object in the transaction as follows:
exports.onMessageCreate = functions.database
  .ref('/students/{studentID}')
  .onCreate((snapshot, context) => {
    const userData = snapshot.val();
    const countersRef = admin.database().ref('counters');
    return countersRef.transaction(currentData => {
      // On the first run the counters node does not exist yet, so currentData is null
      if (currentData === null) {
        currentData = {};
      }
      currentData[userData.gender] = (currentData[userData.gender] || 0) + 1;
      currentData[userData.program] = (currentData[userData.program] || 0) + 1;
      return currentData;
    });
  });
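As a side note on the original goal of feeding graphs: on the client you could listen to the counters node and pass whatever it contains to your charting code. A minimal sketch, assuming the namespaced web SDK and the /counters path written by the function above (renderCharts is a hypothetical function in your app):

const countersRef = firebase.database().ref('counters');
countersRef.on('value', snapshot => {
  // e.g. { male: 12, female: 15, "Computer Science": 7, ... }
  const counters = snapshot.val() || {};
  renderCharts(counters); // hypothetical: update your graphs with the new totals
});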
I will have a product filter on an ecommerce page. I'm not sure how I can pass a dynamic number of parameters in my query, because sometimes the query will use 1 parameter and sometimes 6 or more.
Example
const fetchProducts = async (size, color, weight, deep) => {
  const parsed = await ky(
    `http://localhost:3000/api/products/get?size=${size}&color=${color}`
  ).json();
  return parsed;
};
const { data, status } = useQuery([size, color], () => fetchProducts(size, color));
Sometimes there will be 10 different parameters from the product filter, sometimes just 1...
How can I handle this dynamically? I will then need to apply the filter on the Prisma backend.
You can use the qs package to handle all the params.
Example:
import ky from "ky";
import qs from "qs";

const fetchProducts = async (params) => {
  const query = qs.stringify(params);
  const baseUrl = "http://localhost:3000/api/products/get?";
  const parsed = await ky(baseUrl + query).json();
  return parsed;
};
const params = {color:"red", size:"big"}
const query = qs.stringify(params);
//output: "color=red&size=big"
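You can then pass the whole filter object both to the fetcher and into the query key, so it does not matter whether it contains one parameter or ten. A small sketch (the exact shape of the query key is up to you):

const filters = { size, color }; // add weight, depth, ... only when they are set
const { data, status } = useQuery(['products', filters], () => fetchProducts(filters));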
The function shown below puzzles me for two reasons:
the function execution terminates before all output is given
the function execution takes more than 3 minutes, which is a very long time (so long that it is probably not caused by the "cold start" issue alone).
When searching for best practices I found a hint that background activities are slowed down after function execution terminates (https://cloud.google.com/functions/docs/bestpractices/tips#do_not_start_background_activities).
How can I create a function that terminates after all output is created and avoids background activity?
Is there any way to speed up the get() processing?
Screenshot of the Firebase Functions dashboard
Screenshot of Firestore showing the document that was created to trigger the function
Please have a look at the code:
// The Cloud Functions for Firebase SDK to create Cloud Functions.
const functions = require("firebase-functions");
// The Firebase Admin SDK to access Firestore.
const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

exports.evaluateScore = functions
  .region('europe-west1')
  .firestore
  .document('organizations/{orgId}/persons/{personId}')
  .onWrite(async (snap, context) => {
    const newDocument = snap.after.exists ? snap.after.data() : null;
    const oldDocument = snap.before.exists ? snap.before.data() : null;
    console.log(`lastName: '${newDocument.personLastName}'; id: '${snap.after.id}'`);
    // if only newDocument exists
    if (newDocument != null && oldDocument == null) {
      const arrayNameSplit = snap.after.ref.path.split('/');
      var orgId = arrayNameSplit[arrayNameSplit.length - 3];
      var listOfProfiles = newDocument.listOfProfiles;
      console.log(`listOfProfiles: `, JSON.stringify(listOfProfiles));
      for (let i = 0; i < listOfProfiles.length; i++) {
        db.collection('organizations').doc(orgId).collection('profiles').doc(listOfProfiles[i]).get()
          .then(docRef => {
            const profile = docRef.data();
            console.log(i, ' profileTitle:', JSON.stringify(profile.profileTitle))
          }).catch(e => {
            console.error('something went wrong', e)
          });
      }
    }
  });
You have asynchronous calls in your code, but are not telling the Cloud Functions runtime about that (through the return value). It is very likely that your database get() calls don't even complete at this stage.
To fix that problem, you can use await inside the loop or Promise.all:
exports.evaluateScore = functions
  .region('europe-west1')
  .firestore
  .document('organizations/{orgId}/persons/{personId}')
  .onWrite(async (snap, context) => {
    const newDocument = snap.after.exists ? snap.after.data() : null;
    const oldDocument = snap.before.exists ? snap.before.data() : null;
    console.log(`lastName: '${newDocument.personLastName}'; id: '${snap.after.id}'`);
    // if only newDocument exists
    if (newDocument != null && oldDocument == null) {
      const arrayNameSplit = snap.after.ref.path.split('/');
      var orgId = arrayNameSplit[arrayNameSplit.length - 3];
      var listOfProfiles = newDocument.listOfProfiles;
      console.log(`listOfProfiles: `, JSON.stringify(listOfProfiles));
      for (let i = 0; i < listOfProfiles.length; i++) {
        const docRef = await db.collection('organizations').doc(orgId).collection('profiles').doc(listOfProfiles[i]).get();
        const profile = docRef.data();
        console.log(i, ' profileTitle:', JSON.stringify(profile.profileTitle))
      }
    }
  });
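Alternatively, you can replace the for loop with Promise.all so that the profile documents are loaded in parallel instead of one after the other. A sketch of just the loop body, reusing the same db, orgId and listOfProfiles as above:

// Inside the same async onWrite handler as above
const profileDocs = await Promise.all(listOfProfiles.map(id =>
  db.collection('organizations').doc(orgId).collection('profiles').doc(id).get()
));
profileDocs.forEach((doc, i) => {
  console.log(i, ' profileTitle:', JSON.stringify(doc.data().profileTitle));
});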
There may be more problems with your code, so I recommend reading the documentation on sync, async and promises, and how to create a minimal, complete, verifiable example for future questions.
I want to delete a child after a certain time. I know that you need a Firebase function to achieve this. This is what I got so far:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.removeOldMessages = functions.https.onRequest((req, res) => {
  const timeNow = Date.now();
  const Ref = admin.database().ref('/Feed');
  Ref.once('value', (snapshot) => {
    snapshot.forEach((child) => {
      if (1000 * (Number(child.val()['timestamp']) + Number(child.val()['duration'])) >= timeNow) {
        child.ref.set(null);
      }
    });
  });
  return res.status(200).end();
});
I want to delete the child when its duration is over (the duration is in seconds). This is my structure:
Thanks!
You're sending a response to the caller at the end of the function, which will be executed before the data from the database is returned. And Cloud Functions will stop executing your code straight after that res.status(200).end(), so the database cleanup never happens.
To prevent this, only send a response to the caller after all data has been deleted from the database:
exports.removeOldMessages = functions.https.onRequest((req, res) => {
  const timeNow = Date.now();
  const Ref = admin.database().ref('/Feed');
  return Ref.once('value').then((snapshot) => {
    const updates = {};
    snapshot.forEach((child) => {
      // A child has expired once its timestamp + duration (in seconds) lies in the past
      if (1000 * (child.val().timestamp + child.val().duration) <= timeNow) {
        updates[child.key] = null;
      }
    });
    return Ref.update(updates);
  }).then(() => {
    return res.status(200).end();
  });
});
I highly recommend storing an additional property in your child nodes though, with the precalculated value of timestamp + duration. By having such a property, you can run a query on the nodes that have expired, instead of having to read all child nodes and then filtering in code.
For an example of this, see my answer to Delete firebase data older than 2 hours, and the Cloud Functions example that was based on that.
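As a rough sketch of that approach, assuming each child stores a hypothetical expiresAt property set to (timestamp + duration) * 1000 when it is written:

exports.removeOldMessages = functions.https.onRequest((req, res) => {
  const cutoff = Date.now();
  const feedRef = admin.database().ref('/Feed');
  // Query only the children whose (hypothetical) expiresAt lies in the past
  return feedRef.orderByChild('expiresAt').endAt(cutoff).once('value').then((snapshot) => {
    const updates = {};
    snapshot.forEach((child) => {
      updates[child.key] = null;
    });
    return feedRef.update(updates);
  }).then(() => res.status(200).end());
});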
I am trying to calculate the product rating for my products in my Realtime Database using a Firebase Cloud Function. What am I missing, since I am getting errors in the logs after deploying?
I have deployed the code, but still nothing changes when a rating is added.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

// firebase db calculations for rating average
exports.productAverage = functions.database.ref('/Products/{product_id}/rating')
  .onCreate((snapshot, context) => {
    return admin.database().ref('/Products/{product_id}/rating').once('value')
      .then((snapshot) => {
        let sum = 0;
        snapshot.forEach(child => {
          sum = sum + child.val();
        });
        let productRating = sum / snapshot.numChildren();
        return admin.database().ref('/Products/{product_id}').child('productRating').set(productRating);
      });
  });
I expect that each time a rating is added, the average is updated in a productRating node in the database.
There are a few things that immediately jump out:
You're triggering on onCreate. But since you want to recalculate the average whenever any rating is added (or removed or updated), you'll want to trigger on onWrite instead.
You're reloading the same data in your function that is already passed in. That is wasteful, so let's remove that.
With these two changes, you'd end up with something like:
exports.productAverage = functions.database.ref('/Products/{product_id}/rating')
  .onWrite((change, context) => {
    let snapshot = change.after;
    let sum = 0;
    snapshot.forEach(child => {
      sum = sum + child.val();
    });
    let productRating = sum / snapshot.numChildren();
    return snapshot.ref.parent.child('productRating').set(productRating);
  });
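One more caveat, not covered above: with onWrite the function also runs when the rating node is deleted, in which case change.after has no children and the division yields NaN. A sketch of the same function with that case handled, assuming you then want to remove the stored average as well:

exports.productAverage = functions.database.ref('/Products/{product_id}/rating')
  .onWrite((change, context) => {
    const snapshot = change.after;
    // The rating node was removed (or is empty): delete the stored average instead
    if (!snapshot.exists() || snapshot.numChildren() === 0) {
      return change.before.ref.parent.child('productRating').remove();
    }
    let sum = 0;
    snapshot.forEach(child => {
      sum = sum + child.val();
    });
    const productRating = sum / snapshot.numChildren();
    return snapshot.ref.parent.child('productRating').set(productRating);
  });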
Let's look at the following situation:
If we create a user, we have to create a new client, a new user, and a new, initial project for the user.
db = {
  users: {},
  clients: {},
  projects: {}
};

const usersRef = firebase.database().ref("/users");
const clientsRef = firebase.database().ref("/clients");
const projectsRef = firebase.database().ref("/projects");
To keep the code clean, and separated, we can create three functions:
const newUserToDb = name => {
  const newUser = usersRef.push();
  newUser.set({name});
};

const newClientToDb = name => {
  const newClient = clientsRef.push();
  newClient.set({name});
};

const newProjectToDb = name => {
  const newProject = projectsRef.push();
  newProject.set({name});
};

const createUserToDb = (userName, clientName, projectName) => {
  newUserToDb(userName);
  newClientToDb(clientName);
  newProjectToDb(projectName);
};
To make all the changes in one place, but make the code less separated:
const createUserToDb = (userName, clientName, projectName) => {
  const userId = usersRef.push().key;
  const clientId = clientsRef.push().key;
  const projectId = projectsRef.push().key;

  const updates = {};
  updates[`/users/${userId}`] = userName;
  updates[`/clients/${clientId}`] = clientName;
  updates[`/projects/${projectId}`] = projectName;

  firebase.database().ref().update(updates);
};
Is there any important difference between the two solutions above? Which is more efficient?
The important difference between the two approaches is atomicity. In the first scenario, each individual update will succeed or fail without affecting the other updates. In the second scenario, either all the updates succeed or none of them do.
I don't think efficiency is the right term for comparing the two scenarios; it's more about the business/use case, which will define which one you need.
The first way seems more separated and explicit, which would probably be easier for other developers to understand.
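To see the atomicity in practice, you could attach error handling to the multi-path update; with a single update() call either every path is written or none of them is. A small sketch, reusing the updates object from the second version:

firebase.database().ref().update(updates)
  .then(() => console.log('user, client and project were created together'))
  .catch(error => {
    // The multi-location update is atomic: on failure none of the three paths were written
    console.error('atomic update failed', error);
  });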