Add exceptions on Firebase Cloud Functions triggers - JavaScript

In a social app, I have a cloud function that updates a 'likes' counter whenever a post is liked, which is to say whenever the following reference is updated:
/likes/{postId}/{userid}
'countOfLikes' is written at the same level as the wildcard {userId}:
exports.countLikeChange = functions.database.ref('/likes/{postId}/{userId}').onWrite(event => {
    const collectionRef = event.data.ref.parent;
    const counterRef = collectionRef.child('countOfLikes');
    return collectionRef.once('value')
        .then(messagesData => counterRef.set(messagesData.numChildren() - 1));
});
With my current code, when a user likes a post, the function is triggered to update countOfLikes, which in turn triggers the same function to repeat the same action...
Is there a way to specify an exclusion so that the function will not be triggered if {userId} == 'countOfLikes'?
I know that I could use onCreate instead of onWrite, but note that in my app a user can also remove their 'like'.

There is no way to exclude a specific child from triggering a function.
If you find that need, it typically means you've combined types of data that should be kept separate.
For example, I would expect your like counts to be in a separate node altogether, e.g. /likeCounts/$postId. If you structure it like that, updating the count for a new like won't retrigger the function.
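A minimal sketch of that restructuring, keeping the legacy event-based API used in the question (the likeCounts node name is an assumption):
exports.countLikeChange = functions.database.ref('/likes/{postId}/{userId}')
    .onWrite(event => {
        // The count lives under /likeCounts/{postId}, outside the trigger path,
        // so writing it cannot retrigger this function.
        const countRef = event.data.ref.root.child(`likeCounts/${event.params.postId}`);
        return event.data.ref.parent.once('value')
            .then(snap => countRef.set(snap.numChildren())); // no "- 1" needed anymore
    });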

Related

Pg-promise: does changing the search_path inside a db.task() also change it for queries running outside?

Let's say I have the following code:
db.task(t => {
    return t.none('set search_path to myschema').then(() => {
        return t.any('select * from mytable').then(results => {
            return t.none('set search_path to originalschema').then(() => {
                return results;
            });
        });
    });
});
Could a query outside of db.task(), that happened to run in between of the two search_path changes inside db.task(), actually access the data in 'myschema' instead of 'originalschema'?
Could a query outside of db.task(), that happened to run in between of the two search_path changes inside db.task(), actually access the data in 'myschema' instead of 'originalschema'?
No.
SET search_path is a session-based operation, i.e. it applies only to the current connection, which the task allocates exclusively for the entire duration of its execution.
Once the task has finished, it releases the connection back to the pool. At that point, any query that gets that same connection will be working with the alternative schema, unless it is another task that sets the schema again. This gets tricky if you are setting the schema in just one task, and it is generally not recommended.
Here's how it should be instead:
If you want to access a special-case schema inside just one task, best is to specify the schema name explicitly in the query.
If you want to set custom schema(s) dynamically, for the entire app, best is to use the schema option of the Initialization Options. This will propagate the schema automatically through all new connections, as sketched below.
If you want to set the schema statically, there are queries for setting the schema permanently.
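For the second option, a minimal sketch, assuming a pg-promise version that supports the schema initialization option (the connection string is hypothetical):
const initOptions = {schema: 'my_schema'}; // or an array: ['public', 'my_schema']
const pgp = require('pg-promise')(initOptions);
const db = pgp('postgres://user:password@localhost:5432/mydb');
// every fresh connection now gets its search_path set automatically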
Addition:
And if you have a very special case, whereby a task needs to run reusable queries inside an alternative schema, then you would set the schema at the beginning of the task and restore it to the default schema at the end, so that any other query which picks up that connection later won't try to use the wrong schema.
Extra:
The example below creates a custom task method (I called it taskEx), consistent across the entire protocol, which accepts a new option schema, to set the optional schema inside the task:
const initOptions = {
    extend(obj) {
        obj.taskEx = function () {
            const args = pgp.utils.taskArgs(arguments); // parse arguments
            const {schema} = args.options;
            delete args.options.schema; // to avoid an error being thrown
            if (schema) {
                return obj.task.call(this, args.options, t => {
                    return t.none('SET search_path to $1:name', [schema])
                        .then(args.cb.bind(t, t));
                });
            }
            return obj.task.apply(this, args);
        };
    }
};
const pgp = require('pg-promise')(initOptions);
So you can use it anywhere in your code:
const schema = 'public'; // or as an array: ['public', 'my_schema']
db.taskEx({schema}, t => {
    // schema is already set inside the task
});
Note that the taskEx implementation assumes the schema is fully dynamic. If it is static, there is no point re-issuing SET search_path on every task execution; you would want to do it only for fresh connections, based on the following check:
const isFreshConnection = t.ctx.useCount === 0;
However, in that case you would be better off using initialization option schema instead, as explained earlier.
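Still, for illustration, a hypothetical variation of taskEx along those lines, issuing SET search_path only on fresh connections (the static schema name 'my_schema' is an assumption):
obj.taskEx = function () {
    const args = pgp.utils.taskArgs(arguments); // parse arguments
    return obj.task.call(this, args.options, t => {
        if (t.ctx.useCount === 0) {
            // fresh connection: the search_path hasn't been set on it yet
            return t.none('SET search_path to $1:name', ['my_schema'])
                .then(() => args.cb.call(t, t));
        }
        return args.cb.call(t, t); // reused connection: search_path already set
    });
};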

Block additional Firestore triggers from executing in a Firestore trigger

Is there any way to specify something like "FINAL" when you execute a Firestore trigger so an app with a lot of triggers doesn't create recursive updates?
For a simple example, here is a trigger that adds the created_at field to every document that is created:
exports.setCreatedDate = functions.firestore
    .document('{collectionID}/{docID}')
    .onCreate((change, context) => {
        const patch = {created_at: Date.now()};
        return change.ref.set(patch, {merge: true});
    });
And another trigger that does the same when a document is updated:
exports.setUpdatedDate = functions.firestore
    .document('{collectionID}/{docID}')
    .onUpdate((change, context) => {
        const patch = {updated_at: Date.now()};
        return change.after.ref.set(patch, {merge: true});
    });
Is there any built in way (I have a couple hacky workarounds already) to block the update from firing after we set the created_at time?
We are moving as much of the application logic as possible into Cloud Functions for an enterprise app that has to scale to 100 million+ records, and cascading triggers are causing huge spikes in Firestore requests.
One obvious solution is to use a single trigger and then a stack of functions that it calls, but I love the simplicity of sticking to core functionality and being able to write as many isolated triggers as we like.
Is there any way to specify something like "FINAL" when you execute a Firestore trigger so an app with a lot of triggers doesn't create recursive updates?
No.
Is there any built in way (I have a couple hacky workarounds already) to block the update from firing after we set the created_at time?
No.
You will have to write code in your function to determine whether the invocation should make any more changes, and return early if not. A common approach is to check whether the change has already been made by looking at existing fields in the document.
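A minimal sketch of that guard, applied to the setUpdatedDate trigger above (the field-diff logic is an assumption, not a built-in API):
exports.setUpdatedDate = functions.firestore
    .document('{collectionID}/{docID}')
    .onUpdate((change, context) => {
        const before = change.before.data();
        const after = change.after.data();
        // List the fields that actually differ between the two snapshots.
        const changed = Object.keys(after).filter(
            key => JSON.stringify(before[key]) !== JSON.stringify(after[key]));
        // If our own timestamp patches are the only difference, break the cascade.
        if (changed.every(key => key === 'created_at' || key === 'updated_at')) {
            return null;
        }
        return change.after.ref.set({updated_at: Date.now()}, {merge: true});
    });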

Cloud Functions - Preventing triggers on certain fields

I have an app where people can request food. I have orders, and each order has a status, so I made a function that sends the user a notification each time the status of an order changes. The status can be changed by the store that takes the order, and by the user when the order is done.
But I want to limit the execution of this function to just two fields, userId and status, because right now, with onUpdate, the trigger launches whenever any update is made to the document.
I'm planning to update fields other than uid and status on this document, and I don't want the trigger to fire again and send a notification to the user when it isn't needed.
Is there any way to limit the trigger to just certain fields in the document?
exports.onOrderUpdated = functions.firestore
    .document('orders/{orderId}').onUpdate((change, context) => {
        var db = admin.firestore();
        try {
            const orderDataSnap = change.after.data();
            var userId = orderDataSnap.uid;
            var orderStatus = orderDataSnap.status;
        } catch (error) {
            return handleErrorToUser(error);
        }
        // ... (rest of the function omitted in the question)
    });
Here I want to execute this function only when userId and status change in that document.
Is there any way to do this?
Thanks
According to the documentation of Change, there are before and after snapshots.
You can call the data() method on each of these and check whether the userId and status are both equal in the before and after copies. If they are, just return early from the function.
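Applied to the trigger above, that early return could look like this (sendNotification stands in for whatever notification code follows; it is hypothetical):
exports.onOrderUpdated = functions.firestore
    .document('orders/{orderId}').onUpdate((change, context) => {
        const before = change.before.data();
        const after = change.after.data();
        // Bail out unless one of the two fields we care about changed.
        if (before.uid === after.uid && before.status === after.status) {
            return null;
        }
        return sendNotification(after.uid, after.status); // hypothetical helper
    });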
No, there isn't any way to trigger a Cloud Function only if some specific fields of a document are changed. As explained by samdy1 in his answer, you can detect, within the Cloud Function, which fields have changed, but for that the Cloud Function needs to be triggered.
One solution would be to write a document to another dedicated collection, in parallel to the change.
For example, if you are updating the document with a new status, you also write a doc to a statusUpdates collection with the ID of the parent order document and the status value, and you trigger a Cloud Function on that document creation, as sketched below.
Of course, this implies a document creation, which has a cost (in addition to the Cloud Function triggering). It's up to you to do the math, depending on the frequency of the updates, to work out whether this approach is cheaper than triggering the Cloud Function on the order document for nothing.
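A rough sketch of that approach (the statusUpdates collection and its fields are hypothetical): the code that changes the status also writes a companion document, and the notification trigger listens there instead of on orders.
// written in parallel to the status change, e.g. from the client or the store backend:
// db.collection('statusUpdates').add({orderId, uid, status});

exports.onStatusUpdate = functions.firestore
    .document('statusUpdates/{updateId}')
    .onCreate((snapshot, context) => {
        const {orderId, uid, status} = snapshot.data();
        return sendNotification(uid, `Order ${orderId} is now: ${status}`); // hypothetical helper
    });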

Require module that uses a singleton array

I want to create a really basic CRUD (sort-of) example app, to see how things work.
I want to store items (items of a shopping-list) in an array, using functions defined in my listService.js such as addItem(item), getAllItems() and so on.
My problem arises when using the same module (listService.js) in different files, because the array in which it stores the data ends up being created multiple times, and I want it to behave like a single static "global" (but not a global variable) array.
listService.js looks like this:
const items = [];

function addItem (item) {
    items.push(item);
}

function getItems () {
    return items;
}

module.exports = {addItem, getItems};
and I want to use it in mainWindowScript.js and addWindowScript.js: in addWindowScript.js to add elements to the array, and in mainWindowScript.js to get the elements and put them in a table. (I will implement the Observer pattern later on to handle adding rows to the table when needed.)
addWindowScript.js looks something like this:
const electron = require('electron');
const {ipcRenderer} = electron;
const service = require('../../service/listService.js');

const form = document.querySelector('form');
form.addEventListener('submit', submitForm);

function submitForm(e) {
    e.preventDefault();
    const item = document.querySelector("#item").value;
    service.addItem(item);
    console.log(service.getItems());
    // This prints all the items I add correctly
    // ...
}
and mainWindowScript.js like this:
const electron = require('electron');
const service = require('../../service/listService.js');

const buttonShowAll = document.querySelector("#showAllBtn");
buttonShowAll.addEventListener("click", () => {
    console.log(service.getItems());
    // This just shows an empty array, even after I add items in the add window
});
In Java, C#, C++, or whatever, I would just create a class for each of those, and in main I'd create an instance of the Service and pass a reference to it to the windows. How can I do something similar here?
When I first wrote the example (from a YouTube video), I handled this by sending messages through ipcRenderer to the main module and then forwarding them to the other window, but I don't want to deal with this every time there's a signal from one window to another.
ipcRenderer.send('item:add', item);
and in main
ipcMain.on('item:add', (event, item) => {
    mainWindow.webContents.send('item:add', item);
});
So, to sum up, I want to do something like: require the module, use the functions wherever needed, and have only one instance of the object.
require the module, use the functions wherever needed, and have only one instance of the object.
TL;DR: no, that isn't possible.
Long version: Electron is multi-process by nature; the code you run in the main process (the Node.js side) and in a renderer (the Chromium browser) runs in different processes. So even if you require the same module file, the object each process creates lives in that process's own memory. There is no way to share an object between processes except by synchronizing it via IPC communication. There are a couple of handy synchronization modules out there, or you could write your module to do that job itself, like:
module.js
if (process.type === 'browser') { // main process
    const {ipcMain, webContents} = require('electron');
    const items = []; // the canonical copy lives here
    ipcMain.on('item:add', (event, item) => {
        items.push(item); // update the object
        // broadcast the change back to every renderer
        webContents.getAllWebContents().forEach(wc => wc.send('item:add', item));
    });
} else { // renderer process: send changes with ipcRenderer.send('item:add', item)
    const {ipcRenderer} = require('electron');
    ipcRenderer.on('item:add', (event, item) => { /* listen for changes from main */ });
}
But in either case, you can't get away from IPC.

Cloud Functions for Firebase to index Firebase Database Objects in Algolia

I went through the docs and GitHub repositories, but nothing has worked for me yet.
My data structure:
App {
    posts : {
        <post_keys> : {
            auth_name : "name",
            text : "some text" // and many other fields
        }
    }
}
1) GitHub repository: If I use this, I only get one field from one function; if I needed all the fields, I would have to write a separate function for each, which is a bad approach.
2) Algolia official docs for Node.js: This cannot be deployed as a Cloud Function, but it does what I intend to do.
How can I write a function that can be deployed on Firebase and gets the whole object indexed with its key in Algolia?
Okay, so I went ahead and created a Firebase Cloud Function in order to index whole objects in the Algolia index. This is the solution:
What you were doing is something like this:
exports.indexentry = functions.database.ref('/blog-posts/{blogid}/text').onWrite(event => {
What you should do is the following:
exports.indexentry = functions.database.ref('/blog-posts/{blogid}').onWrite(event => {
    const index = client.initIndex(ALGOLIA_POSTS_INDEX_NAME);
    const firebaseObject = event.data.val();
    firebaseObject.objectID = event.params.blogid;
    return index.saveObject(firebaseObject).then(
        () => event.data.adminRef.parent.child('last_index_timestamp').set(
            Date.parse(event.timestamp)));
});
The difference is in the first line: in the first case, you only listen for text changes, hence you only get the data containing the text change.
In the second case, you get the whole object, since you listen for changes to the entire blog object (notice how /text was removed).
I tested it and it works for me: the whole object, including the author, was indexed in Algolia.
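For completeness, the snippet assumes an Algolia client initialized above it; a minimal sketch (the config key names are assumptions):
const functions = require('firebase-functions');
const algoliasearch = require('algoliasearch');

const ALGOLIA_POSTS_INDEX_NAME = 'blog_posts';
const client = algoliasearch(functions.config().algolia.app_id,
    functions.config().algolia.api_key);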
