Get only new "child added" after page load - javascript

I have a Facebook-style notification area in the top right corner of my website, where I show up to 5 of the latest notifications. I do the initial pull with child_added and keep a child_added listener on the same Firebase ref for new notifications.
Now I'd like to play a sound on a new notification and show a small counter of new notifications.
The only thing I can't figure out is how to distinguish a genuinely new notification from one that was already seen, i.e. one that is simply re-delivered on page reload. Is there any approach other than adding a new read property?
I was looking around and found some old answers from 2012 suggesting limitToLast(1), which doesn't help in my case.
EDIT:
https://stackoverflow.com/a/27693310/633154 This answer by Kato recommends listening only to notifications whose timestamp is greater than the current Firebase server time (Firebase.ServerValue.TIMESTAMP). This seems the way to go, but I create new notifications with the REST API and set the timestamp myself as my server's UTC time, so there may be some minor inconsistencies. Shouldn't be a big deal.
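If those inconsistencies ever become a problem, I believe the REST API also accepts the server-value placeholder in the payload, so Firebase's own clock would stamp the notification (the URL and fields below only illustrate the shape):

POST https://<my-app>.firebaseio.com/notifications/custom:<userId>.json
{
    "sender": "kato",
    "message": "hello world",
    "timestamp": { ".sv": "timestamp" }
}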
EDIT 2:
With this query I correctly get up to the 5 last notifications on page load, but no new notifications come in afterwards:
notifRef.limitToLast(5).once("value", function(snapshot) {
    snapshot.forEach(function(data) {
        addNotifications(data.val());
    });
});
In the SO thread linked above, Kato's answer doesn't work as written: notifRef.orderBy is not a function.
I have tried multiple other variants according to the docs:
https://www.firebase.com/docs/web/guide/retrieving-data.html#section-queries
My structure is the same:
{
    "messages": {
        "$messageid": { // firebase generated key 'JqcEWLFJrl1eaed5naN'
            "sender": "kato",
            "message": "hello world",
            "timestamp": 1433036536108 // Firebase.ServerValue.TIMESTAMP
        }
    }
}
Here is what I tried and the errors I'm getting:
var queryRef = notifRef.orderByKey().startAt(Firebase.ServerValue.TIMESTAMP);
Error: Query: When ordering by key, the argument passed to startAt(), endAt(), or equalTo() must be a string.
var queryRef = notifRef.orderByChild('timestamp').startAt(Firebase.ServerValue.TIMESTAMP);
Error: Query: First argument passed to startAt(), endAt(), or equalTo() cannot be an object.
In the documentation I have only seen plain values (strings or numbers) passed to startAt(), never the Firebase server timestamp object, which explains the error.
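For completeness: Firebase.ServerValue.TIMESTAMP is only the placeholder object {".sv": "timestamp"}, so it can never serve as a query boundary. A possible workaround, which I did not end up using, would be to estimate the server clock on the client via the documented .info/serverTimeOffset location, roughly:

var offsetRef = notifRef.root().child(".info/serverTimeOffset");
offsetRef.once("value", function(offsetSnap) {
    var estimatedServerTime = Date.now() + offsetSnap.val();
    notifRef.orderByChild('timestamp').startAt(estimatedServerTime)
        .on('child_added', function(snap) {
            addNotifications(snap.val());
        });
});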
Only the version below runs, with startAt and no ordering, but it never fires for any new notifications!
var queryRef = notifRef.startAt(Firebase.ServerValue.TIMESTAMP);
queryRef.on('child_added', function(snap) {
    console.log(snap.val());
    addNotifications(snap.val());
    // TODO clean up if more than 5 notifications
});
Any idea where could be the problem? What is the correct way to listen only to newer notifications than current timestamp?
EDIT 3
Here is my final solution:
notifRef.limitToLast(5).once("value", function(snapshot) {
    var lastKey = null; // at least 1 key is always present
    var count = 0;      // because startAt is inclusive, we have to ignore the first child_added
    snapshot.forEach(function(data) {
        addNotifications(data.val());
        lastKey = data.key();
    });
    checkNotifications();
    notifRef.orderByKey().startAt(lastKey).on('child_added', function(snap) {
        if (count > 0) {
            console.log(snap.val());
            addNotifications(snap.val());
            // TODO clean up if more than 5 notifications
            checkNotifications();
        }
        count++;
    });
});
I don't trust the browser's clock, so I first query the last 5 existing keys and only then pass the last key I received to startAt. notifRef.orderByKey().startAt(lastKey) can't live outside the notifRef.limitToLast(5).once("value", ...) callback, because the once() callback runs asynchronously (and, per the docs, after other listeners), so the lastKey variable passed to startAt would otherwise still be null.
I also need the count variable because startAt is inclusive: the last existing key was already rendered, so I have to ignore the first child_added it fires.
Also, with this solution, when there are more than 5 notifications I query my backend with checkNotifications only once, after the existing notifications have been received by the once() query. Otherwise, doing it in child_added would run it up to 5 times on every page load.
If there is anything that could be optimized, please tell me.

One solution would be to have your local client listen for the 5 latest notifications via ref.limitToLast(5).on('child_added', ...) and then only render them to the user if some timestamp field on each of those notifications is newer than the local time on the machine.
When writing those notifications from other clients, you could include a timestamp field set via Firebase.ServerValue.TIMESTAMP, which uses the server's notion of the Unix timestamp. Readers of that data can then compare that timestamp to their local clock to make the aforementioned determination.
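A minimal sketch of that approach (ref and playSound are illustrative names, not from the question; the rest follows the question's code):

// reader: remember when the page loaded
var loadedAt = Date.now();

ref.limitToLast(5).on('child_added', function(snap) {
    var notification = snap.val();
    // treat it as "new" only if the server-stamped time is after page load
    if (notification.timestamp > loadedAt) {
        playSound();
    }
    addNotifications(notification);
});

// writer (another client): let the Firebase server stamp the time
ref.push({
    sender: "kato",
    message: "hello world",
    timestamp: Firebase.ServerValue.TIMESTAMP
});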

Related

Filter collection by lastModified

I need to fetch the subset of documents in a Firestore collection that were modified after some moment. I tried going these ways:
It seems that native filtering only works on real fields stored in the document; even though the Firestore API internally has DocumentSnapshot.getUpdateTime(), I cannot use this information in my query.
I tried adding a _lastModifiedAt 'service field' via a server-side Firestore Cloud Function, but updating _lastModifiedAt causes recursive invocation of the onWrite() function, so it also does not work as needed (the recursion finally stops with Error: quota exceeded (Function invocations : per 100 seconds)).
Are there other ideas how to filter a collection by 'lastModifiedTime'?
Here is my 'cloud function' for reference.
It would work if I could identify who is modifying the document, i.e. ignore my own updates of the _lastModified field, but I see no way to check for this.
_lastModifiedBy is set to null because Firestore currently cannot provide auth information to triggers (see here).
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
    console.log(event.data.data());
    var now = Date.now();
    var lastModified = {
        _lastModifiedBy: null,
        _lastModifiedAt: now
    };
    return event.data.ref.set(lastModified, {merge: true});
});
I've found a way to prevent recursion while updating '_lastModifiedAt'.
Note: this will not work reliably if the client can also update '_lastModifiedAt'. It does not matter much in my environment, but in the general case I think writing to '_lastModifiedAt' should be allowed only to service accounts.
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
    var doc = event.data.data();
    var prevDoc = event.data.previous.data();
    if (doc && prevDoc && (doc._lastModifiedAt != prevDoc._lastModifiedAt))
        // this is my own change
        return 0;
    var lastModified = getLastModified(event);
    return event.data.ref.set(lastModified, {merge: true});
});
Update: Warning - updating lastModified in the onWrite() event causes infinite recursion when trying to delete all documents in the Firebase console. This happens because onWrite() is also triggered for deletes, and writing lastModified into the deleted document actually resurrects it. The resurrected document then reappears in the console and gets deleted again, indefinitely (until the web page is closed).
To fix that issue, the code above has to be split into separate onCreate() and onUpdate() handlers.
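Here is a rough sketch of that split, staying with the same event API used above (how the timestamp value itself is produced is up to you; I simply use Date.now() here):

exports.createdLastModifiedAt = functions.firestore.document('/{collId}/{documentId}').onCreate(event => {
    // onCreate is never fired for deletes, so nothing can be resurrected
    return event.data.ref.set({_lastModifiedBy: null, _lastModifiedAt: Date.now()}, {merge: true});
});

exports.updatedLastModifiedAt = functions.firestore.document('/{collId}/{documentId}').onUpdate(event => {
    var doc = event.data.data();
    var prevDoc = event.data.previous.data();
    // skip our own write of _lastModifiedAt to avoid recursion
    if (doc && prevDoc && (doc._lastModifiedAt != prevDoc._lastModifiedAt))
        return 0;
    return event.data.ref.set({_lastModifiedBy: null, _lastModifiedAt: Date.now()}, {merge: true});
});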
How about letting the client write the timestamp with FieldValue.serverTimestamp() and then validating in security rules that the written value equals the request time?
Also see Mike's answer here for an example: Firestore Security Rules: If timestamp (FieldValue.serverTimestamp) equals now
You could try the following function, which will not update _lastModifiedAt if it has been marked as modified within the last 5 seconds. This should ensure that the function only runs once per update (as long as you don't update more than once every 5 seconds).
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
    console.log(event.data.data());
    if ((Date.now() - 5000) < event.data.data()._lastModifiedAt) { return null; }
    var now = Date.now();
    var lastModified = {
        _lastModifiedBy: null,
        _lastModifiedAt: now
    };
    return event.data.ref.set(lastModified, {merge: true});
});

Is it possible to fire a delete trigger by adjusting TTL value in CosmosDB?

I have created a pre-delete trigger by using Script Explorer in Azure portal. The below trigger is written in JavaScript:
function markReminderAsPastDue() {
    var collection = getContext().getCollection();
    var request = getContext().getRequest();
    var docToCreate = request.getBody();
    docToCreate["pastDue"] = true;
    docToCreate["id"] = "";
    var accepted = collection.createDocument(collection.getSelfLink(),
        docToCreate,
        function (err, documentCreated) {
            if (err) throw new Error('Error' + err.message);
        });
    if (!accepted) throw new Error("Document creation not accepted");
}
I set the TTL value for each document in the associated collection, so the TTL value is not equal to -1 and documents are deleted automatically once their time expires. If I delete a document manually, the pre-delete trigger is fired. However, when the document is deleted implicitly because of the TTL value, the trigger is not fired. What should I do to fix this problem? Is it possible to make triggers fire with the TTL value?
As far as I know, there's no callback mechanism of any sort in Azure Cosmos DB for TTL. The TTL enforcement is just a background thread that queries every second for documents that have expired and then deletes them.
Given your needs, I'd suggest mimicking the TTL operation in your application layer, where you can perform whatever extra business logic you need.
You can set an update-time attribute in each document and refresh it on every modification. Then run a polling mechanism in your application layer that periodically iterates over the database, finds the documents whose time has expired, and deletes them.
To reduce the pressure on your application layer, you can delegate the actual deletion to a Cosmos DB stored procedure.
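A rough sketch of that polling idea with the @azure/cosmos SDK; the expireAt field, the database/container names, the partition key, and the 60-second interval are all illustrative assumptions:

const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT, key: process.env.COSMOS_KEY });
const container = client.database("mydb").container("reminders");

setInterval(async () => {
    // find documents whose own expiry time has passed
    const { resources: expired } = await container.items.query({
        query: "SELECT * FROM c WHERE c.expireAt <= @now",
        parameters: [{ name: "@now", value: Date.now() }]
    }).fetchAll();

    for (const doc of expired) {
        // run the "past due" business logic here, then delete the document
        await container.item(doc.id, doc.id).delete(); // assumes /id is the partition key
    }
}, 60 * 1000);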
Hope it helps you.

Node.js Socket IO: How to continuously save socket data to MongoDB

I'm building a 3D game in the browser using THREE.js. Lots of fun, but I came across the following situation:
An object in my 3D scene is continuously moving around, driven by user input. I need to save the object's position to my database in real-time.
Let's start at the front-end. Angular.js is watching my object's position using its built-in $watch functionality. The object's position can change multiple times per second.
On each change, I emit an event to the backend Node.js server using Socket IO, like so:
socket.emit('update', {
    id: id,
    position: position
});
On the back-end, the event is caught and immediately emitted to the other members of the same Socket IO room. This way, everyone in the room gets the most real-time update possible.
Now, because the event can happen multiple times per second, I don't want to update my MongoDB collection on each change, since this would cause a lot of overhead. Instead, I'm looking for a way of periodically saving data to the database.
I've come up with a solution using Node.js's setInterval function, which saves data every 1000 ms. For each distinct id (which is unique per object) received on the backend, a new key is created on a JavaScript object, thus keeping track of changes on a per-object basis.
The (simplified) code on the backend:
let update_queue = new Object();

// ...

// Update Event
socket.on('update', (msg) => {
    // Flag Changes
    if (!update_queue[msg.id]) update_queue[msg.id] = { changes: true };

    // Set Interval Timer
    if (!update_queue[msg.id].timer) {
        update_queue[msg.id].timer = setInterval(() => {
            if (!update_queue[msg.id].changes) {
                clearInterval(update_queue[msg.id].timer);
                return;
            }

            // This saves data to MongoDB
            Object3DCollection.update(msg.id, msg.position)
                .then((res) => {
                    console.log('saved');
                });

            // Unflag Changes
            update_queue[msg.id].changes = false;
        }, 1000);
    }

    // Immediate Broadcast to Socket Room
    socket.broadcast.to('some_room').emit('object_updated', msg);
});
The Question
Is this a proper way of handling very frequent socket data and still saving it to a database? Or are there any other suggestions/solutions that are more robust or work better?
Note
I do not want to wait for my object to be saved to the database and then emit the saved data to the rest of the socket room. The delay of database write operations is not suitable for the real-time game situation I'm dealing with.
Thanks in advance! All suggestions/solutions are appreciated and will be considered.

Child_changed downloads everything on page load

var notifRef = new Firebase(webSocketBaseURL + "/notifications/custom:" + userId);
var $results = $("#notifications_nav"); // div in nav for notifications

notifRef.limitToLast(5).once("value", function(snapshot) {
    var lastKey = null; // at least 1 key is always present
    var count = 0;      // because startAt is inclusive, we have to ignore the first child_added
    snapshot.forEach(function(data) {
        addNotifications(data.val());
        lastKey = data.key();
    });
    checkNotifications();
    notifRef.orderByKey().startAt(lastKey).on('child_added', function(snap) {
        if (count > 0) {
            addNotifications(snap.val());
            checkNotifications();
        }
        count++;
    });
});

notifRef.on('child_changed', function(snapshot) {
    var notification = snapshot.val();
    var existingNotif = $results.find('.top-cart-item').eq(0);
    if (notification.type == existingNotif.data("type") && notification.link == existingNotif.find("a").attr('href') && notification.message == existingNotif.find("a").text()) {
        existingNotif.find(".top-cart-item-price.time").livestamp(moment(parseInt(notification.timestamp)).unix());
        existingNotif.find(".top-cart-item-quantity").text("x " + notification.count);
        checkNotifications(); // append +1 to new notifications
    }
});
Hello, here is basically the problem:
When I attach the child_changed listener, all the data at the Firebase URL is downloaded on page load. Why is child_changed downloading anything at all on page load? Shouldn't it just listen for changes to existing data and only then fire a notification?
I checked the frames under the Network tab in the browser inspector and indeed, when notifRef.on('child_changed') is commented out, only the 5 last notifications are downloaded, as requested by notifRef.limitToLast(5).once("value", ...).
Let me know if you need more details, or tell me what obvious thing I am missing here.
EDIT:
What's happening in the code is like Facebook's top-right notification area for the 5 latest notifications.
notifRef.limitToLast(5).once("value", ...) pulls the 5 latest notifications. I loop over them there to get the last key. The next part is how I implemented the answer to your most commonly asked question: how to get children that were added after the page load?
Because the limitToLast(5) part gave me the latest key added, with notifRef.orderByKey().startAt(lastKey).on('child_added', ...) I am listening only for newly added children, since the query is ordered by key and starts from the last existing one.
Now the tricky part: one notification type is a new message from a user. If that user sends multiple messages, then instead of adding a new child every time, the last child just gets an incrementing count property telling how many new messages were received from that user. But this only happens when the last notification was already a message from that user; if it was not, a new child is added for the new message.
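To illustrate with made-up values (the field names are the ones my child_changed handler reads above), after a second message from the same user the existing last child is updated in place rather than a new child being pushed:

"JqcEWLFJrl1eaed5naN": {
    "type": "message",                   // hypothetical type value
    "link": "/messages/kato",            // hypothetical
    "message": "new message from kato",  // hypothetical
    "count": 2,                          // incremented instead of pushing a new child
    "timestamp": 1433036536108
}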
Let's slice-and-dice your code a bit, to see what's going on:
var notifRef = new Firebase(webSocketBaseURL + "/notifications/custom:" + userId);
notifRef.on('child_changed', function(snapshot) {...
Note that you're not ordering or limiting the notifRef in any way here. So you're listening for child_changed events on all children at that location.
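If you only care about changes to the children you actually display, one option (just a sketch, not wired into the rest of your code) is to attach child_changed to the same limited query, so only the last 5 children are synchronized on page load:

notifRef.limitToLast(5).on('child_changed', function(snapshot) {
    // fires only when one of the 5 last children changes
    console.log(snapshot.key(), snapshot.val());
});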
But the way you're mixing value and child_ events and the different queries is tricky for me to parse, so I might have misunderstood your use-case.

Subscribe to a count of an existing collection

I need to keep track of a counter of a collection with a huge number of documents that's constantly being updated. (Think a giant list of logs.) What I don't want to do is have the server send me a list of 250k documents. I just want to see a counter rising.
I found a very similar question here, and I've also looked into .observeChanges() in the docs, but once again it seems that .observe() as well as .observeChanges() actually return the whole set before tracking what's been added, changed or deleted.
In the above example, the "added" function will fire once for every document returned, to increment a counter.
This is unacceptable with a large set - I only want to keep track of a change in the count, as I understand .count() bypasses fetching the entire set of documents. The former example also only counts documents related to a room, which isn't something I want (or was able to reproduce and get working, for that matter).
I've gotta be missing something simple, I've been stumped for hours.
Would really appreciate any feedback.
You could accomplish this with the meteor-streams smart package by Arunoda. It lets you do pub/sub without needing the database, so one thing you could send over is a reactive number, for instance.
Alternatively, and this is slightly more hacky but useful if you've got a number of things you need to count or something similar, you could have a separate "Statistics" collection (name it whatever) with a document containing that count.
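A minimal sketch of that alternative; the Statistics collection, the addLog method, and the "nbLogs" id are names I made up for illustration, and I'm assuming the counter is bumped wherever a log is written:

Statistics = new Meteor.Collection("statistics"); // declared on both client and server

if (Meteor.isServer) {
    Meteor.methods({
        addLog: function (entry) {
            Messages.insert(entry);
            // bump the single counter document whenever a log is written
            Statistics.upsert({ _id: "nbLogs" }, { $inc: { count: 1 } });
        }
    });

    // publish only the one small counter document, never the logs themselves
    Meteor.publish("statistics", function () {
        return Statistics.find({ _id: "nbLogs" });
    });
}

if (Meteor.isClient) {
    Meteor.subscribe("statistics");
}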
There is an example in the documentation about this use case. I've modified it to your particular question:
// server: publish the current size of a collection
Meteor.publish("nbLogs", function () {
    var self = this;
    var count = 0;
    var initializing = true;
    var handle = Messages.find({}).observeChanges({
        added: function (id) {
            count++;
            if (!initializing)
                self.changed("counts", "nbLogs", {nbLogs: count});
        },
        removed: function (id) {
            count--;
            self.changed("counts", "nbLogs", {nbLogs: count});
        }
        // don't care about changed
    });

    // Observe only returns after the initial added callbacks have
    // run. Now return an initial value and mark the subscription
    // as ready.
    initializing = false;
    self.added("counts", "nbLogs", {nbLogs: count});
    self.ready();

    // Stop observing the cursor when the client unsubscribes.
    // Stopping a subscription automatically takes
    // care of sending the client any removed messages.
    self.onStop(function () {
        handle.stop();
    });
});

// client: declare collection to hold count object
Counts = new Meteor.Collection("counts");

// client: subscribe to the count
Meteor.subscribe("nbLogs");

// client: use the new collection
Deps.autorun(function() {
    var doc = Counts.findOne("nbLogs");
    if (doc) console.log("nbLogs: " + doc.nbLogs);
});
There might be some higher level ways to do this in the future.
