Receiving notifications of child_added from that moment on

I am using Firebase both in the browser on the client side and in Node.js on the server side, in order to make the site SEO friendly. So I'm rendering a list of items on the server side and then listening for new updates on the client side.
I was not able to find a proper query chain for on('child_added') that starts listening for child_added notifications only after a specific ID.
endAt() and startAt() are inclusive of their endpoints, so they return the last existing child as well.
Some other answers suggest using endAt().limit(1), but this seems flawed, as multiple children could be added in the gap. Also, since endAt() is inclusive, endAt().limit(1) still returns the last existing child.

You could do a once('value'), then start the .on('child_added') listener inside the once() callback, and compare the names (keys) of new children with those read initially in the once() call.
This is a little ugly though, and requires keeping a bunch of child names in memory. If you describe your use case in a little more detail, it might help people think of a better solution. How is your site designed such that you need to load only future additions and can't throw out the query endpoints?
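A rough sketch of that approach (using the modern firebase.database() API; ref and the seenKeys bookkeeping are illustrative, not from the question):

var seenKeys = {};

// Read the children that already exist and remember their keys...
ref.once('value', function(snapshot) {
  snapshot.forEach(function(child) {
    seenKeys[child.key] = true;
  });

  // ...then only react to keys we haven't seen before.
  ref.on('child_added', function(child) {
    if (seenKeys[child.key]) return; // already rendered on the server
    seenKeys[child.key] = true;
    // handle the genuinely new child here
  });
});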

You could push() your own dummy data, then startAt(null, the-name-of-the-dummy).on('child_added'), and throw out the first result.
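In today's API that might look something like this (a sketch; push keys sort chronologically, and dummyRef.key plays the role of "the-name-of-the-dummy" from the older startAt(priority, name) signature):

// Push a throw-away marker whose key sorts after everything that exists,
// then start listening from that key onwards.
var dummyRef = ref.push({ dummy: true });

ref.orderByKey().startAt(dummyRef.key).on('child_added', function(snapshot) {
  if (snapshot.key === dummyRef.key) {
    dummyRef.remove(); // throw out the first (dummy) result
    return;
  }
  // handle children added from this moment on
});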

Firebase Realtime Database document ordering

I am listening for new Firebase Realtime Database documents with code something like this:
firebase.database().ref(path)
  .orderByChild('timestamp')
  .on('child_added', snap => {
    // ...
  });
where timestamp is set on the server with firebase.database.ServerValue.TIMESTAMP. I would like to have documents always handled in timestamp order, but I am aware that documents I add locally may arrive in the above code out of order.
I can check for and fix mis-ordered arrivals but I'd prefer not to if there is some way to have this not happen. I know about this answer (and answers that link to it) but I believe that applies to an earlier API without ordering methods like orderByChild.
I believe that I should be able to get timestamp order if I always add documents using a transaction and pass false for the applyLocally argument. I am wondering if it also works to add documents from a separate JavaScript context on the same client (e.g. from a Web Worker) without a transaction.
Will either or both of these approaches guarantee timestamp ordering? Is there any other way to achieve this? Among approaches that work, is one clearly superior or are there trade-offs among them?
The local estimate/latency compensation event is only fired on the client that performs the write operation. So if you perform a write operation in a different context, the original context will only see the operation when it comes from the server.
You might even be able to accomplish this by using two FirebaseApp instances, although I couldn't get that working in a quick test here myself.
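For what it's worth, the two-instance idea would look roughly like this (a sketch only, and as said I couldn't verify in a quick test that the write from the second app bypasses the local event; config and path are your own values):

// Default app: used for listening.
var listenApp = firebase.initializeApp(config);
// Named secondary app: a separate context used only for writing.
var writeApp = firebase.initializeApp(config, 'writer');

firebase.database(listenApp).ref(path)
  .orderByChild('timestamp')
  .on('child_added', snap => {
    // the hypothesis: this only fires once the data comes back from the server
  });

firebase.database(writeApp).ref(path).push({
  value: 42,
  timestamp: firebase.database.ServerValue.TIMESTAMP
});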

Wait for response from emitted message?

I'm having trouble wrapping my head around the following concept.
I'm sending OSC messages to query the status of instruments in Ableton, so I have an emitter/receiver combo going on. The thing is, I'd like to avoid having to keep up some sort of global state and wrap everything around it.
I communicate with Ableton in the following fashion:
sender.emit("/live/device", queryData);
receiver.on("/live/device", function(responseData) {
  // process response here...
});
So as you can tell, I'm never really sure when I get data back and cannot sequence new queries based on responses.
What I'd like to do is simply:
query the number of instruments on ONE specific channel
get that number back
query the parameters of each instrument on that channel based on the first query
receive the parameters back
The problem is that I have no idea how to wrap the event listeners to respond to these queries, or rather how to sequence them in a way that is non-blocking while still avoiding some sort of global state.
Querying data and storing Promises to be resolved by the event listener seems like a solution, but then I'm stuck on how to pass them back into the sequence.
After some research, it seems that this kind of behavior breaks the whole concept of event listeners, but then I suppose the whole point is to have some global state to keep track of what is going on, right?
Event listeners tell you about some asynchronous action, coming from a user action or any other interrupt. Depending on the API you are facing, it might reuse event listeners for replies instead of providing a promise or callback return from the send API. If the server has multiple clients interacting with it, it might also want to tell all clients at the same time when their state changes.
If you are sure there is no way to directly provide a callback in the send method for a reply to your request, and a request does not yield a promise that resolves with the reply at some point, there are usually workarounds.
Option 1: Send context, receive it back
Some APIs allow sending a "context" object or string along with the request. The API then includes this context in the payload whenever it answers that specific request, so the context part of the payload can be checked to see whether it is the answer to your request. You could write your own little wrapper functions for a more direct send/reply pattern on top of that.
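A hypothetical sketch of such a wrapper, assuming the OSC layer lets you attach an id to the outgoing message and echoes it back in the response payload (whether it does is entirely up to the API; the id field and the query shapes below are made up):

const pending = new Map();
let nextId = 0;

// Resolve the matching promise whenever a tagged reply arrives.
receiver.on("/live/device", function(responseData) {
  const resolve = pending.get(responseData.id);
  if (resolve) {
    pending.delete(responseData.id);
    resolve(responseData);
  }
});

// Send a query and get back a promise for its reply.
function query(queryData) {
  return new Promise(function(resolve) {
    const id = nextId++;
    pending.set(id, resolve);
    sender.emit("/live/device", Object.assign({ id: id }, queryData));
  });
}

// Sequencing then becomes plain promise chaining:
query({ channel: 1, what: "instrumentCount" })
  .then(res => Promise.all(
    Array.from({ length: res.count }, (_, i) =>
      query({ channel: 1, instrument: i, what: "parameters" }))))
  .then(allParams => console.log(allParams));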
Option 2: Figure out the result data, if it fits your request
If the response data has something specific to match on, like keys on a JSON object, it may be possible to figure out which request it belongs to.
Option 3: Use state on your side to keep track of everything
In most cases where I have seen such APIs, the server didn't care much about requests and only sent out its current state whenever that state was changed by some kind of request. The client needs to replicate the server's state by listening to all events if it wants to show the current server state.
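A minimal sketch of that replication (the payload fields and render() are placeholders):

// Local replica of whatever the server has told us so far.
const state = { devices: {} };

receiver.on("/live/device", function(data) {
  // Fold every incoming event into the replica, no matter who triggered it.
  state.devices[data.deviceId] = data;
  render(state); // redraw the UI from the replicated state
});

// Queries are fire-and-forget; answers arrive through the listener above.
sender.emit("/live/device", { channel: 1 });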
In most situations where I faced this issue, I thought about Option 1 or 2 but ended up with Option 3 anyway: other clients or hardware switches might interfere with my client UI and change the server state without me listening for that change. That way I would lose information that invalidates my UI, so I would need to listen to and replicate the state of the server/machine/hardware anyway.

Should updates to Firestore items in AngularFire be done through the AngularFirestoreCollection?

In my app, I have a list that requires an "or" condition. But, as the docs say:
In this case, you should create a separate query for each OR condition and merge the query results in your app.
As a result, in my service, I'm managing two queries and surfacing them as a single observable list to consumers.
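For context, the merge itself looks something like this (a sketch only; the collection name, field names, afs, uid and the dedup-by-id step stand in for my real service code):

import { combineLatest } from 'rxjs';
import { map } from 'rxjs/operators';

// Two queries standing in for the two halves of the "or" condition.
const owned$ = this.afs
  .collection('items', ref => ref.where('ownerId', '==', uid))
  .valueChanges({ idField: 'id' });
const shared$ = this.afs
  .collection('items', ref => ref.where('sharedWith', 'array-contains', uid))
  .valueChanges({ idField: 'id' });

// Surfaced to consumers as a single observable list.
this.items$ = combineLatest([owned$, shared$]).pipe(
  map(([owned, shared]) => {
    const byId = new Map();
    owned.concat(shared).forEach(item => byId.set(item.id, item));
    return Array.from(byId.values());
  })
);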
The problem comes in with updating. I have the choice of doing extra work to match up the item needing update to the correct collection so I can do the following:
myCollection.doc(item.id).update(item);
or I can make this much simpler and just do:
angularFirestore.doc(`path/to/${item.id}`).update(item);
I'm operating under the assumption that the first method will result in faster updates, since I'm using the same reference, so it would be optimistically updated instantly. And that the latter will be slower and more roundabout: it updates the persistence layer, and the collection reference only gets notified about it later (probably still a small amount of time).
All of the above is assumption, however. I back it up only with a few random instances where I've seen it take a second or two for an update or delete to show up in another part of the view, but I haven't been able to actually inspect the process.
Does anyone know if the above is correct? Should I be doing the extra work to write through the collection references or does angularfire(and/or firestore) handle this and make them effectively the same operation under the hood?
AngularFire2 is a thin wrapper around RxFire, which itself is a relatively thin wrapper around the Firebase JavaScript SDK.
There should be no significant performance difference between updating a document through AngularFire or updating it directly through the JavaScript SDK. In both cases the majority of the time is spent in the JavaScript SDK, and on the wire between the client and server. For this reason I typically update directly through the JavaScript SDK, since it's often a bit more direct and the AngularFire abstraction has little advantage for me in write operations. Given that AngularFire is built on top of this SDK, it picks up the changes instantly even when they're not made through AngularFire.
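For illustration (item and the path are from the question; .ref is the underlying DocumentReference that AngularFire wraps), these end up as the same write, and any AngularFire observable on that document reflects it either way:

// Through AngularFire:
angularFirestore.doc(`path/to/${item.id}`).update(item);

// Through the underlying JavaScript SDK reference that AngularFire wraps:
angularFirestore.doc(`path/to/${item.id}`).ref.update(item);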
If you have an instance where this does not seem to be the case, I recommend creating a question with the minimal, complete/standalone code that reproduces that problem.

Node.js, MongoDB, and Concurrency

I'm working on a game prototype and I'm worried about the following case: the browser does an AJAX call to Node.js, which has to do several MongoDB operations using async.series.
What prevents multiple requests at the same time from causing database issues? New events (i.e. db operations) seem like they could run out of order or in between the async.series steps.
In other words, what happens if a user makes AJAX calls very quickly, before the prior ones have finished their async.series? Hopefully that makes sense.
If this is indeed an issue, what is the proper way to handle it?
First and foremost, #fmodos's comment should be completely disregarded. It is wrong on many levels, but most simply: you could have any number of nodes running (say, on Heroku) and there is no guarantee that subsequent requests will hit the same node.
Now, I'm going to answer your question by asking more questions. (You really didn't give me a choice here)
What are these operations doing? Inserting documents? Updating existing documents? Removing documents? This is very important, because if all you're doing is inserting documents, then why does it matter if one finishes before the other? If you're updating documents, then you should NOT be issuing a find, grabbing a ref to the object, and then calling save. (I'm making the assumption you're using Mongoose; if you're not, I would.) Instead, what you should be doing is using built-in Mongo update operators like $inc, which handle concurrent requests properly.
http://docs.mongodb.org/manual/reference/operator/update/inc/
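For example (a sketch with a made-up Player model and score field; adapt it to your own schema):

var mongoose = require('mongoose');

var Player = mongoose.model('Player', new mongoose.Schema({
  name: String,
  score: Number
}));

// Every concurrent request applies an atomic increment on the server,
// so there is no read-modify-write race between overlapping AJAX calls.
function addPoints(playerId, points, done) {
  Player.update({ _id: playerId }, { $inc: { score: points } }, done);
}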
Does that help at all? If not, please let me know and I will give it another shot.
Mongo has database-wide read/write locks. It gives preference to writes on the same collection first, then fulfills reads. So if, by chance, Bill is writing to the db while Joe is reading at the same time, Bill's write will execute first while Joe waits until the write is complete, and then he is given all the data (including Bill's write).

Is Mongoose not scalable with document array editing and version control?

I am developing a web application with Node.js and MongoDB/Mongoose. Our most used Model, Record, has many subdocument arrays. Some of these, for instance, include "Comment", "Bookings", and "Subscribers".
In the client side application, whenever the user hits the "delete" button it fires off an AJAX request to the delete route for that specific comment. The problem I am running into is that, when many of these AJAX calls come in at once, Mongoose fails with a "Document not found" error on some (but not all) of the calls.
This only happens when the calls are made rapidly, many at a time. I think this is due to the versioning in Mongoose causing document conflicts. Our current process for a delete is (sketched in code below):
Fetch the document using Record.findById()
Remove the subdocument from the appropriate array (using, say, comment.remove())
Call record.save()
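In code, the flow is roughly this (a simplified sketch of our Express-style delete handler):

Record.findById(recordId, function (err, record) {
  if (err) return next(err);
  // Remove the subdocument from the comments array...
  record.comments.id(commentId).remove();
  // ...and save, which checks __v and fails if a concurrent save won the race.
  record.save(function (err) {
    if (err) return next(err); // this is where "Document not found" surfaces
    res.sendStatus(200);
  });
});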
I have found a solution where I can manually update the collection using Record.findByIdAndUpdate and the $pull operator. However, this means we can't use any of Mongoose's middleware and we lose the version control entirely. And the more I think about it, the more I realize situations where this would happen and where I would have to use Mongoose's wrapper functions like findByIdAndUpdate or findAndRemove. The only other solution I can think of would be to put the removal attempt into a while loop and hope it works, which seems like a very poor fix.
Using the Mongoose wrappers doesn't really solve my problem, as it won't allow me to use any sort of middleware or hooks at all, which is basically one of the huge benefits of using Mongoose.
Does this mean that Mongoose is essentially useless for anything with rapid editing and I might as well just use the native MongoDB driver? Am I misunderstanding Mongoose's limitations?
How could I solve this problem?
Mongoose's versioned document array editing is not scalable for the simple reason that it's not an atomic operation. As a result, the more array edit activity you have, the more likely it is that two edits will collide and you'll suffer the overhead of retry/recovery from that in your code.
For scalable document array manipulation, you have to use update with the atomic array update operators: $pull[All], $push[All], $pop, $addToSet, and $. Of course, you can also use these operators with the atomic findAndModify-based methods of findByIdAndUpdate and findOneAndUpdate if you also need the original or resulting doc.
As you mentioned, the big downside of using update instead of findOne+save is that none of your Mongoose middleware and validation is executed during an update. But I don't see that you have any choice if you want a scalable system. I'd much rather manually duplicate some middleware and validation logic for the update case than have to suffer the scalability penalties of using Mongoose's versioned document array editing. Hey, at least you still get the benefits of Mongoose's schema-based type casting on updates!
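For example, the delete from the question becomes a single atomic operation (a sketch; adjust the names to your schema):

Record.findByIdAndUpdate(
  recordId,
  { $pull: { comments: { _id: commentId } } },
  { new: true },
  function (err, updatedRecord) {
    // updatedRecord reflects the state after the pull; there is no version
    // check and no conflict with other concurrent array edits.
  }
);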
I think, from our own experiences, the answer to your question is "yes". Mongoose is not scalable for rapid array-based updates.
Background
We're experiencing the same issue at HabitRPG. After a recent surge in user growth (bringing our DB to 6 GB), we started seeing VersionError on many array-based updates (background on VersionError). ensureIndex({_id:1,__v:1}) helped a bit, but that tapered off as yet more users joined. It would appear to me that Mongoose is indeed not scalable for array-based updates. You can see our whole investigation process here.
Solution
If you can afford to move from an array to an object, do that. E.g., comments: Schema.Types.Array => comments: Schema.Types.Mixed, sorting by post.comments.{ID}.date, or even a manual post.comments.{ID}.position if necessary.
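A sketch of what that looks like (the schema and field names are just examples):

var mongoose = require('mongoose');

var postSchema = new mongoose.Schema({
  title: String,
  comments: mongoose.Schema.Types.Mixed // keyed by comment id instead of an array
});
var Post = mongoose.model('Post', postSchema);

// Adding or removing a comment only touches its own key, so edits don't
// collide with each other the way versioned array edits do.
var commentId = new mongoose.Types.ObjectId().toString();
var update = { $set: {} };
update.$set['comments.' + commentId] = { text: 'hi', date: new Date(), position: 3 };
Post.update({ _id: postId }, update, function (err) { /* ... */ });

// When rendering, sort by post.comments[id].date (or .position).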
If you're stuck with arrays:
db.collection.ensureIndex({_id:1,__v:1})
Use your methods described above. You won't benefit from hooks and validations, but there are worse things.
I would strongly suggest pulling those arrays out into new collections. For example, a Comments collection where each document has a record ID to indicate where it belongs. This is a much more scalable solution.
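A sketch of that shape (names are illustrative):

var mongoose = require('mongoose');

var commentSchema = new mongoose.Schema({
  record: { type: mongoose.Schema.Types.ObjectId, ref: 'Record', index: true },
  author: String,
  text: String,
  date: { type: Date, default: Date.now }
});
var Comment = mongoose.model('Comment', commentSchema);

// Deleting a comment is now an independent single-document operation, so
// concurrent deletes never touch the same Record document.
Comment.findByIdAndRemove(commentId, function (err) { /* ... */ });

// Listing a record's comments:
Comment.find({ record: recordId }).sort('date').exec(function (err, comments) { /* ... */ });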
You are correct, Mongoose's array operations are not atomic and therefore do not scale well.
I thought of another idea, which I'm not certain about but which seems worth offering: soft-delete.
Mongoose is very concerned about array-structure changes because they make future changes ambiguous. But if you were to just tag a comment subdocument with comment.deleted = true, you might be able to do more such operations without encountering conflicts. Then you could have a cron task that goes through and actually removes those comments.
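Something like this, using the positional $ operator (a sketch; the cron task then physically $pulls the flagged comments later):

// Flag the comment instead of changing the array's structure.
Record.update(
  { _id: recordId, 'comments._id': commentId },
  { $set: { 'comments.$.deleted': true } },
  function (err) { /* ... */ }
);

// Later, in the cron task, remove everything that was flagged.
Record.update(
  {},
  { $pull: { comments: { deleted: true } } },
  { multi: true },
  function (err) { /* ... */ }
);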
Oh, an additional idea is to use some sort of in-memory cache, so that if a record has been accessed/edited in the last few minutes it's available without having to pull it from the server. That way two requests coming in at the same time would be modifying the same object.
Note: I'm not actually sure that either of these are good ideas in general or that they'll solve your problem, so go ahead and edit/comment/downvote if they're bad :)
