would minimongo cache across subscriptions? - javascript

If I have a subscription inside a Tracker.autorun() and the publish function takes a variable selector, the returned set may vary on every rerun. Would minimongo cache all the docs returned from all the publications, or does it clear all its documents each time and preserve only the docs returned by the latest publication?

Meteor is clever enough to keep track of the current document set that each client has for each publisher. When the publisher reruns, it knows to only send the difference between the sets. Let's use the following sequence as an example:
subscribe for posts: a,b,c
rerun the subscription for posts b,c,d
server sends a removed message for a and an added message for d.
Note this will not happen if you stopped the subscription prior to rerunning it.
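To make that sequence concrete, here is a minimal sketch of a subscription rerunning inside Tracker.autorun(); selectedLimit and the 'posts' publication arguments are assumptions for illustration, not code from the question:

var selectedLimit = new ReactiveVar(10); // requires the reactive-var package

Tracker.autorun(function () {
  // When selectedLimit changes, this subscription reruns with the new
  // argument; the server diffs the old and new document sets and sends
  // only added/removed messages, so minimongo is patched, not rebuilt.
  Meteor.subscribe('posts', { limit: selectedLimit.get() });
});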

Related

Efficiency or cost of many calls to Firebase

My app produces pages of 20 content items that may include items liked by the current user and other users. I use Firebase Realtime Database to keep track of the likes but the content is stored elsewhere and the pages are server-rendered, so a completely new page is served each time. I need to identify and mark the liked items and show the status of each item in real time:
Its number of likes, and
Whether it has been liked by the current user.
To get the number of likes in real time, I loop through the items with this function, modified from the Firebase Friendlypix demo:
function registerForLikesCount(postId, likesCallback = null) {
  const likedRef = firebase.database().ref(`/liked/${postId}`);
  likedRef.on("value", function (snapshot) {
    if (snapshot.val()) {
      likesCallback(snapshot.val().count);
    }
  });
}
That's 20 calls per page, each of which also sets an event listener, right? It's fast, but I don't know about the cost. What resources do all those listeners use (a) if nothing happens, or (b) if a like is registered and transmitted to, say, 100 concurrent users?
To keep track of whether the current logged-in user has liked any of the items, I've tried two ways:
For each page, grab the user's likes from their Firebase node. That's one call to Firebase, grabbing at most a few dozen IDs, and then, during the above-mentioned loop, checking whether each item's ID is included.
Or, using a snippet from Friendlypix, for another 20 calls to Firebase:
function registerToUserLike(postId, callback) {
  // Load and listen to new Likes.
  const likesRef = firebase.database().ref(`likes/${postId}/${firebase.auth().currentUser.uid}`);
  likesRef.on('value', (data) => callback(!!data.val()));
}
registerToUserLike has the advantage of keeping any of the user's open tabs or devices updated and shows off the magic of realtime, but at what price?
I would like to understand what resources are consumed by this activity in order to estimate the cost of running my application.
The overhead of the Firebase protocol for all these listeners is minimal: each listener sends the path it wants to listen to up to the server (not a billed operation for Firebase, though your mobile provider may charge for the bandwidth), and then receives the data at that path plus any subsequent updates to it.
The number of calls is not a significant part of the cost here, so the only way to reduce the cost is to listen to less data. In a nutshell: it matters very little whether you have 20 calls listening to 1 node each, or 1 call listening to 20 nodes in Firebase.
For more on why this is, see my answer to the question Speed up fetching posts for my social network app by using query instead of observing a single event repeatedly.
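As one concrete way to "listen to less data" here: the loop from the question could subscribe to just the count child instead of the whole node. A minimal sketch, assuming the same /liked/{postId}/count layout that registerForLikesCount already reads:

function registerForLikesCount(postId, likesCallback) {
  // Subscribing one level deeper means only the counter value travels
  // over the wire, not the rest of the /liked/{postId} node.
  const countRef = firebase.database().ref(`/liked/${postId}/count`);
  countRef.on("value", (snapshot) => {
    if (snapshot.exists()) {
      likesCallback(snapshot.val());
    }
  });
}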

Firebase real-time efficiency question: How to set up listeners for a chat app

I have a chat app using Firebase as a realtime database and React Native. I'm trying to figure out the most efficient way to set up the listener for chat messages from Firebase, in terms of minimizing read operations and data transfer. Here is my data structure:
- messages
  - chatId
    - messageId
      - sentBy
      - timestamp
      - text
As I see it, I have two options: either ref.on("child_added") or ref.on("value").
If I use ref.on("child_added"), the advantage is that when a new message is sent then only the newest message is retrieved. The problem though is that when the conversation is loaded the read operation is called for each message in the chat. If a conversation is hundreds of messages long, then that's hundreds of read operations.
The other option is to use ref.on("value"). The problem here is that on every new message added, the entire conversation is resent instead of just the most recent message. The advantage is that when the conversation is loaded, only one read operation is called to transfer the entire conversation to the screen.
I want some combination of the two of these in which when the conversation is loaded, there is one read operation that brings the entire contents of the conversation, AND when a new child node is added (a new message) only that message is transmitted to the listener. How can I achieve this?
firebaser here
There is no difference between the wire traffic for a value listener and child_* listeners on the same location/query. If you check the Network tab of your browser, you can see exactly what is sent and retrieved, and you'll see that it's exactly the same for both listener types.
The difference between value and child_* events is purely made client-side, to make it easier for you to update the UI. In fact, even when you attach both value and child_* listeners to the same query/location, Firebase will retrieve the data only once.
The common way to do what you want is to attach both child_* and value listeners to the query/location. Since the value listener is guaranteed to be fired last, you can use that fact to detect when the initial load is done.
Something like:
var chatRef = firebase.database().ref("messages/chatId");
var initialLoadDone = false;

chatRef.on("child_added", (snapshot) => {
  if (initialLoadDone) {
    // handle a single newly added message
  }
});

chatRef.once("value", (snapshot) => {
  snapshot.forEach((messageSnapshot) => {
    // handle one message from the initial load
  });
  initialLoadDone = true;
});
Suggestion: Use Firestore. It maintains a cache of your data and efficiently handles such scenarios.
You can use ref.once('value') to get the current nodes only once, and then ref.on('child_added') for subsequent additions. More performance notes.
Edit: I believe Firebase Database already handles this efficiently with just ref.on('value'). Checking the network tab after adding a new node to my database, I noticed the amount of data transferred was very low. This might mean that Firebase caches your previous data by default. I'd recommend you look at your own network tab and decide accordingly, or wait for someone from their team to give directions.
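A sketch of how that once('value') + on('child_added') combination could look, assuming the timestamp field from the question's data structure; chatId and renderMessage are placeholders. Note that child_added replays existing children, which is why the second listener is restricted to messages newer than the initial load:

const chatRef = firebase.database().ref(`messages/${chatId}`);

chatRef.once("value").then((snapshot) => {
  let lastTimestamp = 0;
  snapshot.forEach((msg) => {
    renderMessage(msg.val()); // hypothetical UI helper
    lastTimestamp = Math.max(lastTimestamp, msg.val().timestamp);
  });

  // child_added would replay the existing children too, so start the
  // query just past the newest message from the initial load.
  chatRef
    .orderByChild("timestamp")
    .startAt(lastTimestamp + 1)
    .on("child_added", (msg) => renderMessage(msg.val()));
});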

Limiting entries in Javascript Object

Alrighty! I'm working on a small chat add-on for my website. When a user logs on, they'll see the chat history, which I store as a JavaScript object on my NodeJS server. I'd like it so that whenever there are more than fifty entries in the object, adding the latest message removes the oldest; this is to keep my server from handling a lot of messages every time a user logs on. How would I do this?
Here's how I store my messages,
var messages = {
  "session": []
};

messages.session.push({
  "name": user.name,
  "message": safe_tags_replace(m.msg),
  "image": user.avatar,
  "user": user.steamid,
  "rank": user.rank,
});
I could also just load the last fifty messages from the JSON object, but whenever I run my server for a long time without restarting it, this object will become extremely big. Would this be a problem?
Since you are pushing elements to the end of your array, you could just use array shift() to remove the first element of the array if needed. e.g.
var MAX_MESSAGES = 50;
if (messages.session.length > MAX_MESSAGES) { messages.session.shift(); }
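Put together, a small sketch of a helper that enforces the cap on every insert; addMessage is a hypothetical name wrapping the question's push:

var MAX_MESSAGES = 50;

function addMessage(entry) {
  messages.session.push(entry);
  // Drop the oldest entry once the cap is exceeded, keeping the
  // history at a constant 50 messages.
  if (messages.session.length > MAX_MESSAGES) {
    messages.session.shift();
  }
}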
To answer the second part of your question:
The more data you hold, the more physical memory you consume on the machine holding it, obviously, which can by itself be a problem, especially on mobile devices and old hardware. Also, having huge arrays will impact performance for lookups, iteration, some insert operations, and sorting.
Storing JSON objects that contain chat history on the server is not a good idea. For one, you are taking up memory that will be held for an indefinite period; if you have multiple clients all talking to each other, these objects will continue to grow and eventually impact performance. Secondly, once the server is restarted, or after you clean up these objects, the chat history is lost.
The ideal solution is to store messages in a database; a simple option is MongoDB. Whenever a user logs in to the app, query the DB for that user's chat history (here you can define how far back you want to go) and send them an initial response containing this data. Then, whenever a message is sent, insert it into the table/collection for future reference. This way the server is only responsible for sending chat history during the initial sign-on; after that, the client is responsible for appending any new messages.
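A rough sketch of that flow with the Node.js MongoDB driver; the connection string and the chat/messages database and collection names are illustrative assumptions:

const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://localhost:27017");
// Call `await client.connect()` once at server startup before using these.

async function saveMessage(msg) {
  // Persist each incoming message instead of holding it in process memory.
  await client.db("chat").collection("messages")
    .insertOne({ ...msg, createdAt: new Date() });
}

async function loadHistory() {
  // On login, fetch the newest 50 messages as the initial chat history.
  const docs = await client.db("chat").collection("messages")
    .find().sort({ createdAt: -1 }).limit(50).toArray();
  return docs.reverse(); // oldest first for display
}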

Does Meteor.publish return old data when subscribing?

In Microscope's pagination part, /10 will load 10 posts and /20 will load 20 posts.
So first subscribe('posts', {limits: 10}) makes publish return 10 posts. Then, after subscribe('posts', {limits: 20}), will publish return all 20 posts, or only the 10 new ones?
If I understand correctly, the question is basically: "does Meteor resend records that are already on the client when re-subscribing?" The answer to that is no, and it can be found in the Meteor docs for subscribe:
Meteor is smart enough to avoid wasteful unsubscribing/resubscribing

MEAN.JS setInterval process for event loop (gets data from another server)

I have a mean.js server running that allows a user to check their profile. I want a setInterval-like process running every second which, based on a condition, retrieves data from another server and updates MongoDB (simple polling / long polling). This also updates the values that the user sees.
Q: Is this kind of event loop allowed in Node.js? If so, where does the logic go that starts the interval when the server starts? Or can events only be caused by user actions (e.g. the user clicking their profile to view the data)?
Q: What are the implications of having both ends reading and writing to the same DB? Will colliding writes just overwrite each other, or fault? Is there info on how much read/write traffic would overload it?
I think you can safely run a MongoDB cron job to update every x days/hours/minutes. In the case of a user profile, I assume that's not critical data that requires you to update your DB in real time.
If you do need real-time updates, then set up DB replication and point the app to a new DB that is replicated in real time.
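To the first question: timers are a normal part of the Node.js event loop, and the interval is typically started from the server bootstrap file rather than from a request handler. A sketch, in which the remote URL, the changed condition, and the Profile Mongoose model are all illustrative assumptions:

// In server startup code, after the DB connection is up.
const axios = require("axios"); // any HTTP client works; axios is an assumption

setInterval(async () => {
  try {
    const { data } = await axios.get("https://other-server.example/stats"); // placeholder URL
    if (data.changed) { // placeholder for "based on a condition"
      // MongoDB serializes writes to a single document, so the poller and
      // a user request writing the same profile won't corrupt it; the
      // later write simply wins.
      await Profile.updateOne({ _id: data.userId }, { $set: { stats: data.stats } });
    }
  } catch (err) {
    console.error("poll failed:", err);
  }
}, 1000);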
