Node.js Socket.IO: How to continuously save socket data to MongoDB

I'm building a 3D game in the browser using THREE.js. Lots of fun, but I came across the following situation:
An object in my 3D scene is continuously moving around, driven by user input. I need to save the object's position to my database in real-time.
Let's start at the front-end. Angular.js is watching my object's position using its built-in $watch functionality. The object's position can change multiple times per second.
On each change, I emit an event to the backend Node.js server using Socket.IO, like so:
socket.emit('update', {
  id: id,
  position: position
});
On the back end, the event is caught and immediately re-emitted to the other members of the same Socket.IO room. This way, everyone in the room gets the most real-time update possible.
Now, because the event can fire multiple times per second, I don't want to update my MongoDB collection on every change, since this would cause a lot of overhead. Instead, I'm looking for a way to save data to the database periodically.
I came up with a solution using Node.js's setInterval function, which saves data every 1000 ms. For each distinct id (which is unique per object) received on the backend, a new key is created on a JavaScript object, thus keeping track of changes on a per-object basis.
The (simplified) code on the backend:
let update_queue = {};

// ...

// Update Event
socket.on('update', (msg) => {
  // Record the latest position and flag unsaved changes for this object
  if (!update_queue[msg.id]) update_queue[msg.id] = {};
  update_queue[msg.id].changes = true;
  update_queue[msg.id].position = msg.position;

  // Set Interval Timer (one per object id)
  if (!update_queue[msg.id].timer) {
    update_queue[msg.id].timer = setInterval(() => {
      const entry = update_queue[msg.id];

      // No changes since the last save: stop the timer
      if (!entry.changes) {
        clearInterval(entry.timer);
        delete update_queue[msg.id];
        return;
      }

      // This saves the latest position to MongoDB
      Object3DCollection.update(msg.id, entry.position)
        .then((res) => {
          console.log('saved');
        });

      // Unflag Changes
      entry.changes = false;
    }, 1000);
  }

  // Immediate Broadcast to Socket Room
  socket.broadcast.to('some_room').emit('object_updated', msg);
});
The Question
Is this a proper way of handling very frequent socket data while still saving it to a database? Or are there other suggestions/solutions that are more robust or would work better?
Note
I do not want to wait for my object to be saved to the database and then emit the saved data to the rest of the socket room. The delay of database write operations is not suitable for the real-time game situation I'm dealing with.
Thanks in advance! All suggestions/solutions are appreciated and will be considered.

Related

Emitting to all clients when a variable reaches a specific value

I'm trying to create a basic TCG-style game with Node/Vue/Socket.io and can't seem to figure out how to emit to both clients, with different data for each, when a "ready" count reaches 2. I'll explain a little below...
The sequence of events is as such:
player connects -> server sends player a "deck" -> player clicks ready to start and also sends back their first 'card'.. Then the server should send each player the other player's first card. (Note: my emit events don't have the correct titles atm - they were already written up on the front end so I just kept them the same.)
On connection, I push to an array called sockets, which I was using for testing. Then in the "ready" event I created an array called "firstCards" that I push the socket event data to, adding a .socket property to it (to signify who's who), then incrementing ready.
I've had a bit of a play around with a few different methods, but I can only seem to get the last card sent to both clients, as opposed to each client getting the other client's first card. I also tried just putting the "if" statement outside of the socket event (as you will see below with the comment on the curly braces), which doesn't seem to work either.
I haven't tried this kind of asymmetric data transfer before and I'm unsure if that is even the correct term, or whether this is even the correct way to do it. Any help would be much appreciated!
This is the code I'm using so far:
socket.on('ready-up', function (card) {
  console.log(`Player ${socket.id} is ready`);
  ready++;
  console.log(ready);
  card.socket = socket.id;
  firstCards.push(card);
  console.log(firstCards);
});

if (ready == 2) {
  for (let i = 0; i < sockets.length; i++) {
    io.to(sockets[i]).emit('p2hand', "Both players ready");
    let opp = sockets.find(element => element != socket.id);
    console.log(`Socket ID is: ${socket.id}`);
    console.log(`Opp ID is: ${opp}`);
    let card = firstCards.find(element => element.socket == opp);
    console.log(card);
    io.to(opp).emit('reveal', {
      'name': card.name,
      'hp': card.hp,
      'mp': card.mp,
      'skills': card.skills,
      'icon': card.icon
    });
    // io.to(opp).emit('reveal', card);
    ready = 0;
  }
}
// });
So I figured this one out, for anyone who may end up wanting to do what I was trying to do...
I decided that upon connection, both clients join a room called "game1".
The server will then emit "firstCards" to that room.
After that, it was just a case of making sure the player client knows which card was the opponent's... Now, I could have used the .name property for this, but I decided to add an "id" property using the socket.id instead, due to the possibility of the same card being drawn for both players.
I'm thinking that all server-client interactions will now have to carry this property for any other cards in the game, such as items, spells, etc. A rough sketch of the approach is below.
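A minimal sketch of that room-based flow, assuming the same io, ready, and firstCards variables as in the question; the 'game1' room name comes from the answer above, and everything else is illustrative:
io.on('connection', (socket) => {
  // Both clients join the same room on connection
  socket.join('game1');

  socket.on('ready-up', (card) => {
    // Tag the card with its owner's socket id so each client can tell
    // its own card from the opponent's, even if both drew the same card
    card.id = socket.id;
    firstCards.push(card);
    ready++;

    if (ready === 2) {
      // Emit both first cards to the room; each client keeps the card
      // whose id does not match its own socket id
      io.to('game1').emit('reveal', firstCards);
      ready = 0;
      firstCards = [];
    }
  });
});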

Is it possible to fire a delete trigger by adjusting TTL value in CosmosDB?

I have created a pre-delete trigger by using Script Explorer in Azure portal. The below trigger is written in JavaScript:
function markReminderAsPastDue() {
  var collection = getContext().getCollection();
  var request = getContext().getRequest();
  var docToCreate = request.getBody();
  docToCreate["pastDue"] = true;
  docToCreate["id"] = "";
  var accepted = collection.createDocument(collection.getSelfLink(),
    docToCreate,
    function (err, documentCreated) {
      if (err) throw new Error('Error' + err.message);
    });
  if (!accepted) throw new Error("Document creation not accepted");
}
I set the TTL value for each document in the associated collection, so the TTL value is not equal to -1 and documents are deleted automatically once the time expires. If I delete a document manually, the pre-delete trigger is fired. However, when a document is deleted implicitly because of the TTL value, the trigger is not fired. What should I do to fix this problem? Is it possible to make triggers fire on TTL expiry?
As far as I know, there's no callback mechanism of any sort in Azure Cosmos DB for TTL. The TTL enforcement is just a background thread that queries every second for documents that have expired, then deletes them.
Given your needs, I'd suggest just mimicking the TTL operation in your application layer, where you can perform whatever extra business logic you need.
You can add a last-updated timestamp to each document and refresh it on every modification. Then run a polling mechanism in your application layer that periodically scans the database for documents whose update time has expired and deletes them.
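A minimal sketch of that polling loop using the @azure/cosmos SDK; the database and container names, the lastUpdated field, and both intervals are illustrative assumptions:
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
const container = client.database('mydb').container('reminders');
const ttlMs = 60 * 60 * 1000; // hypothetical one-hour "TTL"

setInterval(async () => {
  const cutoff = Date.now() - ttlMs;

  // Find documents whose last-updated timestamp has expired
  const { resources: expired } = await container.items
    .query({
      query: 'SELECT * FROM c WHERE c.lastUpdated < @cutoff',
      parameters: [{ name: '@cutoff', value: cutoff }]
    })
    .fetchAll();

  for (const doc of expired) {
    // Extra business logic (e.g. marking the reminder as past due) goes here.
    // Assumes the document carries its partition key value in `partitionKey`.
    await container.item(doc.id, doc.partitionKey).delete();
  }
}, 60 * 1000); // poll once a minute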
To reduce the pressure on your application layer, you can hand the actual deletion off to a Cosmos DB stored procedure.
Hope it helps you.

Meteor Sub/Pub with Simulation where pub is slow to update

I have a pub that wraps an external API. The client subs the external API pub. There is an 'ACTIVATE' button users can push to activate a billing method. The button calls an update method that updates the collection, and the pub updates the external API. The simulation runs and updates the client collection, and the button changes to 'DEACTIVATE' as expected. This is where the issue comes in. The external API takes some time to return the updated doc. Within 100-200ms of the button turning to 'DEACTIVATE', it flips back to 'ACTIVATE', and then 500ms later back to 'DEACTIVATE', where it should be, assuming there were no issues with the external API.
I'm sure I could come up with some hacky solution to deal with this on the client, but I'm wondering if there is a way to tell the simulation/client collection that the pub is slow and not to update quite as often, thus giving the pub/external API more time to complete its updates.
This turned out to be really simple.
Client-side simulation alone is not enough. The trick is to do server-side simulation as well. To accomplish this, first set up a hook in Meteor.publish that saves the publication (this) object, something like this:
_initServer() {
  if (Meteor.isServer) {
    console.log(`Server initializing external collection "${this.name}"`)
    let self = this
    Meteor.publish(this.name, function (selector, options) {
      check(selector, Match.Optional(Match.OneOf(undefined, null, Object)))
      check(options, Match.Optional(Match.OneOf(undefined, null, Object)))
      self.publication = this
      self._externalApi.fetchAll()
        .then((docs) => docs.forEach((doc) => this.added(self.name, doc._id, doc)))
        .then(() => this.ready())
        // todo handle error
        .catch((error) => console.error(`${self.name}._initServer: self._externalApi.fetchAll`, error))
    })
  }
}
Then in your update function you can simulate on both the client and server like so:
this.update = new ValidatedMethod({
  name: `${self.name}.update`,
  validate: (validators && validators.update) ? validators.update : self.updateSchema.validator({clean: true}),
  run(doc) {
    console.log(`${self.name}.update `, doc)
    if (Meteor.isServer && self._externalApi.update) {
      // server side simulation
      self.changed(doc)
      self._externalApi.update(doc._id, doc)
        .then(self.changed)
        .catch((error) => handleError(`${self.name}.update`, 'externalApi.update', error))
    } else {
      // client side simulation
      self.collection.update(doc._id, {$set: doc})
    }
  },
})
Apologies if this is oversimplified; these examples are from a larger library we use for external APIs. A hypothetical call site is sketched below.
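For illustration, a hedged sketch of how a client might call such a ValidatedMethod; billing is a hypothetical instance of the class above and the doc fields are made up:
billing.update.call({ _id: 'billing-method-1', active: true }, (error) => {
  // The client-side simulation has already updated the local collection;
  // this callback fires once the server (and external API) round trip completes
  if (error) console.error('update failed', error);
});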

Will Rx.Observable.groupBy clean up empty streams?

In a Node application I'm trying to process a stream of events using RxJS. The event stream is a list of changes to many documents. I'm using groupBy to partition the stream into new streams by documentId. But I'm wondering: once a document is closed on the client and no new events are added to the stream for that documentId, will groupBy dispose of that document's stream once it is empty? If not, how would I do that manually? I want to avoid a memory leak caused by new document streams being created but never destroyed.
What I'd suggest doing:
instead of just having a documentChanges observable, have a documentEvents observable.
Clients will send documentOpened events when they open a document, documentChanged events when they change a document and documentClosed events when they close a document.
By sending all three types of events through the same observable, you establish and guarantee an ordering: if a client sends documentOpened, documentChanged, documentClosed events in that order, then your server will see them in that order. Note there won't be any guarantees about the order of events sent by two different clients; this just ensures that the events sent by a particular client arrive in order.
And then, this is how you'd use groupByUntil:
documentEvents
  .groupByUntil(
    function (e) { return e.documentId; }, // key
    null, // element
    function (group) { // duration selector
      var documentId = group.key;
      return group.filter(function (e) { return e.eventType === 'documentClosed'; });
    })
  .flatMap(function (eventsForDocument) {
    var documentId = eventsForDocument.key;
    return eventsForDocument.whatever(...);
  })
  .subscribe(...);
Another option that is a lot simpler: you can just expire the group after an idle period. Depending on what you are doing with the events this may be more than sufficient. This example expires a group if the document has not been edited in 5 minutes. If more edits come in then a new group is spun up.
var idleTime = 5 * 60 * 1000;
events
  .groupByUntil(
    function (e) { return e.documentId; },
    null,
    function (g) { return g.debounce(idleTime); })
  .flatMap...
Since you included the .NET tag, I'll cover Rx.NET as well.
Your question is phrased a bit incorrectly. Streams are empty if and only if they never emit an event, so they can't become empty. A stream that isn't emitting data doesn't typically consume much in the way of resources, though.
In .NET, groups will not terminate until the source terminates. Use GroupByUntil, which allows you to specify a durationSelector stream for each group. Observable.Timer often works well for this.
This means that you may get multiple non-concurrent streams with the same key appearing over time, but if (as is often the case) your group streams are flattened at some point, it won't matter.
In RxJS, we also have groupByUntil.
In Rx-Java, the groupByUntil method, which behaved similarly, was rolled into groupBy - see https://github.com/ReactiveX/RxJava/pull/1727 and https://github.com/benjchristensen/RxJava/commit/b9302956832e3e77579f63fd9db25aa60eb4192a for more details.
http://reactivex.io/documentation/operators/groupby.html says:
If you unsubscribe from one of the GroupedObservables, that GroupedObservable will be terminated. If the source Observable later emits an item whose key matches the GroupedObservable that was terminated in this way, groupBy will create and emit a new GroupedObservable to match the key.
So, in Rx-Java you must unsubscribe from a grouped observable stream to terminate it. takeUntil with a timer stream can work for this.
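As a hedged sketch, the same pattern in the RxJS 4-style syntax used above; the five-minute lifetime is an illustrative assumption:
documentEvents
  .groupBy(function (e) { return e.documentId; })
  .flatMap(function (group) {
    // Unsubscribing five minutes after the group appears terminates
    // that GroupedObservable
    return group.takeUntil(Rx.Observable.timer(5 * 60 * 1000));
  })
  .subscribe(function (e) { /* handle per-document events */ });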
Addendum:
In response to your comment, a stream will not terminate until a downstream operator unsubscribes from it. The duration selector of groupByUntil is what causes that termination. If a document will not be opened again once closed, then you can just send a documentClosed event into the stream and use a regular groupBy with a takeWhile testing for documentClosed (sketched after this addendum).
The reason why it's important that the document is not opened again is that with groupBy (in RxJS and Rx.NET) a new group will not be created if an already-seen key reappears.
If this is a problem, then you will need to use groupByUntil and use a published stream to watch for the documentClosed event - using a published stream will ensure you don't get subscription side effects.
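A minimal sketch of that groupBy + takeWhile variant, again in the RxJS 4-style syntax used above and assuming documents are never reopened:
documentEvents
  .groupBy(function (e) { return e.documentId; })
  .flatMap(function (group) {
    // The group's stream completes as soon as its documentClosed event arrives
    return group.takeWhile(function (e) {
      return e.eventType !== 'documentClosed';
    });
  })
  .subscribe(function (e) { /* handle per-document events */ });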

Subscribe to a count of an existing collection

I need to keep track of a counter of a collection with a huge number of documents that's constantly being updated. (Think a giant list of logs). What I don't want to do is to have the server send me a list of 250k documents. I just want to see a counter rising.
I found a very similar question here, and I've also looked into .observeChanges() in the docs, but once again it seems that .observe() as well as .observeChanges() actually return the whole set before tracking what's been added, changed, or deleted.
In the above example, the "added" function will fire once per document returned, incrementing a counter.
This is unacceptable with a large set - I only want to keep track of a change in the count, as I understand .count() bypasses fetching the entire set of documents. The former example involves counting only documents related to a room, which isn't something I want (or was able to reproduce and get working, for that matter).
I've gotta be missing something simple, I've been stumped for hours.
Would really appreciate any feedback.
You could accomplish this with the meteor-streams smart package by Arunoda. It lets you do pub/sub without needing the database, so one thing you could send over is a reactive number, for instance.
Alternatively, and this is slightly more hacky but useful if you've got a number of things you need to count or something similar, you could have a separate "Statistics" collection (name it whatever) with a document holding that count; a sketch follows.
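A hedged sketch of that counter-document idea; the collection, document, and field names are illustrative:
// shared: a small collection holding one document per counter
Statistics = new Meteor.Collection("statistics");

// server: bump the counter whenever a new log document is inserted
Statistics.upsert({ name: "nbLogs" }, { $inc: { count: 1 } });

// client: subscribe to just this one small document and read
// Statistics.findOne({ name: "nbLogs" }).count reactively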
There is an example in the documentation about this use case. I've modified it to your particular question:
// server: publish the current size of a collection
Meteor.publish("nbLogs", function () {
  var self = this;
  var count = 0;
  var initializing = true;
  var handle = Messages.find({}).observeChanges({
    added: function (id) {
      count++;
      if (!initializing)
        self.changed("counts", "nbLogs", {nbLogs: count});
    },
    removed: function (id) {
      count--;
      self.changed("counts", "nbLogs", {nbLogs: count});
    }
    // don't care about moved or changed
  });

  // Observe only returns after the initial added callbacks have
  // run. Now return an initial value and mark the subscription
  // as ready.
  initializing = false;
  self.added("counts", "nbLogs", {nbLogs: count});
  self.ready();

  // Stop observing the cursor when client unsubs.
  // Stopping a subscription automatically takes
  // care of sending the client any removed messages.
  self.onStop(function () {
    handle.stop();
  });
});

// client: declare collection to hold count object
Counts = new Meteor.Collection("counts");

// client: subscribe to the log count
Meteor.subscribe("nbLogs");

// client: use the new collection
Deps.autorun(function() {
  console.log("nbLogs: " + Counts.findOne().nbLogs);
});
There might be some higher level ways to do this in the future.
