Subscription in Tracker.autorun causes publish callback to fire multiple times

I'm working on a ReactJS and Meteor project and I've found some strange behavior that I'll describe here:
There is a Tracker.autorun block with a Meteor.subscribe call inside. So far, so good. In the server side, there is a matching Meteor.publish which declares a callback.
As far as I understand, the Meteor.publish callback should fire once for each subscription received, but somehow this callback is firing 3~4 times for a single subscription.
In my last test the Tracker.autorun block executed 4 times, the subscribe executed only once, and the callback fired 4 times.
The Meteor.subscribe only runs once, even though the tracker reruns several times. How could it cause the callback to fire more than once?
Does it make sense?
Do you know what could explain such behavior?
If you need any other information, just let me know.
Thanks in advance
Meteor.publish('current-user', function currentUser(credentials) {
  return Users.find();
});

Tracker.autorun((c) => {
  if (!currentUserHandler) {
    currentUserHandler = Meteor.subscribe('current-user', this.credentials);
  }
});

You should expect the autorun to fire twice under normal conditions: once without data, and again when some data arrives.
That is to allow you to show a "loading" state before the data arrives.
You are subscribing to the users collection, which is a special collection: Meteor uses it for authentication and also to record session activity. You are doing a Users.find(), which is an unfiltered query on the whole users collection, so any modification to any user will cause it to fire again. You also won't be able to see all of the users' records (for security reasons).
It's probable that you are storing additional data on the users record, hence the need to subscribe to it. I would recommend that you consider storing this data in another collection, such as 'members', 'visitors', 'profiles' or whatever name suits you. Things are likely to work better that way.
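If the data really has to live on the users collection, a narrower publication avoids the unfiltered cursor. A minimal sketch, assuming the client only needs the profile field (the field list here is an illustrative assumption):

Meteor.publish('current-user', function () {
  // Without a logged-in user there is nothing to publish.
  if (!this.userId) {
    return this.ready();
  }
  // Publish only this user's document, restricted to the fields the
  // client needs, so edits to other users no longer retrigger anything.
  return Meteor.users.find(this.userId, {
    fields: { profile: 1 }
  });
});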

Related

Detecting changes on database table column status

I have a project in Laravel. In the database I have a status column, which shows whether the exam has started or not. My idea was that the waiting room would check every single second whether the status has changed to 1 (i.e. the exam has started), but I am so new to Laravel and everything else that I don't even get the main idea of how I could do this. I'm not asking for any code, just for a lead to move on. Hope someone gets me. Thanks if someone answers me.
Check out Laravel cron jobs. You will need a class implementing the ShouldQueue interface and using Dispatchable, InteractsWithQueue, Queueable, SerializesModels.
With regard to the storage of the jobs, I recommend Redis or SQS.
In order to keep monitoring the queue in production, think about installing Supervisor.
Further information here: Queues
Your plan can work, it is called polling.
Basically, you will want to call
setInterval(function() {
  // your code here
}, 1000);
setInterval is a function that receives two parameters. The first is a callback function that will be executed periodically, and the second is the length of the period in milliseconds (1000 milliseconds is one second).
Now you will need to implement your callback function (in Javascript, of course) to send an AJAX request to a Laravel action. Look into XMLHttpRequest and its usage, or use a library to simplify the task, like jQuery or Axios.
On Laravel's side you will need to implement an action and a route for it (read this: https://appdividend.com/2022/01/22/laravel-ajax/).
That action will need to load the data from your database (you can use Eloquent for this purpose, or raw queries) and then respond to the request with the result.
Now, in your Javascript, the AJAX request's code will need a callback function (yes, a callback inside a callback) which handles the response and applies the changes.
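Putting those pieces together, a minimal client-side sketch (the /exam-status route and the response shape are assumptions for illustration, not something Laravel provides out of the box):

var poller = setInterval(function () {
  // Ask the Laravel action for the current status once per second.
  fetch('/exam-status')
    .then(function (response) { return response.json(); })
    .then(function (data) {
      if (data.status === 1) {
        clearInterval(poller);          // stop polling once the exam starts
        window.location.href = '/exam'; // assumed exam page
      }
    });
}, 1000);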
What about leveraging Observers? Also, instead of having a status boolean, you could take an approach similar to the one Laravel took for soft deletes and set an exam_started_at column. This way you keep track of the timestamp and the state all in one column. Also, observers fire immediately rather than being pushed into a queue. Then generate a websocket event that can report back to your front end, if needed.
Check out the Laravel observer and soft delete documentation.
I know you specified "when the column on the db changes...", but if it's not a strict requirement you might want to consider an event-based architecture. Laravel has support for model events, which essentially allows you to run certain assertions and checks when a model is created, updated, deleted, etc.
class Exam extends Model
{
    protected static function booted()
    {
        static::updated(function ($exam) {
            if ($exam->status == 'your-desired-status') {
                // your actions
            }
            // you can even incorporate change checks
            if ($exam->isDirty('status')) {
                // means the status column changed
            }
        });
    }
}
Of course this solution applies only if the database in question is within Laravel's reach. If the data changes outside the Laravel application, these event listeners won't help at all.

Wait for response from emitted message?

I'm having trouble wrapping my head around the following concept.
I'm sending OSC messages to query the status of instruments in Ableton, so I have an emitter/receiver combo going on. The thing is, I'd like to avoid having to keep some sort of global state and wrap everything around it.
I communicate with Ableton in the following fashion:
sender.emit("/live/device", queryData);
receiver.on("/live/device", function(responseData){
// process response here...
})
So as you can tell, I'm never sure when I've got data back, and I cannot sequence new queries based on responses.
What I'd like to do is simply:

1. query the number of instruments on ONE certain channel
2. get that number back
3. query the parameters of each instrument on that channel, based on the first query
4. receive the parameters back

The problem is that I have no idea how to wrap the eventListeners to respond to these queries, or rather how to sequence them in a way that is non-blocking while still avoiding some sort of global state.
Querying data and storing Promises to be resolved by the eventListener seems like a solution, but then I'm stuck on how to pass them back into the sequence.
After some research, it seems that this kind of behavior breaks the whole concept of event listeners; but then I suppose the whole point is to have some global state keeping track of what is going on, right?
Event listeners tell you about some asynchronous action coming from a user action or any other interrupt. Depending on the API you are facing, it might have re-used event listeners for replies instead of providing a promise or callback return in the send API. If the server has multiple clients interacting with it, it might also want to tell all clients at once when their state changes.
If you are sure there is no way to directly provide a callback in the send method for a reply to your request or a request does not yield a promise that resolves with the reply at some point, there are usually workarounds.
Option 1: Send context, receive it back
There are APIs that allow sending a "context" object or string along with the request. The API then hands this context back to the event listeners whenever it answers that specific request, along with the payload. This way, the context part of the payload can be checked to see whether it's the answer to your request. You could then write your own little wrapper functions for a more direct send/reply pattern.
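A minimal sketch of such a wrapper, assuming the endpoint echoes back a contextId attached to the payload (whether /live/device actually does this is an assumption to verify):

var pending = new Map();
var nextId = 0;

receiver.on("/live/device", function (response) {
  // Resolve the promise stored for this context id, if we issued it.
  var resolve = pending.get(response.contextId);
  if (resolve) {
    pending.delete(response.contextId);
    resolve(response);
  }
});

function query(address, data) {
  return new Promise(function (resolve) {
    var contextId = nextId++;
    pending.set(contextId, resolve);
    sender.emit(address, Object.assign({ contextId: contextId }, data));
  });
}

With that in place, the four-step sequence above becomes plain promise chaining: query the instrument count, then map each instrument to a parameter query.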
Option 2: Figure out the result data, if it fits your request
If the resulting data has something specific to match on, like keys on a JSON object, it may be possible to figure out which request it belongs to.
Option 3: Use state on your side to keep track of everything
In most cases where I have seen such APIs, the server didn't care much about requests and only sent out its current state whenever it was changed by some kind of request. The client needs to replicate the state of the server by listening to all events if it wants to show the current server state.
In most situations where I faced this issue, I considered Option 1 or 2 but ended up with Option 3 anyway: other clients or hardware switches might interfere with my client UI and change the server state without me listening for that change. I would lose information that invalidates my UI, so I would need to listen to and replicate the state of the server/machine/hardware anyway.
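For completeness, a minimal sketch of Option 3 with the same sender/receiver pair (the payload shape and render function are illustrative assumptions):

var deviceState = [];

receiver.on("/live/device", function (data) {
  // Mirror whatever the server reports, no matter who triggered it.
  deviceState = data;
  render(deviceState); // hypothetical UI refresh from the mirrored state
});

// Queries just ask the server to broadcast its current state.
sender.emit("/live/device", queryData);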

Access same setTimeout() instance from multiple Node.js instances

Our API needs to send data to Zapier if some specific data was modified in our DB.
For example, we have a company table and if the name or the address field was modified, we trigger the Zapier hook.
Sometimes our API receives multiple change requests within a few minutes, but we don't want to trigger the Zapier hook multiple times (since it is quite expensive), so on each modify request we call setTimeout() (overwriting the existing timeout) with a 5000 ms delay.
It works fine, and there are no duplicate Zapier hook calls even if we get a lot of modify requests from the client within this 5000 ms window.
Now, since our traffic is growing, we'd like to set up multiple Node.js instances behind a load balancer.
But in this case the different Node.js instances cannot share and overwrite the same setTimeout instance, which would cause a lot of useless Zapier calls.
Could you guys help us, how to solve this problem - while remaining scalable?
If you want to share state between separate instances you should consider, from an infrastructure point of view, some locking mechanism such as Redis.
Whenever you want to run the Zapier call, if no lock is active, you set one on Redis; while it is locked, no other calls are triggered, and when the setTimeout callback runs, you release the lock.
Beware that Redis might become a SPOF (single point of failure). I don't know where you are hosting your services, but that might be an important point to consider.
Edit:
The lock on Redis can also hold the last piece of info you want to send. On the first request you store the data on Redis, wait 5 seconds, and update. If any modifications are made in that time frame, they are stored on Redis too, so you only ever update at 5-second intervals. You'll need to add some extra logic here, though. Example:
function zapierUpdate(data) {
  if (isLocked()) {
    // Locked! We will update the data that needs to be saved on the
    // next setTimeout callback.
    updateLockData(data);
  } else {
    // First, lock and save the data.
    lock(data);
    // ... and update in 5 seconds.
    setTimeout(function () {
      // getLockData fetches the data on Redis and releases the lock.
      var newData = getLockData();
      // Update with the latest data that might have been modified.
      callZapierNow(newData);
    }, 5000);
  }
}
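A minimal sketch of those helper functions, assuming the ioredis client and illustrative key names (any shared Redis client works; note the get/del pair below is not fully atomic, so a Lua script or MULTI would harden it):

const Redis = require('ioredis');
const redis = new Redis();

async function tryLock(data) {
  // NX: set only if the lock key does not exist yet.
  // EX 10: expire as a safety net in case the locking process dies.
  const acquired = await redis.set('zapier:lock', '1', 'EX', 10, 'NX');
  if (acquired === 'OK') {
    await redis.set('zapier:data', JSON.stringify(data));
    return true;
  }
  return false;
}

async function updateLockData(data) {
  await redis.set('zapier:data', JSON.stringify(data));
}

async function getLockData() {
  // Fetch the latest data and release the lock.
  const raw = await redis.get('zapier:data');
  await redis.del('zapier:data', 'zapier:lock');
  return JSON.parse(raw);
}

Since every instance talks to the same Redis, only the instance that wins the lock schedules the setTimeout; modify requests landing on any other instance merely refresh zapier:data.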

How do Meteor.subscribe and MyCollection.find* operations interact?

I've been following lots of Meteor examples and working through Discover Meteor, and now I'm left with lots of questions. I understand subscribe and fetch are ways to get "reactivity" to work properly, but I still feel unsure about the relationship between find operations and subscriptions/fetch. I'll try to ask some questions in order to probe for some holistic/conceptual answers.
Question Set 1:
In the following example we are fetching 1 object and we are subscribing to changes on it:
Meteor.subscribe('mycollection', someID);
Mycollection.findOne(someID);
Does order of operations matter here?
When does this subscription "expire"?
Question Set 2:
In some cases we want to do a foreign-key lookup and use fetch like this:
MyCollection2.find({myCollection1Id: doc1Id}).fetch();
Do we also need a MyCollection2.subscribe when using fetch?
How does subscribe work with "foreign keys"?
Is fetch ~= to a subscription?
Question Set 3:
What is an appropriate use of Tracker.autorun?
Why/when should I use it instead of subscribe or fetch?
what happens when you subscribe and find/fetch
The client calls subscribe which informs the server that the client wants to see a particular set of documents.
The server accepts or rejects the subscription request and publishes the matching set of documents.
Sometime later (after network delay) the documents arrive on the client. They are stored in a database in the browser called minimongo.
A subsequent fetch/find on the collection in which the aforementioned documents are stored will query minimongo (not the server).
If the subscribed document set changes, the server will publish a new copy to the client.
Recommended reading: understanding meteor publications and subscriptions.
question 1
The order matters. You can't see documents that you haven't subscribed to (assuming autopublish is off). However, as I point out in common mistakes, subscriptions don't block, so a subscription followed by an immediate fetch will probably return undefined.
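A minimal sketch of the safe pattern, assuming a 'mycollection' publication exists, is to fetch only once the subscription reports ready:

Meteor.subscribe('mycollection', someID, {
  onReady: function () {
    // The documents have arrived in minimongo, so this finds them.
    var doc = MyCollection.findOne(someID);
    console.log(doc);
  }
});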
Subscriptions don't stop on their own. Here's the breakdown:
A global subscription (one made outside of your router or template) will never stop until you call its stop method.
A route subscription (iron router) will stop when the route changes (with a few caveats).
A template subscription will stop when the template is destroyed.
question 2
This should be mostly explained by the first section of my answer. You'll need both sets of documents in order to join them on the client. You may publish both sets at once from the server, or individually - this is a complex topic and depends on your use case.
question 3
These two are somewhat orthogonal. An autorun is a way for you to create a reactive computation (a function which runs whenever its reactive variables change) - see the section on reactivity from the docs. A find/fetch or a subscribe could happen inside of an autorun depending on your use case. This probably will become more clear once you learn more about how meteor works.
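For example, a minimal sketch combining all three, assuming someID lives in a reactive Session variable:

Tracker.autorun(function () {
  // Reruns whenever someID changes or the subscription becomes ready.
  var someID = Session.get('someID');
  var handle = Meteor.subscribe('mycollection', someID);
  if (handle.ready()) {
    var doc = MyCollection.findOne(someID);
    // ... use doc; this block reruns automatically when it changes
  }
});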
Essentially, when you subscribe to a dataset, it fills minimongo (the in-memory, client-side copy of the database) with that data. This is what populates the client's instance of Mongo; otherwise, basically all queries would return undefined data or empty lists.
To summarize: Subscribe and Publish are used to give different data to different users. The most common example would be giving different data based on roles. Say, for instance, you have a web application where you can see a "public" and a "friend" profile.
Meteor.publish('user_profile', function (userId) {
  if (Roles.userIsInRole(this.userId, 'can-view', userId)) {
    return Meteor.users.find(userId, {
      fields: {
        public: 1,
        profile: 1,
        friends: 1,
        interests: 1
      }
    });
  } else {
    return Meteor.users.find(userId, {
      fields: { public: 1 }
    });
  }
});
Now if you logged in as a user who was not friends with this user, and did Meteor.subscribe('user_profile', 'theidofuser'), and did Meteor.users.findOne(), you would only see their public profile. If you added yourself to the can-view role of the user group, you would be able to see public, profile, friends, and interests. It's essentially for security.
Knowing that, here's how the answers to your questions break down:
Order of operations matters, in the sense that you will get undefined unless the fetch happens in a reactive block (like Tracker.autorun or Template.helpers).
You still need the subscribe when using fetch; all fetch really does is return an array instead of a cursor. Publishing with foreign keys can be a pretty advanced problem at times; I recommend using reywood:publish-composite, which handles the annoying details for you (see the sketch after this list).
Tracker.autorun watches the reactive variables within its block and reruns the function when one of them changes. You don't really ever use it instead of subscribing; you just use it to watch the variables in your scope.
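As a minimal sketch of the reywood:publish-composite approach, reusing the collection names from Question Set 2 (the publication name is an illustrative assumption):

Meteor.publishComposite('docWithChildren', function (doc1Id) {
  return {
    find: function () {
      // Top-level cursor: the parent document.
      return MyCollection1.find({ _id: doc1Id });
    },
    children: [{
      find: function (doc) {
        // For each parent published above, also publish its children.
        return MyCollection2.find({ myCollection1Id: doc._id });
      }
    }]
  };
});

A single subscription to 'docWithChildren' then keeps both the parent and its related documents in minimongo, so the client-side join just works.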

IndexedDb transaction auto-commit behavior in edge cases

A tx is committed when:

- a request success callback returns; that means multiple requests can be executed within transaction boundaries only when the next request is submitted from the success callback of the previous one
- your task returns to the event loop; meaning that if no requests are submitted to the tx, it is not committed until control returns to the event loop

These facts pose 2 problematic cases:

1. Placing a new IDB request by enqueuing a new task on the event loop queue from within the success callback of the previous request, instead of submitting the new request synchronously. In that case the first success callback returns immediately, but another IDB request has been scheduled. Are all the asynchronous requests executed within the single initial transaction? This is quite essential if you want to implement result pulling with back-pressure, where the consumer gives you feedback in the form of a Future that it is ready to consume another response.
2. Creating a ReadWrite tx, not placing any requests against it, and creating another one before returning to the event loop. Does creating the new one implicitly commit the previous tx? If not, serious write-lock starvation might occur, because (per the spec):

If multiple "readwrite" transactions are attempting to access the same object store (i.e. if they have overlapping scope), the transaction that was created first must be the transaction which gets access to the object store first. Due to the requirements in the previous paragraph, this also means that it is the only transaction which has access to the object store until the transaction is finished.
An example of enqueuing a new task on the event loop queue from within the success callback, via recursive request submission with back-pressure:

function recursiveFn(key) {
  var req = store.get(key);
  req.onsuccess = function () {
    // observer.onNext returns a Future[Ack]; Ack is either Continue or Cancel
    observer.onNext(req.result).onsuccess(function () {
      recursiveFn(nextKey);
    });
  };
}

Now, can onsuccess or onNext defer via setTimeout(0) and still have the whole thing be part of one transaction?
Bonus question:
I think that ReadOnly transactions are exposed to the consumer/user only because it would otherwise be hard to detect the end of a batch read when you recursively submit new requests from the success callback of the previous one, right? Otherwise I don't see any reason for them to be exposed to the user at all.
I'm not sure I understand your question completely but I'll offer an answer on whether you can safely use IDB transaction events to move a state machine.
Yes and no. Yes in theory, no in practice.
I think you understand the transaction lifetime. But to rehash:
The lifetime of a transaction lasts as long as it's referenced: it's "active" so long as it's being referenced, after which it is said to be "finished" and the transaction is committed.
In theory, oncomplete should fire whenever a transaction successfully commits. There's a useful tip in the spec on this that suggests how you could loop:
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the IDBObjectStore.add request’s success event, because the transaction may still fail after the success event fires.
To safely use this mechanism be sure to watch for non-success events including onblocked and onabort as well.
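In code, the safe listening pattern looks roughly like this (db is an already-open IDBDatabase, and the 'items' store with keyPath 'id' is an assumption):

var tx = db.transaction(['items'], 'readwrite');
var store = tx.objectStore('items');
store.add({ id: 1, value: 'example' });

tx.oncomplete = function () {
  // Only here is the write guaranteed to have committed.
};
tx.onabort = function () {
  // Fires on rollback, e.g. a quota error after a "successful" add.
};
tx.onerror = function () {
  // A request inside the transaction failed.
};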
Practically speaking, I've found transactions to be flaky when long-lived or run consecutively in batches (as you've noted in another IDB comment). I generally don't engineer my apps to require tricky behavior because, no matter how the spec says things should behave, I see wonky transactions in both Firefox and Chromium (but mostly Blink, interestingly), especially when multiple tabs are open.
I spent many weeks rewriting dash to reuse transactions for supposed performance gains. In the end it could not pass even my basic write tests, and I was forced to abandon simultaneous/queued/consecutive transactions and rewrite once again. This time I picked a one-transaction-at-a-time model, which is slower but, for me, more reliable (and I suggest avoiding my lib and using something like ydn for bulk inserts).
I'm not sure about your application requirements, but in my humble opinion, tying your I/O into your event loop seems like a disastrous idea. If I needed an event loop in the sense I understand the term, I would definitely use requestAnimationFrame() and throttle that callback if I needed fewer ticks than one per ~33 milliseconds.
