Meteor: keeping track of a server-side var on the client side - javascript

First question here, but I really don't know where else to go; I cannot find anything that helps me on Google.
I'm doing heavy processing server-side and I would like to keep track of its state and show it on the client side.
For that purpose I have a variable that I'm updating as the process goes along. To keep track of it, I'm using this on the client side:
Template.importJson.onCreated(function () {
  Session.set('import_datas', null);
  this.autorun(function () {
    Meteor.call('readImportState', function (err, response) {
      console.log(response);
      if (response !== undefined) {
        Session.set('importingMessage', response);
      }
    });
  });
});
I'm reading it from the template this way (in Template.mytemplate.helpers):
readImportState: function () {
  return Session.get('importingMessage');
},
And here is the server-side code to be called by Meteor.call:
readImportState: function () {
  console.log(IMPORT_STATE);
  return IMPORT_STATE;
}
The client grabs the value at startup, but it is never updated later....
What am i missing here?
If somebody could point me in the right direction that would be awesome.
Thank you :)

TL;DR
As of this writing, the only easy way to share reactive state between the server and the client is to use the publish/subscribe mechanism. Other solutions will be like fighting an uphill battle.
In-memory State
Here's the (incorrect) solution you are looking for:
1. When the job starts, write to some in-memory state on the server. This probably looks like a global or file-scoped variable like jobStates, where jobStates is an object with user ids as its keys and state strings as its values.
2. The client should periodically poll the server for the current state. Note that an autorun doesn't work for Meteor.call (there is no reactive state forcing the autorun to execute again) - you'd need to actually poll every N seconds via setInterval, as in the sketch after this list.
3. When the job completes, modify jobStates.
4. When the client sees a completed state, inform the user and cancel the setInterval.
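For reference, here's a minimal sketch of steps 2 and 4 on the client, reusing the question's readImportState method (the 'done' terminal value is an assumption):
Template.importJson.onCreated(function () {
  var self = this;
  self.poll = Meteor.setInterval(function () {
    Meteor.call('readImportState', function (err, state) {
      if (!err && state !== undefined) {
        Session.set('importingMessage', state);
        if (state === 'done') {            // hypothetical terminal state
          Meteor.clearInterval(self.poll);
        }
      }
    });
  }, 2000); // poll every 2 seconds
});

Template.importJson.onDestroyed(function () {
  Meteor.clearInterval(this.poll);
});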
Because the server could restart for any number of reasons while the job is running (and consequently forget its in-memory state), we'll need to build in some fault tolerance for both the state and the job itself. Write the job state to the database whenever it changes. When the server starts, we'll read this state back into jobStates.
The model above assumes only a single server is running. If there exist multiple server instances, each one will need to observe the collection in order to write to its own jobStates. Alternatively, the method from (2) should just read the database instead of actually keeping jobStates in memory.
This approach is complicated and error prone. Furthermore, it requires writing the state to the database anyway in order to handle restarts and multiple server instances.
Publish/Subscribe
1. As the job state changes, write the current state to the database. This could be a separate collection just for job states, or a collection with all the metadata used to execute the job (helpful for fault tolerance), or the document the job is producing (if any).
2. Publish the necessary document(s) to the client.
3. Subscribe for the document(s) on the client and use a simple find or findOne in a template helper to display the state to the user.
4. Optional: clean up the state document(s) periodically using something like synced-cron.
As you can see, the publish/subscribe mechanism is considerably easier to implement because most of the work is done for you by Meteor.
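For concreteness, here's a minimal sketch of that flow; the JobStates collection, the 'myJobState' publication, and the startImport method are illustrative names, not part of the question:
// common (client and server)
JobStates = new Mongo.Collection('jobStates');

// server
Meteor.publish('myJobState', function () {
  return JobStates.find({ userId: this.userId });
});

Meteor.methods({
  startImport: function () {
    var userId = this.userId;
    JobStates.upsert({ userId: userId }, { $set: { state: 'running' } });
    // ... do the long-running work, updating state as it progresses ...
    JobStates.upsert({ userId: userId }, { $set: { state: 'done' } });
  }
});

// client
Template.importJson.onCreated(function () {
  this.subscribe('myJobState');
});

Template.importJson.helpers({
  readImportState: function () {
    var doc = JobStates.findOne();
    return doc && doc.state;
  }
});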

Related

Detecting changes on database table column status

I have a project in Laravel. In the database I have a status column which shows whether the exam has started or not. My idea was a waiting room that checks every single second whether the status has changed; when it changes to 1, the exam starts. But I am so new to Laravel and everything else that I don't even get the main idea of how I could do this. I am not asking for any code, just for a lead to move on. Hope someone gets me. Thanks if someone answers me.
Check out Laravel cron jobs. You will need a class implementing the ShouldQueue interface and using Dispatchable, InteractsWithQueue, Queueable, and SerializesModels.
With regard to the storage of the jobs, I recommend Redis or SQS.
In order to keep monitoring the queue in production, think about installing Supervisor.
Further information here: Queues
Your plan can work; it is called polling.
Basically, you will want to call
setInterval(function() {
  // your code here
}, 1000);
setInterval is a function that receives two parameters. The first is a callback function that will be executed periodically, and the second is the length of the period in milliseconds (1000 milliseconds is one second).
Now you will need to implement your callback function (in JavaScript, of course) to send an AJAX request to a Laravel action. You will need to look into XMLHttpRequest and its usage, or you can use a library like jQuery or Axios to simplify the task.
On Laravel's side you will need to implement an action and a Route for it. (read this: https://appdividend.com/2022/01/22/laravel-ajax/)
Your Laravel action will need to load data from your database (you can use Eloquent for this purpose, or raw queries) and then respond to the request with the result.
Now, in your JavaScript, the AJAX request's code will need a callback function of its own (yes, a callback inside a callback) which handles the response and applies the changes.
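Putting the pieces together, here's a hedged sketch of the client side; the /exam-status/{id} route and the JSON shape are assumptions you'd adapt to your own controller:
const examId = 42; // hypothetical exam id

const timer = setInterval(() => {
  fetch('/exam-status/' + examId)                  // your Laravel route
    .then((response) => response.json())
    .then((data) => {
      if (data.status === 1) {                     // the exam has started
        clearInterval(timer);                      // stop polling
        window.location.href = '/exam/' + examId;  // e.g. enter the exam
      }
    })
    .catch((err) => console.error('Polling failed', err));
}, 1000); // once per second, as planned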
What about leveraging observers? Also, instead of having a status boolean, you could take an approach similar to what Laravel does for soft deletes and set an exam_started_at column. That way you keep track of both the timestamp and the state in a single column. Also, observers fire immediately rather than being pushed onto a queue. Then generate a websocket event that can report back to your front end, if needed.
Check out the Laravel observer and soft delete documentation.
I know you specified "when the column on db changes..." but if it's not a strict requirement you might want to consider implementing an event-based architecture. Laravel has support for model events, which essentially allows you to run certain assertions and controls when a model is created, updated, deleted, etc.
class Exam extends Model
{
    protected static function booted()
    {
        static::updated(function ($exam) {
            if ($exam->status == 'your-desired-status') {
                // your actions
            }
            // you can even incorporate change checks
            if ($exam->isDirty('status')) {
                // means the status column changed
            }
        });
    }
}
Of course this solution applies only if the database in question is within Laravel's reach. If the data changes outside the Laravel application, these event listeners won't help at all.

How to integrate Redux with very large data-sets and IndexedDB

I have an app that uses a sync API to get its data and requires all of the data to be stored locally.
The data set itself is very large, and I am reluctant to keep it in memory, since it can contain thousands of records. Since I don't think the actual data structure is relevant, let's assume I am building an email client that needs to be accessible offline, and that I want my storage mechanism to be IndexedDB (which is async).
I know that a simple solution would be to not have the data structure as part of my state object and only populate the state with the required data (e.g. store email content in the state when an EMAIL_OPEN action is triggered). This is quite simple, especially with redux-thunk.
However, this would mean I need to compromise on two things:
1. The user data is no longer part of the "application state", although in truth it is. Since the sync behavior is complex, removing it from the app state machine will hurt the elegance of the redux concepts (the way I understand them).
2. I really like the redux architecture and would like all of my logic to go through it, not just the view state.
Are there any best-practices on how to use redux with a not-in-memory state properties? The thing I find hardest to wrap my head around is that redux relies on synchronous APIs, and so I cannot replace my state object with an async state object (unless I remove redux completely and replace it with my own, async implementation and connector).
I couldn't find an answer using Google, but if there are already good resources on the subject, I would love to be pointed to them as well.
UPDATE:
The question was answered, but I wanted to give a better explanation of how I implemented it, in case someone runs into this:
The main idea is to maintain change lists for both client and server using plain redux reducers, and use a connector that listens to these change lists to update IDB, and also to update the server with client changes:
1. When the client makes changes, use reducers to update the client change list.
2. When the server sends updates, use reducers to update the server change list.
3. A connector listens to the store and, on state change, updates IDB. It also maintains an internal list of items that were modified (see the sketch below).
4. When updating the server, use the list of modified items to pull the delta from IDB and send it to the server.
5. When accessing the data, use normal actions to pull from IDB (e.g. using redux-thunk).
The only caveat with this approach is that since the real state is stored in IDB, we do lose some of the value of having a single state object (and it is also harder to rewind/fast-forward the state).
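For illustration, here is a condensed sketch of the connector (point 3); the clientChanges shape, the emails object store, and the already-open db handle are all assumptions:
function createIdbConnector(store, db) {
  let prevById = store.getState().clientChanges.byId;

  store.subscribe(() => {
    const nextById = store.getState().clientChanges.byId;
    if (nextById === prevById) return; // nothing relevant changed

    const tx = db.transaction('emails', 'readwrite');
    const emails = tx.objectStore('emails');
    Object.keys(nextById).forEach((id) => {
      if (nextById[id] !== prevById[id]) {
        emails.put(nextById[id]); // write only the modified items to IDB
      }
    });
    prevById = nextById;
  });
}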
I think your first hunch is correct. If(!) you can't store everything in the store, you have to store less in the store. But I believe I can make that solution sound much better:
IndexedDB just becomes another endpoint, much like any server API you consume. When you fetch data from the server, you forward it to IndexedDB, from where your store is then populated. The store gets just what it needs and caches it as long as it doesn't get too big or stale.
It's really no different from, say, Facebook consuming their API. There's never all the data for a user in the store. References are implemented with IDs, and these are loaded when required.
You can keep all your logic in redux. Just create actions as usual for user actions and data changes, get the data you need, and process it. The interface is still completely defined by the user data because you always have the information in the store that is needed to get to the rest of it when needed. It's just somewhat condensed, i.e. you only save the total number of messages or the IDs of a mailbox until the user navigates to it.
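As a sketch of that pattern with redux-thunk, where idbGetEmail is an assumed promise-returning helper that wraps an IndexedDB get request:
function openEmail(emailId) {
  return (dispatch) => {
    dispatch({ type: 'EMAIL_OPEN_REQUEST', emailId });
    return idbGetEmail(emailId).then(
      (email) => dispatch({ type: 'EMAIL_OPEN_SUCCESS', email }),
      (error) => dispatch({ type: 'EMAIL_OPEN_FAILURE', error })
    );
  };
}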

How do Meteor.subscribe and MyCollection.find* operations interact?

I've been following lots of meteor examples and working through discover meteor, and now I'm left with lots of questions. I understand subscribe and fetch are ways to get "reactivity" to work properly, but I still feel unsure about the relationship between find operations and subscriptions/fetch. I'll try to ask some questions in order to probe for some holistic/conceptual answers.
Question Set 1:
In the following example we are fetching 1 object and we are subscribing to changes on it:
Meteor.subscribe('mycollection', someID);
Mycollection.findOne(someID);
Does order of operations matter here?
When does this subscription "expire"?
Question Set 2:
In some cases we want to foreign key lookup and use fetch like this:
MyCollection2.find({myCollection1Id: doc1Id}).fetch();
Do we also need a MyCollection2.subscribe when using fetch?
How does subscribe work with "foreign keys"?
Is fetch ~= to a subscription?
Question Set 3:
What is an appropriate use of Tracker.autorun?
Why/when should I use it instead of subscribe or fetch?
what happens when you subscribe and find/fetch
1. The client calls subscribe, which informs the server that the client wants to see a particular set of documents.
2. The server accepts or rejects the subscription request and publishes the matching set of documents.
3. Sometime later (after network delay) the documents arrive on the client. They are stored in a database in the browser called minimongo.
4. A subsequent fetch/find on the collection in which the aforementioned documents are stored will query minimongo (not the server).
5. If the subscribed document set changes, the server will publish a new copy to the client.
Recommended reading: understanding meteor publications and subscriptions.
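As a small sketch of steps 1-4 using the question's own names, you can fetch once the subscription is ready and be sure minimongo has the document:
Meteor.subscribe('mycollection', someID, {
  onReady: function () {
    var doc = Mycollection.findOne(someID); // now served from minimongo
    console.log(doc);
  }
});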
question 1
The order matters. You can't see documents that you haven't subscribed for (assuming autopublish is off). However, as I point out in common mistakes, subscriptions don't block, so a subscription followed by an immediate fetch will likely return undefined.
Subscriptions don't stop on their own. Here's the breakdown:
- A global subscription (one made outside of your router or template) will never stop until you call its stop method.
- A route subscription (iron router) will stop when the route changes (with a few caveats).
- A template subscription will stop when the template is destroyed.
question 2
This should be mostly explained by the first section of my answer. You'll need both sets of documents in order to join them on the client. You may publish both sets at once from the server, or individually - this is a complex topic and depends on your use case.
question 3
These two are somewhat orthogonal. An autorun is a way for you to create a reactive computation (a function which runs whenever its reactive variables change) - see the section on reactivity from the docs. A find/fetch or a subscribe could happen inside of an autorun depending on your use case. This will probably become clearer once you learn more about how Meteor works.
Essentially, when you subscribe to a dataset, it fills minimongo (the client-side, in-memory copy of the database) with that data. This is what populates the client's instance of Mongo with data; otherwise, basically all queries will return undefined data or empty lists.
To summarize: Subscribe and Publish are used to give different data to different users. The most common example would be giving different data based on roles. Say, for instance, you have a web application where you can see a "public" and a "friend" profile.
Meteor.publish('user_profile', function (userId) {
if (Roles.userIsInRole(this.userId, 'can-view', userId)) {
return Meteor.users.find(userId, {
fields: {
public: 1,
profile: 1,
friends: 1,
interests: 1
}
});
} else {
return Meteor.users.find(userId, {
fields: { public: 1 }
});
}
});
Now if you logged in as a user who was not friends with this user, and did Meteor.subscribe('user_profile', 'theidofuser'), and did Meteor.users.findOne(), you would only see their public profile. If you added yourself to the can-view role of the user group, you would be able to see public, profile, friends, and interests. It's essentially for security.
Knowing that, here's how the answers to your questions break down:
1. Order of operations matters, in the sense that you will get undefined unless the fetch happens in a reactive block (like a Tracker.autorun or a template helper).
2. You still need to subscribe when using fetch. All fetch really does is return an array instead of a cursor. Publishing with foreign keys is a pretty advanced problem at times; I recommend using reywood:publish-composite, which handles the annoying details for you.
3. Tracker.autorun watches the reactive variables within the block and reruns the function when one of them changes. You don't really use it instead of subscribing; you use it to watch the variables in your scope.
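For example, here's a sketch of an autorun that re-subscribes whenever a reactive variable changes (the selectedUserId Session key is illustrative):
Tracker.autorun(function () {
  var userId = Session.get('selectedUserId');
  if (userId) {
    // reruns whenever selectedUserId changes; a subscription made
    // inside an autorun is stopped automatically on the next rerun
    Meteor.subscribe('user_profile', userId);
  }
});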

Where do long-running processes live in a React + Redux application?

Where should long running processes "live" in a react+redux app?
For a simple example, consider a class which sends and receives messages over a websocket:
class WebsocketStreamer {
  sendMessage(message) {
    this.socket.send(…);
  }

  onMessageReceive(event) {
    this.dispatch({
      type: "STREAMER_RECV",
      message: event.data,
    })
  }
}
How should the lifecycle of this class be managed?
My first instinct is to keep it on the store:
var stores = {
  streamer: function(state={}, action) {
    if (action.type == "##INIT")
      return { streamer: new WebsocketStreamer() }
    if (action.type == "STREAMER_SEND")
      state.streamer.sendMessage(action.message)
    return state;
  }
}
But, aside from being a bit strange, there's also no way for the WebsocketStreamer to get access to the dispatch() function, and it breaks hot reloading.
Another potential solution is to keep it in a global somewhere:
const streamer = new WebsocketStreamer();
But that has obvious testability implications, and breaks hot reloading too.
So, where should a long running process live in a react + redux app?
Note: I realize that this simple example could be built with just stores + action providers. But I would specifically like to know where long-lived processes should live in situations where they are needed.
In my experience, there are two options. First, you can pass the store to any non-Redux code and dispatch actions from there. I've done this with a socket connection and all was fine. Second, if you need the socket (or whatever) to respond to redux actions, it's a good idea to put the connection and its management in a custom middleware. You'll have access to the store API and be informed of every action dispatched, so you can do anything you need, as sketched below.
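Here's a hedged sketch of the middleware option, reusing the question's action types (the socket URL is a placeholder):
const websocketMiddleware = (socketUrl) => (store) => {
  const socket = new WebSocket(socketUrl);

  // incoming messages become actions
  socket.onmessage = (event) => {
    store.dispatch({ type: 'STREAMER_RECV', message: event.data });
  };

  return (next) => (action) => {
    if (action.type === 'STREAMER_SEND') {
      socket.send(action.message); // the side effect lives here, not in a reducer
    }
    return next(action);
  };
};

// usage:
// createStore(rootReducer, applyMiddleware(websocketMiddleware('wss://example.test/stream')))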
I'm doing something similar with websockets. In my case, I simply wrap the websocket client in a React component that renders null and inject it as close to the root as possible.
<App>
<WebSocketClientThingy handlers={configuredHandlers}/>
....
</App>
Here's a quick example. It's pretty naive, but gets stuff done.
https://github.com/trbngr/react-example-pusher
Quick note:
The websocket doesn't live in the store. It's simply there and publishes actions.
EDIT:
I decided to explore setting the client (long-lived object) into the global state. I gotta say that I'm a fan of this approach.
https://github.com/trbngr/react-example-pusher/tree/client_as_state
I open-sourced a demo issue tracker with long-running ops using React/Redux/Node; all the involved code is open source and MIT-licensed.
Sometimes I need to pull or push the repo, and depending on the connection this might take a long time; this is where long-running operations come in.
Overall, the key points of the approach are:
- A redux store with the active operations and their status.
- A redux store with the events of the operations.
- Initialize the operations store with all the ongoing operations during page initialization.
- Use an HTTP events server to update an operation's status: data, error, complete, progress, etc.
- Connect components like buttons to the ongoing operation status.
- For each component, keep a list of involved operations and parameters; if the operation and parameters match, change the button state to loading/done/etc.
- Change the status in the operation store with the event updates or request results (I am using GraphQL, and all the mutations return an "Operation" type).
The involved repositories are:
https://github.com/vicjicaman/tracker-common ---> GO TO ---> src/pkg/app-operation/src
https://github.com/vicjicaman/tracker-app-events
https://github.com/vicjicaman/tracker-operation
https://github.com/vicjicaman/tracker-events
This is how it looks running:
https://user-images.githubusercontent.com/36018976/60389724-4e0aa280-9ac7-11e9-9129-b8e31b455c50.gif
Keeping the state this way also helps you because:
- The ongoing operations state is kept even if the window is refreshed.
- If you open two or more windows and perform operations in one window, the state of the running operations and the UI stay in sync across all windows.
I hope the approach or code helps.

dojo.store.Observable, JSON REST and queryEngine

Does anybody know how to use the JsonRest store in dojo with an Observable wrapper, like the one in dojo.store.Observable?
What do I need, server side, to implement the store and make it work as an Observable one? What about the client side?
The documentation says http://dojotoolkit.org/reference-guide/1.7/dojo/store/Observable.html
If you are using a server side store like the JsonRest store, you will need to provide a queryEngine in order for the update objects to be properly included or excluded from queries. If a queryEngine is not available, observe listener will be called with an undefined index.
But, I have no idea what they mean. I have never created a store myself, and am not 100% familiar with queryEngine (to be honest, I find it a little confusing). Why is queryEngine needed? What does the doc mean by "undefined index"? And how do you write a queryEngine for a JsonRest store? Shouldn't I use some kind of web socket for an observable REST store, since other users might change the data as well?
Confused!
I realize this question is a bit old, but here's some info for future reference. Since this is a multi-part question, I'll break it down into separate pieces:
1) Server-side Implementation of JsonRest
There's a pretty decent write-up on implementing the server side of the JsonRest store. It shows exactly what headers JsonRest will generate and what content will be included in the requests. It helps form a mental model of how the JsonRest API is converted into HTTP.
2) Query Engine
Earlier in the same page, how query() works client side is explained. Basically, the query() function needs to be able to receive an object literal (ex: {title:'Learning Dojo',categoryid:5}) and return the objects in the store that match those conditions. "In the store" meaning already loaded into memory on the client, not on the server.
Depending on what you're trying to do, there's probably no need to write your own queryEngine anyway -- just use the built-in SimpleQueryEngine if you're building your own custom store. The engine just needs to be handed an object literal and it adds the whole dojo query() api for you.
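For instance, here's a sketch of handing SimpleQueryEngine to a JsonRest store and observing a query; the target URL and query fields are assumptions:
require([
  "dojo/store/JsonRest",
  "dojo/store/Observable",
  "dojo/store/util/SimpleQueryEngine"
], function (JsonRest, Observable, SimpleQueryEngine) {
  var store = new Observable(new JsonRest({
    target: "/api/books/",          // assumed REST endpoint
    queryEngine: SimpleQueryEngine  // lets Observable re-evaluate queries client-side
  }));

  var results = store.query({ categoryid: 5 });
  var handle = results.observe(function (object, removedFrom, insertedInto) {
    // removedFrom/insertedInto are indexes into the result set;
    // -1 means the object left, or was not previously in, the set
  }, true); // true = also notify on in-place updates to objects

  // later: handle.remove() to stop observing
});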
3) Observables
My understanding is that the Observables monitor client side changes in the collection of objects (ex: adding or removing a result) or even within a specific object (ex: post 5 has changed title). It does NOT monitor changes that happen server-side. It simply provides a mechanism to notify other aspects of the client-side app that data changed so that all aspects of the page stay synchronized.
There's a whole write up on using Observables under the headings 'Collection Data Binding' and 'Object Data Binding: dojo/Stateful'.
4) Concurrency
There are two things you'd want to do in order to keep your client-side data synchronized with the server-side data: a) polling for changes from other users on the server, and b) using transactions to send data to the server.
a) To poll for changes to the data, you'd want to have your object store track the active query in a variable. Then, use setTimeout() or setInterval() to run the query in the background again every so often. Make sure that widgets or other aspects of your application use Observables to monitor changes in the query result set(s) they depend on. That way, changes on the server by other users would automatically be reflected throughout your application.
b) Use transactions to combine actions that must be combined. Then make sure the server sends back HTTP 200 status codes (meaning 'it worked!'). If the transaction returns an HTTP status in the 400s, it didn't work for some reason, and you need to requery the data because something changed on the backend. For example, the record you want to update was deleted, so you can't update it. There's a write-up on transactions as well under the heading 'Transactional'.
