ExtJS 4.1.0
Update 6/6/13:
I have posted this same question on the Sencha forums where there hasn't been much action. The post is more or less the same, but I figured I would add it here just for reference. I am still eager to hear other community members' input on what must be a very common scenario in an ExtJS Application!
http://www.sencha.com/forum/showthread.php?265358-Complex-Model-Save-Decoupling-Data-and-Updating-Related-Stores
Update 7/16/13 (Conclusion?)
The Sencha post garnered very little discussion. I have decided to put the majority of the load for complex save operations on my application server and to lazily refresh client stores where need be. This way I can use my own database wrapper to encompass all of the transactions associated with one complex domain-object save and guarantee atomicity. If saving a new Order consists of saving the order metadata, ten new instances of OrderContents, and potentially other information (addresses residing in other tables, a new customer defined at the time of order creation, etc.), I would much rather send the whole payload to the application server than establish an unwieldy web of callbacks in client-side application code. Data associated on a one-to-one basis (such as an Order hasOne Address) is updated in the success callback of the Order.save() operation. More complex data, such as the Order's contents, is handled lazily by simply calling contentStore.sync(). I feel this is the way to guarantee atomicity without an overwhelming number of client callbacks.
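A minimal sketch of that final shape (the record and store names here are assumptions, not the actual application code):

order.save({
    success: function(rec, op){
        // One-to-one data comes back with the saved Order and is applied directly
        addressRecord.set(rec.get('order_address'));
        addressRecord.commit(); // keep it out of getModifiedRecords()
        // One-to-many data is left to a lazy store sync
        contentStore.sync();
    }
});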
Original Post Content
Given the overall disappointing functionality of saving association-heavy models, I have all but ditched model associations in my application and rely on retrieving associated data myself. This is all well and good, but unfortunately it does not resolve the issue of actually saving the data and updating ExtJS stores to reflect the changes on the server.
Take for example saving an Order object, which is composed of metadata as well as OrderContents i.e., the parts on the order. The metadata ends up in an Order_Data table in the database, whereas the contents all end up in an Order_Contents table where each row is linked to the parent order via an order_id column.
On the client, retrieving the contents for an order is quite easy to do without any need for associations: var contents = this.getContentsStore().query('order_id', 10).getRange(). However, a major flaw is that this hinges on the content records already being available in the OrderContents ExtJS store, which would equally apply if I were using associations that are not returned by the data server with the "main" object.
When saving an order, I send a single request which holds the order's metadata (e.g., date, order number, supplier information, etc.) as well as an array of contents. These pieces of data are picked apart and saved to their appropriate tables. This makes enough sense to me and works well.
All is well until it comes to returning saved/updated records from the application server. Since the request is fired off by calling OrderObject.save(), there is nothing telling the OrderContents store that new records are available. This would be handled automatically if I were instead to add records to the store and call .sync(), but I feel that complicates the saving process, and I would much rather handle this decoupling on the application server; not to mention, saving an entire request is quite nice as well.
Is there a better way to solve this? My current solution is as follows...
var orderContentsStore = this.getOrderContentsStore();

MyOrderObject.save({
    success: function(rec, op){
        // New Content Records need to be added to the contents store!
        orderContentsStore.add(rec.get('contents')); // Array of OrderContent Records
        orderContentsStore.commitChanges(); // This is very important
    }
});
By calling commitChanges(), the records added to the store are considered clean (non-phantom, non-dirty) and thus are no longer returned by the store's getModifiedRecords() method; rightly so, as the records should not be passed to the application server in the event of a store.sync().
This approach just seems kinda sloppy/hacky to me but I haven't figured out a better solution...
Any input / thoughts are greatly appreciated!
Update 8/26/13 I found that associated data is indeed handled by Ext in the create/update callback on the model's proxy, but finding that data wasn't easy... See my post here: ExtJS 4.1 - Returning Associated Data in Model.Save() Response
Well, it's been a couple months of having this question open and I feel like there is no magically awesome solution to this problem.
My solution is as follows...
When saving a complex model (e.g., a model that has, or would have, a few hasMany associations), I save the 'parent' model, which includes all associated data (as a property/field on the model!), and then add the (saved) associated data to the appropriate stores in the afterSave/afterUpdate callback.
Take for example my PurchaseOrder model, which hasMany Items and hasOne Address. Note that the associated data is included in the model's properties, as it will not be passed to the server if it exists solely in the model's association store.
console.log(PurchaseOrder.getData());
---
id: 0
order_num: "PO12345"
order_total: 100.95
customer_id: 1
order_address: Object
    id: 0
    ship_address_1: "123 Awesome Street"
    ship_address_2: "Suite B"
    ship_city: "Gnarlyville"
    ship_state: "Vermont"
    ship_zip: "05401"
    ...etc...
contents: Array[2]
    0: Object
        id: 0
        sku: "BR10831"
        name: "Super Cool Shiny Thing"
        quantity: 5
        sold_price: 84.23
    1: Object
        id: 0
        sku: "BR10311"
        name: "Moderately Fun Paddle Ball"
        quantity: 1
        sold_price: 1.39
I have Models established for PurchaseOrder.Content and PurchaseOrder.Address, yet the data in the PurchaseOrder is not an instance of these models, but rather just the plain data. Again, this is to ensure that it is passed correctly to the application server.
Once I have an object as described above, I send it off to my application server via .save() as follows:
PurchaseOrder.save({
    scope: me,
    success: me.afterOrderSave,
    failure: function(rec, op){
        console.error('Error saving Purchase Order', op);
    }
});
afterOrderSave: function(record, operation){
    var me = this;
    switch(operation.action){
        case 'create':
            /**
             * Add the records to the appropriate stores.
             * Since these records (from the server) have an id,
             * they will not be marked as dirty nor as phantoms.
             */
            var savedRecord = operation.getResultSet().records[0]; // has associated!
            me.getOrderStore().add(savedRecord);
            me.getOrderContentStore().add(savedRecord.getContents()); // association!
            me.getOrderAddressStore().add(savedRecord.getAddress()); // association!
            break;
        case 'update':
            // Locate and update records with response from server
            break;
    }
}
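For the 'update' case, a hedged sketch of the locate-and-update step could look like the following (getById, set, commit and getData are standard ExtJS 4 Store/Model methods; the rest is an assumption about this app):

case 'update':
    var updatedRecord = operation.getResultSet().records[0];
    var existing = me.getOrderStore().getById(updatedRecord.getId());
    if (existing) {
        existing.set(updatedRecord.getData());
        existing.commit(); // mark clean so a later sync() will not resend it
    }
    // ...repeat for the content and address stores via the associations
    break;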
My application server receives the PurchaseOrder and handles saving the data accordingly. I will not go into great detail, as this process is largely dependent on your own implementation. My application framework is loosely based on Zend 1.11 (primarily leveraging Zend_Db).
I feel this is the best approach for the following reasons:
No messy string of various model.save() callbacks on the client
Only one request, which is very easy to manage
Atomicity is easily handled on the application server
Fewer round trips = fewer potential points of failure to worry about
If you're really feeling lazy, the success method of the callback can simply reload stores.
I will let this answer sit for a bit to encourage discussion.
Thanks for reading!
Related
I am currently working on a project where we use a microservice architecture. I am also somewhat new to this architecture and have had a few concerns. I understand the concept of microservices in general, and also how we can have one database per service. This brings me to a point where I get confused about how to pull data from different databases for a particular user.
Scenario
Assume I have a Users and a Posts service, with schemas like this:
User
const schema = {
    name: String,
    id: String,
    ...
}
Post
const schema = {
    text: String,
    user: Id // reference to the user who made this post
}
Now, on the UI, I want to load a set of posts and the associated users who made them. How do I get a Post alongside the User who made it? I am using MongoDB; how do I populate data that is stored in other databases? I am also using Kafka to handle async operations; how do I leverage Kafka for this use case? Or is there a much better way of doing this? The final response for a Post could be something like this:
{
    text: 'Some random message',
    user: {
        name: 'John Doe',
        id: 1234
    }
}
Also, I know I could make a call to the User service to get the user, then make a call to the Post service to get the post, and merge both objects together, but is there a better option than this? I am thinking of cases where I want to do multiple lookups for a user, e.g. to get a User and their associated Posts, Messages, etc. How can I handle scenarios like this? Are there any techniques I could leverage for situations like this?
Thank you in advance!
I think your issue is that your service boundaries are too granular. I would recommend aligning your services to bounded contexts (https://martinfowler.com/bliki/BoundedContext.html). For example, if you have a "blog" service with posts and users, it's quite all right for the blog service to contain both a Mongo and a relational database for the different models.
Then you ask the service "give me posts for a user" and it is responsible for combining that data as part of its logic.
If you MUST keep them separate (which I would not recommend, for the exact problem you are having), then I would keep a lightweight cache of usernames inside the Posts service.
Use that cache to populate the usernames into the post when you return one. You can update the cache on a regular basis using events, polling, or batches, or just query the User service on a cache miss.
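As a hedged sketch of that cache (kafkajs is assumed as the client library; the topic name and event shape are illustrative, not from your system):

const { Kafka } = require('kafkajs');

const userNameCache = new Map(); // userId -> name

const kafka = new Kafka({ clientId: 'posts-service', brokers: ['kafka:9092'] });
const consumer = kafka.consumer({ groupId: 'posts-service-user-cache' });

async function startUserCacheConsumer() {
    await consumer.connect();
    await consumer.subscribe({ topic: 'user-events', fromBeginning: true });
    await consumer.run({
        eachMessage: async ({ message }) => {
            // user-created and user-updated events both refresh the cache
            const event = JSON.parse(message.value.toString());
            userNameCache.set(event.id, event.name);
        },
    });
}

// Enrich a post on the way out; fall back to the User service on a cache miss.
async function toPostResponse(post) {
    let name = userNameCache.get(post.user);
    if (name === undefined) {
        name = await fetchUserName(post.user); // hypothetical HTTP call to the User service
        userNameCache.set(post.user, name);
    }
    return { text: post.text, user: { id: post.user, name } };
}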
When dealing with distributed systems you cannot rely on consistency and synchronous, stable communication like you can in a monolith.
I have the following react-apollo-wrapped GraphQL query:
user(id: 1) {
  name
  friends {
    id
    name
  }
}
As semantically represented, it fetches the user with ID 1, returns its name, and returns the id and name of all of its friends.
I then render this in a component structure like the following:
graphql(ParentComponent)
-> UserInfo
-> ListOfFriends (with the list of friends passed in)
This is all working for me. However, I wish to be able to refetch the list of friends for the current user.
I can do this.props.data.refetch() on the parent component and updates will be propagated; however, I'm not sure this is the best practice, given that my GraphQL query looks something more like this:
user(id: 1) {
  name
  foo1
  foo2
  foo3
  foo4
  foo5
  ...
  friends {
    id
    name
  }
}
Whilst the only thing I wish to refetch is the list of friends.
What is the best way to cleanly architect this? I'm thinking along the lines of binding an initially skipped GraphQL fetcher to the ListOfFriends component, which can be triggered as necessary, but would like some guidance on how this should be best done.
Thanks in advance.
I don't know why your question is downvoted, because I think it is a very valid question to ask. One of GraphQL's selling points is "fetch less and more at once". A client can decide very granularly what it needs from the backend. Deeply nested, graph-like queries that previously required multiple endpoints can now be expressed in a single query, and at the same time over-fetching can be avoided. So now you find yourself with a big query: everything loads at once, and there are no n+1 query waterfalls. But you also know that a few fields in that big query change every now and then, and you want to actively update the cache with new data from the server. Apollo offers refetch, but it reloads the whole query, which is exactly the over-fetching I was told is no longer a concern in GraphQL. Let me offer some solutions:
Premature Optimisation?
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming. - Donald Knuth
Sometimes we try to optimise too much without measuring first. Write it the easy way first and then see if it is really an issue. What exactly is slow? The network? A particular field in the query? The sheer size of the query?
Once you have analyzed what exactly is slow, we can start looking into improvements:
Refetch and include/skip directives
Using directives you can exclude fields from a query depending on variables. The refetch function can specify different variables than the initial query. This way you can exclude fields when you refetch the query.
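For example, a hedged sketch (the query shape and variable names are assumed from the question):

import gql from 'graphql-tag';

const USER_QUERY = gql`
  query User($id: ID!, $friendsOnly: Boolean!) {
    user(id: $id) {
      name @skip(if: $friendsOnly)
      foo1 @skip(if: $friendsOnly)
      friends {
        id
        name
      }
    }
  }
`;

// Initial load: variables { id: 1, friendsOnly: false } fetch everything.
// Later, reload only the friends:
// this.props.data.refetch({ friendsOnly: true });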
Splitting up Queries
Single-page apps are a great idea. The HTML is generated client side, and the page does not have to make expensive trips to the server to render a new page. But soon SPAs got too big, and code splitting became an issue. And now we are basically back to server-side rendering and splitting the app into pages. The same might apply to GraphQL. Sometimes queries are too big and should be split. You could split the queries for UserInfo and ListOfFriends. Inside the cache the fields will be merged. With query batching, both queries will be sent in the same request, and a GraphQL server that implements per-request resource caching correctly (e.g. with Dataloader) will barely notice a difference.
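A hedged sketch of the split (component names are from the question; an id field is included so Apollo can normalize and merge both results in its cache):

const USER_INFO_QUERY = gql`
  query UserInfo($id: ID!) {
    user(id: $id) {
      id
      name
      foo1
      # ...the other rarely changing fields
    }
  }
`;

const FRIENDS_QUERY = gql`
  query Friends($id: ID!) {
    user(id: $id) {
      id
      friends {
        id
        name
      }
    }
  }
`;

// Each component gets its own wrapper; refetching the second query
// reloads only the friend list.
const UserInfoWithData = graphql(USER_INFO_QUERY)(UserInfo);
const ListOfFriendsWithData = graphql(FRIENDS_QUERY)(ListOfFriends);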
Subscriptions
Maybe you are ready to use subscriptions already. Subscriptions send updates from the server for fields that have changed. This way you could subscribe to a user's friends and get updates in real time. The good news is that Apollo Client, Relay and many server implementations already offer support for subscriptions. The bad news is that they need websockets, which usually put different requirements on your technology stack than pure HTTP.
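A hedged sketch with react-apollo's subscribeToMore (the subscription field and its server-side support are assumptions):

const FRIENDS_SUBSCRIPTION = gql`
  subscription FriendsChanged($id: ID!) {
    friendsChanged(userId: $id) {
      id
      name
    }
  }
`;

// inside the component wrapping the user query:
this.props.data.subscribeToMore({
  document: FRIENDS_SUBSCRIPTION,
  variables: { id: 1 },
  updateQuery: (prev, { subscriptionData }) => ({
    ...prev,
    user: { ...prev.user, friends: subscriptionData.data.friendsChanged },
  }),
});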
withApollo() -> this.props.client.query
This should only be your last resort! Using react-apollo's withApollo higher-order component, you can directly inject the ApolloClient instance and then execute queries using this.props.client.query(). { user(id: 1) { friendlist { ... } } } can be used to fetch just the friend list and update the cache, which will lead to an update of your component. This might look like what you want, but it can haunt you in later stages of the app.
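A hedged sketch (the component name is assumed; fetchPolicy: 'network-only' forces the round trip and writes the result into the cache):

import React from 'react';
import { withApollo } from 'react-apollo';

class RefreshFriends extends React.Component {
  refresh = () => {
    this.props.client.query({
      query: FRIENDS_QUERY, // e.g. the split-out friends query above
      variables: { id: 1 },
      fetchPolicy: 'network-only',
    });
  };
  render() {
    return <button onClick={this.refresh}>Refresh friends</button>;
  }
}

export default withApollo(RefreshFriends);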
TL;DR:
I'm making an app for a canteen. I have a collection with the persons and a collection where I "log" every meal taken. I need to know who DIDN'T take the meal.
Long version:
I'm making an application for my local Red Cross.
I'm trying to optimize this situation:
there is a canteen at which the helped people can take food at breakfast, lunch and supper. We need to know how many took the meal (and this is easy).
if they are present they HAVE TO take the meal and eat, so we need to know how many (and who) HAVEN'T eaten (this is the part that I need to optimize).
When they take the meal, the "cashier" inserts their barcode and the program logs the "transaction" in the Log collection.
Actually, on creation of the template "canteen" I create a local collection "Meals" and populate it with the data of all the people in the DB (ID, name, fasting/satiated); then I use this collection for my counters and to display who took the meal and who didn't.
(the variable "mealKind" is = "breakfast" OR "lunch" OR "dinner" depending on the actual serving.)
Template.canteen.created = function(){
    Meals = new Mongo.Collection(null);
    var today = new Date(); today.setHours(0,0,1);
    var pers = Persons.find({"status":"present"}, {fields:{"Name":1, "Surname":1, "barcode":1}}).fetch();
    pers.forEach(function(uno){
        var vediamo = Log.findOne({"dest":uno.codice, "what":mealKind, "when":{"$gte": today}});
        if(typeof vediamo == "object"){
            uno['eat'] = "satiated";
        }else{
            uno['eat'] = "fasting";
        }
        Meals.insert(uno);
    });
};
Template.canteen.destroyed = function(){
    Meals.remove({});
};
From the Meals collection I extract the two columns of people satiated (with name, surname and barcode) and fasting, and I also use two helpers:
Template.canteen.helpers({
    fasting: function(){
        return Meals.find({"eat":"fasting"});
    },
    countFasting: function(){
        return Meals.find({"eat":"fasting"}).count();
    }
    // same for satiated
});
This was OK, but now the number of people is really increasing (we are around 1000 and counting) and the creation of the page is very, very slow; it usually stops with errors, so I can read "100 fasting, 400 satiated" when I have around 1000 persons in the DB.
I can't figure out how to optimize the workflow; every other method I tried involved (in one manner or another) more queries to the DB. I think I have missed the point and now I cannot see it.
I'm not sure about aggregation at this level and inside meteor, because of minimongo.
Although making this server side rather than client side is clever, the problem here is HOW to discriminate "fasting" vs "satiated" without cycling through the whole Persons collection.
+1 if the solution is compatible with aldeed:tabular
EDIT
I am still not sure what is causing your performance issue (too many things in client memory / minimongo? too many calls to it?), but you could at least try different approaches, more traditionally based on your server.
By the way, you did not mention how you display your data, nor how you end up with an incorrect count of served / missing Persons.
If you are building a classic HTML table, please note that browsers struggle to render more than a few hundred rows. If you are in that case, you could implement client-side table pagination / infinite scrolling. Look for example at the jQuery DataTables plugin (on which aldeed:tabular is based). Skip the step of building an actual HTML table, and fill it directly using $table.rows.add(myArrayOfData).draw() to avoid the browser limitation.
Original answer
I do not exactly understand why you need to duplicate your Persons collection into a client-side Meals local collection.
This requires that all documents of Persons be sent from server to client first (which may not be a problem if your server is well connected / local; you may also still have the autopublish package on, in which case you have already seen that penalty), and then that every document be cloned (checking your Logs collection to retrieve any previous passage), effectively doubling your memory needs.
Is your server and/or remote DB that slow to justify your need to do everything locally (client side)?
This could be much more problematic: should you have more than one "cashier" / client browser open, their Meals local collections will not be synchronized.
If your server-client connection is good, there is no reason to do everything client side. Meteor will automatically cache just what is needed, and provide optimistic DB modification to keep your user experience fast (should you structure your code correctly).
With aldeed:tabular package, you can easily display your Persons big table by "pages".
You can also link it with your Logs collection using dburles:collection-helpers (IIRC there is an example on the aldeed:tabular home page).
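For instance, a hedged sketch with dburles:collection-helpers (field names are copied from the question's code, so they may need adjusting):

Persons.helpers({
    hasEaten: function(mealKind){
        var today = new Date(); today.setHours(0,0,1);
        return !!Log.findOne({dest: this.codice, what: mealKind, when: {$gte: today}});
    }
});

// a template helper can then filter without duplicating Persons locally
fasting: function(){
    return Persons.find({status: 'present'}).fetch().filter(function(p){
        return !p.hasEaten(mealKind);
    });
}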
I have a question about Collections - specifically, I want to have a large collection on a server, and load only small bits of it a piece at a time, in an unpredictable order, where I might stop wanting to have a local copy of any given piece at any time. Should I make a new subscription for each piece of data, and then stop it when I no longer want that piece of data? Or should I use some other method? Or should I just load large chunks of it that I won't use and leave them sitting around in my local copy of the collection?
Edit: Or should I have one subscription with a list of the ID's for each piece of data I want, and have the publication function specifically find each of those? Seems complicated, but it does keep me with only having to deal with one subscription.
Edit: Or maybe I should just skip using publications and subscriptions, and just use Methods to pass my data to the client? Loses a lot of functionality, and requires some extra work, but it does dodge most of the problems and should work just fine for my purposes.
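A hedged sketch of the id-list idea from the first edit (collection and variable names are illustrative; check and ReactiveVar come from the standard check and reactive-var packages):

// server
Meteor.publish('piecesByIds', function(ids){
    check(ids, [String]);
    return Pieces.find({_id: {$in: ids}});
});

// client: re-running the subscription with a new id list replaces the old
// record set, so documents dropped from the list leave Minimongo automatically
var wantedIds = new ReactiveVar([]);
Tracker.autorun(function(){
    Meteor.subscribe('piecesByIds', wantedIds.get());
});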
Suppose a Mongo collection "items" contains documents like:
{
    name: 'item1',
    type: 'basic',
    qty: 40
}
you define collections on the Meteor server with
Items= new Mongo.Collection('items')
1. These collections contain all the data from the MongoDB collections, and you can run Items.find({...}) on them, which will return a cursor (a set of records, with methods to iterate through them and return them).
Meteor.publish('itemOver30', function itemPublication() {
    return Items.find({qty: {$gte: 30}}, {fields: {name: 1, qty: 1}});
});
This will return a cursor to all the records in the items collection with qty over 30 (a subset of the total records, not the whole collection).
2. A cursor is used to publish (send) a set of records (called a "record set"). You can optionally publish only some fields from those records. It is record sets (not collections) that clients subscribe to.
Meteor.subscribe('itemOver30');
On the client, you have Minimongo collections that partially mirror some of the records from the server. "Partially" because they may contain only some of the fields, and "some of the records" because you usually want to send to the client only the records it needs, to speed up page load, and only those it needs and has permission to access.
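As a minimal client-side sketch (the onReady callback form of Meteor.subscribe is standard):

Meteor.subscribe('itemOver30', function onReady(){
    // the record set has arrived in Minimongo; query the local mirror
    var overThirty = Items.find({qty: {$gte: 30}}).fetch();
    console.log(overThirty.length + ' items with qty over 30');
});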
I have a grid (an employee grid) which has, say, 1000-2000 rows.
I display the employee name and department in the grid.
When I get data for the grid, I also get other details for each employee (date of birth, location, role, etc.).
So the user has the option to edit the employee details. When he clicks edit, I need to display the other employee details in a pop-up. Since I have stored all the data in JavaScript, I search for the particular id and display all the details, so the code will be like:
function getUserDetails(employeeId){
    // All the employee details were stored in the variable employeeInformation
    // while getting the data for the grid.
    for(var i = 0; i < employeeInformation.length; i++){
        if(employeeInformation[i].employeeID == employeeId){
            // display employee details
            break;
        }
    }
}
The second solution is to pass the employeeId to the database and get all the information for the employee. The code will be like:
function getUserDetails(employeeId){
    // Ajax call to the controller, which calls a database procedure
    // to get the employee details (URL is illustrative).
    $.ajax({
        url: '/employee/details',
        data: { id: employeeId },
        success: function(employee){ /* display employee details */ }
    });
}
So, which solution do you think will be optimal when handling 1000-2000 records?
I don't want to make the JavaScript heavy by storing a lot of data in the page.
UPDATED:
One of my friends came up with a simple solution.
I am storing 4 columns for 500 rows (on average), so I don't think there should be any noticeable slowness in the webpage.
While loading the rows into the grid, I give the edit link a data-rowId attribute so that it is easy to retrieve the data.
Say I store all the employee information in a variable called employeeInfo.
When someone clicks the edit link, $(this).attr('data-rowId') gives the rowId, and employeeInfo[$(this).attr('data-rowId')] gives all the information about the employee.
Instead of storing the employeeId and looping over the employee table to find the matching employeeId, the rowId does the trick. This is very simple, but it did not strike me.
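A minimal sketch of the rowId trick (the markup and variable names are assumed):

// <a href="#" class="edit-link" data-rowId="42">Edit</a>
$('.edit-link').on('click', function(){
    var rowId = $(this).attr('data-rowId');
    var employee = employeeInfo[rowId]; // direct index lookup, no loop
    // display employee details in the popup
});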
I would suggest you make an AJAX call to the controller, for two main reasons:
It is not advisable to handle database activity in JavaScript, due to security issues.
JavaScript runs on the client-side machine, so it should carry the least load and computation.
JavaScript should be as light as possible, so I suggest you do it in the database itself.
Don't count on JavaScript performance, because it depends heavily on the computer it is running on. I suggest you store and search on the server side rather than loading a heavy payload of data into the browser, which is restricted to the end user's resources.
Running long loops in JavaScript can lead to an unresponsive and irritating UI. As good practice, use Ajax calls to fetch only the data you need.
Are you using HTML5? Will your users typically have relatively fast multicore computers? If so, a Web Worker (http://www.w3schools.com/html/html5_webworkers.asp) might be a way to offload the search to the client while maintaining UI responsiveness.
Note, I've never used a Worker, so this advice may be way off base, but they certainly look interesting for something like this.
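For what it's worth, a minimal sketch of what that might look like (the file name and message shape are assumptions):

// --- search-worker.js ---
self.onmessage = function(e){
    var employees = e.data.employees;
    for (var i = 0; i < employees.length; i++){
        if (employees[i].employeeID == e.data.employeeId){
            self.postMessage(employees[i]);
            return;
        }
    }
    self.postMessage(null); // not found
};

// --- main page ---
var worker = new Worker('search-worker.js');
worker.onmessage = function(e){
    // display employee details from e.data (or handle the not-found case)
};
worker.postMessage({ employees: employeeInformation, employeeId: 42 });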
In terms of separation of concerns, and recommended best approach, you should be handling that domain-level data retrieval on your server, and relying on the client-side for processing and displaying only the records with which it is concerned.
By populating your client with several thousand records for it to then parse, sort, search, etc., you not only take a huge performance hit and diminish user experience, but you also create many potential security risks. Obviously this also depends on the nature of the data in the application, but for something such as employee records, you probably don't want to be storing that on the client-side. Anyone using the application will then have access to all of that.
The more pragmatic approach to this problem is to have your controller populate the client with only the specific data which pertains to it, eliminating the need for searching through many records. You can also retrieve a single object by making an ajax query to your server to retrieve the data. This has the dual benefit of guaranteeing that you're displaying the current state of the DB, as well as being far more optimized than anything you could ever hope to write in JS.