I've been looking for a way to do complex queries like SQL can perform, but entirely client side. I know I can get the exact results I want by running SQL queries on the server, and I could even use AJAX so that it looks smooth. However, for scalability, performance, and bandwidth reasons I'd prefer to do this all client side.
Some requirements:
Wide browser compatibility. Anything that can run jQuery is fine. I'd actually prefer that it be a jQuery plugin.
Can sort on more than one column. For instance, order by state alphabetically and list all cities alphabetically within each state.
Can filter results. For instance, the equivalent of "where state = 'CA' or 'NY' or 'TX'".
Must work completely client side, so the user only needs to download a large set of data once, can slice the data however they want without constantly fetching from the server, and can in fact run all queries offline after the initial pull.
I've looked around on Stack Overflow and found JSLINQ, but it was last updated in 2009 and has no documentation. I also can't tell whether it can do more complex queries like ordering on two different columns or doing "and"/"or" filtering.
I would think that something like this would have been done already. I know HTML5 started down this path (Web SQL) but then hit a roadblock. I just need basic queries, no joins or anything. Does anyone know of something that can do this? Thanks.
Edit: I think I should include a use case to help clarify what I'm looking for.
For example, I have a list of the 5,000 largest cities in the US. Each record includes Cityname, State, and Population. I would like to be able to download the entire dataset once, populate a JS array with it, and then, entirely on the client side, run queries like the following and create a table from the resulting records.
Ten largest cities in California
All cities that start with "S" with populations of 1,000,000 or more.
Largest three cities in California, New York, Florida, Texas, and Illinois and order them alphabetically by state then by population. i.e. California, Los Angeles, 3,792,621; California, San Diego, 1,307,402; California, San Jose, 945,942...etc.
All of these queries would be trivial to do via SQL but I don't want to keep going back and forth to the server and I also want to allow offline use.
Take a look at http://linqjs.codeplex.com/
It easily meets all your requirements.
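For example (a rough sketch, assuming the cities array from your question is already loaded; method casing varies between linq.js versions, so check the docs for yours):

// Cities in CA, NY or TX, ordered by state, then by population descending within each state
var results = Enumerable.From(cities)
    .Where(function (c) { return c.State === 'CA' || c.State === 'NY' || c.State === 'TX'; })
    .OrderBy(function (c) { return c.State; })
    .ThenByDescending(function (c) { return c.Population; })
    .ToArray();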
Try Alasql.js. It is a JavaScript client-side SQL database.
You can do complex queries with joins and grouping, including optimization of joins and WHERE clauses. It does not use WebSQL.
How it covers your requirements:
Wide browser compatibility - all modern browsers, including mobile.
Can sort on more than one column - Alasql does this with the ORDER BY clause.
Can filter results - with the WHERE clause.
Must work completely client side - yes, and you can also use pure JavaScript operations (Array.push(), etc.) to modify the data (just do not forget to set the table.dirty flag).
Here is a simple example (play with it in jsFiddle):

// Fill the table with data
var person = [
    { name: 'bill',  sex: 'M', income: 50000 },
    { name: 'sara',  sex: 'F', income: 100000 },
    { name: 'larry', sex: 'M', income: 90000 },
    { name: 'olga',  sex: 'F', income: 85000 }
];

// Do the query
var res = alasql("SELECT * FROM ? person WHERE sex='F' AND income > 60000", [person]);
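Applied to the use case in the question, the "ten largest cities in California" query might look roughly like this (a sketch; it assumes the dataset is an array of { Cityname, State, Population } objects and that your Alasql version supports LIMIT - check the wiki for the exact SQL subset):

var top10 = alasql(
    "SELECT * FROM ? WHERE State = 'CA' ORDER BY Population DESC LIMIT 10",
    [cities]);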
As long as the data can fit in memory as an array of objects, you can just use sort and filter. For example, say you want to filter products. You want to find all products either below $5 or above $100 and you want to sort by price (ascending), and if there are two products with the same price, sort by manufacturer (descending). You could do that like this:
var results = products.filter(function (product) {
    // price is in cents
    return product.price < 500 || product.price > 10000;
});

results.sort(function (a, b) {
    var order = a.price - b.price;
    if (order == 0) {
        order = b.manufacturer.localeCompare(a.manufacturer);
    }
    return order;
});
For cross-browser compatibility, just shim filter.
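A minimal shim along these lines works for old browsers (the MDN polyfill is more thorough):

// Minimal Array.prototype.filter shim for pre-ES5 browsers
if (!Array.prototype.filter) {
    Array.prototype.filter = function (callback, thisArg) {
        var result = [];
        for (var i = 0; i < this.length; i++) {
            if (i in this && callback.call(thisArg, this[i], i, this)) {
                result.push(this[i]);
            }
        }
        return result;
    };
}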
How about Yahoo's YQL? I've only briefly looked at it, but it looks interesting.
Backbone is a pretty good js library which (their words) "gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface."
I am not sure if this is what you are looking for, but you can use it to mock up your model and bind event listeners to it. This seems to be a good tutorial to go through some of the basic uses for it.
You can use CanJS. It's a relatively new library that performs better than Backbone and others, and it's based on the well-known JavaScriptMVC library. In reality, it's the MVC part of JavaScriptMVC with a bit of spice.
You can take a look at this tutorial by net.tutsplus.com: http://net.tutsplus.com/tutorials/javascript-ajax/diving-into-canjs-part-3/
It's pretty powerful and fast, and has features like live binding that make your life easier.
Coils is a ClojureScript framework which compiles to JavaScript and has client-side SQL queries like this:
(go
(log (sql "SELECT * FROM test_table where name = ?" ["shopping"] )))
It is full SQL that is passed to a server-side relational database:
https://github.com/zubairq/coils
I have a use case where I need to do complicated string matching on about 5.1 million records. By complicated string matching I mean using a library to do fuzzy string matching (http://blog.bripkens.de/fuzzy.js/demo/).
The database we use at work is SAP HANA, which is excellent for retrieving and querying because it's in-memory, so I would like to avoid pulling data out of it and re-populating it in memory on the application layer; but at the same time I cannot take advantage of such libraries (there is an API for fuzzy matching in the DB, but it's not comprehensive enough for us).
What is the middle ground here? If I do pre-processing and associate words in the DB with certain keywords the user might search for, I can cut down the overhead, but are there any best practices employed when it comes to this?
If it matters: the list is a list of Billing Descriptors (that show up on CC statements), so the user will search these descriptors to find out which companies a descriptor belongs to.
Assuming your "billing descriptor" is a single column, probably of type (N)VARCHAR I would start with a very simple SAP HANA fuzzy search, e.g.:
SELECT top 100 SCORE() AS score, <more fields>
FROM <billing_documents>
WHERE CONTAINS(<bill_descr_col>, <user_input>, FUZZY(0.7))
ORDER BY score DESC;
Maybe this is already good enough when you want to apply your JS library to the result set. If not, I would start to experiment with the similarCalculationMode option, like 'similarcalculationmode=substringsearch', etc. And I would always keep an eye on the response times; they can be higher when using some of the options.
Only if response times are too high, or many active concurrent users are running your query, would I try to create a fuzzy search index on your search column. If you need more search options, you can also create a fulltext index.
But that all really depends on your use case, the values you want to compare, etc.
There is a very comprehensive set of features and options for different use cases, check help.sap.com/hana/SAP_HANA_Search_Developer_Guide_en.pdf.
In a project we did a free-style search on several address columns (name, surname, company name, post code, street) and we got response times of 100-200ms on ca. 6 million records WITHOUT using any special indexes.
TL;DR:
I'm making an app for a canteen. I have a collection with the persons and a collection where I "log" every meal taken. I need to know who DIDN'T take the meal.
Long version:
I'm making an application for my local Red Cross.
I'm trying to optimize this situation:
there is a canteen at which the people we help can take food at breakfast, lunch and supper. We need to know how many took the meal (and this is easy).
if they are present they HAVE TO take the meal and eat, so we need to know how many (and who) HAVEN'T eaten (this is the part that I need to optimize).
When they take the meal, the "cashier" enters their barcode and the program logs the "transaction" in the Log collection.
Currently, on creation of the template "canteen", I create a local collection "Meals" and populate it with the data of all the people in the DB (so ID, name, fasting/satiated); then I use this collection for my counters and to display who took the meal and who didn't.
(the variable "mealKind" is = "breakfast" OR "lunch" OR "dinner" depending on the actual serving.)
Template.canteen.created = function () {
    Meals = new Mongo.Collection(null); // client-only local collection
    var today = new Date();
    today.setHours(0, 0, 1);
    var pers = Persons.find(
        { "status": "present" },
        { fields: { "Name": 1, "Surname": 1, "barcode": 1 } }
    ).fetch();
    pers.forEach(function (uno) {
        // has this person already been served this meal today?
        var vediamo = Log.findOne({ "dest": uno.barcode, "what": mealKind, "when": { "$gte": today } });
        if (typeof vediamo == "object") {
            uno['eat'] = "satiated";
        } else {
            uno['eat'] = "fasting";
        }
        Meals.insert(uno);
    });
};
Template.canteen.destroyed = function () {
    Meals.remove({}); // "Meals", matching the collection created above
};
From the Meals collection I extract the two columns of people satiated (with name, surname and barcode) and fasting, and I also use two helpers:
fasting: function () {
    return Meals.find({ "eat": "fasting" });
},
"countFasting": function () {
    return Meals.find({ "eat": "fasting" }).count();
}
// same for satiated
This was OK, but now the number of people is really increasing (we are around 1,000 and counting), the creation of the page is very, very slow, and it usually stops with errors, so I end up reading "100 fasting, 400 satiated" even though I have around 1,000 persons in the DB.
I can't figure out how to optimize the workflow; every other method I tried involved (in one way or another) more queries to the DB. I think I missed the point and now I cannot see it.
I'm not sure about aggregation at this level and inside meteor, because of minimongo.
Although making this server side rather than client side is sensible, the problem here is HOW to discriminate "fasting" vs "satiated" without cycling through the whole Persons collection.
+1 if the solution is compatible with aldeed:tabular
EDIT
I am still not sure about what is causing your performance issue (too many things in client memory / minimongo, too many calls to it?), but you could at least try different approaches, more traditionally based on your server.
By the way, you did not mention either how you display your data, or how you end up with an incorrect reading for your number of already served / missing Persons.
If you are building a classic HTML table, please note that browsers struggle rendering more than a few hundred rows. If you are in that case, you could implement a client-side table pagination / infinite scrolling. Look for example at jQuery DataTables plugin (on which is based aldeed:tabular). Skip the step of building an actual HTML table, and fill it directly using $table.rows.add(myArrayOfData).draw() to avoid the browser limitation.
Original answer
I do not exactly understand why you need to duplicate your Persons collection into a client-side Meals local collection?
This requires that you have first all documents of Persons sent from server to client (this may not be problematic if your server is well connected / local. You may also still have autopublish package on, so you would have already seen that penalty), and then cloning all documents (checking for your Logs collection to retrieve any previous passages), effectively doubling your memory need.
Is your server and/or remote DB that slow to justify your need to do everything locally (client side)?
It could be much more problematic should you have more than one "cashier" / client browser open: their Meals local collections will not be synchronized.
If your server-client connection is good, there is no reason to do everything client side. Meteor will automatically cache just what is needed, and provide optimistic DB modification to keep your user experience fast (should you structure your code correctly).
With aldeed:tabular package, you can easily display your Persons big table by "pages".
You can also link it with your Logs collection using dburles:collection-helpers (IIRC there is an example on the aldeed:tabular home page).
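For instance, if you do move the computation to the server, a rough sketch of a Meteor method could look like the following (the method name is made up; it assumes the Log "dest" field stores the person's barcode, as in your template code):

Meteor.methods({
    mealStatus: function (mealKind) {
        var today = new Date();
        today.setHours(0, 0, 1);
        // barcodes of everyone already served this meal today
        var served = Log.find({ what: mealKind, when: { $gte: today } })
            .map(function (entry) { return entry.dest; });
        return {
            satiated: Persons.find({ status: "present", barcode: { $in: served } }).fetch(),
            fasting: Persons.find({ status: "present", barcode: { $nin: served } }).fetch()
        };
    }
});

The client then receives two ready-made lists (or you could return only counts) instead of cloning the whole Persons collection into minimongo.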
So I'm using mongodb and I'm unsure if I've got the correct / best database collection design for what I'm trying to do.
There can be many items, and a user can create new groups containing these items. Any user may follow any group!
I have not just added the followers and items into the group collection because there could be 5 items in the group, or there could be 10000 (and the same for followers) and from research I believe that you should not use unbound arrays (where the limit is unknown) due to performance issues when the document has to be moved because of its expanding size. (Is there a recommended maximum for array lengths before hitting performance issues anyway?)
I think with the following design a real performance issue could be when I want to get all of the groups that a user is following for a specific item (based off of the user_id and item_id), because then I have to find all of the groups the user is following, and from that find all of the item_groups with the group_id $in and the item id. (but I can't actually see any other way of doing this)
Follower
    .find({ user_id: "54c93d61596b62c316134d2e" })
    .exec(function (err, following) {
        if (err) { throw err; }
        var groups = [];
        for (var i = 0; i < following.length; i++) {
            groups.push(following[i].group_id);
        }
        item_groups.find({
            'group_id': { $in: groups },
            'item_id': '54ca9a2a6508ff7c9ecd7810'
        })
        .exec(function (err, groups) {
            if (err) { throw err; }
            res.json(groups);
        });
    });
Are there any better DB patterns for dealing with this type of setup?
UPDATE: Example use case added in comment below.
Any help / advice will be really appreciated.
Many Thanks,
Mac
I agree with the general notion of other answers that this is a borderline relational problem.
The key to MongoDB data models is write-heaviness, but that can be tricky for this use case, mostly because of the bookkeeping that would be required if you wanted to link users to items directly (a change to a group that is followed by lots of users would incur a huge number of writes, and you need some worker to do this).
Let's investigate whether the read-heavy model is inapplicable here, or whether we're doing premature optimization.
The Read Heavy Approach
Your key concern is the following use case:
a real performance issue could be when I want to get all of the groups that a user is following for a specific item [...] because then I have to find all of the groups the user is following, and from that find all of the item_groups with the group_id $in and the item id.
Let's dissect this:
Get all groups that the user is following
That's a simple query: db.followers.find({userId : userId}). We're going to need an index on userId which will make the runtime of this operation O(log n), or blazing fast even for large n.
from that find all of the item_groups with the group_id $in and the item id
Now this is the trickier part. Let's assume for a moment that it's unlikely for items to be part of a large number of groups. Then a compound index { itemId, groupId } would work best, because we can reduce the candidate set dramatically through the first criterion - if an item is shared in only 800 groups and the user is following 220 groups, MongoDB only needs to find the intersection of these, which is comparatively easy because both sets are small.
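In mongo-shell terms, a sketch of the indexes and the two-step lookup might look like this (collection and field names loosely follow the question; use ensureIndex on older shells):

// Indexes (run once)
db.followers.createIndex({ user_id: 1 });
db.item_groups.createIndex({ item_id: 1, group_id: 1 });

// Step 1: groups the user follows
var groupIds = db.followers.find({ user_id: userId }, { group_id: 1 })
    .toArray()
    .map(function (f) { return f.group_id; });

// Step 2: item_groups for this item, restricted to those groups
db.item_groups.find({ item_id: itemId, group_id: { $in: groupIds } });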
We'll need to go deeper than this, though:
The structure of your data is probably that of a complex network. Complex networks come in many flavors, but it makes sense to assume your follower graph is nearly scale-free, which is also pretty much the worst case. In a scale free network, a very small number of nodes (celebrities, super bowl, Wikipedia) attract a whole lot of 'attention' (i.e. have many connections), while a much larger number of nodes have trouble getting the same amount of attention combined.
The small nodes are no reason for concern: the queries above, including round-trips to the database, are in the 2ms range on my development machine on a dataset with tens of millions of connections and > 5GB of data. Now that data set isn't huge, but no matter what technology you choose, you will be RAM bound because the indices must be in RAM in any case (data locality and separability in networks is generally poor), and the set intersection size is small by definition. In other words: this regime is dominated by hardware bottlenecks.
What about the supernodes though?
Since that would be guesswork and I'm interested in network models a lot, I took the liberty of implementing a dramatically simplified network tool based on your data model to make some measurements. (Sorry it's in C#, but generating well-structured networks is hard enough in the language I'm most fluent in...).
When querying the supernodes, I get results in the range of 7ms tops (that's on 12M entries in a 1.3GB db, with the largest group having 133,000 items in it and a user that follows 143 groups.)
The assumption in this code is that the number of groups followed by a user isn't huge, but that seems reasonable here. If it's not, I'd go for the write-heavy approach.
Feel free to play with the code. Unfortunately, it will need a bit of optimization if you want to try this with more than a couple of GB of data, because it's simply not optimized and does some very inefficient calculations here and there (especially the beta-weighted random shuffle could be improved).
In other words: I wouldn't worry about the performance of the read-heavy approach yet. The problem is often not so much that the number of users grows, but that users use the system in unexpected ways.
The Write Heavy Approach
The alternative approach is probably to reverse the order of linking:
UserItemLinker
{
    userId,
    itemId,
    groupIds[] // for faster retrieval of the linker. It's unlikely that this grows large
}
This is probably the most scalable data model, but I wouldn't go for it unless we're talking about HUGE amounts of data where sharding is a key requirement. The key difference here is that we can now efficiently compartmentalize the data by using the userId as part of the shard key. That helps to parallelize queries, shard efficiently and improve data locality in multi-datacenter-scenarios.
This could be tested with a more elaborate version of the testbed, but I didn't find the time yet, and frankly, I think it's overkill for most applications.
I read your comment/use case, so I have updated my answer. I suggest changing the design as per this article: MongoDB Many-To-Many.
The design approach is different and you might want to remodel your approach to this. I'll try to give you an idea to start with.
I make the assumption that a User and a Follower are basically the same entities here.
I think the point you might find interesting is that in MongoDB you can store array fields and this is what I will use to simplify/correct your design for MongoDB.
The two entities I would omit are: Followers and ItemGroups
Followers: a Follower is simply a User who can follow Groups. I would add an array of group ids to hold the list of Groups the User follows. So instead of having a separate Follower entity, I would only have User, with an array field containing Group ids.
ItemGroups: I would remove this entity too. Instead I would use an array of Item Ids in the Group entity and an array of Group Ids in the Item entity.
This is basically it. You will be able to do what you described in your use case. The design is simpler and more accurate in the sense that it reflects the design decisions of a document based database.
Notes:
You can define indexes on array fields in MongoDB. See Multikey Indexes for example.
Be wary about using indexes on array fields though. You need to understand your use case in order to decide whether it is reasonable or not. See this article. Since you only reference ObjectIds I thought you could try it, but there might be other cases where it is better to change the design.
Also note that the _id field is a MongoDB-specific field of type ObjectId, used as the primary key. To access the ids you can refer to them, e.g., as user._id, group._id, etc. You can use an index to ensure uniqueness as per this question.
Your schema design could look like this:
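As a rough sketch only (all field names other than _id are assumed for illustration):

// User: follows groups
{ _id: ObjectId("..."), name: "Mac", groupIds: [ ObjectId("..."), ObjectId("...") ] }

// Group: contains items
{ _id: ObjectId("..."), name: "Group A", itemIds: [ ObjectId("..."), ObjectId("...") ] }

// Item: knows which groups it belongs to
{ _id: ObjectId("..."), title: "Item 1", groupIds: [ ObjectId("..."), ObjectId("...") ] }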
As to your other questions/concerns:
Is there a recommended maximum for array lengths before hitting performance issues anyway?
The answer is that in MongoDB the document size is limited to 16 MB and there is no way to work around that. However, 16 MB is considered sufficient; if you hit the 16 MB limit, then your design has to be improved. See here for info, section Document Size Limit.
I think with the following design a real performance issue could be when I want to get all of the groups that a user is following for a specific item (based off of the user_id and item_id)...
I would do it this way. Note how much "easier" it sounds when using MongoDB:
get the item of the user
get groups that reference that item
I would be rather concerned if the arrays get very large and you are using indexes on them. This could slow down write operations on the respective document(s) overall. Maybe not so much in your case, but I'm not entirely sure.
You're on the right track to creating a performant NoSQL schema design, and I think you're asking the right questions as to how to properly lay things out.
Here's my understanding of your application:
It looks like Groups can both have many Followers (mapping users to groups) and many Items, but Items may not necessarily be in many Groups (although it is possible). And from your given use-case example, it sounds like retrieving all the Groups an Item is in and all the Items in a Group will be some common read operations.
In your current schema design, you've implemented a model between mapping users to groups as followers and items to groups as item_groups. This works alright until you mention the problem with more complex queries:
I think with the following design a real performance issue could be when I want to get all of the groups that a user is following for a specific item (based off of the user_id and item_id)
I think a few things could help you out in this situation:
Take advantage of MongoDB's powerful indexing capabilities. In particular, I think you should consider creating compound indexes on your Follower objects covering your Group and User, and your Item_Groups on Item and Group, respectively. You'll also want to make sure this kind of relationship is unique, in that a user can only follow a group once and an item can only be added to a group once. This would best be achieved in some pre-save hooks defined in your schema, or using a plugin to check for validity.
FollowerSchema.index({ group: 1, user: 1 }, { unique: true });
Item_GroupsSchema.index({ group: 1, item: 1 }, { unique: true });
Using an index on these fields will create some overhead when writing to the collection, but it sounds like reading from the collection will be a more common interaction so it'll be worth it (I'd suggest reading more up on index performance).
Since a User probably won't be following thousands of groups, I think it'd be worthwhile to include in the user model an array of the groups the user is following. This will help you out with that complex query when you want to find all instances of an item in groups that a user is currently following, since you'll have the list of groups right there. You'll still have the implementation where you're using $in: groups, but it'll be one less query to the collection (see the sketch after these suggestions).
As I mentioned before, it seems like items may not necessarily be in that many groups (just like users won't necessarily be following thousands of groups). If it's commonly the case that an item is in maybe a couple hundred groups, I'd consider just adding an array to the item model for each group that it gets added to. This would increase your performance when reading all the groups an item is in, a query you mentioned would be a common one. Note: You'd still use the Item_Groups model to retrieve all the items in a group by querying on the (now indexed) group_id.
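A rough Mongoose sketch of the second suggestion (the schema shape and variable names here are assumptions for illustration, not taken from your code):

// User documents carry the list of followed groups, so the complex query needs one less round trip
var mongoose = require('mongoose');

var UserSchema = new mongoose.Schema({
    name: String,
    following: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Group' }]
});

// All item_groups for a given item, restricted to the groups this user follows
Item_Groups.find({
    group_id: { $in: user.following },
    item_id: itemId
}).exec(function (err, groups) {
    if (err) { throw err; }
    res.json(groups);
});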
Unfortunately, NoSQL databases aren't a good fit in this case. Your data model seems strictly relational. According to the MongoDB documentation, we are limited to the data-model patterns and query operations it describes.
There are some practices, though. MongoDB advises using a Followers collection to find which user follows which group, and vice versa, with good performance. You may find the case closest to your situation on slide 14 of this page. But I think the slides only apply if you want to show each result on a different page. For instance, as a Twitter user, when you click the followers button you see all your followers; then you click on a follower's name and you see that follower's messages and so on. As we can see, all of this works step by step; no relational query is needed.
I believe that you should not use unbound arrays (where the limit is unknown) due to performance issues when the document has to be moved because of its expanding size. (Is there a recommended maximum for array lengths before hitting performance issues anyway?)
Yes, you're right. http://askasya.com/post/largeembeddedarrays .
But if you have about a hundred items in your array there is no problem.
If you have fixed-size data, even in the thousands, you may embed it in your collections as an array, and you can query indexed embedded-document fields rapidly.
In my humble opinion, you should create hundreds of thousands of test records and check whether the performance of embedded documents and arrays is acceptable for your case. Don't forget to create indexes appropriate to your queries. You may also try using document references in your tests. If you like the performance of the results after testing, go ahead.
You tried to find the group_id records that a specific user follows, and then to find a specific item within those group_ids. Could it be that the Item_Groups and Followers collections have a many-to-many relation?
If so, many-to-many relations aren't well supported by NoSQL databases.
Is there any chance you can change your database to MySQL?
If so you should check this out.
Briefly, MongoDB pros vs. MySQL:
- Better write performance
Briefly, MongoDB cons vs. MySQL:
- Worse read performance
If you work on Node.js you may check https://www.npmjs.com/package/mysql and https://github.com/felixge/node-mysql/
Good luck...
We are investigating using Breeze for field deployment of some tools. The scenario is this -- an auditor will visit sites in the field, where most of the time there will be no -- or very degraded -- internet access. Rather than replicate our SQL database on all the laptops and tablets (if that's even possible), we are hoping to use Breeze to cache the data and then store it locally so it is accessible when there is not a usable connection.
Unfortunately, Breeze seems to choke when caching any significant amount of data. Generally on Chrome it's somewhere between 8 and 13MB worth of entities (as measured by the HTTP response headers). This can change a bit depending on how many tabs I have open and such, but I have not been able to move that by more than 10%. The error I get is that the Chrome tab crashes and tells me to reload. The error is reproducible (I download the data in 100K chunks and it fails on the same read every time, and works fine if I stop it after the previous read). When I change the page size, it always fails within the same range.
Is this a limitation of Breeze, or Chrome? Or Windows? I tried it on Firefox, and it handles even less data before the whole browser crashes. IE fares a little better, but none of them do great.
Looking at performance in task manager, I get the following:
IE goes from 250M memory usage to 1.7G of memory usage during the caching process and caches a total of about 14MB before throwing an out-of-memory error.
Chrome goes from 206M memory usage to about 850M while caching a total of around 9MB
Firefox goes from around 400M to about 750M and manages to cache about 5MB before the whole program crashes.
I can calculate how much will be downloaded with any selection criteria, but I cannot find a way to calculate how much data can be handled by any specific browser instance. This makes using Breeze for offline auditing close to useless.
Has anyone else tackled this problem yet? What are the best approaches to handling something like this. I've thought of several things, but none of them are ideal. Any ideas would be appreciated.
ADDED At Steve Schmitt's request:
Here are some helpful links:
Metadata
Entity Diagram (pdf) (and html and edmx)
The first query, just to populate the tags on the page, runs quickly and downloads minimal data:
var query = breeze.EntityQuery
    .from("Countries")
    .orderBy("Name")
    .expand("Regions.Districts.Seasons, Regions.Districts.Sites");
Once the user has selected the Sites s/he wishes to cache, the following two queries are kicked off (this used to be one query, but I broke it into two hoping it would be less of a burden on resources -- it didn't help). The first query (usually 2-3K entities and about 2MB) runs as expected. Some combination of the predicates listed is used to filter the data.
var qry = breeze.EntityQuery
    .from("SeasonClients")
    .expand("Client,Group.Site,Season,VSeasonClientCredit")
    .orderBy("DistrictId,SeasonId,GroupId,ClientId");

var p  = breeze.Predicate("District.Region.CountryId", "==", CountryId);
var p1 = breeze.Predicate("SeasonId", "==", SeasonId);
var p2 = breeze.Predicate("DistrictId", "==", DistrictId);
var p3 = breeze.Predicate("Group.Site.SiteId", "in", SiteIds);
After the first query runs, the second query (below) runs, also using some combination of the predicates listed to filter the data. At about 9MB, it will have about 50K rows to download. When the total download burden between the two queries is between 10MB and 13MB, browsers will crash.
var qry = breeze.EntityQuery
    .from("Repayments")
    .orderBy('SeasonId,ClientId,RepaymentDate');

var p1 = breeze.Predicate("District.Region.CountryId", "==", CountryId);
var p2 = breeze.Predicate("SeasonId", "==", SeasonId);
var p3 = breeze.Predicate("DistrictId", "==", DistrictId);
var p4 = breeze.Predicate("SiteId", "in", SiteIds);
Thanks for the interest, Steve. You should know that the Entity Relationships are inherited and currently in production supporting the majority of the organization's operations, so as few changes as possible to that would be best. Also, the hope is to grow this from a reporting application to one with which data entry can be done in the field (so, as I understand it, using projections to limit the data wouldn't work).
Thanks for the interest, and let me know if there is anything else you need.
Here are some suggestions based on my experience building an offline-capable web application using Breeze. Some or all of these might not make sense for your use cases...
Identify which entity types need to be editable vs. which are only used to fill drop-downs, etc. Load non-editable data using the noTracking query option and cache it in localStorage yourself using JSON.stringify (see the sketch after this list). This avoids the overhead of coercing the data into entities, change tracking, etc. Good candidates for this approach in your model might be entity types like Country, Region, District, Site, etc.
If possible, provide a facility in your application for users to identify which records they want to "take offline". This way you don't need to load and cache everything, which can get quite expensive depending on the number of relationships, entities, properties, etc.
In conjunction with suggestion #2, avoid loading all the editable data at once and avoid using the same EntityManager instance to load each set of data. For example, if the Client entity is something that needs to be editable out in the field without a connection, create a new EntityManager, load a single client (expanding any children that also need to be editable) and cache this data separately from other clients.
Cache the breeze metadata once. When calling exportEntities the includeMetadata argument should be false. More info on this here.
To create new EntityManager instances make use of the createEmptyCopy method.
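As a minimal sketch of the first suggestion (the service and resource names are placeholders, not from your model):

// Load lookup data without change tracking and cache it yourself
var manager = new breeze.EntityManager("breeze/audit"); // placeholder service name
var query = breeze.EntityQuery.from("Countries").noTracking(true);

manager.executeQuery(query).then(function (data) {
    // results are plain objects rather than Breeze entities, so they serialize cheaply
    localStorage.setItem("countries", JSON.stringify(data.results));
});

// later, while offline
var countries = JSON.parse(localStorage.getItem("countries") || "[]");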
EDIT:
I want to respond to this comment:
Say I have a client who has bills and payments. That client is in a group, in a site, in a region, in a country. Are you saying that the client, payment, and bill information might each have their own EM, while the location hierarchy might be in a 4th EM with no-tracking? Then when I refer to them, I wire up the relationships as needed using LINQs on the different EMs (give me all the bills for customer A, give me all the payments for customer A)?
It's a bit of a judgement call in terms of deciding how to separate things out. Some of what I'm suggesting might be overkill, it really depends on the amount of data and the way your application is used.
Assuming you don't need to edit groups, sites, regions and countries while offline, the first thing I'd do would be to load the list of groups using the noTracking option and cache them in localStorage for offline use. Then do the same for sites, regions and countries. Keep in mind, entities loaded with the noTracking option aren't cached in the entity manager so you'll need to grab the query result, JSON.stringify it and then call localStorage.setItem. The intent here is to make sure your application always has access to the list of groups, sites, regions, etc so that when you display a form to edit a client entity you'll have the data you need to populate the group, site, region and country select/combobox/dropdown.
Assuming the user has identified the subset of clients they want to work with while offline, I'd then load each of these clients one at a time (including their payment and bill information but not expanding their group, site, region, country) and cache each client+payments+bills set using entityManager.exportEntities. Reasoning here is it doesn't make sense to load several clients plus their payments and bills into the same EntityManager each time you want to edit a particular client. That could be a lot of unnecessary overhead, but again, this is a bit of a judgement call.
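Roughly, that per-client step might look like the following sketch (the variable, key and navigation property names are assumptions based on your description, and the exportEntities signature may differ slightly between Breeze versions):

// One EntityManager per client being taken offline
var clientManager = masterManager.createEmptyCopy();

var query = breeze.EntityQuery
    .from("Clients")
    .where("ClientId", "==", clientId)   // assumed key name
    .expand("Payments, Bills");          // assumed navigation properties

clientManager.executeQuery(query).then(function () {
    // export without metadata, since the metadata is cached once separately
    var bundle = clientManager.exportEntities(null, false);
    localStorage.setItem("client-" + clientId, bundle);
});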
@Jeremy's answer was excellent and very helpful, but it didn't actually answer the question, which I was starting to think was unanswerable, or at least the wrong question. However, @Steve in the comments gave me the most appropriate information for this question.
It is neither Breeze nor the browser, but rather Knockout. Apparently the Knockout wrapper around the Breeze entities uses all that memory (at least while loading the entities, in my environment). As described above, Knockout/Breeze would give out after reading around 5MB of data, causing Chrome to crash with over 1.7GB of memory usage (from a pre-download memory usage of around 300MB). Rewriting the app in AngularJS eliminated the problem. So far I have been able to download over 50MB from the exact same EF6 model into Breeze/Angular, and total Chrome memory usage never went above 625MB.
I will be testing larger payloads, but 50 MB more than satisfies my needs for the moment. Thanks everyone for your help.
Poor performance of autocomplete fields reduces their usefulness. If the client-side implementation has to call an endpoint that does heavy db lookup, the response time can easily get frustrating.
One neat approach comes from AWS Case Study: IMDb. It used to come with a diagram (no longer available), but in a nutshell a prediction tree would be generated and stored for every combination that can resolve in a meaningful way. E.g. resolutions for sta would include Star Wars, Star Trek, Sylvester Stallone which will be stored, but stb will not resolve to anything meaningful and will not be stored.
To get the lowest possible latency, all possible results are pre-calculated with a document for every combination of letters in search. Each document is pushed to Amazon Simple Storage Service (Amazon S3) and thereby to Amazon CloudFront, putting the documents physically close to the users. The theoretical number of possible searches to calculate is mind-boggling - a 20-character search has 23 x 10^30 combinations - but in practice, using IMDb's authority on movie and celebrity data can reduce the search space to about 150,000 documents, which Amazon S3 and Amazon CloudFront can distribute in just a few hours. IMDb creates indexes in several languages with daily updates for datasets of over 100,000 movie and TV titles and celebrity names.
How could a similarly performant experience be achieved with private data? E.g. autocompleting client names, job ids, invoice numbers... Storing different documents/decision trees for separate users sounds expensive, especially if some of the data (client names?) could be available to multiple users.
You're right that such a workload requires some special optimizations.
You can use a ready-made search engine like Apache Lucene or Solr (which is a REST API wrapper around Lucene).
These engines are optimized for full-text search and can work with private data.
Work steps:
Install Solr (or Lucene).
Design a schema for storing the information (which fields and which types of searches you need).
Load your data into it (via batch operations or on an update basis).
Run searches using Solr's query language (similar to Google search).
At this point you can add restrictions based on user_id or any other parameter in addition to the original user query, so private data isn't mixed between users.
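For illustration, a minimal sketch of querying Solr's HTTP API from the browser, with the per-user restriction applied as a filter query (the core name, field names and renderSuggestions are placeholders):

// Query Solr's select handler, scoped to the current user's private data
var params = new URLSearchParams({
    q: "name:" + userInput + "*",     // prefix match on the name field
    fq: "owner_id:" + currentUserId,  // filter query: only this user's records
    rows: "10",
    wt: "json"
});

fetch("/solr/clients/select?" + params.toString())
    .then(function (res) { return res.json(); })
    .then(function (data) { renderSuggestions(data.response.docs); });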
I actually agree with CGI: the best solution is a third-party search engine; anything else is trying to build your own. I'm really not sure from your post what hardware you have at your disposal, so I'll give a possible lowbrow solution if all you've got is LAMP hosting.
So in your PHP code you would make a query string like:
$qstr = "SELECT * FROM Clients WHERE `name` like '%".$search."%' ORDER BY popularity DESC LIMIT 0,100";
Then increment the popularity column for every record that is found via the "search engine."
On the front end (let's say you're using Dojo) you could do something like...
<script>
    require(["dojo/on", "dojo/dom", "dojo/request/xhr", "dojo/domReady!"], function (on, dom, xhr) {
        var searchCheck; // declare so the typeof guard below works as intended
        on(dom.byId('txtSearch'), "change", function (evt) {
            if (typeof searchCheck !== "undefined") clearTimeout(searchCheck);
            searchCheck = setTimeout(function () { // keep from flooding XHR
                xhr("fetch-json-results.php", {
                    handleAs: "json"
                }).then(function (data) {
                    // update txtSearch combo store
                });
            }, 500);
        });
    });
</script>
<input id="txtSearch" type="text" data-dojo-type="dijit/form/ComboBox" data-dojo-props="intermediateChanges:true">
This would be a low-tech, low-budget (LAMP) equivalent answer.