I'm struggling with where to put my business logic in my SailsJS project.
My data model is as follows:
Portfolio (hasMany) -> Positions (hasOne) -> Asset (hasMany) -> Ticks
I'm looking to aggregate data from the children using computed properties on the parent model, cascading all the way up to Portfolio. E.g. an Asset knows its newest tick (price), a Position knows its current value (numberShares * newest tick of its Asset), etc. Ideally I want to add this data to the JSON so I can move business logic away from the Ember client.
This works fine for one level, but if I try to use a computed property of a child, it's either undefined or, when using toJSON(), empty.
Asset.js:
latestTick: function () {
  if (this.ticks.length > 0) {
    this.ticks.sort(function (a, b) {
      var dateA = new Date(a.date), dateB = new Date(b.date);
      return dateB - dateA;
    });
    return this.ticks[0].price;
  }
  return 0;
},
toJSON: function () {
  var obj = this.toObject();
  if (typeof this.latestTick === 'function')
    obj.latestTick = this.latestTick();
  return obj;
}
This works (after adding the typeof check, because depending on whether the asset was returned nested or not, latestTick wasn't always available as a function).
Now in Position.js
assetID: {
  model: "Asset"
},
I want to calculate currentValue:
currentValue: function () {
  return this.assetID.latestTick() * this.numberShares;
},
which does not work because the function is not available (only the hardcoded attributes are). I tried using Tick.find()... but ran into async issues (the JSON is returned before the data is fetched).
I tried to force embedding / populating the associations in the JSON, but it made no difference. As soon as I cross more than one hierarchy level I don't get any data.
Where should/can I do this aggregation? I don't want to use custom controller actions but rather leverage the REST API.
Waterline does not support deep populates. If you want to implement a deep query like that, you can use native (https://sailsjs.com/documentation/reference/waterline-orm/models/native) if you are using Sails < 1, or manager (https://sailsjs.com/documentation/reference/waterline-orm/datastores/manager) if you are using Sails >= 1.
That way you can use whatever your database is capable of to build this kind of "more advanced" query.
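For example, with Sails >= 1 and a SQL datastore you could push the aggregation down to the database with sendNativeQuery. The sketch below is only an illustration; the table and column names are assumptions based on your model names, not your actual schema, and it assumes the PostgreSQL adapter (which exposes the result rows on .rows):
// api/controllers/portfolio/view.js (hypothetical custom action)
module.exports = async function view(req, res) {
  // Let the database join Portfolio -> Position -> Asset -> Tick and
  // compute the current value per portfolio in one round trip.
  // All table/column names below are illustrative assumptions.
  const rawResult = await sails.sendNativeQuery(`
    SELECT p.id AS "portfolioId",
           SUM(pos."numberShares" * t.price) AS "currentValue"
    FROM portfolios p
    JOIN positions pos ON pos.portfolio = p.id
    JOIN ticks t ON t.asset = pos."assetID"
    WHERE t.date = (SELECT MAX(t2.date) FROM ticks t2 WHERE t2.asset = pos."assetID")
    GROUP BY p.id
  `);
  return res.json(rawResult.rows);
};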
I'm working on a Vue/Vuetify app that has a performance problem. I've created a custom component that wraps around a standard Vuetify v-data-table component. It works fine for small amounts of data, but giving it moderate to large amounts of data causes Firefox to hang and Chrome to crash.
Here's a slightly simplified version of my code:
<script>
import ...
export default {
  props: {
    theValues: Array,
    // other props
  },
  computed: {
    theKeys: function() {
      return this.schema.map(col => col.col);
    },
    schema: function() {
      return this.theValues[0].schema;
    },
    dataForDataTable: function() {
      console.time('test');
      let result = [];
      for (let i = 0; i < this.theValues[0].data.length; i++) {
        let resultObj = {};
        for (let j = 0; j < this.theKeys.length; j++) {
          // The "real" logic; this causes the browser to hang/crash
          // resultObj[this.theKeys[j]] = this.theValues[0].data[i][j];
          // Test operation to diagnose the problem
          resultObj[this.theKeys[j]] = Math.floor(Math.random() * Math.floor(99999));
        }
        result.push(resultObj);
      }
      console.timeEnd('test');
      // For ~30k rows, the timer reports that:
      //   real values can take over 250,000 ms
      //   randomly generated fake values take only 7 ms
      return result;
    },
    // other computed
  },
  // other Vue stuff
};
</script>
And here's an example of what theValues actually looks like:
[
  {
    data: [
      [25389, 24890, 49021, ...] <-- 30,000 elements
    ],
    schema: [
      {
        col: "id_number",
        type: "integer"
      }
    ]
  }
]
The only meaningful difference I see between the fast code and the slow code is that the slow code accesses the prop theValues on each iteration whereas the fast code doesn't touch any complicated part of Vue. (It does use theKeys, but the performance doesn't change even if I create a local deep copy of theKeys inside the function.)
Based on this, it seems like the problem is not that the data table component can't handle the amount of data I'm sending, or that the nested loops are inherently too inefficient. My best guess is that reading from the prop so much is somehow slowing Vue itself down, but I'm not 100% sure of that.
But I do ultimately need to get the information from the prop into the table. What can I do to make this load at a reasonable speed?
The performance problem is actually a symptom of your loop code rather than Vue. The most expensive data access is in your inner loop in dataForDataTable():
for (i...) {
for (j...) {
theValues[0].data[i][j] // ~50 ms average (expensive)
}
}
// => long hang for 32K items
An optimization would be to cache the array outside your loop, which dramatically improves the loop execution time and resolves the hang:
const myData = theValues[0].data
for (i...) {
for (j...) {
myData[i][j] // ~0.00145 ms average
}
}
// => ~39 ms for 32K items
demo 1
Note that the same result can be computed without explicit loops, using JavaScript array APIs. This improves readability and reduces the line count at a slight performance cost (~1 ms). Specifically, use Array.prototype.map to map each value from data to an object property, obtained with Array.prototype.reduce over theKeys:
theValues[0].data
.map(values => theKeys.reduce((obj,key,i) => {
obj[key] = values[i]
return obj
}, {}))
// => ~40 ms for 32K items
demo 2
Times above were measured on a 2016 MacBook Pro - 2.7GHz i7, Chrome 87. The CodeSandbox demos may vary considerably from these numbers.
Tip 1
Original text:
Accessing the prop (data) should not be an issue. Yes, the data is reactive, but reading it should be very efficient (Vue is just "making notes" that you are using that data for rendering)
Well, it seems I was clearly wrong here...
Your component is getting the data via a prop, but it is very probable that the data is reactive in the parent component (coming from Vuex or stored in the parent's data). The problem with Vue 2's reactivity system is that it is based on Object.defineProperty, and this system cannot intercept indexed array access (Vue is not able to detect code like arr[1] as a template dependency). To work around this, when an object property (theValues[0].data in your code) is accessed, Vue checks whether the value is an array and, if so, iterates the whole array (plus all nested arrays) to mark the items as dependencies - you can read a more in-depth explanation here
One solution to this problem is to create a local variable let data = theValues[0].data, as tony19 suggests. Now the .data Vue getter is not called on every iteration and the performance is fixed...
But if your data is immutable (it never changes), just use Object.freeze() and Vue will not try to detect changes to such data. This will not only make your code faster but also save a ton of memory in the case of large lists of objects.
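A minimal sketch of the Object.freeze() idea, assuming the parent component owns the rows and they never change after loading (the endpoint and names are made up for illustration):
// Parent component (sketch)
export default {
  data() {
    return { theValues: [] };
  },
  async created() {
    const response = await fetch('/api/values'); // hypothetical endpoint
    const values = await response.json();
    // Freeze the payload so Vue 2 does not walk it and make it reactive;
    // the child still receives it through the theValues prop as before.
    this.theValues = Object.freeze(values);
  }
};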
Note that this problem is fixed in Vue 3, as it uses a very different reactivity system based on ES6 proxies...
Tip 2
Although Vue computed properties are highly optimized, there is still some code running every time you access the property (checking whether the underlying data is dirty and the computed prop needs to be reevaluated), and this work adds up if you use it in a tight loop as in your case...
Try to make a local copy of the theKeys computed prop before executing the loop (a shallow copy is enough; there is no need for a deep copy).
See this really good video from a Vue core member.
Of course the same issue applies to accessing the dataForDataTable computed prop from the template. I encourage you to try using a watcher instead of a computed to implement the same logic as dataForDataTable and store its result in data, to see if it makes any difference...
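A rough sketch of that watcher-based variant, reusing the question's prop and key names; treat it as an illustration rather than a drop-in replacement:
export default {
  props: {
    theValues: Array
  },
  data() {
    return { dataForDataTable: [] }; // plain data instead of a computed
  },
  watch: {
    theValues: {
      immediate: true,
      handler(newValues) {
        // Copy everything into locals once, so the tight loop never
        // goes through Vue's reactive getters.
        const rows = newValues[0].data;
        const keys = newValues[0].schema.map(col => col.col);
        this.dataForDataTable = rows.map(row =>
          keys.reduce((obj, key, i) => {
            obj[key] = row[i];
            return obj;
          }, {})
        );
      }
    }
  }
};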
I'm currently working on an app whose database schema changes frequently. This rapid change creates a big problem for my front-end Angular code which consumes the backend JSON API (which I don't have much control over) via Restangular; take the following code for example:
<ul>
<li ng-repeat="item in items">
<h2>{{item.label}}</h2>
</li>
</ul>
There will be a lot of template tags like {{item.label}} scattered throughout the front-end code, so whenever a property name changes from, say, "label" to "item_label", I'll need to remember where those tags are and change all of them. Of course, I could do a project-wide search and replace, but that's not really ideal from a DRY standpoint and it would also be a maintenance nightmare.
My question is, does Angular (or Restangular) provide a way to map model property names to custom ones like this in Backbone?
That way, I can just have something like this
{
label: model.item_label
}
then next time when the "item_label" is changed to something else, I can just update it in this configuration object and not worry about all the references in the templates.
Thanks.
The idea with Angular is that you can do whatever you want with the model. While this doesn't point you in any specific direction, it does give you the opportunity to implement it in your own OO manner. Say you have an app with a data object called Task; a model for tasks might look like:
function Task(initJson){
  this.name = initJson._name || 'New Task';
  this.completed = initJson.is_completed || false;
  this.doneDateTime = initJson.datetime || null;
}
Task.prototype = {
  save: function(){
    //do stuff with this and $http.put/post...
  },
  create: function(){
    //do stuff with this and $http.put/post
  }
  //....etc
};
All of this might be wrapped up in a factory.
myApp.factory('TaskFactory', function($http){
  var Tasks = []; //Or {};
  //above constructor function...
  //other helper methods...etc
  return {
    instance: Task,
    collection: Tasks,
    init: function(){} // get all tasks? run them through the constructor (Task), populate collection
  };
});
You could then edit properties on your constructor (one place - the only place - for each data type). This isn't ideal if you're using things like Restangular or $resource, as they're not equipped to be a large backing store; they just assume the properties that come across the wire, which for large, changing applications can sometimes be difficult to manage.
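For completeness, a short usage sketch of how a controller might consume that factory (the controller name and the behavior of init() are assumptions):
myApp.controller('TaskListCtrl', function($scope, TaskFactory) {
  // init() would fetch the raw JSON and run it through the Task constructor,
  // so templates only ever see the mapped property names (name, completed, ...).
  TaskFactory.init();
  $scope.tasks = TaskFactory.collection;
});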
I ended up going with Restangular's setResponseExtractor config property based on this FAQ answer.
It looks like this:
Restangular.setResponseExtractor(function(response, operation, what, url) {
  var newResponse = response;
  angular.forEach(newResponse.items, function(item) {
    item.label = item.item_label;
  });
  return newResponse;
});
I have a Breeze Web API controller with methods that accept parameters and do some work (filtering, sorting, etc.) on the server.
In querySucceeded, I'd like to do further querying on data.results. Is there a way to accomplish this? I got it working by exporting/importing data.results to a local manager and doing the projection from there. The projection is needed in order to use the observable collection in a vendor grid control.
var query = datacontext.EntityQuery.from("GetActiveCustomers")
.withParameters({ organizationID: "1" })
.toType("Customer")
.expand("Organization")
.orderBy('name');
var queryProjection = query
.select("customerID, organizationID, name, isActive, organization.name");
return manager.executeQuery(query)
.then(querySucceeded)
.fail(queryFailed);
function querySucceeded(data) {
    var exportData = manager.exportEntities(data.results);
    var localManager = breeze.EntityManager.importEntities(exportData);
    var resultProjection = localManager.executeQueryLocally(queryProjection);
    // This is the way I came up with to query data.results (exporting/importing the result entities to a local manager).
    // Is there a better way to do this? Querying data.results directly, e.g. data.results.where(...).select("customerID, organizationID...")
    if (collectionObservable) {
        collectionObservable(resultProjection);
    }
    log('Retrieved Data from remote data source', data, true);
}
You've taken an interesting approach. Normally a projection returns uncacheable objects, not entities. But you cast these to Customer (with the toType clause), which means you've created PARTIAL Customer entities with missing data.
I must hope you know what you are doing and have no intention of saving changes to these Customer entities while they remain partial, else calamity may ensue.
Note that when you imported the selected Customers to the "localManager" you did not bring along their related Organization entities. That means an expression such as resultProjection[0].organization will return null. That doesn't seem correct.
I understand that you want to hold on to a subset of the Customer partial entities and that there is no local query that could select that subset from cache because the selection criteria are only fully known on the server.
I think I would handle this need differently.
First, I would bury all of this logic inside the DataContext itself; the purpose of a DataContext is to encapsulate the details of data access so that callers (such as ViewModels) don't have to know internals. The DataContext is an example of the UnitOfWork (UoW) pattern, an abstraction that helps isolate the data access/manipulation concerns from ViewModel concerns.
Then I would store it either in a named array of the DataContext (DC) or of the ViewModel (VM), depending upon whether this subset was of narrow or broad interest in the application.
If only the VM instance cares about this subset, then the DC should return the data.results and let the VM hold them.
I do not understand why you are re-querying a local EntityManager for this set, nor why your local query is ALSO applying a projection ... which would return non-entity data objects to the caller. What is wrong with returning the (partial) Customer entities?
It seems you intend to further filter the subset on the client. Hey ... it's a JavaScript array. You can call stuffArray.filter(filterFunction).
Sure, that doesn't give you the Breeze LINQ-like query syntax ... but do you really need that? Why do you need ".select" over that set?
If that REALLY is your need, then I guess I understand why you're dumping the results into a separate EntityManager for local use. In that case, I believe you'll need more code in your query callback method to import the related Organization entities into that local EM so that someCustomer.organization returns a value. The ever-increasing trickiness of this approach makes me uncomfortable but it is your application.
If you continue down this road, I strongly encourage you to encapsulate it either in the DC or in some kind of service class. I wouldn't want my VMs to know about any of these shenanigans.
Best of luck.
Update 3 Oct 2013: Local cache query filtering on unmapped property
After sleeping on it, I have another idea for you that eliminates your need for a second EM in this use case.
You can add an unmapped property to the client-side Customer entity and set that property with a subset marker after querying the "GetActiveCustomers" endpoint on the server; you'd set the marker in the query callback.
Then you can compose a local query that filters on the marker value to ensure you only consider Customer objects from that subset.
Reference the marker value only in local queries. I don't know if a remote query filtering on the marker value will fail or simply ignore that criterion.
You won't need a separate local EntityManager; the Customer entities in your main manager carry the evidence of the server-side filtering. Of course the server will never have to deal with your unmapped property value.
Yes, a breeze local query can target unmapped properties as well as mapped properties.
Here's a small demonstration. Register a custom constructor like this:
function Customer() { /* Custom constructor ... which you register with the metadataStore */
  // Add unmapped 'subset' property to be queried locally.
  this.subset = Math.ceil(Math.random() * 3); // simulate values {1..3}
}
Later you query it locally. Here are examples of queries that do and do not reference that property:
// All customers in cache
var x = breeze.EntityQuery.from("Customers").using(manager).executeLocally();
// All customers in cache whose unmapped 'subset' property === 1.
var y = breeze.EntityQuery.from("Customers")
    .where("subset", 'eq', 1) // more criteria; knock yourself out
    .using(manager).executeLocally();
I trust you'll know how to set the subset property appropriately in your callback to the "GetActiveCustomers" query.
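As a sketch only (assuming the Knockout model library, where Breeze exposes entity properties as observables), the callback might tag the returned entities like this:
return manager.executeQuery(query).then(function (data) {
    data.results.forEach(function (customer) {
        // 'subset' is the unmapped property registered in the custom
        // constructor above; the value 1 is an arbitrary marker.
        customer.subset(1);
    });
    return data.results;
});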
HTH
Once you have queried for some data, Breeze stores those entities in local memory.
All you have to do is query locally when you need to filter the data some more.
You do this by telling the manager to query locally:
manager.executeQueryLocally(query);
Because querying the database is done asynchronously, you have to make sure you only read from local memory once there is something there. Follow the promises.
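A minimal sketch of that flow, using the remote query from the question (and assuming the "Customers" resource name is mapped to the Customer type in your metadata):
// First hit the server once; the results land in the manager's cache.
return manager.executeQuery(query).then(function () {
    // Then refine in memory, with no extra round trip.
    var localQuery = breeze.EntityQuery.from("Customers")
        .where("isActive", "==", true)
        .orderBy("name");
    var customers = manager.executeQueryLocally(localQuery);
    collectionObservable(customers);
});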
I have a collection that can potentially contain thousands of models. I have a view that displays a table with 50 rows for each page.
Now I want to be able to cache my data so that when a user loads page 1 of the table and then clicks page 2, the data for page 1 (rows #01-50) is cached, and when the user clicks page 1 again, Backbone won't have to fetch it again.
Also, I want my collection to be able to refresh updated data from the server without performing a RESET, since RESET deletes all the models in a collection, including references to existing models that may be in use elsewhere in my app. Is it possible to fetch data from the server and only update or add models where necessary, by comparing the existing data with the newly arriving data?
In my app, I addressed the reset question by adding a new method called fetchNew:
app.Collection = Backbone.Collection.extend({
  // fetch list without overwriting existing objects (copied from fetch())
  fetchNew: function(options) {
    options = options || {};
    var collection = this,
        success = options.success;
    options.success = function(resp, status, xhr) {
      _(collection.parse(resp, xhr)).each(function(item) {
        // added this conditional block
        if (!collection.get(item.id)) {
          collection.add(item, {silent: true});
        }
      });
      if (!options.silent) {
        collection.trigger('reset', collection, options);
      }
      if (success) success(collection, resp);
    };
    return (this.sync || Backbone.sync).call(this, 'read', this, options);
  }
});
This is pretty much identical to the standard fetch() method, except for the conditional statement checking for the item's existence, and using add() by default rather than reset. Unlike simply passing {add: true} in the options argument, it allows you to retrieve sets of models that may overlap with what you already have loaded - using {add: true} will throw an error if you try to add the same model twice.
This should solve your caching problem, assuming your collection is set up so that you can pass some kind of page parameter in the options to tell the server which page of results to send back. You'll probably want to add some sort of data structure within your collection to track which pages you've loaded, to avoid doing unnecessary requests, e.g.:
app.BigCollection = app.Collection.extend({
  initialize: function() {
    this.loadedPages = {};
  },
  loadPage: function(pageNumber) {
    if (!this.loadedPages[pageNumber]) {
      this.fetchNew({
        page: pageNumber,
        success: function(collection) {
          collection.loadedPages[pageNumber] = true;
        }
      });
    }
  }
});
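Hypothetical usage from a paginated table view (this assumes fetchNew forwards the page option to the server as a query parameter):
var rows = new app.BigCollection();
rows.loadPage(1); // fetches rows for page 1
rows.loadPage(2); // fetches page 2; the page 1 models stay in the collection
rows.loadPage(1); // already marked as loaded, so no request is made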
Backbone.Collection.fetch has an option {add:true} which will add models into a collection instead of replacing the contents.
myCollection.fetch({add:true})
So, in your first scenario, the items from page2 will get added to the collection.
As far as your 2nd scenario goes, there's currently no built-in way to do that.
According to Jeremy, that's something you're supposed to do in your app, and not part of Backbone.
Backbone seems to have a number of issues when used for collaborative apps where another user might be updating models that you have client-side. I get the feeling that Jeremy tends to focus on single-user applications, and the ticket discussion above exemplifies that.
In your case, the simplest way to handle your second scenario is to iterate over your collection and call fetch() on each model. But that's not very good for performance.
For a better way to do it, I think you're going to have to override collection._add and go down the route dalyons did in this pull request.
I managed to get update into Backbone 0.9.9 core. Check it out, as it's exactly what you need: http://backbonejs.org/#Collection-update.
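For example (a sketch against Backbone 0.9.9, where this was called update; later releases renamed it to set):
// Merge the server response into the existing collection instead of resetting:
// existing models are updated in place, new ones are added, and models
// missing from the response are removed (pass {remove: false} to keep them).
myCollection.fetch({ update: true });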
I have a situation using backbone.js where I have a collection of models, and some additional information about the models. For example, imagine that I'm returning a list of amounts: they have a quantity associated with each model. Assume now that the unit for each of the amounts is always the same: say quarts. Then the json object I get back from my service might be something like:
{
  dataPoints: [
    { quantity: 5 },
    { quantity: 10 },
    ...
  ],
  unit: "quarts"
}
Now Backbone collections have no real mechanism for natively associating this metadata with the collection, but it was suggested to me in this question: Setting attributes on a collection - backbone js, that I can extend the collection with a .meta(property, [value])-style function - which is a great solution. Naturally, it follows that we'd like to be able to cleanly retrieve this data from a JSON response like the one above.
Backbone.js gives us the parse(response) function which, in combination with the url attribute, lets us specify where to extract the collection's list of models from. There is no way that I'm aware of, however, to make a more intelligent function without overloading fetch(), which would throw away the partial functionality that is already available.
My question is this: is there a better option than overloading fetch() (and trying to get it to call its superclass implementation) to achieve what I want to achieve?
Thanks
Personally, I would wrap the Collection inside another Model, and then override parse, like so:
var DataPointsCollection = Backbone.Collection.extend({ /* etc etc */ });

var CollectionContainer = Backbone.Model.extend({
  defaults: {
    dataPoints: new DataPointsCollection(),
    unit: "quarts"
  },
  parse: function(obj) {
    // update the inner collection
    this.get("dataPoints").refresh(obj.dataPoints);
    // this mightn't be necessary
    delete obj.dataPoints;
    return obj;
  }
});
The Collection.refresh() call updates the inner collection with the new values. Passing a custom meta value into the Collection, as previously suggested, might stop you from being able to bind to those meta values.
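A quick usage sketch of that wrapper (the url and callback are made up for illustration):
var container = new CollectionContainer();
container.url = "/datapoints"; // hypothetical endpoint
container.fetch({
  success: function(model) {
    model.get("unit");       // => "quarts" (or whatever the server sent)
    model.get("dataPoints"); // => the inner DataPointsCollection of models
  }
});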
This metadata does not belong on the collection. It belongs in the name or some other descriptor of the code. Your code should declaratively know that the collection it has is only full of quarts elements. It already does, since the url points to quarts elements.
var quartsCollection = new FooCollection();
quartsCollection.url = quartsUrl;
quartsCollection.fetch();
If you really need to get this data, why don't you just call:
_.uniq(quartsCollection.pluck("unit"))[0];