How to hydrate Project name in SnapshotStore - javascript

When using the SnapshotStore, I have been unable to hydrate the "Name" property of the objects that are returned as part of my query. I get the correct OID for the project, but I would like to display the string name of the project instead of the OID. How can I go about doing that?
This is the code that I am using to query, but adding the "Project" property to the hydrate array does not seem to matter. If I comment out the hydrate line entirely, the state and resolution come back unhydrated and not readable (by most people), so I know that it is at least working.
doSearch: function(query, fields, sort, pageSize, callback) {
    var transformStore = Ext.create('Rally.data.lookback.SnapshotStore', {
        context: {
            workspace: this.context.getWorkspace(),
            project: this.context.getProject()
        },
        fetch: fields,
        find: query,
        autoLoad: true,
        hydrate: ["State", "Resolution", "Project"],
        listeners: {
            scope: this,
            load: this.processSnapshots
        }
    });
},
"We could not find the user-friendly form for the following: 'Project':7579240995" - is what I get when trying to include "Project" in the hydrate field.
I read somewhere that hydrate only works with drop down menus. Is that correct? And if so, how would I be able to show the project name easily for each object that the query is returning?

Unfortunately Project is not a hydratable field. In general the fields that can be hydrated are dropdown fields as you mentioned in your question.
The best way to do what you need is to use a Rally.data.WsapiDataStore to query for the projects in your current workspace and to build an in-memory map of OID to name.
var projects = {};
Ext.create('Rally.data.WsapiDataStore', {
    model: 'Project',
    autoLoad: true,
    limit: Infinity,
    fetch: ['Name', 'ObjectID'],
    context: this.getContext().getDataContext(),
    listeners: {
        load: function(store, records) {
            Ext.Array.each(records, function(record) {
                projects[record.get('ObjectID')] = record.get('Name');
            });
        }
    }
});
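One way to consume that map is in the load handler from the question. Here is a minimal sketch, assuming the project store above has finished loading before the snapshots are processed (in a real app you would coordinate the two loads explicitly), and using a made-up ProjectName display field:

processSnapshots: function(store, records) {
    // Build display rows, swapping the Project OID for its name via the map above
    var rows = Ext.Array.map(records, function(record) {
        var oid = record.get('Project'); // Lookback snapshots store the project OID
        return {
            ProjectName: projects[oid] || oid, // fall back to the OID if the map lacks it
            State: record.get('State'),
            Resolution: record.get('Resolution')
        };
    });
    // ... use `rows` to populate your grid or chart
}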

GraphQL Unions and Sequelize

I'm having trouble understanding how to retrieve information from a GraphQL Union. I have something in place like this:
const Profile = StudentProfile | TeacherProfile
Then in my resolver I have:
Profile: {
    __resolveType(obj, context, info) {
        if (obj.studentId) {
            return 'StudentProfile'
        } else if (obj.salaryGrade) {
            return 'TeacherProfile'
        }
    },
},
This doesn't throw any errors, but when I run a query like this:
query {
    listUsers {
        id
        firstName
        lastName
        email
        password
        profile {
            __typename
            ... on StudentProfile {
                studentId
            }
            ... on TeacherProfile {
                salaryGrade
            }
        }
    }
}
This returns everything except for profile which just returns null. I'm using Sequelize to handle my database work, but my understanding of Unions was that it would simply look up the relevant type for the ID being queried and return the appropriate details in the query.
If I'm mistaken, how can I get this query to work?
edit:
My list user resolver:
const listUsers = async (root, { filter }, { models }) => {
    const Op = Sequelize.Op
    return models.User.findAll(
        filter
            ? {
                  where: {
                      [Op.or]: [
                          { email: filter },
                          { firstName: filter },
                          { lastName: filter },
                      ],
                  },
              }
            : {},
    )
}
User model relations (very simple and has no relation to profiles):
User.associate = function(models) {
    User.belongsTo(models.UserType)
    User.belongsTo(models.UserRole)
}
and my generic user resolvers:
User: {
    async type(type) {
        return type.getUserType()
    },
    async role(role) {
        return role.getUserRole()
    },
},
The easiest way to go about this is to utilize a single table (i.e. single table inheritance).
1. Create a table that includes columns for all the types. For example, it would include both student_id and salary_grade columns, even though these will be exposed as fields on separate types in your schema.
2. Add a "type" column that identifies each row's actual type. In practice, it's helpful to name this column __typename (more on that later).
3. Create a Sequelize model for your table. Again, this model will include all attributes, even if they don't apply to a specific type.
4. Define your GraphQL types and your interface/union type. You can provide a __resolveType method that returns the appropriate type name based on the "type" field you added. However, if you named this field __typename and populated it with the names of the GraphQL types you are exposing, you can actually skip this step!
5. You can use your model like normal, utilizing find methods to query your table or creating associations with it. For example, you might add a relationship like User.belongsTo(Profile) and then eager load it: User.findAll({ include: [Profile] }).
The biggest drawback to this approach is you lose database- and model-level validation. Maybe salary_grade should never be null for a TeacherProfile but you cannot enforce this with a constraint or set the allowNull property for the attribute to false. At best, you can only rely on GraphQL's type system to enforce validation but this is not ideal.
You can take this a step further and create additional Sequelize models for each individual "type". These models would still point to the same table, but would only include attributes specific to the fields you're exposing for each type. This way, you could at least enforce "required" attributes at the model level. Then, for example, you use your Profile model for querying all profiles, but use the TeacherProfile when inserting or updating a teacher profile. This works pretty well, just be mindful that you cannot use the sync method when structuring your models like this -- you'll need to handle migrations manually. You shouldn't use sync in production anyway, so it's not a huge deal, but definitely something to be mindful of.
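To make the shape concrete, here is a rough sketch of the single-table approach described above (model, field, and type names are illustrative assumptions, not the poster's code; the union is named ProfileResult here to avoid clashing with the model name):

const { Sequelize, DataTypes } = require('sequelize')

const sequelize = new Sequelize('sqlite::memory:')

// One table backs both profile kinds; __typename records which GraphQL type a row is,
// e.g. await Profile.create({ __typename: 'TeacherProfile', salaryGrade: 'A' })
const Profile = sequelize.define('Profile', {
    __typename: { type: DataTypes.STRING },  // 'StudentProfile' or 'TeacherProfile'
    studentId: { type: DataTypes.STRING },   // only meaningful for students
    salaryGrade: { type: DataTypes.STRING }  // only meaningful for teachers
})

// SDL to add alongside your existing Query/listUsers types; because each row
// carries __typename, default type resolution can pick the concrete type
// without a custom __resolveType.
const typeDefs = `
    type StudentProfile { studentId: String }
    type TeacherProfile { salaryGrade: String }
    union ProfileResult = StudentProfile | TeacherProfile
`

const resolvers = {
    User: {
        profile: async (user) => {
            const profile = await user.getProfile()         // assumes User.belongsTo(Profile)
            return profile && profile.get({ plain: true })  // plain object keeps __typename visible
        }
    }
}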

How to improve the performance of this MongoDB / Mongoose query?

I have a GET all products endpoint which is taking an extremely long time to return responses:
Product.find(find, function(err, _products) {
    if (err) {
        res.status(400).json({ error: err })
        return
    }
    res.json({ data: _products })
}).sort([['_id', -1]]).populate([
    { path: 'colors', model: 'Color' },
    { path: 'size', model: 'Size' },
    { path: 'price', model: 'Price' }
]).lean()
This query is taking up to 4 seconds, despite there only being 60 documents in the products collection.
This query came from a previous developer, and I'm not so familiar with Mongoose.
What are the performance consequences of sort and populate? I assume populate is to blame here? I am not really sure what populate is doing, so I'm unclear how to either avoid it or index at a DB level to improve performance.
From the Mongoose docs, "Population is the process of automatically replacing the specified paths in the document with document(s) from other collection(s)"
So the ObjectId reference on your model gets replaced by an entire Mongoose document. Doing so on multiple paths in one query will therefore slow down your app. If you want to keep the same code structure, you can use select to specify which fields of the referenced document should be populated, e.g. { path: 'colors', model: 'Color', select: 'name' }. So instead of returning all the data of the Color document, you just get the name.
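For example, the query from the question could be narrowed like this (a sketch; the selected field names are assumptions, so adjust them to your Color, Size, and Price schemas):

Product.find(find)
    .sort({ _id: -1 })
    .populate([
        { path: 'colors', model: 'Color', select: 'name' },
        { path: 'size', model: 'Size', select: 'name' },
        { path: 'price', model: 'Price', select: 'amount currency' }
    ])
    .lean()
    .exec(function(err, _products) {
        if (err) {
            return res.status(400).json({ error: err })
        }
        res.json({ data: _products })
    })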
You can also call cursor() to stream query results from MongoDB:
var cursor = Person.find().cursor();
cursor.on('data', function(doc) {
    // Called once for every document
});
cursor.on('close', function() {
    // Called when done
});
You can read more about the cursor function in the Mongoose documentation here.
In general, try to only use populate for specific tasks like getting the name of a color for only one product.
sort will not cause any major performance issues until you reach much larger databases.
Hope it helps!

Kendo UI TreeListViews with complex data

Background Information
In short, I'm looking to achieve "mostly" what's shown here ...
http://demos.telerik.com/kendo-ui/treelist/remote-data-binding
... except it's a bit of a mind bender, and in my case the data comes from more than one base endpoint URL.
I am trying to build a generic query-building page that allows users to pick a context, then a "type" (or endpoint), and then from there build a custom query on that endpoint.
I have managed to get to the point where I can do this for a simple query, but now I'm trying to handle more complex scenarios where I retrieve child, or deeper, data items from the endpoint in question.
With this in mind ...
The concept
I have many endpoints, not all of which are OData, but they mostly follow OData v4 rules. Having selected an endpoint, I am trying to build a "TreeGrid" that exposes the expansion options available to the query.
All my endpoints have a custom function called GetMetadata() which describes the type information for that endpoint, where an endpoint is for the most part a REST CRUD<T> implementation that may or may not have some further custom functions on it to handle a few other business scenarios.
So, given an HTTP GET request to something like ...
~/SomeContext/SomeType/GetMetadata()
... I would get back an object that looks much like an MVC / WebAPI metadata container.
That object has a property called "Properties", some of which are scalar and some of which are complex (as defined in the data).
I am trying to build a TreeListDataSource or a HierarchicalDataSource object that I can bind to the Kendo TreeList control for only the complex properties, one that dynamically builds the right GET URL for the metadata and lists the complex properties for that type based on the property information from the parent type, with the root endpoint being defined by other controls on the page.
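For context, a metadata response might look roughly like this (a purely hypothetical shape based on the description above, using the property names my schema maps below, not actual API output):

// Hypothetical GetMetadata() response, for illustration only
{
    "Name": "SomeType",
    "Properties": [
        { "Name": "Id", "Type": "Edm.Int32", "ServerType": "System.Int32", "DisplayName": "Id" },
        { "Name": "Title", "Type": "Edm.String", "ServerType": "System.String", "DisplayName": "Title" },
        { "Name": "Owner", "Type": "SomeContext.User", "ServerType": "MyApp.Models.User", "DisplayName": "Owner" }
    ]
}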
The Problem
I can't seem to figure out how to configure the Kendo DataSource object for the TreeGrid to get the desired output, I think for one of two possible reasons ...
1. The TreeListDataSource object, as per the demo shown here: http://demos.telerik.com/kendo-ui/treelist/local-data-binding , seems to imply that the hierarchy-based control wants a flat data source.
2. I can't figure out how to configure the datasource in such a way that I could pass in the parent meta information (the data item from the source) in order to build the right endpoint URL for the GET request.
function getDatasource(rootEndpoint) {
    return {
        pageSize: 100,
        filter: { logic: 'and', filters: [{ /* TODO: possibly filter properties in here? */ }] },
        type: 'json',
        transport: {
            read: {
                url: function (data) {
                    // TODO: figure out how to set this based on parent
                    var result = my.api.rootUrl + rootEndpoint + "/GetMetadata()";
                    return result;
                },
                dataType: 'json',
                beforeSend: my.api.beforeSend
            }
        },
        schema: {
            model: {
                id: 'Name',
                fields: {
                    Type: { field: 'Type', type: 'string' },
                    Template: { field: 'Template', type: 'string' },
                    DisplayName: { field: 'DisplayName', type: 'string' },
                    ShortDisplayName: { field: 'ShortDisplayName', type: 'string' },
                    Description: { field: 'Description', type: 'string' },
                    ServerType: { field: 'ServerType', type: 'string' }
                }
            },
            parse: function (data) {
                // The object "data" passed in here is a meta container: a single object that contains a property array
                $.each(data.Properties, function (idx, item) {
                    item.ParentType = data;
                    item.Parent = ??? where do I get this ???
                });
                return data.Properties;
            }
        }
    };
}
Some of my problem may be down to the fact that metadata inherently doesn't have primary keys. I wondered if using parse to attach a generated GUID as the key might be an idea, but I think Kendo then uses the id in its request to the API when asking for children.
So it turns out that Kendo is just not geared up to do anything more than serve up data from a single endpoint, and the kind of thing I'm doing here is a little bit more complex than that; furthermore, because the data is "not entity type data" I don't have common things like keys and foreign keys.
With that in mind I chose to take the problem away from Kendo altogether and simply handle the situation with a bit of a "hack that behaves like a normal Kendo expand, but not really" ...
In a TreeGrid, when Kendo shows an expandable row it renders something like this in the first cell ...
With no expanded data, or a data source that is bound to a server, this cell is not rendered.
So I faked it in place and added an extra class, .not-loaded, to my version.
That meant I could hook up a custom block of JS on click of my "fake expand" to build the right URL, do my custom stuff, fake / create some ids, then hand the data to the data source.
expandList.on('click', '.k-i-expand.not-loaded', function (e) {
    var source = expandList.data("kendoTreeList");
    var cell = $(e.currentTarget).closest('td');
    var selectedItem = source.dataItem($(e.currentTarget).closest('tr'));
    my.type.get(selectedItem.ServerType, ctxList.val(), function (meta) {
        var newData = JSLINQ(meta.Properties)
            .Select(function (i) {
                i.id = selectedItem.id + "/" + i.Name;
                i.parentId = selectedItem.id;
                i.Selected = my.type.ofProperty.isScalar(i);
                i.TemplateSource = buildDefaultTemplateSourceFor(i);
                return i;
            })
            .ToArray();
        for (var i in newData) {
            source.dataSource.add(newData[i]);
        }
        $(e.currentTarget).remove();
        source.expand(selectedItem);
        buildFilterGrid();
        generate();
    });
});
This way, Kendo gets given what it expects for a TreeList ("a flat set with parent-child relationships") and I do all the heavy lifting.
I used a bit of JSLINQ magic to make the heavy lifting a bit more "C#-like" (I prefer C#, after all). In short, it grabs the parent item that was expanded and uses its id as the parent, then generates a new id for the current item as parent.id + "/" + current.name. This way everything is unique, as two properties on an object can't have the same name, and where two objects are referenced by the same parent the prefix of the parent's property name makes the reference unique.
It's not the ideal solution, but this is how things go with Telerik: a hack here, a hack there, and usually it's possible to make it work!
Something tells me there's a smarter way to do this, though!

EmberJS: Object as Query Param to Refresh Model

I followed the Query Params guide (http://guides.emberjs.com/v1.11.0/routing/query-params/) and it worked great. Specifically, refreshing the model did exactly what I wanted.
I'm moving the filters to the JSON API spec, where filtering takes place in a filter object. So rather than:
http://localhost:3000/accounts?id=1
The server responds to:
http://localhost:3000/accounts?filter[id]=1
I tried to get the query params to work refreshing the model based on an object, but it doesn't seem to update.
// app/controllers/accounts/index.js
import Ember from 'ember';

export default Ember.Controller.extend({
    queryParams: ['filter', 'sort'],
    filter: {},
    sort: '-id'
});
// app/routes/accounts/index.js
import Ember from 'ember';

export default Ember.Route.extend({
    queryParams: {
        filter: { refreshModel: true },
        sort: { refreshModel: true }
    },
    model: function(params) {
        return this.store.find('account', params);
    }
});
// template
<th>{{input type="text" placeholder="ID" value=filter.id}}</th>
Is it possible to have query params work with an object?
This answer is as of Ember version 1.13.0-beta.1+canary.
The short answer: No. Query params will not work with an object.
The long answer:
As of now, a private function named _serializeQueryParams in the Router serializes the queryParams.
_serializeQueryParams(targetRouteName, queryParams) {
    var groupedByUrlKey = {};

    forEachQueryParam(this, targetRouteName, queryParams, function(key, value, qp) {
        var urlKey = qp.urlKey;
        if (!groupedByUrlKey[urlKey]) {
            groupedByUrlKey[urlKey] = [];
        }
        groupedByUrlKey[urlKey].push({
            qp: qp,
            value: value
        });
        delete queryParams[key];
    });

    for (var key in groupedByUrlKey) {
        var qps = groupedByUrlKey[key];
        var qp = qps[0].qp;
        queryParams[qp.urlKey] = qp.route.serializeQueryParam(qps[0].value, qp.urlKey, qp.type);
    }
},
qp.urlKey would evaluate to 'filter' in your example, and the object would be serialized as '[object Object]'. Even though you could override the serializeQueryParam method in your route, that wouldn't help because the query param key would still be 'filter', and you'd need it to be 'filter%5Bid%5D'.
Based on this comment in the Ember Discussion Forum, it sounds like object query params are unlikely, and you'd be better off just flattening and unflattening the filtered fields.
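A minimal sketch of that flattening approach (the filterId property and the filter_id URL key are my own illustrative names, not part of the original answer):

// app/controllers/accounts/index.js
import Ember from 'ember';

export default Ember.Controller.extend({
    // Expose a flat query param instead of the nested filter object
    queryParams: [{ filterId: 'filter_id' }, 'sort'],
    filterId: '',
    sort: '-id'
});

// app/routes/accounts/index.js
import Ember from 'ember';

export default Ember.Route.extend({
    queryParams: {
        filterId: { refreshModel: true },
        sort: { refreshModel: true }
    },
    model: function(params) {
        // Unflatten the param back into the filter object the API expects
        var query = { sort: params.sort };
        if (params.filterId) {
            query.filter = { id: params.filterId };
        }
        return this.store.find('account', query);
    }
});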
I know this is a bit late but you can use this workaround to allow for objects with query params. It's pretty easy to get working and so far I haven't found any issues with it.
I ran into the same problem when building an Ember app on top of my JSON API (http://jsonapi.org/).
The JSON API specification provides recommended syntax for both paging and filtering that requires object based query params.
For paging it suggests syntax like this:
/books?page[size]=100&page[number]=1
and for filtering it suggest syntax like this:
/books?filter[author]=Bob
While Ember.js query params (as of Ember v2.1) do not support this out of the box, it is fairly simple to get working. In your controller you should map a controller property to the query param "object" as a string.
So for example, in the above "filter" example you would map a controller property called filterByAuthorValue to the query param filter[author].
The code to do this would look like this:
export default Ember.Controller.extend({
    queryParams: ['sort', {
        filterByAuthorValue: 'filter[author]'
    }],
    sort: '',
    filterByAuthorValue: ''
});
Note in the example above I also have a query param called sort (which also follows JSON API recommendations but doesn't require an object). For more information on mapping a controller property to a query param see this section of the official Ember.js guide:
http://guides.emberjs.com/v2.1.0/routing/query-params/#toc_map-a-controller-s-property-to-a-different-query-param-key
Once you have the query param created you then need to handle it in your route. First, the route should force the model to be refreshed when the controller property filterByAuthorValue changes:
export default Ember.Route.extend({
    queryParams: {
        sort: {
            refreshModel: true
        },
        filterByAuthorValue: {
            refreshModel: true
        }
    }
});
Finally, you need to translate the controller property filterByAuthorValue into an actual filter object when you load the model in the route's model hook. The full route code would then look like:
export default Ember.Route.extend({
    queryParams: {
        sort: {
            refreshModel: true
        },
        filterByAuthorValue: {
            refreshModel: true
        }
    },
    model: function(params) {
        // Translate the flat controller property into the filter object the API expects
        if (params.filterByAuthorValue) {
            params.filter = {};
            params.filter['author'] = params.filterByAuthorValue;
            delete params.filterByAuthorValue;
        }
        return this.store.query('book', params);
    }
});
Setting things up like this allows an object-based query param to be used with Ember and thus follows the JSON API recommendations.
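For completeness, the controller property can be bound to an input just like the filter.id input in the question (this binding is my own illustration, not part of the original answer); typing "Bob" then produces a URL like /books?filter%5Bauthor%5D=Bob:

<th>{{input type="text" placeholder="Author" value=filterByAuthorValue}}</th>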
The above has been tested with the following versions:
Ember : 2.1.0
Ember Data : 2.1.0
jQuery : 1.11.3

setting fuelux datagrid source from backbone collection

I am trying to set the fuelux datagrid source from my Backbone collection. The example's source is here: https://github.com/ExactTarget/fuelux/tree/master/sample .
I tried something like:
(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        define(factory);
    } else {
        root.sampleData = factory();
    }
}(this, function () {
    return {
        "geonames": new mycollection // this will return a new collection, as in the example
    };
}));
And my Backbone render contains the following code to instantiate the data source:
var dataSource = new StaticDataSource({
    columns: [
        {
            property: 'username',
            label: 'Name',
            sortable: true
        },
        {
            property: 'username',
            label: 'Country',
            sortable: true
        }
    ],
    data: this.collection,
    delay: 250
});

$('#MyGrid').datagrid({
    dataSource: dataSource,
    stretchHeight: true
});
I get the error "StaticDataSource is not defined".
Can anyone explain this to me? I would be grateful for any reference to a tutorial that explains how to set the datasource data from a Backbone collection; fuelux doesn't have good documentation, in my view.
The sample datasource at https://github.com/ExactTarget/fuelux/blob/master/sample/datasource.js allows you to populate the datagrid with a simple JavaScript object, which you could get from a Backbone collection by calling .toJSON() on the collection. Then, instantiate the datasource as follows:
https://github.com/ExactTarget/fuelux/blob/master/index.html#L112-L138
(replace the columns with what's needed for your own grid, and replace data: sampleData.geonames with data: yourCollection.toJSON())
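Roughly, the datasource creation would look like this (a sketch; the column definitions are taken from your snippet and the second property name is just an illustration):

// Build a StaticDataSource from a one-time snapshot of the Backbone collection
var dataSource = new StaticDataSource({
    columns: [
        { property: 'username', label: 'Name', sortable: true },
        { property: 'country', label: 'Country', sortable: true } // hypothetical attribute
    ],
    data: yourCollection.toJSON(), // plain array of objects, as the sample datasource expects
    delay: 250
});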
You should then be able to instantiate the datagrid as follows:
https://github.com/ExactTarget/fuelux/blob/master/index.html#L144-L147
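That is essentially the same call you already have:

$('#MyGrid').datagrid({
    dataSource: dataSource,
    stretchHeight: true
});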
NOTE: this takes a one-time snapshot of your data and provides it to the datagrid. If you want your datagrid to support live queries against your Backbone collection it would just be a matter of providing a datasource that makes those queries against your collection. The datasource pattern allows end developers to connect a datagrid to any kind of data provider. Here is another example that uses the Flickr API: http://dailyjs.com/2012/10/29/fuel-ux/
I don't know of any existing datasource examples specifically for Backbone but if someone doesn't beat me to it I may create one - I really like Backbone also.
