I've placed an AJAX GET call in an Enyo view. The GET calls a web service which returns an array of records including the column headers.
My aim is to dynamically build a table with this returned array, where a row is created for each index and columns for each header within the index.
What I do know how to do in Enyo is display a single record by mapping the AJAX response fields to components:
this.$.actionsTaken.setContent( inResponse.ActionsTaken);
But I'm not sure how to do that dynamically and create the table on the fly.
So, for example, when I inspect the web service response, index 0 contains the following (where ActionsTaken, Application and EM are the column headers):
{
    ActionsTaken: "Tested uptime",
    Application: "2011 Hanko",
    EM: "EM102 "
}
Question:
How can you dynamically build a table from a JSON array?
The AJAX GET:
fetch: function() {
    var ajax = new enyo.Ajax({
        url: "http://testservice.net/WebService/GetHistory",
        async: false,
        handleAs: "json",
        xhrFields: {withCredentials: true}
    });
    ajax.go(this.data);
    ajax.response(this, "gotResponse");
    ajax.error(this, function(inSender, inError) {
        Misc.hideMask();
        ViewLibrary.back();
        Misc.showToast("Error retrieving data");
    });
},
gotResponse: function(inSender, inResponse) {
    var debugVar = inResponse;
    this.$.actionsTaken.setContent(inResponse.ActionsTaken);
    this.$.configurationItemLogicalName_value.setContent(inResponse.Application);
    this.$.eM.setContent(inResponse.EM);
},
The components that hold the data values:
{name:"outage_table", kind: "FittableRows",components:[
{content: "Actions Taken", name: "actionsTaken", },
{content: "Application", name: "application", },
{content: "EM", name: "eM", },
]}
Depending on the complexity of your data, you might be able to do this fairly simply: iterate through your array, and for each object iterate through its keys, creating a column for each key and filling it with that key's data.
Something like:
for (var k in theObject) {
// make column k
// add theObject[k] to it
}
I think the problem is that you have created named components that match this example data, but it's unknown whether the keys will always be the same.
There is a kind (enyo.DataTable, which, surprisingly, I have never used) that you might use instead. It lets you add rows (no headers) so you would make your first row from all the object keys, then the next row would be those keys' data. It is derived from DataRepeater so there may be some implementation to work out, such as using an enyo.Collection to store your data and then set the collection to the DataTable.
The other alternative, closer to what you already have, would be to dynamically create the components as you need them:
this.$.outage_table.createComponents([{...}]);
this.$.outage_table.render(); // need to re-render when dynamically adding components
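If the keys can vary, one option is to build the row components directly from the response. Below is a rough, untested sketch, assuming inResponse is an array of flat records like the index 0 example above and that the layout library's FittableColumns kind is available:
gotResponse: function(inSender, inResponse) {
    var rows = [];
    // Header row built from the keys of the first record.
    if (inResponse && inResponse.length) {
        var headers = [];
        for (var h in inResponse[0]) {
            if (inResponse[0].hasOwnProperty(h)) {
                headers.push({content: h});
            }
        }
        rows.push({kind: "FittableColumns", components: headers});
    }
    // One row per record, one cell per key.
    enyo.forEach(inResponse, function(record) {
        var cells = [];
        for (var key in record) {
            if (record.hasOwnProperty(key)) {
                cells.push({content: record[key]});
            }
        }
        rows.push({kind: "FittableColumns", components: cells});
    }, this);
    this.$.outage_table.destroyClientControls(); // clear any previously built rows
    this.$.outage_table.createComponents(rows, {owner: this});
    this.$.outage_table.render(); // re-render after dynamically adding components
},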
Background Information
In short, I'm looking to achieve "mostly" what's shown here ...
http://demos.telerik.com/kendo-ui/treelist/remote-data-binding
... except it's a bit of a mind bender and, in my case, the data comes from more than one base endpoint URL.
I am trying to build a generic query-building page that allows users to pick a context, then a "type" (or endpoint), and then build a custom query on that endpoint.
I have managed to get this working for a simple query, but now I'm trying to handle more complex scenarios where I retrieve child, or deeper, data items from the endpoint in question.
With this in mind ...
The concept
I have many endpoints, not all of which are OData but which mostly follow OData v4 rules, and so, having selected an endpoint, I am trying to build a "TreeGrid" that exposes the expansion options available to the query.
All my endpoints have a custom function called GetMetadata() which describes the type information for that endpoint; an endpoint is, for the most part, a REST CRUD<T> implementation which may or may not have further custom functions on it to handle a few other business scenarios.
So, given a HTTP get request to something like ...
~/SomeContext/SomeType/GetMetadata()
... I would get back an object that looks much like an MVC / WebAPI Metadata container.
That object has a property called "Properties", an array in which some entries are scalar and some are complex (as defined in the data).
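For illustration only, the metadata container might look roughly like this (the values are hypothetical; the property names are the ones referenced in the datasource schema further down):
{
    "Name": "SomeType",
    "Properties": [
        { "Name": "Id", "Type": "Edm.Int32", "DisplayName": "Id", "ServerType": "System.Int32" },
        { "Name": "Orders", "Type": "Collection", "DisplayName": "Orders", "ServerType": "Order" }
    ]
}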
I am trying to build a TreeListDataSource or HierarchicalDataSource object that I can bind to the Kendo TreeList control for only the complex properties, one that dynamically builds the right GET URL for the metadata and lists the complex properties for each type based on the property information from the parent type, with the root endpoint being defined by other controls on the page.
The Problem
I can't seem to figure out how to configure the Kendo datasource object for the TreeGrid to get the desired output, I think for possibly one of two reasons:
The TreeListDataSource object, as per the demo shown here (http://demos.telerik.com/kendo-ui/treelist/local-data-binding), seems to imply that the hierarchy-based control wants a flat data source.
I can't figure out how to configure the datasource in such a way that I could pass in the parent meta information (the data item from the source) in order to build the right endpoint URL for the GET request.
function getDatasource(rootEndpoint) {
    return {
        pageSize: 100,
        filter: { logic: 'and', filters: [{ /* TODO: possibly filter properties in here? */ }] },
        type: 'json',
        transport: {
            read: {
                url: function (data) {
                    // TODO: figure out how to set this based on parent
                    var result = my.api.rootUrl + rootEndpoint + "/GetMetadata()";
                    return result;
                },
                dataType: 'json',
                beforeSend: my.api.beforeSend
            }
        },
        schema: {
            model: {
                id: 'Name',
                fields: {
                    Type: { field: 'Type', type: 'string' },
                    Template: { field: 'Template', type: 'string' },
                    DisplayName: { field: 'DisplayName', type: 'string' },
                    ShortDisplayName: { field: 'ShortDisplayName', type: 'string' },
                    Description: { field: 'Description', type: 'string' },
                    ServerType: { field: 'ServerType', type: 'string' }
                }
            },
            parse: function (data) {
                // "data" here will be a meta container: a single object that contains a Properties array.
                $.each(data.Properties, function (idx, item) {
                    item.ParentType = data;
                    item.Parent = ???; // where do I get this ???
                });
                return data.Properties;
            }
        }
    };
}
Some of my problem may be down to the fact that metadata inherently doesn't have primary keys. I wondered if using parse to attach a generated GUID as the key might be an idea, but then I think Kendo uses the id when asking the API for children.
So it turns out that Kendo is just not geared up to do anything more than serve up data from a single endpoint, and the kind of thing I'm doing here is a bit more complex than that; furthermore, because the data is not entity-type data, I don't have common things like keys and foreign keys.
With that in mind I chose to take the problem away from Kendo altogether and simply handle the situation with a bit of a hack that "behaves like a normal Kendo expand, but not really" ...
In a TreeGrid, when Kendo shows an expandable row it renders something like this in the first cell ...
With no expanded data, or with a data source that is bound to a server, this cell is not rendered.
So I faked it in place and added an extra class, .not-loaded, to my version.
That meant I could hook up a custom block of JS on click of my "fake expand" to build the right URL, do my custom stuff, fake/create some ids, then hand the data to the data source.
expandList.on('click', '.k-i-expand.not-loaded', function (e) {
    var source = expandList.data("kendoTreeList");
    var cell = $(e.currentTarget).closest('td');
    var selectedItem = source.dataItem($(e.currentTarget).closest('tr'));

    my.type.get(selectedItem.ServerType, ctxList.val(), function (meta) {
        // Give each child property an id derived from its parent's id so the flat
        // set carries the parent/child relationships the TreeList expects.
        var newData = JSLINQ(meta.Properties)
            .Select(function (i) {
                i.id = selectedItem.id + "/" + i.Name;
                i.parentId = selectedItem.id;
                i.Selected = my.type.ofProperty.isScalar(i);
                i.TemplateSource = buildDefaultTemplateSourceFor(i);
                return i;
            })
            .ToArray();

        for (var i in newData) {
            source.dataSource.add(newData[i]);
        }

        // Remove the fake expand arrow and let Kendo take over from here.
        $(e.currentTarget).remove();
        source.expand(selectedItem);
        buildFilterGrid();
        generate();
    });
});
This way, Kendo gets given what it expects for a TreeList ("a flat set with parent/child relationships") and I do all the heavy lifting.
I used a bit of JSLINQ magic to make the heavy lifting a bit more "C#-like" (I prefer C#, after all), but in short all it does is grab the parent item that was expanded and use its id as the parent, then generate a new id for the current item as parent.id + "/" + current.name. This way everything is unique, because two properties on an object can't have the same name, and where two objects are referenced by the same parent, the prefix of the parent's property name makes the reference unique.
It's not the ideal solution, but this is how things go with Telerik: a hack here, a hack there, and usually it's possible to make it work!
Something tells me there's a smarter way to do this though!
In the Kendo UI documentation for the DataSource component, it states that the data function is used to get data items for the data source.
However it also states that if the data source is bound to a JavaScript array (via the data option) the data method will return the items of that array. Every item from the array is wrapped in a kendo.data.ObservableObject or kendo.data.Model.
How can I retrieve the original unwrapped data items (i.e. having same reference) that were passed into the data source?
I ask because I'm using a Kendo UI treeview control and in its event handlers (e.g. check event) I want to update the original data item for a tree node based on some custom logic.
Update
For example, here is a simple treeview having a single node (of course, in a realistic scenario the tree would contain many nodes). When checking the node, I want to get a reference to the original data item for the checked node. this.dataItem(e.node) does not return the original data item, as the log statement outputs false.
<div id="treeview"></div>
<script>
var mydata = [
{ text: "foo", checked: false}
];
$("#treeview").kendoTreeView({
checkboxes: true,
dataSource: mydata,
check: function(e) {
console.log(this.dataItem(e.node) == mydata[0]); //I want this to output true
}
});
</script>
If I understand your question correctly, you can get to the records independently by referencing your data source and using the .at(x) function, where x is the index of the record you are attempting to access. So, to get the first record:
var theData = yourDataSource.at(0);
To update it, you then use .set and .sync.
theData.set('userFirstName', 'Joe');
theData.set('userAverageTime', 10);
yourDataSource.sync();
Using .set() is handy because if you store all your updates in an iterable collection, you can just run through them:
$.each(updatedVars, function(key, element) {
theData.set(key, element);
});
yourDataSource.sync();
Is there a way to configure a JsonRestStore to work with an existing web service that returns an array of objects which is not at the root-level of the JSON response?
My JSON response is currently similar to this:
{
message: "",
success: true,
data: [
{ name: "Bugs Bunny", id: 1 },
{ name: "Daffy Duck", id: 2 }
],
total: 2
}
I need to tell the JsonRestStore that it will find the rows under "data", but I can't see a way to do this from looking at the documentation. Schema seems like a possibility but I can't make sense of it through the docs (or what I find in Google).
My web services return data in a format expected by stores in Ext JS, but I can't refactor years worth of web services now (dealing with pagination via HTTP headers instead of query string values will probably be fun, too, but that's a problem for another day).
Thanks.
While it's only barely called out in the API docs, there is an internal method in dojox/data/JsonRestStore named _processResults that happens to be easily overridable for this purpose. It receives the data returned by the service and the original Deferred from the request, and is expected to return an object containing items and totalCount.
Based on your data above, something like this ought to work:
var CustomRestStore = declare(JsonRestStore, {
_processResults: function (results) {
return {
items: results.data,
totalCount: results.total
};
}
});
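For completeness, here is a hypothetical usage sketch; the AMD module ids are the standard ones, but the target path "/services/characters/" is a made-up placeholder:
require(["dojo/_base/declare", "dojox/data/JsonRestStore"], function (declare, JsonRestStore) {
    var CustomRestStore = declare(JsonRestStore, {
        _processResults: function (results) {
            return {
                items: results.data,
                totalCount: results.total
            };
        }
    });

    // "/services/characters/" is a placeholder; point it at your own service.
    var store = new CustomRestStore({ target: "/services/characters/" });
    store.fetch({
        onComplete: function (items) {
            console.log(items.length); // 2 for the sample response above
        }
    });
});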
The idea with dojo/store is that reference stores are provided, but they are intended to be customized to match whatever data format you want. For example, https://github.com/sitepen/dojo-smore has a few additional stores (e.g. one that handles Csv data). These stores provide good examples for how to handle data that is offered under a different structure.
There's also the new dstore project, http://dstorejs.io/ , which is going to eventually replace dojo/store in Dojo 2, but works today against Dojo 1.x. This might be easier for creating a custom store to match your data structure.
A Brief Description...
I have an editable jqWidgets grid within my site but unlike traditional editable grids, instead of updating the database as you edit a row, I want to update the entire grid at once when a 'Save' button is pressed.
With this in mind, I can only see two possible options:
Somehow stringify the entire grid object into a query string and send 1 AJAX request to the server.
Or, loop through each row and execute an AJAX request for each individual row in the grid
The issue I have is that I simply cannot figure out which method is better, as they both have complications. The query string in option 1 could potentially be enormous and exceed memory limits, since the user would be passing the entire contents of the grid in a single POST request. The latter option, however, could cause issues by executing an AJAX request for every row in the grid. Imagine if there were 100, or even 1000, rows in the grid!
My Question
So, can anybody think of an effective way to achieve this without exceeding memory limits, but whilst also avoiding making multiple AJAX requests?
Further Information
In case the above wasn't clear, consider this JavaScript array:
[
{ name: 'Ben', age: 23, occupation: 'Developer' },
{ name: 'Charlie', age: 24, occupation: 'Receptionist' },
{ name: 'Jemima', age: 18, occupation: 'Designer' }
]
And now, try to work out the best method for passing all of that information to PHP in one request.
On your source object there is an updaterow function that takes the parameters rowid, newdata, and commit:
updaterow: function(rowid, newdata, commit)
newdata is a JSON object representing the updated row from the grid, and commit is a callback you need to invoke for the change to be considered accepted.
Within this function, you can take the updated row data object and put it into an associative array keyed by rowid (to make sure a changed row only gets sent back once). Finally, call
commit(true)
You can now have a function, called when a button is clicked, that does your AJAX post with the updated row data:
var editedRows = {};
var source = {
localdata: [], // your data for the grid
datatype: 'json',
datafields: [
// define columns and types
],
updaterow: function(rowid, datarow, commit) {
editedRows[rowid] = datarow;
commit(true);
}
}
var saveData = function() {
// post editedRows, clear editedRows on success
};
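The saveData body could then post the whole set of edited rows in one request. A sketch, where the URL is an assumption about your setup:
var saveData = function () {
    $.ajax({
        url: '/grid/save.php',                      // hypothetical endpoint
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify({ rows: editedRows }), // one request for all edited rows
        success: function () {
            editedRows = {};                        // clear pending changes once saved
        }
    });
};
On the PHP side, json_decode(file_get_contents('php://input'), true) would give you the rows as an associative array.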
I have read and read the docs on these two methods, but for the life of me cannot work out why you might use one over the other?
Could someone just give me a basic code situation where one would be applicable and the other wouldn't?
reset sets the collection with an array of models that you specify:
collection.reset( [ { name: "model1" }, { name: "model2" } ] );
fetch retrieves the collection data from the server, using the URL you've specified for the collection.
collection.fetch( { url: someUrl, success: function(collection) {
// collection has values from someUrl
} } );
Here's a Fiddle illustrating the difference.
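In outline, the two look like this side by side (a minimal sketch; Books and /api/books are made-up names):
var Books = Backbone.Collection.extend({ url: "/api/books" });
var books = new Books();

// fetch: ask the server (GET /api/books) and fill the collection with the response
books.fetch({
    success: function (collection) {
        console.log("loaded from server:", collection.length);
    }
});

// reset: replace the collection's contents with models you already have on the client
books.reset([ { name: "model1" }, { name: "model2" } ]);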
We're assuming here that you've read the documentation; otherwise it'll be a little confusing.
If you look at the documentation of fetch and reset, what it says is: suppose you have specified the url property of the collection (pointing to some server code that should return a JSON array of models) and you want the collection to be filled with the models being returned, then you use fetch.
For example you have the following json being returned from the server on the collection url:
[{
id : 1,
name : "a"
}, {
id : 2,
name : "b"
}, {
id : 3,
name : "c"
}]
This will create 3 models in your collection after a successful fetch. If you hunt for the code of collection fetch here, you will see that fetch gets the response and internally calls either reset or add based on the options specified.
So, coming back to the discussion, reset assumes that we already have the JSON of the models we want stored in the collection, and we pass it as a parameter. If you ever want to update the collection and you already have the models on the client side, then you don't need to use fetch; reset will do the job.
Hence, if you want the same JSON to be filled into the collection with the help of reset, you can do something like this:
var _self = this;
$.getJSON("url", function(response) {
_self.reset(response); // assuming response returns the same json as above
});
Well, this is not a practice to be followed; for this scenario fetch is better. It's just used as an example here.
Another example of reset is on the documentation page.
Hope it gives a little bit of an idea and makes your life better :)
reset() is used for replacing the collection with a new array of models. For example:
#collection.reset(#full_collection.models)
would load #full_collection's models, however
#collection.reset()
would return an empty collection.
And fetch() retrieves the collection's default set of models from the server.