I've been using jqGrid for a couple of weeks and I love it.
We are using SignalR to push updates to the grid; when an update comes in we highlight a cell on the grid, and that highlight fades after a user-configured time.
Currently, to do this, we use data- attributes and every 3 seconds process all the elements with the attribute and decide which class to apply.
The problem with this approach is that every time a client-side event happens (sorting, paging, grouping, filtering) these data- attributes are lost.
To combat that we have been using arrays to manage this, but it has got very messy and is already a nightmare to maintain!
So what I'd like to know is: is there a better way to attach data to a cell, possibly at the array level? For example, I'd like to be able to just set a property on the cell in the data object and then process that, rather than maintaining lots of lists!
So, long story short: is it possible to attach additional information to a cell, so it can then be processed when the page is loaded?
Additional information
Setting the actual cell value isn't a problem; it's attaching additional information to the cell which we need to do. Currently we add a last-updated data- attribute to the cell, and this lets us decide how to display that cell in the grid (it can change based on multiple thresholds that are defined by the user).
I have used jQuery's .data(), but sadly that was destroyed when the element was removed from the DOM.
I could just use a single array, but I'm hoping for a better solution!
Answer
I decided to use $(grid).jqGrid('getLocalRow', id)["field"] = value; this was persisted for the life of the grid and allowed me to query the property when the grid load completed!
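Roughly, what that looks like (assumes datatype: 'local'; the lastUpdated field, grid selector, fadeAfterMs threshold and CSS class are placeholders, not part of jqGrid itself):

// When a SignalR update arrives, stash the extra info on the local row data:
var localRow = $("#grid").jqGrid("getLocalRow", rowId);
localRow.lastUpdated = new Date().getTime();

// After any re-render (sorting, paging, grouping...), read it back and re-apply the class.
// Wire this up as the grid's loadComplete callback.
function reapplyHighlights() {
    var ids = $("#grid").jqGrid("getDataIDs");
    for (var i = 0; i < ids.length; i++) {
        var row = $("#grid").jqGrid("getLocalRow", ids[i]);
        if (row && row.lastUpdated && new Date().getTime() - row.lastUpdated < fadeAfterMs) {
            $("#" + ids[i]).addClass("recently-updated");
        }
    }
}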
cheers.
ste.
If you just need to update some existing data in the grid, you can use the setCell method, for example, which allows you to specify new data, a class, or other attributes on the cell (see the answer which discusses the options). The disadvantage of the approach is a reflow of the page after every modification of a cell. Nevertheless, if you don't have that many modifications it could be more efficient than one modification of the whole grid body. If you provide a small SignalR demo which demonstrates the problem, I can try to give you more optimization advice.
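For reference, a setCell call looks roughly like this (the column name "price", the value and the class are placeholders for your own data):

// setCell(rowId, columnName, newValue, cssClass, cellAttributes)
$("#grid").jqGrid("setCell", rowId, "price", newValue, "highlight", { title: "updated" });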
Related
React documentation seems to be very insistent on the idea that in almost every situation, deriving state from props is a bad idea, an anti-pattern, verbose, likely to cause bugs, hard to understand, and all-around probably going to place a curse on one's lineage for a thousand years.
My use case isn't that weird, so I'm probably doing something wrong, but the suggested patterns for not needing getDerivedStateFromProps() (i.e. making your component fully controlled or fully uncontrolled) don't seem like good solutions.
The situation: I have a Table component that takes in an array rows as a prop. Table is used in many different places in my app. I want the Table to be able to handle row-sorting. It is obviously a bad idea to make whichever parent component controls Table also have to control the sorting*, so fully controlled is out. Making Table fully uncontrolled with a key also seems like it doesn't make a lot of sense unless the key is the row data itself -- but my understanding is that key is meant to be simple data (like an id), and actually having to compare all of the rows, which are typically fairly complicated objects, would be pretty inefficient**. Using memoize-one is also not an option, as I am working in a closed system and can't import any new libraries.
My current solution: Table has a state variable sortedRows which is updated either whenever sort() is called or whenever props.rows is updated (via getDerivedStateFromProps), by (see the sketch after this list):
Making a shallow copy of props.rows,
in-place sorting that copy and
updating state.sortedRows on that value.
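A rough sketch of that pattern as described (the component shape and the sortCopy helper are invented for illustration, not code from the actual app):

class Table extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      prevRows: props.rows,
      sortColumn: null,
      sortedRows: sortCopy(props.rows, null)
    };
    this.sort = this.sort.bind(this);
  }

  static getDerivedStateFromProps(props, state) {
    // Re-derive the sorted copy only when props.rows itself has changed.
    if (props.rows !== state.prevRows) {
      return {
        prevRows: props.rows,
        sortedRows: sortCopy(props.rows, state.sortColumn)
      };
    }
    return null;
  }

  sort(column) {
    this.setState(function (state, props) {
      return { sortColumn: column, sortedRows: sortCopy(props.rows, column) };
    });
  }

  render() {
    // ...render this.state.sortedRows...
    return null;
  }
}

// Placeholder helper: shallow copy, then sort the copy in place.
function sortCopy(rows, column) {
  return rows.slice().sort(/* comparator for `column` */);
}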
As I see it, there is still only one source of truth here (which is from props), and the state is always just storing a sorted version of that truth (but always dependent on and in sync with it).
Is this solution bad? If so why? What would be a better way to implement this?
Thanks!
Note: I didn't include my code because I am massively simplifying the situation in this post -- in reality the Table element already exists and is pretty complicated.
Note 2: I was going to ask if I'd run into issues once I want to be able to modify elements in the tables, but I think I'm actually ok, since Table doesn't manage its elements, just arranges and displays them, and the buttons for adding and removing elements from a table are not contained within Table, so all that processing happens at the level of the parent's logic and is passed down as part of props.rows.
*Having something like <Table rows={sort(rowsFromParent)}/> every time I call Table is repetitive and error-prone, and since clicking on a table header determines the sorting column, we'd actually have to have the parent element pass down an onClick() function in every case, which quickly and unnecessarily ramps up complexity.
**There is also a secondary problem to do with rebuilding an element. The table has infinite scroll, such that when you reach a certain element more rows are loaded in. Using key will destroy the Table component and create a new one, scrolling the user to the top of the new table (which could potentially have many thousands of rows). Something also feels wrong about having to set key in each use of Table, even though resetting based on changes to props.rows seems like it should be intrinsic to how Table works, rather than something that has to be configured each time.
Edit: I have React 15.4, which is before getDerivedStateFromProps was added, and using a later version is not an option, so I guess even if I happened to find a valid use case for getDerivedStateFromProps, an alternative would be nice...
I want to create an example of a table with dynamic data that also indicates when a table value has changed.
So imagine a table of data. One cell on one row changes its value and it turns green to show that it has changed.
I'm new to Angular. I've been through the tutorial but I'm struggling to figure out the right kind of approach to this. I'm not asking for a step by step tutorial, but if an Angular veteran could give me a broad-strokes approach as to which parts of Angular I need to be focusing on, and a few tips on how best to structure the app, it would be a big help.
Right now I have an array of JSON objects attached to $scope.rows and a table with the rows created using ng-repeat. There's a button that changes some values in the rows data at random. That seems to be doing the trick of updating the rows as I expected, but I haven't figured out how to bridge the gap between data binding and DOM manipulation. And it's possible that my approach is all wrong.
You need to detect when your rows object changes and which element changed. I have done something similar by first creating a copy of the rows object, then putting a watch on $scope.rows (make sure you include the object-equality flag). When the watch fires, loop through $scope.rows and, when you find the element that is different, put a boolean property on it and set it to true.
In your row DOM tag, use:
ng-class="{highlightRowCSSClass:row.boolProp, normalRowCSSClass:!row.boolProp }"
and set the highlightRowCSSClass to be whatever you want to indicate a changed element.
After you set the prop on the object, set the copy of the rows to what the rows currently are and wait for the watch to fire again. You will need to clear the old value off each element when you loop through again, so you don't have two elements that are "on".
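Something along these lines (the value field and the boolProp flag are placeholders for your own data; not tested against your setup):

// Inside your existing controller, keep a copy of the rows to diff against:
var previousRows = angular.copy($scope.rows);

// The `true` flag turns on object-equality (deep) watching.
$scope.$watch("rows", function (newRows) {
    // Ignore digests caused only by the flag updates below.
    var anyValueChanged = newRows.some(function (row, i) {
        return previousRows[i] && row.value !== previousRows[i].value;
    });
    if (!anyValueChanged) { return; }

    newRows.forEach(function (row, i) {
        // Clear old flags, then flag rows whose value differs from the stored copy.
        row.boolProp = !!previousRows[i] && row.value !== previousRows[i].value;
    });
    previousRows = angular.copy(newRows);
}, true);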
I'm just playing around with backbone.js and some jQuery magic to prepare for some upcoming projects.
One test case contains a table whose rows are rendered by a Backbone view. They get re-rendered perfectly on value change. Afterwards the whole table is sorted by a jQuery plugin (Animated Table Sort), and rows move to new positions. In fact, this process works once, but the next time rows appear twice and everything ends up in chaos.
Is it possible that the link between DOM element and Backbone view can't handle such a change? Are there any workarounds?
When you're developing with a Model/View framework like backbone.js or knockout.js, I find that you need to re-arrange your thinking and implementation so that changes to what is displayed (like sorting) are made to the Model, and not allowed to happen only in the view (like using a jQuery plugin).
If you do end up using a view-side script to do something fancy (animations are a good example), then it is up to you to make sure the model is updated correctly, either by disabling or extending the binding.
Also note that according to the documentation, that animated sort plugin removes your table rows from the DOM, adds them to new DIVs, animates them, removes them from the DIVs, and restores them to the table. I'm wondering if after this is all done, backbone has lost track of those TDs, and when it re-renders after the change, it's just adding a new set since the last set is 'gone'.
Thanks for your answers. Indeed, the table sorter does a lot that makes it difficult for Backbone to maintain its bindings. I've switched over to the great Quicksand plugin, which uses a hidden list to animate changes in another (visible) list. It fits better with backbone.js.
Your collection maintains an order for your models, and therefore for your corresponding views. If an outside force (like a jQuery table-sorting plugin) modifies the order of the views, this change is not inherently reflected in the Backbone collection, so things quickly get out of sync.
Also, if the table sorter clones elements and removes the original, Backbone would likely lose track of the views and end up recreating them.
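If you do want the framework to own the ordering, one way is to give the collection a comparator and have the view re-render in collection order; a rough sketch (the model fields and row markup here are invented for illustration):

var Rows = Backbone.Collection.extend({
    // The comparator makes the collection, not the DOM, the source of order.
    comparator: function (model) {
        return model.get("price");
    }
});

var TableView = Backbone.View.extend({
    tagName: "table",
    initialize: function () {
        // Re-render whenever the collection re-sorts or its contents change.
        this.listenTo(this.collection, "sort reset change", this.render);
    },
    render: function () {
        this.$el.empty();
        this.collection.each(function (model) {
            this.$el.append("<tr><td>" + model.get("name") + "</td></tr>");
        }, this);
        return this;
    }
});

// Switching the sort column: change the comparator and re-sort; the view follows.
var rows = new Rows([{ name: "A", price: 2 }, { name: "B", price: 1 }]);
var view = new TableView({ collection: rows }).render();
rows.comparator = function (m) { return m.get("name"); };
rows.sort();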
I am using partials in Rails combined with jQuery child field templates to dynamically generate forms with multiple nested child attributes. Within these partials is JavaScript that is fired when various events (e.g. onChange) occur in the form, and the code that executes needs to change the DOM for a related but different form element. In this particular case, I'm using it to add multiple sale transactions in a point-of-sale type solution to a particular group of transactions and control what data is collected for each transaction. So, for example:
User clicks on a link that adds 3 new transaction records to the transaction group.
User sets different values in each of those three transaction records, each of which fires an onChange event that calls a JS function that needs to manipulate the DOM for each of the events in different ways. (Example: transaction 1 may be a return, so it changes the text of the transaction from cost to refund, transaction 2 may be for a service, so it changes the taxable status, etc...)
The JS function has to be able to change the DOM for the correct transaction.
My problem is #3 - the DOM elements being modified are not the same as the element generating the event, so I can't use JavaScript's this. They are part of a common parent element (i.e. they share a parent <div>), so it may be possible for me to use jQuery parent/child selectors to navigate to the correct element in the DOM. However, even this requires some uniqueness in the div id attributes, and I'm not sure of the best way in Rails to dynamically generate these.
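Something like this is roughly what I have in mind for the traversal (the class names, event and label text are invented for illustration):

// Delegate the change event so rows added later by the child-template code are covered.
$(document).on("change", ".transaction .transaction-type", function () {
    // Walk up to this transaction's shared wrapper...
    var $transaction = $(this).closest(".transaction");
    // ...then back down to the sibling element that needs updating.
    if ($(this).val() === "return") {
        $transaction.find(".amount-label").text("Refund");
    } else {
        $transaction.find(".amount-label").text("Cost");
    }
});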
I suppose I could use random values, but this seems bad because it's not truly guaranteed to be unique and just seems like a hack. I suppose I could try to introduce some kind of counter, but this seems to violate the principles of Rails development.
Any guidance is appreciated. Thanks!
Using the current time is a recipe for pain and unexplained bugs. I highly recommend you use something like a GUID generator for generating unique IDs.
You could use Time.now as a unique id when creating a new tag:
Time.now.to_i
It's not a hack or something that goes against Rails principles ;-)
I have a relatively large dataset of items (a few thousand items) that I want to navigate by applying a number of filters client side in a web application. Applying the filtering logic itself is not an issue; the question is about which method to use for updating the table of matching results to get the best user experience. The methods I've come up with are:
Setting the class of each row to hide or show it (using visibility: collapse to hide it), and keeping the DOM element in the table.
Keeping a DOM element for each data item, detaching/attaching it to the table to hide and show it.
Just keeping an abstract object for each data item, creating a DOM object on demand to show it.
Which one is likely to give the best user experience? Any other recommended method besides those I've listed already?
If the display area has a fixed size (or at least a maximum size), and you must filter on the client side, I would not create a DOM node for each item, but instead reuse a predefined set of DOM nodes as templates, hiding unnecessary templates depending on the number of results from the filter. This will drastically reduce the number of DOM nodes in the document, which will keep your page rendering responsive, and it is fairly easy to implement.
Example HTML*:
<ul id="massive-dataset-list-display">
<li>
<div class="field-1"></div>
<div class="field-2"></div>
<div class="field-n"></div>
</li>
<li>
<div class="field-1"></div>
<div class="field-2"></div>
<div class="field-n"></div>
</li>
<li>
<div class="field-1"></div>
<div class="field-2"></div>
<div class="field-n"></div>
</li>
.
.
.
</ul>
Example JavaScript*:
var MassiveDataset = function(src) {
    var self = this;
    var data = this.fetchDataFromSource(src); // assumed to be defined elsewhere
    var templateNodes = $("#massive-dataset-list-display li");

    // It seems that you already have this handled, but just for
    // completeness' sake
    this.filterBy = function(someParam) {
        var filteredData = [];
        // magic filtering of `data`
        this.displayResults(filteredData);
    };

    this.displayResults = function(filteredData) {
        var resultCount = filteredData.length;
        templateNodes.each(function(index, node) {
            // There are more results than display node templates, start hiding
            if (index >= resultCount) {
                $(node).hide();
                return;
            }
            $(node).show();
            // `this` inside each() is the DOM node, so use the captured `self`
            self.formatDisplayResultNode(node, filteredData[index]);
        });
    };

    this.formatDisplayResultNode = function(node, rowData) {
        // For great justice
    };
};
var md = new MassiveDataset("some/data/source");
md.filterBy("i can haz filter?");
* Not tested. Don't expect copy/paste to work, but that would be cool.
Adding a class and using CSS to show/hide the element will probably be the fastest (coding- and performance-wise), especially with so many items.
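For example (matchesFilter and the class name stand in for your own filter logic and stylesheet rule):

// One pass over the rows: toggle a class that the stylesheet hides, instead of
// detaching or rebuilding DOM nodes.
$("#results tr").each(function () {
    var item = $(this).data("item");  // however you map a row back to its data object
    $(this).toggleClass("filtered-out", !matchesFilter(item));
});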
If you want to go the DOM manipulation route, consider editing the DOM off-line. Cache the DOM tree in memory (a local variable), update all rows and replace the original DOM node. See http://www.peachpit.com/articles/article.aspx?p=31567&seqNum=5 for more information on this matter.
I've done a project that required filtering items on the location within a Google Maps 'viewport' and a min-max value slider (for those that are curious, it was for a real estate website).
The first version used an AJAX request to get all (server-side) filtered items, so every change in the filter requested new data. Then the JSON data was parsed to DOM nodes and added to the document. Also, in this case search-engine indexing of the items was not possible.
The second version also used an AJAX request, but this time only requested the filtered ids of the items. All items were present in the HTML with unique ids, and filtered items had an extra class name to initially hide them. Whenever the filter changed, only the filtered ids were requested and the items' class names were updated accordingly. This significantly improved the speed, especially in Internet Explorer (which, of our supported browsers, has the slowest JavaScript engine!)...
I realize that it's not exactly what you're asking for, but since you opened the door for alternates...
Have you considered doing any filtering server-side? You could load your results with AJAX if the user changes the filtering options, and that way you're not loading thousands of rows of data into the browser when you might only display a portion of them. It will potentially save you and the visitor bandwidth, though this will depend on how your site really gets used.
Basically, if you decide ahead of time what data you want to show, you don't have to go through the trouble of picking over what's there.
I understand that this may not fit your needs, but I offer it as a suggestion in case you were stuck on the idea of client-side.
DOM manipulation is just too slow for "a few thousand items". Assuming you have a really, really good reason why you aren't getting the server to do the filtering then the best solution I've found is to use client-side XSL transforms on data held as XML.
Transforms themselves are very quick even on reasonably large data sets. You would then ultimately assign the result to the innerHTML property of a containing DIV where you want the table to appear. Using innerHTML for large changes in the DOM is way quicker than manipulating the DOM with JavaScript.
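In browsers that expose XSLTProcessor, the transform step looks roughly like this (xmlData and xslDoc are assumed to be already-parsed documents, the minPrice parameter is a placeholder, and old IE would need the MSXML transformNode route instead):

var processor = new XSLTProcessor();
processor.importStylesheet(xslDoc);                      // the stylesheet that builds the table rows
processor.setParameter(null, "minPrice", "100");         // pass filter values in as XSL parameters
var resultDoc = processor.transformToDocument(xmlData);  // transform the client-side XML data

// Hand the whole result to innerHTML in one assignment, as described above.
document.getElementById("results").innerHTML =
    new XMLSerializer().serializeToString(resultDoc);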
Edit: Answers to Justin Johnson's comments:-
If the dataset is that large, the XML is potentially going to be beastly large.
Note that I already made the disclaimer in my first paragraph about enlisting the server's help here. There may be a case for switching the design around and making sensible use of AJAX, or simply not attempting to show so much data at once. However, I'm doing my best to answer the question posed.
It's also worth considering that "beastly large" is at least a function of bandwidth. In a well-connected intranet web application, bandwidth is not at such a premium. In addition, I've seen and used implementations that build up and re-use cached XML over time.
Also, if XML is converted to a DOM object, how is this any better?
There is a massive difference between the technique I propose and direct DOM manipulation by JavaScript. Consider: when JavaScript code modifies the DOM, the underlying engine has no way to know that other changes are about to follow immediately, nor can it be sure that the JavaScript will not immediately examine other properties of the DOM. Hence, when a change is made to the DOM by JavaScript, the browser needs to ensure it updates all sorts of other properties so that they are consistent with a completed rendering.
However, when innerHTML is assigned a large HTML string, the browser can quite happily create a whole bunch of DOM objects without doing any recalculation; it can defer a zillion updates to various property values until after the entire DOM has been constructed. Hence, for large-scale changes, innerHTML will blow direct DOM manipulation out of the water.