I'm trying to build a web service where multiple users can be logged in at the same time. On the Node.js server there is an unsorted array with all the users, and a database with all users.
Every user can always see every online user in an HTML Bootstrap table, with different columns for username, id, online since, and so on. There are also lists for groups that include online and offline users. The important part is that the table should be updated roughly every 3-5 seconds.
I want users to be able to sort the online users table by clicking on a column's table header. What is the best practice for doing that?
I can currently only think of two different solutions, and neither seems perfect to me.
1. Use Bootstrap sorting
I save the information about how the user wanted the list sorted.
Then I receive the unsorted data and fill the table with it, after which I trigger a click on the header to sort the table the same way again.
But if I do it this way, I think the user will always notice that the table is refilled and then re-sorted, if it happens every 3-5 seconds.
2. Keep sorted lists on the Server
Keep all the different sorted lists of the users on my server at all times and let the server re-sort them every 3-5 seconds.
Then, when the client requests it, send the sorted list they currently want and fill the table HTML.
But this way I think it would use quite some server resources, because the server also has to sort mixed online/offline users for groups, which would mean many different tables I'd have to constantly store and reorder.
Are there any better ways to provide many sortable user lists to the client side?
The important thing about the UI is to reduce flicker and any kind of lag. First off, try sorting on the client before the data is displayed in the table. You don't want to trigger the click events, because that can produce a flicker effect where the data comes in, is displayed, sorted, then displayed again. If for some reason the sorting takes too long, it could cause lag or choppiness in the UI, so test it out and see how it feels. I would only look to the server side if the client side isn't performing well. Check your CPU and RAM to see how best to handle that: sorting on the fly might be doable with your setup, or keeping the sorted lists in RAM may be an option if you have some to spare.
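For example, you could keep the user's chosen sort in a variable and apply it to the incoming data before rendering, so the table never appears unsorted. A minimal sketch (the element id and field names are assumptions):

    // Remember the user's last chosen sort; re-apply it to each refresh
    // before touching the DOM, so the table never flashes unsorted data.
    var currentSort = { key: 'Username', descending: false };

    function renderUsers(users) {
        var z = currentSort.descending ? -1 : 1;
        users.sort(function (a, b) {
            return a[currentSort.key] > b[currentSort.key] ? z
                 : a[currentSort.key] < b[currentSort.key] ? -z : 0;
        });
        // Build the rows off-DOM and swap them in with one assignment
        // to keep the repaint cheap and flicker-free.
        var rows = users.map(function (u) {
            return '<tr><td>' + u.Username + '</td><td>' + u.Id +
                   '</td><td>' + u.OnlineSince + '</td></tr>';
        }).join('');
        document.getElementById('user-table-body').innerHTML = rows;
    }

Header clicks then only need to update currentSort; the next refresh applies it.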
Server-side, store it in a site-wide or thread-engine variable in RAM. If you can get away with it, the thread-engine variable will be the fastest option, but the cost would be SORTEDDATA_BYTES * WEB_THREADS.
Array.prototype.keySort = function (k, descending) {
    var z = descending === true ? -1 : 1;
    this.sort(function (a, b) {
        return a[k] > b[k] ? z : a[k] < b[k] ? -z : 0;
    });
    return this;
};

var sortedJSON = {
    UsernameAsc:  JSON.stringify(data.keySort("Username")),
    UsernameDesc: JSON.stringify(data.keySort("Username", true)),
    IdAsc:        JSON.stringify(data.keySort("Id")),
    IdDesc:       JSON.stringify(data.keySort("Id", true))
};
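A sketch of the serving side, assuming Express (the route, query parameter, and refresh interval are made up; data is the in-memory user array from above, which keySort sorts in place):

    var express = require('express');
    var app = express();

    var sortedJSON = {};
    function rebuild() {
        // Re-stringify all the sort orders once per interval,
        // instead of sorting on every client request.
        sortedJSON = {
            UsernameAsc:  JSON.stringify(data.keySort('Username')),
            UsernameDesc: JSON.stringify(data.keySort('Username', true)),
            IdAsc:        JSON.stringify(data.keySort('Id')),
            IdDesc:       JSON.stringify(data.keySort('Id', true))
        };
    }
    rebuild();
    setInterval(rebuild, 4000); // every 4 seconds, matching the refresh rate

    app.get('/users', function (req, res) {
        // e.g. GET /users?sort=UsernameAsc
        res.type('json').send(sortedJSON[req.query.sort] || sortedJSON.UsernameAsc);
    });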
Related
I want to get data from a database to show on a page. There is a huge number of rows in the table, so I'm using pages to avoid having to scroll forever.
I have functionality to search for words (across all columns), order by any column, and obviously change the page size and which page I'm on.
I could, in theory, just ask the database for everything (SELECT * FROM myTable), send it to my html view, and work through the data entirely in javascript. The problem is, there is so much data that this is extremely slow using my structure (page controller calls my main logic, which calls a webservice, which calls the database), sometimes waiting up to 20 seconds for the original load of the page. After it's loaded, the javascript is usually fast.
Or, I could do most of that work in the controller, using Linq. I could also do the work in the webservice (it's mine), still in Linq. Or, I could straight away use WHERE, ORDER BY, COUNT, and a bunch of dynamic SQL requests so that I get instantly what I want from the database. But any of those forces me to refresh the page every time one of the parameters changes.
So I'm wondering about performance. For example, which is faster between:
var listObjects = ExecuteSQL("SELECT * FROM myTable");
return listObjects.Where(x => x.field == word).OrderBy(x => x.field);
and
var listObjects = ExecuteSQL("SELECT * FROM myTable WHERE field = :param1 ORDER BY field", word);
return listObjects;
And in what specific situations would using the different methods I've mentioned be better or worse?
No.
You want to do the work of selecting a block (pageful) of data on your database server. That's its job; it knows how to do it best.
So, forget the ExecuteSQL. You are pretty much shutting down everything else's ability to help you. Try LINQ:
var page = (from m in MyTable
            where m.field == param1
            orderby m.field
            select m)
           .Skip((nPage - 1) * pageLength)
           .Take(pageLength);
That will generate the exact SQL needed to tell the database server to return just the rows you want.
OK, so I've been reading and reading and searching and searching, and strangely it doesn't seem like my scenario has really been covered anywhere.
I have an app that creates a list of products. I want a simple view that can sort the products and page through them.
For reference, here is a simple representation of the data in Firebase.
app
  stock
    unique_id
      name
      url
      imageUrl
      price
When creating the list I have multiple threads using the push method on my firebase references:
new Firebase(firebaseUrl).child('stock').push({
    name: "name",
    price: 123
});
This gives me a lovely "hash" collection on the stock property of the app.
So what I'd now like to do is have a table to sort and page through the records that were placed in the stock hash.
I make a GET request to my server to a url like /stock?limit=10&skip=10&sort=name%20asc. This particular url would be the second page where the table contained 10 records per page and was sorted by the name property in ascending order.
Currently in my query handler I have this:
var firebaseRef = new Firebase(firebaseUrl).child('stock');

if (this.sortDesc) {
    firebaseRef = firebaseRef
        .orderByChild(this.sortProperty)
        .endAt()
        .limitToFirst(this.limitAmount);
} else {
    firebaseRef = firebaseRef
        .orderByChild(this.sortProperty)
        .limitToFirst(this.limitAmount);
    if (this.skipAmount > 0) {
        firebaseRef = firebaseRef.startAt(this.skipAmount);
    }
}

firebaseRef.once('value', function (snapshot) {
    var results = [];
    snapshot.forEach(function (childSnapshot) {
        results.push(childSnapshot.val());
    });
    callback(null, results);
});
I'm running into a couple of problems. I'm going to split this into two cases, ascending and descending queries.
Ascending query
The orderByChild and limitToFirst calls seem to work correctly in the ascending case: I can change which property gets the ascending sort and how many results come back. What I am not able to get working is skipping n records so that paging works. In the example query above I'm requesting the second page, but I do not get results 11-20; I get the same 10 records as the first page.
Descending query
In this case I cannot begin to figure out how to tell Firebase to order by a property of the object identified by the unique key in descending fashion. The closest I've read is to use endAt() and then limit. The docs say limit is deprecated, plus this still doesn't help me with paging.
I tried drawing doodles picturing how this would work. I came up with: order by the property, start at the 'end' of the collection, and then limit back to the page size. While this still wouldn't solve paging, I would expect it to give me the last n records, where n is the size of the page. I get no results.
I suppose I could use firebaseRef = firebaseRef.orderByChild(this.sortProperty).limitToLast(this.limitAmount + this.skipAmount); and, in the result callback, use the forEach loop to take the first (or would it be the last? I'm not sure how that iteration would work) n records, where n = this.limitAmount. This just seems inefficient. Wouldn't it be better to limit the query instead of spending CPU cycles throwing away data that already came over the wire, or is this the relational-database thought pattern overriding the correct thought process for NoSQL?
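In code, the workaround I'm describing would look something like this (untested sketch; I'm assuming the over-fetched tail comes back in ascending order):

    // Over-fetch the ordered tail, then trim client-side: wasteful, but it
    // would yield one descending page of size limitAmount after skipAmount.
    firebaseRef
        .orderByChild(this.sortProperty)
        .limitToLast(this.limitAmount + this.skipAmount)
        .once('value', function (snapshot) {
            var results = [];
            snapshot.forEach(function (childSnapshot) {
                results.push(childSnapshot.val());
            });
            // The tail arrives in ascending order, so the requested
            // descending page is the first limitAmount records, reversed.
            callback(null, results.slice(0, this.limitAmount).reverse());
        }.bind(this));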
Further Confusion
After posting this I've still been working on a solution. I've had some things get close, but I keep running into a filtering issue: how can I filter a set of items on one property while still sorting on another? Jeez! I want a user to be able to get all the stock that isn't sold out, ordered by price.
Finally
Why hasn't this basic example been fleshed out on any of the Firebase "Getting Started" pages? Showing tabular data, paging through it, sorting, and filtering seem like something EVERY web developer would come across. I'm using ng-table in an Angular app to drive the view, but regardless of platform, the queries I'm trying to generate would be practical on any platform that Firebase supports. Perhaps I'm missing something! Please educate me!
Firebase and NoSQL
I've come up with this simple scenario that I often run into with web applications: I want to show tabular data, then filter, page, and sort it. Very simple. Very common. Writing a SQL statement for this would be dead easy. Why is the query so complicated for something like Firebase? Is this common to all NoSQL solutions? There is no relational data being stored, so a relational database seems unnecessary. Yet it seems like I could hack together a little flat file to do this storage, since the ability to make Firebase do these simple tasks is not made clear in its API or docs. FRUSTRATED!!!
Scenario
I have a web interface (in a large web application) that allows a user to make connections between two very large lists.
List A - 40,000+ items
List B - 1,000+ items
List C - contains the items in List B that are connected to the selected item in List A
The Code
Here is a rough jsfiddle of the current behavior minus the ajax update of the database.
Here is the primary functionality (only here because stack overflow requires a code snippet for jsfiddle links).
$('.name-listb input').add('.name-listc input').click(function (e) {
    var lista_id = $('.name-lista input:checked').val();
    var listb_id = $(this).val();
    var operation = $(this).prop('checked') ? 'create' : 'delete';
    var $listb = $('.name-listb .checkbox-list');
    var $listc = $('.name-listc .checkbox-list');

    if (operation == 'create') {
        $listb.find('input[value=' + listb_id + ']').prop('checked', true);
        // Ajax request to add checked item.
        var $new_item = $listb.find('input[value=' + listb_id + ']')
                              .parents('.option-group').clone();
        $listc.append($new_item);
    } else if (operation == 'delete') {
        $listb.find('input[value=' + listb_id + ']').prop('checked', false);
        // Ajax request to remove checked item.
        $listc.find('input[value=' + listb_id + ']').parents('.option-group').remove();
    }
});
The Problem
The requirements do not allow me to use an autocomplete field or a pager, but the current page takes way too long to load (between 1 and 5 seconds depending on caching). Also, the JS behaviors are attached to all 40k+ items, which causes problems on lower-performance computers (tested on a newish $200 consumer special, and the computer was crippled by the JS). There is also a filter (not in the jsfiddle, but in the final product) that narrows the list down based on text input.
The Question
What is a good strategy for handling this scenario?
My Idea
My first thought was to create a sort of document-view architecture: a JavaScript list that adds items at the top and bottom as the user scrolls and drops items off the other end when the list reaches a certain size. To filter, I would dump the whole list and obtain a new list of filtered items, like an autocomplete, but one that can keep scrolling and adding items via ajax. But this is very complicated. I was hoping someone might have a better idea or a jQuery plugin that already uses this approach.
Update
List A is 70k, fixed.
List B is user-generated and will span between 1k and 70k.
That said, just optimizing the JS with the excellent feedback about using delegated events (which will make life 10x more awesome; see the sketch below) won't be enough. I still need to limit the visible list.
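For reference, the delegation change looks roughly like this: one handler on a stable ancestor instead of 40k+ per-input handlers.

    // A single delegated handler replaces the per-checkbox bindings, so
    // the 40k+ inputs carry no listeners of their own.
    $('.name-listb, .name-listc').on('click', '.checkbox-list input', function (e) {
        var listb_id = $(this).val();
        var operation = $(this).prop('checked') ? 'create' : 'delete';
        // ... same create/delete logic as in the snippet above ...
    });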
Your Ideas?
I've encountered this issue on numerous projects before, and one solution that's both easy to implement and performs well is using something like Infinity.js.
To summarize briefly: Infinity, like many other "infinite scroll" libraries, lets you render only the small part of the list that should be visible (or will be visible soon), thus reducing the strain on the browser tremendously. You can see a simple live demo over here; check the first link for the API reference.
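If you're curious what such libraries do under the hood, the core idea is just windowing. A rough generic sketch, not Infinity's actual API (the element ids, the fixed row height, and the items array are assumptions):

    // Render only the rows near the viewport; a tall spacer element keeps
    // the scrollbar honest, and the row holder is absolutely positioned.
    var ROW_HEIGHT = 24;                 // assumes fixed-height rows
    var $viewport = $('#list-viewport'); // the scrollable container
    var $window = $('#list-window');     // absolutely positioned row holder

    function renderWindow(items) {
        var first = Math.floor($viewport.scrollTop() / ROW_HEIGHT);
        var count = Math.ceil($viewport.height() / ROW_HEIGHT) + 10; // overscan
        $window.css('top', first * ROW_HEIGHT + 'px')
               .html(items.slice(first, first + count).map(function (item) {
                   return '<div class="row">' + item.name + '</div>';
               }).join(''));
    }

    $('#list-spacer').height(items.length * ROW_HEIGHT); // total scroll range
    $viewport.on('scroll', function () { renderWindow(items); });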
This is not impossible to do; the trick is NOT to load all that stuff into the DOM, because that will wreck any JavaScript engine.
The answer is the d3.js base library, which is king when it comes to sorting extremely large data sets on the client side, whether tabular or graphical. When I first saw it, it had 4 examples; now there are pages and pages of them.
One of the first examples provided with d3 is crossfilter.
The dataset is 5.3 megabytes! It filters the data in milliseconds, and it promises to sort millions of rows without a performance loss.
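To give a flavor of the API, here is a minimal sketch (the record fields are examples): crossfilter keeps one dimension per property, and a filter applied on one dimension affects what the other dimensions return.

    var cf = crossfilter(records); // records: the full array of row objects
    var byName = cf.dimension(function (d) { return d.name; });
    var byPrice = cf.dimension(function (d) { return d.price; });

    byName.filterExact('widget');  // filter on one property...
    var top50 = byPrice.top(50);   // ...while reading the top 50 rows by another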
I am using a web service that serves travel-related data from third-party sources. This data is converted to JSON and used to formulate the output based on a user's search criteria.
If the web service subscribes to multiple third-party providers, the application receives thousands of potential results for some searches. Some of the JSON files created for these search results are as high as 2-4 MB, which causes considerable delay when loading the JSON results.
The whole JSON result set is required for further sorting and filtering operations on the search results by the users. Moving the sort and filter operations to the back end is not a possibility for now.
For small and medium JSON result sets the current implementation works out well, but large JSON result sets cause performance degradation.
How could I phase out the JSON loading process into multiple steps to achieve an improved user experience even with very large JSON result sets?
Any leads on how I can overcome this problem would be much appreciated.
I did find a way to solve this issue, so I thought I might as well post it for others who may find it useful.
The web pagination mechanism will automatically improve the responsiveness of the system and the user experience, and it may reduce clutter on the page. Unless the returned result set is guaranteed to be very small, any web application with search capabilities must have pagination. For instance, if the result set is fewer than 30 rows, pagination may be optional. However, if it's bigger than 100 rows, pagination is highly recommended, and if it's bigger than 500 rows, pagination is practically required. There are a lot of different ways to implement the pagination algorithm; depending on the method used, both performance and the user experience will be affected.
Some alternatives for phasing out the JSON loading process into multiple steps
If the JSON is large, you could always contemplate a "paging" metaphor, where you download the first x records, and then make subsequent requests for the next "pages" of server data. Given that additional server records can presumably be added between requests, we will have to think carefully about that implementation, but it's probably easy to do.
Another way of making the JSON more efficient, is to limit the data returned in the initial request. For example, the first request could return just identifiers or header information for all or a reasonable subset of the records, and then you could have subsequent requests for additional details (e.g. the user goes to a detail screen).
A particular permutation of the prior point applies if the JSON currently includes any large data elements (e.g. Base64-encoded binary data such as images or sound files). Getting those out of the initial response will address many issues. If you can return URLs for this binary data, rather than the data itself, that definitely lends itself to lazy loading (as well as letting you download the binary information directly rather than as a Base64 encoding, which is 33% larger). And if you're dealing with images, you might also want to contemplate thumbnails vs. large images, handling the latter, in particular, in a lazy-loading model.
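Lazy-loading images from returned URLs can be as simple as this jQuery sketch (the data-src convention and the preload margin are assumptions):

    // Markup uses <img data-src="..."> placeholders; the real URL is set
    // only once the image scrolls near the viewport.
    function loadVisibleImages() {
        var bottom = $(window).scrollTop() + $(window).height() + 200; // preload margin
        $('img[data-src]').each(function () {
            if ($(this).offset().top < bottom) {
                $(this).attr('src', $(this).data('src')).removeAttr('data-src');
            }
        });
    }
    $(window).on('scroll', loadVisibleImages);
    loadVisibleImages();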
If adopting the first approach...
If adopting the first approach, which seems more feasible given the current development scenario, you first need to change your back-end search result processing so that, in response to a request, it delivers only n records, starting at a particular record number. You probably also need to return the total number of records on the server, so you can give the user some visual cue as to how much there is to scroll through.
Second, you need to update your web app to support this model. As the user scrolls through the results, the UI should respond to scrolling events by showing either
(a) actual results if you've already fetched it; or
(b) some UI that visually indicates that the record in question is being retrieved and then asynchronously request the information from the server.
You can continue downloading and parsing the JSON in the background while the initial data is presented to the user. This will simultaneously yield the benefit that most people associate with "lazy loading" (i.e. you don't wait for everything before presenting the results to the user) and "eager loading" (i.e. it will be downloading the next records so they're already ready for the user when they go there).
General Implementation steps to follow for first approach
Figure out how many total rows of data there are once the merge process is complete.
Retrieve the first n records.
Request subsequent batches of n records, repeating in the background until the full result set is received.
If the user scrolls to a record, show it if you've got it; if not, provide a visual cue that you're fetching that data asynchronously.
When the data comes in, merge the JSON and update the UI if appropriate.
If filters and sorting options are exposed on the web end, they should be disabled until the full result set is received. (A sketch of this background paging loop follows.)
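A rough jQuery sketch of those steps; the /search endpoint, the results/total response fields, and the two render helpers are assumptions:

    var PAGE = 50;
    var all = [];

    function fetchFrom(offset) {
        $.getJSON('/search', { start: offset, count: PAGE }, function (resp) {
            all = all.concat(resp.results);
            if (offset === 0) {
                renderFirstPage(resp.results, resp.total); // hypothetical: show the first block fast
            }
            if (all.length < resp.total) {
                fetchFrom(offset + PAGE);      // keep eager-loading in the background
            } else {
                enableSortingAndFilters(all);  // hypothetical: unlock controls once complete
            }
        });
    }

    fetchFrom(0);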
General implementation steps for generating page links
After choosing the best approach to retrieve the result set and traverse it, the application needs to create the actual page links for users to click. Below is a generic function in pseudo-code, which should work with any server-side technology. This logic should reside in your application, and it will work with both database-driven and middle-tier pagination algorithms.
The function takes three parameters and returns HTML representing the page links. The parameters are the query text, the starting row number, and the total number of rows in the result set. The algorithm is clever enough to generate appropriate links based on where the user is in the navigation path.
Note: I set the default to 50 rows per page and a page window of 10. This means that only 10 (or fewer) page links will be visible to the user.
private String getSearchLinks(String query, int start, int total) {
    // start is the row number of the first row on the current page
    if (total == 0) {
        return ""; // no links
    }
    int pageSize = 50; // number of rows per page
    int window = 10;   // page window - number of visible page links
    int pages = (int) Math.ceil((double) total / pageSize);
    String result = "Pages:";
    // numeric value of the current page, e.g. if start is 51: 51/50 + 1 = 2
    int currentPage = (start / pageSize) + 1;
    int leftLinkCount = ((currentPage - 1) > (window / 2))
            ? (window / 2 - 1) : (currentPage - 1);
    int pageNo = currentPage - leftLinkCount;
    if (pageNo > 1) {
        // show the first page and a back marker when there are
        // more links to the left of the window
        result += "1 .. ";
        result += "«";
    }
    for (int i = 0; i < window - 1; i++) {
        if (pageNo > pages) {
            break;
        } else if (pageNo == currentPage) {
            result += " " + pageNo + " "; // current page: plain text, no link
        } else {
            result += pageNo;             // other pages: rendered as page links
        }
        pageNo++;
    }
    if ((pageNo - 1) < pages) {
        result += "»";
    }
    result += " Showing " + ((start > total) ? total + 1 : start + 1)
            + " - " + (((start + pageSize) > total) ? total : (start + pageSize))
            + " of Total: " + total;
    return result;
}
This logic does not care how the viewable portion of the result set is generated. Whether that happens on the database side or on the application server's side, all this algorithm needs is the "total" number of results (which can be cached after the first retrieval) and the indicator ("start") containing which row number was first on the page the user last looked at. The algorithm also shows the first-page link if the user is on a page beyond the initial page window (for example, page 20), and it correctly handles result sets without enough rows for 10 pages (for example, only 5 pages).
The main for loop generates the links and correctly computes the "start" parameter for the next page. The query text and total are always the same values.
Points to note
In a portal where internal and external data are used, there is an unavoidable performance overhead, because the system has to wait until all external hotel information is fully received, merged, and sorted with the local hotel details. Only then is the exact result count known.
Deciding how many additional requests are required is somewhat simplified if a standard result count per page is used.
This change gives the user a quick and uniform page-load time for all search operations, with the page populated by the most significant result block of the search on the first instance. Additional page requests can be handled through asynchronous requests.
Not only the hotel count but also the content sent needs to be checked and optimized for performance gains; for similar hotel counts, the generated JSON object size varies considerably depending on which third-party suppliers are subscribed to.
Once the successive JSON messages are received, these JSON blocks need to be merged. Prior to merging, the user may need to be notified that new results are available and asked whether to merge them or keep working with the existing primary result set.
Useful resources:
Implementing search result pagination in a web application
How big is too big for JSON?
On our web application, the search results are displayed in sortable tables. The user can click on any column and sort the results. The problem is that sometimes the user does a broad search and gets a lot of data back. To make the sortable part work, you probably need all the results, which takes a long time to retrieve. Alternatively, I can fetch a few results at a time, but then sorting won't really work well. What's the best practice for displaying sortable tables that might contain lots of data?
Thanks for all the advice; I will certainly go over these suggestions.
We are using an existing JavaScript framework that has the sortable table; "lots" of results means hundreds. The problem is that our users are at a remote site, and much of the delay is network time to send and receive data from the data center. Sorting the data on the database side and sending only one page's worth of results at a time is nice, but when the user clicks a column header, another round trip is made, which always adds 3-4 seconds.
Well, I guess that might be the network team's problem :)
Using sorting and paging at the database level is the correct answer. If your query returns 1000 rows but you're only going to show the user 10 of them, there is no need for the other 990 to be sent across the network.
Here is a MySQL example. Say you need 10 rows, numbers 21-30, from the 'people' table. LIMIT takes an offset and a count, so skipping the first 20 rows means an offset of 20:

SELECT * FROM people LIMIT 20, 10
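From Node.js you would pass the offset and page size as parameters; a sketch assuming the mysql npm package (note the ORDER BY, without which MySQL makes no guarantee that pages are stable between requests):

    var mysql = require('mysql');
    var connection = mysql.createConnection({ /* host, user, password, database */ });

    // Page numbers are 1-based: page 3 with 10 rows per page is
    // LIMIT 20, 10, i.e. rows 21-30.
    function fetchPage(page, pageSize, callback) {
        var offset = (page - 1) * pageSize;
        connection.query(
            'SELECT * FROM people ORDER BY name LIMIT ?, ?',
            [offset, pageSize],
            callback
        );
    }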
You should be doing the paging back on the database server. E.g., SQL Server 2005 and SQL Server 2008 have paging techniques. I'd suggest looking at the paging options for whatever system you're using.
What database are you using? There are some good paging options in SQL Server 2005 and upwards using ROW_NUMBER that allow you to do paging on the server. I found this good example on Christian Darie's blog.
For example, this procedure pages the products in a category. You just pass in the page number you want and the number of products per page, etc.
CREATE PROCEDURE GetProductsInCategory
(@CategoryID INT,
 @DescriptionLength INT,
 @PageNumber INT,
 @ProductsPerPage INT,
 @HowManyProducts INT OUTPUT)
AS
-- declare a new TABLE variable
DECLARE @Products TABLE
(RowNumber INT,
 ProductID INT,
 Name VARCHAR(50),
 Description VARCHAR(5000),
 Price MONEY,
 Image1FileName VARCHAR(50),
 Image2FileName VARCHAR(50),
 OnDepartmentPromotion BIT,
 OnCatalogPromotion BIT)
-- populate the table variable with the complete list of products
INSERT INTO @Products
SELECT ROW_NUMBER() OVER (ORDER BY Product.ProductID),
       Product.ProductID, Name,
       SUBSTRING(Description, 1, @DescriptionLength) + '...' AS Description,
       Price, Image1FileName, Image2FileName, OnDepartmentPromotion, OnCatalogPromotion
FROM Product INNER JOIN ProductCategory
     ON Product.ProductID = ProductCategory.ProductID
WHERE ProductCategory.CategoryID = @CategoryID
-- return the total number of products using an OUTPUT variable
SELECT @HowManyProducts = COUNT(ProductID) FROM @Products
-- extract the requested page of products
SELECT ProductID, Name, Description, Price, Image1FileName,
       Image2FileName, OnDepartmentPromotion, OnCatalogPromotion
FROM @Products
WHERE RowNumber > (@PageNumber - 1) * @ProductsPerPage
  AND RowNumber <= @PageNumber * @ProductsPerPage
You could do the sorting on the server. AJAX would eliminate the need for a full refresh, but there'd still be a delay. Besides, databases are generally very fast at sorting.
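A sketch of what that AJAX round trip might look like with jQuery (the endpoint, parameter names, and row fields are made up):

    // Ask the server for page 1 sorted by the clicked column,
    // then swap only the table body.
    $('th.sortable').on('click', function () {
        $.getJSON('/results', { sortBy: $(this).data('column'), page: 1 }, function (rows) {
            $('#results tbody').html(rows.map(function (r) {
                return '<tr><td>' + r.name + '</td><td>' + r.price + '</td></tr>';
            }).join(''));
        });
    });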
For these situations I employ techniques on the SQL Server side that not only leverage the database for the sorting, but also use custom paging to return ONLY the specific records needed.
It is a bit of a pain to implement at first, but the performance is amazing afterwards!
How large is "a lot" of data? Hundreds of rows? Thousands?
Sorting can be done painlessly via JavaScript with MochiKit's sortable tables. However, if the data takes a long time to sort (most likely a second or two [or three!]), then you may want to give the user some visual cue that something is happening and that the page didn't just freeze. For example, tint the screen (a la Lightbox) and display a "sorting" animation or text.
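One simple trick to get that cue painted before a long synchronous sort is to yield to the browser first (a generic sketch, not MochiKit-specific; the overlay element is hypothetical):

    // Show the overlay, let the browser paint, then run the blocking sort.
    function sortWithCue(sortFn) {
        $('#sorting-overlay').show();  // the "sorting" tint/animation
        setTimeout(function () {       // yield so the overlay actually renders
            sortFn();                  // the potentially slow, blocking sort
            $('#sorting-overlay').hide();
        }, 0);
    }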