Website takes very long to load, order, and display statistics - javascript

I have a database where a few statistics are stored (PlayerID, Time, Kills, Deaths), seven in total, but these are enough to explain the problem.
What I did was load the table in PHP, build an array with all the statistics (including ones derived from other statistics, kills per death for example), sort them, and cut off the top 10.
Then I passed this top-10 array as JSON to JavaScript, where I created tr and td elements and appended them to the respective tables.
However, with just 10 different sets of statistics, it took an eternity (around 30 seconds, I'd say) to load the page. My guess was that the sorting takes too long on the server, so I tried passing the unsorted array to JavaScript and commenting out the sorting part (/* */), to test whether it would go any faster. It did not.
Afterwards I turned the sorting back on, but disabled the JavaScript part and just displayed the result with var_dump(). It still didn't go much faster.
The actual code is fairly long and I don't think this is a question you need the code for, but I can still post it if you really need it. What exactly is causing the load time, and what would be the best way to sort and display the statistics?
Edit: I forgot to say that using ORDER BY in the SQL query doesn't work, because I need to calculate some statistics using others.

Sounds like you spend all your time retrieving and processing the whole data set, and then throw away everything but the top 10 records.
Without seeing the model and understanding how the top 10 are selected, the only advice would be to rethink your model: decide on an indexed field by which you can fetch just the top 10 records, and do the calculations for that field before you save statistics into the DB.
It also depends on which operation is more time-sensitive for you - SELECT or UPDATE (i.e. when you fetch or save statistics). But I bet a couple of math operations before you save the data will barely affect the time spent saving it, while greatly improving the time spent generating reports, including the top-10 report.
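A minimal sketch of that idea (shown with Node.js and mysql2 purely for illustration - the same applies to PHP - and with made-up table and column names): compute the derived statistic once at write time, index it, and let the database return just the top 10.

// Assumes a mysql2/promise connection and an index on kd_ratio,
// e.g. CREATE INDEX idx_kd ON stats (kd_ratio).
const mysql = require('mysql2/promise');

async function saveStats(conn, playerId, kills, deaths) {
  const kdRatio = deaths > 0 ? kills / deaths : kills; // derived stat stored once
  await conn.execute(
    'UPDATE stats SET kills = ?, deaths = ?, kd_ratio = ? WHERE player_id = ?',
    [kills, deaths, kdRatio, playerId]
  );
}

async function topTen(conn) {
  // Reads only 10 rows via the index instead of sorting the whole table.
  const [rows] = await conn.execute(
    'SELECT player_id, kd_ratio FROM stats ORDER BY kd_ratio DESC LIMIT 10'
  );
  return rows;
}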

Related

Detecting change in raw data

I am currently building a web application that acts as a storage tank level dashboard. It parses incoming data from a number of sensors in tanks and stores these values in a database. The application is built using express / node.js. The data is sampled every 5 minutes but is sent to the server every hour (12 samples per transmission).
I am currently trying to expand the application's capabilities to detect changes in the tank level due to filling or emptying. The end goal is to have a daily report that generates a summary of the filling / emptying events with the duration of time and quantity added or removed. This image shows a screenshot of tank capacity during one day - https://imgur.com/a/kZ50N.
My questions are:
What algorithms / functions are available that detect changes in tank level? How would I implement them in my application?
When should the data handling take place? As the data is parsed and saved into the server? At the end of the day with a function that goes through all the data for that day?
Is it worth considering some sort of data cleaning during the parsing stage? I have noticed times when there are random spikes in the data due to noise.
How should I handle events where the tank starts emptying immediately after completing a delivery? I will need the algorithm to be robust enough to detect a change in the direction of the slope as the end of an event. An example of this is in the provided image.
I realise that it may be difficult to put together a robust solution. There are times when the tank is being emptied at the same time that it is being filled, which makes these reductions difficult to measure. The only way to know that this took place is that the slope during the delivery flatlines for approximately 15 minutes and the delivery is a fixed amount less than the usual delivery total.
This has been a fun project to put together. Thanks for any assistance.
You should be able to develop an algorithm that specifies what you mean by a fill or an emptying (a change in tank level). A good place to start is X% in Y seconds. You then calibrate to avoid false positives and false negatives (e.g. showing a fill when there was none vs. missing a fill when it occurs). One potential approach is to average the fuel level over a period of time (say 10 minutes) and compare it with the average for the next 10 minutes. If the difference is above a threshold (say 5%), you can call this a change.
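Something like this windowed comparison, as a rough sketch (the function, window size and threshold are illustrative and need calibrating against real data):

// Compare the average of one window of samples against the average of the
// next window; flag a fill/empty event when they differ by more than a threshold.
function detectChanges(samples, windowSize = 2, thresholdPct = 5) {
  const avg = a => a.reduce((s, v) => s + v, 0) / a.length;
  const events = [];
  for (let i = 0; i + 2 * windowSize <= samples.length; i++) {
    const before = avg(samples.slice(i, i + windowSize));
    const after = avg(samples.slice(i + windowSize, i + 2 * windowSize));
    const deltaPct = before === 0 ? 0 : ((after - before) / before) * 100;
    if (Math.abs(deltaPct) >= thresholdPct) {
      events.push({ at: i + windowSize, kind: deltaPct > 0 ? 'fill' : 'empty' });
    }
  }
  return events;
}

// With 5-minute samples, windowSize = 2 compares 10-minute averages:
// detectChanges([50, 50, 50, 72, 74, 74]) flags fill events starting around index 2.

Averaging over the window is itself a crude low-pass filter, which also helps with the noise spikes mentioned in the question.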
When you process the data depends on when you need it: if users need to be constantly informed of changes, this could be done when querying the data. Processing the data into level changes on write to your datastore might be more efficient (you only do it once), but you lose the ability to tweak your algorithm. It could well depend on performance, e.g. if someone wants to pull a year's worth of data, can the system deal with this?
You will almost certainly need to do something like a low pass filter on the incoming data. You don't want to show a tank fill based on a temporary spike in level. This is easy to do with an array of values. As mentioned above, a moving average, say of the last 10 minutes of levels is another way of smoothing the data. You may never get a 0% false positive rate or a 0% false negative rate, you can only aim for values as low as possible.
In this case it looks like a fill followed by an emptying of the tank. If you consider these to be two separate events then you can simply detect changes on the incoming data. I'd suggest you create a graph marking fills as a symbol on the graph, as well as emptyings. This way you can eyeball the data to ensure you are detecting changes. I would also say you could add some very useful unit tests for your calculations, using perhaps Jasmine or Cucumber.js.

Need advice on a database structure

A friend and I put together a MySQL php website where users can make predictions on sports events.
My question is regarding our "rankings" page, which displays the list of members who have made the best predictions. The way we had it initially was that each time a user loaded the page, the server would calculate all the predictions from all the users and then display the top ones. Unfortunately, even with a small group alpha testing it, loading this page quickly became slower as the number of predictions grew.
Ideally speaking, we would like to have filters on this page, for each different sport, league and time frames, and create a dynamic rankings page, but we're stuck on finding an efficient way to structure our database that won't use up too much server resources.
Any ideas on what a better way could be are greatly appreciated. Thanks!
You could use "materialized views" to calculate the score. A materialized view is a view (thus some query), but one where the results are stored (in memory) and updated accordingly:
http://www.fromdual.com/mysql-materialized-views
One can expect that the difference (in the number of correct predictions) will not change dramatically between two page renderings, and thus that the number of calculations will be quite low.
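In the same spirit, a hand-rolled summary table achieves the same effect: update a per-user score as each prediction is scored, so the rankings page becomes a single indexed read. A minimal sketch (Node.js and mysql2 for illustration; the table and column names are made up):

// Maintain the aggregate at write time instead of recomputing it per page load.
const mysql = require('mysql2/promise');

async function recordResult(conn, userId, points) {
  await conn.execute(
    'INSERT INTO rankings (user_id, score) VALUES (?, ?) ' +
    'ON DUPLICATE KEY UPDATE score = score + VALUES(score)',
    [userId, points]
  );
}

async function topPredictors(conn) {
  // With an index on score, this no longer touches every prediction row.
  const [rows] = await conn.execute(
    'SELECT user_id, score FROM rankings ORDER BY score DESC LIMIT 20'
  );
  return rows;
}

Per-sport, per-league or per-timeframe filters would then mean one summary row per (user, sport, timeframe) combination rather than per user.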

How to display large content quickly in browser [duplicate]

I need to present a large number of rows of data (i.e. millions of rows) to the user in a grid using JavaScript.
The user shouldn't see pages or view only finite amounts of data at a time.
Rather, it should appear that all of the data are available.
Instead of downloading the data all at once, small chunks are downloaded as the user comes to them (i.e. by scrolling through the grid).
The rows will not be edited through this front end, so read-only grids are acceptable.
What data grids, written in JavaScript, exist for this kind of seamless paging?
(Disclaimer: I am the author of SlickGrid)
UPDATE
This has now been implemented in SlickGrid.
Please see http://github.com/mleibman/SlickGrid/issues#issue/22 for an ongoing discussion on making SlickGrid work with larger numbers of rows.
The problem is that SlickGrid does not virtualize the scrollbar itself - the scrollable area's height is set to the total height of all the rows. The rows are still being added and removed as the user is scrolling, but the scrolling itself is done by the browser. That allows it to be very fast yet smooth (onscroll events are notoriously slow). The caveat is that there are bugs/limits in the browsers' CSS engines that limit the potential height of an element. For IE, that happens to be 0x123456 or 1193046 pixels. For other browsers it is higher.
There is an experimental workaround in the "largenum-fix" branch that raises that limit significantly by populating the scrollable area with "pages" set to 1M pixels height and then using relative positioning within those pages. Since the height limit in the CSS engine seems to be different and significantly lower than in the actual layout engine, this gives us a much higher upper limit.
I am still looking for a way to get to unlimited number of rows without giving up the performance edge that SlickGrid currently holds over other implementations.
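For illustration, a minimal sketch of the row-windowing this all builds on (not SlickGrid's actual code; the selectors and sizes are made up): size a spacer to the full data height and render only the visible slice on scroll.

// The browser owns the scrollbar; we just translate scrollTop into a row range.
const viewport = document.querySelector('#grid-viewport');
const spacer = viewport.firstElementChild; // empty div that sizes the scrollbar
const rowHeight = 25;
const totalRows = 40000; // 1M px total: safely under the CSS height limits above
spacer.style.height = (totalRows * rowHeight) + 'px';

viewport.addEventListener('scroll', () => {
  const firstRow = Math.floor(viewport.scrollTop / rowHeight);
  const visible = Math.ceil(viewport.clientHeight / rowHeight) + 1;
  // ...absolutely position and fill rows firstRow .. firstRow + visible here
});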
https://github.com/mleibman/SlickGrid/wiki
"SlickGrid utilizes virtual rendering to enable you to easily work with hundreds of thousands of items without any drop in performance. In fact, there is no difference in performance between working with a grid with 10 rows versus a 100’000 rows."
Some highlights:
Adaptive virtual scrolling (handle hundreds of thousands of rows)
Extremely fast rendering speed
Background post-rendering for richer cells
Configurable & customizable
Full keyboard navigation
Column resize/reorder/show/hide
Column autosizing & force-fit
Pluggable cell formatters & editors
Support for editing and creating new rows
by mleibman
It's free (MIT license).
It uses jQuery.
The best Grids in my opinion are below:
Flexigrid: http://flexigrid.info/
jQuery Grid: http://www.trirand.com/blog/
jqGridView: http://plugins.jquery.com/project/jqGridView
jqxGrid: https://www.jqwidgets.com/
Ingrid: http://reconstrukt.com/ingrid/
SlickGrid http://github.com/mleibman/SlickGrid
DataTables http://www.datatables.net/index
ShieldUI http://demos.shieldui.com/web/grid-virtualization/performance-1mil-rows
Smart.Grid https://www.htmlelements.com/demos/grid/overview/
My top 3 options are jqGrid, jqxGrid and DataTables. They can work with thousands of rows and support virtualization.
I don't mean to start a flame war, but assuming your researchers are human, you don't know them as well as you think. Just because they have petabytes of data doesn't make them capable of viewing even millions of records in any meaningful way. They might say they want to see millions of records, but that's just silly. Have your smartest researchers do some basic math: assume they spend 1 second viewing each record. At that rate, it will take 1,000,000 seconds, which works out to more than six weeks of 40-hour work-weeks with no breaks for food or lavatory.
Do they (or you) seriously think one person (the one looking at the grid) can muster that kind of concentration? Are they really getting much done in that 1 second, or are they (more likely) filtering out the stuff they don't want? I suspect that after viewing a "reasonably-sized" subset, they could describe a filter to you that would automatically filter out those records.
As paxdiablo and Sleeper Smith and Lasse V Karlsen also implied, you (and they) have not thought through the requirements. On the up side, now that you've found SlickGrid, I'm sure the need for those filters became immediately obvious.
I can say with pretty good certainty that you seriously do not need to show millions of rows of data to the user.
There is no user in the world that will be able to comprehend or manage that data set so even if you technically manage to pull it off, you won't solve any known problem for that user.
Instead I would focus on why the user wants to see the data. The user does not want to see the data just to see the data, there is usually a question being asked. If you focus on answering those questions instead, then you would be much closer to something that solves an actual problem.
I recommend the Ext JS Grid with the Buffered View feature.
http://www.extjs.com/deploy/dev/examples/grid/buffer.html
(Disclaimer: I am the author of w2ui)
I have recently written an article on how to implement a JavaScript grid with 1 million records (http://w2ui.com/web/blog/7/JavaScript-Grid-with-One-Million-Records). I discovered that ultimately there are 3 restrictions that prevent you from taking it higher:
The height of a div has a limit (this can be overcome by virtual scrolling)
Operations such as sort and search start getting slow after 1 million records or so
RAM is limited because the data is stored in a JavaScript array
I have tested the grid with 1 million records (except IE) and it performs well. See article for demos and examples.
dojox.grid.DataGrid offers a JS abstraction for data so you can hook it up to various backends with provided dojo.data stores or write your own. You'll obviously need one that supports random access for this many records. DataGrid also provides full accessibility.
Edit: here's a link to Matthew Russell's article that should provide the example you need: viewing millions of records with dojox.grid. Note that it uses the old version of the grid, but the concepts are the same; there were just some incompatible API improvements.
Oh, and it's totally free open source.
I used the jQuery Grid Plugin; it was nice.
Demos
Here are a couple of optimizations you can apply to speed things up. Just thinking out loud.
Since the number of rows can be in the millions, you will want a caching system just for the JSON data from the server. I can't imagine anybody wanting to download all X million items, but if they did, it would be a problem. This little test on Chrome for an array of 20M+ integers crashes on my machine constantly.
var data = [];
for (var i = 0; i < 20000000; i++) {
  data.push(i);
}
console.log(data.length);
You could use LRU or some other caching algorithm and have an upper bound on how much data you're willing to cache.
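For instance, a minimal LRU sketch (hand-rolled here for illustration; a JS Map preserves insertion order, which makes eviction simple):

// Minimal LRU cache: the first key in the Map is always the least recently used.
class LruCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // move to the "most recent" end
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      this.map.delete(this.map.keys().next().value); // evict least recent
    }
  }
}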
For the table cells themselves, I think constructing/destroying DOM nodes can be expensive. Instead, you could just pre-define X number of cells, and whenever the user scrolls to a new position, inject the JSON data into these cells. The scrollbar would have virtually no direct relationship to how much space (height) is required to represent the entire dataset. You could arbitrarily set the table container's height, say 5000px, and map that to the total number of rows. For example, if the container's height is 5000px and there are a total of 10M rows, then the starting row ≈ (scroll.top/5000) * 10M, where scroll.top represents the scroll distance from the top of the container. Small demo here.
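A sketch of that cell recycling (the element ID and column count are made up):

// Build a fixed pool of rows once; on scroll, refill them with new data
// instead of creating and destroying DOM nodes.
const tbody = document.querySelector('#grid tbody');
const VISIBLE_ROWS = 30, COLUMNS = 11;
const pool = [];
for (let i = 0; i < VISIBLE_ROWS; i++) {
  const tr = tbody.insertRow();
  for (let c = 0; c < COLUMNS; c++) tr.insertCell();
  pool.push(tr);
}

function fillPool(records) { // records: the data slice for the current position
  pool.forEach((tr, i) => {
    const record = records[i] || [];
    Array.from(tr.cells).forEach((td, c) => { td.textContent = record[c] ?? ''; });
  });
}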
To detect when to request more data, ideally an object should act as a mediator that listens to scroll events. This object keeps track of how fast the user is scrolling, and when it looks like the user is slowing down or has completely stopped, makes a data request for the corresponding rows. Retrieving data in this fashion means your data is going to be fragmented, so the cache should be designed with that in mind.
Also the browser limits on maximum outgoing connections can play an important part. A user may scroll to a certain position which will fire an AJAX request, but before that finishes the user can scroll to some other portion. If the server is not responsive enough the requests would get queued up and the application will look unresponsive. You could use a request manager through which all requests are routed, and it can cancel pending requests to make space.
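Putting the last two points together, a rough sketch (the endpoint is hypothetical; viewport and fillPool come from the sketches above):

// Wait for scrolling to settle, then fetch rows for the current position,
// aborting any request that has been superseded in the meantime.
let settleTimer = null;
let controller = null;
viewport.addEventListener('scroll', () => {
  clearTimeout(settleTimer);
  settleTimer = setTimeout(async () => {
    if (controller) controller.abort(); // cancel the stale request
    controller = new AbortController();
    const firstRow = Math.floor(viewport.scrollTop / 25); // 25px rows, as above
    try {
      const res = await fetch('/rows?offset=' + firstRow + '&limit=' + VISIBLE_ROWS,
                              { signal: controller.signal });
      fillPool(await res.json());
    } catch (e) {
      if (e.name !== 'AbortError') throw e; // aborted requests are expected here
    }
  }, 150); // "user has slowed down or stopped" heuristic
});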
I know it's an old question but still... There is also dhtmlxGrid, which can handle millions of rows. There is a demo with 50,000 rows, but the number of rows that can be loaded/processed in the grid is unlimited.
Disclaimer: I'm from DHTMLX team.
I suggest you read this
http://www.sitepen.com/blog/2008/11/21/effective-use-of-jsonreststore-referencing-lazy-loading-and-more/
Disclaimer: I have used YUI DataTable heavily, without any headaches, for a long time. It is powerful and stable. For your needs, you can use a ScrollingDataTable, which supports
x-scrolling
y-scrolling
xy-scrolling
A powerful Event mechanism
For what you need, I think what you want is a tableScrollEvent. Its API says:
Fired when a fixed scrolling DataTable has a scroll.
As each DataTable uses a DataSource, you can monitor its data through tableScrollEvent, along with the render loop size, in order to populate your ScrollingDataTable according to your needs.
The render loop size documentation says:
In cases where your DataTable needs to display the entirety of a very large set of data, the renderLoopSize config can help manage browser DOM rendering so that the UI thread does not get locked up on very large tables. Any value greater than 0 will cause the DOM rendering to be executed in setTimeout() chains that render the specified number of rows in each loop. The ideal value should be determined per implementation since there are no hard and fast rules, only general guidelines:
By default renderLoopSize is 0, so all rows are rendered in a single loop. A renderLoopSize > 0 adds overhead so use thoughtfully.
If your set of data is large enough (number of rows X number of Columns X formatting complexity) that users experience latency in the visual rendering and/or it causes the script to hang, consider setting a renderLoopSize.
A renderLoopSize under 50 probably isn't worth it. A renderLoopSize > 100 is probably better.
A data set is probably not considered large enough unless it has hundreds and hundreds of rows.
Having a renderLoopSize > 0 and < total rows does cause the table to be rendered in one loop (same as renderLoopSize = 0) but it also triggers functionality such as post-render row striping to be handled from a separate setTimeout thread.
For instance
// Render 100 rows per loop
var dt = new YAHOO.widget.DataTable(<WHICH_DIV_WILL_STORE_YOUR_DATATABLE>, <HOW_YOUR_TABLE_IS_STRUCTURED>, <WHERE_DOES_THE_DATA_COME_FROM>, {
    renderLoopSize: 100
});
<WHERE_DOES_THE_DATA_COME_FROM> is just a single DataSource. It can be JSON, a JS function, XML, or even a single HTML element.
Here you can see a simple tutorial, provided by me. Be aware that no other DataTable plugin supports single and double click at the same time; YUI DataTable does. What's more, you can use it together with jQuery without any headaches.
Feel free to question about anything else you want about YUI DataTable.
regards,
I kind of fail to see the point, for jqGrid you can use the virtual scrolling functionality:
http://www.trirand.net/aspnetmvc/grid/performancevirtualscrolling
but then again, millions of rows with filtering can be done:
http://www.trirand.net/aspnetmvc/grid/performancelinq
I really fail to see the point of "as if there are no pages" though. I mean... there is no way to display 1,000,000 rows at once in the browser - that's 10MB of raw HTML - so I kind of fail to see why users would not want to see pages.
Anyway...
The best approach I can think of is loading chunks of data in JSON format on every scroll, or before the scrolling reaches some limit. JSON can easily be converted to objects, so table rows can be constructed easily and unobtrusively.
I would highly recommend OpenRico.
It is difficult to implement in the beginning, but once you grasp it you will never look back.
I know this question is a few years old, but jqGrid now supports virtual scrolling:
http://www.trirand.com/blog/phpjqgrid/examples/paging/scrollbar/default.php
but with pagination disabled
I suggest Sigma Grid - it has built-in paging features which can support millions of rows. You may also need remote paging to do it.
see the demo
http://www.sigmawidgets.com/products/sigma_grid2/demos/example_master_details.html
Take a look at dGrid:
https://dgrid.io/
I agree that users will NEVER, EVER need to view millions of rows of data all at once, but dGrid can display them quickly (a screenful at a time).
Don't boil the ocean to make a cup of tea.

Can anyone recommend a well performing interface to allow the user to organize a large number of items in HTML?

Currently for "group" management you can click the name of the group in a list of available groups and it will take you to a page with two side by side multi-select list boxes. Between the two list boxes are Add and Remove buttons. You select all the "users" from the left list and click 'Add' and they will appear in the right list, and vice versa. This works fairly well for a small amount of data.
The problem arises when you start having thousands of users. Not only is it difficult and time-consuming to search through them (despite having a 'filter' at the top that narrows results based on a string), but you eventually reach a point where the number of list items outstrips your computer's power and the whole browser starts to lag horrendously.
Is there a better interface idea for managing this? Or are there any well known tricks to make it perform better and/or be easier to use when there are many 'items' in the lists?
Implement an Ajax function that hooks on keydown and checks the characters the user has typed into the search/filter box so far (server-side). When the search results drop below 50, push those to the browser for display.
Alternatively, you can use a jQuery UI Autocomplete plugin, and set the minimum number of characters to 3 to trigger the search. This will limit the number of list items that are pushed to the browser.
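For the second option, a sketch (the endpoint, field ID and handler are made up):

// jQuery UI Autocomplete: no server round-trip until the user has typed at
// least 3 characters, which keeps the result set pushed to the browser small.
$('#user-filter').autocomplete({
  minLength: 3,
  source: function (request, response) {
    $.getJSON('/users/search', { q: request.term, limit: 50 }, response);
  },
  select: function (event, ui) {
    addUserToGroup(ui.item.value); // hypothetical: add the chosen user to the group
  }
});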
I would get away from using the native list box in your browser and implement a solution in HTML/CSS using lists or tables (depending on your needs). Then you can use JavaScript and AJAX to pull only the subset of data you need. Watch the user's actions and pull the next 50 records before they actually get to them. That will give the illusion of all of the records being loaded at runtime.
The iPhone does this kind of thing to preserve memory for its TableViews. I would take that idea and apply it to your case.
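A sketch of that look-ahead loading (the list markup and endpoint are illustrative):

// When the user scrolls near the bottom, fetch the next 50 records ahead of
// time so the list appears to have been fully loaded all along.
const list = document.querySelector('#user-list');
let offset = 0;
let loading = false;
list.addEventListener('scroll', async () => {
  const nearBottom = list.scrollTop + list.clientHeight > list.scrollHeight - 200;
  if (!nearBottom || loading) return;
  loading = true;
  const res = await fetch('/users?offset=' + offset + '&limit=50');
  const users = await res.json();
  users.forEach(u => {
    const li = document.createElement('li');
    li.textContent = u.name;
    list.appendChild(li);
  });
  offset += users.length;
  loading = false;
});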
I'd say you hit the nail on the head with the word 'filter'. I'm not the hugest fan of parallel multi-selects like what you are describing, but that is almost beside the point: whatever UX element you use, you are going to run into a problem given thousands of items. Thus, filtering. Filtering with a search string is a fine solution, but I suspect searching by name is not the fastest way to get to the users that the admin here wants. What else do you know about the users? How are they grouped?
For example, if these users were students at a high school, we would know some metadata about them: what grade are they in? How old are they? What stream of studies are they in? What is their GPA? Providing filtering on these pieces of metadata is one way of limiting the number of students available at a time. If you have too many to start with and it is causing performance problems, consider just limiting them: have a button to load more, and only show 100 at a time.
Update: the last point here is essentially what Jesse is proposing below.

Javascript Efficiency with Constants passed as Parameters?

I have a general JavaScript question. I'll give you my scenario and then ask you the question.
Scenario
I am making a table with (currently) over 3000 rows, and it is growing by 5-10 every day. I am using a JavaScript plugin to style this table and add useful functionality. It currently takes 15 seconds to fully load the page, after which everything runs smoothly (sorting, paging, etc.). That is a very slow initial load, though. The plugin offers a way that involves less DOM parsing, where you pass it an array of the information to be placed inside the table, which I am very intrigued by. However, I want to make this as fast as possible, because there will still be an array of 3000 rows (each with 11 columns of, on average, 10 characters).
Question
Would it be significantly faster to use a JavaScript const to store this giant array? Specifically, does JavaScript know not to put a const on the stack when passed as a parameter?
Furthermore, is this simply too much for JavaScript to handle? Should I dismiss this idea and start with AJAX now (which would mean much slower functionality but much faster pageload)?
Thanks!
Because you say interaction is fast once the page is loaded, I guess your biggest bottleneck is transferring the data over the wire.
I would send everything as JSON (compressed with gzip), which is very lightweight and fast to load.
I think styling should be done with CSS, not JS. Also, if you want the best UX, initialize your table with fewer elements (1-200) and then deal with the rest; it is better for the user if you show something right at the beginning.
Storing the array can't be a problem, because the GC will clean it up.
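A sketch of that staged initialization (chunk size and pacing are illustrative):

// Render the first chunk synchronously so the user sees data immediately,
// then append the remaining rows in small batches without blocking the UI.
function renderIncrementally(rows, tbody, chunkSize = 200) {
  let i = 0;
  (function nextChunk() {
    rows.slice(i, i + chunkSize).forEach(cols => {
      const tr = tbody.insertRow();
      cols.forEach(v => { tr.insertCell().textContent = v; });
    });
    i += chunkSize;
    if (i < rows.length) setTimeout(nextChunk, 0); // yield between batches
  })();
}

// e.g. renderIncrementally(dataArray, document.querySelector('#stats tbody'));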
