About two months ago I got back to working on my fully custom e-commerce site, optimizing the front end for PageSpeed, given that Googlebot now measures parameters such as CLS and LCP, which it treats as critical and which play a part in determining how that particular page is treated in Google's index.
I did all the optimizations I could manage, such as:
extracting the critical CSS and inlining it
merging all non-critical CSS and inlining it as well
mod_pagespeed
images and JS served from a CDN
nginx optimization
moving non-core JS down near the closing body tag where possible
deferring non-critical JS
preloading, prefetching, and many other things I no longer remember, done during the long nights spent studying the template
So now I have reached a great result compared to two months ago.
The only thing I can't explain is why Time To Interactive, CLS, and LCP are high on mobile when the desktop version is just fine.
I am puzzling over it, trying to find the key to the solution.
This is an example of a product page: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2Fcatalogo%2FSS-Y6SO4-HET-TERMINALI-AKRAPOVIC-YAMAHA-FZ-6-S2-FZ-6-FAZER-S2-2004-2009-TITANIO--2405620
and here is the homepage, which has acceptable values compared to the product page, but Time To Interactive is still much higher than on desktop: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2F
Thanks in advance to all the experts who can be of help to me.
Why are my mobile scores lower than desktop?
The mobile test uses network throttling and CPU throttling to simulate a mid-range device on 4G, so you will always get lower scores for mobile.
You may find a related answer I gave useful on the difference between how the page loads as you see it and how Lighthouse applies throttling (along with how to make Lighthouse use applied throttling so you can see the slowdown in real time); however, I have included the key information in this answer and explained how it applies to your circumstances.
Simulated Network Throttling
When you run an audit it applies 150ms of latency to each request, limits download speed to 1.6 megabits per second (200 kilobytes per second), and limits upload to 750 kilobits per second (94 kilobytes per second).
This is very likely to affect your Largest Contentful Paint (LCP) as resources take a lot longer to download.
This throttling is done via an algorithm rather than applied (it is simulated) so you don't see the slower load times.
CPU throttling
Lighthouse applies a 4x slowdown to your CPU to simulate mid-tier mobile phone performance.
If your JavaScript payload is heavy this could block the main thread and delay rendering. Or if you dynamically insert elements using JavaScript it can delay LCP for the same reason.
This affects your Time To Interactive most out of the items you listed.
This throttling is also done via an algorithm rather than applied (it is simulated) so you don't see the slower processing times.
What do you need to do to improve your scores?
Focus on your JavaScript.
You have 5 blocking scripts for a start; just add defer to them as you have done with the others (or better yet async, if you know how to handle async JS loading).
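If you can't easily edit the markup your platform generates, a rough sketch of an alternative is to inject the non-critical scripts after the load event (the bundle path here is hypothetical):

window.addEventListener('load', function () {
    var s = document.createElement('script');
    s.src = '/js/non-critical-bundle.js'; // hypothetical bundle path
    s.async = true;
    document.body.appendChild(s); // downloads and runs only after everything else
});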
Secondly, the payload is over 400 kB of JS (uncompressed); if you look, your scripts take 2.3 seconds to evaluate.
Strip out anything you don't need, and also run a trace in the "Performance" tab in Developer Tools and learn how to find long-running tasks and performance bottlenecks.
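As a starting point, here is a minimal sketch of logging long tasks from the page itself, assuming a browser that supports the Long Tasks API:

var observer = new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (entry) {
        // Any main-thread task over 50ms shows up here with its duration.
        console.log('Long task: ' + Math.round(entry.duration) + 'ms at ' + Math.round(entry.startTime) + 'ms');
    });
});
observer.observe({ entryTypes: ['longtask'] });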
Look at reducing the number of network requests; because of the higher latency of a 4G network, a lot of requests can add several seconds to your load time.
Combine as much CSS and JS as you can. And you only need to inline your critical CSS, not the entire page's CSS: find all the rules used "above the fold" and inline only those. At the moment you seem to be sending the whole site's CSS inline, which adds a lot of page weight.
Finally, your high Cumulative Layout Shift (CLS) is (in part) because you are using JS to hide items on page load (for example, the modal with ID comp-modal appears to be hidden with JS), but they have already rendered by the time that JS runs. This is easily fixed by hiding them in your inline CSS rather than with JavaScript.
Other than that, you just need to follow the guidance the Lighthouse report gives you. You may not have paid much attention to the "Diagnostics" section yet; start looking there at anything that has a red triangle or orange square next to it, as each item provides useful information on things that may need optimisation.
That should be enough to get you started.
A particular page on our site loads with thousands of divs, each about 1000px wide by roughly 1500px tall (a printable page); each div displays additional elements, basic tables, etc., but can vary in height.
Render time can be several minutes depending on PC performance.
Using tools like webix, which can load millions of rows, proves that the render process is taking up most of the loading time, but they don't work well for non-tabular data.
Using AngularJS to create infinite-scroll lists is possible, but this also doesn't work well with varying-height elements.
All the solutions I have found so far lose the browser's find feature, which our users commonly use, so we will probably have to develop our own search tool.
Yes, we could add pagination or some other way of breaking down the data, but users still need to review all the data regardless of how it's broken down.
The same data (10,000 pages, 30 MB) loads in less than 1 second once exported to PDF.
I think the best solution will be the combination of a few different ideas.
I'm having an issue which is hard to debug. I'm using a JavaScript library (the jQuery FlexSlider plugin) in a number of different places on my site. It's all working fine except on one particular phone, where it doesn't work and slows down everything on the page.
So far, I've only seen it happen on this one device. Other devices of the same type do not have the issue. This person's iOS is a few versions out of date and the device doesn't have much memory, so I think it's a memory issue.
An old hack was to move the carousel element that has the issue around the page with JavaScript, but I want to find and fix the root issue.
How can I start debugging this? I'm not sure how to test for a memory issue on a device.
If you're on a Mac, you can plug the device in and use remote debugging via Safari, where you'll have access to the tools, including the profiler (I'm not sure of the state of Safari support on Windows). There are numerous resources showing how to remotely debug a device; unless it is a really old version of iOS you should be fine. You'll have to enable the Develop menu via Settings, but after that it's plain sailing if you know your debugging tools.
I'd agree that it doesn't really sound like a memory issue, although jQuery tends to be hungry in that respect. I don't know the plugin in question, but the quality of plugins is hugely variable in jQuery-land, and old phones and old versions of jQuery certainly never played well together.
When you say one phone, do you mean one type of phone plus iOS version? The question isn't clear; it almost reads like you have two identical phones/OSes where one works and one does not.
If you use Chrome, you can use the heap profiler:
First, open your developer tools and start recording.
Next, start using your page and try to replicate your issue; then stop recording and review the stats.
This is likely not a memory issue but a CPU issue: the way jQuery does animation is processor-constrained on older devices. The factors that are easiest to address include:
size of the page (html length and complexity)
animation steps, length, and complexity
You have a couple of options here, but the simple answer is that you are asking too much of the older processor. Assuming you are using this plugin, http://www.woothemes.com/flexslider/, you could try disabling or simplifying some of the transition effects; animation and animationSpeed would be the first I would suggest.
If you are interested in not changing the experience for most users, you could consider tying into the start and end functions of the callback API, checking the time it took to perform the first transition, and then reinitializing a simpler version of the slideshow for that device.
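A minimal sketch of that idea, using the callback names from the FlexSlider docs (the 500ms threshold and the fallback are assumptions, not a tested recipe):

var transitionStart;
$('.flexslider').flexslider({
    animation: 'slide',
    before: function () { transitionStart = Date.now(); },
    after: function (slider) {
        // If the very first transition was slow, degrade the experience:
        if (Date.now() - transitionStart > 500) {
            slider.pause(); // or tear down and re-init a simpler slideshow here
        }
    }
});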
The hard thing here is there isn't really a right answer. If one of the above options doesn't fix it you're likely looking at choosing/building a different slideshow, degrading the experience for everyone, or determining the best way you feel comfortable with choosing who gets the degraded experience.
I need to present a large number of rows of data (i.e., millions of rows) to the user in a grid using JavaScript.
The user shouldn't see pages or view only finite amounts of data at a time.
Rather, it should appear that all of the data are available.
Instead of downloading the data all at once, small chunks are downloaded as the user comes to them (i.e., by scrolling through the grid).
The rows will not be edited through this front end, so read-only grids are acceptable.
What data grids, written in JavaScript, exist for this kind of seamless paging?
(Disclaimer: I am the author of SlickGrid)
UPDATE
This has now been implemented in SlickGrid.
Please see http://github.com/mleibman/SlickGrid/issues#issue/22 for an ongoing discussion on making SlickGrid work with larger numbers of rows.
The problem is that SlickGrid does not virtualize the scrollbar itself - the scrollable area's height is set to the total height of all the rows. The rows are still being added and removed as the user is scrolling, but the scrolling itself is done by the browser. That allows it to be very fast yet smooth (onscroll events are notoriously slow). The caveat is that there are bugs/limits in the browsers' CSS engines that limit the potential height of an element. For IE, that happens to be 0x123456 or 1193046 pixels. For other browsers it is higher.
There is an experimental workaround in the "largenum-fix" branch that raises that limit significantly by populating the scrollable area with "pages" set to 1M pixels height and then using relative positioning within those pages. Since the height limit in the CSS engine seems to be different and significantly lower than in the actual layout engine, this gives us a much higher upper limit.
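A sketch of that paging idea (the numbers are illustrative, not SlickGrid internals): each row's real offset is split into a page index and a relative position within that page:

var ROW_HEIGHT = 25;        // illustrative row height
var PAGE_HEIGHT = 1000000;  // 1M-pixel "pages" inside the scrollable area
function positionForRow(row) {
    var realTop = row * ROW_HEIGHT;
    var page = Math.floor(realTop / PAGE_HEIGHT);
    // The row is positioned relative to page element number `page`.
    return { page: page, topWithinPage: realTop - page * PAGE_HEIGHT };
}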
I am still looking for a way to get to an unlimited number of rows without giving up the performance edge that SlickGrid currently holds over other implementations.
https://github.com/mleibman/SlickGrid/wiki
"SlickGrid utilizes virtual rendering to enable you to easily work with hundreds of thousands of items without any drop in performance. In fact, there is no difference in performance between working with a grid with 10 rows versus a 100’000 rows."
Some highlights:
Adaptive virtual scrolling (handle hundreds of thousands of rows)
Extremely fast rendering speed
Background post-rendering for richer cells
Configurable & customizable
Full keyboard navigation
Column resize/reorder/show/hide
Column autosizing & force-fit
Pluggable cell formatters & editors
Support for editing and creating new rows
by mleibman
It's free (MIT license).
It uses jQuery.
The best grids, in my opinion, are below:
Flexigrid: http://flexigrid.info/
jQuery Grid: http://www.trirand.com/blog/
jqGridView: http://plugins.jquery.com/project/jqGridView
jqxGrid: https://www.jqwidgets.com/
Ingrid: http://reconstrukt.com/ingrid/
SlickGrid http://github.com/mleibman/SlickGrid
DataTables http://www.datatables.net/index
ShieldUI http://demos.shieldui.com/web/grid-virtualization/performance-1mil-rows
Smart.Grid https://www.htmlelements.com/demos/grid/overview/
My top three options are jqGrid, jqxGrid, and DataTables. They can work with thousands of rows and support virtualization.
I don't mean to start a flame war, but assuming your researchers are human, you don't know them as well as you think. Just because they have petabytes of data doesn't make them capable of viewing even millions of records in any meaningful way. They might say they want to see millions of records, but that's just silly. Have your smartest researchers do some basic math: assume they spend 1 second viewing each record. At that rate, a million records will take 1,000,000 seconds, which works out to more than six weeks of 40-hour work-weeks with no breaks for food or the lavatory.
Do they (or you) seriously think one person (the one looking at the grid) can muster that kind of concentration? Are they really getting much done in that 1 second, or are they (more likely) filtering out the stuff they don't want? I suspect that after viewing a "reasonably-sized" subset, they could describe a filter to you that would automatically filter out those records.
As paxdiablo, Sleeper Smith, and Lasse V Karlsen also implied, you (and they) have not thought through the requirements. On the up side, now that you've found SlickGrid, I'm sure the need for those filters has become immediately obvious.
I can say with pretty good certainty that you seriously do not need to show millions of rows of data to the user.
There is no user in the world who will be able to comprehend or manage that data set, so even if you technically manage to pull it off, you won't solve any known problem for that user.
Instead, I would focus on why the user wants to see the data. The user does not want to see the data just to see the data; there is usually a question being asked. If you focus on answering those questions instead, you will be much closer to something that solves an actual problem.
I recommend the Ext JS Grid with the Buffered View feature.
http://www.extjs.com/deploy/dev/examples/grid/buffer.html
(Disclaimer: I am the author of w2ui)
I have recently written an article on how to implement a JavaScript grid with 1 million records (http://w2ui.com/web/blog/7/JavaScript-Grid-with-One-Million-Records). I discovered that there are ultimately 3 restrictions that prevent taking it higher:
Height of the div has a limit (can be overcome by virtual scrolling)
Operations such as sort and search start being slow after 1 million records or so
RAM is limited because data is stored in JavaScript array
I have tested the grid with 1 million records in all browsers except IE, and it performs well. See the article for demos and examples.
dojox.grid.DataGrid offers a JS abstraction for data, so you can hook it up to various backends with the provided dojo.data stores or write your own. You'll obviously need one that supports random access for this many records. DataGrid also provides full accessibility.
Edit: here's a link to Matthew Russell's article that should provide the example you need: viewing millions of records with dojox.grid. Note that it uses the old version of the grid, but the concepts are the same; there were just some incompatible API improvements.
Oh, and it's totally free, open source.
I used the jQuery Grid plugin; it was nice.
Demos
Here are a couple of optimizations you can apply to speed things up. Just thinking out loud.
Since the number of rows can be in the millions, you will want a caching system just for the JSON data from the server. I can't imagine anybody wanting to download all X million items, but if they did, it would be a problem. This little test on Chrome, building an array of 20M+ integers, crashes on my machine constantly:
// Allocating a 20M-element array in one shot: this alone can crash the tab.
var data = [];
for (var i = 0; i < 20000000; i++) {
    data.push(i);
}
console.log(data.length);
You could use LRU or some other caching algorithm and have an upper bound on how much data you're willing to cache.
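For illustration, here is a tiny LRU sketch (the capacity and structure are arbitrary choices, not tied to any particular grid):

function LruCache(capacity) {
    this.capacity = capacity;
    this.keys = [];  // least recently used first
    this.store = {};
}
LruCache.prototype.get = function (key) {
    if (!(key in this.store)) return undefined;
    this.keys.splice(this.keys.indexOf(key), 1);
    this.keys.push(key); // mark as most recently used
    return this.store[key];
};
LruCache.prototype.set = function (key, value) {
    if (key in this.store) this.keys.splice(this.keys.indexOf(key), 1);
    this.keys.push(key);
    this.store[key] = value;
    if (this.keys.length > this.capacity) {
        delete this.store[this.keys.shift()]; // evict the least recently used entry
    }
};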
For the table cells themselves, I think constructing/destroying DOM nodes can be expensive. Instead, you could just pre-define X number of cells, and whenever the user scrolls to a new position, inject the JSON data into these cells. The scrollbar would have virtually no direct relationship to how much space (height) is required to represent the entire dataset. You could arbitrarily set the table container's height, say 5000px, and map that to the total number of rows. For example, if the container's height is 5000px and there are a total of 10M rows, then the starting row ≈ (scroll.top/5000) * 10M, where scroll.top represents the scroll distance from the top of the container. Small demo here.
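Written out as code, the mapping above looks like this (same illustrative numbers):

var TOTAL_ROWS = 10000000;    // 10M rows
var CONTAINER_HEIGHT = 5000;  // arbitrary virtual height in px
function startingRow(scrollTop) {
    // Fraction of the container scrolled, scaled to the row count.
    return Math.floor((scrollTop / CONTAINER_HEIGHT) * TOTAL_ROWS);
}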
To detect when to request more data, ideally an object should act as a mediator that listens to scroll events. This object keeps track of how fast the user is scrolling, and when it looks like the user is slowing down or has completely stopped, makes a data request for the corresponding rows. Retrieving data in this fashion means your data is going to be fragmented, so the cache should be designed with that in mind.
Also, browser limits on the maximum number of outgoing connections can play an important part. A user may scroll to a certain position, which will fire an AJAX request, but before that finishes the user can scroll to some other portion. If the server is not responsive enough, the requests will queue up and the application will look unresponsive. You could use a request manager through which all requests are routed, and which can cancel pending requests to make space.
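A minimal sketch of such a request manager with jQuery (the /rows endpoint is hypothetical): each newly requested range aborts the stale in-flight request:

var pending = null;
function requestRows(start, end, onData) {
    if (pending) pending.abort(); // drop the superseded request
    pending = $.ajax({
        url: '/rows', // hypothetical endpoint
        data: { start: start, end: end },
        dataType: 'json',
        success: function (rows) {
            pending = null;
            onData(rows);
        }
    });
}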
I know it's an old question, but still: there is also dhtmlxGrid, which can handle millions of rows. There is a demo with 50,000 rows, but the number of rows that can be loaded/processed in the grid is unlimited.
Disclaimer: I'm from DHTMLX team.
I suggest you read this
http://www.sitepen.com/blog/2008/11/21/effective-use-of-jsonreststore-referencing-lazy-loading-and-more/
Disclaimer: I have been using YUI DataTable heavily for a long time without any headaches. It is powerful and stable. For your needs, you can use a ScrollingDataTable, which supports:
x-scrolling
y-scrolling
xy-scrolling
A powerful Event mechanism
For what you need, I think what you want is the tableScrollEvent. Its API says:
Fired when a fixed scrolling DataTable has a scroll.
As each DataTable uses a DataSource, you can monitor its data through tableScrollEvent along with the render loop size in order to populate your ScrollingDataTable according to your needs.
The render loop size documentation says:
In cases where your DataTable needs to display the entirety of a very large set of data, the renderLoopSize config can help manage browser DOM rendering so that the UI thread does not get locked up on very large tables. Any value greater than 0 will cause the DOM rendering to be executed in setTimeout() chains that render the specified number of rows in each loop. The ideal value should be determined per implementation since there are no hard and fast rules, only general guidelines:
By default renderLoopSize is 0, so all rows are rendered in a single loop. A renderLoopSize > 0 adds overhead so use thoughtfully.
If your set of data is large enough (number of rows X number of Columns X formatting complexity) that users experience latency in the visual rendering and/or it causes the script to hang, consider setting a renderLoopSize.
A renderLoopSize under 50 probably isn't worth it. A renderLoopSize > 100 is probably better.
A data set is probably not considered large enough unless it has hundreds and hundreds of rows.
Having a renderLoopSize > 0 and < total rows does cause the table to be rendered in one loop (same as renderLoopSize = 0) but it also triggers functionality such as post-render row striping to be handled from a separate setTimeout thread.
For instance
// Render 100 rows per loop
var dt = new YAHOO.widget.DataTable(
    <WHICH_DIV_WILL_STORE_YOUR_DATATABLE>,
    <HOW_YOUR_TABLE_IS_STRUCTURED>,
    <WHERE_DOES_THE_DATA_COME_FROM>,
    { renderLoopSize: 100 }
);
<WHERE_DOES_THE_DATA_COME_FROM> is just a single DataSource. It can be JSON, a JS function, XML, or even a single HTML element.
Here you can see a simple tutorial, provided by me. Be aware that no other data-table plugin supports single and double click at the same time; YUI DataTable does. What's more, you can use it together with jQuery without any headaches.
Feel free to ask anything else you want to know about YUI DataTable.
Regards,
I kind of fail to see the point; for jqGrid you can use the virtual scrolling functionality:
http://www.trirand.net/aspnetmvc/grid/performancevirtualscrolling
but then again, millions of rows with filtering can be done:
http://www.trirand.net/aspnetmvc/grid/performancelinq
I really fail to see the point of "as if there are no pages", though. I mean, there is no way to display 1,000,000 rows at once in the browser; that is 10 MB of raw HTML. I kind of fail to see why users would not want to see pages.
Anyway...
The best approach I can think of is to load a chunk of data in JSON format on every scroll, or once some limit is reached before the scrolling ends. JSON can easily be converted to objects, so the table rows can be constructed easily and unobtrusively.
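A rough sketch of that approach, assuming a scrollable container and a hypothetical JSON endpoint (the 200px threshold is arbitrary):

var loading = false;
$('#grid-container').on('scroll', function () {
    // Within 200px of the bottom and nothing in flight? Fetch the next chunk.
    if (!loading && this.scrollTop + this.clientHeight > this.scrollHeight - 200) {
        loading = true;
        $.getJSON('/rows', { offset: $('#grid-container tr').length }, function (rows) {
            // ...convert each JSON row to a <tr> and append it here...
            loading = false;
        });
    }
});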
I would highly recommend Open Rico.
It is difficult to implement in the beginning, but once you grasp it you will never look back.
I know this question is a few years old, but jqgrid now supports virtual scrolling:
http://www.trirand.com/blog/phpjqgrid/examples/paging/scrollbar/default.php
but with pagination disabled
I suggest Sigma Grid; it has embedded paging features which can support millions of rows. You may also need remote paging to make it work.
See the demo:
http://www.sigmawidgets.com/products/sigma_grid2/demos/example_master_details.html
Take a look at dGrid:
https://dgrid.io/
I agree that users will NEVER, EVER need to view millions of rows of data all at once, but dGrid can display them quickly (a screenful at a time).
Don't boil the ocean to make a cup of tea.
This is the fiddle: http://jsfiddle.net/36mdt/
After about 10-20 seconds, the display starts to freeze randomly and shortly after crashes. I cannot reproduce this in Firefox.
Profiling reveals nothing unusual.
http://jsfiddle.net/3pbdQ/ shows there is definitely a memory leak. Even at 1 FPS, the memory usage climbs 5 megabytes a frame.
On a side note, this example also shows how Math.random() is really not so random.
I've made a few performance improvements and it doesn't crash after 5 minutes (it also seems not to be leaking memory); check out http://jsfiddle.net/3pbdQ/3/. A sketch combining the changes follows the list.
Don't calculate the size inside each iteration.
Use chained timeouts instead of a freezing interval.
Use a bitwise operator for flooring a number.
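Here is a sketch combining those changes (the canvas id and TIME value are assumptions based on the fiddle):

var TIME = 16; // ~60 FPS
var canvas = document.getElementById('canvas'); // assumed element id
var width = canvas.width, height = canvas.height; // hoisted out of the loop
function frame() {
    var x = (Math.random() * width) | 0;  // bitwise floor instead of Math.floor
    var y = (Math.random() * height) | 0;
    // ...draw at (x, y)...
    setTimeout(frame, TIME); // chained timeouts can't pile up like setInterval
}
frame();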
Regarding "Profiling reveals nothing unusual":
The Chrome profiler doesn't work with Web Workers, AFAIK. As per a conversation with Paul Irish:
"Check about:inspect for shared workers, also you can do console.profile() within the worker code (I THINK) and capture those bits. The "cleans up" is the garbage collector: if after the cleanup there is still a growing line of excess memory, then thats a leak."
And regarding "this example really shows how Math.random() is really not so random":
It is well known that there are no perfect random algorithms, but the bunches of grouped colors you see are actually because you're not setting canvas.height and canvas.width, so they differ from the CSS values.
EDIT: It's still leaking memory, I don't know why; after about 10 seconds it 'cleans up'. That exceeds my knowledge, but it works smoothly at 60 FPS (var TIME = 16).
Depending on the system and browser version you use, some steps may vary, although I have tried my best to provide common steps that are compatible with most systems.
Disable Sandbox:
1. Right click Google Chrome desktop icon.
2. Select Properties.
3. Click Shortcut > Target.
4. Add "--no-sandbox" to the Target field.
5. Click Apply | OK.
Disable Plug-Ins:
1. Type "about:plugins" in the address bar.
2. Press ENTER.
3. Disable all plug-ins displayed in the list.
Clear Temporary Files:
1. Click Wrench.
2. Select More Tools | Clear browsing data.
3. Check all the boxes, then click the "Clear browsing data" button to confirm.
Thanks & Regards.
This is an unfortunate, known Chrome bug.