[Violation] Long running JavaScript task took xx ms (javascript)

Recently, I got this kind of warning, and this is my first time getting it:
[Violation] Long running JavaScript task took 234ms
[Violation] Forced reflow while executing JavaScript took 45ms
I'm working on a group project and I have no idea where this is coming from. It never happened before; it suddenly appeared when someone else joined the project. How do I find what file/function causes this warning? I've been searching for an answer, but most results are about how to fix it, and I can't fix it if I can't even find the source of the problem.
In this case, the warning appears only on Chrome. I tried to use Edge, but I didn't get any similar warnings, and I haven't tested it on Firefox yet.
I even get the error from jquery.min.js:
[Violation] Handler took 231ms of runtime (50ms allowed) jquery.min.js:2

Update: Chrome 58+ hid these and other debug messages by default. To display them click the arrow next to 'Info' and select 'Verbose'.
Chrome 57 turned on 'hide violations' by default. To turn them back on you need to enable filters and uncheck the 'hide violations' box.
"it suddenly appeared when someone else joined the project"
I think it's more likely that you updated to Chrome 56. This warning is a wonderful new feature, in my opinion; please turn it off only if you're desperate and your assessor will take marks off for it. The underlying problems are there in the other browsers too; those browsers just aren't telling you about them. The Chromium ticket is here, but there isn't really any interesting discussion on it.
These messages are warnings rather than errors because the issue isn't really going to cause major problems: it may cause frames to get dropped or otherwise lead to a less smooth experience.
They're worth investigating and fixing to improve the quality of your application, however. The way to do this is to pay attention to the circumstances in which the messages appear, and to do performance testing to narrow down where the issue originates. The simplest way to start performance testing is to insert some code like this:
function someMethodIThinkMightBeSlow() {
    const startTime = performance.now();
    // Do the normal stuff for this function
    const duration = performance.now() - startTime;
    console.log(`someMethodIThinkMightBeSlow took ${duration}ms`);
}
If you want to get more advanced, you could also use Chrome's profiler, or make use of a benchmarking library like this one.
Once you've found some code that's taking a long time (50 ms is Chrome's threshold), you have a few options:
1. Cut out some or all of that work if it's unnecessary
2. Figure out how to do the same work faster
3. Divide the code into multiple asynchronous steps
Options (1) and (2) may be difficult or impossible, but they're sometimes really easy and should be your first attempts. If needed, it should always be possible to do (3). To do this you will use something like:
setTimeout(functionToRunVerySoonButNotNow);
or
// This one is not available natively in IE, but there are polyfills available.
Promise.resolve().then(functionToRunVerySoonButNotNow);
You can read more about the asynchronous nature of JavaScript here.
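For example, option (3) for a long loop over a big array could look like the minimal sketch below, which yields to the browser between chunks; items, processItem, and the chunk size are placeholders for your own data and work:
function processInChunks(items, processItem, chunkSize) {
    let index = 0;
    function doChunk() {
        const end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            processItem(items[index]);
        }
        if (index < items.length) {
            // Yield so the browser can handle input and paint a frame,
            // then continue with the next chunk.
            setTimeout(doChunk, 0);
        }
    }
    doChunk();
}
// e.g. processInChunks(hugeArray, handleOneItem, 500);
The total work takes slightly longer this way, but no single task blocks the main thread long enough to trigger the violation.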

These are just warnings, as everyone has mentioned. However, if you're keen on resolving them (which you should be), then you first need to identify what is causing the warning. There is no single reason you can get a forced reflow warning.
Someone has compiled a list of some possible causes; you can follow the discussion there for more information.
Here's the gist of the possible reasons:
What forces layout / reflow
All of the below properties or methods, when requested/called in JavaScript, will trigger the browser to synchronously calculate the style and layout. This is also called reflow or layout thrashing, and is a common performance bottleneck.
Element
Box metrics
elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent
elem.clientLeft, elem.clientTop, elem.clientWidth, elem.clientHeight
elem.getClientRects(), elem.getBoundingClientRect()
Scroll stuff
elem.scrollBy(), elem.scrollTo()
elem.scrollIntoView(), elem.scrollIntoViewIfNeeded()
elem.scrollWidth, elem.scrollHeight
elem.scrollLeft, elem.scrollTop (also setting them)
Focus
elem.focus() can trigger a double forced layout (source)
Also…
elem.computedRole, elem.computedName
elem.innerText (source)
getComputedStyle
window.getComputedStyle() will typically force style recalc (source)
window.getComputedStyle() will force layout as well, if any of the following is true:
The element is in a shadow tree
There are viewport-related media queries; specifically, one of the following (source): min-width, min-height, max-width, max-height, width, height, aspect-ratio, min-aspect-ratio, max-aspect-ratio, device-pixel-ratio, resolution, orientation
The property requested is one of the following (source): height, width; top, right, bottom, left; margin [-top, -right, -bottom, -left, or shorthand], only if the margin is fixed; padding [-top, -right, -bottom, -left, or shorthand], only if the padding is fixed; transform, transform-origin, perspective-origin; translate, rotate, scale; webkit-filter, backdrop-filter; motion-path, motion-offset, motion-rotation; x, y, rx, ry
window
window.scrollX, window.scrollY
window.innerHeight, window.innerWidth
window.getMatchedCSSRules() only forces style
Forms
inputElem.focus()
inputElem.select(), textareaElem.select() (source)
Mouse events
mouseEvt.layerX, mouseEvt.layerY, mouseEvt.offsetX, mouseEvt.offsetY
(source)
document
doc.scrollingElement only forces style
Range
range.getClientRects(), range.getBoundingClientRect()
SVG
Quite a lot; I haven't made an exhaustive list, but Tony Gentilcore's 2011 Layout Triggering List pointed to a few.
contenteditable
Lots & lots of stuff, …including copying an image to clipboard (source)
Check more here.
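To see why this list matters, compare a loop that alternates writes and reads (forcing a synchronous layout on every iteration) with one that batches them. A minimal sketch, where boxes is a placeholder for some collection of elements:
// Layout thrashing: every offsetHeight read forces a reflow,
// because the write just before it invalidated the layout.
boxes.forEach(box => {
    box.style.width = '100px';     // write
    console.log(box.offsetHeight); // read, forces a synchronous layout
});
// Batched version: all reads first (at most one layout), then all writes.
const heights = Array.from(boxes, box => box.offsetHeight);
boxes.forEach(box => {
    box.style.width = '100px';
});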
Also, here's Chromium source code from the original issue and a discussion about a performance API for the warnings.
Edit: There's also an article on how to minimize layout reflow, on PageSpeed Insights by Google. It explains what browser reflow is:
Reflow is the name of the web browser process for re-calculating the
positions and geometries of elements in the document, for the purpose
of re-rendering part or all of the document. Because reflow is a
user-blocking operation in the browser, it is useful for developers to
understand how to improve reflow time and also to understand the
effects of various document properties (DOM depth, CSS rule
efficiency, different types of style changes) on reflow time.
Sometimes reflowing a single element in the document may require
reflowing its parent elements and also any elements which follow it.
In addition, it explains how to minimize it:
Reduce unnecessary DOM depth. Changes at one level in the DOM tree
can cause changes at every level of the tree - all the way up to the
root, and all the way down into the children of the modified node.
This leads to more time being spent performing reflow.
Minimize CSS rules, and remove unused CSS rules.
If you make complex rendering changes such as animations, do so out of the flow. Use position-absolute or position-fixed to accomplish
this.
Avoid unnecessary complex CSS selectors - descendant selectors in
particular - which require more CPU power to do selector matching.

A couple of ideas:
Remove half of your code (maybe via commenting it out).
Is the problem still there? Great, you've narrowed down the possibilities! Repeat.
Is the problem not there? Ok, look at the half you commented out!
Are you using any version control system (e.g., Git)? If so, git checkout some of your more recent commits. When was the problem introduced? Look at the commit to see exactly what code changed when the problem first arrived.

I found the root of this message in my code, which searched nodes and hid or showed them (offline). This was my code:
search.addEventListener('keyup', function () {
    for (const node of nodes) {
        // Reading innerText (a layout-dependent value) right after the
        // previous iteration's class change forces a reflow on every pass.
        if (node.innerText.toLowerCase().includes(this.value.toLowerCase()))
            node.classList.remove('hidden');
        else
            node.classList.add('hidden');
    }
});
The performance tab (profiler) showed the event taking about 60 ms.
Now:
search.addEventListener('keyup', function () {
    const nodesToHide = [];
    const nodesToShow = [];
    // Do all the layout-dependent reads first...
    for (const node of nodes) {
        if (node.innerText.toLowerCase().includes(this.value.toLowerCase()))
            nodesToShow.push(node);
        else
            nodesToHide.push(node);
    }
    // ...then do all the writes, so layout is invalidated only once.
    nodesToHide.forEach(node => node.classList.add('hidden'));
    nodesToShow.forEach(node => node.classList.remove('hidden'));
});
The performance tab (profiler) now shows the event taking about 1 ms.
And I feel that the search works faster now (229 nodes).

In order to identify the source of the problem, run your application, and record it in Chrome's Performance tab.
There you can check various functions that took a long time to run. In my case, the one that correlated with warnings in console was from a file which was loaded by the AdBlock extension, but this could be something else in your case.
Check these files and try to identify if this is some extension's code or yours. (If it is yours, then you have found the source of your problem.)

Look in Chrome DevTools under the Network tab and find the scripts which take the longest to load.
In my case there was a set of Angular add-on scripts that I had included but not yet used in the app:
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-router/0.2.8/angular-ui-router.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-utils/0.1.1/angular-ui-utils.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.9/angular-animate.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.9/angular-aria.min.js"></script>
These were the only JavaScript files that took longer to load than the time specified in the "Long Running Task" warning.
All of these files run on my other websites with no errors generated, but I was getting this "Long Running Task" error on a new web app that barely had any functionality. The error stopped immediately upon removing them.
My best guess is that these Angular add-ons were recursively searching deeper and deeper sections of the DOM for their start tags; finding none, they had to traverse the entire DOM before exiting, which took longer than Chrome expects, hence the warning.

I found a solution in the Apache Cordova source code.
They implement it like this:
// Use a resolved Promise (a microtask) if available; otherwise fall back to setTimeout.
var resolvedPromise = typeof Promise == 'undefined' ? null : Promise.resolve();
var nextTick = resolvedPromise ? function(fn) { resolvedPromise.then(fn); } // Promise path
                               : function(fn) { setTimeout(fn); };          // fallback
A simple implementation, but a smart approach:
On Android 4.4 and above (where Promise is available), use Promise.
For older browsers, use setTimeout().
Usage:
nextTick(function() {
    // your code
});
After inserting this code, all the warning messages were gone.

Adding my insights here, as this thread was the "go to" Stack Overflow question on the topic.
My problem was in a Material-UI app (early stages): the placement of a custom theme provider was the cause. Some calculations forced a re-render of the page (one component, the "display results", depends on what is set in others, the "input sections").
Everything was fine until I updated the state that forces the "results component" to re-render. The main issue was that I had a Material-UI theme (https://material-ui.com/customization/theming/#a-note-on-performance) in the same render (App.js / return ...) as the "results component", SummaryAppBarPure.
The solution was to lift the ThemeProvider one level up (index.js) and wrap the App component there, thus not forcing the ThemeProvider to recalculate and draw / layout / reflow.
before
in App.js:
return (
    <>
        <MyThemeProvider>
            <Container className={classes.appMaxWidth}>
                <SummaryAppBarPure
                //...
in index.js:
ReactDOM.render(
    <React.StrictMode>
        <App />
        //...
after
in App.js:
return (
    <>
        {/* move theme to index. made reflow problem go away */}
        {/* <MyThemeProvider> */}
        <Container className={classes.appMaxWidth}>
            <SummaryAppBarPure
            //...
in index.js:
ReactDOM.render(
    <React.StrictMode>
        <MyThemeProvider>
            <App />
            //...

This was added in the Chrome 56 beta, even though it isn't on this changelog from the Chromium Blog: Chrome 56 Beta: “Not Secure” warning, Web Bluetooth, and CSS position: sticky
You can hide this in the filter bar of the console with the Hide violations checkbox.

This is a violation warning from Google Chrome, shown when the Verbose logging level is enabled.
Explanation:
Reflow is the name of the web browser process for re-calculating the positions and geometries of elements in the document, for the purpose of re-rendering part or all of the document. Because reflow is a user-blocking operation in the browser, it is useful for developers to understand how to improve reflow time and also to understand the effects of various document properties (DOM depth, CSS rule efficiency, different types of style changes) on reflow time. Sometimes reflowing a single element in the document may require reflowing its parent elements and also any elements which follow it.
Original article: Minimizing browser reflow by Lindsey Simon, UX Developer, posted on developers.google.com.
And this is the link Google Chrome gives you in the Performance profiler, on the layout profiles (the mauve regions), for more info on the warning.

If you're using Chrome Canary (or Beta), just check the 'Hide Violations' option.

For what it’s worth, here are my 2¢ when I encountered the
[Violation] Forced reflow while executing JavaScript took <N>ms
warning. The page in question is generated from user content, so I don’t really have much influence over the size of the DOM. In my case, the problem is a table of two columns with potentially hundreds, even thousands of rows. (No on-demand row loading implemented yet, sorry!)
Using jQuery, on keydown the page selects a set of rows and toggles their visibility. I noticed that using toggle() on that set triggers the warning more readily than using hide() & show() explicitly.
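Roughly, the difference looked like this ($rows and the .match class are hypothetical stand-ins for the real selection logic):
// More readily triggered the forced-reflow warning:
$rows.toggle();
// Less readily: decide visibility up front, then show/hide explicitly.
$rows.filter('.match').show();
$rows.filter(':not(.match)').hide();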
For more details on this particular performance scenario, see also this article.

The answer is that it's a feature in newer Chrome browsers: it alerts you if the web page causes excessive browser reflows while executing JS.

Forced reflow often happens when a function is called multiple times before execution has finished. For example, the problem may show up on a smartphone but not in a desktop browser. I suggest using setTimeout to solve the problem.
It's a small point, but to repeat: in my experience the problem arises when you call a function several times, not when a single function takes more than 50 ms; I think the other answers are mistaken on this.
Turn off the one-by-one calls and reload the code to see if it still produces the error. If a second script causes the error, use a setTimeout based on the duration of the violation.
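If the warning comes from a handler that fires many times in quick succession (scroll, resize, or input, say), one way to apply this advice is a simple debounce. A sketch, where expensiveHandler is a placeholder for the repeatedly called function:
function debounce(fn, delay) {
    let timer = null;
    return function (...args) {
        clearTimeout(timer);
        // Only the last call in a burst actually runs.
        timer = setTimeout(() => fn.apply(this, args), delay);
    };
}
window.addEventListener('scroll', debounce(expensiveHandler, 100));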

This is not an error, just a message. To get rid of this message, change
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> (example)
to
<!DOCTYPE html> (the Firefox source expects this)
The message was shown in Google Chrome 74 and Opera 60. After changing the doctype, both were clear: 0 verbose messages.

Related

Monitoring and debugging performance of Angular components

One of my Angular (5.1.0) components slows down the whole app considerably: reacting to a click event takes 30 ms in every other view of the app and around 350 ms in the one problematic view. And although the desktop performance is almost indistinguishable between the "problematic" and "normal" views, the performance difference on a mobile device is obvious and the penalty is staggering (in the example above, the click event would take more like 1500 ms on a smartphone).
There are basically two new components which have been added recently. One of them holds the view; the other renders some data (and is used twice on the page). I would put my bet on the latter, but I don't know where to start. Chrome DevTools and Safari Developer Tools can, for now, give me the meaningful event times, but either I don't know how to dig deeper or I need different tools or a different methodology altogether to pinpoint what exactly causes the lag. Any ideas?
For the "monitoring" aspect of your question, you can try Bucky, an open-source tool to monitor webapp performance on the browser side.
There is also a post about how to monitor AngularJS with StatsD here.
If you really care about measuring user experience, you can take a look at using percentiles; some information can be found here and here.

How do I stop my webpage cleanly in Firefox?

I have created an IIS MVC webpage.
Users find that if they leave it open overnight, it is in some "frozen" state in the morning, and it has also frozen any other tabs that might be open in the browser.
Therefore, they have to kill the whole browser window and log into my webpage again.
How can I cleanly shut down (or put into a nice state) my webpage at 10 PM?
I have tried the following, which works on Chrome, but not Firefox:
setTimeout(function () { quitBox('quit') }, millisTill10);

function quitBox(cmd) {
    if (cmd == 'quit') {
        open(location, '_self').close();
        window.close();
    }
    return false;
}
I am happy to leave the tab there - but put it into some kind of clean, dead state, that would not interfere with the other tabs.
I have tried to catch the error in order to fix it, but I have no idea what is causing the freeze. The code below does NOT catch it:
window.onerror = function (error, url, line) {
    alert('Inform please ERR:' + error + ' URL:' + url + ' L:' + line);
};
Fuller version:
window.onerror = function (errorMsg, url, lineNumber, column, errorObj) {
    var stackTrace = "Not available";
    try {
        stackTrace = errorObj.prototype.stack;
    } catch (e) {
        try {
            stackTrace = errorObj.stack;
        } catch (e) {
            try {
                stackTrace = errorObj.error.stack;
            } catch (e) {
            }
        }
    }
    alert('Please inform of Error: ' + errorMsg + ' Script: ' + url + ' Line: ' + lineNumber
        + ' Column: ' + column + ' StackTrace: ' + errorObj + ' ST: ' + stackTrace);
}
Debugging a Browser Death
Rather than looking at how to "kill" the tab, it might be worth looking at why the application is dying in the first place. Firefox is a single-process browser (currently), but it has a lot of safety checks in place to keep the process running, which means that there's a pretty short list of things that can actually "kill" it.
First off, let's cross off some things that can't kill it: plugins like Java and Flash. These already run in a separate process (if they're running at all), so at most, if they're runaways, they'll kill themselves, but the rest of the browser will still be running.
Second, you're not seeing memory warnings. Firefox is pretty good about displaying an error dialog when JavaScript consumes too much memory, so if you're not seeing that, odds are really good it's not an out-of-memory issue.
The Most Likely Causes
What that leaves is a fairly short list of possibilities:
Browser bug (unlikely, but we'll list it anyway)
Browser add-on/extension bug
Infinite loop in JavaScript
Infinite recursion in JavaScript
Not-quite-infinite-but-close-enough loop/recursion in JavaScript
Now let's cross those off.
A browser bug is unlikely, but possible. But, as a general rule when debugging, assume it's your code that's broken, not the third-party framework/tool/library around you. There are a thousand really sharp Mozilla devs working daily to kill any bugs in it, which means that anything you see failing probably has your code as a root cause.
A browser extension/add-on bug is possible, but if you're seeing this on everybody's computers, odds are good that they all have different configurations and it's not an issue with an extension/add-on. Still, it's probably worth testing your site in a fresh Firefox install and letting it sit overnight; if it isn't broken in the morning, then you have a problem with an extension/add-on.
A true infinite loop or infinite recursion is also pretty unlikely, based on your description: That would likely be readily detectable at a much earlier stage after the page has been loaded; I'd expect that at some point, the page would just suddenly go from fully responsive to fully dead — and, more importantly, the browser has an "Unresponsive script" dialog box that it would show you if it was stuck in a tight infinite loop. The fact that you're not seeing an "Unresponsive script" dialog means that the browser is still processing events, but likely processing them so slowly or rarely that it might as well be completely frozen.
Not Quite Infinite Death
More often than not, this is the root cause of "dead page" problems: You have some JavaScript that works well for a little bit of data, and takes forever with a lot of data.
For example, you might have code on the page tracking the page's behavior and inserting messages about it into an array so that the newest messages are at the top: logs.unshift(message). That code works fine if there are relatively few messages, but grinds to a halt when you get a few hundred thousand, because each unshift() has to move every existing entry, so repeated calls add up to O(n^2) work.
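A minimal sketch of a fix for that particular pattern, assuming the display code can be adapted:
// O(n) per call: unshift() moves every existing entry.
logs.unshift(message);
// O(1) per call: append instead, and reverse only when displaying.
logs.push(message);
const newestFirst = logs.slice().reverse(); // once, at render time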
Given that your page is dying in the middle of the night, I'd wager dollars to donuts that you have something related to regular tracking or logging, or maybe a regular Ajax call, and when it kicks off, it performs some action that has overall O(n^2) or O(n^3) behavior — it only gets slow enough to be noticeable when there's a lot of data in it.
You can also get similar behavior by accidentally forcing reflows in the DOM. For example, we had a chunk of JavaScript some years ago that created a simple bulleted list in the UI. After it inserted each item, it would measure the height of the item to figure out where to put the next one. It worked fine for ten items — and died with a hundred, because "measure the height" really means to the browser, "Since there was a new item, we have to reflow the document to figure out all the element sizes before we can return the height." When inserting, say, the 3rd item, the browser only has to recompute the layout of the two before it. But when inserting the 1000th item, the browser has to recompute the layout of all 999 before it — not a cheap operation!
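A sketch of how that kind of code can avoid the repeated reflows: build everything off-DOM first, insert once, and measure afterwards (items and list are placeholder names):
const fragment = document.createDocumentFragment();
for (const text of items) {
    const li = document.createElement('li');
    li.textContent = text;
    fragment.appendChild(li); // off-DOM: no layout work happens here
}
list.appendChild(fragment);   // one insertion into the document
// One reflow in total, instead of one per inserted item.
const heights = Array.from(list.children, li => li.offsetHeight);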
I would recommend you search through your code for things like this, because something like it is probably your root cause.
Finding the Bug
So how do you find it, especially in a large JavaScript codebase? There are three basic techniques:
Try another browser.
Divide and conquer.
Historical comparison.
Try Another Browser
Sometimes, using another browser will produce different behavior. Chrome or IE/Edge might abort the JavaScript instead of dying outright, and you might get an error message or a JavaScript console message instead of a dead browser. If you haven't at least tried letting Chrome or IE/Edge sit on the page overnight, you're potentially ignoring valuable debugging messages. (Even if your production users will never use Chrome or IE/Edge, it's at least worth testing the page in them to see if you get different output that could help you find the bug.)
Divide-and-Conquer
Let's say you still don't know what the cause is, even bringing other browsers into the picture. If that's the case, then I'd tackle it with the approach of "divide and conquer":
Remove half of the JavaScript from the page. (Find a half you can remove, and then get rid of it.)
Load the page, and wait for it to die.
Analyze the result:
If the page dies, you know the problem is still in the remaining half of the code, and not in the half you removed.
If the page doesn't die, you know the problem is in the half you removed, so put that code back, and then remove the good half of the code so you're left with only the buggy code in the page.
Repeat steps 1-3, cutting the remaining JavaScript in half each time, until you've isolated the bug.
Since it takes a long time for your page to die, this may make for a long debugging exercise. But the divide-and-conquer technique will find the bug, and it will find it faster than you think: Even if you have a million lines of JavaScript (and I'll bet you have far less), and you have to wait overnight after cutting it in half each time, it will still take you only twenty days to find the exact line of code with the bug. (The base-2 logarithm of 1,000,000 is approximately 20.)
Historical Analysis
One more useful technique: If you have version control (CVS, SVN, Git, Mercurial, etc.) for your source code, you may want to consider testing an older, historical copy of the code to see if it has the same bug. (Did it fail a year ago? Six months ago? Two years ago?) If you can eventually rewind time to before the bug was added, you can see what the actual change was that caused it, and you may not have to hunt through the code arbitrarily in search of it.
Conclusion
In short, while you can possibly put a band-aid on the page to make it fail gracefully — and that might be a reasonable short-term fix while you're searching for the actual cause — there's still very likely a lot more you can do to find the bug and fix it for real.
I've never seen a bug I couldn't eventually find and fix, and you shouldn't give up either.
Addendum:
I suppose for the sake of completeness in my answer, the simple code below could be a suitable "kill the page" band-aid for the short term. This just blanks out the page if a user leaves it sitting for eight hours:
<script type='text/javascript'><!--
setTimeout(function() {
document.location = 'about:blank';
}, 1000 * 60 * 60 * 8); // Maximum of 8 hours, counted in milliseconds
--></script>
Compatibility: This works on pretty much every browser with JavaScript, and should work all the way back to early versions of IE and Netscape from the '90s.
But if you use this code at all, don't leave it in there very long. It's not a good idea. You should find — and fix! — the bug instead.
If the tab was opened by JavaScript, then you may close it with JavaScript.
If the tab was NOT opened by JavaScript, then you may NOT close it with JavaScript.
You CAN configure Firefox to allow tabs to be closed by JavaScript by navigating to about:config in the URL bar and setting dom.allow_scripts_to_close_windows to true. However, this will have to be configured on a machine-by-machine basis (and opens up the possibility of other websites closing the tab via JavaScript).
So:
Open the tab via JavaScript so that it can then be closed by JavaScript
or
Change a Firefox setting so that JavaScript can close a tab.
PS. I'd also recommend taking a look at https://developers.google.com/web/tools/chrome-devtools/memory-problems/ to try to help identify memory leaks (if the application has a memory leak) or having the web app ping the server every minute for logging purposes (if the application is breaking at an unknown time at the same time every night).
As was mentioned in other comments, you don't need to find out how to close the tab; you need to find out how to avoid the "freezing".
I suggest:
Collect statistics on which browser(s) fail.
Collect metrics from the failing browsers: you can use window.performance and send logs to the server on a timer (I haven't tried this; the timer may freeze before you get the logs you need).
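As a rough sketch of that second point (the /log endpoint and the payload fields are hypothetical, and I haven't verified that this survives the freeze either):
setInterval(function () {
    const payload = JSON.stringify({
        time: new Date().toISOString(),
        uptimeMs: performance.now(), // ms since the page was loaded
        resourceCount: performance.getEntriesByType('resource').length
    });
    // sendBeacon queues the request even when the page is busy or unloading.
    if (navigator.sendBeacon) {
        navigator.sendBeacon('/log', payload);
    }
}, 60 * 1000);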

Components with large datasets run slowly on IE11/Edge only

Consider the code below, <GridBody Rows={rows} />, and imagine that rows.length would amount to 2000 or more, with each row array having about 8 columns in this example. I use a more expanded version of this code to render a part of a table that has been bottlenecking my web application.
var GridBody = React.createClass({
    render: function () {
        return <tbody>
            {this.props.Rows.map((row, rowKey) => {
                return this.renderRow(row, rowKey);
            })}
        </tbody>;
    },
    renderRow: function (row, rowKey) {
        return <tr key={rowKey}>
            {row.map((col, colKey) => {
                return this.renderColumn(col, colKey);
            })}
        </tr>;
    },
    renderColumn: function (col, colKey) {
        return <td key={colKey} dangerouslySetInnerHTML={{ __html: col }}></td>;
    }
});
Now onto the actual problem. It would seem that the computation (even with my own code) is surprisingly fast, and even ReactJS's work with the virtual DOM has no issues.
But then there are these two lifecycle events in ReactJS: componentWillUpdate, up until which everything is still okay, and componentDidUpdate, which seems to be fine and all on Chrome.
The problem
But then there is IE11/Edge, which is about 4-6 seconds slower than any other browser, and with the F12 inspector this seems to be up to 8 seconds slower than Chrome.
The steps I have taken to try to fix this issue:
Strip unnecessary code.
Shave off a handful of milliseconds in computation time.
Split the grid into loose components so that the virtual DOM doesn't try to update the entire component at once.
Attempt to concatenate everything as a string and allow React to set innerHTML only once. This actually seems to hit a bug in IE, where a large string takes about 25-30 seconds on IE11, and only 30 ms on Chrome.
I have not found a proper solution yet. The actions above seemed to make things less bad in IE, but the problem persists: a "modern" or "recent" browser is still 3-4 seconds slower. Even worse, this seems to nearly freeze the entire browser and its rendering.
tl;dr: How do I improve overall performance in IE and, if possible, in other browsers?
I apologize if my question is unclear; I am burned out on this matter.
edit: Specifically, DOM access is slow on IE, as setting innerHTML gets called more than 10,000 times. Can this be prevented in ReactJS?
Things to try to improve IE performance:
Check you are running in production mode (which removes things like prop validation) and make Webpack / Babel optimisations where applicable.
Render the page server-side so IE has no issues (if you can support server-side rendering in your setup).
Make sure render isn't called a lot of times; tools like this are helpful: https://github.com/garbles/why-did-you-update
Any reason why you are using dangerouslySetInnerHTML? If you take out the dangerouslySetInnerHTML, does it speed things up dramatically?
Why not just automatically generate the rows and cols based on an array of objects (or a multidimensional array)? I'm pretty sure React will make less DOM interaction this way (it makes use of the VDOM); the <tr> and <td>s will be virtual DOM nodes.
Use something like https://github.com/bvaughn/react-virtualized to efficiently render large lists (see the sketch after this list).
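As a minimal sketch of that last suggestion, assuming rows is the same array of row arrays as in the question (check the react-virtualized docs for the exact props):
import React from 'react';
import { List } from 'react-virtualized';

function VirtualGrid({ rows }) {
    return (
        <List
            width={800}   // fixed dimensions assumed for the sketch
            height={600}
            rowCount={rows.length}
            rowHeight={30}
            rowRenderer={({ index, key, style }) => (
                // Only the rows currently in view are ever mounted.
                <div key={key} style={style}>
                    {rows[index].join(' | ')}
                </div>
            )}
        />
    );
}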
Shot in the dark: try not rendering, or not displaying, until everything is completely done. For example:
make the table element display:none until it's done;
render it offscreen, in a tiny DIV with hidden overflow;
or even output to a giant HTML string and then insert that into the DOM upon completion (sketched below).
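A sketch of that last idea, bypassing per-cell DOM work entirely; the grid-body id is hypothetical, and note that the question already treats col as trusted HTML (via dangerouslySetInnerHTML), otherwise it would need escaping:
// Build the whole table body as one string...
const html = rows
    .map(row => '<tr>' + row.map(col => '<td>' + col + '</td>').join('') + '</tr>')
    .join('');
// ...then touch the DOM exactly once.
document.getElementById('grid-body').innerHTML = html;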
In addition to Marty's excellent points, run a dynaTrace session to pinpoint the problematic code. It should give you a better insight into where the bottleneck is. Its results are often more useful than the IE developer tools.
Disclaimer - I am not linked with the dynaTrace team in any way.

Indicate that a processor-heavy JS function is running (GIF spinners don't animate)

Showing and then hiding an animated indicator/spinner GIF is a good way to show a user that their action has worked and that something is happening while they wait for it to complete, for example if the action requires loading some data from a server via AJAX.
My problem is that if the cause of the slowdown is a processor-intensive function, the GIF freezes.
In most browsers, the GIF stops animating while the processor-hungry function executes. To a user, this looks like something has crashed or malfunctioned, when actually it's working.
JSBIN example
Note: the "This is slow" button will tie up the processor for a while - around 10 seconds for me; this will vary depending on PC specs. You can change how much work it does with the "data-reps" attribute in the HTML.
Expectation: On click, the animation runs. When the process is finished, the text changes (we'd normally hide the indicator too but the example is clearer if we leave it spinning).
Actual result: The animation starts running, then freezes until the process finishes. This gives the impression that something is broken (until it suddenly unexpectedly completes).
Is there any way to indicate that a process is running that doesn't freeze if JS is keeping the processor busy? If there's no way to have something animated, I'll resort to displaying then hiding a static text message saying Loading... or something similar, but something animated looks much more active.
If anyone is wondering why I'm using code that is processor-intensive rather than just avoiding the problem by optimising: It's a lot of necessarily complex rendering. The code is pretty efficient, but what it does is complex, so it's always going to be demanding on the processor. It only takes a few seconds, but that's long enough to frustrate a user, and there's plenty of research going back a long time to show that indicators are good for UX.
A second related problem with gif spinners for processor-heavy functions is that the spinner doesn't actually show until all the code in one synchronous set has run - meaning that it normally won't show the spinner until it's time to hide the spinner.
JSBIN example.
One easy fix I've found here (used in the other example above) is to wrap everything after showing the indicator in setTimeout( function(){ ... },50); with a very short interval, to make it asynchronous. This works (see first example above), but it's not very clean - I'm sure there's a better approach.
I'm sure there must be some standard approach to indicators for processor-intensive loading that I'm unaware of - or maybe it's normal to just use Loading... text with setTimeout? My searches have turned up nothing. I've read 6 or 7 questions about similar-sounding problems but they all turn out to be unrelated.
Edit Some great suggestions in the comments, here are a few more specifics of my exact issue:
The complex process involves processing big JSON data files (as in, JS data manipulation operations in memory after loading the files), and rendering SVG (through Raphael.js) visualisations including a complex, detailed zoomable world map, based on the results of the data processing from the JSON. So, some of it requires DOM manipulation, some doesn't.
I unfortunately do need to support IE8 BUT if necessary I can give IE8 / IE9 users a minimal fallback like Loading... text and give everyone else something modern.
Modern browsers now run CSS animations independently of the UI thread if the animation is implemented using a transform, rather than by changing properties. An article on this can be found at http://www.phpied.com/css-animations-off-the-ui-thread/.
For example, some of the CSS spinners at http://projects.lukehaas.me/css-loaders/ are implemented with transforms and will not freeze when the UI thread is busy (e.g., the last spinner on that page).
I've had similar problems in the past. Ultimately they've been fixed by optimizing, or by doing the work in smaller chunks in response to user actions. In your case, different zoom levels would trigger different rendering algorithms, and you would only process what the user can see (plus maybe a buffer margin).
I believe the only simple cross-browser workaround for you is to use setTimeout to give the UI thread a chance to run. Batch up your work into sets of operations and chain them together using several setTimeout calls. This will slow down the total processing time, but the user will at least be given feedback. Obviously this suggestion requires that your processing can be easily sectioned off; if that is the case, you could also consider adding a progress bar for improved UX.
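A sketch of that batching idea, giving the UI thread a chance to repaint the spinner (or update a progress bar) between batches; tasks and updateProgress are placeholders for your own work and UI code:
function runBatched(tasks, batchSize, updateProgress, onDone) {
    let i = 0;
    (function nextBatch() {
        const end = Math.min(i + batchSize, tasks.length);
        for (; i < end; i++) {
            tasks[i](); // one unit of the heavy work
        }
        updateProgress(i / tasks.length); // the spinner can animate now
        if (i < tasks.length) {
            setTimeout(nextBatch, 0); // yield to the UI thread
        } else {
            onDone();
        }
    })();
}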

HTML validation and loading times

Does having (approximately 100) HTML validation errors affect my page loading speed? Currently the errors on my pages don't break the page in ANY browser, but I'd spend the time to clear them anyhow if it would improve my page loading speed.
If not on desktops, how about mobile devices like iPhone or Android? (For example, the N1 and Droid load pages much slower than the iPhone, although they both use the WebKit engine.)
Edit: My focus here is speed optimization, not cross-browser compatibility (which is already achieved). Google and other biggies seem to use invalid HTML; is that for speed, for compatibility, or for both?
Edit #2: I'm not in quirks mode, i.e. I use XHTML Strict Doctype and my source looks great and its mostly valid, but 100% valid HTML usually requires design (or some other kind of) sacrifice.
Thanks
It doesn't affect -loading- speed. Bad data is transferred over the wires just as fast as good data.
It does affect rendering speed, though (...in some cases... positively! Yeah, MSIE tends to be abysmally slow in standards mode). In most cases, though, render speed will be somewhat slower due to quirks mode, which is less efficient and more paranoid: instead of just executing your data like a well-written program, the browser tries its best to fish some meaningful content out of what is essentially tag soup.
Some validation errors, like a missing ALT or no / at the end of single-element tags, won't affect rendering at all, but some, like a missing closing tag or antiquated, obsolete attributes, may impact performance seriously.
It might affect loading speed, or it might not. It depends on the kind of errors you're getting.
I'd say that in most cases it's likely to be slower, because the browser has to handle these errors. For instance, if you forget to close a div tag, some browsers will close it for you; this takes processing time and increases the loading time.
That said, I think the time delta between no errors and 100 errors would be minimal. And if you have that many errors, you should consider fixing your code :)
Probably yes, and here's why.
If your code is valid for the W3C doctype you are using, then the browser doesn't have to put in extra effort to try to fix your code with error recovery (or, in the worst case, quirks mode); it is logical that if your code validates, the browser won't have to piece the website back together.
Remember that it's always beneficial to make your code validate, if only to ensure a consistent design across the popular browsers. Finally, you'll probably find that fixing the first few errors makes your list of 100 errors decrease drastically.
In theory, yes: it will decrease page load times, because the browser has less work to do handling errors and so on.
However, it does depend on the nature of the validation errors. If you're improperly nesting tags (which may actually be valid in HTML4), the browser has to do a little more work to figure out where elements start and end, and this is the kind of thing that can cause cross-browser problems.
If you're simply using unofficial attributes (say, the target attribute on links) then support for that is either built into the browser or not. If the browser understands it, it will do something with it, otherwise it will ignore the attribute.
One thing that will ramp up your validation errors is using <br> under XHTML or <br /> under HTML. Neither should increase loading times (although <br /> takes a fraction longer to download).
