What are some techniques or methods that could potentially reduce jank and/or improve paint times on browsers?
There isn't an exact answer to this, but here are some high-level techniques that may work for most situations:
Reduce paint layers - Use the browser's dev tools to see how many layers your CSS or markup produces. Changing or simplifying your CSS can reduce the number of layers and lead to better outcomes.
Frame Budget - If the JavaScript inside your requestAnimationFrame callback takes longer than 16ms to run, you don't have any hope of producing a frame in time for v-sync
Utilize a virtual DOM or the Web Workers API - If possible, offload processing away from the main thread by using the Web Workers API.
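As a rough illustration of the frame budget point above, here is a minimal sketch, assuming a 16ms threshold and a placeholder doWork() function, that times the work done inside each requestAnimationFrame callback:

const FRAME_BUDGET_MS = 16; // assumed budget for one 60fps frame

function doWork() {
  // placeholder for your per-frame updates and DOM writes
}

function tick() {
  const start = performance.now();
  doWork();
  const elapsed = performance.now() - start;

  // Warn when the callback alone has used up the whole frame budget.
  if (elapsed > FRAME_BUDGET_MS) {
    console.warn('Frame work took ' + elapsed.toFixed(1) + 'ms - over budget');
  }
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);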
Sources:
Jank Busting
Gone in 60 Frames Per Second
I am trying to understand why it's such a hard task for browsers to fully render the DOM many times per second, like game engines do for their canvas. Game engines can perform many, many calculations each frame, computing light, shadows, physics, etc., and still keep a seamless frame rate.
Why can't browsers do the same, allowing full re-rendering of the DOM many times per second seamlessly?
I understand that rendering a DOM and rendering a game scene are two completely different tasks, but I don't understand why rendering the DOM is so much harder in terms of performance.
Please try to focus on specific aspects of rendering a DOM, and explain why game engines don't face the same problems. For example: "browsers need to parse the HTML, while all the code of the game is pre-compiled and ready to run".
EDIT: I edited my question because it was marked as opinionated. I am not asking for opinions here, only facts. I am asking why browsers can't fully re-render the DOM 60 frames per second like game engines render their canvas. I understand that browsers face a more difficult task, but I don't understand why exactly. Please stick with informative answers only, and avoid opinions.
Games are programs written to do operations specific to themselves - they are written in low-level languages (asm/C/C++), or at least languages that have access to machine-level operations. When it comes to graphics, games are able to push programs onto the graphics card for rendering: drawing vectors and colouring/rasterization.
https://en.wikipedia.org/wiki/OpenGL
https://en.wikipedia.org/wiki/Rasterisation
They also have optimised memory, CPU usage, and IO.
Browsers, on the other hand, are applications that have many requirements.
They are primarily designed to render HTML documents, via the creation of objects which represent the HTML elements. Browsers have a more complex job, as they support multiple versions of the DOM and document types (DTDs), along with the associated security required by each DTD.
https://en.wikipedia.org/wiki/Document_type_declaration
They also have to support rendering a very generic set of documents - one page is not the same as another. They have to have libraries for IO, CSS parsing, and image parsing (JPEG, PNG, BMP, etc.), plus movie players and associated codecs, audio players and their codecs, and webcam support. Additionally, they support the JavaScript code environment (not just the language, but IO and event handling), and they also have historic support for COM and Java applets.
This makes them very versatile tools, but heavyweight - they carry a lot of baggage.
The graphics side can never be quite as performant as a dedicated program, as the API browsers provide for such operations always runs at a higher level.
Even the Canvas API (as the name suggests) is a layer of abstraction above the lower-level rendering libraries, and each layer of abstraction adds a performance hit.
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
For better graphics performance there is now a newer standard available in browsers called WebGL - though this is still an API and runs in a sandbox, so it will still not be as performant as dedicated code.
https://en.wikipedia.org/wiki/WebGL
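To make the distinction concrete, here is a minimal sketch (the canvas id "scene" is an assumption for the example) of asking the browser for a WebGL context, which hands rendering commands to the GPU, versus falling back to the higher-level 2D canvas:

const canvas = document.getElementById('scene'); // assumed <canvas> element

// getContext returns null if the context type is unsupported.
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

if (gl) {
  // WebGL path: these commands are handed (via the browser's sandbox)
  // to the GPU driver, which is why it outperforms the 2D canvas.
  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);
} else {
  // Fallback: the higher-level 2D canvas API.
  const ctx = canvas.getContext('2d');
  ctx.fillRect(0, 0, canvas.width, canvas.height);
}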
Even games built on game engines (Unity, Unreal) will be accessing graphical features, CPU, memory, and IO in a much more dedicated fashion than browsers do, as the game engines themselves provide dedicated rendering and rasterization functions that developers can use in their games for optimised graphics. Browsers can't do this, as they have to cover many generic cases rather than specific requirements.
https://docs.unrealengine.com/en-US/Engine/index.html
https://learn.unity.com/tutorial/procedural-sky-19-1
First of all, games on the Web don't use the DOM much. They use the faster Canvas API. The DOM is made for changing content on a document (that's what the D in DOM stands for), so it is a really bad fit for games.
How is it possible that my crappy phone can run Call Of Duty seamlessly, but it's so hard to write a big webpage that will run smoothly on it?
I never had performance problems with the DOM. Of course, if you update the whole <body> with a single .innerHTML assignment 60 times a second, I wouldn't be surprised if the performance is bad, because the browser needs to:
Parse the HTML and construct the DOM tree;
Apply styles and calculate the position of each element;
Render the elements.
Each of those steps is a lot of work for the CPU, and the process is mostly single-threaded in most browsers.
You can improve the performance by:
Never using .innerHTML. Assigning to .innerHTML makes the browser parse the HTML string and rebuild that part of the DOM tree (and reading it serializes the tree back to HTML). Use document.createElement() and .appendChild() instead; a short sketch follows this list.
Avoid changing the DOM. Change only the CSS styles, if possible.
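A minimal sketch of those two points, assuming a hypothetical #list container and some string data - the commented-out line shows the .innerHTML pattern to avoid, while the rest builds the nodes directly and inserts them in one batch:

const items = ['one', 'two', 'three'];
const list = document.getElementById('list'); // assumed container element

// Slow pattern: every assignment re-parses the HTML string and
// rebuilds that part of the DOM tree.
// items.forEach(item => { list.innerHTML += '<li>' + item + '</li>'; });

// Faster pattern: build nodes directly and insert them in one batch
// via a DocumentFragment, so the document is only touched once.
const fragment = document.createDocumentFragment();
for (const item of items) {
  const li = document.createElement('li');
  li.textContent = item;
  fragment.appendChild(li);
}
list.appendChild(fragment);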
Generally, it depends on the game. The most powerful games are developed in C++ or C engines, so they work directly with memory and use the full power of the processor.
Web pages based on the DOM, by contrast, are written in an interpreted language like JavaScript. The problem can also come from the server side, if the webpage is deployed incorrectly or on a slow server.
So I understand that in order to maintain the standard 60 frames per second when running animations in a web browser, we only get around 16ms per frame to perform any task we want to. The browser typically has to go through all the steps of the rendering pipeline (JavaScript, style, layout, paint, composite) to render each frame.
However, experts like Paul Lewis say that we realistically have only 10ms every frame to complete our tasks, as the browser has some 'overheads' and 'housekeeping' to do for every frame. I would like to know what these 'overhead' and 'housekeeping' tasks actually are.
"Overheads" vary per browser, and most don't occur on "every frame," but it all adds up, and overhead tasks performed either by your browser or by common client-side third-party code like Google Analytics also takes up valuable milliseconds. Common overhead tasks include:
Garbage collection
Listening for and handling often-repeated events such as scroll, mousemove, and some touch events (e.g. if you have analytics libs that generate heatmaps, that software may be tracking every mouse operation and touch operation)
Animations on your page (CSS ones or JavaScript-managed ones), which are "overhead" as far as the rest of your page's operation is concerned
Third-party (or your own) code which does its thing only after certain conditions are met (e.g. lazy loading of images, where images are loaded, painted, and composited only when onscreen or close to being onscreen)
Ads served by ad networks
Your own asynchronous code (triggered by setTimeout(), setInterval(), event handlers, etc.) and that of any third-party libs, which executes at some point and, when it does, eats into your 16ms (obviously there's a lot of overlap between this and the previous point)
Ad blockers and similar plugins (these run on a different thread, but interact with your thread, e.g., whenever DOM manipulation is necessary or any other cross-thread communication)
Loading of streaming media (which often consists of many network requests behind the scenes), which can include even relatively short, static videos
The overhead of running GIF animations or video (yours or UGC), which is separate from the previous item, which concerns just the network interaction
The repainting that needs to occur whenever the user scrolls or jumps to another part of your page (independent of any listeners for scroll, resize, etc.)
The potential complete redrawing of the DOM if some types of elements are added, removed, or resized, by you or by the user
Handling XHR or Iframe server responses or other data incoming from the network (like websockets traffic)
Tracking pixels (loading them, handling their demands on valuable JavaScript engine time); note that large sites often have a dozen or two tracking pixels of different types on them, and all it takes is one poorly written one to make huge demands on your browser's limited resources
Logic which attempts to anticipate what will happen next, and the work of performing the optimizations involved
Heavy CPU usage by other applications running in your OS (or other pages running in other tabs in your browser) which take resources away from the JavaScript engine, rendering engine, etc.
Event-loop overhead - the JavaScript event loop is an elegant way of handling single-threaded code, but there's overhead involved in operating it
All of the above (a not-even-close-to-comprehensive list) would be considered "overhead" to whatever specific stuff you're trying to accomplish within 10ms or 16 ms or whatever.
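If you want to see which of these overheads is actually eating into your 10-16ms, one hedged sketch (assuming a browser that supports the Long Tasks API, e.g. Chromium-based ones) is to watch for tasks longer than 50ms:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Any task over 50ms blocks at least three 16ms frames.
    console.warn(
      'Long task: ' + entry.duration.toFixed(1) + 'ms',
      entry.attribution // hints at where the time went
    );
  }
});

observer.observe({ type: 'longtask', buffered: true });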
Also note that some devices are just not capable of maintaining 60fps in-browser or anywhere; a slow CPU, lack of sufficient memory or persistent storage, etc., can slow all applications down, browsers included.
Maybe you're looking for something more specific, I'm not sure - but I think I know the Paul Lewis piece you mention (where he talks about 10ms vs. 16.66ms), and I'm not sure exactly which overhead he's referring to. If, for example, you're trying to make one animation on a webpage run at 60fps, then all of the above would be "overhead" compared to your specific task of optimizing your animation.
Hope this helps!
After a decade or so, I started to get into web development again. I set up a page with about 50 different input fields in all variations, which manipulate the contents of a div. Favoring performance over convenience, I wrote all the event handlers in native JS. Out of interest, I replicated the page using jQuery.
Now, in native JS I cannot really group any event handlers for those input fields, even if they do similar things. Creating them with a loop doesn't save much code either, since it's never more than 1-3 related input fields. In the end, I have a whole bunch of functions that look like this:
var input = document.getElementById('input');
input.addEventListener('input', function() {
// do stuff
});
input.addEventListener('change', function() {
// do stuff
});
The JS for that test page is about 20 kb (unminified). The replicated JS using jQuery instead of native JS is about 9 kb (unminified).
While researching jQuery, all the articles that advise against it would feature some benchmark that shows that after x million iterations of some method, native JS was x seconds faster than jQuery.
The question I ask myself is: how relevant is this in real-world web applications? I mean, aside from the fact that it took me about four times longer to write the native JS, wouldn't downloading more than twice as much JS slow down the viewer much more than a theoretical x-millionths-slower execution time per method?
Is the load time significant?
Downloading jQuery is often negligible, as it will be cached if you use a CDN (https://trends.builtwith.com/cdn/jQuery-CDN - 13% of the top 10k sites). I find that with jQuery you'll often end up with smaller scripts (a ~50% reduction in your example), which can offset it a bit. However, at 80KB+, jQuery can be a significant addition to a page's download if it's not cached (memory and CPU use is pretty negligible on modern devices). Usually, as a script grows, the more likely it is to make use of jQuery (or another library, for that matter), and the larger the size reductions from using a library become; however, it's rare that the size of jQuery is completely offset by these savings.
What about optimization?
Saving a few cycles is often pretty negligible too, for a lot of methods - it's usually best to take the faster/easier development route, then work on optimization as it's required (which may mean removing jQuery as a dependency, though for many browser-related cases, DOM operations are the most expensive tasks). Aiming straight for perfect optimization usually results in more bugs/problems that are more difficult to resolve.
Is saving those few cycles worth it?
If it's only a few ms (even up to 20-50ms or so) in a method called once every 3-4s, the user often won't even notice, but if you've got to make thousands or millions of calls to a method that takes a few millionths or thousandths of a second, then it might be a good idea to look at optimizing that particular method. One thing to be mindful of is that the setup you're using to test performance may be significantly higher-spec'd than your users' machines. The profiling tools built into many browsers' dev tools can assist in identifying optimization targets.
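For context, a rough micro-benchmark sketch (assuming jQuery is loaded and a hypothetical .item class exists on the page); numbers from a loop like this only matter when your real code makes calls at this volume:

function bench(label, fn, iterations = 100000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  console.log(label + ': ' + (performance.now() - start).toFixed(1) + 'ms');
}

// Compare a jQuery call against its native equivalent.
bench('jQuery selector', () => $('.item'));
bench('native selector', () => document.querySelectorAll('.item'));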
So, to answer your question: how relevant is this in real-world applications?
For most methods and many use cases not at all - the download is often insignificant as it's likely to be cached and the majority of the code for many applications will not see a noticeable negative performance impact by using jQuery.
With that being said, jQuery should not be used as a one-size-fits-all solution - if you're only using a few features, consider using native alternatives. SO has plenty of questions from people looking for native alternatives, and there are sites such as http://youmightnotneedjquery.com/ specifically for this purpose.
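As one example of a native alternative, here is a hedged sketch of a tiny helper covering the "group several event types on one element" case from the question (the #input element and the helper name are assumptions), roughly like jQuery's $(el).on('input change', fn):

function on(element, eventNames, handler) {
  // Attach one handler to several space-separated event types.
  for (const name of eventNames.split(' ')) {
    element.addEventListener(name, handler);
  }
}

const input = document.getElementById('input'); // assumed element
on(input, 'input change', (event) => {
  // do stuff
  console.log(event.type, event.target.value);
});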
From the application user's point of view, everything that happens in less than 0.1 seconds happens immediately.
For the user it is completely the same whether a page element changed in 0.0999999 or 0.0000001 seconds. Even though the latter is 999,999 times faster, that speed matters only to the creators of new "superfast" frameworks and to those naive enough to trade simplicity and speed of development for numbers that mean nothing.
I am testing performance in my JavaScript application, a game using canvas. One problem I have is major fluctuations in FPS: dropping from 60 down to 2 within milliseconds.
The profiler timeline shows major spikes. They are not due to painting, scripting, rendering, or loading. I think it is because requestAnimationFrame doesn't enforce a set FPS rate and might be too flexible. Should I use setTimeout? Is it usually more reliable in these cases, since it forces the application to run at only one set FPS rate?
Performance is always about specifics, and without more information on your app (e.g. the specific code that renders it), it is hard to say how you should structure your code.
Generally, you should always use requestAnimationFrame. Especially for rendering.
Optionally store the delta time and multiply your animation attributes by that delta. This will create a smooth animation when the frame rate is not consistent.
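A minimal sketch of that delta-time approach (the canvas lookup and the speed value are assumptions for the example):

const canvas = document.querySelector('canvas'); // assumed canvas element
const ctx = canvas.getContext('2d');

const speedPerSecond = 120; // pixels the square should move per second
let x = 0;
let lastTime = 0;

function frame(timestamp) {
  // timestamp is supplied by requestAnimationFrame, in milliseconds.
  const delta = lastTime ? (timestamp - lastTime) / 1000 : 0;
  lastTime = timestamp;

  // Scale movement by elapsed time, so a slow (or dropped) frame
  // still advances the animation by the right amount.
  x += speedPerSecond * delta;

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillRect(x % canvas.width, 20, 10, 10);

  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);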
I've also found random frame rate changes are usually related to garbage collection. Perhaps do some memory profiling to find if there are places you can avoid recreating objects each frame.
requestAnimationFrame is superior to setTimeout in nearly every way. It won't run in a background tab. It saves battery. It gives the browser more information about the type of app you are developing, which lets the browser make many safe, performance-increasing assumptions.
I highly recommend watching this talk by Nat Duca on browser performance.
I make heavy use of the excellent jTemplates plugin for a small web app.
Currently I load all of the templates into the DOM on the initial page load.
Over time, as the app has grown, I have accumulated more and more templates - currently about 100kb worth.
Because my app is all ajax-based, there is never a need to refresh the page after the initial page load. There is a couple second delay at the beginning while all of the templates load into the DOM, but after that the app behaves very responsively.
I am wondering: In this situation, is there any significant advantage to using jTemplates processTemplateURL method to lazy load templates as needed, as opposed to just bulk loading all of the templates on the initial page load?
(I don't mind the extra 2 or 3 seconds the initial page load takes - so I guess I am wondering -- besides the initial page load delay, is there any reason not to load a large amount of html template data into the DOM? Does having a larger amount of data in the DOM affect performance in any way?)
Thanks (in advance) for your help.
According to Yahoo's Best Practices for Speeding Up Your Web Site article, they recommend not having more than 500-700 elements in the DOM.
The number of DOM elements is easy to test, just type in Firebug's console:
document.getElementsByTagName('*').length
Read more http://developer.yahoo.com/performance/rules.html
It's like a jar that contains 100 marbles, 10 of which are red. It is easy to spot and pick out the 10 red marbles from a jar of 100, but if that jar contained 1000 marbles, it would take more time to find them. Comparing this to DOM elements: the more you have, the slower your selections will be, and that will affect performance.
You really should optimize your DOM in order to save memory and enhance speed. However, the key is to avoid premature optimizations.
What is your target platform? What browser(s) are your users most likely to be using?
For example, if you are targeting primarily desktop PCs and your users are running modern browsers, then you should probably prefer clarity and simplicity in your code.
If you are targeting desktop PCs but must support IE6, say, then having too many DOM elements will impact your performance and you should start thinking about optimizations.
However, if you are targeting modern browsers but in areas with poor bandwidth (e.g. on a cruise ship, etc.), then your bandwidth considerations may outweigh your DOM considerations.
If you are targeting iPhones, iPads, etc., then memory is a scarce resource (as is CPU), so you definitely should optimize the DOM. In addition, on mobile devices you're going to give more weight to optimizing the AJAX payload, due to bandwidth issues, than to anything else. You'll give yet more weight to reducing the number of AJAX calls vs. saving on DOM elements. For example, you may want to just load more DOM elements in order to reduce the number of AJAX calls due to bandwidth considerations - again, only you can decide on the right balance.
So the answer really is: it depends. In a fast-connection, modern-browser environment, there is no real need to prematurely optimize unless your DOM gets really huge. In a slow-connection or mobile environment, weight bandwidth optimizations more heavily than DOM optimizations, but do optimize DOM node count as well.
Having a whole bunch of extra DOM elements will not only affect your initial load time, it will also affect more or less everything on the page. JavaScript DOM queries will run slower, inserts will run slower, CSS will apply slower, and so on; since the entire tag soup is parsed into a DOM tree by the browser, any traversal of that tree is going to be affected. To measure how much it's going to slow your page down, you can use a tool like DynaTrace AJAX and run it once with your current code and once with no templates. If you notice a large difference between those two runs in terms of JavaScript execution or rendering, you should probably lazy load (though 100kb is not that much, so you might not see a significant difference).
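If you do decide to lazy load, here is a hedged sketch of the general pattern, independent of jTemplates' own processTemplateURL (the /templates/ URL scheme, template name, and #view container are assumptions for the example):

var templateCache = {};

function getTemplate(name, callback) {
  if (templateCache[name]) {
    callback(templateCache[name]);
    return;
  }
  // Fetch the template only the first time it is needed, instead of
  // bulk-loading everything on the initial page load.
  $.get('/templates/' + name + '.html', function (html) {
    templateCache[name] = html;
    callback(html);
  });
}

// Usage: load and insert a template only when its view is shown.
getTemplate('user-profile', function (html) {
  $('#view').html(html);
});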