Tasks for Animation Frames in a Browser - javascript

So I understand that in order to maintain the standard 60 frames per second when running animations in a web browser, we only get around 16ms per frame to perform any task we want to. The browser typically has to go through all the steps of the rendering pipeline to render each frame: JavaScript, style calculations, layout, paint, and compositing.
However, experts like Paul Lewis say that we realistically have only 10ms every frame to complete our tasks, as the browser has some 'overheads' and 'housekeeping' to do for every frame. I would like to know what these 'overhead' and 'housekeeping' tasks actually are.

"Overheads" vary per browser, and most don't occur on "every frame," but it all adds up, and overhead tasks performed either by your browser or by common client-side third-party code like Google Analytics also takes up valuable milliseconds. Common overhead tasks include:
Garbage collection
Listening for and handling often-repeated events such as scroll, mousemove, and some touch events (e.g. if you have analytics libs that generate heatmaps, that software may be tracking every mouse operation and touch operation)
Animations on your page (CSS ones or JavaScript-managed ones), which are "overhead" as far as the operation of your page is concerned
Third-party (or your own) code which does its thing only after certain conditions are met (e.g. lazy-loading of images, where images are loaded (and painted and composited) only when onscreen or close to being onscreen)
Ads served by ad networks
Your own asynchronous code (triggered by setTimeout(), setInterval(), event handlers, etc.) and that of any third-party libs, which executes at some point and, when it does, eats into your 16ms (obviously there's a lot of overlap between this and the previous point)
Ad blockers and similar plugins (these run on a different thread, but interact with your thread, e.g., whenever DOM manipulation is necessary or any other cross-thread communication)
Loading of streaming media (which often consists of many network requests behind the scenes), which can include even relatively short, static videos
The overhead of running GIF animations or video (yours or UGC), which is separate from the previous item (that one concerns just the network interaction)
The repainting that needs to occur whenever the user scrolls or jumps to another part of your page (independent of any listeners for scroll, resize, etc.)
The potential complete redrawing of the DOM if some types of elements are added, removed, or resized, by you or by the user
Handling XHR or Iframe server responses or other data incoming from the network (like websockets traffic)
Tracking pixels (loading them, handling their demands on valuable JavaScript engine time); note that large sites often have a dozen or two tracking pixels of different types on them, and all it takes is one poorly written one to make huge demands on your browser's limited resources
Speculative logic which attempts to anticipate what will happen next and performs the optimizations involved
Heavy CPU usage by other applications running in your OS (or other pages running in other tabs in your browser) which take resources away from the JavaScript engine, rendering engine, etc.
Event-loop overhead - the JavaScript event loop is an elegant way of handling single-threaded code, but there's overhead involved in operating it
All of the above (a not-even-close-to-comprehensive list) would be considered "overhead" to whatever specific stuff you're trying to accomplish within 10ms or 16 ms or whatever.
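As a rough illustration of how quickly that budget disappears, you can time your own per-frame work inside requestAnimationFrame (doAnimationWork below is just a placeholder for whatever your page does each frame, and the 10ms threshold is simply the figure quoted above):

// Rough sketch: measure how much of the ~16.7ms frame budget your own code uses.
// doAnimationWork is a placeholder for your per-frame DOM/canvas updates.
function doAnimationWork(timestamp) {
  // ... your animation logic ...
}

function frame(timestamp) {
  var start = performance.now();
  doAnimationWork(timestamp);
  var spent = performance.now() - start;
  // Anything much above ~10ms leaves little room for the browser's own
  // style/layout/paint/composite work plus the overhead listed above.
  if (spent > 10) {
    console.warn('Frame work took ' + spent.toFixed(1) + 'ms of the ~16.7ms budget');
  }
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);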
Also note that some devices are just not capable of maintaining 60fps in-browser or anywhere; a slow CPU, lack of sufficient memory or persistent storage, etc., can slow all applications down, browsers included.
Maybe you're looking for something more specific, not sure - but I think I know the Paul Lewis thing you mention (where he talks about 10ms vs 16.66ms, etc.) and I'm not sure exactly what overhead he's talking about - but if, for example, you're trying to make one animation on a webpage run at 60fps, then all of the above would be "overhead" compared to your specific task of optimizing your animation.
Hope this helps!

Why doesn't JavaScript get its own thread in common browsers?

As if it weren't enough that JavaScript isn't multithreaded, apparently JavaScript doesn't even get its own thread but shares one with a load of other stuff. Even in most modern browsers JavaScript is typically in the same queue as painting, updating styles, and handling user actions.
Why is that?
From my experience, an immensely improved user experience could be gained if JavaScript ran on its own thread, if only because JS would no longer block UI rendering, and because developers would be freed from the intricate message-queue boilerplate (yes, you too, web workers!) that they currently have to write themselves all over the place to keep the UI responsive.
I'm interested in understanding the motivation which governs such a seemingly unfortunate design decision, is there a convincing reason from a software architecture perspective?
User Actions Require Participation from JS Event Handlers
User actions can trigger Javascript events (clicks, focus events, key events, etc.) that participate in and potentially influence the user action. Clearly the single JS thread can't still be executing something else while a user action is being processed, because then it couldn't participate in that action. So the browser doesn't process the default user actions until the JS thread is available to participate in that process.
Rendering
Rendering is more complicated. A typical DOM modification sequence goes like this: 1) DOM modified by JS, layout marked dirty, 2) JS thread finishes executing so the browser now knows that JS is done modifying the DOM, 3) Browser performs layout on the changed DOM, 4) Browser paints the screen as needed.
Step 2) is important here. If the browser did a new layout and screen painting after every single JS DOM modification, the whole process could be incredibly inefficient if the JS was actually going to make a bunch of DOM modifications. Plus, there would be thread synchronization issues because if you had JS modifying the DOM at the same time as the browser was trying to do a relayout and repaint, you'd have to synchronize that activity (e.g. block somebody so an operation could complete without the underlying data being changed by another thread).
FYI, there are some work-arounds that can be used to force a relayout or to force a repaint from within your JS code (not exactly what you were asking, but useful in some circumstances).
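For illustration, one common such work-around is to read a layout-dependent property (the element id and styles below are just an example); reading something like offsetHeight forces the browser to flush any pending layout before your script continues:

// Hypothetical example of forcing a synchronous relayout from JS.
var box = document.getElementById('box');  // placeholder element
box.style.height = '200px';                // DOM change marks layout dirty
var h = box.offsetHeight;                  // reading a layout property forces relayout now
box.className = 'expanded';                // subsequent changes start from a fresh layout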
Multiple Threads Accessing DOM Really Complex
The DOM is essentially a big shared data structure. The browser constructs it when the page is parsed. Then loading scripts and various JS events have a chance to modify it.
If you suddenly had multiple JS threads with access to the DOM running concurrently, you'd have a really complicated problem. How would you synchronize access? You couldn't even write the most basic DOM operation that would involve finding a DOM object in the page and then modifying it because that wouldn't be an atomic operation. The DOM could get changed between the time you found the DOM object and when you made your modification. Instead, you'd probably have to acquire a lock on at least a sub-tree in the DOM preventing it from being changed by some other thread while you were manipulating or searching it. Then, after making the modifications, you'd have to release the lock and release any knowledge of the state of the DOM from your code (because as soon as you release the lock, some other thread could be changing it). And, if you didn't do things correctly, you could end up with deadlocks or all sorts of nasty bugs. In reality, you'd have to treat the DOM like a concurrent, multi-user datastore. This would be a significantly more complex programming model.
Avoid Complexity
There is one unifying theme behind the "single threaded JS" design decision: keep things simple. Don't require an understanding of a multi-threaded environment, thread synchronization tools, and the debugging of multiple threads in order to write solid, reliable browser Javascript.
One reason browser Javascript is a successful platform is because it is very accessible to all levels of developers and it is relatively easy to learn and to write solid code. While browser JS may get more advanced features over time (like we got with WebWorkers), you can be absolutely sure that these will be done in a way that simple things stay simple while more advanced things can be done by more advanced developers, but without breaking any of the things that keep things simple now.
FYI, I've written a multi-user web server application in node.js and I am constantly amazed at how much less complicated much of the server design is because of the single-threaded nature of nodejs Javascript. Yes, there are a few things that are more of a pain to write (learn promises for writing lots of async code), but wow, the simplifying assumption that your JS code is never interrupted by another request drastically simplifies the design and testing and reduces the hard-to-find-and-fix bugs that concurrent design and coding is always fraught with.
Discussion
Certainly the first issue could be solved by allowing user action event handlers to run in their own thread so they could occur any time. But, then you immediately have multi-threaded Javascript and now require a whole new JS infrastructure for thread synchronization and whole new classes of bugs. The designers of browser Javascript have consistently decided not to open that box.
The Rendering issue could be improved if desired, but at a significant complication to the browser code. You'd have to invent some way to guess when the running JS code seems like it is no longer changing the DOM (perhaps some number of ms go by with no more changes) because you have to avoid doing a relayout and screen paint immediately on every DOM change. If the browser did that, some JS operations would become 100x slower than they are today (the 100x is a wild guess, but the point is they'd be a lot slower). And, you'd have to implement thread synchronization between layout, painting and JS DOM modifications which is doable, but complicated, a lot of work and a fertile ground for browser implementation bugs. And, you have to decide what to do when you're part-way through a relayout or repaint and the JS thread makes a DOM modification (none of the answers are great).

Why has the event loop existed since the beginning of JavaScript when there were almost no blocking operations

I am trying to understand how the JavaScript runtime works with its single-threaded model. There is an event loop which moves the blocking operations (most of them I/O) to a different part of the runtime in order to keep the main thread clean. I find this model very innovative, by the way.
I assume this model has been part of JavaScript since its creation, and that most of the blocking I/O operations, like AJAX calls, were "discovered" some 5 years later. So, in the beginning, what was the motivation for the single-threaded non-blocking model if there were almost no blocking operations and the language was only intended to validate forms and animate the screen? Was it long-term vision or only luck?
As you already stated, event loops are for coping with slow I/O - or, more generally, with operations that don't involve the CPU and happen somewhere other than where the code that needs their results is running.
But I/O is not just network and disk! There is I/O that is far slower than any device: Computers communicating with humans!
GUI input - clicking buttons, entering text - is all SLOOOOOWW because the computer waits for user input. Your code requires data from an external source (external to the CPU the code runs on).
GUI events are the primary reason for event based programming. Think about it: How would you do GUI programming synchronously? (you could use preemption by the OS - described below) You don't know when a user is going to click a button. Event based programming is the best option (we know of) for this particular task.
In addition, a requirement was to have only one thread because parallel programming is HARD and Javascript was meant to be for "normal users".
Here is a nice blog post I just found:
http://www.lanedo.com/the-main-loop-the-engine-of-a-gui-library/
Modern GUI libraries have in common that they all embody the Event-based Programming paradigm. These libraries implement GUI elements that draw output to a computer screen and change state in response to incoming events. Events are generated from different sources. The majority of events are typically generated directly from user input, such as mouse movements and keyboard input. Other events are generated by the windowing system, for instance requests to redraw a certain area of a GUI, indications that a window has changed size or notifications of changes to the session’s clipboard. Note that some of these events are generated indirectly by user input.
I would like to add this:
We have two major options for dealing with the problem of your code having to wait for an external event (i.e. data that cannot be computed in the CPU your code is running on or retrieved from the directly attached RAM - anything that would leave the CPU unable to continue processing your code):
Events
Preemption by a "higher power" like the operating system.
In the latter case you can write sequential code and the OS will detect when your code requires data that is not there yet. It will stop the execution of your code and give the CPU to other code.
In a sense the ubiquitous event based paradigm in Javascript is a step backwards: Writing lots of event handlers for everything is a lot of work compared to just writing down what you want in sequence and letting the OS take care of managing the resource "CPU".
I noticed that I never felt like complaining when my event based programming was for the GUI - but when I had to do it for disk and network I/O it jumped out to me how much effort it was with all the event handling compared to letting the OS handle this in the background.
My theory: Coping with humans (their actions) in event handlers felt natural, it was the entire purpose of the software after all (GUI based software). But when I had to do all the event based stuff for devices it felt unnatural - I had to accommodate the hardware in my programming?
In a sense the event based programming that came upon us is a step away from previous dreams of "4th generation languages" and back towards more hardware-oriented programming - for the sake of machine efficiency, not programmer efficiency. Writing event based code takes A LOT of getting used to. Writing synchronously and letting the OS take care of resource management is actually easier - unless you are so used to event based code that you now have a knee-jerk reaction against anything else.
But think about it: In event based programming we let physical details like where our code is executed and where it gets data from determine how we write the code. Instead of concentrating on what we want we are much more into how we want it done. That is a big step away from abstraction and towards the hardware.
We are now slowly developing and introducing tools that help us with that problem, but even things like promises still require us to think "event based" - we use such constructs where we have events, i.e. we have to be aware of the breaks. So I don't see THAT much gain, because we still have to write code differently wherever it has such "breaks" (i.e. leaves the CPU).
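To make those "breaks" concrete, here is a small illustrative comparison (the URL is a placeholder): the same wait-for-data step written event based with a handler, and with async/await, where the break is still visible as the await keyword.

// Event based: register a handler; control returns immediately and the
// handler fires later from the event loop.
var xhr = new XMLHttpRequest();
xhr.onload = function () { console.log(xhr.responseText); };
xhr.open('GET', 'https://example.com/data.json');  // placeholder URL
xhr.send();

// async/await: reads almost sequentially, but `await` still marks the point
// where the code leaves the CPU and resumes later from the event loop.
async function load() {
  var response = await fetch('https://example.com/data.json');
  console.log(await response.text());
}
load();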

What are some high-level techniques for improving paint or rendering times?

What are some techniques or methods that could potentially reduce jank and/or improve paint times on browsers?
There isn't an exact answer for this but here are some high level techniques that may work for most situations
Reduce paint layers - Use the browser's dev tools to see how many layers your CSS or markup may produce. Changing or simplifying your CSS could reduce the layer count and produce better outcomes
Frame Budget - If the JavaScript inside your requestAnimationFrame callback takes longer than 16ms to run, you don't have any hope of producing a frame in time for v-sync
Utilize a virtual DOM, or the Web Workers API - If possible, offload processing away from the main thread and use the Web Workers API for it (see the sketch after this list)
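A minimal sketch of that last point, assuming a separate worker.js file and made-up function names (heavyComputation, applyResultToDom), so the expensive work never blocks painting on the main thread:

// main.js - hand the heavy job to a worker instead of blocking the frame loop
var worker = new Worker('worker.js');          // assumed file name
worker.onmessage = function (e) {
  applyResultToDom(e.data);                    // hypothetical function that updates the DOM
};
worker.postMessage({ items: largeDataSet });   // hypothetical payload

// worker.js - runs on a separate thread; no DOM access here
self.onmessage = function (e) {
  var result = heavyComputation(e.data.items); // hypothetical CPU-heavy work
  self.postMessage(result);
};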
Sources:
Jank Busting
Gone in 60 Frames Per Second

Caching text/image assets in performance-constrained environments

I'm working on an extremely performance-constrained device. Because of the overhead of AJAX requests, I intend to aggressively cache text and image assets in the browser, but I need to configure the cache size per-device to as low as 1MB of text and 9MB of images -- quite a challenge for a multi-screen, graphical application.
Because the device easily hits the memory limit, I must be very cautious about how I manage my application's size: code file size, # of concurrent HTTP requests, # of JS processor cycles upon event dispatch, limiting CSS reflows, etc. My question today is how to develop a size-restrained cache for text assets and images.
For text, I've rolled my own cache using JSON.stringify(obj).length for objects and 'string'.length to approximate size. The application manually gets/sets cache entries. Upon hitting a configurable upper limit, the class garbage collects itself from gcLimit down to gcTarget sizes, giving weight to the last-accessed properties (i.e., if something has been accessed recently, skip collecting that object the first time around).
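Schematically, the approach looks something like this simplified sketch (the SizeCache name and the exact eviction loop are illustrative):

// Rough sketch of a size-bounded text cache with eviction weighted by last access.
function SizeCache(gcLimit, gcTarget) {
  this.entries = {};     // key -> { value, size, lastAccess }
  this.total = 0;
  this.gcLimit = gcLimit;
  this.gcTarget = gcTarget;
}

SizeCache.prototype.set = function (key, value) {
  var size = JSON.stringify(value).length;   // rough size estimate in characters
  if (this.entries[key]) this.total -= this.entries[key].size;
  this.entries[key] = { value: value, size: size, lastAccess: Date.now() };
  this.total += size;
  if (this.total > this.gcLimit) this.gc();
};

SizeCache.prototype.get = function (key) {
  var entry = this.entries[key];
  if (!entry) return undefined;
  entry.lastAccess = Date.now();             // recently used entries survive gc longer
  return entry.value;
};

SizeCache.prototype.gc = function () {
  // Evict least-recently-accessed entries until we are back under gcTarget.
  var self = this;
  var keys = Object.keys(this.entries).sort(function (a, b) {
    return self.entries[a].lastAccess - self.entries[b].lastAccess;
  });
  for (var i = 0; i < keys.length && this.total > this.gcTarget; i++) {
    this.total -= this.entries[keys[i]].size;
    delete this.entries[keys[i]];
  }
};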
For images, I intend to preload interface elements and let the browser deal with garbage collection itself by removing DOM elements and never persistently storing Image() objects. For preloading, I will probably roll my own again -- I have examples to imitate like FiNGAHOLiC's ImgPreloader and this. I need to keep in mind features like "download window size" and "max cache requests" to ensure I don't inadvertently overload the device.
This is a huge challenge working in such a constrained environment, and common frameworks like Backbone don't support "max Collection size". Elsewhere on SO, users quote limits of 5MB for HTML5 localStorage, but my goal is not session persistence, so I don't see the benefit.
I can't help feeling there might be better solutions. Ideas?
Edit: @Xotic750: Thanks for the nod to IndexedDB. Sadly, this app is a standard web page built on Opera/Presto. Even better, the platform offers no persistence. Rock and a hard place :-/.
localStorage and sessionStorage (DOM Storage) limits do not apply (or can be overridden) if the application is a browser extension (you don't mention what your application is).
localStorage is persistent
sessionStorage is sessional
Idea
Take a look at IndexedDB; it is far more flexible, though not as widely supported yet.
Also, some references to Chrome storage
Managing HTML5 Offline Storage
chrome.storage
With modern JavaScript engines, CPU/GPU performance is not an issue for most apps (except games, heavy animation or Flash) on even low-powered devices, so I suspect your primary issues are memory and I/O. Optimising for one typically harms the other, but I suspect that the issues below will be your primary concern.
I'm not sure you have any control over the cache usage of the browser. You can limit the memory taken up by the JavaScript app using methods like those you've suggested, but the browser will still do its own thing, and that is probably the primary issue in terms of memory. By creating your own caches you will probably be duplicating data that is already cached by the browser and so exacerbate the memory problem. The browser (unless you're using something obscure) will normally do a better job of caching than is possible in JavaScript. In any case, I would strongly recommend letting the browser take care of garbage collection, as there is no way in JavaScript to actually force browsers to free up the memory (they do garbage collection when they want, not when you tell them to). Do you have control over which browser is used on the device? If you do, then changing that may be the best way to reduce memory usage (also, can you limit the size of the browser cache?).
For AJAX requests, ensure you fully understand the difference between GET and POST, as this has big implications for caching on the browser and on proxies routing messages around the web (and therefore also affects the logic of your app). See if you can minimise the number of requests by grouping them together (JSON helps here). It is normally latency rather than bandwidth that is the issue for AJAX requests (but don't go too far, as most browsers can do several requests concurrently). Ensure you construct your AJAX manager to allow prioritisation of requests (i.e. stuff that affects what the user sees is prioritised over preloading, which is prioritised over analytics - half the web has a Google Analytics call as the first thing that happens after page load, even before ads and other content are loaded).
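A minimal sketch of such a prioritised AJAX manager (the priority levels, the concurrency limit and the enqueue/pump names are all illustrative):

// Illustrative prioritised request queue: user-visible requests jump ahead
// of preloading, which jumps ahead of analytics.
var PRIORITY = { VISIBLE: 0, PRELOAD: 1, ANALYTICS: 2 };
var queue = [];
var inFlight = 0;
var MAX_CONCURRENT = 2;   // assumed limit for a constrained device

function enqueue(url, priority, onDone) {
  queue.push({ url: url, priority: priority, onDone: onDone });
  queue.sort(function (a, b) { return a.priority - b.priority; });
  pump();
}

function pump() {
  while (inFlight < MAX_CONCURRENT && queue.length > 0) {
    run(queue.shift());
  }
}

function run(job) {
  inFlight++;
  var xhr = new XMLHttpRequest();
  xhr.onload = function () {
    inFlight--;
    job.onDone(xhr.responseText);
    pump();                 // a finished request frees a slot for the next one
  };
  xhr.open('GET', job.url);
  xhr.send();
}

// Usage (render is a placeholder callback):
// enqueue('/screens/home.json', PRIORITY.VISIBLE, render);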
Beyond that, I would suggest that images are likely to be the primary contributor to memory issues (I doubt code size even registers, but you should ensure code is minimised with e.g. Google Closure). Reduce image resolutions to the bare minimum and experiment with file formats (e.g. GIF or PNG might be significantly smaller than JPEG for some images (cartoons, logos, icons) but much larger for others (photos, gradients)).
10MB of cache in your app may sound small but it is actually enormous compared with most apps out there. The majority leave caching to the browser (which in any case will probably still cache the data whether you want it to or not).
You mention Image objects which suggests you are using the canvas. There is a noticeable speed improvement if you create a new canvas to store the image (after which you can discard the Image object). You can use this canvas as the source of any image data you later need to copy to a canvas and as no translation between data types is required this is much faster. Given canvas operations often happen many times a frame this can be a significant boost.
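For example, a small sketch of that technique (spriteCache and the function names are placeholders): draw the decoded Image onto an offscreen canvas once, then copy from that canvas whenever you need it.

// Cache a loaded image as an offscreen canvas and discard the Image object.
var spriteCache = {};                          // url -> canvas

function cacheImage(url, img) {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  canvas.getContext('2d').drawImage(img, 0, 0);
  spriteCache[url] = canvas;                   // the Image object is no longer needed
}

// Per frame: canvas-to-canvas copies avoid any repeated decode/translation.
function drawSprite(ctx, url, x, y) {
  ctx.drawImage(spriteCache[url], x, y);
}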
Final note - don't use frameworks / libraries that were developed with a desktop environment in mind. To optimise performance (whether speed or memory) you need to understand every line of code. Look at the source code of libraries (many have some very clever optimised code) but assume that, in general, you are a special case for which they are not optimised.

Does having a larger amount of data in the DOM affect performance? Should I lazy load?

I make heavy use of the excellent jTemplates plugin for a small web app.
Currently I load all of the templates into the DOM on the initial page load.
Over time, as the app has grown, I have gotten more and more templates - currently about 100kb worth.
Because my app is all ajax-based, there is never a need to refresh the page after the initial page load. There is a couple second delay at the beginning while all of the templates load into the DOM, but after that the app behaves very responsively.
I am wondering: In this situation, is there any significant advantage to using jTemplates processTemplateURL method to lazy load templates as needed, as opposed to just bulk loading all of the templates on the initial page load?
(I don't mind the extra 2 or 3 seconds the initial page load takes - so I guess I am wondering -- besides the initial page load delay, is there any reason not to load a large amount of html template data into the DOM? Does having a larger amount of data in the DOM affect performance in any way?)
Thanks (in advance) for your help.
According to Yahoo's Best Practices for Speeding Up Your Web Site article, they recommend not having more than 500-700 elements in the DOM.
The number of DOM elements is easy to test, just type in Firebug's console:
document.getElementsByTagName('*').length
Read more: http://developer.yahoo.com/performance/rules.html
Like a jar that contains 100 marbles, 10 of which are red. It is easy to spot and pick the 10 red marbles from a jar of 100, but if that jar contained 1000 marbles, it would take more time to find the red marbles. Comparing this to DOM elements, the more you have, the slower your selections will be, and that will affect performance.
You really should optimize your DOM in order to save memory and enhance speed. However, the key is to avoid premature optimizations.
What is your target platform? What browser(s) are your users most likely to be using?
For example, if you are targeting primarily desktop PC's and your users are running modern browsers, then you probably should prefer clarity and simplicity of your code.
If you are targeting desktop PC's but must support IE6, say, then having too many DOM elements will impact your performance and you should think along the lines of optimization.
However, if you are targeting modern browsers, but in areas with poor bandwidth (e.g. on a cruise ship etc.), then your bandwidth considerations may outweigh your DOM considerations.
If you are targeting iPhones, iPads etc., then memory is a scarce resource (as is CPU), so you definitely should optimize the DOM. In addition, on mobile devices, you're going to give more weight to optimizing the AJAX payload due to bandwidth issues than to anything else. You'll give yet more weight to reducing the number of AJAX calls vs. saving on DOM elements. For example, you may want to just load more DOM elements in order to reduce the number of AJAX calls due to bandwidth considerations -- again, only you can decide on the right balance.
So the answer really is: it depends. In a fast-connection, modern-browser environment, there is no real need to prematurely optimize unless your DOM gets really huge. In a slow-connection or mobile environment, weight bandwidth optimizations over DOM optimizations, but do optimize on DOM node count as well.
Having a whole bunch of extra DOM elements will not only affect your initial load time, it will also affect more or less everything on the page. JavaScript DOM queries will run slower, inserts will run slower, CSS will apply slower and so on, since the entire tag soup is parsed into a DOM tree by the browser, and any traversal of that tree is going to be affected. To measure how much it's going to slow your page down, you can use tools like DynaTrace AJAX and run it once with your current code and once with your code with no templates. If you notice a large difference between those two runs in terms of JavaScript execution or rendering, you should probably lazy load (though 100kb is not that much, so you might not see a significant difference).
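If you do go the lazy-load route, a generic sketch of the idea (not jTemplates-specific; the cache object and URL scheme are made up) is to fetch a template fragment only the first time it is needed and keep it in a small in-memory cache:

// Generic lazy-load sketch: download a template only on first use.
var templateCache = {};

function withTemplate(name, callback) {
  if (templateCache[name]) {
    callback(templateCache[name]);
    return;
  }
  $.get('/templates/' + name + '.html', function (html) {  // placeholder URL scheme
    templateCache[name] = html;
    callback(html);
  });
}

// Usage: the template is only fetched (and added to the DOM) when first needed.
withTemplate('user-profile', function (html) {
  $('#content').html(html);
});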
