I am wondering whether JS is slower than CSS at determining certain values, since both would have to do the same calculations.
In other words, if I set a margin in % on an element, it will use the width of its parent as a base. For instance, margin: 50% = margin: 0.5 * parent.width. Internally, then, the browser has to calculate the correct margin based on the parent's width, right? How, then, is this different from calculating it in JS? Why is CSS faster? What about CSS's internal rendering makes these computations faster than JS can manage?
Here is a fiddle. Both child divs are the same, but one div's margin is calculated in CSS (margin: 20%) and the other in JS:
// Mirror the CSS rule (margin: 20%) by computing 20% of the parent's width in JS
var $cont = $("#container");
$("#js").css("margin", $cont.width() * 0.2);
Considering resize: the CSS engine will have to re-calculate the margin as well on resize, right?
Considering load time: I am only talking about the actual execution time. In other words:
var $cont = $("#container");
$("#js").css("margin", $cont.width() * 0.2);
vs.
#css {margin: 20%;}
Excluding any additional (library) load times. The difference between jQuery and vanilla JS shouldn't be included in the answer. I am aware of performance differences between the two.
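For completeness, keeping the JS version in sync across window resizes also has to be wired up by hand, roughly like this sketch (the CSS version gets the same behavior for free):
var $cont = $("#container");
function syncMargin() {
  // Recompute 20% of the parent's current width on every resize
  $("#js").css("margin", $cont.width() * 0.2);
}
syncMargin();
$(window).on("resize", syncMargin);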
This question will have somewhat of a vague answer since it's extremely browser-dependent. However, it's probably fairly safe to say that CSS is generally going to be faster here.
The first thing to note is that we are not comparing the speed of CSS as a language to JavaScript, per se. CSS isn't necessarily executed repeatedly in this sense. Its focus is on specifying properties that are very native to the heart of the browser. The browser can parse it once and then do whatever it wants with that specification. In fact, it would generally be a rather poor browser if it's repeatedly checking/executing the same unchanging CSS code over and over.
So one way to look at this is that we aren't really asking why CSS is faster than JavaScript. CSS is not the one doing the calculations here. We're asking why a native web browser can be faster than the client-side JavaScript it can run on top.
If you put yourself in a browser developer's shoes, you can see a specification when parsing the CSS file initially that an element is supposed to have a relative size that is 50% of its parent.
You're now allowed to do whatever you want with the specification as the page is redrawn, scrolled, interacted with. You can store that size specification at the heart of your core data structures and even use metal-scraping assembly code if you want. You can calculate the new absolute sizes of such relative-sized children in a single pass descending down the hierarchy from parent to child. You can calculate the absolute pixel sizes of 4 children at once using SIMD. You can do anything given that initial property specification.
With JavaScript, we have no such control. The client-side script can do anything here, so the browser has to fire a resize event; if the JS side resizes things inside that handler, it might trigger a whole cascade of further events and possibly even reset or interfere with the regular process of calculating client rectangles. The size of an element, even at the percentage level, turns into a blank, a giant question mark that has to be repeatedly 'queried' in a sense. This is all speculative, as it's very browser-dependent.
But what isn't browser-dependent is that CSS allows those kinds of browser-native optimizations given its static, predictable, property-specifying nature. JS has a lot more freedom, and so it has to run on top and can't have that kind of special privilege of allowing the browser developers to do whatever they want with it.
With CSS, the ball is in the native browser developers' court. With JS, the ball is in your court, and it's going to be quite difficult (if even possible) to beat the browser developers given that the JS code is running on top of the browser.
1) In the end, it's all CSS positioning. Calculating it in JS, versus letting the CSS + layout engine calculate it just means that once you have the number, you need to assign it to the element (or elsewhere) and have the CSS engine pick it up from there, and apply it during the next repaint.
2) The CSS engine has internal access to the actual pixel widths of things. In JS, there are times when you need to determine the computed styles of elements in order to make your calculations...
...that means you need to call a method which reads those values, parse them into workable numbers, do your calculations, convert back into the appropriate string representing the unit value, and send it back to CSS/layout/rendering.
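To make that round trip concrete, a minimal sketch in vanilla JS (element ids borrowed from the question's fiddle):
var parent = document.getElementById("container");
var child = document.getElementById("js");
// Read: ask the engine for the resolved pixel value (this may force a reflow)
var parentWidth = parseFloat(getComputedStyle(parent).width);
// Calculate: do the math in JS-land
var margin = parentWidth * 0.2;
// Write: convert back into a unit string and hand it back to CSS/layout/rendering
child.style.margin = margin + "px";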
This doesn't mean that JS can't be fast.
Likewise, there are things that JS can do, which CSS will never be able to do on its own (like advanced, dynamic animation-blending)...
...but the trick is to use both where appropriate. For the reasons described above, something you write to run dozens of times a second can't be compared to something that only needs to be recalculated when you touch something that invalidates the previous calculations. The trade-off is that in the JS version you can change minute details on a whim and compose all kinds of sequences that can be decomposed and recomposed on the fly, whereas in CSS you're at the mercy of the predefined animations.
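As a loose illustration of that kind of on-the-fly composition, a sketch that blends two motion functions with a weight you can retune at any time, mid-animation, which a predefined CSS animation can't do (the element id is a made-up example):
var el = document.getElementById("box"); // hypothetical element
var blend = 0.5; // adjustable on a whim, even while the animation runs

function bounce(t) { return Math.abs(Math.sin(t * 2)) * 100; }
function drift(t) { return (t * 20) % 100; }

function frame(time) {
  var t = time / 1000;
  // Mix the two motions according to the current blend weight
  var y = bounce(t) * blend + drift(t) * (1 - blend);
  el.style.transform = "translateY(" + y + "px)";
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);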
Related
I've made some code (tool? framework? Not sure what to call it) that is intended to make it possible to style pages with JavaScript without the layout jumping when reloading or changing pages (so for use in traditional multi-page sites; I'm not sure of the conventional term for that). I'm no web expert, so I'm unsure whether it's worth developing this further or whether there are better solutions to what I'm trying to solve (more than likely).
The basic structure is
A. Under certain client-side conditions (e.g. browser resolution, but it could be anything, like a certain user using the site), CSS is generated by client-side JS and written to a file on the server under a name appropriate to the scenario (e.g., 1024x768.css, 102400x76800.css).
B. The server code checks (via cookies) whether the client-side condition is met, checks whether a CSS file pertaining to that condition exists, and serves it if so; otherwise it generates one (A).
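A minimal sketch of the client-side half of (A), assuming resolution as the condition and a hypothetical /save-css endpoint for the server-side half:
var key = screen.width + "x" + screen.height; // e.g. "1024x768"
document.cookie = "cssKey=" + key + "; path=/"; // lets the server check the condition
// Generate scenario-specific CSS in JS (trivial example rule)
var css = "#menu a { padding: 0 " + Math.round(screen.width / 100) + "px; }";
// Hypothetical endpoint: the server would write this out as e.g. 1024x768.css
fetch("/save-css", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ key: key, css: css })
});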
Potential uses
You inherit a legacy site, or clients insist on a certain template (a WordPress theme) with a predetermined HTML structure, such that it's difficult to achieve a custom look just by modifying the CSS. It might be much quicker to make calculations and adjustments with JavaScript than to refactor the HTML or figure out the solution in CSS (time permitting, the latter being arguably ideal). On the other hand, you don't want the style to jump every time you load the page, since that looks tacky.
Edit: example of the above
As noted below in the comments, I can't think of a great example off the top of the ol' noggin. Right now my test is modifying a navigation menu of the type <div class="menu"><ul><li><a>Section 1</a></li><li><a>Section n</a></li></ul></div> such that the <a>'s have just enough padding on both sides that the menu <div> fully fills up the width of the browser.
I imagine there's a conventional solution to this, so if you're feeling in the mood, please let me know.
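For what it's worth, a sketch of the padding calculation just described (class names taken from the markup above; it's approximate, since adding padding itself changes the measurements):
var menu = document.querySelector(".menu");
var links = menu.querySelectorAll("a");
var used = 0;
links.forEach(function (a) { used += a.offsetWidth; });
// Split the leftover width evenly across both sides of every link
var leftover = document.documentElement.clientWidth - used;
var pad = Math.floor(leftover / (links.length * 2));
links.forEach(function (a) {
  a.style.paddingLeft = a.style.paddingRight = pad + "px";
});
A conventional pure-CSS approach to filling the full width would be display: flex on the ul with justify-content: space-between, which avoids the JS entirely.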
You want particularly complicated sizing or positioning based on complex calculations (dependent on screen size, or not), but, again, you don't want things jumping around.
Edit: example of the above
Positioning elements in a spiral pattern (say this kind) with diminishing size. This seems to be nontrivial in CSS, perhaps done by calculating the positions beforehand and placing with absolute positioning. But then there's the problem of having everything scale depending on screen resolution.
Alternately Javascript could calculate positions and sizes dynamically. Of course writing the method to correspond to the mathematical spiral function would be a challenge (though an interesting one).
There could be other solutions like using .svg, but if written generically it would be possible to position according to other mathematical functions (e.g., sine wave), or complex ratios (golden mean) fairly easily.
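A sketch of the JS approach for the spiral case, using an Archimedean spiral (r grows linearly with the angle) and absolute positioning; the class name and constants are made up:
var items = document.querySelectorAll(".spiral-item");
var cx = window.innerWidth / 2;
var cy = window.innerHeight / 2;
items.forEach(function (el, i) {
  var angle = i * 0.8;                 // radians between successive items
  var radius = 20 + 25 * angle;        // Archimedean spiral: r = a + b * angle
  var size = Math.max(8, 60 - i * 4);  // diminishing size
  el.style.position = "absolute";
  el.style.left = (cx + radius * Math.cos(angle) - size / 2) + "px";
  el.style.top = (cy + radius * Math.sin(angle) - size / 2) + "px";
  el.style.width = el.style.height = size + "px";
});
Swapping the polar function out for a sine wave or golden-angle spacing is then a one-line change, which is the genericity mentioned above.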
You want a site where the user can customize the look (reposition or even resize elements), and you want the customization to be automatically remembered and generated in the server-side code (perhaps even without a login). I'm sure this is facilitated by many frameworks, but this approach divests the process from any specific framework.
I was wondering if other folks had thoughts on whether:
A. There's a better solution to all this I've missed.
B. Whether the system I described, pushing CSS from JS to be written on the server, is sound, or whether the same thing could be achieved another way, entirely client-side.
C. And, since this isn't a specific technical question, whether this is the right place to ask it, and if not, where I should.
Like I said, I'm no expert, so would greatly appreciate any feedback or other things that might help me to learn.
Thanks
In the interesting article Everything You Need to Know About the CSS will-change Property, it says that using the translate3d() hack is the old way of doing things if you want hardware acceleration, and that you should use will-change instead. Is there any benefit in performance when using will-change? I have found it is extremely difficult in some cases to add will-change via JavaScript just before an element triggers an animation. For example:
You can't just put will-change in your stylesheet and expect it to work, because it can make things worse when applied to multiple elements.
You can't put will-change in a :hover pseudo-class either, because the browser needs some time to prepare.
You are left with animations triggered on a click event, where you can add will-change on hover via JavaScript, leaving the browser enough time to prepare (~200ms).
Overall, you must somehow predict the user's behavior, and that is difficult. Compared to translate3d(), it is too complicated (you must also remove will-change after the animation ends). Why use it?
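For reference, the hover-to-prepare pattern described above looks roughly like this sketch (the selector is a placeholder):
var el = document.querySelector(".animatable");
el.addEventListener("mouseenter", function () {
  // Hint the browser ~200ms before the click-triggered animation starts
  el.style.willChange = "transform";
});
el.addEventListener("animationend", function () {
  // Release the resources once the animation is done
  el.style.willChange = "auto";
});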
As you've already stated, specifying a transform constitutes a hack. will-change is a standard.
There are guidelines around when to use will-change and when not to (with the general idea of using it sparingly and only when you need to). On the other hand, many authors recommend using the transform hack with abandon.
If you expect a user to interact with an element in a way that triggers some resource-intensive visual effect, set will-change. Simple as that. You don't have to predict when or if the user will ever interact with the element and decide whether or not to set will-change — you just set will-change and let the browser worry about the rest.
You don't will-change all the things, but the moment you expect a certain property to change on a certain element, tell the browser that that specific property will-change on that specific element. Whether or not a user actually triggers the change in that specific page load or browsing session is irrelevant to you as the author.
will-change doesn't inherently have better performance — in fact, it does pretty much the same thing as the transform hack at a high level. The performance boost comes mostly from judicious use of will-change so you don't waste system resources on things that don't need them (see point #2 above).
I'm looking on how to implement pagination/page breaks with page formats (A4, letter, etc.) using a rich text editor (like the Medium Editor).
The font family, font size, line height, and margins are going to be fixed, as this is a very specific case study. I'm thinking of handling zoom levels in pure CSS (scale) instead of directly modifying widths, heights, etc.
Also, for the sake of the experiment, say I'll be running this in Chrome only & browser rendering differences aren't really an issue (but even if I were building this for various browsers, I'd try and use more precise units, such as "px", "em" for the font-sizes, page widths, margins between elements, etc. - probably just "px").
Keep in mind I'm not asking about @page rules or print rules (I know how to achieve what I want with those when printing to PDF), but rather a direct in-browser implementation. Printing should (and will) be handled by @page, and I have no issue handling page breaks there when I need them.
In the end, my question is - where do I start?
I imagine taking into account word count and the margins of h1/h2/h3... and p tags, along with case-specific CSS rules (break-after, break-inside, word-break, etc.), even though taking those into account with JS probably won't be very easy.
Probably also the page height? Say, if the format is A4 (595px × 842px at 72dpi), take it into account so that the total height of the elements inside a page == page height - [sum of top and bottom page margins]?
Other than the latter (with a simple js loop), if someone has any pointers, or maybe even a code snippet (or a plugin?), I'd be very grateful! Thank you!
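As a starting point, a sketch of the height-accumulating loop hinted at above (A4 at 72dpi with fixed margins; the class names are assumptions):
var PAGE_HEIGHT = 842 - 2 * 72; // usable A4 height minus top/bottom margins, in px
var editor = document.querySelector(".editor");
var blocks = Array.from(editor.children); // the h1/h2/p/... elements
var used = 0;
blocks.forEach(function (block) {
  var style = getComputedStyle(block);
  var h = block.offsetHeight + parseFloat(style.marginTop) + parseFloat(style.marginBottom);
  if (used + h > PAGE_HEIGHT) {
    block.classList.add("page-break-before"); // a class styled to start a new page
    used = 0;
  }
  used += h;
});
This only breaks between block elements; splitting a single overflowing paragraph across pages is the genuinely hard part.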
DOM blocking is something many people not familiar with JavaScript's strictly single-threaded synchronous execution model find out about the hard way, and it's usually just something we want to work around somehow (using timeouts, web-workers, etc). All well and good.
However, I would like to know if blocking of the actual user-visible rendering is something you can actually rely on. I'm 90% sure it is de facto the case in most browsers, but I am hoping this isn't just a happily consistent accident. I can't seem to find any definitive statements from DOM specifications or even vendor documentation like MDN.
What worries me slightly is that while changes to the DOM are indeed not visible looking at the page, the internal DOM geometry (including CSS transforms and filters) does actually update during synchronous execution. For example:
// Log the width before the transform class is applied
console.log(element.getBoundingClientRect().width);
element.classList.add("scale-and-rotate");
// The new geometry is visible to script immediately, before any repaint
console.log(element.getBoundingClientRect().width);
element.classList.remove("scale-and-rotate");
... will indeed report two different width values, though the page does not appear to flash. Synchronously waiting after the class is added (using a while loop) doesn't make the temporary changes visible either. Doing a Timeline trace in Chrome reveals that internally paint and re-paint is taking place just the same, which makes sense...
My concern is that, lacking a specific reason not to, some browsers (say, those dealing with underpowered mobile CPUs) may choose to actually reflect those internal calculations in the user-visible layout during that function's execution, which would result in an ugly "flash" during such temporary operations. So, more concretely, what I'm asking is: do they have a specific reason not to?
(If you are wondering why I care about this at all: I sometimes need to measure calculated dimensions using getBoundingClientRect for elements in a certain state, to plan out spacing or animations or other such things, without actually putting them in that state or animating them first...)
According to various sources, getting the position or size of a DOM element will trigger a reflow of the output if necessary, so that the returned values are correct. As a matter of fact, reading the offsetHeight of an element has become a way to force a reflow, as reported by Alexander Skutin and Daniel Norton.
Paul Irish gives a list of several actions that cause a reflow. Among them are these element box metrics methods and properties:
elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent
elem.clientLeft, elem.clientTop, elem.clientWidth, elem.clientHeight
elem.getClientRects(), elem.getBoundingClientRect()
Stoyan Stefanov describes strategies used by browsers to optimize reflows (e.g. queueing DOM changes and performing them in batches), and adds the following remark:
But sometimes the script may prevent the browser from optimizing the reflows, and cause it to flush the queue and perform all batched changes. This happens when you request style information, such as
offsetTop, offsetLeft, offsetWidth, offsetHeight
scrollTop/Left/Width/Height
clientTop/Left/Width/Height
getComputedStyle(), or currentStyle in IE
All of these above are essentially requesting style information about a node, and any time you do it, the browser has to give you the most up-to-date value. In order to do so, it needs to apply all scheduled changes, flush the queue, bite the bullet and do the reflow.
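A common way to cooperate with that batching is to group all reads before all writes instead of interleaving them; a sketch:
var items = document.querySelectorAll(".item");
// Interleaved read/write forces a reflow on every iteration:
// items.forEach(function (el) { el.style.width = el.offsetWidth / 2 + "px"; });

// Batching all reads, then all writes, costs one reflow at most:
var widths = [];
items.forEach(function (el) { widths.push(el.offsetWidth); }); // reads
items.forEach(function (el, i) {                               // writes
  el.style.width = widths[i] / 2 + "px";
});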
There is nothing in Javascript related to concurrency that is anything but de facto. JS simply does not define a concurrency model. Everything is happy accident or years of consensus.
That said, if your function does not make any calls to weird things like XMLHttpRequest or "alert" or something like that, you can basically treat it as single-threaded with no interrupts.
I'm working on my new portfolio and I want to use complex JavaScript (for animating, moving, and applying effects to DOM elements), and I'm going to do as much optimization as possible to maximize performance. BUT I can't prepare for every case my site will face. So I started looking for a script with which I can check the browser's performance (taking a few seconds at most) and, based on the test results, set the number of effects displayed and calculated on the page.
So is there any way to check browser performance and set the optimal number of applied effects on a page?
If possible, use CSS transforms/transitions instead of pure-js effects, as the former are usually hardware accelerated and thus orders of magnitude faster.
Even if you don't use CSS transforms, you can detect support for them using e.g. Modernizr, and if they're supported, you can assume that the browser is very modern and has pretty good performance in general. Also take a look at window.requestAnimationFrame; it automatically throttles the framerate.
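A rough sketch of such a probe: count requestAnimationFrame callbacks for one second and scale the number of effects from the result (the thresholds here are arbitrary):
function measureFps(duration, callback) {
  var frames = 0;
  var start = performance.now();
  function tick(now) {
    frames++;
    if (now - start < duration) {
      requestAnimationFrame(tick);
    } else {
      callback(frames / ((now - start) / 1000));
    }
  }
  requestAnimationFrame(tick);
}

measureFps(1000, function (fps) {
  // Arbitrary thresholds: tune the number of active effects to the result
  var effectCount = fps > 50 ? 20 : fps > 30 ? 10 : 3;
  console.log("~" + fps.toFixed(0) + " fps, enabling " + effectCount + " effects");
});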