I do understand that requestAnimationFrame() is very helpful when running animations that constantly change a style property over time.
const e = document.getElementById('e');
let count = 0;

function move(timestamp) {
  e.style.left = ++count + 'px';
  requestAnimationFrame(move);
}

requestAnimationFrame(move);
Now my question is: Does it make sense to wrap everything that changes the visual appearance of the view in requestAnimationFrame?
In other words, will this pseudocode example perform better than it would without requestAnimationFrame?
const e = document.getElementById('e');
const fragment = document.createDocumentFragment();

nodes.forEach(node => {
  fragment.appendChild(node);
});

requestAnimationFrame(timestamp => {
  e.classList.toggle('active');
  container.classList.add('active');
  container.appendChild(fragment);
});
The short answer is "use requestAnimationFrame (rAF) for frequent updates". This mostly involves animations or making DOM modifications during scroll/resize. If you're just doing a one-off DOM mutation, you can generally just do it and move on. It doesn't ever hurt to use rAF though, especially if you're not sure if there are other DOM mutations taking place on the page.
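For instance, a common pattern for the scroll case is to coalesce all work into a single rAF callback per frame. A minimal sketch (the 'header' id, 'compact' class, and 100px threshold are hypothetical, just for illustration):

let scheduled = false;
window.addEventListener('scroll', () => {
  if (scheduled) return; // a frame's worth of work is already queued
  scheduled = true;
  requestAnimationFrame(() => {
    scheduled = false;
    // one DOM write per frame, no matter how often 'scroll' fires
    document.getElementById('header')
      .classList.toggle('compact', window.scrollY > 100);
  });
});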
If you are doing a lot of DOM-heavy updates, there are libraries out there that take care of this in a responsible and performant way (GreenSock, React, Vue, many others).
If you're writing by hand, you should understand how browsers calculate layout and queue paint cycles, and how JavaScript interacts with the rendering engine. For example, there are a number of element properties that trigger a reflow simply by being read. These are expensive operations and should be carefully managed.
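For instance, here's a minimal sketch (my own illustration, using a hypothetical '.item' selector) of separating reads from writes so that interleaved read/write pairs don't force a reflow on every pass:

// Phase 1: all reads (at most one layout recalculation)
const items = Array.from(document.querySelectorAll('.item'));
const heights = items.map(el => el.offsetHeight);

// Phase 2: all writes (no forced reflow in between)
items.forEach((el, i) => {
  el.style.height = (heights[i] + 10) + 'px';
});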
Related
I have a question about DOM manipulation.
Does DOM manipulation require a refresh? Or when does DOM manipulation require refreshing? I see that some sites keep loading while updating some of their parts.
Also,
How does react.js help to avoid these kinds of problems in front-end development?
I have a question about DOM manipulation. Does DOM manipulation require a refresh?
It depends on what you mean by "refresh." I can think of at least three possible things you could mean:
"Refresh" like pressing F5 on a page or hitting the reload button
"Refresh" like recalculate the positions of the elements; this is more correctly called "reflow"
"Refresh" as in repaint the elements on the screen ("repaint")
Reload
No, it doesn't, and in fact if you reload the page, the changes you've made to the DOM with your browser-based code will get wiped out. Here's a really simple example of DOM manipulation; no reloading is done until the end, and when it's done you can see that the changes made previously are wiped out (and then, since the code is also reloaded, we're starting from scratch, so it all starts over):
for (let i = 0; i < 5; ++i) {
  delayedAdd(i);
}

function delayedAdd(value) {
  setTimeout(() => {
    // DOM manipulation
    const p = document.createElement("p");
    p.textContent = value;
    document.body.appendChild(p);
    if (value === 4) {
      // Refresh after the last one -- wipes out what we've done
      setTimeout(() => {
        location.reload();
      }, 800);
    }
  }, value * 800);
}
Reflow
Some DOM manipulations trigger reflow, yes; or more accurately, doing certain things (which might be just getting, say, an element's current clientLeft property value) triggers reflow, and reflow may involve recalculating layout, which can be expensive. Code can trigger reflow repeatedly, forcing the layout to be recalculated over and over within a single series of manipulations. This list from Paul Irish claims to be "What forces layout/reflow. The comprehensive list." Paul Irish is deeply experienced in this realm and well regarded in the industry, so the list is likely to be accurate (though no one is perfect, and these things can change over time). The page also has some guidance on how to avoid reflow (or more accurately, how to avoid layout recalculation).
Here's an example of code causing unnecessary layout recalculation:
const elementLefts = [];
// This loop triggers layout recalculation 10 times
for (let n = 0; n < 10; ++n) {
  const span = document.createElement("span");
  span.textContent = n;
  document.body.appendChild(span);
  // Triggers layout recalculation every time:
  elementLefts.push(span.offsetLeft);
}
console.log(elementLefts.join(", "));
We can get that same information after the loop and only trigger a single layout recalculation:
const spans = [];
// This loop doesn't read any layout information, so it doesn't
// trigger layout recalculation
for (let n = 0; n < 10; ++n) {
  const span = document.createElement("span");
  span.textContent = n;
  document.body.appendChild(span);
  spans.push(span);
}
// Triggers one layout recalculation, because nothing changes between
// the times we get `offsetLeft` from each element
const elementLefts = spans.map(span => span.offsetLeft);
console.log(elementLefts.join(", "));
Repaint
The browser repaints the screen all the time, typically 60 times per second (or even more if the system it's running on has a higher refresh rate) unless it's blocked by a long-running task on the UI thread (which is shared with the main JavaScript code). DOM manipulations don't cause this repaint (though changing the DOM may change what's painted, which might in turn prevent the browser reusing some painting information it had from last time).
Also, how does react.js help to avoid these kinds of problems in front-end development?
Libraries like React can help with reflows by minimizing layout recalculations: they know which operations force a recalculation and avoid doing them in loops, etc. That said, they're not a magic bullet, and like everything else they have cons as well as pros.
I am trying to understand how to prevent memory leaks when using timers. I believe the memory graph in the Chrome DevTools Performance tab (I am still learning about this feature) is showing me a bad pattern in terms of memory management: a "sawtooth" pattern each time a timer fires.
I have a simple test case (1. right-click 'timer-adder.html'; 2. click 'Preview' to get a preview of the loaded HTML) that involves using a constructor function to create objects that work as timers; in other words, for each update the DOM is changed inside a setInterval callback.
//...
start: function (initialTime, prefixedId) {
  let $display = document.getElementById(prefixedId + '-display');
  if ($display.style.display === 'none') { $display.style.display = 'block'; }
  $display.textContent = TimerHandler.cycle(initialTime);
  $display = null;
  return setInterval(function () {
    this.initialTime--;
    (this.initialTime === 0 && this.pause());
    document.getElementById(prefixedId + '-display').textContent = TimerHandler.cycle(this.initialTime);
  }.bind(this), 1000);
},
//...
Attempts to minimize leakage although not directly related to what I believe to be the problem at hand:
assigning $display to null to make it collectible (garbage collection).
clearing the interval.
What I identify as a bad pattern:
[Screenshot: DevTools memory graph showing the repeating sawtooth pattern (zoomable image hosted on imgur.com)]
Would fair memory usage translate to DevTools showing a horizontal line? And what would be a safer approach to this side effect? As the test stands right now, I think that in the long run multiple active timers would overload memory and thus result in a noticeable decrease in performance.
PS: As a newcomer, I hope this is a fair presentation of the problem. Thank you.
As you can see from the last part, the garbage collector was able to free that memory, so there is no memory leak.
Garbage collection is a complicated task, which is why V8 tries to minimize the number of stop-the-world collections. In other words: it only cleans up memory if it needs to.
In your case it doesn't. There is still enough space available.
If (a noticeable amount of) memory persists across GC calls, then you do have a memory leak.
assigning $display to null to make it collectible (garbage collection).
That's just unnecessary. $display is not accessed in the inner function, so it is eligible for garbage collection once the outer function has executed; reassigning it doesn't change that.
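If anything is worth optimizing here, it's the repeated DOM lookup inside the interval callback rather than the null assignment. A minimal sketch of that change (assuming, as in the original, that pause() eventually calls clearInterval with the id returned by start):

//...
start: function (initialTime, prefixedId) {
  // cache the node once; the closure keeps it alive only as long as the timer
  const $display = document.getElementById(prefixedId + '-display');
  if ($display.style.display === 'none') { $display.style.display = 'block'; }
  $display.textContent = TimerHandler.cycle(initialTime);
  return setInterval(function () {
    this.initialTime--;
    if (this.initialTime === 0) { this.pause(); }
    // reuse the cached node instead of calling document.getElementById every second
    $display.textContent = TimerHandler.cycle(this.initialTime);
  }.bind(this), 1000);
},
//...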
Years ago, I heard about a nice 404 page and implemented a copy.
In working with ReactJS, I intended to implement the same idea, but it is slow and jerky in its motion, and after a while Chrome gives an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond an unresponsive page to bringing the computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging down? Am I making a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work" to create and render an element and do a single document.getElementById. Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of ReactJS videos I've seen is squarely in the spirit of "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient:
var update = function() {
  React.render(React.createElement(Pragmatometer, null),
    document.getElementById('main'));
  for (var instance in CKEDITOR.instances) {
    CKEDITOR.instances[instance].updateElement();
  }
  save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100ms, effectively bringing the computer to its knees and lower.
At the very least, I'd recommend caching as much as you can outside of the interval callback. There's also no need to call React.render multiple times: once the component is rendered into the DOM, use setProps or forceUpdate to make it render changes.
Here's an example of what I mean:
// React.render returns the mounted component instance,
// which is what provides forceUpdate
var mainComponent = React.render(
  React.createElement(Pragmatometer, null),
  document.getElementById('main'));

var update = function() {
  mainComponent.forceUpdate();
  for (var instance in CKEDITOR.instances) {
    CKEDITOR.instances[instance].updateElement();
  }
  save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout, with the callback setting up the next setTimeout, like this:
var update = function() {
  // do some stuff
  // when done, set up the next timeout
  setTimeout(update, 100);
};
setTimeout(update, 100);
My web page creates a lot of DOM elements at once in a tight (batch) loop, depending on data fed by my Comet web server.
I tried several methods to create those elements. Basically it boils down to either (1):
var container = $('#selector');
for (...) container.append('<html code of the element>');
or (2):
var html = '';
for (...) html += '<html code of the element>';
$('#selector').append(html);
or (3):
var html = [];
for (...) html.push('<html code of the element>');
$('#selector').append(html.join(''));
Performance-wise, (1) is absolutely awful (3 s per batch on a desktop computer, up to 5 min on a Galaxy Note fondleslab), while (2) and (3) are roughly equivalent (300 ms on desktop, 1.5 s on fondleslab). Those timings are for about 4,000 elements, which is about 1/4 of what I expect in production, and that is not acceptable, since I should be able to handle this amount of data (15k elements) in under 1 s, even on the fondleslab.
The very fact that (2) and (3) have the same performance makes me think that I'm hitting the infamous "naively concatenating strings uselessly reallocates and copies lots of memory" problem (even though I'd expect join() to be smarter than that). [edit: after looking more closely into it, it turns out I was misled about that; the problem is more on the rendering side -- thanks DanC]
In C++ I'd just go with std::string::reserve() and operator+= to avoid the useless reallocations, but I have no idea how to do that in JavaScript.
Any idea how to improve the performance further? Or can you at least point me to ways to identify the bottleneck (even though I'm pretty sure it's the string concatenations)? I'm certainly no JavaScript guru...
Thanks for reading.
For what it's worth, that huge number of elements is because I'm drawing a (mostly real-time) graph using DIV's. I'm well aware of Canvas but my app has to be compatible with old browsers so unfortunately it's not an option. :(
Using DOM methods, building and appending 12000 elements clocks in around 55ms on my dual-core MacBook.
document.getElementById('foo').addEventListener('click', function () {
  build();
}, false);

function build() {
  console.time('build');
  var fragment = document.createDocumentFragment();
  for (var e = 0; e < 12000; e++) {
    var el = document.createElement('div');
    el.appendChild(document.createTextNode(e));
    fragment.appendChild(el);
  }
  document.querySelectorAll('body')[0].appendChild(fragment);
  console.timeEnd('build');
}
Resig on document.createDocumentFragment
This is not a solution to the performance problem, but only a way to ensure the UI loop is free to handle other requests.
You could try something like this:
var container = $('#selector');
for (...) setTimeout(function () { container.append('<html code of the element>'); });
To be slightly more performant, I would actually call setTimeout only after every x iterations, after building up a larger string. And, not having tried this myself, I am not sure whether the ordering of setTimeout calls will be preserved. If not, then you can do something more like this:
var arrayOfStrings = [ /* each element is a batch of 100 or so elements' HTML */ ];

function processNext(arr, i) {
  container.append(arr[i]);
  if (i + 1 < arr.length) {
    setTimeout(function () { processNext(arr, i + 1); });
  }
}

processNext(arrayOfStrings, 0);
Not pretty, but would ensure the UI is not locked up while the DOM is manipulated.
The while statement in this function runs too slowly (it blocks page load for 4-5 seconds) in IE/Firefox, but runs fast in Safari...
It's measuring pixel width of text on a page and truncating until text reaches ideal width:
function constrain(text, ideal_width) {
  $('.temp_item').html(text);
  var item_width = $('span.temp_item').width();
  var ideal = parseInt(ideal_width);
  var smaller_text = text;
  var original = text.length;
  while (item_width > ideal) {
    smaller_text = smaller_text.substr(0, smaller_text.length - 1);
    $('.temp_item').html(smaller_text);
    item_width = $('span.temp_item').width();
  }
  var final_length = smaller_text.length;
  if (final_length != original) {
    return smaller_text + '…';
  } else {
    return text;
  }
}
Any way to improve performance? How would I convert this to a bubble-sort function?
Thanks!
Move the calls to $() outside of the loop, and store the result in a temporary variable. Running that function is going to be the slowest thing in your code, aside from the call to .html().
Library authors work very, very hard on making their selector engines fast, but a selector lookup is still dog slow compared to normal JavaScript operations (like looking up a variable in the local scope) because it has to interact with the DOM. Especially with a class selector like that, jQuery has to loop through essentially every element in the document, looking at each class attribute and running a regex on it. Every time around the loop! Get as much of that stuff out of your tight loops as you can. WebKit runs it fast because it has .getElementsByClassName while the other browsers don't (yet).
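For concreteness, a minimal sketch of that advice applied to constrain(), assuming '.temp_item' and 'span.temp_item' refer to the same element, so one cached lookup can serve both:

function constrain(text, ideal_width) {
  var $temp = $('span.temp_item'); // one DOM query, reused in the loop
  var ideal = parseInt(ideal_width, 10);
  var smaller_text = text;
  $temp.html(smaller_text);
  while ($temp.width() > ideal) {
    smaller_text = smaller_text.substr(0, smaller_text.length - 1);
    $temp.html(smaller_text); // still writes and measures each pass, but no re-querying
  }
  return smaller_text.length === text.length ? text : smaller_text + '…';
}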
Instead of removing one character at a time until you find the ideal width, you could use a binary search.
I see that the problem is that you are constantly modifying the DOM in the loop, by setting the html of the temp_item and then re-reading the width.
I don't know the context of your problem, but trying to adjust the layout by measuring the rendered elements is not a good practice from my point of view.
Maybe you could approach the problem from a different angle. Truncating to a fixed width is common.
Another possibility (a hack?), if you have no other choice, could be to use the overflow CSS property of the container element and put the … in another element next to the text. Though I recommend you rethink the need to solve the problem the way you are intending.
Hugo
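For what it's worth, a minimal sketch of that overflow idea in browsers that support text-overflow (the target element and the fixed width are assumptions for illustration):

// let the browser truncate and draw the ellipsis itself; no measuring loop
var el = document.querySelector('.temp_item'); // hypothetical target element
el.style.width = '200px';           // the "ideal width"
el.style.whiteSpace = 'nowrap';     // keep the text on one line
el.style.overflow = 'hidden';       // clip whatever doesn't fit
el.style.textOverflow = 'ellipsis'; // show … where the text is clipped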
Other than the suggestion by Breton, another possibility to speed up your algorithm would be to use a binary search on the text length. Currently you are decrementing the length one character at a time, which is O(N) in the length of the string; a binary search is O(log N).
Roughly speaking, something like this:
function constrain(text, ideal_width) {
  ...
  var temp_item = $('.temp_item');
  var span_temp_item = $('span.temp_item');
  var text_len_lower = 0;
  var text_len_higher = smaller_text.length;
  while (true) {
    if (item_width > ideal) {
      // make smaller, to the mean of "lower" and this
      text_len_higher = smaller_text.length;
      smaller_text = text.substr(0,
        (smaller_text.length + text_len_lower) / 2);
    } else {
      if (smaller_text.length >= text_len_higher) break;
      // make larger, to the mean of "higher" and this
      text_len_lower = smaller_text.length;
      smaller_text = text.substr(0,
        (smaller_text.length + text_len_higher) / 2);
    }
    temp_item.html(smaller_text);
    item_width = span_temp_item.width();
  }
  ...
}
One thing to note is that each time you add something to the DOM, or change the html in a node, the page has to redraw itself, which is an expensive operation. Moving any HTML updates outside of a loop might help speed things up quite a bit.
As others have mentioned, you could move the calls to $() outside the loop. You can create a reference to the element, then just call the methods on it within the loop, as 1800 INFORMATION mentioned.
If you use Firefox with the Firebug plugin, there's a great way of profiling the code to see what's taking the longest time. Just click profile under the first tab, do your action, then click profile again. It'll show a table with the time it took for each part of your code. Chances are you'll see a lot of things in the list that are in your js framework library; but you can isolate that as well with a little trial and error.
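You can also start and stop the profiler from code, which keeps the measurement focused on the suspect function. A small sketch (the arguments to constrain() here are placeholders):

console.profile('constrain');            // start Firebug's/DevTools' profiler
constrain('some long text to fit', 200); // the call you want to measure
console.profileEnd('constrain');         // stop and show the report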