Understanding a pattern in Chrome DevTools Memory Performance Graph - javascript

I am trying to understand how to prevent memory leaks when using timers. I believe the memory graph in the Chrome DevTools Performance tab (I am still learning about this feature) is showing me a bad pattern in terms of memory management: a "sawtooth" pattern each time a timer fires.
I have a simple test case (1. right click 'timer-adder.html', 2. click 'Preview' to get a preview of the loaded HTML) that involves using a constructor function to create objects that work as timers; in other words, for each update the DOM is changed inside a setInterval callback.
//...
start: function (initialTime, prefixedId) {
    let $display = document.getElementById(prefixedId + '-display');
    if ($display.style.display === 'none') { $display.style.display = 'block'; }
    $display.textContent = TimerHandler.cycle(initialTime);
    $display = null;
    return setInterval(function () {
        this.initialTime--;
        (this.initialTime === 0 && this.pause());
        document.getElementById(prefixedId + '-display').textContent = TimerHandler.cycle(this.initialTime);
    }.bind(this), 1000);
},
//...
Attempts to minimize leakage, although not directly related to what I believe to be the problem at hand:
assigning $display to null to make it collectible (garbage collection);
clearing the interval (see the sketch after this list).
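For reference, a minimal sketch of how the interval gets cleared; the real pause() referenced in the code above is not shown, so this is only an illustrative guess that assumes the object keeps the id returned by start() in an intervalId property:

pause: function () {
    if (this.intervalId !== null) {
        clearInterval(this.intervalId);
        this.intervalId = null; // nothing left ticking, nothing kept alive
    }
},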
What I identify as a bad pattern:
(Screenshot of the memory graph: view post on imgur.com, zoomable)
Would fair memory usage translate to DevTools showing a horizontal line? What would be a safer approach to this side effect? Because, as the test stands right now, I think that in the long run multiple active timers would overload memory and thus result in a noticeable decrease in performance.
PS: As a newcomer, I hope this is a fair presentation of the problem. Thank you.

As you can see from the last part, the Garbage Collector was able to free that memory. Thus there is no memory leak.
Garbage Collection is a complicated task, that's why V8 tries to minimize the number of stop-the-world collections. Or in other words: it only cleans up memory if it needs to.
In your case it doesn't. There is still enough space available.
If (a noticeable amount of) memory persists across GC calls, then you do have a memory leak.
assigning $display to null to make it collectible (garbage collection).
That's just unnecessary. $display is not accessed in the inner function, therefore it is eligible for garbage collection once start has executed. Reassigning it doesn't change that.
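A stripped-down sketch of that point (simplified from the question's code, not the original): the inner callback never references $display, so the closure does not keep it alive.

function start(prefixedId) {
    // local reference, only used during setup
    let $display = document.getElementById(prefixedId + '-display');
    $display.textContent = '0';
    // the callback below never mentions $display, so after start() returns
    // nothing here keeps a reference to the element
    return setInterval(function () {
        document.getElementById(prefixedId + '-display').textContent = Date.now();
    }, 1000);
}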

Related

Does JavaScript run out of timeout IDs?

Surprisingly, I cannot find the answer to this question anywhere on the web.
The documentation states that setTimeout and setInterval share the same pool of IDs, and that an ID will never repeat. If that is the case, then they must eventually run out, because there is a maximum number the computer can handle. What happens then? Can't you use timeouts anymore?
TL;DR;
It depends on the browser's engine.
In Blink and Webkit:
The maximum number of concurrent timers is 2^31-1.
If you try to use more, your browser will likely freeze due to an endless loop.
Official specification
From the W3C docs:
The setTimeout() method must run the following steps:
Let handle be a user-agent-defined integer that is greater than zero that will identify the timeout to be set by this call.
Add an entry to the list of active timeouts for handle.
[...]
Also:
Each object that implements the WindowTimers interface has a list of active timeouts and a list of active intervals. Each entry in these lists is identified by a number, which must be unique within its list for the lifetime of the object that implements the WindowTimers interface.
Note: while the W3C mentions two lists, the WHATWG spec establishes that setTimeout and setInterval share a common list of active timers. That means that you can use clearInterval() to remove a timer created by setTimeout() and vice versa.
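A quick illustration of that interchangeability (this follows directly from the note above; the handle lives in one shared list):

var id = setTimeout(function () { console.log('never runs'); }, 1000);
clearInterval(id); // allowed: it clears the timeout created by setTimeout()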
Basically, each user agent has freedom to implement the handle Id as they please, with the only requirement to be an integer unique for each object; you can get as many answers as browser implementations.
Let's see, for example, what Blink is doing.
Blink implementation
Previous note: it's not such an easy task to find the actual source code of Blink. It belongs to the Chromium codebase, which is mirrored on GitHub. I will link to GitHub (its current latest tag: 72.0.3598.1) because it has better tools for navigating the code. Three years ago, they were pushing commits to chromium/blink/. Nowadays, active development is on chromium/third_party/WebKit, but there is a discussion going on about a new migration.
In Blink (and in WebKit, which obviously has a very similar codebase), the component responsible for maintaining the aforementioned list of active timers is the DOMTimerCoordinator belonging to each ExecutionContext.
// Maintains a set of DOMTimers for a given page or
// worker. DOMTimerCoordinator assigns IDs to timers; these IDs are
// the ones returned to web authors from setTimeout or setInterval. It
// also tracks recursive creation or iterative scheduling of timers,
// which is used as a signal for throttling repetitive timers.
class DOMTimerCoordinator {
The DOMTimerCoordinator stores the timers in the blink::HeapHashMap (alias TimeoutMap) collection timers_, whose key is (meeting the spec) of int type:
using TimeoutMap = HeapHashMap<int, Member<DOMTimer>>;
TimeoutMap timers_;
That answers your first question (in the context of Blink): the maximum number of active timers for each context is 2^31-1; much lower than the JavaScript MAX_SAFE_INTEGER (2^53-1) that you mentioned, but still more than enough for normal use cases.
For your second question, "What happens then, you can't use timeouts anymore?", I have so far just a partial answer.
New timers are created by DOMTimerCoordinator::InstallNewTimeout(). It calls the private member function NextID() to retrieve an available integer key and DOMTimer::Create for the actual creation of the timer object. Then, it inserts the new timer and the corresponding key into timers_.
int timeout_id = NextID();
timers_.insert(timeout_id, DOMTimer::Create(context, action, timeout,
                                            single_shot, timeout_id));
NextID() gets the next id in a circular sequence from 1 to 2^31-1:
int DOMTimerCoordinator::NextID() {
  while (true) {
    ++circular_sequential_id_;
    if (circular_sequential_id_ <= 0)
      circular_sequential_id_ = 1;
    if (!timers_.Contains(circular_sequential_id_))
      return circular_sequential_id_;
  }
}
It increments the value of circular_sequential_id_ by 1, or sets it to 1 if it goes beyond the upper limit (although INT_MAX+1 invokes undefined behavior, most C implementations wrap around to INT_MIN).
So, when the DOMTimerCoordinator runs out of IDs, it tries again from 1 upwards until it finds a free one.
But what happens if they are all in use? What prevents NextID() from entering an endless loop? Nothing, it seems. Likely, Blink developers coded NextID() under the assumption that there will never be 2^31-1 timers concurrently. It makes sense; for every byte returned by DOMTimer::Create(), you would need roughly 2 GB of RAM to store timers_ if it is full (2^31 timers at one byte each), and it can add up to terabytes if you store long callbacks. Let alone the time needed to create them.
Anyway, it looks surprising that no guard against an endless loop has been implemented, so I have contacted Blink developers, but so far I have no response. I will update my answer if they reply.

Whack-A-Mole game with huge bug! Can I get some help fixing it?

I am writing a Whack-A-Mole game for class using HTML5, CSS3 and JavaScript. I have run into a very interesting bug where, at seemingly random intervals, my moles will stop changing their "onBoard" variables and, as a result, will stop being assigned to the board. Something similar has also happened with the holes, but not as often in my testing. All of this is completely independent of user interaction.
You guys and gals are my absolute last hope before I scrap the project and start completely from scratch. This has frustrated me to no end. Here is the Codepen and my github if you prefer to have the images.
Since Codepen links apparently require accompanying code, here is the function where I believe the problem is occurring.
// Run the game
function run() {
    var interval = (Math.floor(Math.random() * 7) * 1000);
    if (firstRound) {
        renderHole(mole(), hole(), lifeSpan());
        firstRound = false;
    }
    setTimeout(function() {
        renderHole(mole(), hole(), lifeSpan());
        run();
    }, interval);
}
What I believe is happening is this. The function runs at random intervals, between 0-6 seconds. If the function runs too quickly, the data that is passed to my renderHole() function gets overwritten with the new data, thus causing the previous hole and mole to never be taken off the board (variable wise at least).
EDIT: It turns out that my issue came from my not having returns on my recursive function calls. Having come from a different language, I was not aware that, in JavaScript, functions return "undefined" if nothing else is indicated. I am, however, marking GameAlchemist's answer as the correct one due to the fact that my original code was convoluted and confusing, as well as redundant in places. Thank you all for your help!
You have made some design mistakes here and there in your code that, one after another, make the code hard to read and follow, and quite impossible to debug.
The mole() function might return a mole... or not... or create a timeout to call itself later. What will be done with the result when mole calls itself again? Nothing, so it will just be marked as onBoard, never to be seen again.
--->>> Have a clear definition and a single responsibility for mole(): for instance 'returns an available non-displayed mole character or null'. And that's all, no counting, no marking of the objects, just KISS (Keep It Simple S...): it should always return a value and never trigger a timeout.
Quite the same goes for hole(): return a free hole or null, no marking, no timeout set.
render should be simplified: get a mole, get a hole; if either couldn't be found, bye bye. If a mole+hole pair was found, just set up the new mole/hole couple + event handler (in a separate function). Your main run function will keep trying again and again to spawn moles. A sketch of this structure follows.
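A minimal sketch of what that could look like (illustrative only; the moles / holes arrays, the onBoard / occupied flags and the lifeSpan argument are assumptions modeled on the question's code, not the original implementation):

// return an available, non-displayed mole, or null; no counting, no timers here
function mole() {
    for (var i = 0; i < moles.length; i++) {
        if (!moles[i].onBoard) return moles[i];
    }
    return null;
}

// return a free hole, or null
function hole() {
    for (var i = 0; i < holes.length; i++) {
        if (!holes[i].occupied) return holes[i];
    }
    return null;
}

// pair them up; if either is missing, simply skip this round
function renderHole(m, h, lifeSpan) {
    if (!m || !h) return;
    m.onBoard = true;
    h.occupied = true;
    // ...display the mole in the hole and attach the click handler here...
    setTimeout(function () {
        // free both again when the mole's life span ends
        m.onBoard = false;
        h.occupied = false;
    }, lifeSpan);
}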

Why is ReactJS not performant like raw JS for simple, stupid proof of concept?

Years ago, I heard about a nice 404 page and implemented a copy.
In working with ReactJS, the same idea is intended to be implemented, but it is slow and jerky in its motion, and after a while Chrome gives it an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond merely making the page unresponsive and brings the computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging? Am I making a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten response from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work to create and render an element, and do a single document.getElementById". Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of ReactJS videos I've seen is squarely in the spirit of "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient
var update = function() {
    React.render(React.createElement(Pragmatometer, null),
        document.getElementById('main'));
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100ms, effectively bringing the computer to its knees and lower.
At the very least, I'd recommend caching as much as you can outside of the interval callback. Also, there is no need to call React.render multiple times. Once it is rendered into the DOM, use setProps or forceUpdate to cause it to render changes.
Here's an example of what I mean (note that React.render returns the mounted component instance, which is what exposes forceUpdate):
var mainComponent = React.render(
    React.createElement(Pragmatometer, null),
    document.getElementById('main')
);
var update = function() {
    mainComponent.forceUpdate();
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
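For instance, a minimal sketch of what that could look like, using the React.createClass API of that era (the component shape and names here are illustrative assumptions, not taken from the original code):

var Scratchpad = React.createClass({
    componentDidMount: function () {
        // start the periodic work only once the component is in the DOM
        this.updateInterval = setInterval(this.tick, 100);
    },
    componentWillUnmount: function () {
        // clean up so the timer never outlives the component
        clearInterval(this.updateInterval);
    },
    tick: function () {
        this.forceUpdate();
    },
    render: function () {
        return React.createElement('textarea', { id: 'scratchpad' });
    }
});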
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout with the callback setting up the next setTimeout, like this
var update = function() {
    // do some stuff
    // update is done to setup the next timeout
    setTimeout(update, 100);
};
setTimeout(update, 100);

Javascript setinterval() lag in Firefox?

I have written this code to make seconds (with deciseconds & centiseconds) count up.
You've wasted time <span id="alltime">0.00</span> seconds.
<script type="text/javascript">
function zeroPad(num, count) {
    var numZeropad = num + '';
    while (numZeropad.length < count) { numZeropad = "0" + numZeropad; }
    return numZeropad;
}
function counttwo() {
    tall = document.getElementById('alltime').innerHTML;
    if (parseFloat(tall) < 1.00) { tnew2 = tall.replace('0.0', '').replace('0.', ''); }
    else { tnew2 = tall.replace('.', ''); }
    tnum = parseInt(tnew2) + 1;
    // check if we have to add a zero
    if (tnum >= 100) { tstr1 = tnum + ''; }
    else { tstr1 = zeroPad(tnum, 3); }
    tlast = tstr1.substr(0, tstr1.length - 2) + '.' + tstr1.substr(tstr1.length - 2);
    document.getElementById("alltime").innerHTML = tlast;
}
var inttwo = setInterval("counttwo()", 10);
</script>
This is placed in an HTML document and run.
It works well, but when I use Firefox 4 and run the code, it seems like it LAGS a bit (stops briefly before counting up) when it's on some numbers (randomly, like 12.20 or 4.43). I've tried changing "counttwo()" to counttwo but that doesn't help.
I have told some of my friends to run it on Firefox 4 too. They said it doesn't lag at all. Is this caused by my computer? My Firefox? Or something else?
Thanks in advance!
PS. Fiddle here: http://jsfiddle.net/XvkGy/5/ Mirror: http://bit.ly/hjVtXS
When you use setInterval or setTimeout, the time interval is not exact, for several reasons: it depends on other JavaScript running, the browser, the processor, etc. You have to take a tolerance of roughly ±15ms for granted, afaik. See also ...
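A quick way to observe that jitter yourself (a throwaway sketch, not from the original post): log how much wall-clock time actually passes between ticks of a nominal 10ms interval.

var last = Date.now();
var tick = setInterval(function () {
    var now = Date.now();
    // rarely exactly 10; it typically fluctuates by several milliseconds
    console.log('elapsed since last tick:', now - last, 'ms');
    last = now;
}, 10);
// stop after two seconds so the console doesn't flood
setTimeout(function () { clearInterval(tick); }, 2000);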
That's a lot of counting, so on some computers, yes, it might lag (if it's a prehistoric one, or the user has his processor really busy with something). Also, if I'm right, that thing won't work well with Chrome's V8, since that script would freeze if you switched tabs and resume executing only when you return to that tab.
If you're just seeing pauses every so often, you're probably seeing garbage collection or cycle collection pauses.
You can test this by toggling your javascript.options.mem.log preference to true in about:config and then watching the error console's "Messages" tab as your script runs. If the GC/CC messages are correlated with your pauses, then they're the explanation for what you see.
As for why you see it but others don't... do you see the problem if you disable all your extensions?
The problem with setInterval is that it can eventually lead to a back-up. This happens because the JavaScript engine tries to execute the function on the interval (in your case, 10ms), but if ever that execution takes longer than 10ms, the JS engine starts trying to execute the next interval before the current one stops (which really just means it queues it up to run as soon as the previous callback finishes).
Since JavaScript executes single-threaded (with the exception of web workers in HTML 5), this can lead to pauses in your UI or DOM updates because it is continuously processing JavaScript callbacks from your setInterval. In worst case scenarios, the whole page can become permanently unresponsive because your stack of uncompleted setInterval executions gets longer and longer, never fully finishing.
With a few exceptions, it is generally considered a safer bet to use setTimeout (and invoking the setTimeout again after execution of the callback) instead of setInterval. With setTimeout, you can ensure that one and only one timeout is ever queued up. And since the timers are only approximate anyway (just because you specify 10ms doesn't mean it will happen at exactly 10ms), you don't typically gain anything from using setInterval over setTimeout.
An example using setTimeout:
var count = function(){
// do something
// queue up execution once again
setTimeout(count, 10);
};
count();
One reason why you may see pauses on some browsers, and not others, is because not all JavaScript engines are created equal :). Some are faster than others, and as such, less likely to end up with a setInterval backup.
Different browsers use different JavaScript engines, so it's possible that this code just finds a spot where Firefox's JägerMonkey scripting engine has some problems. There don't seem to be any obvious inefficiencies in the counting itself.
If it's working on your friends' installs of FF4, then it's probably just an isolated problem for you, and there isn't much you'll be able to do by changing the code.

Questions about memory leak JS (try/catch, images in b64, ajax, unset variable, removeChild)

OK, I have a list of a few things that make me think they are the reason why my add-on is using a lot of memory.
1 - Can using try/catch increase memory usage?
Example:
try {
    if (!condition) throw "Message";
    //some code
} catch (ex) {}
If this is the problem, I was thinking of using:
(function () {
    if (!condition) return;
})();
2 - Can using base64-encoded images increase memory usage too?
<img src="data:image/png;base64,[...]==" />
Or there is no problem?
3 - I use a function to handle the ajax result, so I can use responseXML...
if (o.readyState != 4) return;
var newdoc = document.implementation.createDocument(null, null, null);
var newhtml = document.createElement('div');
newhtml.innerHTML = o.responseText.replace(/script/ig, "");
newdoc.appendChild(newhtml);
newdoc.getElementById('...');
I cannot use jQuery; this is the easiest way that I found to use responseXML, but it may be causing a memory leak too.
4 - Does unsetting variables help at all, or is it unhelpful?
var a = "something";
if (a == "something") {
    // ...
}
// I'm not going to use the variable "a" anymore, so..
a = undefined; // Now I unset the variable
5 - If you add an event to an element, or you use appendChild, this increases memory.
element.appendChild(newelement);
If you change the page, it looks like the memory stays the same. If I use
window.addEventListener("unload", function () {
    element.removeChild(newelement);
});
does this help too?
Thanks.
Sorry for my English.
On the 4th and 5th points:
The browser really does increase memory for every additional event/element, but it doesn't decrease it immediately after some events/elements get deleted.
In garbage collection, every browser has its own "Dao" :)
IE 6-8 don't clear memory at all (until you close them or press F5).
Chrome takes up to 5 minutes to run the garbage collector, or runs it when the garbage size reaches some fixed amount.
Opera "eats" more memory than any other browser, but frees it immediately after it becomes unneeded.
The fastest is FF, which takes nearly 1 minute to clear memory of unused data.
