I have a calendar that I've built, and clicking a day on the calendar runs a function. You can move from month to month, and the calendar generates new months as it goes. Since every day on the calendar, shown or not, is bound to an event via the class shared by all the "days," I'm worried about the number of bindings building up into the thousands.
//after a new month is generated, the days are rebound
TDs.off("mousedown");
TDs = $(".tableDay");
TDs.on("mousedown", TDmouseDown);
While I was learning C#/Monogame, I learned about functions that repeat very quickly to update game elements. So, I was wondering if JavaScript works in the same manner. Does the JavaScript engine repeatedly check every single event binding to see if it has occurred? So a structure something like this:
function repeat60timesPerSecond(){
    if(element1isClicked){ /* ... */ }
    if(element2isClicked){ /* ... */ }
    if(element3isClicked){ /* ... */ }
}
Or is JavaScript somehow able to actually just trigger the function when the event occurs?
In short: Do JavaScript bindings take up memory solely by existing?
My (inconclusive) research so far:
I have made several attempts at answering this question on my own. First, I wrote a jsperf test. Apart from the obvious consistency problems with my test, it didn't actually test this question: it primarily tested whether unbinding nothing is faster than unbinding something, rather than how much memory the bindings themselves take up after creation. I couldn't come up with a way to test this using that service.
I then googled around a bit and found quite a bit of interesting material, but nothing answering this question in clear terms. I did come across this answer, which suggests using a single event binding on a container element in a similar situation.
UPDATE:
Right after posting this, I thought of a possible way to test this with local JS:
function func(){
    console.log("test");
}
for(var x = 1; x < 1000; x++){
    $('#parent').append("<div id='x" + x + "' class='child'></div>");
    $("#x" + x).on("mousedown", func);
}
console.time("timer");
for(i=1;i<1000000;i++){
q = Math.sqrt(i);
if(q % 1 == 0){
q = 3;
}
}
console.timeEnd("timer");
After playing around with this (changing what the for loop does, changing the number of iterations of both loops, etc.), it appears that event bindings take up a VERY small amount of memory.
Yes, they all take up memory, but not very much. There's just one function object. Each element has a pointer to that object, so it's probably something like 4 bytes per element.
As Felix Kling suggested, you can reduce this by using delegation, as there is just a single binding on a container element. However, the memory savings are offset by increased time overhead: the handler is invoked every time the event occurs anywhere in the container, and it has to test whether the target matches the selector to which the event was delegated.
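For reference, a minimal sketch of what such a delegated binding could look like with jQuery (the #calendar container id is an assumption, not from the original code):

// One binding on the container instead of one per day cell. jQuery runs
// TDmouseDown only when the mousedown target matches ".tableDay".
$("#calendar").on("mousedown", ".tableDay", TDmouseDown);

With this approach, newly generated month cells are covered automatically, so no rebinding is needed after a new month is created.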
Related
I am trying to understand how to prevent memory leaks when using timers. I believe the memory graph in the Chrome DevTools Performance tab (I am still learning about this feature) is showing me a bad memory-management pattern: a "sawtooth" pattern each time a timer fires.
I have a simple test case (1. right-click 'timer-adder.html', 2. click 'Preview' to get a preview of the loaded HTML) that uses a constructor function to create objects that work as timers; in other words, on each update the DOM is changed inside a setInterval callback.
//...
start: function (initialTime, prefixedId) {
let $display = document.getElementById(prefixedId + '-display');
if ($display.style.display === 'none') { $display.style.display = 'block'; }
$display.textContent = TimerHandler.cycle(initialTime);
$display = null;
return setInterval(function () {
this.initialTime--;
(this.initialTime === 0 && this.pause());
document.getElementById(prefixedId + '-display').textContent = TimerHandler.cycle(this.initialTime);
}.bind(this), 1000);
},
//...
Attempts to minimize leakage, although not directly related to what I believe is the problem at hand:
- assigning $display to null to make it collectible (garbage collection);
- clearing the interval.
What I identify as a bad pattern:
[Screenshot of the DevTools memory graph: see the zoomable imgur post linked in the original question.]
Would fair memory usage translate to DevTools showing a horizontal line? What would be a safer approach to this side effect? As the test stands right now, I think that in the long run multiple active timers would overload memory and thus cause a noticeable decrease in performance.
PS: As a newcomer, I hope this is a fair presentation of the problem. Thank you.
As you can see from the last part, the garbage collector was able to free that memory. Thus there is no memory leak.
Garbage collection is a complicated task; that's why V8 tries to minimize the number of stop-the-world collections. In other words: it only cleans up memory if it needs to.
In your case it doesn't. There is still enough space available.
If (a noticeable amount of) memory persists across GC calls, then you do have a memory leak.
assigning $display to null to make it collectible (garbage collection).
That's just unnecessary. $display is not accessed in the inner function, therefore it is eligible for garbage collection once the outer function has executed. Reassigning it doesn't change that.
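What does matter on a long-lived page is that every interval you create can actually be cleared. A minimal sketch of that pattern, assuming a timer object like the one in the question (the variable names here are assumptions):

// start() returns the handle from setInterval; keep it so the timer can
// be stopped. An uncleared interval keeps its callback, and everything
// the callback closes over, reachable, and that would be a real leak.
var handle = timer.start(timer.initialTime, timer.prefixedId);
// ... later, e.g. inside pause():
clearInterval(handle); // the callback becomes collectible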
Surprisingly, I cannot find the answer to this question anywhere on the web.
The documentation states that setTimeout and setInterval share the same pool of IDs, and that an ID will never repeat. If that is the case, they must eventually run out, since there is a maximum number the computer can handle. What happens then? Can you not use timeouts anymore?
TL;DR;
It depends on the browser's engine.
In Blink and Webkit:
The maximum number of concurrent timers is 2^31 - 1.
If you try to use more, your browser will likely freeze due to an endless loop.
Official specification
From the W3C docs:
The setTimeout() method must run the following steps:
Let handle be a user-agent-defined integer that is greater than zero that will identify the timeout to be set by this call.
Add an entry to the list of active timeouts for handle.
[...]
Also:
Each object that implements the WindowTimers interface has a list of active timeouts and a list of active intervals. Each entry in these lists is identified by a number, which must be unique within its list for the lifetime of the object that implements the WindowTimers interface.
Note: while the W3C mentions two lists, the WHATWG spec establishes that setTimeout and setInterval share a common list of active timers. That means that you can use clearInterval() to remove a timer created by setTimeout() and vice versa.
Basically, each user agent is free to implement the handle ID as it pleases, with the only requirement being that it is an integer unique within each object; you can get as many answers as there are browser implementations.
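Incidentally, the shared list mentioned in the note above is easy to observe:

// setTimeout and setInterval draw handles from one shared list (per the
// WHATWG spec), so a timeout handle can be cancelled with clearInterval.
var id = setTimeout(function () {
    console.log("this never runs");
}, 1000);
clearInterval(id); // cancels the pending timeout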
Let's see, for example, what Blink is doing.
Blink implementation
Previous note: it's not such an easy task to find the actual source code of Blink. It belongs to the Chromium codebase, which is mirrored on GitHub. I will link to GitHub (its current latest tag: 72.0.3598.1) because of its better tools for navigating the code. Three years ago, commits were pushed to chromium/blink/. Nowadays, active development is on chromium/third_party/WebKit, but there is an ongoing discussion about a new migration.
In Blink (and in WebKit, which obviously has a very similar codebase), the component responsible for maintaining the aforementioned list of active timers is the DOMTimerCoordinator belonging to each ExecutionContext.
// Maintains a set of DOMTimers for a given page or
// worker. DOMTimerCoordinator assigns IDs to timers; these IDs are
// the ones returned to web authors from setTimeout or setInterval. It
// also tracks recursive creation or iterative scheduling of timers,
// which is used as a signal for throttling repetitive timers.
class DOMTimerCoordinator {
The DOMTimerCoordinator stores the timers in the blink::HeapHashMap (alias TimeoutMap) collection timers_, whose key is (meeting the spec) of int type:
using TimeoutMap = HeapHashMap<int, Member<DOMTimer>>;
TimeoutMap timers_;
That answers your first question (in the context of Blink): the maximum number of active timers for each context is 2^31 - 1, much lower than the JavaScript MAX_SAFE_INTEGER (2^53 - 1) that you mentioned, but still more than enough for normal use cases.
For your second question, "What happens then, you can't use timeouts anymore?", so far I have only a partial answer.
New timers are created by DOMTimerCoordinator::InstallNewTimeout(). It calls the private member function NextID() to retrieve an available integer key and DOMTimer::Create for the actual creation of the timer object. Then, it inserts the new timer and the corresponding key into timers_.
int timeout_id = NextID();
timers_.insert(timeout_id, DOMTimer::Create(context, action, timeout,
single_shot, timeout_id));
NextID() gets the next ID in a circular sequence from 1 to 2^31 - 1:
int DOMTimerCoordinator::NextID() {
while (true) {
++circular_sequential_id_;
if (circular_sequential_id_ <= 0)
circular_sequential_id_ = 1;
if (!timers_.Contains(circular_sequential_id_))
return circular_sequential_id_;
}
}
It increments the value of circular_sequential_id_ by 1, or sets it to 1 if it goes beyond the upper limit (although INT_MAX + 1 invokes undefined behavior, most C++ implementations wrap around to INT_MIN).
So, when the DOMTimerCoordinator runs out of IDs, it tries again from 1 upward until it finds a free one.
But what happens if they are all in use? What prevents NextID() from entering an endless loop? Nothing, it seems. Likely, Blink developers coded NextID() under the assumption that there will never be 2^31 - 1 timers concurrently. It makes sense: for every byte returned by DOMTimer::Create(), you would need roughly 2 GB of RAM to store timers_ if it were full. It could add up to terabytes if you store long callbacks, to say nothing of the time needed to create them all.
Anyway, it looks surprising that no guard against an endless loop has been implemented, so I have contacted the Blink developers, but so far I have had no response. I will update my answer if they reply.
I am writing a Whack-A-Mole game for class using HTML5, CSS3 and JavaScript. I have run into a very interesting bug where, at seemingly random intervals, my moles will stop changing their "onBoard" variables and, as a result, will stop being assigned to the board. Something similar has also happened with the holes, though not as often in my testing. All of this is completely independent of user interaction.
You guys and gals are my absolute last hope before I scrap the project and start completely from scratch. This has frustrated me to no end. Here is the Codepen and my github if you prefer to have the images.
Since Codepen links apparently require accompanying code, here is the function where I believe the problem is occurring.
// Run the game
function run() {
var interval = (Math.floor(Math.random() * 7) * 1000);
if(firstRound) {
renderHole(mole(), hole(), lifeSpan());
firstRound = false;
}
setTimeout(function() {
renderHole(mole(), hole(), lifeSpan());
run();
}, interval);
}
What I believe is happening is this: the function runs at random intervals, between 0 and 6 seconds. If the function runs too quickly, the data passed to my renderHole() function gets overwritten with the new data, causing the previous hole and mole to never be taken off the board (variable-wise, at least).
EDIT: It turns out that my issue came from my not having returns on my recursive function calls. Having come from a different language, I was not aware that, in JavaScript, functions return "undefined" if nothing else is indicated. I am, however, marking GameAlchemist's answer as the correct one due to the fact that my original code was convoluted and confusing, as well as redundant in places. Thank you all for your help!
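For readers hitting the same pitfall, a minimal illustration (the function and flag names are hypothetical, not the game's actual code):

// Bug: the recursive result is computed but never returned, so the
// caller receives undefined whenever the first pick was unavailable.
function pickFree(pool) {
    var item = pool[Math.floor(Math.random() * pool.length)];
    if (item.onBoard) {
        pickFree(pool);           // should be: return pickFree(pool);
    } else {
        return item;
    }
}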
You have made some design mistakes here and there in your code that, one after another, make the code hard to read and follow, and quite impossible to debug.
The mole() function might return a mole... or not... or create a timeout to call itself later. What will be done with the result when mole calls itself again? Nothing, so it will just be marked as onBoard, never to be seen again.
--->>> Have a clear definition and a single responsibility for mole(): for instance, 'returns an available non-displayed mole character, or null'. And that's all: no count, no marking of the objects, just KISS (Keep It Simple, S...): it should always return a value and never trigger a timeout.
Quite the same goes for hole(): return a free hole or null; no marking, no timeout set.
render should be simplified: get a mole, get a hole; if either can't be found, bail out. If a mole and hole were found, just set up the new mole/hole couple plus its event handler (in a separate function), as in the sketch below. Your main run function will ensure moles are spawned again and again.
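A rough sketch of that structure (all names are illustrative, not the original game's code):

// Single responsibility: return a free mole, or null. No timers, no marking.
function freeMole() {
    for (var i = 0; i < moles.length; i++) {
        if (!moles[i].onBoard) { return moles[i]; }
    }
    return null;
}

// Same contract for holes.
function freeHole() {
    for (var i = 0; i < holes.length; i++) {
        if (!holes[i].occupied) { return holes[i]; }
    }
    return null;
}

function run() {
    var mole = freeMole();
    var hole = freeHole();
    if (mole && hole) {
        renderHole(mole, hole, lifeSpan()); // marking happens here, in one place
    }
    setTimeout(run, Math.floor(Math.random() * 7) * 1000); // keep trying
}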
Years ago, I heard about a nice 404 page and implemented a copy.
Working with ReactJS, I intended to implement the same idea, but it is slow and jerky in its motion, and after a while Chrome gives it an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond an unresponsive page to bringing a computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging down? Am I creating a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
#EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work" to create and render an element and do a single document.getElementById. Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of the ReactJS videos I've seen squarely endorse "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient
var update = function() {
React.render(React.createElement(Pragmatometer, null),
document.getElementById('main'));
for(var instance in CKEDITOR.instances) {
CKEDITOR.instances[instance].updateElement();
}
save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100ms, effectively bringing the computer to its knees and lower.
At the very least, I'd recommend caching as much as you can outside of the interval callback. Also, there is no need to call React.render multiple times. Once it is rendered into the DOM, use setProps or forceUpdate to make it render changes.
Here's an example of what I mean:
// Note: React.render returns the component instance (which has
// forceUpdate); React.createElement alone only returns an element.
var mainComponent = React.render(
    React.createElement(Pragmatometer, null),
    document.getElementById('main'));
var update = function() {
mainComponent.forceUpdate();
for(var instance in CKEDITOR.instances) {
CKEDITOR.instances[instance].updateElement();
}
save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
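A sketch of that, using the React.createClass API of the time (the component and method names are assumptions):

var Scratchpad = React.createClass({
    componentDidMount: function () {
        // The component owns its timer and can clean up after itself.
        this.updateInterval = setInterval(this.handleTick, 100);
    },
    componentWillUnmount: function () {
        clearInterval(this.updateInterval); // no leaked callbacks
    },
    handleTick: function () {
        // periodic work goes here
    },
    render: function () {
        return React.createElement('div', null);
    }
});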
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout with the callback setting up the next setTimeout, like this
var update = function() {
// do some stuff
// update is done to setup the next timeout
setTimeout(update, 100);
};
setTimeout(update, 100);
I am finally getting around to really implementing some jQuery solutions for my apps (which also seems to involve a crash course in JavaScript).
In studying examples of plugins, I ran across this code. I'm assuming the author created the zero-length timer to create some separation of the running code, so that the init function would finish quickly.
function hovertipInit() {
var hovertipConfig = {'attribute':'hovertip',
'showDelay': 300,
'hideDelay': 700};
var hovertipSelect = 'div.hovertip';
window.setTimeout(function() {
$(hovertipSelect).hovertipActivate(hovertipConfig,
targetSelectById,
hovertipPrepare,
hovertipTargetPrepare);
}, 0);
}
Is needing this type of separation common?
Is creating the zero-length timer still the best way to handle this situation, or is there a better way to handle it in jQuery?
Thanks,
Jim
Check out this article that explains when this is necessary.
Also check out this related question.
It is not very common, but is necessary when you need to escape the call stack of an event.
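A minimal illustration of what "escaping the call stack" means here:

console.log("first");
setTimeout(function () {
    // Even with a 0ms delay, this is queued behind the current call
    // stack and runs only after the running code has finished.
    console.log("third");
}, 0);
console.log("second");

Because the callback goes onto the event queue rather than running inline, a function like hovertipInit above can return immediately, and the heavier plugin setup happens on a later tick.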