Does JavaScript run out of timeout IDs?

Surprisingly, I cannot find the answer to this question anywhere on the web.
The documentation states that setTimeout and setInterval share the same pool of IDs, and that an ID will never repeat. If that is the case, they must eventually run out, because there is a maximum number the computer can handle. What happens then, you can't use timeouts anymore?

TL;DR:
It depends on the browser's engine.
In Blink and WebKit:
The maximum number of concurrent timers is 2³¹ - 1.
If you try to use more, your browser will likely freeze due to an endless loop.
Official specification
From the W3C docs:
The setTimeout() method must run the following steps:
Let handle be a user-agent-defined integer that is greater than zero that will identify the timeout to be set by this call.
Add an entry to the list of active timeouts for handle.
[...]
Also:
Each object that implements the WindowTimers interface has a list of active timeouts and a list of active intervals. Each entry in these lists is identified by a number, which must be unique within its list for the lifetime of the object that implements the WindowTimers interface.
Note: while the W3C mentions two lists, the WHATWG spec establishes that setTimeout and setInterval share a common list of active timers. That means that you can use clearInterval() to remove a timer created by setTimeout() and vice versa.
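For example, both of these pairings work, since the handles come from the same list (a console sketch; exact handle values vary by browser):
var t = setTimeout(function () { console.log('never runs'); }, 1000);
clearInterval(t); // cancels the timeout, despite the mismatched name

var i = setInterval(function () { console.log('never runs'); }, 1000);
clearTimeout(i); // cancels the interval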
Basically, each user agent is free to implement the handle ID as it pleases, the only requirement being that it is an integer unique within each object; so you can get as many answers as there are browser implementations.
Let's see, for example, what Blink is doing.
Blink implementation
A note before we start: it's not such an easy task to find the actual source code of Blink. It belongs to the Chromium codebase, which is mirrored on GitHub. I will link to GitHub (at its current latest tag: 72.0.3598.1) because of its better tools for navigating the code. Three years ago, commits were pushed to chromium/blink/; nowadays, active development is in chromium/third_party/WebKit, but there is an ongoing discussion about another migration.
In Blink (and in WebKit, which obviously has a very similar codebase), the component responsible for maintaining the aforementioned list of active timers is the DOMTimerCoordinator belonging to each ExecutionContext.
// Maintains a set of DOMTimers for a given page or
// worker. DOMTimerCoordinator assigns IDs to timers; these IDs are
// the ones returned to web authors from setTimeout or setInterval. It
// also tracks recursive creation or iterative scheduling of timers,
// which is used as a signal for throttling repetitive timers.
class DOMTimerCoordinator {
The DOMTimerCoordinator stores the timers in the blink::HeapHashMap (aliased as TimeoutMap) collection timers_, whose key is (meeting the spec) of int type:
using TimeoutMap = HeapHashMap<int, Member<DOMTimer>>;
TimeoutMap timers_;
That answers your first question (in the context of Blink): the maximum number of active timers for each context is 2³¹ - 1; much lower than JavaScript's MAX_SAFE_INTEGER (2⁵³ - 1) that you mentioned, but still more than enough for normal use cases.
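You can observe the handles yourself; in a Blink-based browser's console they typically come out as small consecutive integers (the exact values depend on how many timers the page has already created):
console.log(setTimeout(function () {}, 0));    // e.g. 1
console.log(setTimeout(function () {}, 0));    // e.g. 2
console.log(setInterval(function () {}, 500)); // e.g. 3: same sequence, shared list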
For your second question, "What happens then, you can't use timeouts anymore?", I have so far just a partial answer.
New timers are created by DOMTimerCoordinator::InstallNewTimeout(). It calls the private member function NextID() to retrieve an available integer key and DOMTimer::Create for the actual creation of the timer object. Then, it inserts the new timer and the corresponding key into timers_.
int timeout_id = NextID();
timers_.insert(timeout_id, DOMTimer::Create(context, action, timeout,
                                            single_shot, timeout_id));
NextID() gets the next ID in a circular sequence from 1 to 2³¹ - 1:
int DOMTimerCoordinator::NextID() {
  while (true) {
    ++circular_sequential_id_;
    if (circular_sequential_id_ <= 0)
      circular_sequential_id_ = 1;
    if (!timers_.Contains(circular_sequential_id_))
      return circular_sequential_id_;
  }
}
It increments circular_sequential_id_ by 1, or resets it to 1 if it goes beyond the upper limit (although INT_MAX + 1 invokes undefined behavior, most C++ implementations wrap around to INT_MIN, which is what the <= 0 check relies on).
So, when the DOMTimerCoordinator runs out of IDs, it starts again from 1 and counts up until it finds a free one.
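For readers who don't speak C++, here is a rough JavaScript model of the same allocation scheme (illustrative only; the | 0 mimics the signed 32-bit wrap-around that the C++ int exhibits in practice):
var timers = new Map();
var circularSequentialId = 0;

function nextId() {
    while (true) {
        // wraps to -2^31 after 2^31 - 1, like the C++ increment
        circularSequentialId = (circularSequentialId + 1) | 0;
        if (circularSequentialId <= 0)
            circularSequentialId = 1;
        if (!timers.has(circularSequentialId))
            return circularSequentialId; // spins forever if every ID is taken
    }
}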
But what happens if they are all in use? What prevents NextID() from entering an endless loop? Nothing, it seems. Likely, the Blink developers wrote NextID() under the assumption that there will never be 2³¹ - 1 concurrent timers. It makes sense: for every byte each entry takes, a full timers_ map would need about 2 GB of RAM, adding up to terabytes if you store long callbacks, to say nothing of the time needed to create them all.
Anyway, it is surprising that no guard against an endless loop has been implemented, so I have contacted the Blink developers, but so far I have had no response. I will update my answer if they reply.

Related

Understanding a pattern in Chrome DevTools Memory Performance Graph

I am trying to understand how to prevent memory leaks when using timers. I believe the memory graph in the Chrome DevTools Performance tab (I am still learning about this feature) is showing me a bad pattern in terms of memory management: a "sawtooth" pattern each time a timer is fired.
I have a simple test case (1. right-click 'timer-adder.html', 2. click 'Preview' to get a preview of the loaded HTML) that involves using a constructor function to create objects that work as timers; in other words, for each update, the DOM is changed inside a setInterval callback.
//...
start: function (initialTime, prefixedId) {
    let $display = document.getElementById(prefixedId + '-display');
    if ($display.style.display === 'none') { $display.style.display = 'block'; }
    $display.textContent = TimerHandler.cycle(initialTime);
    $display = null;
    return setInterval(function () {
        this.initialTime--;
        (this.initialTime === 0 && this.pause());
        document.getElementById(prefixedId + '-display').textContent = TimerHandler.cycle(this.initialTime);
    }.bind(this), 1000);
},
//...
Attempts to minimize leakage, although not directly related to what I believe to be the problem at hand:
assigning $display to null to make it collectible (garbage collection);
clearing the interval.
What I identify as a bad pattern:
[Image: DevTools memory graph showing the sawtooth pattern (imgur link, zoomable)]
Would fair memory usage translate to DevTools showing a horizontal line? What would be a safer approach to this side effect? As the test stands right now, I think that in the long run multiple active timers would overload memory and thus result in a noticeable decrease in performance.
PS: As a newcomer, I hope this is a fair presentation of the problem. Thank you.
As you can see from the last part, the garbage collector was able to free that memory, so there is no memory leak.
Garbage collection is a complicated task, which is why V8 tries to minimize the number of stop-the-world collections. In other words: it only cleans up memory if it needs to.
In your case it doesn't. There is still enough space available.
If (a noticeable amount of) memory persists across GC calls, then you do have a memory leak.
assigning $display to null to make it collectible (garbage collection).
That's just unnecessary. $display is not accessed in the inner function, therefore it is eligible for garbage collection once start executes. Reassigning it doesn't change that.
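A minimal sketch of the rule that answer relies on: engines such as V8 keep alive only the variables that a surviving closure actually references.
function start() {
    var big = new Array(1e6); // never referenced by the inner function: collectible
    var small = 42;           // referenced below: kept alive by the closure
    return setInterval(function () {
        console.log(small);   // only `small` is captured; `big = null` would change nothing
    }, 1000);
}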

Why is ReactJS not performant like raw JS for simple, stupid proof of concept?

Years ago, I heard about a nice 404 page and implemented a copy.
In working with ReactJS, the same idea is intended to be implemented, but it is slow and jerky in its motion, and after a while Chrome gives it an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond an unresponsive page to bringing a computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging? Am I making a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work to create and render an element, and do a single document.getElementById." Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of ReactJS videos I've seen is squarely in the spirit of "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient
var update = function() {
    React.render(React.createElement(Pragmatometer, null),
                 document.getElementById('main'));
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100ms, effectively bringing the computer to its knees and lower.
At the very least, I'd recommend caching as much as you can outside of the interval callback. Also, there is no need to call React.render multiple times. Once it has been rendered into the DOM, use setProps or forceUpdate to make it render changes.
Here's an example of what I mean:
var mainComponent = React.render(
    React.createElement(Pragmatometer, null),
    document.getElementById('main')); // React.render returns the mounted component instance
var update = function() {
    mainComponent.forceUpdate();
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whichever React component is rendering that stuff (the Scratchpad component?), for example as sketched below.
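A rough sketch of that idea, using the React API of the era (React.createClass and the 0.x lifecycle; the component name and the 100 ms interval are carried over from the question, the rendered markup is a placeholder):
var Scratchpad = React.createClass({
    componentDidMount: function () {
        // the component owns its timer, so it is cleared on unmount
        this.updateInterval = setInterval(this.tick, 100);
    },
    componentWillUnmount: function () {
        clearInterval(this.updateInterval);
    },
    tick: function () {
        this.forceUpdate();
    },
    render: function () {
        return React.createElement('textarea', { id: 'scratchpad' });
    }
});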
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout, with the callback setting up the next setTimeout, like this:
var update = function() {
    // do some stuff
    // once the work is done, schedule the next tick
    setTimeout(update, 100);
};
setTimeout(update, 100);

Do JavaScript bindings take up memory while not in use?

I have a calendar that I've built, and on clicking a day on the calendar, a function is run. You can go month to month on the calendar, and it generates months as it goes. Since every day on the calendar, shown or not, is bound to an event using the class shared by all the "days", I'm worried about the number of "bindings" building up into the thousands.
//after a new month is generated, the days are rebound
TDs.off("mousedown");
TDs = $(".tableDay");
TDs.on("mousedown", TDmouseDown);
While I was learning C#/MonoGame, I learned about functions that repeat very quickly to update game elements. So I was wondering whether JavaScript works in the same manner: does the JavaScript engine repeatedly check every single event binding to see if it has occurred? Something structured like this:
function repeat60timesPerSecond(){
    if (element1isClicked) { /* blah */ }
    if (element2isClicked) { /* blah */ }
    if (element3isClicked) { /* blah */ }
}
Or is JavaScript somehow able to actually just trigger the function when the event occurs?
In short: Do JavaScript bindings take up memory solely by existing?
My (inconclusive) research so far:
I have made several attempts at answering this question on my own. First, I did a jsPerf test. Apart from the obvious problems with consistency in my test, the test didn't actually address this question: it primarily tested whether unbinding nothing is faster than unbinding something, rather than how much memory the actual bindings take up after creation. I couldn't come up with a way to test the latter using that testing service.
I then googled around a bit, and found quite a bit of interesting stuff, but nothing directly answering this question in clear terms. I did come across this answer, which suggests using a single event binding on the event container in a similar situation.
UPDATE:
Right after posting this, I thought of a possible way to test this with local JS:
function func(){
    console.log("test");
}
for (var x = 1; x < 1000; x++) {
    $('#parent').append("<div id='x" + x + "' class='child'></div>");
    $("#x" + x).on("mousedown", func);
}
console.time("timer");
for (var i = 1; i < 1000000; i++) {
    var q = Math.sqrt(i);
    if (q % 1 == 0) {
        q = 3;
    }
}
console.timeEnd("timer");
After playing around with this (changing what the for loop does, changing the number of iterations in both for loops, etc.), it appears that event bindings take up a VERY small amount of memory.
Yes, they all take up memory, but not very much. There's just one function object. Each element has a pointer to that object, so it's probably something like 4 bytes per element.
As Felix King suggested, you can reduce this by using delegation, since there is just a single binding on a container element. However, the savings in memory are offset by increased time overhead: the handler is invoked every time the event occurs anywhere in the container, and it has to test whether the target matches the selector to which the event is delegated.
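For the calendar in the question, the delegated version would look something like this (a sketch, assuming the .tableDay cells live inside some container element, here a hypothetical #calendar):
// One binding on the container replaces a binding per cell, and it also
// covers day cells added later when new months are generated:
$('#calendar').on('mousedown', '.tableDay', TDmouseDown);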

Is there anything faster than setTimeout and requestAnimationFrame?

(I need a process.nextTick equivalent in the browser.)
I'm trying to get the most out of JavaScript performance, so I made a simple counter...
For one second, I make continuous calls to a function that just adds one to a variable.
The code: codepen.io/rafaelcastrocouto/pen/gDFxt
I got about 250 with setTimeout and 70 with requestAnimationFrame in Google Chrome on Win7.
I know requestAnimationFrame goes with screen refresh rate so, how can we make this faster?
PS: I'm aware of asm.js
Well, there's setImmediate(), which runs the code immediately, i.e. as you'd expect setTimeout(0) to behave.
The difference is that setTimeout(0) doesn't actually run immediately; setTimeout is "clamped" to a minimum wait time (4 ms, once timeouts are nested a few levels deep), which is why you're only getting a count of 250 in your test program. setImmediate() really does run immediately, so your counter test will be orders of magnitude higher using it.
However, you may want to check browser support for setImmediate; it's not available yet in all browsers. (You can use setTimeout(0) as a fallback, as sketched below, but then you're back to the minimum wait time it imposes.)
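A common feature-detection sketch for that fallback:
// Use setImmediate where it exists, otherwise fall back to setTimeout(0)
// and accept the clamped minimum delay:
var defer = window.setImmediate
    ? function (fn) { window.setImmediate(fn); }
    : function (fn) { window.setTimeout(fn, 0); };

defer(function () { console.log('runs asynchronously, as soon as possible'); });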
postMessage() is also an option and can achieve much the same results, although it's a more complex API, since it's intended for doing a lot more than just a simple call loop. Plus there are other considerations to think of when using it (see the linked MDN article for more).
The MDN site also mentions a polyfill library for setImmediate which uses postMessage and other techniques to add setImmediate into browsers that don't support it yet.
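The core of the postMessage trick looks roughly like this (a simplified sketch; real polyfills guard against more edge cases):
// Queue a callback, then post a message to ourselves; the message event
// fires asynchronously but without setTimeout's minimum delay:
var queue = [];
window.addEventListener('message', function (e) {
    if (e.source === window && e.data === 'zero-timeout') {
        var cb = queue.shift();
        if (cb) cb();
    }
}, false);

function setZeroTimeout(cb) {
    queue.push(cb);
    window.postMessage('zero-timeout', '*');
}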
With requestAnimationFrame(), you ought to get 60 for your test program, since that's the standard number of frames per second. If you're getting more than that, then your program is probably running for more than an exact second.
You'll never get a high figure in your count test using it, because it only fires 60 times a second (or fewer if the hardware refresh frame-rate is lower for some reason), but if your task involves an update to the display then that's all you need, so you can use requestAnimationFrame() to limit the number of times it's called, and thus free up resources for other tasks in your program.
This is why requestAnimationFrame() exists. If all you care about is getting your code to run as often as possible then don't use requestAnimationFrame(); use setTimeout or setImmediate instead. But that's not necessarily the best thing for performance, because it will eat up the processor power that the browser needs for other tasks.
Ultimately, performance isn't just about getting something to run the maximum number of times; it's about making the user experience as smooth as possible. And that often means imposing limits on your call loops.
The shortest possible delay that is still asynchronous comes from MutationObserver, but it is so short that if you just keep calling it, the UI will never have a chance to update.
So the trick would be to use MutationObserver to increment the value while using requestAnimationFrame once in a while to update the UI, but that is not allowed here.
See http://jsfiddle.net/6TZ9J/1/
var div = document.createElement("div");
var count = 0;
var cur = true;
var now = Date.now();

var observer = new MutationObserver(function () {
    count++;
    if (Date.now() - now > 1000) {
        document.getElementById("count").textContent = count;
    } else {
        change();
    }
});

observer.observe(div, {
    attributes: true,
    childList: true,
    characterData: true
});

function change() {
    cur = !cur;
    div.setAttribute("class", cur);
}
change();
Use postMessage() as described in this blog.

Is it possible to create 2 loops in JavaScript where one loop will be prioritized in case of resource deficiency? (Both handle game ticks.)

The problem is as such:
In a js and asm.js based multiplayer game I've got two loops.
One handles the actual game ticks, like unit position, velocity and combat.
The other handles rendering of this world onto the canvas for the user to see.
What I'd like to happen is that when the processor/GPU (they made those the same thing on some machines now; can't say I'm happy about that) gets too encumbered, the rendering loop should skip and thus stop changing the canvas, i.e. freeze the game screen during a lag spike.
Meanwhile, the little processing power left is used to successfully complete the actual game tick, preventing desynchronisation with other game clients.
(It's an RTS-like game when it comes to load, so user input rather than the positions of all objects is sent over the net.)
Failing this, the client would have to be kicked by the other clients, or all clients would have to pause for him to reconnect and resync, i.e. bad bad bad!
A sloppy, makeshift way to do this would probably be to use timestamps and terminate the graphics loop if it won't complete by a certain time. One would presumably do this by determining the maximum execution time for the packet types on the loop's stack, and immediately terminating the loop if the combined "time to execute" value of all packets is too great for the resource capacity that the slowdown measured via timestamps indicates. Hell, maybe that's radical, but perhaps even skip-terminating the graphics loop whenever any slowdown is detected, just to be sure to avoid desync.
So: prioritizing one loop over another (both handling ticks), and making the second one skip if a resource shortage is detected, to ensure the first one always completes its tick within each timeframe (10 ticks per second here).
Any possibilities or best practice methods you guys can inform me on?
EDIT: Please focus on the ability to measure the availability of CPU resources, and on skipping/terminating one tick of the graphics loop if those resources are not sufficient to finish both loops (i.e. if the loops won't finish in the 100 ms timeframe after which the next loop tick should already be firing, don't start the graphics loop, or terminate it).
One solution would be to use a web worker to run your world-update loop, and the normal JavaScript loop to do the render. You would need to hand the state back and forth to/from the web worker, but the render loop would only draw the updated data.
An advantage is that you could still have reactive display code on the main UI loop.
This also has the advantage that the web worker could be using a different core, and with multiple web workers you could use multiple extra cores.
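A minimal sketch of that split, assuming a hypothetical worker file game-tick.js, with drawWorld/updateWorld as placeholders for your own code:
// main.js: the render loop only draws the last state received from the worker
var worker = new Worker('game-tick.js');
var latestState = null;

worker.onmessage = function (e) {
    latestState = e.data; // world state computed off the main thread
};

(function render() {
    if (latestState) drawWorld(latestState); // your canvas drawing code
    requestAnimationFrame(render);
})();

// game-tick.js: fixed-rate logic loop, unaffected by rendering stalls
// setInterval(function () {
//     var worldState = updateWorld(); // units, positions, velocity, combat
//     postMessage(worldState);        // hand the state to the renderer
// }, 100);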
For the logical loop, I would use setInterval, and for the painting, requestAnimationFrame. Even better: the requestAnimationFrame callback also receives a timestamp, so you can track timestamps and skip a single frame if some lag appears.
the processor is able to handle other tasks while also rendering the animation
This statement is wrong: the processor can handle only one task at a time, and requestAnimationFrame is not actually the rendering; it is your callback, generic JavaScript. You can think of it like a setTimeout. The only difference is that it tries to run the callback on the next free frame of the display's framerate. That's why it is much better than setTimeout, and why you must use requestAnimationFrame for animations. Another good part is when the webpage is in the background (another tab is open): the callback won't be called until the page comes back to the foreground. This saves processor time, as nothing is calculated in that callback.
Going back to your question: you know now that only one callback can be processed at a time, so if the processor is busy with the logical function at a particular moment, the callback of the animation loop won't be fired; that is what we call 'lag'. But as I understood it, this is actually the desired behavior: to give the logical callback function more time. There is another side, though: what if your animation function is busy when the time comes for the logical function to fire? In that case it will fire only when the animation function ends, and there is nothing to do about that. If your animation function is 'heavy', you could only try to split it across two frames: one frame to prepare everything for the render, the second one to render (sketched below).
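A sketch of that two-frame split (computeScene and drawScene are placeholders for your own heavy preparation and cheap drawing code):
var prepared = null;

function prepareFrame() {
    prepared = computeScene(); // heavy: interpolation, layout, culling...
    requestAnimationFrame(drawFrame);
}

function drawFrame() {
    drawScene(prepared); // cheap: just paint the prepared scene
    requestAnimationFrame(prepareFrame);
}

requestAnimationFrame(prepareFrame);
Note that this halves the effective frame rate to roughly 30 fps, trading smoothness for shorter individual callbacks.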
But anyway, you will never get a millisecond-perfect interval or timeout in JavaScript, as the callback won't be called until the event loop is free. To get the idea:
var seconds = 0;
setInterval(function () {
    seconds++;
    var x = 10e8;
    while (--x); // busy-wait to simulate heavy work
}, 1000);
Depending on your CPU, after 10 seconds of wall-clock time the variable seconds will be much less than 10.
And one more thing: if you really rely on time, then it is safer to use Date.now() to synchronize the next logical tick:
var setLogicalLoop = (function () {
    var _startedAt,
        _stop,
        _ms,
        _callback;

    function frame() {
        if (_stop === true)
            return;
        _callback();
        // schedule the next tick relative to the starting time,
        // so timing errors don't accumulate from tick to tick
        var dt = Date.now() - _startedAt,
            diff = dt % _ms;
        setTimeout(frame, _ms - diff);
    }

    return function (callback, ms) {
        _callback = callback;
        _ms = ms;
        _startedAt = Date.now();
        _stop = false;
        setTimeout(frame, ms);
        return function () {
            _stop = true;
        };
    };
})();
// -> start
var stopLoop = setLogicalLoop(myFunction, ms);
// -> stop
stopLoop();
