I don't fully understand JavaScript Threading - javascript

Before I dive into the question. Let me state that by Event Loop I am referring to http://en.wikipedia.org/wiki/Event_loop. This is something that browsers implement. For more information, read this: http://javascript.info/tutorial/further-javascript-features/events-and-timing-depth.
This question is hard and long, so, please try to bear with it! And I do appreciate all answers!
So. Now, as I understand it, in JavaScript there is a single main thread (in most browser environments, that is). So, code like:
    for (var color = 0x000; color < 0xfff; color++) {
        $('div').css('background-color', color.toString(16));
    }
will produce an animation from black to white, but you won't see that because the rendering is done after the code has been processed (when the next tick happens -- the browser enters the Event Loop).
If you want to see the animation, you could do:
    for (var color = 0x000; color < 0xfff; color++) {
        setTimeout(function() {
            $('div').css('background-color', color.toString(16));
        }, 0);
    }
The above example would produce a visible animation, because each setTimeout call pushes a new event onto the browser's Event Loop stack, which will be processed once nothing else is running (the browser enters the Event Loop to see what to do next).
It seems that the browser in this case has 0xfff (4095) events pushed onto the stack, each of which is processed with a render step in between. So, my first question (#1) is: when exactly does the rendering take place? Does it always take place between the processing of two events in the Event Loop stack?
The second question is about the code in the javascript.info website link I gave you.
    ...
    function func() {
        timer = setTimeout(func, 0)
        div.style.backgroundColor = '#' + i.toString(16)
        if (i++ == 0xFFFFFF) stop()
    }
    timer = setTimeout(func, 0)
    ...
My question here is: will the browser push a new "rendering" event onto the Event Loop stack every time it reaches the div.style. ... = ... line? But does it not first push an event due to the setTimeout call? So, does the browser end up with a stack like:
setTimeout event
render event
since the setTimeout call was processed before the div style change? If that's what the stack looks like, then I would assume the next time the browser enters the Event Loop it will process the setTimeout's callback and end up having:
rendering event
setTimeout event
rendering event
and continue with the rendering event that the earlier setTimeout call produced?

Q1: Not necessarily. Browsers implement optimizations to varying degrees. For example, they may collect several style changes before triggering an expensive recalculation of the layout. So the answer is: it depends on the specific browser.
Try this: http://taligarsiel.com/Projects/howbrowserswork1.htm#Render_tree_construction (the document is dated Oct 2009 - i.e. it is sufficiently up to date)
Q2: Rendering is not necessarily the same as JS execution - those are two different engines. The JS engine is not responsible for rendering; it just interfaces with the rendering engine. It seems to me the main message for your second question is this independence of JS from the rendering engine. Remember, a browser (or a webpage) does not need JavaScript; its main purpose is to render HTML based on CSS style rules. JavaScript is just one way to manipulate the HTML (the DOM tree, really) and the style rules.
Note that you can force rendering by reading a style definition - at that point the rendering engine has no choice but to process any outstanding style changes, especially if they involve position changes. That's why one should remove elements from the rendering tree (e.g. by setting display:none - visibility:hidden is NOT enough, since the element's size is still considered for layout) before making a lot of style changes or adding a lot of elements, e.g. when lots of rows are added one by one (in a "for" loop) to a table.
Not part of the question at all - but since I just mentioned the difference between display:none and visibility:hidden: that's also a consideration when adding hidden position:absolute elements like dialogs. While there is no visible difference whether an absolutely positioned element is hidden using one method or the other, internally there IS a big difference: when hidden using visibility:hidden the element is part of the rendering tree, with display:none it is not. So, if one has such an element that needs to be toggled a lot, one should use visibility:hidden, because when the "display" style is switched between "none" and e.g. "block" the browser has to render it first.
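The "detach before bulk changes" advice above can be sketched like this (a minimal illustration, not from the answer; `fillTable` and the `doc` parameter are made up for the example - in a page you would simply pass `document`):

```javascript
// Take the table out of the render tree before appending many rows, so the
// browser pays for one reflow instead of one per row.
function fillTable(doc, table, rows) {
    var oldDisplay = table.style.display;
    table.style.display = 'none'; // removed from the render tree: appends are cheap
    for (var i = 0; i < rows.length; i++) {
        var tr = doc.createElement('tr');
        var td = doc.createElement('td');
        td.textContent = String(rows[i]);
        tr.appendChild(td);
        table.appendChild(tr);
    }
    table.style.display = oldDisplay; // one reflow as it re-enters the tree
}
```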

The article you mention only considers JavaScript. A lot more happens in the browser; reflows and repaints can be triggered by many other things. Take a look at the following links for more info on this:
http://www.phpied.com/rendering-repaint-reflowrelayout-restyle/
http://www.browserscope.org/reflow/about
I wouldn't use setTimeout for this purpose.
Edit:
As per the comments, the recommended way is to use requestAnimationFrame. As of this writing, it is only available in unstable releases of most browsers. There are, however, several libraries that provide cross-browser access to it, falling back to setTimeout if necessary.
Take a look at this demo for an example working in old browsers, as well as in new ones:
http://paulirish.com/2011/requestanimationframe-for-smart-animating/
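A minimal sketch of the shim pattern such libraries (and the linked article) use: prefer a native requestAnimationFrame if the environment has one, otherwise fall back to setTimeout at roughly 60 frames per second. `pickRaf` is an illustrative name, and the vendor-prefixed properties reflect browsers of that era:

```javascript
// Pick the best frame-scheduling function available on a window-like object.
function pickRaf(w) {
    return w.requestAnimationFrame ||
           w.webkitRequestAnimationFrame ||
           w.mozRequestAnimationFrame ||
           function (callback) {
               return setTimeout(callback, 1000 / 60); // ~60 fps fallback
           };
}

// In a browser:
//   var requestAnimFrame = pickRaf(window);
//   function animate() {
//       // ...update styles for one frame...
//       requestAnimFrame(animate);
//   }
//   requestAnimFrame(animate);
```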

Related

Why is setTimeout/setInterval slowing down when I click onto the page?

I found out that when you use setInterval() or looped setTimeout() calls and then hold the mouse button down while moving the cursor on the website (like you do when you want to highlight text), the interval slows down for some reason (in Firefox). Sometimes it even slowed down when I just moved the cursor while the interval was running.
Here's an example of a "scroll to top" button that uses setInterval in which you can see that: https://jsfiddle.net/6yzhvb07/56/
This seems like no big deal in code like the one above, but when I'm e.g. coding a mobile browser game, it is a big problem, because every long touch input slows the whole game down by more than 50% (in Mobile Chrome).
Has anyone encountered that problem yet or know what may cause that?
This is because of how the JavaScript runtime engine works. JavaScript doesn't support multithreading; it uses an event loop to keep track of all events happening. If a lot is happening, events get stacked up and wait to be processed.
If you want to understand exactly how the event loop works and get details on your answer, watch this YouTube talk:
What the heck is the event loop anyway?
That's just one of the caveats of using setTimeout and setInterval, they are not supposed to be relied upon for accuracy. This is especially true since the blocking nature of JavaScript's single-threaded event loop makes it impossible to guarantee execution at a specific time. If you need something to happen at a more accurate time then one method would be to do some math with the result of Date.now() (the amount of milliseconds since January 1 1970 UTC) and occasionally clear and re-set the timeout/interval.
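The clear-and-re-set idea can be sketched as a self-correcting timer: measure how late each tick fired with Date.now() and subtract that drift from the next delay, so errors don't accumulate. The names here (`nextDelay`, `startAccurateInterval`) are illustrative, not from the answer:

```javascript
// Pure helper: how long to wait next time, given how late the last tick was.
function nextDelay(period, drift) {
    return Math.max(0, period - drift);
}

// Sketch of a self-correcting loop built on it. There is deliberately no
// cancellation here; a real version would keep the timer id so it can be cleared.
function startAccurateInterval(callback, period) {
    var expected = Date.now() + period;
    function step() {
        var drift = Date.now() - expected; // positive when the timer fired late
        callback();
        expected += period;
        setTimeout(step, nextDelay(period, drift));
    }
    setTimeout(step, period);
}
```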
If you're using it for a game then I would recommend not using either and instead opt for requestAnimationFrame. This will require you to get the difference in time between frames to mathematically account for any changes.
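A sketch of that delta-time idea: express movement in units per second and scale by the elapsed milliseconds, so a slow frame produces a larger step instead of a slower game. `advance` and the numbers are illustrative:

```javascript
// Frame-rate-independent movement: scale by elapsed time, not by frame count.
function advance(position, unitsPerSecond, dtMillis) {
    return position + unitsPerSecond * (dtMillis / 1000);
}

// Browser-side loop (requires requestAnimationFrame):
//   var last = performance.now();
//   function frame(now) {
//       var dt = now - last;
//       last = now;
//       player.x = advance(player.x, 120, dt); // 120 px/s regardless of fps
//       requestAnimationFrame(frame);
//   }
//   requestAnimationFrame(frame);
```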

Properly Await Propagation of InnerHTML to Complete Before Executing Function [duplicate]

I hope I won't make a fool of myself but I'm trying to understand what is happening in those two lines of code:
document.body.innerHTML = 'something';
alert('something else');
What I am observing is that alert shows before HTML has been updated (or maybe it has but the page hasn't been refreshed/repainted/whatever)
Checkout this codepen to see what I mean.
Please note that even putting alert in setTimeout(..., 0) does not help. Looks like it takes more event loops for innerHTML to actually update page.
EDIT:
I forgot to mention I am using Chrome and did not check other browsers. Looks like it's only visible in Chrome. Nevertheless I am still interested why is that happening.
Setting innerHTML is synchronous, as are most changes you can make to the DOM. However, rendering the webpage is a different story.
(Remember, DOM stands for "Document Object Model". It's just a "model", a representation of data. What the user sees on their screen is a picture of how that model should look. So, changing the model doesn't instantaneously change the picture - it takes some time to update.)
Running JavaScript and rendering the webpage actually happen separately. To put it simplistically, first all of the JavaScript on the page runs (from the event loop - check out this excellent video for more detail) and then after that the browser renders any changes to the webpage for the user to see. This is why "blocking" is such a big deal - running computationally intensive code prevents the browser from getting past the "run JS" step and into the "render the page" step, causing the page to freeze or stutter.
Chrome's pipeline looks like this:
As you can see, all of the JavaScript happens first. Then the page gets styled, laid out, painted, and composited - the "render". Not all of this pipeline will execute every frame. It depends on what page elements changed, if any, and how they need to be rerendered.
Note: alert() is also synchronous and executes during the JavaScript step, which is why the alert dialog appears before you see changes to the webpage.
You might now ask "Hold on, what exactly gets run in that 'JavaScript' step in the pipeline? Does all my code run 60 times per second?" The answer is "no", and it goes back to how the JS event loop works. JS code only runs if it's in the stack - from things like event listeners, timeouts, whatever. See previous video (really).
https://developers.google.com/web/fundamentals/performance/rendering/
Yes, it is synchronous, because this works (go ahead, type it in your console):
    document.body.innerHTML = 'text';
    alert(document.body.innerHTML); // you will see a 'text' alert
The reason you see the alert before you see the page changing is that the browser rendering takes more time and isn't as fast as your javascript executing line by line.
The innerHTML property actually does get updated synchronously, but the visual redraw that this change causes happens asynchronously.
The visual rendering of the DOM is asynchronous in Chrome and will not happen until after the current JavaScript function stack has cleared and the browser is free to accept a new event. Other browsers might use separate threads to handle JavaScript code and browser rendering, or they might let some events get priority while an alert is halting the execution of another event.
You can see this in two ways:
If you add for(var i=0; i<1000000; i++) { } before your alert, you've given the browser plenty of time to do a redraw, but it hasn't, because the function stack has not cleared (the script is still running).
If you delay your alert via an asynchronous setTimeout(function() { alert('random'); }, 1), the redraw process will get to go ahead of the function delayed by setTimeout.
This does not work if you use a timeout of 0, possibly because Chrome gives event-queue priority to 0 timeouts ahead of any other events (or at least ahead of redraw events).
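One way to package that observation (illustrative, not from the answer): wait for the paint with a nested requestAnimationFrame where available, and fall back to the 1 ms timeout the answer found necessary in Chrome:

```javascript
// Run fn after the browser has had a chance to paint. A nested
// requestAnimationFrame fires on the frame after the next paint in Chrome;
// the 1 ms timeout mirrors the observation above that 1 worked where 0 did not.
function afterRepaint(fn) {
    if (typeof requestAnimationFrame === 'function') {
        requestAnimationFrame(function () { // runs just before the next paint
            requestAnimationFrame(fn);      // ...so this runs after it
        });
    } else {
        setTimeout(fn, 1);
    }
}

// Usage (browser):
//   document.body.innerHTML = 'something';
//   afterRepaint(function () { alert('something else'); });
```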

How do I track down where event listener is getting added?

I have a fairly good sized javascript (with react/redux but no jquery) codebase for a webapp I'm building, and I've noticed that when I repeatedly open and close a certain panel within the UI, the number of listeners according to Chrome's performance timeline keeps increasing.
The graph looks like this:
I have let Chrome's performance monitor run for a good minute or two with the page sitting idle (just after opening/closing the panel a bunch), hoping that perhaps the listeners would get garbage collected, but they were not. I've switched to other tabs during this process, also hoping that the listeners would get garbage collected when the tab was backgrounded, but unfortunately they were not.
I therefore suspect that some listeners are getting registered that are never unregistered.
This leads me to two main questions:
1. Does my hypothesis that listeners are getting added and never unbound seem sensible, or is there more I could be doing to confirm this suspicion?
2. Assuming my suspicion is correct, how can I best go about tracking down the code where the event listener(s) is/are being added? I have already tried the following:
Looked at the code that is responsible for opening the panel in question, seeing where it adds any listeners, and commenting out those portions to see if there's any change in the performance graph. There is not a change.
Overrode the addEventListener prototype like so:
    var f = EventTarget.prototype.addEventListener;
    EventTarget.prototype.addEventListener = function(type, fn, capture) {
        f.call(this, type, fn, capture);
        console.trace("Added event listener on " + type);
    };
Even after doing this and commenting out all code portions that cause this console.trace to be executed (see #1), such that the trace is no longer printed upon open/close of the panel, I notice the same increase in listeners in the performance graph. Something else is causing the listeners to increase. I understand that there are other ways listeners can be added, but it's not clear to me how to intercept all of those possibilities, or how to cause them to be logged in Chrome's debugger in a way that tells me which code is responsible for adding them.
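For what it's worth, a slightly more robust version of that interception idea also wraps removeEventListener so adds and removes can be compared. This is a sketch under the assumption that the patch runs before any other script; the count is approximate, because a removal whose arguments don't match a prior add decrements it without unregistering anything:

```javascript
// Wrap both add and remove so registrations can be traced and a rough
// outstanding count kept.
(function () {
    if (typeof EventTarget === 'undefined') return; // e.g. very old environments
    var pending = 0;
    var origAdd = EventTarget.prototype.addEventListener;
    var origRemove = EventTarget.prototype.removeEventListener;
    EventTarget.prototype.addEventListener = function (type, fn, opts) {
        pending++;
        console.trace('addEventListener(' + type + '), approx outstanding: ' + pending);
        return origAdd.call(this, type, fn, opts);
    };
    EventTarget.prototype.removeEventListener = function (type, fn, opts) {
        pending--;
        return origRemove.call(this, type, fn, opts);
    };
})();
```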
Edit:
- At the suggestion of cowbert in the comments, I took a look at this page:
https://developers.google.com/web/tools/chrome-devtools/console/events
I then made the following function:
    function printListenerCount() {
        var eles = document.getElementsByTagName("*");
        var numListeners = 0;
        for (var idx in eles) {
            var listeners = getEventListeners(eles[idx]);
            for (var eIdx in listeners) {
                numListeners += listeners[eIdx].length;
            }
            console.log("ele", eles[idx], "listeners", listeners);
        }
        console.log("numListeners", numListeners);
    }
I execute this function after having opened/closed the panel a bunch of times, but unfortunately the "numListeners" figure doesn't change.
If the numListeners figure changed, I would be able to diff the results before/after opening/closing the panel to discover which element has the extra event listener registered to it, but unfortunately numListeners does not change.
There is also a monitorEvents() API described on https://developers.google.com/web/tools/chrome-devtools/console/events, but the call requires that you specify a DOM element to monitor. In this situation, I'm not sure which DOM element has the extra listeners, so I'm not sure how the monitorEvents() call will really help me. I could attach it to all DOM elements, similar to how I've written the printListenerCount function above, but I presume I'd run into the same problem that I ran into with printListenerCount() -- for whatever reason, it's not accounting for the listener(s) in question.
Other notes:
This is a somewhat complicated reactjs (preact, technically) based application. Like most reactjs based apps, components get mounted/unmounted (inserted into and removed from the DOM) on the fly. I'm finding that this makes tracking down "stray event handler registrations" like this a bit tricky. So what I'm really hoping for is some general debugging advice about how to track down "Stray event handlers" in large/complex projects such as this. As a C programmer, I would open gdb and set a breakpoint on everything that can possibly cause the "listeners" number in the performance graph to increase. I'm not sure if there's an analog of that in the javascript world, and even if it there, I'm just not sure how to do it. Any advice would be much appreciated!
Thanks for your comments, everyone. I ended up figuring this out.
From my OP:
Does my hypothesis that listeners are getting added and never unbound seem sensible, or is there more I could be doing to confirm this suspicion?
It turns out that the answer to this question is: The hypothesis is not sensible. The listeners simply haven't had a chance to get garbage collected yet. It can take some more time than you might think.
Here's how I figured it out:
I failed to realize that while recording a performance timeline, it's possible to force a garbage collection by clicking on the trash can icon in the Performance tab (same tab used to start the timeline recording). By clicking this icon after repeated closings/openings of the UI panel, the extra listeners completely went away. The graph now looks like this, with the dips being moments where I clicked the trash icon:
Apparently, backgrounding the tab and waiting a couple of minutes like I mentioned in the OP is simply not enough time for garbage collection to occur on its own; it takes more time than that.
I wasn't aware of the ability to manually collect garbage with the trash can icon when I wrote the OP... I strongly recommend using it before going on any wild goose chases hunting down what might at first look like a performance problem.

jQuery waits for the for loop to complete before executing previous events

I have a function which is triggered on a click event. Inside the function, the first line shows an overlay, and after that there is a for loop. I expect the function to show the overlay first and then continue with the for loop.
Instead, the overlay is shown only after the for loop completes.
Here is the jsFiddle Link
    $(document).on("click", function() {
        $("h1").text("Clicked");
        for (var i = 0; i < 100000; i++) {
            console.log(i);
        }
    });
The view will not update in the same flow of execution. The browser uses an invalidation technique: view updates are postponed for a while, so a bunch of updates can be applied with minimal effort. JavaScript is single-threaded, so the view update has to wait until the for loop finishes.
Use setTimeout() to have a delay between the overlay and for loop
https://jsfiddle.net/b9m5spxu/
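The suggested fix can be reduced to this pattern (a sketch; `updateUi` and `heavyWork` are illustrative stand-ins for the overlay update and the loop): update the UI, return from the handler so the browser can repaint, and run the heavy work on a later tick.

```javascript
// Defer the heavy loop with setTimeout so the click handler returns and the
// browser gets a chance to repaint before the loop starts.
function onClick(updateUi, heavyWork) {
    updateUi();               // e.g. $("h1").text("Clicked")
    setTimeout(heavyWork, 0); // runs on a later tick, after the repaint opportunity
}
```

Wired up with jQuery this would be something like `$(document).on("click", function () { onClick(showOverlay, runLoop); })`, with `showOverlay` and `runLoop` being whatever your handler currently does.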
Here is a good article that explains this behavior (thanks @subash for the hint): http://javascript.info/tutorial/events-and-timing-depth
JavaScript execution and rendering
In most browsers, rendering and JavaScript use a single event queue. It means that while JavaScript is running, no rendering occurs.
Check it on the demo below. When you press run, the browser may halt for some time, because it changes div.style.backgroundColor from #A00000 to #FFFFFF.
In most browsers, you see nothing until the script finishes, or until the browser pauses it with a message that 'a script is running too long'.
The exception is Opera.
    <div style="width:200px;height:50px;background-color:#A00000"></div>
    <input type="button" onclick="run()" value="run()">
    <script>
    function run() {
        var div = document.getElementsByTagName('div')[0]
        for (var i = 0xA00000; i < 0xFFFFFF; i++) {
            div.style.backgroundColor = '#' + i.toString(16)
        }
    }
    </script>
In Opera, you may notice the div is redrawn. Not every change causes a repaint, probably because of Opera's internal scheduling. That's because the event queues for rendering and JavaScript are different in this browser.
In other browsers the repaint is postponed until the JavaScript finishes.
Again, the implementation may be different, but generally the nodes are marked as "dirty" (wanting to be recalculated and redrawn), and the repaint is queued. Or, the browser may just look for dirty nodes after every script and process them.
Immediate reflow
The browser contains many optimizations to speed up rendering and painting. Generally, it tries to postpone them until the script is finished, but some actions require nodes to be rerendered immediately.
For example:
    elem.innerHTML = 'new content'
    alert(elem.offsetHeight) // <-- rerenders elem to get offsetHeight
In the case above, the browser has to perform relayouting to get the height. But it doesn't have to repaint elem on the screen.
Sometimes other dependent nodes may get involved in the calculations. This process is called reflow and may consume lots of resources if the script causes it often.
Surely, there's much more to talk about regarding rendering. It will be covered by a separate article [todo].

Using jquery to add/remove class but elements are not redrawn to reflect

I have two elements (think of two buttons side by side). I dynamically toggle the class "focusd" to change the highlighted effect. However, there's a quirk: it doesn't always get redrawn and/or inserted in the DOM. For example, if in Chrome I do console.log, I see the class changes (I'm using removeClass/addClass in jQuery). But if I go to the Elements tab in the inspector, it shows the classes from before (and in fact, I'm not seeing the redrawing reflect the toggling of the classes).
I tried setting the parent div to display none and then back to block, but that didn't work. It's a "one off" modal screen, so efficiency doesn't matter, so I've resorted to this hack where I essentially copy the parent's innerHTML, then remove and reinsert the element. Horrible!
    // Not sure why I need this hack. But if I don't, the buttons don't seem to get redrawn
    var htm = jQuery(".rdata_container").html(); // copy the innerhtml
    jQuery(".rdata_container").empty();          // empty and then append back
    jQuery(".rdata_container").append(htm);
This seems like a specific quirk that someone must have ran into (I hope). If so, I'd love to know why my changes aren't reflected.
EDIT
Code posted here:
http://jsfiddle.net/roblevintennis/JCZnf/
You can use setTimeout when you are doing the other operation on the element, for example:
    $elm.addClass('hide');
    setTimeout(function() {
        $elm.removeClass('hide');
    }, 0);
Or you could force a repaint like so:
    $elm.addClass('hide');
    $elm[0].offsetHeight; // reading a layout property forces a reflow (might be expensive for a large number of items)
    $elm.removeClass('hide');
These tricks force the browser to redraw the change; otherwise the browser's logic just combines the two changes into one, which isn't the desired behavior here.
Not directly an answer to your question, but you can use jQuery's toggleClass function to simplify your code.
Here's an updated version that uses toggleClass() and jQuery 1.6 and AFAICT works fine.
http://jsfiddle.net/JCZnf/7/