Are zero length timers still necessary in jQuery? - javascript

I am finally getting around to really implementing some jQuery solutions for my apps (which also seems to involve a crash course in JavaScript).
In studying examples of plugins, I ran across this code. I'm assuming the author created the zero-length timer to create some separation of the running code, so that the init function would finish quickly.
function hovertipInit() {
    var hovertipConfig = {'attribute': 'hovertip',
                          'showDelay': 300,
                          'hideDelay': 700};
    var hovertipSelect = 'div.hovertip';
    window.setTimeout(function() {
        $(hovertipSelect).hovertipActivate(hovertipConfig,
                                           targetSelectById,
                                           hovertipPrepare,
                                           hovertipTargetPrepare);
    }, 0);
}
Is needing this type of separation common?
Is creating the zero-length timer still the best way to handle this situation, or is there a better way to handle it in jQuery?
Thanks,
Jim

Check out this article that explains when this is necessary.
Also check out this related question.
It is not very common, but is necessary when you need to escape the call stack of an event.
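For illustration, here is a minimal sketch of that situation (the element ID and the two helper functions are made up for the example). Deferring with a zero-length timer lets the current handler return immediately; the deferred work then runs in a fresh call stack on the next turn of the event loop:
$('#saveButton').on('click', function() {
    updateStatus('Saving...');      // hypothetical: cheap UI feedback
    window.setTimeout(function() {
        doExpensiveSave();          // hypothetical: heavy work, now outside
    }, 0);                          // the click event's call stack
});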

Related

Why are two calls to string.charCodeAt() faster than having one with another one in a never reached if?

I discovered a weird behavior in nodejs/chrome/v8. It seems this code:
var x = str.charCodeAt(5);
x = str.charCodeAt(5);
is faster than this
var x = str.charCodeAt(5); // x is not greater than 170
if (x > 170) {
    x = str.charCodeAt(5);
}
At first I thought maybe the comparison is more expensive than the actual second call, but when the content inside the if block is not calling str.charCodeAt(5), the performance is the same as with a single call.
Why is this? My best guess is that V8 is optimizing/deoptimizing something, but I have no idea how exactly to figure this out or how to prevent it from happening.
Here is the link to jsperf that demonstrates this behavior pretty well at least on my machine:
https://jsperf.com/charcodeat-single-vs-ifstatment/1
Background: The reason I discovered this is that I tried to optimize the token reading inside of babel-parser.
I tested and str.charCodeAt() is twice as fast as str.codePointAt(), so I thought I could replace this code:
var x = str.codePointAt(index);
with
var x = str.charCodeAt(index);
if (x >= 0xaa) {
    x = str.codePointAt(index);
}
But the second version does not give me any performance advantage, because of the behavior described above.
V8 developer here. As Bergi points out: don't use microbenchmarks to inform such decisions, because they will mislead you.
Seeing a result of hundreds of millions of operations per second usually means that the optimizing compiler was able to eliminate all your code, and you're measuring empty loops. You'll have to look at generated machine code to see if that's what's happening.
When I copy the four snippets into a small stand-alone file for local investigation, I see vastly different performance results. Which of the two are closer to your real-world use case? No idea. And that kind of makes any further analysis of what's happening here meaningless.
As a general rule of thumb, branches are slower than straight-line code (on all CPUs, and with all programming languages). So (dead code elimination and other microbenchmarking pitfalls aside) I wouldn't be surprised if the "twice" case actually were faster than either of the two "if" cases. That said, calling String.charCodeAt could well be heavyweight enough to offset this effect.
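As a hedge against the dead-code-elimination pitfall mentioned above, one common microbenchmarking pattern (a sketch, not a guarantee against every optimization; it reuses the question's str) is to consume every result, so the compiler cannot prove the loop has no observable effect:
// Accumulate the results and report them afterwards, so the
// charCodeAt calls cannot be eliminated as dead code.
var sink = 0;
for (var i = 0; i < 10000000; i++) {
    sink += str.charCodeAt(5);
}
console.log(sink); // using the value keeps the loop "alive"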

Whack-A-Mole game with huge bug! Can I get some help fixing it?

I am writing a Whack-A-Mole game for class using HTML5, CSS3 and JavaScript. I have run into a very interesting bug where, at seemingly random intervals, my moles will stop changing their "onBoard" variables and, as a result, will stop being assigned to the board. Something similar has also happened with the holes, but not as often in my testing. All of this is completely independent of user interaction.
You guys and gals are my absolute last hope before I scrap the project and start completely from scratch. This has frustrated me to no end. Here is the Codepen and my github if you prefer to have the images.
Since Codepen links apparently require accompanying code, here is the function where I believe the problem is occurring.
// Run the game
function run() {
    var interval = (Math.floor(Math.random() * 7) * 1000);
    if (firstRound) {
        renderHole(mole(), hole(), lifeSpan());
        firstRound = false;
    }
    setTimeout(function() {
        renderHole(mole(), hole(), lifeSpan());
        run();
    }, interval);
}
What I believe is happening is this: the function runs at random intervals, between 0 and 6 seconds. If the function runs too quickly, the data that is passed to my renderHole() function gets overwritten with the new data, thus causing the previous hole and mole to never be taken off the board (variable-wise, at least).
EDIT: It turns out that my issue came from my not having returns on my recursive function calls. Having come from a different language, I was not aware that, in JavaScript, functions return "undefined" if nothing else is indicated. I am, however, marking GameAlchemist's answer as the correct one due to the fact that my original code was convoluted and confusing, as well as redundant in places. Thank you all for your help!
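A stripped-down illustration of that pitfall (randomMole and isOnBoard are hypothetical stand-ins for my actual game code):
// A recursive call whose result is never returned: in JavaScript a
// function with no explicit return yields undefined, so callers of
// pickMole() get undefined whenever the first branch is taken.
function pickMole() {
    var m = randomMole();
    if (isOnBoard(m)) {
        pickMole();   // BUG: should be `return pickMole();`
    } else {
        return m;
    }
}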
You have made, here and there in your code, some design mistakes that, one after another, make the code hard to read and follow, and quite impossible to debug.
The mole() function might return a mole... or not... or create a timeout to call itself later. What will be done with the result when mole() calls itself again? Nothing, so the mole will just be marked as onBoard, never to be seen again.
--->>> Have a clear definition and a single responsibility for mole(): for instance, 'returns an available non-displayed mole character, or null'. And that's all: no count, no marking of the objects, just KISS (Keep It Simple S...). It should always return a value and never trigger a timeout.
Quite the same goes for hole(): return a free hole or null, with no marking and no timeout set.
render should be simplified: get a mole, get a hole; if either couldn't be found, bye bye. If a mole + hole pair was found, just set up the new mole/hole couple and its event handler (in a separate function), as in the sketch below. Your main run function will ensure moles are spawned again and again.
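A minimal sketch of that shape (availableMoles, availableHoles, placeMoleInHole and lifeSpan are hypothetical stand-ins for the asker's game objects):
// Each function does one thing and always returns a value.
function mole() {                       // a free mole, or null
    return availableMoles.length ? availableMoles.pop() : null;
}
function hole() {                       // a free hole, or null
    return availableHoles.length ? availableHoles.pop() : null;
}
function renderHole() {
    var m = mole(), h = hole();
    if (!m || !h) return;               // nothing to spawn this round
    placeMoleInHole(m, h, lifeSpan());  // display + event handler live here
}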

Why is ReactJS not performant like raw JS for simple, stupid proof of concept?

Years ago, I heard about a nice 404 page and implemented a copy.
Working with ReactJS, I intended to implement the same idea, but it is slow and jerky in its motion, and after a while Chrome gives it an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond an unresponsive page to bringing a computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging? Am I making a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work" to create and render an element and do a single document.getElementById on every tick. Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of ReactJS videos I've seen is squarely in the spirit of "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient
var update = function() {
    React.render(React.createElement(Pragmatometer, null),
                 document.getElementById('main'));
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the amount of JS objects being created and released, your update function plus garbage collection is taking longer than 100ms, effectively bringing the computer to its knees and lower.
At the very least, I'd recommend caching as much as you can outside of the interval callback. There is also no need to call React.render multiple times: once the component is rendered into the DOM, use setProps or forceUpdate to make it render changes.
Here's an example of what I mean:
// React.render returns the mounted component instance (in the React
// API of that era), which is what has forceUpdate on it.
var mainComponent = React.render(React.createElement(Pragmatometer, null),
                                 document.getElementById('main'));
var update = function() {
    mainComponent.forceUpdate();
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
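A hedged sketch of that, using the createClass-era React API (the component name and render body are assumptions, not the asker's actual code):
// The component owns its own update timer, so the timer's lifetime
// matches the component's lifetime.
var Scratchpad = React.createClass({
    componentDidMount: function() {
        this.timer = setInterval(this.tick, 100);
    },
    componentWillUnmount: function() {
        clearInterval(this.timer);  // don't leak the interval
    },
    tick: function() {
        this.forceUpdate();         // or setState with fresh data
    },
    render: function() {
        return React.createElement('textarea', {id: 'scratchpad'});
    }
});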
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout with the callback setting up the next setTimeout, like this:
var update = function() {
    // do some stuff
    // once the work is done, schedule the next run
    setTimeout(update, 100);
};
setTimeout(update, 100);

Do JavaScript bindings take up memory while not in use?

I have a calendar that I've built, and on clicking a day on the calendar, a function is run. You can go month to month on the calendar, and it generates months as it goes. Since every day on the calendar, shown or not, is bound to an event using the class shared by all the "days", I'm worried about the number of "bindings" building up into the thousands.
//after a new month is generated, the days are rebound
TDs.off("mousedown");
TDs = $(".tableDay");
TDs.on("mousedown", TDmouseDown);
While I was learning C#/Monogame, I learned about functions that repeat very quickly to update game elements. So, I was wondering if JavaScript works in the same manner. Does the JavaScript engine repeatedly check every single event binding to see if it has occurred? So a structure something like this:
function repeat60timesPerSecond() {
    if (element1isClicked) { /* blah */ }
    if (element2isClicked) { /* blah */ }
    if (element3isClicked) { /* blah */ }
}
Or is JavaScript somehow able to actually just trigger the function when the event occurs?
In short: Do JavaScript bindings take up memory solely by existing?
My (inconclusive) research so far:
I have made several attempts at answering this question on my own. First, I did a jsperf test. Apart from the obvious problems with consistency in my test, it didn't actually test this question: it primarily tested whether unbinding nothing is faster than unbinding something, rather than how much memory the actual bindings take up after creation. I couldn't come up with a way to test this using that testing service.
I then googled around a bit, and found quite a bit of interesting stuff, but nothing directly answering this question in clear terms. I did come across this answer, which suggests using a single event binding of the event container in a similar situation.
UPDATE:
Right after posting this, I thought of a possible way to test this with local JS:
function func() {
    console.log("test");
}
for (x = 1; x < 1000; x++) {
    $('#parent').append("<div id='x" + x + "' class='child'></div>");
    $("#x" + x).on("mousedown", func);
}
console.time("timer");
for (i = 1; i < 1000000; i++) {
    q = Math.sqrt(i);
    if (q % 1 == 0) {
        q = 3;
    }
}
console.timeEnd("timer");
After playing around with this (changing what the for loop does, changing the number of iterations on both for loops, etc.), it appears that event bindings take up a VERY small amount of memory.
Yes, they all take up memory, but not very much. There's just one function object. Each element has a pointer to that object, so it's probably something like 4 bytes per element.
As Felix Kling suggested, you can reduce this by using delegation, as there's just a single binding on a container element. However, the memory savings are offset by increased time overhead: the handler is invoked every time the event occurs anywhere in the container, and it has to test whether the target matches the selector to which the event is delegated.
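For the calendar in the question, delegation would look something like this (assuming the table lives in a container such as #calendar; that ID is an assumption):
// One delegated binding on the container handles every .tableDay cell,
// including cells created later when new months are generated, so no
// rebinding is needed after each month is built.
$('#calendar').on('mousedown', '.tableDay', TDmouseDown);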

Does replacing $(this) with a variable make any performance difference [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I have a loop that looks like this:
$('#SomeSelectorID').find('.SomeElementsByClassName').each(function () {
    $(this).some code here;
    $(this).some other code there;
    $(this).some other code here and there;
});
If I write var TheThis = $(this); at the top of the loop and then replace $(this) with TheThis, is that a performance optimization or not really?
It's a definite performance optimisation. One you'll probably not notice, but that's no reason not to do it.
The code in your example constructs a new jQuery object around this three times before performing the actions on it. Caching it in a variable means that construction only happens once.
If you really want to see the difference, try comparing your original with the version below in a JSPerf test.
$('#SomeSelectorID').find('.SomeElementsByClassName').each(function () {
    var $this = $(this);
    $this.some code here;
    $this.some other code there;
    $this.some other code here and there;
});
Yes, there is a performance penalty. I've created a small demo that illustrates that using $(this) repeatedly is slower than using a stored version of it.
JSFiddle demo here.
No, I don't think you need to change your code. The benefit in this case will be so small that you will hardly notice any difference. Maybe in another situation, where you are developing a game or a data-processing app, it can matter.
Here are the results of my test...
Testing jquery version...
1000000 iterations $(this): 0.006849ms
Testing non-jquery version...
1000000 iterations of this$: 0.001356ms
Of course it is a performance optimization. Whether it is worth it or not, that's the real question. If you were repeatedly querying the DOM, it would definitely be worth it. In this case you are just wrapping an existing object in jQuery, so the footprint is much smaller.
That being said, you gain a little bit of performance but lose nothing in terms of readability, maintainability, or other things that you usually have to sacrifice to gain performance, so you may as well make the tweak.
Testing this shows no performance impact, at least on Chrome:
var start = new Date().getTime(),
    iterations = 50000;
$('#foo').find('.bar').each(function () {
    var that = $(this);
    for (var i = 0; i < iterations; i++) {
        that.find('i');
    }
});
console.log(new Date().getTime() - start);
Using $(this) instead, the results are more or less the same.
http://jsfiddle.net/BuREW/
Well, duh.
In general, calling any function is an expense. Calling $() is a HUGE one (compare its call time to vanilla JS) and one that should be avoided as much as possible.
Storing its return value in a variable is always a good thing, but it also avoids certain "gotchas".
For instance, let's say you want to change all .test elements to green and remove the class. You might do this:
$(".test").removeClass("test");
$(".test").css({"color":"green"});
Only to find that it doesn't change the colour to green, because the second $(".test") no longer matches anything.
Conversely, if you had done:
var test = $(".test");
test.removeClass("test");
test.css({"color":"green"});
It works. Of course, this is a trivial example, since you could just rearrange the two lines and it would work too, but I'm trying to make a point here :p
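For completeness, a chained version sidesteps the gotcha as well, since the selector is evaluated only once:
// One jQuery object, methods chained in the right order:
// style first, then drop the class the selector matched on.
$(".test").css({"color": "green"}).removeClass("test");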
