I am using MooTools extensively for a site I am developing, but I recently noticed that the animations slow down a lot when I zoom in on the site (using the browser's zoom). What could be causing this? Or is this problem inherent in MooTools itself? It happens in Chrome 6.0.472 as well as Firefox 3.6.8.
Thanks,
Nitin
Many things here are wrong with regard to speed optimisation.
Let's take a look at this mouseover code that seems to slow down:
this.childNodes.item(1).style.left="0px";
this.getElements('div').setStyles({'opacity':'1'});
this.getElements('div').set('morph', {duration:'normal',transition: 'sine:out'});
this.getElements('span').set('morph', {duration:'normal',transition: 'sine:out'});
this.getElements('div').morph({'left':'-28px'});
this.getElements('span').morph({'left':'-30px','color':'#FFF'});
Obviously this works as it is, but it's so wrong I hardly know where to begin.
The idea is to abstract and set up the repetitive tasks so they are done as a one-off.
Consider the code above line by line:
this.childNodes.item(1).style.left="0px";
This is wrong for a MooTools app anyway; it would need to be this.getFirst().setStyle("left", 0);.
The this.getFirst() call is a lookup and should be cached, although it's not a slow one.
Then comes the bad part.
You select all child divs three times and all spans twice, where NO selection should be needed at all. VERY EXPENSIVE.
You reset the Fx.morph options on every mouseover event even though nothing changes (although you do seem to use a different duration for mouseenter and mouseleave). This is expensive.
Consider this code:
[document.id("menu1"), document.id("menu2")].each(function(el) {
    // use element storage to save lookups during events
    el.store("first", el.getFirst());
    el.store("divs", el.getElements("div"));
    el.store("spans", el.getElements("span"));
    // store the Fx.morph options once and for all; no need to do so
    // on every event unless you are changing something
    el.retrieve("divs").set("morph", {
        duration: 'normal',
        transition: 'sine:out',
        link: 'cancel'
    });
    el.retrieve("spans").set("morph", {
        duration: 'normal',
        transition: 'sine:out',
        link: 'cancel'
    });
    // add the events
    el.addEvents({
        mouseenter: function(e) {
            // retrieve the saved selections from storage and run the effects
            this.retrieve("first").setStyle("left", 0);
            this.retrieve("divs").morph({
                "left": -28
            });
            this.retrieve("spans").morph({
                'left': '-30px',
                'color': '#FFF'
            });
        }
    });
});
This will save a lot of processing during the events.
Similarly, there are plenty of places where you are not really using the MooTools API:
document.getElementById(curr).style.cursor="pointer";
$(this).removeEvents -> no need for $ here; this is not jQuery.
document.getElementById("lightbox").style.visibility="hidden";
m=setTimeout('gallery()',5000); --> use the MooTools var timer = (function() { ... }).delay(5000); instead. Don't pass strings to setTimeout/setInterval, as that forces an eval; pass a proper anonymous function (see the sketch after this list).
And so on.
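A minimal sketch of that delay() pattern (gallery is your function; the rest is illustrative):
// schedule gallery() in 5 seconds without eval'ing a string
var timer = (function() {
    gallery();
}).delay(5000);
// delay() returns a timer id, so the call can still be cancelled:
// clearTimeout(timer);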
You could easily spend a whole day refactoring all of this and making it 'nice', but it will be worth it.
Also, learn about chaining:
$("ribbon").set('morph', {duration:'long',transition: 'bounce:out'});
$("ribbon").morph({'top':'-10px'});
$("ribbon").addEvents({
This calls up the selector three times. Instead you can:
Store it: var ribbon = $("ribbon"); ribbon.set...
Chain it: $("ribbon").set("morph", {duration: 500}).morph({top: -10}).addEvents(). MooTools element methods tend to return the original element, so you can take the result of the last call and apply more to it.
Option 1 is better for readability; option 2 is faster to write.
Also, you have way too many global variables, which makes your scope-chain lookups more expensive; this will affect many call-ups in many places. Try to namespace properly, and if you need to access true globals from functions and closures, use window.varname and so on.
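For instance, a hypothetical namespace sketch (APP and its properties are made-up names):
var APP = window.APP || {};   // one global namespace instead of dozens of vars
APP.timer = null;
APP.current = 0;

APP.next = function() {
    APP.current++;            // a property lookup, not a long scope-chain walk
};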
Another possible improvement would be event delegation: event bubbling causes events to fire up the DOM tree to the parents, and MooTools has an API for this, so you can add a single event to a parent element instead of attaching n events to all the children. Look it up.
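As a rough sketch, assuming Element.Delegation from MooTools More is available (the id and selector here are made up):
// one handler on the parent covers every current and future .menu-item child
document.id('menu-wrapper').addEvent('mouseover:relay(.menu-item)', function(event, target) {
    target.morph({ 'left': -28 });
});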
P.S. Please don't take this the wrong way; it's not meant to rubbish your work, just some constructive (I hope) advice that can help you take it to the next level. Good luck :)
I haven't seen any code in MooTools or any other library that checks whether the browser is zooming during an animation, so I think the animation slows down simply because the browser is using more CPU to compute the zoom.
I'm trying to unbind an event from a specific element, and upon research I found this, which is useful in itself; I didn't know you could do that. But:
Is there a way to make it work in a browser/Chrome extension? I'm talking about content scripts.
The reason this doesn't work the way it's described there is that the website that attached the event in question with its own script is using a different jQuery object than the one in my extension's includes/ folder. I can try to look up the event via jQuery._data(el, 'click'); but that is my jQuery object, not the website's, where the events are actually stored. I'm glad I figured that out after hours of fiddling around.
Or maybe it is possible to access the website's jQuery object itself?
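From what I've read so far, the usual trick is to inject a <script> element so the code runs in the page's own context, where the site's jQuery lives. A rough, untested sketch (assuming the site's jQuery is new enough for .off(); the selector is just an example):
var script = document.createElement('script');
script.textContent = "jQuery('div.slideshow h2').off('click');";
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script); // the injected code has already run at this point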
EDIT:
What I'm ultimately trying to achieve works in theory but … it's complicated. The original script uses a plugin event and keeps reinstalling it with .on('mouseleave',….
Anyway, this is what I got thanks to you, pdoherty926:
var $el = $('div.slideshow');
$('h2', $el).click(function(){ console.log('ouch!'); }); // test event
var $slides = $('.slides', $el).detach();
var $copy = $el.clone(false);
$slides.prependTo($copy);
$el.replaceWith($copy);
The test event doesn't get triggered, but the event I'm actually trying to remove still fires. I can imagine figuring it out, though, now that I've gotten closer to my goal.
Okay, the aforementioned re-installation on mouseleave really messed up this otherwise satisfying suggestion. (The site is using the jQuery Timer plug-in by Cyntaxtech.) So here's how I solved it instead: I simply changed the class name (-.-' )
Now the re-installation code cannot find the element anymore.
This is what my finished script looks like:
function stop_happening() {
    var $el = $('div.fullwall div.slideshow');
    $el
        // first, stop the current automation.
        .timer('stop') // Timer plug-in
        // next, change the class name in order to prevent the timer
        // from being started again.
        .removeClass('slideshow').addClass('slideshow-disabled-automation');
    //--- copied some extra code from the website itself for the onclick
    // events which are supposed to keep working. I wish I could do *that*
    // programmatically, but I'm glad I got as far as I got. ---//
    // […]
}
So I have tried making myself an infinite carousel using HTML, CSS & jQuery, and everything works except that the back button will not loop. I've spent quite a while on this now and I'm wondering if anyone has any insight? You can see the code at http://jsfiddle.net/e2SKk/. I'm only really doing this because I thought it would be a chance to learn a lot more, but any criticism of code layout or technique would be helpful!
Specifically, it's this code that seems not to work:
else if (loopPrev == true) {
    sliderActive = true;
    $('.item-holder').css({
        'left': clonePos
    });
    $('.item-holder').animate({
        'left': holderPos + $('.slider').width() + 'px'
    }, function() {
        sliderActive = false;
    });
};
That is only a snippet, by the way, and won't make much sense without the rest!
jQuery is great for writing short scripts.
Your slider code, in short:
var width = $('.slider').width();
$('.item').css({ width: width });
var $holder = $('.item-holder').css({ left: -width }).prepend($('.item:last'));
$('.prev').click(function() {
    $holder.not(':animated').css({ left: -2 * width }).prepend($('.item:last')).animate({ left: -width });
});
$('.next').click(function() {
    $holder.not(':animated').css({ left: 0 }).append($('.item:first')).animate({ left: -width });
});
That's the complete code.
See this in action on http://jsfiddle.net/creativecouple/YPU2d/
Pretty cool little slider you have going here! You say you're a beginner? I'd say you've picked up jQuery quite well! Also, before I forget, addressing your comment: if you post something on Stack Overflow, it WILL be viewed, likely by many people :). It's rare to come here and receive no help (albeit you may not always get an answer).
Fortunately for you, I've found your problem! It's right here:
else if (loopPrev == true) {
    sliderActive = true;
    $('.item-holder').css({
        'left': clonePos
    });
    $('.item-holder').animate({
        'left': holderPos + $('.slider').width() + 'px'
    }, function() {
        sliderActive = false;
    });
};
You are checking whether or not to loop, setting the slider to active, setting the next slide to the last slide in the index (and subsequently pushing it there at the same time), and then animating as you normally would. This results in two movements: first to the back of the index, then to the value of holderPos+$('.slider').width()+'px', hence your strange behaviour. This should help:
else if (loopPrev == true) {
    sliderActive = true;
    $('.item-holder').animate({
        'left': "-1800px"
    }, function() {
        sliderActive = false;
    });
};
The value "-1800px" is just the position of the last slide in your buffer, which I precalculated; you should be able to replace it with your clonePos variable without trouble.
EDIT: You should also change your clonePos variable to look like this:
var clonePos = '-'+($('.item').index()-1)*($('.slider').width());
It will eliminate a bug when you swap between the last slide in the index and the first slide (a "smooth transition" if you will).
Part II
In order to achieve the illusion of infinite scrollability, you will need to embed a "push back" callback inside the "left pressed" animation call. It's late here, so I haven't tested the code I am about to write, but I'm fairly confident it will work for you.
else if (loopPrev == true) {
    sliderActive = true;
    $('.item-holder').animate({
        'left': clonePos
    }, function() {
        $(this).css('left', holderPos + $('.slider').width() + 'px');
        sliderActive = false;
    });
};
If you take a look, this isn't much different from the original answer I offered. All we have done is taken the callback function of animate and added a call to slip the position back to the original index position. Again, untested, but the idea is that .animate() will slide to the clone; once that is done, your callback will swap the clone with the original and then deactivate the slider.
You weren't very far off! Here's the signature of the animate function (to help your understanding of how a callback works):
animate( params, [duration], [easing], [callback] )
params is our left call (to the cloned slide in this case)
duration is ignored here
easing is ignored here
callback is our function() call that does our little David Copperfield swap
Hope this helps!
I have inherited a relatively big JavaScript app (~3k lines of code), and I only do JavaScript part-time. I want to organize all the jQuery event handlers more sanely. The event-handling part seems to be the most out of control. I was thinking of something like this:
before:
$('#click-here').click(function() {
    // a bunch of DOM manipulations; with 30 of these, the app gets fairly convoluted
});
after:
<script>
function result_click() {
    var global_id = $(this).data('global-id');
    alert('here are the results of clicking: ' + global_id);
}

$(document).ready(function() {
    $('#click-here').on('click', result_click);
});
</script>

<div id='click-here' data-global-id='23'>click HERE!</div>
This would be a fairly easy refactoring and would seem to give the app better structure. Is this a reasonable way to do it? It seems it would be fairly easy to turn this into a Backbone.js app once it reaches that point.
Thanks!
Yes, it's good to pre-define functions and then pass references to them into the click handler.
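Building on your example, one hypothetical way to group the handlers in a single object so they are easy to find (all names here are illustrative):
var handlers = {
    resultClick: function() {
        var globalId = $(this).data('global-id');
        alert('here are the results of clicking: ' + globalId);
    },
    rowHover: function() {
        // ...further handlers live alongside each other here
    }
};

$(document).ready(function() {
    $('#click-here').on('click', handlers.resultClick);
    $('.result-row').on('mouseenter', handlers.rowHover);
});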
While trying to determine why a page was taking 20s to load, I found some odd behavior in IE8.
The scenario is this.
I make an AJAX call; when it returns, the callback looks something like this:
$("#StoreDetailsContainer").html($(tableHtml));
var StoreDetailsTable = $("#StoreDetailsTable");
StoreDetailsTable.tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" });
StoreDetailsTable.filtertable({ cssChildRow: "SubTable" });
However, this bit of code took 20s to complete.
I was messing around, timing things and popping up alerts between methods, and suddenly it took only 6s. I played around a little more and found that if I introduced a delay after the .html() call, before attempting to manipulate the DOM, the page rendered MUCH faster. It now looks like this:
$("#StoreDetailsContainer").html($(tableHtml));
window.setTimeout(function() {
    var StoreDetailsTable = $("#StoreDetailsTable");
    StoreDetailsTable.tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" });
    StoreDetailsTable.filtertable({ cssChildRow: "SubTable" });
}, 100);
It also only takes 6s despite having an extra 1/10th of a second added to the process.
My theory is that the DOM wasn't fully rendered to the screen by IE after the .html() call before I attempted to work with it, so some kind of locking was happening.
Is there a way to determine when IE has finished rendering what was added to the DOM by .html() so I don't need to use an arbitrary value in a setTimeout call?
You're almost spot-on with your analysis. Let me attempt to explain why setTimeout makes the difference.
If you look at this great article about DOM rendering, you'll understand that .html() will cause a reflow to happen.
Now, with the next two lines, you're tying up the browser's rendering thread. Browsers may choose to wait until script execution completes before attempting a reflow (to avoid multiple reflows, or to buffer all changes).
The solution: we need to tell the browser that our HTML changes are done so it can complete the rendering process. The ways to do that are 1) end your script, or 2) use setTimeout to yield execution to the browser.
Doing a setTimeout with a 0 ms delay also works, because setTimeout basically relinquishes control of the script block. This is one of the reasons animation-related scripts rely so heavily on setTimeout and setInterval.
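An untested sketch of your snippet with the delay dropped to 0 ms:
$("#StoreDetailsContainer").html($(tableHtml));
window.setTimeout(function() {
    // by now the browser has had its chance to finish the reflow
    var StoreDetailsTable = $("#StoreDetailsTable");
    StoreDetailsTable.tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" });
    StoreDetailsTable.filtertable({ cssChildRow: "SubTable" });
}, 0);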
Another potential improvement would be to use documentFragments, explained succinctly by John Resig here.
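A rough sketch of the documentFragment idea (rows is a hypothetical array of strings):
var frag = document.createDocumentFragment();
for (var i = 0; i < rows.length; i++) {
    var div = document.createElement('div');
    div.appendChild(document.createTextNode(rows[i]));
    frag.appendChild(div);                              // built off-DOM: no reflow yet
}
document.getElementById('StoreDetailsContainer').appendChild(frag); // one single reflow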
I think combining these two should yield more speed, but of course there's no way to know until profiling is done!
You could add a single-pixel image to your callback response, get that image from the DOM after .html(..), and attach a handler to its onload event. I can't imagine the image's onload event can fire before the browser has rendered it.
Make sure the image has a unique identifier in the src so that it doesn't get cached...
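An untested sketch of what I mean, building on your tableHtml string (the pixel image path and id are made up):
var html = tableHtml + '<img id="renderProbe" src="/pixel.gif?' + new Date().getTime() + '" width="1" height="1">';
$("#StoreDetailsContainer").html(html);
$("#renderProbe").load(function() {
    // if the pixel has loaded and fired, the markup around it should be rendered
    $("#StoreDetailsTable").tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" });
});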
Odd problem you're having though - I'm sure someone will offer a more graceful solution :)
Calling setTimeout delays the given function by at least the specified time, but it never runs before the current script execution has finished. That said, you could replace your timeout with 0 ms.
Another approach that might be worth trying is to access some layout property of the generated content (for example, the height of StoreDetailsContainer). This forces IE to finish rendering before returning control to your script, since it can only provide the correct value your script requested after it has finished calculating the layout.
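A quick sketch of that approach (untested; the read itself is what matters, not the value):
$("#StoreDetailsContainer").html($(tableHtml));
var forced = $("#StoreDetailsContainer")[0].offsetHeight; // forces IE to finish layout now
$("#StoreDetailsTable").tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" });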
A third guess that might help is to ensure the HTML is parsed outside the page's layout. This would prevent painting half-done layouts over and over again. To do so, you could detach the StoreDetailsContainer element from the DOM prior to your call to html(). IE then has the chance to construct the DOM without affecting the layout. After that, you would re-append StoreDetailsContainer to the DOM. Compared to a normal innerHTML set, this detaching and re-attaching of the container lets you control when the HTML is parsed to build the DOM tree and when the layout is calculated.
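A sketch of that third approach with plain DOM calls (untested):
var container = document.getElementById("StoreDetailsContainer");
var parent = container.parentNode;
var next = container.nextSibling;        // remember the position
parent.removeChild(container);           // detached: layout no longer affected
container.innerHTML = tableHtml;         // parse the HTML off the live page
parent.insertBefore(container, next);    // re-attach: a single layout pass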
Try this code. The load event fires after the ready event, so it may work:
$(document).ready(function() {
    $("#StoreDetailsContainer").html($(tableHtml));
});
$(window).load(function() {
    $("#StoreDetailsTable").tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" }).filtertable({ cssChildRow: "SubTable" });
});
I think that it could work like this:
$("#StoreDetailsContainer").html($(tableHtml))
    .find("#StoreDetailsTable")
    .tablesorter({ sortList: [[0, 0]], cssChildRow: "SubTable" })
    .filtertable({ cssChildRow: "SubTable" });
I am working on a web application that is designed to display a bunch of data that is updated periodically with AJAX. The general usage scenario would be that a user would leave it open all day and take a glance at it now and then.
I am encountering a problem where the browser's memory footprint grows slowly over time. This happens in both Firefox and IE7 (although not in Chrome). After a few hours, IE7 can have a footprint of ~200MB and FF3 a footprint of ~400MB.
After a lot of testing, I have found that the memory leak only occurs if the AJAX calls are being responded to. If the server doesn't respond to anything, I can leave the page open for hours and the footprint won't grow.
I am using Prototype for my AJAX calls, so I'm guessing there is an issue with the onSuccess callback creating these memory leaks.
Does anyone have any tips on preventing memory leaks with prototype / AJAX? Or any methods on how to troubleshoot this problem?
EDIT: I found out that the issue lies in a JS graphing library I am using. It can be seen here.
The biggest thing you can watch out for is events, and how you assign them.
For instance, take this scenario (since you haven't provided one):
<div id="ajaxResponseTarget">
    ...
</div>
<script type="text/javascript">
    $(someButton).observe('click', function() {
        new Ajax.Updater($('ajaxResponseTarget'), someUrl, {
            onSuccess: function() {
                $$('#ajaxResponseTarget .someButtonClass').invoke('observe', 'click', function() {
                    ...
                });
            }
        });
    });
</script>
This will create a memory leak, because when #ajaxResponseTarget is updated (internally, Prototype uses innerHTML), elements with click handlers are removed from the document without their handlers being removed. The second time you click someButton, you will have twice as many event handlers, and garbage collection can't remove the first set.
A way to avoid this is to use event delegation:
<div id="ajaxResponseTarget">
    ...
</div>
<script type="text/javascript">
    $('ajaxResponseTarget').observe('click', function(e) {
        if (e.element().match('.someButtonClass')) {
            ...
        }
    });
    $(someButton).observe('click', function() {
        new Ajax.Updater($('ajaxResponseTarget'), someUrl);
    });
</script>
Because of the way DOM events work, the "click" on .someButtonClass also fires on #ajaxResponseTarget, and Prototype makes it dead simple to determine which element was the target of the event. No events are assigned to elements within #ajaxResponseTarget, so there is no way for replacing its contents to orphan events from targets within.
I may be wrong, but it sounds like you are creating closures around the response object. Each response object will be different, which results in an increased memory footprint.
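A hypothetical illustration of the pattern I mean, in Prototype (all names here are made up): the handler closes over the whole response, so each response is kept alive.
new Ajax.Request(someUrl, {
    onSuccess: function(response) {
        var data = response.responseJSON;          // the closure below retains all of it
        $('output').observe('click', function() {
            alert(data.items.length);
        });
    }
});
// keeping only what the handler actually needs lets the rest be collected, e.g.
// var count = response.responseJSON.items.length; and close over count instead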