Different values when debugging than in console? - javascript

I'm running an Angular application that is giving me two different values for the same expression at the same time. I'm curious if anyone has seen this:
function updateValues() {
    var activeNavButton = pageNavButtons.eq(values.currentPage);
    pageNavButtons.removeClass("active");
    activeNavButton.addClass("active");
    pageNavButtons.each(function () {
        var forceRender = $(this).get(0).offsetLeft;
    });
    var w = 0;
    $(".pages button").each(function () {
        w = w + $(this).outerWidth(true);
    });
    var b = 0;
    completeHandler();
}
This is as straightforward as can be. Switch which item is "active", and then force a render refresh. You'll notice none of this code is really doing anything, but that's okay. I left out some of the less important, unrelated stuff.
Yep, I'm frustrated enough that I'm trying to force the render refresh in multiple ways at once.
In the chrome debugger, if you break on this line:
var b = 0
the following occurs:
w = 790 //Watcher
However, if you open the console while the script is still at that break point and literally copy and paste the preceding 4 lines:
var w = 0;
$(".pages button").each(function () {
w = w + $(this).outerWidth(true)
});
It returns 800 for the value of w.
An important thing to note: the .active class gives the selected element a "bold" font, thus changing the element width. I'm positive this is related to the problem but I can't for the life of me figure out what the issue really is.
As you can see, I'm accessing offsetLeft to try to force the browser to update the elements, but it's not working.
Any ideas? This is driving me absolutely insane.

Okay. This may seem dumb but on large code bases it might not be terribly surprising:
Turns out that the button's base CSS class (a ways up in the hierarchy) had a transition: all 150ms on it.
This caused a delay, which caused widths to be reported incorrectly, as you might expect, because (and this is the important part) font weight is included in transition: all.
Because of this, the font weight would change 150ms later, and those extra couple of pixels of width (in certain cases) would register "late". No wonder my timers seemed to arbitrarily succeed: I was guessing.
Lesson learned: Don't use transition: all unless you have a good reason. Target the things you want to actually transition.
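For example, something along these lines (the selector and property list here are illustrative, not the actual project CSS):

```css
/* Transition only the properties that actually need to animate.
   Leaving out a catch-all "all" means layout-affecting properties
   like font-weight change immediately, so measurements stay in sync. */
.pages button {
    transition: background-color 150ms, color 150ms;
}
```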
Hope this helps somebody else! Cheers.

Related

Simple Collision Detection in Javascript / Jquery?

I am working on a portion of a project where I am trying to detect when certain divs hit each other. In the code I made, which doesn't work, I basically take the first div's left amount and compare it to the other div's left amount; if they are within a certain amount of each other, it triggers an alert. If I get that much to work, I am going to implement a way to say that if the distance between the two divs is 0, then it will run a certain function. I am afraid the scope of this project is too big for me, even though I am basically at the last part, because I have spent hours researching a simple way to add collision detection, but everything I find looks like rocket science to me. That is why I tried to create my own way below. So in summary, what I want to know is: why doesn't my collision detection code work, how can I make it work if possible, and if it isn't possible, what is the next best option I should use?
//Collision
function collision() {
    var tri = $('#triangle');
    var enemyPos = $('.object1').css('left');
    var minHit = enemyPos - 32.5;
    var maxHit = enemyPos + 32.5;
    var triLoc = tri.css('left');
    if (triLoc > minHit && triLoc < maxHit) {
        alert('hit');
    }
}
collision();
}
}
full code: https://jsfiddle.net/kc59vzpy/
If the code you have above is definitely where the problem is, then you need to look at the enemyPos variable. The value you get back from .css('left') includes the px suffix, so enemyPos is "100px" or something like that. When you add 32.5, you get "100px32.5", and when you subtract you get NaN, neither of which you want.
Before you add or subtract, use enemyPos = parseInt($('.object1').css('left')); to turn it into an actual number.
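A quick sketch of both failure modes and the fix. The literal "100px" stands in for whatever $('.object1').css('left') actually returns:

```javascript
var enemyPos = "100px";            // what .css('left') typically returns

// String arithmetic goes wrong in two different ways:
var added = enemyPos + 32.5;       // "100px32.5" (string concatenation)
var subtracted = enemyPos - 32.5;  // NaN (failed numeric coercion)

// Parsing first gives a real number; parseInt stops at the "px" suffix:
var parsed = parseInt(enemyPos, 10); // 100
var minHit = parsed - 32.5;          // 67.5
var maxHit = parsed + 32.5;          // 132.5
```

Note that triLoc has the same problem: comparing the string "100px" against a number coerces it to NaN, and every comparison with NaN is false, so the alert can never fire until both values are parsed.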

Google DevTool Timeline: Forced reflow is likely performance bottleneck

I added a parallax effect to my page, and now I have problems with performance and FPS, and many questions :-)
I use translate3d and requestAnimationFrame to implement it (as recommended here: http://www.html5rocks.com/en/tutorials/speed/animations/).
My code looks like this:
window.addEventListener('scroll', function() {
    latestKnownScrollY = window.scrollY;
});

function updateParallax() {
    var y = latestKnownScrollY * 0.4;
    element.style.transform = 'translate3d(0, ' + y + 'px, 0)';
    requestAnimationFrame(updateParallax);
}
updateParallax();
Sometimes I see a warning like this in the Timeline:
Forced reflow is likely performance bottleneck
Call stack points to latestKnownScrollY = window.scrollY.
But why does this warning appear only occasionally? I read window.scrollY on every scroll event.
Each time you read window.scrollY, you're causing a reflow. It just means that the browser is calculating the styles and layout to give you the value.
It says it's likely a performance issue because it takes time and it is synchronous. If you alternate reads and writes (read, set, read, set), or if you have this kind of thing inside a loop, it will cause a bottleneck, because the browser has to recalculate layout every time you trigger a reflow. The solution is usually to batch: first read everything you need, then set everything you need to change.
But in your case, it shouldn't be a problem. It says it takes just 0.2 ms and it's doing it just once. Do you notice any performance issue? Like a lag when you scroll?
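The read-then-write advice above can be sketched as a tiny batching helper. This is a hypothetical illustration, not a real library API; in real code the queued functions would be layout reads (offsetWidth, scrollY) and DOM writes (style changes):

```javascript
// Minimal sketch of read/write batching: queue layout reads and DOM
// writes separately, then flush all reads before all writes, so the
// browser lays out once instead of once per read/write pair.
function createBatcher() {
    var reads = [], writes = [];
    return {
        read: function (fn) { reads.push(fn); },
        write: function (fn) { writes.push(fn); },
        flush: function () {
            reads.forEach(function (fn) { fn(); });  // all reads first
            writes.forEach(function (fn) { fn(); }); // then all writes
            reads = [];
            writes = [];
        }
    };
}
```

Libraries such as fastdom implement this same measure/mutate pattern for real DOM access.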

Stop Body Rotation in PhysicsJS

I'm looking for the best practice approach to stop PhysicsJS bodies from being able to rotate.
I have tried cancelling the body's rotational velocity. Yet this does not seem effective on the step as the body still somehow sneaks in some rotation. Combining this with setting the body's rotation manually seems to work at first:
world.on('step', function () {
    obj.state.angular.pos = 0;
    obj.state.angular.vel = 0;
    world.render();
});
However in the past I have experienced a lot of bugs related to this method. Seemingly to do with the object being allowed to rotate just slightly before the step is called, which causes it to be "rotated" very quickly when its body.state.angular.pos is set back to zero. This results in objects suddenly finding themselves inside the subject, or the subject suddenly finding itself inside walls/other objects. Which as you can imagine is not a desirable situation.
I also feel like setting a body's rotation so forcefully must not be the best approach and yet I can't think of a better way. So I'm wondering if there's some method in PhysicsJS that I haven't discovered yet that basically just states "this object cannot rotate", yet still allows the body to be treated as dynamic in all other ways.
Alternatively what is the "safest" approach to gaining the desired effect, I would be happy even with a generic guide not tailored to physicsJS, just something to give me an idea on what is the general best practice for controlling dynamic body rotations in a simulation.
Thanks in advance for any advice!
The key to accomplishing this is to ensure that you put the body to sleep first and then immediately wake it up, afterward setting the angular velocity to zero.
So for example, what I've been doing to prevent bodies that have collided from rotating was:
world.on('collisions:detected', function (data, e) {
    var bodyA = data.collisions[0].bodyA,
        bodyB = data.collisions[0].bodyB;
    bodyA.sleep(true);
    bodyA.sleep(false);
    bodyA.state.angular.vel = 0;
    bodyB.sleep(true);
    bodyB.sleep(false);
    bodyB.state.angular.vel = 0;
});
I've also seen this accomplished by increasing the mass of the bodies in question to a ridiculously high number, but this would have possible side effects that you may not desire.

Improving Efficiency in jQuery function

The while loop in this function runs too slowly (it blocks the page load for 4-5 seconds) in IE/Firefox, but is fast in Safari...
It's measuring pixel width of text on a page and truncating until text reaches ideal width:
function constrain(text, ideal_width) {
    $('.temp_item').html(text);
    var item_width = $('span.temp_item').width();
    var ideal = parseInt(ideal_width);
    var smaller_text = text;
    var original = text.length;
    while (item_width > ideal) {
        smaller_text = smaller_text.substr(0, smaller_text.length - 1);
        $('.temp_item').html(smaller_text);
        item_width = $('span.temp_item').width();
    }
    var final_length = smaller_text.length;
    if (final_length != original) {
        return (smaller_text + '…');
    } else {
        return text;
    }
}
Any way to improve performance? How would I convert this to a bubble-sort function?
Thanks!
Move the calls to $() outside of the loop, and store their results in temporary variables. Running that selector is going to be the slowest thing in your code, aside from the call to .html().
Library authors work very, very hard on making selector engines fast, but they are still dog slow compared to normal JavaScript operations (like looking up a variable in the local scope) because they have to interact with the DOM. Especially with a class selector like that, jQuery has to loop through basically every element in the document, looking at each class attribute and running a regex on it. Every time around the loop! Get as much of that stuff out of your tight loops as you can. WebKit runs it fast because it has getElementsByClassName while the other browsers don't (yet).
Instead of removing one character at a time until you find the ideal width, you could use a binary search.
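A sketch of that idea, with the DOM measurement abstracted into a measure(text) callback so the search logic stands alone (the function name and callback are illustrative, not part of the original code):

```javascript
// Binary search for the longest prefix of `text` whose measured width
// fits within `ideal`: O(log n) measurements instead of O(n).
function fitLength(text, ideal, measure) {
    var lo = 0, hi = text.length;
    while (lo < hi) {
        // Bias the midpoint upward so the loop always makes progress.
        var mid = Math.floor((lo + hi + 1) / 2);
        if (measure(text.substr(0, mid)) <= ideal) {
            lo = mid;     // this prefix fits; try something longer
        } else {
            hi = mid - 1; // too wide; shorten
        }
    }
    return lo; // length of the longest fitting prefix
}
```

In the real function, measure would set the span's html and read back its width, and the result would be text.substr(0, fitLength(text, ideal, measure)).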
I see that the problem is that you are constantly modifying the DOM in the loop, by setting the html of the temp_item and then re-reading the width.
I don't know the context of your problem, but trying to adjust the layout by measuring the rendered elements is not good practice, from my point of view.
Maybe you could approach the problem from a different angle. Truncating to a fixed width is common.
Another possibility (a hack?) if you don't have other choices could be to use the overflow CSS property of the container element and put the … in another element next to the text. Though I recommend you rethink the need to solve the problem the way you are intending.
Hugo
Other than the suggestion by Breton, another possibility to speed up your algorithm would be to use a binary search on the text length. Currently you are decrementing the length by one character at a time - this is O(N) in the length of the string. Instead, use a search which will be O(log(N)).
Roughly speaking, something like this:
function constrain(text, ideal_width) {
    ...
    var temp_item = $('.temp_item');
    var span_temp_item = $('span.temp_item');
    var text_len_lower = 0;
    var text_len_higher = smaller_text.length;
    while (true) {
        if (item_width > ideal) {
            // make smaller, to the mean of "lower" and this
            text_len_higher = smaller_text.length;
            smaller_text = text.substr(0,
                ((smaller_text.length + text_len_lower) / 2));
        } else {
            if (smaller_text.length >= text_len_higher) break;
            // make larger, to the mean of "higher" and this
            text_len_lower = smaller_text.length;
            smaller_text = text.substr(0,
                ((smaller_text.length + text_len_higher) / 2));
        }
        temp_item.html(smaller_text);
        item_width = span_temp_item.width();
    }
    ...
}
One thing to note is that each time you add something to the DOM, or change the html in a node, the page has to redraw itself, which is an expensive operation. Moving any HTML updates outside of a loop might help speed things up quite a bit.
As others have mentioned, you could move the calls to $() outside the loop. Create a reference to the element once, then call the methods on it within the loop, as 1800 INFORMATION mentioned.
If you use Firefox with the Firebug plugin, there's a great way of profiling the code to see what's taking the longest time. Just click profile under the first tab, do your action, then click profile again. It'll show a table with the time it took for each part of your code. Chances are you'll see a lot of things in the list that are in your js framework library; but you can isolate that as well with a little trial and error.

clientWidth Performance in IE8

I have some legacy JavaScript that freezes the tfoot/thead of a table and lets the body scroll. It works fine, except that in IE8 it's very slow.
I traced the problem to reading the clientWidth property of a cell in the tfoot/thead... in IE6/7 and Firefox 1.5-3 it takes around 3ms to read the clientWidth property; in IE8 it takes over 200ms, and longer as the number of cells in the table increases.
Is this a known bug? Is there any workaround or solution?
I've solved this problem if you are still interested. The solution is quite complex. Basically, you need to attach a simple HTC to the element and cache its clientWidth/Height.
The simple HTC looks like this:
<component lightweight="true">
    <script>
        window.clientWidth2[uniqueID] = clientWidth;
        window.clientHeight2[uniqueID] = clientHeight;
    </script>
</component>
You need to attach the HTC using CSS:
.my-table td {behavior: url(simple.htc);}
Remember that you only need to attach the behavior for IE8!
You then use some JavaScript to create getters for the cached values:
var WIDTH = "clientWidth",
    HEIGHT = "clientHeight";

if (8 == document.documentMode) {
    window.clientWidth2 = {};
    Object.defineProperty(Element.prototype, "clientWidth2", {
        get: function () {
            return window.clientWidth2[this.uniqueID] || this.clientWidth;
        }
    });
    window.clientHeight2 = {};
    Object.defineProperty(Element.prototype, "clientHeight2", {
        get: function () {
            return window.clientHeight2[this.uniqueID] || this.clientHeight;
        }
    });
    WIDTH = "clientWidth2";
    HEIGHT = "clientHeight2";
}
Notice that I created the constants WIDTH/HEIGHT. You should use these to get the width/height of your elements:
var width = element[WIDTH];
It's complicated, but it works. I had the same problem as you: accessing clientWidth was incredibly slow. This solves the problem very well. It is still not as fast as IE7, but it is back to being usable again.
I was unable to find any documentation that this is a known bug. To improve performance, why not cache the clientWidth property and update the cache periodically? I.e., if your code was:
var someValue = someElement.clientWidth + somethingElse;
Change that to:
// Note: the following lines use Prototype.
// To do this without Prototype, create the function,
// close over the element, and have the function repeatedly
// call itself using setTimeout() with a timeout of 1000
// milliseconds (or more/less depending on the performance you need).
var updateCache = function () {
    this.clientWidthCache = $('someElement').clientWidth;
};
new PeriodicalExecuter(updateCache.bind(this), 1);

var someValue = this.clientWidthCache + somethingElse;
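Stripped of the DOM and Prototype specifics, the caching idea boils down to something like this (the names are illustrative):

```javascript
// Minimal sketch of caching an expensive read: call the real read once,
// serve the cached value from get(), and refresh() on whatever schedule
// suits you (e.g. from a timer or a resize handler).
function cachedValue(read) {
    var value = read();
    return {
        get: function () { return value; },        // cheap cached access
        refresh: function () {                     // re-run the slow read
            value = read();
            return value;
        }
    };
}
```

The trade-off is staleness: between refreshes, get() can return an out-of-date width, which is acceptable here because the table only changes on known events.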
Your problem may be related to something else (and not only the clientWidth call): are you updating/resizing anything in your DOM while calling this function? Your browser could be busy doing reflow in IE8, making clientWidth slower.
IE 8 has the ability to switch between IE versions and also there is a compatibility mode.
Have you tried switching to Compatibility Mode? Does that make any difference?
I thought I had noticed slow performance when reading the width properties too. And there may very well be some.
However, I discovered that the main impact to performance in our app was that the function attached to the window's resize event was itself somehow causing another resize, which caused a cascading effect, though not an infinite loop. I realized this when I saw that the call count for the function was orders of magnitude larger in IE8 than in IE7 (love the IE Developer Tool). I think the reason is that some activities on elements, like setting element widths perhaps, now cause a reflow in IE8 that did not in IE7.
I fixed it by setting the window's resize event to resize="return myfunction();" instead of just resize="myfunction();", and making sure myfunction returned false.
I realize the original question is several months old but I figured I'd post my findings in case someone else can benefit.