Canvas performance change in Chrome - javascript

I'm working on an animation library, and every once in a while I run a benchmark test to see how much of a gain or loss I get with certain features. Recently I've run into something that has me quite perplexed, perhaps someone with more knowledge can shine a light on this for me.
Performance Before:
Chrome: ~4460 sprites @ 30fps
Safari: ~2817 sprites @ 30fps
Firefox: ~1273 sprites @ 30fps
iPhone 4S: ~450 sprites @ 30fps
Performance Now:
Chrome: ~3000 sprites @ 30fps
Safari: ~2950 sprites @ 30fps
Firefox: ~1900 sprites @ 30fps (before garbage collection becomes too distracting)
iPhone 4S: ~635 sprites @ 30fps
So as you can see, Chrome took quite a hit in performance, while every other browser seems to have gotten a little better over this time frame. The biggest thing I notice, and what I figure is the answer, is that the CPU usage seems to have been throttled back in Chrome (I swear I could previously get up near 90%; now it maxes out around 60%). The majority of the CPU is being used for the drawImage() call, and I'm not sure I can do anything to optimize that.
If it's simply an issue of Chrome now limiting my CPU usage, I'm fine with that.
Any insight would be greatly appreciated...
_s.Sprite.prototype.drawBasic = function() {
    var s = this.ctx;
    if (s.globalAlpha != this._alpha) s.globalAlpha = this._alpha;
    var width = this.width;
    var height = this.height;
    var x = this._x;
    var y = this._y;
    if (_s.snapToPixel) {
        // Snap each value to an integer pixel (fast floor via bitwise OR)
        x = this._x + (this._x < 0 ? -1 : 0) | 0;
        y = this._y + (this._y < 0 ? -1 : 0) | 0;
        width = width + (width < 0 ? -1 : 0) | 0;
        height = height + (height < 0 ? -1 : 0) | 0;
    }
    // Locate the current frame within the sprite sheet
    var frame = this.sequence[this.frame] || 0;
    var sheetY = frame + (frame < 0 ? -1 : 0) | 0;
    var sheetX = (frame - sheetY) * this.spriteSheetX || 0;
    s.drawImage(this.bitmap.image,
        this.bitmap.frameRect.x2 * sheetX, this.bitmap.frameRect.y2 * sheetY,
        this.bitmap.frameRect.x2, this.bitmap.frameRect.y2,
        x - (width * this._scaleX) * this.anchorX,
        y - (height * this._scaleY) * this.anchorY,
        width * this._scaleX, height * this._scaleY);
    this.updateFrame();
};
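For anyone puzzled by the | 0 pattern used above: a bitwise OR with zero truncates a number to a 32-bit integer, and the (v < 0 ? -1 : 0) offset turns that truncation into a floor for fractional negative values (note it over-floors exact negative integers by one, which rarely matters for subpixel positions). A minimal illustration, not from the original post:
var v = -3.7;
console.log(v | 0);                    // -3: `| 0` truncates toward zero
console.log(v + (v < 0 ? -1 : 0) | 0); // -4: matches Math.floor for fractional negatives
console.log(Math.floor(v));            // -4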
UPDATE
So I downloaded an old version of Chrome (25.0.1364.5) and ran my benchmark test, then reran it in the most current version of Chrome.
Clearly Chrome has changed. Was it on purpose? I don't know. In the old version of Chrome I've actually gained performance over my original 4460 (+ ~400; my optimizations must have worked), but you can also see that it lets me hover at 100% CPU usage: 2x the CPU, almost 2x the objects on screen.

Update
setInterval doesn't have the issue; it only happens with requestAnimationFrame. This finally makes sense. requestAnimationFrame already throttles things to 60fps. What I wasn't aware of, and can't seem to find any info on, is that Chrome (and possibly other browsers) throttles it further to 30 (60/2), then 20 (60/3), and probably 15 (60/4)... this keeps the frame rate in sync with a 60Hz display, so you never end up with, say, 40fps that looks strange because it's out of sync with your screen's refresh rate.
This explains a lot. I'm really enjoying the CPU savings this provides us.
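A minimal sketch (not from the original post) for observing this clamping yourself: log the time between requestAnimationFrame callbacks and watch the deltas snap from ~16.7ms (60fps) to ~33.3ms (30fps) under load, rather than drifting smoothly in between.
// Measure the interval between rAF callbacks; when Chrome throttles,
// the deltas jump straight from ~16.7ms to ~33.3ms.
var last = performance.now();
function probe(now) {
    console.log((now - last).toFixed(1) + 'ms since last frame');
    last = now;
    requestAnimationFrame(probe);
}
requestAnimationFrame(probe);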
Updated
An example without any of my code: http://www.goodboydigital.com/pixijs/canvas/bunnymark/ If you run this in Chrome, you will see the point where it jumps from ~60fps straight to 30fps. You can keep adding more bunnies; Pixi can handle it... Chrome is throttling the fps. This is not how Chrome used to behave.
So I figured out what's going on here. It's not that performance has changed per se; I can still get 4800 objects on the screen at 30fps. What has changed seems to be the way Chrome tries to optimize the end user's experience. It actually throttles things down from 60fps to ~30fps (29.9fps according to the dev tools), which causes if(fps>=30) to return false:
stage.onEnterFrame = function(fps) { // fps = the current system fps
    if (fps >= 30) { // add asteroids until we are below 30fps
        stage.addChild(new Asteroid());
    }
};
For some reason around 2800 objects, Chrome throttles down to 30fps instead of trying to go as fast as possible... So if I start the benchmark with 4800 objects, it stays at a wonderfully consistent 29.9fps.
(You can see that it's either 60fps or 29.9fps with no real in-between; the only thing that changes is how often it switches.)
This is the code used for stage timing...
_s.Stage.prototype.updateFPS = function() {
    var then = this.ctx.then;
    var now = this.ctx.now = Date.now();
    var delta = now - then; // ms elapsed since the previous frame
    this.ctx.then = now;
    // 1000 / delta = instantaneous fps; dividing 60 by that gives the
    // ratio of the actual frame time to an ideal 60fps frame.
    this.ctx.frameRatio = 60 / (1000 / delta);
};
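One small refinement worth noting (my suggestion, not part of the original post): requestAnimationFrame already passes a high-resolution timestamp to its callback, so the Date.now() call can be avoided entirely. A sketch, assuming the stage tick is driven from rAF:
// Hypothetical tick loop: rAF supplies a DOMHighResTimeStamp, so the stage
// can derive delta time without calling Date.now() itself.
var then = 0;
function tick(now) {
    var delta = now - then; // ms since the previous callback (first delta is from the time origin)
    then = now;
    var fps = 1000 / delta;
    // ... update and draw the stage here, e.g. using fps for throttle checks
    requestAnimationFrame(tick);
}
requestAnimationFrame(tick);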
Hopefully this helps someone else down the road.

Related

Interpreting performance tests for non-zero check

I've recently again fallen into the trap of premature optimization, and hopefully climbed back out. However, during my short intermission, I encountered something I'd like to confirm.
My very basic performance test yielded similar results in Chrome (all variables are declared globally; the first takes ~9ms, the second ~7.5ms):
input = Array.from({ length: 1000000 }, () => Math.random() > 0.8 ? 0 : Math.random() * 1000000000);
start = performance.now();
for (let i = 0; i < 1000000; i++) {
    input[i] = input[i] === 0 ? 0 : 1;
}
console.log(performance.now() - start);
and
input = Array.from({ length: 1000000 }, () => Math.random() > 0.8 ? 0 : Math.random() * 1000000000);
start = performance.now();
for (let i = 0; i < 1000000; i++) {
    input[i] = ((input[i] | (~input[i] + 1)) >>> 31) & 1;
}
console.log(performance.now() - start);
Taking into account that the loop and assignment alone already take a lot of time (~4.5ms), the second version potentially takes ~33% less time, which is a far smaller gain than I'd expect (and within measuring inaccuracy; e.g. on Firefox, both take much longer, and the second is ~33% worse).
Can I conclude at this point that an optimization of the condition is already taking place, and that a similar code transformation is already being done, or am I falling for some mirage of a microbenchmark? In my mind, a branch of this kind should be vastly more time-consuming than the calculation.
I am primarily skeptical of my own reasoning, because I know that these kinds of tests can very easily have skewed results for unforeseen reasons.
Yes, it's most likely a combination of measurement noise and JIT-compiler effects.
The heuristic-driven JIT compilers in modern browsers are good enough that you have little chance of seeing the "real" performance benefit of the intuitively faster statement.
The variance in my browser (Opera) is so high that both snippets take approximately equal time, which matches the effect you saw in Firefox.
Be glad that you found that 1.5ms advantage; you won't see more.
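As an aside, the bit-twiddling version works because for any non-zero 32-bit integer x, either x or -x (which is ~x + 1 in two's complement) has the sign bit set, so ((x | -x) >>> 31) yields 1 exactly when x is non-zero. And to reduce the JIT warm-up effects discussed above, a common pattern is to warm the function up before timing it. A sketch (mine, not a rigorous harness; note it coerces the inputs to integers):
// Warm the function up so the JIT can optimize it, then time several runs
// and report the median.
function bench(fn, runs) {
    fn(); fn(); fn(); // warm-up passes (trigger JIT optimization)
    const times = [];
    for (let r = 0; r < runs; r++) {
        const t0 = performance.now();
        fn();
        times.push(performance.now() - t0);
    }
    return times.sort((a, b) => a - b)[runs >> 1]; // median
}

const input = Array.from({ length: 1000000 }, () => Math.random() > 0.8 ? 0 : (Math.random() * 1000000000) | 0);
const out = new Array(input.length); // write elsewhere so `input` stays unchanged between runs
function ternary()  { for (let i = 0; i < input.length; i++) out[i] = input[i] === 0 ? 0 : 1; }
function bitTrick() { for (let i = 0; i < input.length; i++) out[i] = ((input[i] | (~input[i] + 1)) >>> 31) & 1; }

console.log('ternary:  ' + bench(ternary, 9) + 'ms');
console.log('bitTrick: ' + bench(bitTrick, 9) + 'ms');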

How to measure the accuracy of performance.now

I'm trying to figure out what the accuracy of performance.now is in Chrome. Using the following code:
const results = []
let then = 0
for (let i = 0; i < 10000; i++) {
    const now = performance.now()
    if (Math.abs(now - then) > 1e-6) {
        results.push(now)
        then = now
    }
}
console.log(results.join("\n"))
I am getting the following results:
55058.699999935925
55058.79999976605
55058.89999959618
55058.99999989197
55059.09999972209
My understanding is that these values are in seconds, which means that each measurement is roughly 100ms apart. Is my testing methodology flawed or is performance.now actually limited to 100ms resolution in Chrome? I looked online and what I found stated the accuracy to be 100μs with 100μs jitter.
These results are in milliseconds (55K seconds would mean the page had been open for 15 hours when this script executed...), so consecutive readings are roughly 100μs apart, not 100ms.
As for the precision, this is browser-dependent and subject to change as better mitigations against timing-based attacks are found. But yes, Chrome does limit the accuracy (to 0.1ms) and adds jitter (±0.1ms); Firefox limits it even further (1ms by default) and also adds jitter (though there you can configure these options); Edge behaves like Chrome according to this comment; and it seems Safari applies a 1ms clamp only...
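A hedged way to estimate the effective resolution directly (my sketch, not from the original answer): spin until the clock reading changes and keep the smallest non-zero step observed. Jitter makes the result approximate, but the order of magnitude comes through.
// Estimate the timer's effective resolution by spinning until the reading
// changes and keeping the smallest non-zero increment seen.
let minStep = Infinity;
let prev = performance.now();
for (let i = 0; i < 1e6; i++) {
    const t = performance.now();
    if (t > prev) {
        minStep = Math.min(minStep, t - prev);
        prev = t;
    }
}
console.log('approximate resolution: ' + minStep.toFixed(6) + 'ms');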

Faster way of scaling text in browser? (help interpret test)

I need to scale lots of text nodes in the browser (support of all modern desktop and mobile browsers).
If I am right, there are two options that offer good performance: scaling text objects in Canvas, or scaling text nodes in the DOM using transform:matrix.
I have created a scenario to test both versions, but the results are inconclusive. Uncomment the testDOM() or testCanvas() function to start the test. (I am using jQuery and the CreateJS framework because they were convenient; it is possible to use vanilla JS, but I don't think that is the bottleneck here.) (It matters what portion of the screen you actually see, so please switch to full-screen view in CodePen.)
http://codepen.io/dandare/pen/pEJyYG
var WIDTH = 500;
var HEIGHT = 500;
var COUNT = 200;
var STEP = 1.02;
var MIN = 0.1;
var MAX = 10;
var stage;
var canvas;
var bg;
var canvasTexts = [];
var domTexts = [];
var domMatrix = [];
var dom;

function testDOM() {
    for (var i = 0; i < COUNT; i++) {
        var text = $("<div>Hello World</div>");
        var scale = MIN + Math.random() * 10;
        var matrix = [scale, 0, 0, scale, Math.random() * WIDTH, Math.random() * HEIGHT];
        text.css("transform", "matrix(" + matrix.join(',') + ")");
        domTexts.push(text);
        domMatrix.push(matrix);
    }
    dom = $('#dom');
    dom.append(domTexts);
    setTimeout(tickDOM, 1000);
}

function tickDOM() {
    for (var i = 0; i < domTexts.length; i++) {
        var text = domTexts[i];
        var matrix = domMatrix[i];
        var scale = matrix[0];
        scale *= STEP;
        if (scale > MAX)
            scale = MIN;
        matrix[0] = matrix[3] = scale;
        text.css("transform", "matrix(" + matrix.join(',') + ")");
    }
    requestAnimationFrame(tickDOM);
}

function testCanvas() {
    $('#dom').hide();
    stage = new createjs.Stage('canvas');
    createjs.Touch.enable(stage);
    createjs.Ticker.timingMode = createjs.Ticker.RAF;
    canvas = stage.canvas;
    devicePixelRatio = window.devicePixelRatio || 1;
    stage.scaleX = devicePixelRatio;
    stage.scaleY = devicePixelRatio;
    console.log('devicePixelRatio = ' + devicePixelRatio);
    stage.mouseMoveOutside = true;
    stage.preventSelection = false;
    stage.tickEnabled = false;
    stage.addChild(bg = new createjs.Shape());
    bg.graphics.clear();
    bg.graphics.f('#F2F2F2').drawRect(0, 0, 2 * WIDTH, HEIGHT);
    canvas.width = 2 * WIDTH * devicePixelRatio;
    canvas.height = HEIGHT * devicePixelRatio;
    canvas.style.width = 2 * WIDTH + 'px';
    canvas.style.height = HEIGHT + 'px';
    stage.update();
    for (var i = 0; i < COUNT; i++) {
        var text = new createjs.Text("Hello World", "10px", "#333333");
        text.scaleX = text.scaleY = MIN + Math.random() * 10;
        text.x = Math.random() * WIDTH;
        text.y = Math.random() * HEIGHT;
        stage.addChild(text);
        canvasTexts.push(text);
    }
    stage.update();
    setTimeout(tickCanvas, 1000);
}

function tickCanvas() {
    for (var i = 0; i < canvasTexts.length; i++) {
        var text = canvasTexts[i];
        text.scaleX = text.scaleY *= STEP;
        if (text.scaleX > MAX)
            text.scaleX = text.scaleY = MIN;
    }
    stage.update();
    requestAnimationFrame(tickCanvas);
}

testDOM();
//testCanvas();
My questions:
Is it possible to improve the performance of my tests? Am I doing something wrong?
The first 5-10 seconds are significantly slower, but I don't understand why. Does the browser somehow cache the text objects after some time? If yes, is the test unusable for real-world testing, where the objects don't zoom in a loop for a long period of time?
According to the Chrome profiling tool, the DOM version leaves 40% more idle time (i.e. is 40% faster) than the Canvas version, but the Canvas animation looks much smoother (after the initial 5-10 seconds of lagging). How should I interpret the profiling tool results?
In the DOM version I am trying to hide the parent of the text nodes before I apply the transformations and then unhide it, but it probably does not matter, because transform:matrix on an absolutely positioned element does not cause reflow. Am I right?
The DOM text nodes have some advantages over the Canvas nodes, like native mouse-over detection with cursor: pointer, or support for decorations (you cannot have underlined text in Canvas). Anything else I should know?
When setting transform:matrix I have to create a string that the browser must parse back into numbers. Is there a more efficient way of using transform:matrix?
Q.1
Is it possible to improve the performance of my tests? Am I doing something wrong?
Yes and no. (Yes, it can be improved; no, you are not doing anything inherently wrong, ignoring jQuery.)
Performance is browser- and device-dependent. For example, Firefox handles objects better than arrays, while Chrome prefers arrays; there is a long list of such differences just for the JavaScript.
Rendering also depends on the hardware: how much memory it has, what capabilities, and the particular drivers. Some hardware hates state changes, while other hardware handles them at full speed. Limiting state changes can improve the speed on one machine, while the extra code complexity will impact devices that don't need the optimisation.
The OS also plays a part.
Q.2
The first 5-10 seconds are significantly slower, but I don't understand why. Does the browser somehow cache the text objects after some time? If yes, is the test unusable for real-world testing, where the objects don't zoom in a loop for a long period of time?
Performance testing in JavaScript is very complicated, and testing a whole application (like your test) is not at all practical.
Why slow?
Many reasons: moving memory to the display device, and the optimising JavaScript compilers that run while the code runs and recompile whenever they see fit. This matters because un-optimised JS is SLOOOOOWWWWWWWW, and during those first seconds you are seeing it run unoptimised.
As well, in an environment like CodePen you also have to deal with all of its code running in the same context as yours. It makes memory, DOM, CPU, and GC demands in the same environment, so your code cannot be said to be isolated, nor the profiling results accurate.
Q.3
According to the Chrome profiling tool, the DOM version leaves 40% more idle time (i.e. is 40% faster) than the Canvas version, but the Canvas animation looks much smoother (after the initial 5-10 seconds of lagging). How should I interpret the profiling tool results?
That is the nature of requestAnimationFrame (rAF): it waits until the next frame is ready before it calls your function. Thus if your rendering runs 1ms past 1/60th of a second, you have missed the presentation of the current display refresh, and rAF will wait until the next one is due, 1/60th of a second minus 1ms, before your next callback fires. This can result in ~50% idle time.
There is not much that can be done other than making your render function smaller and calling it more often, but then you get extra overhead from the calls.
rAF can be called many times during a frame and will present all renders made during that frame at the same time. That way you will not get the overrun idle time, provided you keep an eye on the current time and ensure you do not overrun the 1/60th-second window of opportunity.
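A minimal sketch of that budgeting idea (my illustration, using a hypothetical renderChunk() that draws one slice of the scene and returns true while work remains): keep doing slices of work until the frame's time budget is nearly spent, then yield.
// Hypothetical time-budgeted render loop.
var FRAME_BUDGET_MS = 14; // leave headroom inside the ~16.7ms frame

function renderFrame(frameStart) {
    while (performance.now() - frameStart < FRAME_BUDGET_MS) {
        if (!renderChunk()) break; // no work left this frame
    }
    requestAnimationFrame(renderFrame);
}
requestAnimationFrame(renderFrame);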
Q.4
In the DOM version I am trying to hide the parent of the text nodes before I apply the transformations and then unhide it, but it probably does not matter, because transform:matrix on an absolutely positioned element does not cause reflow. Am I right?
Reflow will not be triggered until you exit the function, so hiding the parent at the start of a function and unhiding it at the end will not make much difference. JavaScript is blocking: nothing else will happen while you are inside a function.
Q.5
The DOM text nodes have some advantages over the Canvas nodes, like native mouse-over detection with cursor: pointer, or support for decorations (you cannot have underlined text in Canvas). Anything else I should know?
That will depend on what the intended use is. The DOM offers a full API for UI and presentation; Canvas offers rendering and pixel manipulation. The logic I use: if it takes more code to do it via the DOM than via canvas, it is a canvas job, and vice versa.
Q.6
When setting transform:matrix I have to create a string that the browser must parse back into numbers. Is there a more efficient way of using transform:matrix?
No. That is the CSS way.

How can I improve performance on my parallax scroll script?

I'm using Javascript & jQuery to build a parallax scroll script that manipulates an image in a figure element using transform:translate3d, and based on the reading I've done (Paul Irish's blog, etc), I've been informed the best solution for this task is to use requestAnimationFrame for performance reasons.
Although I understand how to write Javascript, I'm always finding myself uncertain of how to write good Javascript. In particular, while the code below seems to function correctly and smoothly, I'd like to get a few issues resolved that I'm seeing in Chrome Dev Tools.
$(document).ready(function() {
    function parallaxWrapper() {
        // Get the viewport dimensions
        var viewportDims = determineViewport();
        var parallaxImages = [];
        var lastKnownScrollTop;
        // For each figure containing a parallax
        $('figure.parallax').each(function() {
            // Save information about each parallax image
            var parallaxImage = {};
            parallaxImage.container = $(this);
            parallaxImage.containerHeight = $(this).height();
            // The image contained within the figure element
            parallaxImage.image = $(this).children('img.lazy');
            parallaxImage.offsetY = parallaxImage.container.offset().top;
            parallaxImages.push(parallaxImage);
        });
        $(window).on('scroll', function() {
            lastKnownScrollTop = $(window).scrollTop();
        });
        function animateParallaxImages() {
            $.each(parallaxImages, function(index, parallaxImage) {
                var speed = 3;
                var delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / speed;
                parallaxImage.image.css({
                    'transform': 'translate3d(0,' + delta + 'px,0)'
                });
            });
            window.requestAnimationFrame(animateParallaxImages);
        }
        animateParallaxImages();
    }
    parallaxWrapper();
});
Firstly, when I head to the 'Timeline' tab in Chrome Dev Tools and start recording, even with no actions being performed on the page, the "actions recorded" overlay count continues to climb at a rate of about ~40 per second.
Secondly, why is an "animation frame fired" executing every ~16ms, even when I am not scrolling or interacting with the page, as shown by the image below?
Thirdly, why is the Used JS Heap increasing in size without me interacting with the page? As shown in the image below. I have eliminated all other scripts that could be causing this.
Can anyone help me with some pointers to fix the above issues, and give me suggestions on how I should improve my code?
(1 & 2 -- same answer) The pattern you are using creates a repeating animation loop which attempts to fire at the same rate as the browser refreshes. That's usually 60 times per second, so the activity you're seeing is the loop executing approximately every 1000/60 ≈ 16ms. If there's no work to do, it still fires every 16ms.
(3) The browser consumes memory as needed for your animations but the browser does not reclaim that memory immediately. Instead it occasionally reclaims any orphaned memory in a process called garbage collection. So your memory consumption should go up for a while and then drop in a big chunk. If it doesn't behave that way, then you have a memory leak.
Edit: I had not seen the answers from #user1455003 and #mpd at the time I wrote this. They answered while I was writing the book below.
requestAnimationFrame is analogous to setTimeout, except the browser won't fire your callback function until it's in a "render" cycle, which typically happens about 60 times per second. setTimeout, on the other hand, can fire as fast as your CPU can handle if you want it to.
Both requestAnimationFrame and setTimeout have to wait until the next available "tick" (for lack of a better term) before they run. So, for example, if you use requestAnimationFrame it should run about 60 times per second, but if the browser's frame rate drops to 30fps (because you're trying to rotate a giant PNG with a large box-shadow) your callback function will only fire 30 times per second. Similarly, if you use setTimeout(..., 1000) it should run after 1000 milliseconds; however, if some heavy task keeps the CPU busy, your callback won't fire until the CPU has cycles to give. John Resig has a great article on JavaScript timers.
So why not use setTimeout(..., 16) instead of requestAnimationFrame? Because your CPU might have plenty of headroom while the browser's frame rate has dropped to 30fps. In that case you would be running calculations 60 times per second and trying to render those changes, but the browser can only handle half that much; your browser would be in a constant state of catch-up if you did it this way... hence the performance benefits of requestAnimationFrame.
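A minimal side-by-side sketch of the two loop styles (mine, for illustration): the rAF loop is paced by the display, while the setTimeout loop fires on its own schedule regardless of whether a frame can actually be presented.
// rAF loop: the browser schedules the callback to coincide with presentation.
function rafLoop(timestamp) {
    // ... update and draw ...
    requestAnimationFrame(rafLoop);
}
requestAnimationFrame(rafLoop);

// setTimeout loop: fires every ~16ms whether or not a frame will be shown,
// so under load it wastes work the compositor cannot present.
function timeoutLoop() {
    // ... update and draw ...
    setTimeout(timeoutLoop, 16);
}
timeoutLoop();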
For brevity, I am including all suggested changes in a single example below.
The reason you are seeing the animation frame fired so often is that you have a "recursive" animation function which is constantly firing. If you don't want it firing constantly, you can make sure it only fires while the user is scrolling.
The reason you are seeing the memory usage climb has to do with garbage collection, which is the browser's way of cleaning up stale memory. Every time you define a variable or function, the browser has to allocate a block of memory for that information. Browsers are smart enough to know when you are done using a certain variable or function and free up that memory for reuse; however, the garbage is only collected when there is enough stale memory worth collecting. I can't see the scale of the memory graph in your screenshot, but if the memory is increasing in kilobyte-sized amounts, the browser may not clean it up for several minutes. You can minimize the allocation of new memory by reusing variables and functions. In your example, every animation frame (60x per second) defines a new function (used in $.each) and 2 variables (speed and delta). These are easily reusable (see code).
If your memory usage continues to increase ad infinitum, then there is a memory leak problem elsewhere in your code. Grab a beer and start doing research as the code you've posted here is leak-free. The biggest culprit is referencing an object (JS object or DOM node) which then gets deleted and the reference still hangs around. For example, if you bind a click event to a DOM node, delete the node, and never unbind the event handler... there ya go, a memory leak.
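To make that last failure mode concrete, here is a minimal sketch (my illustration, using jQuery as the rest of this answer does) of the detached-node leak described above, and its fix:
// Leak: the click handler closes over `node`; removing the element from the
// DOM does not release it while the handler (and jQuery's event data) still
// reference it.
var node = $('<div>click me</div>').appendTo('body');
node.on('click', function() { console.log('clicked'); });
node.detach(); // gone from the page, but still retained in memory

// Fix: unbind handlers before discarding the element (or use .remove(),
// which lets jQuery clean up its event data), then drop the last reference.
node.off('click');
node = null; // now GC can reclaim it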
$(document).ready(function() {
    function parallaxWrapper() {
        // Get the viewport dimensions
        var $window = $(window),
            speed = 3,
            viewportDims = determineViewport(),
            parallaxImages = [],
            isScrolling = false,
            scrollingTimer = 0,
            lastKnownScrollTop;
        // For each figure containing a parallax
        $('figure.parallax').each(function() {
            // The browser should clean up this function and $this variable - no need for reuse
            var $this = $(this);
            // Save information about each parallax image
            parallaxImages.push({
                container: $this,
                containerHeight: $this.height(),
                // The image contained within the figure element
                image: $this.children('img.lazy'),
                offsetY: $this.offset().top
            });
        });
        // This is a bit overkill and could probably be defined inline below
        // I just wanted to illustrate reuse...
        function onScrollEnd() {
            isScrolling = false;
        }
        $window.on('scroll', function() {
            lastKnownScrollTop = $window.scrollTop();
            if (!isScrolling) {
                isScrolling = true;
                animateParallaxImages();
            }
            clearTimeout(scrollingTimer);
            scrollingTimer = setTimeout(onScrollEnd, 100);
        });
        function transformImage(index, parallaxImage) {
            parallaxImage.image.css({
                'transform': 'translate3d(0,' + (
                    (
                        lastKnownScrollTop +
                        (viewportDims.height - parallaxImage.containerHeight) / 2 -
                        parallaxImage.offsetY
                    ) / speed
                ) + 'px,0)'
            });
        }
        function animateParallaxImages() {
            $.each(parallaxImages, transformImage);
            if (isScrolling) {
                window.requestAnimationFrame(animateParallaxImages);
            }
        }
    }
    parallaxWrapper();
});
#markE's answer is right on for 1 & 2
(3) is due to the fact that your animation loop is infinitely recursive:
function animateParallaxImages() {
    $.each(parallaxImages, function(index, parallaxImage) {
        var speed = 3;
        var delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / speed;
        parallaxImage.image.css({
            'transform': 'translate3d(0,' + delta + 'px,0)'
        });
    });
    window.requestAnimationFrame(animateParallaxImages); // recursing here, but there is no base case
}
animateParallaxImages(); // Kick it off
If you look at the example on MDN:
var start = null;
var element = document.getElementById("SomeElementYouWantToAnimate");
function step(timestamp) {
    if (!start) start = timestamp;
    var progress = timestamp - start;
    element.style.left = Math.min(progress / 10, 200) + "px";
    if (progress < 2000) {
        window.requestAnimationFrame(step);
    }
}
window.requestAnimationFrame(step);
I would suggest either stopping recursion at some point, or refactor your code so functions/variables aren't being declared in the loop:
var SPEED = 3; // constant, so only declare once
var delta;     // declare outside of the function to reduce the number of allocations needed
function imageIterator(index, parallaxImage) {
    delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / SPEED;
    parallaxImage.image.css({
        'transform': 'translate3d(0,' + delta + 'px,0)'
    });
}
function animateParallaxImages() {
    $.each(parallaxImages, imageIterator); // you could also change this to a traditional for(...) loop for a small performance gain
    window.requestAnimationFrame(animateParallaxImages); // recursing here, but there is no base case
}
animateParallaxImages(); // Kick it off
Try getting rid of the animation loop and putting the scroll changes in the 'scroll' function. This will prevent your script from doing transforms when lastKnownScrollTop is unchanged.
$(window).on('scroll', function() {
    lastKnownScrollTop = $(window).scrollTop();
    $.each(parallaxImages, function(index, parallaxImage) {
        var speed = 3;
        var delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / speed;
        parallaxImage.image.css({
            'transform': 'translate3d(0,' + delta + 'px,0)'
        });
    });
});

IE10 laggy rendering for rapid positioning changes

I am creating a rendering engine for box2djs that uses elements on the page to render rather than canvas, because it is much easier to style and manipulate elements than it is to implement the same effects on Canvas.
Anyways, Chrome (best as always) renders it flawlessly at 60fps the whole time, whereas IE10 starts lagging once it is dealing with many elements (about 20+ on my machine).
The thing is, IE10 beats V8 (Chrome's JS engine) in the WebKit SunSpider benchmark, so I don't understand why it would be laggier in IE10 than in Chrome.
Why does IE10 start lagging when Chrome doesn't, if its JavaScript engine is faster?
My only guess is that IE10 is slower at page rendering and can't handle that many redraws (60 times a second).
Here's my rendering code:
JS
function drawShape(shape) {
    if (shape.m_type === b2Shape.e_circleShape) {
        var circle = shape,
            pos = circle.m_position,
            r = circle.m_radius,
            ax = circle.m_R.col1,
            pos2 = new b2Vec2(pos.x + r * ax.x, pos.y + r * ax.y);
        var div = document.getElementById(shape.GetUserData());
        if (div != undefined) {
            var x = shape.m_position.x - shape.m_radius,
                y = shape.m_position.y - shape.m_radius,
                r = circle.m_radius;
            div.style.left = x + "px";
            div.style.top = y + "px";
        }
    } else {
        var poly = shape;
        var div = document.getElementById(shape.GetUserData());
        if (div != undefined) {
            var x = poly.m_position.x - (poly.m_vertices[0].x),
                y = poly.m_position.y - (poly.m_vertices[0].y);
            div.style.left = x + "px";
            div.style.top = y + "px";
        }
    }
}
If you are unfamiliar with box2d: this function is called for each shape from drawWorld(), and drawWorld() is called on each tick. I have my ticks set at 1000/60 milliseconds, i.e. 60 frames per second.
My hunch is that IE10 is struggling with the repaint and reflow of your page. When you render your elements on the page and move them around with their styling, it causes TONS of repaints. As to why it's performing worse than Chrome, it's probably because of the underlying layout/rendering engine.
IE uses the Trident engine, developed by yours truly, Microsoft, which has been around since IE4.
Chrome, on the other hand, uses WebKit, along with Safari and, recently, Opera.
Nicole Sullivan has a good article explaining repaint/reflow process: http://www.stubbornella.org/content/2009/03/27/reflows-repaints-css-performance-making-your-javascript-slow/
If you want to improve the performance of your page on IE10, maybe using canvas is your answer.
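One thing worth trying before switching to canvas (my suggestion, not part of the original answer): updating left/top forces layout on every frame, whereas a translate transform can often be handled without reflow. A hedged sketch of the same positioning done with transforms:
// Position the element with a transform instead of left/top; transforms do
// not trigger reflow, which may reduce IE10's per-frame layout cost.
function moveShapeDiv(div, x, y) {
    var value = "translate(" + x + "px," + y + "px)";
    div.style.msTransform = value; // -ms- prefix for older IE builds
    div.style.transform = value;
}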
