I need to scale lots of text nodes in the browser (with support for all modern desktop and mobile browsers).
If I am right, there are two options that offer good performance: scaling text objects in Canvas, or scaling text nodes in the DOM using transform:matrix.
I have created a scenario to test both versions, but the results are inconclusive. Uncomment the testDOM() or testCanvas() function to start the test. (I am using jQuery and the CreateJS framework because it was convenient; it is possible to use vanilla JS, but I don't think that is the bottleneck here.) (It matters what portion of the screen you actually see, so please switch to full-screen view in CodePen.)
http://codepen.io/dandare/pen/pEJyYG
var WIDTH = 500;
var HEIGHT = 500;
var COUNT = 200;
var STEP = 1.02;
var MIN = 0.1;
var MAX = 10;
var stage;
var canvas;
var bg;
var canvasTexts = [];
var domTexts = [];
var domMatrix = [];
var dom;
function testDOM() {
    for (var i = 0; i < COUNT; i++) {
        var text = $("<div>Hello World</div>");
        var scale = MIN + Math.random() * 10;
        var matrix = [scale, 0, 0, scale, Math.random() * WIDTH, Math.random() * HEIGHT];
        text.css("transform", "matrix(" + matrix.join(',') + ")");
        domTexts.push(text);
        domMatrix.push(matrix);
    }
    dom = $('#dom');
    dom.append(domTexts);
    setTimeout(tickDOM, 1000);
}
function tickDOM() {
    for (var i = 0; i < domTexts.length; i++) {
        var text = domTexts[i];
        var matrix = domMatrix[i];
        var scale = matrix[0];
        scale *= STEP;
        if (scale > MAX)
            scale = MIN;
        matrix[0] = matrix[3] = scale;
        text.css("transform", "matrix(" + matrix.join(',') + ")");
    }
    requestAnimationFrame(tickDOM);
}
function testCanvas() {
    $('#dom').hide();
    stage = new createjs.Stage('canvas');
    createjs.Touch.enable(stage);
    createjs.Ticker.timingMode = createjs.Ticker.RAF;
    canvas = stage.canvas;
    var devicePixelRatio = window.devicePixelRatio || 1;
    stage.scaleX = devicePixelRatio;
    stage.scaleY = devicePixelRatio;
    console.log('devicePixelRatio = ' + devicePixelRatio);
    stage.mouseMoveOutside = true;
    stage.preventSelection = false;
    stage.tickEnabled = false;
    stage.addChild(bg = new createjs.Shape());
    bg.graphics.clear();
    bg.graphics.f('#F2F2F2').drawRect(0, 0, 2 * WIDTH, HEIGHT);
    canvas.width = 2 * WIDTH * devicePixelRatio;
    canvas.height = HEIGHT * devicePixelRatio;
    canvas.style.width = 2 * WIDTH + 'px';
    canvas.style.height = HEIGHT + 'px';
    stage.update();
    for (var i = 0; i < COUNT; i++) {
        var text = new createjs.Text("Hello World", "10px", "#333333");
        text.scaleX = text.scaleY = MIN + Math.random() * 10;
        text.x = Math.random() * WIDTH;
        text.y = Math.random() * HEIGHT;
        stage.addChild(text);
        canvasTexts.push(text);
    }
    stage.update();
    setTimeout(tickCanvas, 1000);
}
function tickCanvas() {
    for (var i = 0; i < canvasTexts.length; i++) {
        var text = canvasTexts[i];
        text.scaleX = text.scaleY *= STEP;
        if (text.scaleX > MAX)
            text.scaleX = text.scaleY = MIN;
    }
    stage.update();
    requestAnimationFrame(tickCanvas);
}
testDOM();
//testCanvas();
My questions:
Is it possible to improve the performance of my tests? Am I doing something wrong?
The first 5-10 seconds are significantly slower, but I don't understand why. Does the browser somehow cache the text objects after some time? If so, is the test unusable for real-world scenario testing, where the objects don't zoom in a loop for a longer period of time?
According to the Chrome Profiling tool the DOM version leaves 40% more idle time (i.e. is roughly 40% faster) than the Canvas version, but the Canvas animation looks much smoother (after the initial 5-10 seconds of lagging). How should I interpret the profiling tool results?
In the DOM version I am trying to hide the parent of the text nodes before I apply the transformations and then unhide it, but it probably does not matter, because transform:matrix on an absolutely positioned element does not cause reflow. Am I right?
The DOM text nodes have some advantages over the Canvas nodes, like native mouse-over detection with cursor: pointer or support for decorations (you cannot have underlined text in Canvas). Anything else I should know?
When setting transform:matrix I have to create a string that the browser must then parse back into numbers. Is there a more efficient way of using transform:matrix?
Q.1
Is it possible to improve the performance of my tests? Am I doing
something wrong?
Yes and no. (Yes, the tests can be improved; no, you are not doing anything inherently wrong, ignoring jQuery.)
Performance is browser- and device-dependent. For example, Firefox handles objects better than arrays, while Chrome prefers arrays. There is a long list of such differences just for the JavaScript.
Then the rendering is dependent on the hardware: how much memory, what capabilities, and the particular drivers. Some hardware hates state changes, while other hardware handles them at full speed. Limiting state changes can improve the speed on one machine, while the extra code complexity will impact devices that don't need the optimisation.
The OS also plays a part.
Q.2
The first 5-10 seconds are significantly slower but I don't understand
why. Does the browser somehow cache the text objects after some time?
If so, is the test unusable for real-world scenario testing where the
objects don't zoom in a loop for a longer period of time?
Performance testing in JavaScript is very complicated, and testing a whole application (like your test) is not at all practical.
Why slow?
Many reasons: moving memory to the display device, and the JavaScript optimising compiler, which runs while the code runs and recompiles as it sees fit. This impacts performance; un-optimised JS is SLOOOOOWWWWWWWW, and at first you are seeing it run unoptimised.
As well, in an environment like CodePen you also have to deal with all of its code running in the same context as yours; it has memory, DOM, CPU and GC demands in the same environment, and thus your code cannot be said to be isolated, nor the profiling results accurate.
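One practical way to reduce the influence of the optimising compiler on your numbers is to let the test run for a warm-up period before you start sampling frame times. A minimal sketch (the warm-up and sample durations are arbitrary, and runOneFrameOfWork is a made-up stand-in for the per-frame work in your test):
var WARM_UP_MS = 5000;   // let the JIT settle before measuring (arbitrary)
var SAMPLE_MS = 10000;   // how long to collect frame times (arbitrary)
var startTime = performance.now();
var lastFrame = startTime;
var frameTimes = [];

function measuredTick(now) {
    var sinceStart = now - startTime;
    if (sinceStart > WARM_UP_MS) {
        frameTimes.push(now - lastFrame);        // only record frames after the warm-up
    }
    lastFrame = now;
    runOneFrameOfWork();                         // hypothetical: the per-frame work being measured
    if (sinceStart < WARM_UP_MS + SAMPLE_MS) {
        requestAnimationFrame(measuredTick);
    } else {
        var avg = frameTimes.reduce(function (a, b) { return a + b; }, 0) / frameTimes.length;
        console.log('average frame time after warm-up: ' + avg.toFixed(2) + ' ms');
    }
}
requestAnimationFrame(measuredTick);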
Q.3
According to the Chrome Profiling tool the DOM version leaves 40% more
idle time (i.e. is roughly 40% faster) than the Canvas version but the
Canvas animation looks much smoother (after the initial 5-10 seconds of
lagging), how should I interpret the Profiling tool results?
That is the nature of requestAnimationFrame (rAF): it waits until the next frame is ready before it calls your function. Thus if you run 1ms past 1/60th of a second, you have missed the presentation of the current display refresh, and rAF will wait until the next one is due (1/60th of a second minus 1ms) before presenting and calling the next request. This can result in ~50% idle time.
There is not much that can be done other than making your render function smaller and calling it more often, but then you get extra overhead from the calls.
rAF can be called many times during a frame and will present all renders made during that frame at the same time. That way you will not get the overrun idle time, as long as you keep an eye on the current time and ensure you do not overrun the 1/60th-second window of opportunity.
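To make that last point concrete, here is a minimal sketch of budgeting work inside the rAF callback, assuming the per-frame work can be broken into small jobs (jobQueue and BUDGET_MS are made-up names):
var BUDGET_MS = 12;      // leave a few ms of the ~16.7ms frame for the browser itself (arbitrary)
var jobQueue = [];       // hypothetical queue of small work items, e.g. one text node update each

function tick(frameStart) {
    // do as many jobs as fit into this frame's budget, then hand control back
    while (jobQueue.length > 0 && (performance.now() - frameStart) < BUDGET_MS) {
        jobQueue.shift()();          // each entry is a function doing one small piece of work
    }
    requestAnimationFrame(tick);
}
requestAnimationFrame(tick);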
Q.4
In the DOM version I am trying to hide the parent of the text nodes
before I apply the transformations and then unhide it but it probably
does not matter because transform:matrix on absolutely positioned
element does not cause reflow, am I right?
Reflow will not be triggered until you exit the function, so hiding the parent at the start of the function and unhiding it at the end will not make much difference. JavaScript is blocking, which means nothing else will happen while you are inside a function.
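For what it's worth, a small sketch of the difference that actually matters here: pure style writes inside one function can be batched by the browser, whereas interleaving layout reads forces it to flush pending style/layout work on every iteration (domTexts/domMatrix are from the test above):
// writes only: the browser can batch these and do its layout/paint work once, after the function returns
for (var i = 0; i < domTexts.length; i++) {
    domTexts[i].css("transform", "matrix(" + domMatrix[i].join(',') + ")");
}

// interleaving a layout read forces the browser to flush pending work on every iteration
for (var i = 0; i < domTexts.length; i++) {
    domTexts[i].css("transform", "matrix(" + domMatrix[i].join(',') + ")");
    var h = domTexts[i][0].offsetHeight;   // this read is what hurts, not the missing hide/unhide
}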
Q.5
The DOM text nodes have some advantages over the Canvas nodes like
native mouse over detection with cursor: pointer or support for
decorations (you can not have underlined text in Canvas). Anything
else I should know?
That will depend on what the intended use is. The DOM offers a full API for UI and presentation; canvas offers rendering and pixel manipulation. The logic I use: if it takes more code to do it via the DOM than via canvas, it is a canvas job, and vice versa.
Q.6
When setting the transform:matrix I have to create a string that the
browser must then parse back into numbers, is there a more efficient way
of using transform:matrix?
No. That is the CSS way.
Related
I'm currently tiling many PNG images on several stacked FastLayers with Konva.js. The PNGs contain opacity, and they do not require dragging or hitboxes. The tiles are replaced often, and this seems to work well for medium-sized grids with dimensions of around 30x30. Once the tiles start growing to around 100x100, or even 60x60, the performance begins to slow when replacing individual tiles.
I've started to work on "chunking" tiles, i.e., adding tiles into smaller FastLayer groups. For example, a single 100x100 FastLayer would be divided into several 10x10 FastLayers. When a single tile changes, the idea is that only that chunk should re-render, ideally speeding up the rendering time overall.
Is this a good design to attempt, or should I try a different approach? I've looked over the performance tips in the Konva.js documentation, but I haven't seen anything directly relevant to this case.
So, after some research and tinkering, I've discovered the fastest way to render ~4000 images.
Don't use React components for Konva.js. I use React to structure my app, but I've skipped using an intermediate library for Konva.js rendering. Using React Components for the canvas will halve your performance.
Cache common images. I use a simple LRU cache to reuse HTMLImageElement objects.
Reuse Konva.js nodes (Konva.Image) whenever possible. My implementation is rendering a grid of images. The locations do not change, but the images may. Before, I would destroy() a node, and then add another. The destroy() causes an additional render, which creates jank for your users. Instead, I just use the image() method in combination with id() and name() to find and replace images at grid coordinates.
My app allows users to paint long strokes across the grid. This works OK in small strokes, when only using the literal mouse events. For long strokes, this does not work for two reasons. First, the OS and browser throttle the mouse events, giving you intermittent mouse events. Second, being in the middle of a render will give the same side effect. Instead, the software now detects long strokes, and "fills in" the missing coordinates that the user intended to draw between the intermittent mouse events.
Render at intervals. Since my grid can change often, I decided to sample the grid information 24 times a second, rather than allowing each tile change to queue up a batchDraw(). The underlying implementation is using RxJS to poll a Redux store once every 42ms, and only queues a batchDraw() if something has changed.
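For what it's worth, a minimal sketch of the "render at intervals" idea in plain Konva, without RxJS/Redux (the dirty flag, replaceTile and the 42ms interval are illustrative, and an existing Konva.Stage is assumed):
var layer = new Konva.FastLayer();
stage.add(layer);                       // assumes an existing Konva.Stage

var dirty = false;

function replaceTile(id, htmlImage) {
    var node = layer.findOne('#' + id); // reuse the existing Konva.Image node at that grid position
    if (node) {
        node.image(htmlImage);          // swap the bitmap instead of destroy() + add
        dirty = true;                   // remember that something changed
    }
}

// sample roughly 24 times a second and only redraw when something actually changed
setInterval(function () {
    if (dirty) {
        dirty = false;
        layer.batchDraw();
    }
}, 42);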
Caching definitely helps performance, but so does hiding. Konva doesn't (or didn't, the last time I researched this) do any view culling. Below is code I used to hide the island shapes in my Konva strategy game.
stage.on('dragmove', function() {
    cullView();
});

function cullView() {
    var boundingX = ((-1 * (stage.x() * (1 / zoomLevel))) - (window.innerWidth / 2)) - degreePixels;
    var boundingY = ((-1 * (stage.y() * (1 / zoomLevel))) - (window.innerHeight / 2)) - degreePixels;
    var boundingWidth = (2 * window.innerWidth * (1 / zoomLevel)) + (2 * degreePixels);
    var boundingHeight = (2 * window.innerHeight * (1 / zoomLevel)) + (2 * degreePixels);
    var x = 0;
    var y = 0;
    for (var i = 0; i < oceanIslands.length; i++) {
        x = oceanIslands[i].getX();
        y = oceanIslands[i].getY();
        if (((x > boundingX) && (x < (boundingX + boundingWidth))) && ((y > boundingY) && (y < (boundingY + boundingHeight)))) {
            if (!oceanIslands[i].visible()) {
                oceanIslands[i].show();
                oceanIslands[i].clearCache();
                if (zoomLevel <= cacheMaxZoom) {
                    oceanIslands[i].cache();
                }
            }
        } else {
            oceanIslands[i].hide();
        }
    }
}
I'm using JavaScript & jQuery to build a parallax scroll script that manipulates an image in a figure element using transform:translate3d, and based on the reading I've done (Paul Irish's blog, etc.), I've been informed the best solution for this task is to use requestAnimationFrame for performance reasons.
Although I understand how to write JavaScript, I'm always finding myself uncertain of how to write good JavaScript. In particular, while the code below seems to function correctly and smoothly, I'd like to get a few issues resolved that I'm seeing in Chrome Dev Tools.
$(document).ready(function() {
    function parallaxWrapper() {
        // Get the viewport dimensions
        var viewportDims = determineViewport();
        var parallaxImages = [];
        var lastKnownScrollTop;

        // Foreach figure containing a parallax
        $('figure.parallax').each(function() {
            // Save information about each parallax image
            var parallaxImage = {};
            parallaxImage.container = $(this);
            parallaxImage.containerHeight = $(this).height();
            // The image contained within the figure element
            parallaxImage.image = $(this).children('img.lazy');
            parallaxImage.offsetY = parallaxImage.container.offset().top;
            parallaxImages.push(parallaxImage);
        });

        $(window).on('scroll', function() {
            lastKnownScrollTop = $(window).scrollTop();
        });

        function animateParallaxImages() {
            $.each(parallaxImages, function(index, parallaxImage) {
                var speed = 3;
                var delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / speed;
                parallaxImage.image.css({
                    'transform': 'translate3d(0,' + delta + 'px,0)'
                });
            });
            window.requestAnimationFrame(animateParallaxImages);
        }
        animateParallaxImages();
    }
    parallaxWrapper();
});
Firstly, when I head to the 'Timeline' tab in Chrome Dev Tools, and start recording, even with no actions on the page being performed, the "actions recorded" overlay count continues to climb, at a rate of about ~40 per second.
Secondly, why is an "animation frame fired" executing every ~16ms, even when I am not scrolling or interacting with the page, as shown by the image below?
Thirdly, why is the Used JS Heap increasing in size without me interacting with the page? As shown in the image below. I have eliminated all other scripts that could be causing this.
Can anyone help me with some pointers to fix the above issues, and give me suggestions on how I should improve my code?
(1 & 2 -- same answer) The pattern you are using creates a repeating animation loop which attempts to fire at the same rate as the browser refreshes. That's usually 60 times per second, so the activity you're seeing is the loop executing approximately every 1000/60 = 16ms. If there's no work to do, it still fires every 16ms.
(3) The browser consumes memory as needed for your animations but the browser does not reclaim that memory immediately. Instead it occasionally reclaims any orphaned memory in a process called garbage collection. So your memory consumption should go up for a while and then drop in a big chunk. If it doesn't behave that way, then you have a memory leak.
Edit: I had not seen the answers from #user1455003 and #mpd at the time I wrote this. They answered while I was writing the book below.
requestAnimationFrame is analogous to setTimeout, except the browser won't fire your callback function until it's in a "render" cycle, which typically happens about 60 times per second. setTimeout, on the other hand, can fire as fast as your CPU can handle if you want it to.
Both requestAnimationFrame and setTimeout have to wait until the next available "tick" (for lack of a better term) before they will run. So, for example, if you use requestAnimationFrame it should run about 60 times per second, but if the browser's frame rate drops to 30fps (because you're trying to rotate a giant PNG with a large box-shadow) your callback function will only fire 30 times per second. Similarly, if you use setTimeout(..., 1000) it should run after 1000 milliseconds. However, if some heavy task causes the CPU to get caught up doing work, your callback won't fire until the CPU has cycles to give. John Resig has a great article on JavaScript timers.
So why not use setTimeout(..., 16) instead of request animation frame? Because your CPU might have plenty of head room while the browser's frame rate has dropped to 30fps. In such a case you would be running calculations 60 times per second and trying to render those changes, but the browser can only handle half that much. Your browser would be in a constant state of catch-up if you do it this way... hence the performance benefits of requestAnimationFrame.
For brevity, I am including all suggested changes in a single example below.
The reason you are seeing the animation frame fired so often is because you have a "recursive" animation function which is constantly firing. If you don't want it firing constantly, you can make sure it only fires while the user is scrolling.
The reason you are seeing the memory usage climb has to do with garbage collection, which is the browser's way of cleaning up stale memory. Every time you define a variable or function, the browser has to allocate a block of memory for that information. Browsers are smart enough to know when you are done using a certain variable or function and free up that memory for reuse - however, they will only collect the garbage when there is enough stale memory worth collecting. I can't see the scale of the memory graph in your screenshot, but if the memory is increasing in kilobyte-sized amounts, the browser may not clean it up for several minutes. You can minimize the allocation of new memory by reusing variable names and functions. In your example, every animation frame (60x per second) defines a new function (used in $.each) and 2 variables (speed and delta). These are easily reusable (see code).
If your memory usage continues to increase ad infinitum, then there is a memory leak problem elsewhere in your code. Grab a beer and start doing research as the code you've posted here is leak-free. The biggest culprit is referencing an object (JS object or DOM node) which then gets deleted and the reference still hangs around. For example, if you bind a click event to a DOM node, delete the node, and never unbind the event handler... there ya go, a memory leak.
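As a small aside, a sketch of the leak pattern described above, with made-up names (the combined example promised earlier follows right after):
var node = document.getElementById('popup');             // hypothetical node
var bigData = new Array(1e6).join('x');                   // something large captured by the handler

node.addEventListener('click', function () {
    console.log(bigData.length);                          // the handler closes over bigData
});

// Removing the node from the DOM...
node.parentNode.removeChild(node);
// ...does not free it while `node` is still reachable from this scope,
// and through it the handler and bigData stay alive too.
// Drop the references once you are done with them:
node = null;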
$(document).ready(function() {
    function parallaxWrapper() {
        // Get the viewport dimensions
        var $window = $(window),
            speed = 3,
            viewportDims = determineViewport(),
            parallaxImages = [],
            isScrolling = false,
            scrollingTimer = 0,
            lastKnownScrollTop;

        // Foreach figure containing a parallax
        $('figure.parallax').each(function() {
            // The browser should clean up this function and $this variable - no need for reuse
            var $this = $(this);
            // Save information about each parallax image
            parallaxImages.push({
                container: $this,
                containerHeight: $this.height(),
                // The image contained within the figure element
                image: $this.children('img.lazy'),
                offsetY: $this.offset().top
            });
        });

        // This is a bit overkill and could probably be defined inline below
        // I just wanted to illustrate reuse...
        function onScrollEnd() {
            isScrolling = false;
        }

        $window.on('scroll', function() {
            lastKnownScrollTop = $window.scrollTop();
            if (!isScrolling) {
                isScrolling = true;
                animateParallaxImages();
            }
            clearTimeout(scrollingTimer);
            scrollingTimer = setTimeout(onScrollEnd, 100);
        });

        function transformImage(index, parallaxImage) {
            parallaxImage.image.css({
                'transform': 'translate3d(0,' + (
                    (
                        lastKnownScrollTop +
                        (viewportDims.height - parallaxImage.containerHeight) / 2 -
                        parallaxImage.offsetY
                    ) / speed
                ) + 'px,0)'
            });
        }

        function animateParallaxImages() {
            $.each(parallaxImages, transformImage);
            if (isScrolling) {
                window.requestAnimationFrame(animateParallaxImages);
            }
        }
    }
    parallaxWrapper();
});
#markE's answer is right on for 1 & 2
(3) Is due to the fact that your animation loop is infinitely recursive:
function animateParallaxImages() {
    $.each(parallaxImages, function(index, parallaxImage) {
        var speed = 3;
        var delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / speed;
        parallaxImage.image.css({
            'transform': 'translate3d(0,' + delta + 'px,0)'
        });
    });
    window.requestAnimationFrame(animateParallaxImages); // recursing here, but there is no base case
}
animateParallaxImages(); // Kick it off
If you look at the example on MDN:
var start = null;
var element = document.getElementById("SomeElementYouWantToAnimate");

function step(timestamp) {
    if (!start) start = timestamp;
    var progress = timestamp - start;
    element.style.left = Math.min(progress / 10, 200) + "px";
    if (progress < 2000) {
        window.requestAnimationFrame(step);
    }
}
window.requestAnimationFrame(step);
I would suggest either stopping the recursion at some point, or refactoring your code so functions/variables aren't being declared in the loop:
var SPEED = 3; // constant so only declare once
var delta;     // declare outside of the function to reduce the number of allocations needed

function imageIterator(index, parallaxImage) {
    delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / SPEED;
    parallaxImage.image.css({
        'transform': 'translate3d(0,' + delta + 'px,0)'
    });
}

function animateParallaxImages() {
    $.each(parallaxImages, imageIterator); // you could also change this to a traditional for(...) loop for a small performance gain
    window.requestAnimationFrame(animateParallaxImages); // recursing here, but there is no base case
}
animateParallaxImages(); // Kick it off
Try getting rid of the animation loop and putting the scroll changes in the 'scroll' function. This will prevent your script from doing transforms when lastKnownScrollTop is unchanged.
$(window).on('scroll', function() {
    lastKnownScrollTop = $(window).scrollTop();
    $.each(parallaxImages, function(index, parallaxImage) {
        var speed = 3;
        var delta = ((lastKnownScrollTop + ((viewportDims.height - parallaxImage.containerHeight) / 2)) - parallaxImage.offsetY) / speed;
        parallaxImage.image.css({
            'transform': 'translate3d(0,' + delta + 'px,0)'
        });
    });
});
I'm new to game development. Currently I'm doing a game for the js13kgames contest, so the game should be small, and that's why I don't use any of the modern popular frameworks.
While developing my infinite game loop I found several articles and pieces of advice to implement it. Right now it looks like this:
self.gameLoop = function () {
    self.dt = 0;
    var now;
    var lastTime = timestamp();
    var fpsmeter = new FPSMeter({decimals: 0, graph: true, theme: 'dark', left: '5px'});

    function frame() {
        fpsmeter.tickStart();
        now = window.performance.now();

        // first variant - delta is increasing..
        self.dt = self.dt + Math.min(1, (now - lastTime) / 1000);

        // second variant - delta is stable..
        self.dt = (now - lastTime) / 16;
        self.dt = (self.dt > 10) ? 10 : self.dt;

        self.clearRect();
        self.createWeapons();
        self.createTargets();
        self.update('weapons');
        self.render('weapons');
        self.update('targets');
        self.render('targets');

        self.ticks++;
        lastTime = now;
        fpsmeter.tick();
        requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
};
So the problem is in self.dt. I've eventually found out that the first variant is not suitable for my game, because it increases forever and the speed of the weapons increases with it as well (e.g. this.position.x += (Math.cos(this.angle) * this.speed) * self.dt;).
Second variant looks more suitable, but does it correspond to this kind of loop (http://codeincomplete.com/posts/2013/12/4/javascript_game_foundations_the_game_loop/)?
Here's an implementation of an HTML5 rendering system using a fixed time step with a variable rendering time:
http://jsbin.com/ditad/10/edit?js,output
It's based on this article:
http://gameprogrammingpatterns.com/game-loop.html
Here is the game loop:
//Set the frame rate
var fps = 60,
    //Get the start time
    start = Date.now(),
    //Set the frame duration in milliseconds
    frameDuration = 1000 / fps,
    //Initialize the lag offset
    lag = 0;

//Start the game loop
gameLoop();

function gameLoop() {
    requestAnimationFrame(gameLoop, canvas);

    //Calculate the time that has elapsed since the last frame
    var current = Date.now(),
        elapsed = current - start;
    start = current;

    //Add the elapsed time to the lag counter
    lag += elapsed;

    //Update the frame if the lag counter is greater than or
    //equal to the frame duration
    while (lag >= frameDuration) {
        //Update the logic
        update();
        //Reduce the lag counter by the frame duration
        lag -= frameDuration;
    }

    //Calculate the lag offset and use it to render the sprites
    var lagOffset = lag / frameDuration;
    render(lagOffset);
}
The render function calls a render method on each sprite, passing it a reference to the lagOffset:
function render(lagOffset) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    sprites.forEach(function(sprite) {
        ctx.save();
        //Call the sprite's `render` method and feed it the
        //canvas context and lagOffset
        sprite.render(ctx, lagOffset);
        ctx.restore();
    });
}
Here's the sprite's render method that uses the lag offset to interpolate the sprite's render position on the canvas.
o.render = function(ctx, lagOffset) {
    //Use the `lagOffset` and previous x/y positions to
    //calculate the render positions
    o.renderX = (o.x - o.oldX) * lagOffset + o.oldX;
    o.renderY = (o.y - o.oldY) * lagOffset + o.oldY;

    //Render the sprite
    ctx.strokeStyle = o.strokeStyle;
    ctx.lineWidth = o.lineWidth;
    ctx.fillStyle = o.fillStyle;
    ctx.translate(
        o.renderX + (o.width / 2),
        o.renderY + (o.height / 2)
    );
    ctx.beginPath();
    ctx.rect(-o.width / 2, -o.height / 2, o.width, o.height);
    ctx.stroke();
    ctx.fill();

    //Capture the sprite's current positions to use as
    //the previous position on the next frame
    o.oldX = o.x;
    o.oldY = o.y;
};
The important part is this bit of code that uses the lagOffset and the difference in the sprite's rendered position between frames to figure out its new current canvas position:
o.renderX = (o.x - o.oldX) * lagOffset + o.oldX;
o.renderY = (o.y - o.oldY) * lagOffset + o.oldY;
Notice that the oldX and oldY values are being re-calculated each frame at the end of the method, so that they can be used in the next frame to help figure out the difference.
o.oldX = o.x;
o.oldY = o.y;
I'm actually not sure if this interpolation is completely correct or if this is best way to do it. If anyone out there reading this knows that it's wrong, please let us know :)
The modern version of requestAnimationFrame now sends in a timestamp that you can use to calculate elapsed time. When your desired time interval has elapsed you can do your update, create and render tasks.
Here's example code:
var lastTime;
var requiredElapsed = 1000 / 10; // desired interval is 10fps

requestAnimationFrame(loop);

function loop(now) {
    requestAnimationFrame(loop);
    if (!lastTime) { lastTime = now; }
    var elapsed = now - lastTime;
    if (elapsed > requiredElapsed) {
        // do stuff
        lastTime = now;
    }
}
This isn't really an answer to your question, and without knowing more about the particular game I can't say for sure if it will help you, but do you really need to know dt (and FPS)?
In my limited forays into JS game development I've found that often you don't really need to calculate any kind of dt, as you can usually come up with a sensible default value based on your expected frame rate, and make anything time-based (such as weapon reloading) simply work based on the number of ticks (i.e. a bow might take 60 ticks to reload, ~1 second @ ~60FPS; see the sketch below).
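A minimal sketch of that tick-counting idea (the names are made up), where timing is expressed in frames rather than milliseconds:
var RELOAD_TICKS = 60;            // ~1 second at ~60FPS
var bow = { cooldown: 0 };

function update() {               // called once per tick from the game loop
    if (bow.cooldown > 0) {
        bow.cooldown--;           // count down one tick per frame
    }
}

function tryShoot() {
    if (bow.cooldown === 0) {
        // fire the arrow here...
        bow.cooldown = RELOAD_TICKS;
    }
}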
I usually use window.setTimeout() rather than window.requestAnimationFrame(), which I've found generally provides a more stable frame rate and allows you to define a sensible default to use in place of dt. On the downside, the game will be more of a resource hog and less performant on slower machines (or if the user has a lot of other things running), but depending on your use case those may not be real concerns.
Now this is purely anecdotal advice so you should take it with a pinch of salt, but it has served me pretty well in the past. It all depends on whether you mind the game running more slowly on older/less powerful machines, and how efficient your game loop is. If it's something simple that doesn't need to display real times you might be able to do away with dt completely.
At some point you will want to think about decoupling your physics from your rendering. Otherwise your players could have inconsistent physics. For example, someone with a beefy machine getting 300fps will have very sped up physics compared to someone chugging along at 30fps. This could manifest in the first player cruising around in a mario-like scrolling game at super speed and the other player crawling at half speed (if you did all your testing at 60fps). A way to fix that is to introduce delta time steps. The idea is that you find the time between each frame and use that as part of your physics calculations. It keeps the gameplay consistent regardless of frame rate. Here is a good article to get you started: http://gafferongames.com/game-physics/fix-your-timestep/
requestAnimationFrame will not fix this inconsistency, but it is still a good thing to use sometimes as it has battery saving advantages. Here is a source for more info http://www.chandlerprall.com/2012/06/requestanimationframe-is-not-your-logics-friend/
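A minimal sketch of the delta-time idea from the paragraph above (names are illustrative): express speeds per second and multiply by the elapsed time each frame, so a 30fps machine and a 300fps machine cover the same distance per real second:
var last = performance.now();
var player = { x: 0, speed: 120 };          // 120 pixels per *second*, not per frame

function frame(now) {
    var dt = (now - last) / 1000;           // elapsed time in seconds
    last = now;
    dt = Math.min(dt, 0.1);                 // clamp huge gaps (tab switches, breakpoints, etc.)

    player.x += player.speed * dt;          // same distance per real second at any frame rate

    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);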
I did not check the logic of the math in your code; however, here's what works for me:
GameBox = function()
{
    this.lastFrameTime = Date.now();
    this.currentFrameTime = Date.now();
    this.timeElapsed = 0;
    this.updateInterval = 2000; //in ms
}

GameBox.prototype.gameLoop = function()
{
    window.requestAnimationFrame(this.gameLoop.bind(this));
    this.lastFrameTime = this.currentFrameTime;
    this.currentFrameTime = Date.now();
    this.timeElapsed += this.currentFrameTime - this.lastFrameTime;
    if (this.timeElapsed >= this.updateInterval)
    {
        this.timeElapsed = 0;
        this.update(); //modify data which is used to render
    }
    this.render();
}
This implementation is independent of the CPU speed (ticks). Hope you can make use of it!
A great approach for your game engine would be to think in objects and entities. You can think of everything in your world as objects and entities. Then you want to make a game object manager that holds a list of all your game objects, and a common communication mechanism in the engine so game objects can fire event triggers. The entities in your game (for example a player) would not need to inherit from anything to get the ability to render to the screen or have collision detection. You would simply give the entity the common methods that the game engine is looking for, and then let the game engine handle the entity as it sees fit. Entities in your game can be created or destroyed at any time, so you should not hard-code any entities at all in the game loop.
You will want other objects in your game engine to respond to event triggers that the engine has received. This can be done with methods on the entity: the game engine checks whether the method is available and, if it is, passes the events to the entity. Do not hard-code any of your game logic into the engine; it hurts portability and limits your ability to expand the game later on.
The problem with your code is, first, that you're calling the different objects' renders and updates in the wrong order. You need to call ALL your updates, then call ALL your renders, in that order. Another is that hard-coding objects into the loop is going to give you a lot of problems when you want one of the objects to no longer be in the game, or when you want to add more objects to the game later on.
Your game objects will have an update() and a render(); your game engine will look for those functions on the object/entity and call them every frame. You can get very fancy and make the engine check whether the game object/entity has the functions before calling them; for example, maybe you want an object that has an update() but never renders anything to the screen. You could make the game object functions optional by having the engine check before calling them. It's also good practice to have an init() function for all game objects. When the game engine starts up the scene and creates the objects, it will call each game object's init() once when first creating the object, and then call update() every frame. That way you have a function that runs only once on creation and another that runs every frame.
Delta time is not really needed, as window.requestAnimationFrame(frame); will give you ~60fps. So if you're keeping track of the frame count, you can tell how much time has passed. Different objects in your game can then (based on a set point in the game and what the frame count was at that point) determine how long they have been doing something from the new frame count.
window.requestAnimationFrame = window.requestAnimationFrame || function(callback) { window.setTimeout(callback, 16); };

gameEngine = function () {
    this.frameCount = 0;
    var self = this;

    this.update = function() {
        //loop over your objects and run each object's update function
    };

    this.render = function() {
        //loop over your objects and run each object's render function
    };

    this.frame = function() {
        self.update();
        self.render();
        self.frameCount++;
        window.requestAnimationFrame(self.frame);
    };

    this.frame();
};
I have created a full game engine located at https://github.com/Patrick-W-McMahon/Jinx-Engine/. If you review the code at https://github.com/Patrick-W-McMahon/Jinx-Engine/blob/master/JinxEngine.js you will see a fully functional game engine built 100% in JavaScript. It includes event handlers and permits action calls between objects that are passed into the engine using the event call stack. Check out some of the examples at https://github.com/Patrick-W-McMahon/Jinx-Engine/tree/master/examples where you will see how it works. The engine can run around 100,000 objects, all being rendered and executed per frame at a rate of 60fps. This was tested on a Core i5; different hardware may vary. Mouse and keyboard events are built into the engine; objects passed into the engine just need to listen for the events passed by the engine. Scene management and multi-scene support is currently being built in for more complex games. The engine also supports high pixel density screens.
Reviewing my source code should get you on the track for building a more fully functional game engine.
I would also like to point out that you should call requestAnimationFrame() when you're ready to repaint and not before (i.e. at the end of the game loop). One good example of why you should not call requestAnimationFrame() at the beginning of the loop is if you're using a canvas buffer. If you call requestAnimationFrame() at the beginning, then begin to draw to the canvas buffer, you can end up having it draw half of the new frame with the other half being the old frame. This will happen on every frame, depending on the time it takes to finish the buffer in relation to the repaint cycle (60fps). At the same time you would end up overlapping each frame, so the buffer gets more messed up as it loops over itself. This is why you should only call requestAnimationFrame() when the buffer is fully ready to be drawn to the canvas. By having requestAnimationFrame() at the end, you can have it skip a repaint if the buffer is not ready to draw, so every repaint is drawn as expected. The position of requestAnimationFrame() in the game loop makes a big difference.
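A rough sketch of that ordering with an off-screen buffer (update and render are hypothetical stand-ins for your game logic and drawing code): finish the buffer, copy it to the visible canvas, and only then request the next frame:
var canvas = document.getElementById('game');       // hypothetical visible canvas
var ctx = canvas.getContext('2d');
var buffer = document.createElement('canvas');      // off-screen buffer of the same size
buffer.width = canvas.width;
buffer.height = canvas.height;
var bctx = buffer.getContext('2d');

function frame() {
    update();                                        // game logic (hypothetical)
    bctx.clearRect(0, 0, buffer.width, buffer.height);
    render(bctx);                                    // draw the whole scene into the buffer (hypothetical)
    ctx.drawImage(buffer, 0, 0);                     // present the finished buffer in one blit
    requestAnimationFrame(frame);                    // only now ask for the next repaint
}
requestAnimationFrame(frame);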
I am creating a rendering engine for box2djs that uses elements on the page to render rather than canvas, because it is much easier to style and manipulate elements than it is to implement the same effects on Canvas.
Anyways, Chrome (best as always) renders it flawlessly at 60fps the whole time, whereas IE10 starts lagging once it is dealing with many elements (about 20+ on my machine).
The thing is, IE10 beats V8 (Chrome's JS engine) in the WebKit SunSpider benchmark, so I don't understand why it would be laggier in IE10 than in Chrome.
Why does IE10 start lagging when Chrome doesn't, if it is faster?
My only guess is that IE10 is slower at page rendering and can't handle that many redraws (60 times a second).
Here's my rendering code:
JS
function drawShape(shape) {
    if (shape.m_type === b2Shape.e_circleShape) {
        var circle = shape,
            pos = circle.m_position,
            r = circle.m_radius,
            ax = circle.m_R.col1,
            pos2 = new b2Vec2(pos.x + r * ax.x, pos.y + r * ax.y);
        var div = document.getElementById(shape.GetUserData());
        if (div != undefined) {
            var x = shape.m_position.x - shape.m_radius,
                y = shape.m_position.y - shape.m_radius,
                r = circle.m_radius;
            div.style.left = x + "px";
            div.style.top = y + "px";
        }
    } else {
        var poly = shape;
        var div = document.getElementById(shape.GetUserData());
        if (div != undefined) {
            var x = poly.m_position.x - (poly.m_vertices[0].x),
                y = poly.m_position.y - (poly.m_vertices[0].y);
            div.style.left = x + "px";
            div.style.top = y + "px";
        }
    }
}
If you are unfamiliar with box2d: this function is called for each shape from drawWorld(), and drawWorld() is called on each tick. I have my ticks set at 1000/60 milliseconds, i.e. 60 frames per second.
My hunch is that IE10 is struggling with the repaint and reflow of your page. So when you render your elements on the page and move them around and what not (with their styling), it'll cause TONS of repaints. As to why it's performing worse than Chrome, it's probably because of the underlying layout/rendering engine.
IE uses the Trident engine, developed by yours truly, Microsoft, which has been around since IE4.
Chrome, on the other hand, uses WebKit, along with Safari and, more recently, Opera.
Nicole Sullivan has a good article explaining repaint/reflow process: http://www.stubbornella.org/content/2009/03/27/reflows-repaints-css-performance-making-your-javascript-slow/
If you want to improve the performance of your page on IE10, maybe using canvas is your answer.
I'm trying to draw a huge canvas rectangle on top of the page (some kind of lightbox background), the code is quite straightforward:
var el = document.createElement('canvas');
el.style.position = 'absolute';
el.style.top = 0;
el.style.left = 0;
el.style.zIndex = 1000;
el.width = window.innerWidth + window.scrollMaxX;
el.height = window.innerHeight + window.scrollMaxY;
...
document.body.appendChild(el);
// and later
var ctx = el.getContext("2d");
ctx.fillStyle = "rgba(0, 0, 0, 0.4)";
ctx.fillRect(0, 0, el.width, el.height);
And sometimes (not always) the last line throws:
Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIDOMCanvasRenderingContext2D.fillRect]
I've been wondering whether that happens because of the image size or because of the content beneath the canvas (e.g. an embedded video playing), but apparently not.
So I'm looking for any ideas on how to isolate and/or solve this issue.
Looking at the nsIDOMCanvasRenderingContext2D.fillRect() implementation (and going through the functions it calls), there aren't all that many conditions that will return NS_ERROR_FAILURE. It can only happen if either EnsureSurface() or mThebes->CopyPath() fail. And the following two lines in EnsureSurface() are most likely the source of your issue:
// Check that the dimensions are sane
if (gfxASurface::CheckSurfaceSize(gfxIntSize(mWidth, mHeight), 0xffff)) {
What's being checked here:
Neither the width nor the height of the canvas can exceed 65535 pixels.
The height cannot exceed 32767 pixels on Mac OS X (platform limitation).
The size of canvas data (width * height * 4) cannot exceed 2 GB.
If any of these conditions is violated EnsureSurface() will return false and consequently produce the exception you've seen. Note that the above are implementation details that can change at any time, you shouldn't rely on them. But they might give you an idea which particular limit your code violates.
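Based on those limits, a defensive sketch that checks a proposed size before creating the overlay canvas; the constants mirror the implementation details quoted above and should be treated as assumptions that may change between Firefox versions:
// limits taken from the checks quoted above; treat them as assumptions, not guarantees
var MAX_DIMENSION = 32767;                   // conservative per-side limit (the Mac OS X value)
var MAX_BYTES = 2 * 1024 * 1024 * 1024;      // width * height * 4 must stay below 2 GB

function canvasSizeLooksSane(width, height) {
    if (width <= 0 || height <= 0) return false;
    if (width > MAX_DIMENSION || height > MAX_DIMENSION) return false;
    if (width * height * 4 >= MAX_BYTES) return false;
    return true;
}

var w = window.innerWidth + window.scrollMaxX;
var h = window.innerHeight + window.scrollMaxY;
if (!canvasSizeLooksSane(w, h)) {
    // fall back to a smaller overlay, or tile several canvases
}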
You could apply try-catch logic. Firefox seems to be the only browser which behaves in this slightly odd way.
el.width = window.innerWidth + window.scrollMaxX;
el.height = window.innerHeight + window.scrollMaxY;

// try first to draw something to check that the size is ok
try
{
    var ctx = el.getContext("2d");
    ctx.fillRect(0, 0, 1, 1);
}
// if it fails, decrease the canvas size
catch (err)
{
    el.width = el.width - 1000;
    el.height = el.height - 1000;
}
I haven't found any variable that tells you the maximum canvas size. It varies from browser to browser and from device to device.
The only cross-browser method to detect the maximum canvas size seems to be a loop that decreases the canvas size, e.g. by 100px, until it doesn't produce the error. I tested a loop, and it is a rather fast way to detect the maximum canvas size. Because other browsers don't throw an error when trying to draw on an oversized canvas, it is better to try to draw e.g. a red rect, read a pixel back, and check that it is red.
To maximize detection performance:
- While looping, the canvas should be out of the DOM to maximize speed
- Set the starting size to a well-known maximum, which seems to be 32,767px (SO: Maximum size of a <canvas> element)
- You can make a more intelligent loop which brackets the maximum size: something like first using a bigger decrease step (e.g. 1000px) and, when an accepted size is reached, trying to increase the size by 500px. If that size is accepted, increase by 250px, and so on. This way the maximum should be found in the least number of trials.
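A sketch of that probing approach (the step sizes are arbitrary): on a detached canvas, draw a red pixel in the far corner and shrink until reading it back confirms the drawing actually happened:
// Returns an approximate maximum square canvas size for the current browser.
// The canvas stays detached from the DOM while probing.
function findMaxCanvasSize(start, step) {
    var canvas = document.createElement('canvas');
    for (var size = start; size > 0; size -= step) {
        canvas.width = size;
        canvas.height = size;
        var ctx = canvas.getContext('2d');
        try {
            ctx.fillStyle = '#ff0000';
            ctx.fillRect(size - 1, size - 1, 1, 1);                   // draw in the far corner
            var px = ctx.getImageData(size - 1, size - 1, 1, 1).data;
            if (px[0] === 255) return size;                           // the red pixel survived, this size works
        } catch (err) {
            // Firefox throws on oversized canvases; other browsers just silently no-op
        }
    }
    return 0;
}

// e.g. findMaxCanvasSize(32767, 1000), then refine around the result with a smaller step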