Large 2D world rendering in HTML5 Canvas - javascript

I have a world made up of randomly generated blocks (black being on, white being off). When zoomed out, it essentially looks like white noise. However, instead of each block being 1 pixel, they are 40 pixels and drawn as an image texture.
My game works on a camera basis, so you can only see a fraction of the map at a time and you must move the character around to explore the rest.
Currently, I have my game simply render each image (block texture) that is in range of the canvas. This results in drawing 80-100 images every single frame. While it works fine on a desktop computer, it doesn't do very well on mobile.
Since the map's appearance doesn't change throughout the game, I wanted to try a different approach. I created a hidden, off-screen canvas the size of the world, which ended up being 1600x24000 pixels, and drew all the textures onto it once at initialization. Then I used the clipping arguments of drawImage to take the subsection I needed. While it worked, it was extremely laggy and made things much worse than before. In addition, the image quality dropped to a more blurred look, which is undesirable.
Now I'm looking for ways to better go about this. So my question is, how should I go about this? Thank you.

When you're using a huge canvas, you can't be sure the renderer won't load the whole texture to render even a part of it. Since you see a huge performance drop, that might well be happening.
A few things I would try:
• try rendering with fillRect only, to see how much drawImage is to blame.
• set up the context transform once and for all, then only use drawImage in its simplest form:
var topLeft = { col: 12, row: 6 };   // indexes of the left-most visible tile
context.save();
context.scale(scale, scale);         // set the zoom once for the whole batch
for (var column = 0; column < columnSeenCount; column++) {
    for (var row = 0; row < rowSeenCount; row++) {
        // look up the texture of the tile at (topLeft.col + column, topLeft.row + row)
        var image = getTileImage(topLeft.col + column, topLeft.row + row);
        context.drawImage(image, column * tileSize, row * tileSize);   // tileSize: block size in pixels (40 in the question)
    }
}
context.restore();
this way you avoid recomputing a transform matrix for every drawImage, so there is much less math for the renderer.
• if you compute the drawImage coordinates yourself, use only rounded coordinates; it will be faster.
• You must also round the scale to prevent artifacts. Rounding it to an integer might be too coarse a limit, though: you can easily 'round' to 0.5 or 0.25 or finer by doing:
var precision = 2 ; // 0 => floor ; 1 => at 0.5 ; 2 => 0.25 ; ....
var factor = 1 << precision ;
var roundedFigure = Math.floor( figure * factor) / factor ;
• if the way your application is built makes it easy to draw tile type by tile type, do it and you might win some time (you'll benefit from the image staying in the cache); see the sketch after this list.
• After that, your only resort will be to use WebGL or a WebGL-based renderer...
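A rough sketch of that per-type batching, assuming a visibleTiles array and a tileImages lookup (both names are illustrative, not from the question):

var byType = {};
visibleTiles.forEach(function (tile) {
    (byType[tile.type] = byType[tile.type] || []).push(tile);
});
for (var type in byType) {
    var image = tileImages[type];              // one texture reused for the whole batch
    byType[type].forEach(function (tile) {
        context.drawImage(image, tile.x, tile.y);
    });
}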

Two more ideas that could increase your performance:
Check whether your whole world is rendered or just the visible images (on the stage). For example, double the world size and see if it impacts performance; it shouldn't if you only render the relevant images.
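One quick way to verify that (a sketch; camera, tileSize and canvas are assumed names, not from the question): compute the visible tile range from the camera and log how many draws you actually issue per frame.

var firstCol = Math.floor(camera.x / tileSize);
var lastCol  = Math.ceil((camera.x + canvas.width) / tileSize);
var firstRow = Math.floor(camera.y / tileSize);
var lastRow  = Math.ceil((camera.y + canvas.height) / tileSize);
var drawCalls = (lastCol - firstCol) * (lastRow - firstRow);
console.log('tiles drawn this frame:', drawCalls);   // should stay around 80-100 and not grow with world size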
Use CocoonJS to compile your application. It promises to speed up your application by roughly 10x on mobile devices, but be aware that it imposes some serious restrictions on the HTML around your canvas.
Obsolete answer, which assumed the problem was caused by zooming out too far:
In 3D graphics, mipmaps are used to avoid this problem: essentially, smaller images (i.e. with fewer pixels) are used when the object is more distant from the camera.
Maybe you can find something appropriate if you google something like html5 canvas 2D Mipmaps. Or you could build a simple mipmapping algorithm yourself.
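For instance, a minimal hand-rolled version could pre-scale each 40px block texture once to the size it will occupy on screen when zoomed out (a sketch, not a full mipmap chain):

function makeScaledTile(source, zoom) {        // source: one 40x40 block texture
    var c = document.createElement('canvas');
    c.width  = Math.max(1, Math.round(source.width  * zoom));
    c.height = Math.max(1, Math.round(source.height * zoom));
    c.getContext('2d').drawImage(source, 0, 0, c.width, c.height);
    return c;                                  // draw this instead of the full-size texture when zoomed out
}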
But before investing the work, test how performant this approach is by simply replacing all block images with 1x1-pixel images. Maybe your performance problem is not caused by slow rendering, as you assume. Learn to use a profiler if that doesn't solve the problem.

A couple of questions and thoughts:
I would ditto GameAlchemist's tip that using the clipping version of drawImage is slower than "blitting" a separate tile image onto the canvas. Use separate tile images instead when you have such an overly large map image.
24000 pixels is too much width to hold in any one image.
It looks like you're panning horizontally. You could slice your 24000-pixel-wide image into individual images of a more reasonable size; each image might be 3x the screen width. Swap images when the user pans beyond the edge of the current image.
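A sketch of that slicing, assuming the full pre-rendered map exists once as worldCanvas and the camera pans along x (both assumptions; flip the axes for a vertical map):

var SLICE_WIDTH = canvas.width * 3;
var slices = [];
for (var x = 0; x < worldCanvas.width; x += SLICE_WIDTH) {
    var slice = document.createElement('canvas');
    slice.width  = Math.min(SLICE_WIDTH, worldCanvas.width - x);
    slice.height = worldCanvas.height;
    slice.getContext('2d').drawImage(worldCanvas, -x, 0);    // copy just this strip
    slices.push(slice);
}

function drawVisibleSlices(cameraX) {
    var first = Math.floor(cameraX / SLICE_WIDTH);
    var last  = Math.floor((cameraX + canvas.width - 1) / SLICE_WIDTH);
    for (var i = first; i <= last && i < slices.length; i++) {
        ctx.drawImage(slices[i], i * SLICE_WIDTH - cameraX, 0);   // usually 1 slice, 2 near a seam
    }
}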
How many unique block image tiles are you using?
Perhaps reduce the number of unique tiles when you detect a mobile user. Then put each unique tile on a separate image or canvas.
Is your map largely one tile type (e.g. white/off)?
If so, you could make one single image of a grid of enough white tiles to fill the entire canvas, then add black tiles only where necessary. This reduces your drawing to one white grid image plus any required black tile images.
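A sketch of that idea (whiteTile, blackTile, tileSize and visibleBlackTiles are placeholder names): build the white grid once on an off-screen canvas, then each frame draw it plus only the black tiles in view.

var gridCanvas = document.createElement('canvas');
gridCanvas.width  = canvas.width;
gridCanvas.height = canvas.height;
var gctx = gridCanvas.getContext('2d');
for (var gx = 0; gx < gridCanvas.width; gx += tileSize)
    for (var gy = 0; gy < gridCanvas.height; gy += tileSize)
        gctx.drawImage(whiteTile, gx, gy);          // done once at start-up

// Every frame: one background blit, then only the black tiles on screen.
ctx.drawImage(gridCanvas, 0, 0);
visibleBlackTiles.forEach(function (t) {
    ctx.drawImage(blackTile, t.screenX, t.screenY);
});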

Related

Three.JS and WebGL texture automatic resize

I need some help to solve a problem :)
I use Three.JS to display very high quality equirectangular images (20000x10000 pixels) on a sphere. Quality is very important for my web app, and bandwidth is not a concern here.
My problem is that Three.JS resizes the images because the WebGL MAX_TEXTURE_SIZE limit is exceeded.
Is there a way to get around this limit? Maybe by cutting the texture into several parts? What is the best way to do this?
Thank you, have a good day!
Alex
You cannot get around this limit; that's the whole idea behind a limit. What you can do is visit http://webglreport.com/ to see what the maximum capabilities of your target devices are (under Textures > Max Texture Size), and then chop your texture down to fit nicely within those dimensions.
For instance, the iPhone 6 has a limit of 4096x4096, so you'd need to tile 5 meshes side-to-side to reach 20480. It all depends on your device's graphics card limitations, so how you divide it will vary from one user to the next.
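You can also query the limit at runtime and derive the number of segments from it, for example (a sketch):

var gl = document.createElement('canvas').getContext('webgl');
var maxSize  = gl.getParameter(gl.MAX_TEXTURE_SIZE);   // e.g. 4096 on an iPhone 6
var segments = Math.ceil(20000 / maxSize);             // 5 segments at 4096
console.log('max texture size:', maxSize, '- horizontal segments needed:', segments);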
Another thing you could do: instead of using one huge equirectangular image mapped onto a sphere, you could use 6 smaller images and map them onto a cube or into a CubeTexture. You can do this with any cubemap converter tool, such as this one: HDRI to CubeMap converter. That way you can load the 6 images and let Three.js take the wheel; if your device can handle it, it will show higher resolutions, and if it can't, it will scale the textures down as necessary.
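A minimal sketch of that cubemap route, assuming your existing scene and placeholder file names for the converter's output:

var loader = new THREE.CubeTextureLoader();
var envMap = loader.load([
    'px.jpg', 'nx.jpg',    // +x, -x
    'py.jpg', 'ny.jpg',    // +y, -y
    'pz.jpg', 'nz.jpg'     // +z, -z
]);
scene.background = envMap;    // rendered as a skybox around the camera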
Look at this:
http://glayve.com/relief/verdon.html
It's better to divide; you can do more with the individual faces.
You just have to set the position (x, y, z) and rotation (x, y, z) of the individual faces according to the sizes of the 24 planes.

How to increase canvas performance

I want to scale the game from 1 ship to a fleet of 10,000 ships, and if possible to a million ships.
Canvas draws everything you tell it to draw, even at negative coordinates, so I only draw the game objects that are in display range.
I used ES5 everywhere, which is faster and better supported.
Scaling and rotation calculations are driven by Camera, Mouse and KeyBoard events (I can't apply this only to rockets and laser beams).
But most of the time is spent in this piece of code, which runs for each game object in display range (there can be a few game objects or thousands):
ctx.save();
ctx.translate(drawX, drawY);
ctx.rotate(alfa);
ctx.drawImage(images.image, -width/2, -height/2, width, height);
ctx.restore();
How can I make it faster?
What is the best thing to do to increase performance?
Right now I'm thinking about removing ctx.rotate(alfa), rotating the image ahead of time based on events, and resizing that pre-rotated image with the current scale (only for objects in display range).
Thanks.
You are probably going to run into hard limits if you try to render thousands, let alone a million, independent things with the 2D canvas alone. You would probably be better off using WebGL, perhaps with a library like PixiJS.
However, if you still plan on using canvas, user Blindman67 gave some good tips regarding performance in a different question. In short, for your case: avoid using save/restore and use setTransform instead, and draw images with dimensions that are powers of 2 (2, 4, 8, 16, 32, etc.).
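A sketch of what the setTransform version of your loop body could look like (same drawX, drawY, alfa, width and height as in your snippet):

var cos = Math.cos(alfa), sin = Math.sin(alfa);
ctx.setTransform(cos, sin, -sin, cos, drawX, drawY);    // translate + rotate in one call, no save/restore
ctx.drawImage(images.image, -width / 2, -height / 2, width, height);
// ... repeat for every other object in display range ...
ctx.setTransform(1, 0, 0, 1, 0, 0);                     // reset to identity once per frame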

Isometric rendering without tiles, is that goal reachable?

I'm creating a 2d game in HTML5 canvas. It's an isometric world, so actually it's also 3d.
You always see that isometric games use tiles, and I think the reason is just for the depth logic.
My goal is to create the game without using a tile system. Each item can be placed by the user, so item locations like walls, trees, etc., have variable positions. The positions are isometric x, y, z coordinates.
If the game were just tiled, you could determine a fixed tile area for each item (I mean: a one-tile item, or a wall 10 tiles long).
But in my game I use an areaX and areaY for the space an item occupies on the ground, and a height to store an item's height, which is the z value (the z axis in my world is the y axis on screen).
The problem is hard to explain. It's about depth sorting.
See the following image:
The brown bar on top of the other bar should be after the gray pole.
I'm now using the simplest form of a painter's algorithm, that only compares the x, y, z coords of each item.
I know this incorrect rendering is a famous problem of the painter's algorithm.
If this was a tiled game, the bars could be divided into 2 tiles next to each other. Then the tiles could be drawn in the order of their depth.
But since I'm trying to create it without tiles, the logic I'm looking for is really challenging.
The items should be rendered as if they were 3D objects. I would even like to have the following behavior: If multiple items would intersect, then the visible pixels of each item should be drawn, like in this image:
The main problem is that there is no information to determine what parts of an image should be visible, and how they must be cut.
I could create a depth mask for each image, like:
It works a little bit like a z-buffer.
But this is not feasible given canvas performance, because you would have to iterate over literally every pixel of every image in the map.
The second big disadvantage is that you have to load twice as many resources from the server...
Another solution might be cutting all images into vertical strips 1 pixel wide, then handling each strip as if it were a tile of 1x1 pixel. Then I'm still creating a tiled game, but the tiles would be so small that I would still reach my goal. This solution also has a performance disadvantage, though, since each image would be split into hundreds of strips, which are new separate images.
So I'm looking for a challenging solution. Who can help me find a way to define the depths (or depth areas) for images so that correct rendering is possible within the performance limits of canvas?
This question was effectively asked again and answered over here
The short answer is that you can use depth sprites with WebGL.

Wraparound for HTML5 Canvas: How to Get (Static) Shapes to Continue around Edges of Canvas

I'm trying to draw a tiled background using Javascript on an HTML5 canvas, but it's not working because shapes that intersect the edges of the canvas don't wrap around to the other side. (Just to be clear: these are static shapes--no motion in time is involved.) How can I get objects interrupted by one side of the canvas to wrap around to the other side?
Basically I'm looking for the "wraparound" effect that many video games use--most famously Asteroids; I just want that effect for a static purpose here. This page seems to be an example that shows it is possible. Note how an asteroid, say, on the right edge of the screen (whether moving or not) continues over to the left edge. Or for that matter, an object in the corner is split between all four corners. Again, no motion is necessarily involved.
Anyone have any clues how I might be able to draw, say, a square or a line that wraps around the edges? Is there perhaps some sort of option for canvas or Javascript? My google searches using obvious keywords have come up empty.
Edit
To give a little more context, I'm basing my work off the example here: Canvas as Background Image. (Also linked from here: Use <canvas> as a CSS background.) Repeating the image is no problem. The problem is getting the truncated parts of shapes to wrap around to the other side.
I'm not sure how you have the tiles set up; however, if they are all part of a single 'wrapper' slide which has its own x,y at, say, 0,0, then you could actually just draw it twice, or generate a new slide as needed. Hopefully this code will better illustrate the concept.
// Here, the 'tilegroup' is the same size as the canvas
function renderbg() {
    tiles.draw(tiles.posx, tiles.posy);
    if (tiles.posx < 0)
        tiles.draw(canvas.width + tiles.posx, tiles.posy);
    if (tiles.posx > 0)
        tiles.draw(-canvas.width + tiles.posx, tiles.posy);
}
So basically, the idea here is to draw the grouping of tiles twice: once in its actual position, and again to fill in the gap. You still need to calculate when the entire group leaves the canvas completely, and then reset it, but hopefully this leads you in the right direction!
You could always create your tileable image in canvas, generate a data URL with toDataURL(), and then assign that data URL as a background to something and let CSS do the tiling... just a thought.
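Something along these lines (a sketch; the target element and the tile drawing itself are placeholders):

var tile = document.createElement('canvas');
tile.width = tile.height = 40;
var tctx = tile.getContext('2d');
// ... draw the single tile pattern onto tctx here ...
document.body.style.backgroundImage  = 'url(' + tile.toDataURL() + ')';
document.body.style.backgroundRepeat = 'repeat';    // CSS does the tiling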
Edit: If you're having trouble drawing a tileable image, you could create a canvas 3*width by 3*width, draw on it as usual (assuming you grab the data from the center square as the final result), and then see if you can't draw from subsets of the canvas to itself. It looks like you'd have to use:
var myImageData = context.getImageData(left, top, width, height);
context.putImageData(myImageData, dx, dy);
(with appropriate measurements)
https://developer.mozilla.org/En/HTML/Canvas/Pixel_manipulation_with_canvas/
Edit II: The idea was that you'd have a canvas big enough that has a center area of interest, and buffer areas around it big enough to account for any of the shapes you may draw, like so:
XXX
XCX
XXX
You could draw the shapes once to this big canvas and then just blindly draw each of the areas X around that center area to the center area (and then clear those areas out for the next drawing). So, if K is the number of shapes instead of 4*K draws, you have K + 8 draws (and then 8 clears). Obviously the practical applicability of this depends on the number of shapes and overlapping concerns, although I bet it could be tweaked. Depending upon the complexity of your shapes it may make sense to draw a shape 4 times as you originally thought, or to draw to some buffer or buffer area and then draw it's pixel data 4 times or something. I'll admit, this is some idea that just popped into my head so I might be missing something.
Edit III: And really, you could be smart about it. If you know how a set of objects is going to overlap, you should only have to draw from the buffer once. Say you have a bunch of shapes in a row that only draw into the north overlapping region: all you should need to do is draw those shapes, and then draw the north overlapping region onto the south side. The hairy regions would be the corners, but I don't think they really get hairy unless the shapes are large... sigh... at this point I should probably quiet down and see if there are any existing implementations of what I describe, because I'm not sure my off-the-cuff writing is helping anybody.

EaselJS line fuzziness

I am using EaselJS as an API for HTML5 canvas.
I noticed that the following code:
line.graphics.setStrokeStyle(1).beginStroke("black").moveTo(100,100).lineTo(200,200);
stage.addChild(line);
...produces the following line:
I set the thickness to 1, but the line is still fuzzy. If you zoom in on the snapshot, you can see it actually occupies 3 pixels. I believe I read somewhere that canvas draws a line centered on the boundary between two pixels, so both pixels get colored, and you need to shift where you draw by half the pixel width so the stroke falls on a whole pixel.
I need a sharp image for my application; please advise.
EaselJS is just an abstraction over the canvas APIs, which draw all lines exactly on the coordinates you specify. The snapToPixel property is specifically for automatic rounding, but it doesn't take the half-pixel issue you are describing into account.
The best-practice approach is to put everything into a Container and position the container at plus or minus (0.5, 0.5). That adjusts everything at once, so you can work in a normal coordinate space rather than offsetting all your calculations.
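In code, that looks something like this (a sketch using your line shape):

var container = new createjs.Container();
container.x = 0.5;                  // or -0.5; shifts strokes onto pixel centers
container.y = 0.5;
stage.addChild(container);
container.addChild(line);           // add display objects here instead of directly to the stage
stage.update();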
