I want to scale the game from 1 ship to a fleet of 10,000 ships, and if possible to a million ships.
Canvas draws everything you tell it to draw, even at negative coordinates, so I only draw the game objects that are in the display range.
I used ES5 everywhere, which is faster and more widely supported.
Scaling and rotation calculations are driven by Camera, Mouse and Keyboard events (the only exception is rockets and laser beams, where this can't be applied).
But most of the time is spent in this piece of code, which runs for each game object in the display range (there can be thousands of game objects, or only a few):
ctx.save();
ctx.translate(drawX, drawY);
ctx.rotate(alfa);
ctx.drawImage(images.image, -width/2, -height/2, width, height);
ctx.restore();
How can I make it faster?
What is the best thing to do to increase performance?
Now I'm thinking about removing ctx.rotate(alfa), rotating the image ahead of time based on events, and then drawing that pre-rotated image resized to the current scale (only for objects in the display range).
Thanks.
You are probably going to run into hard limits if you try to render thousands, let alone a million, of independent objects with 2D canvas alone. You would probably be better off using WebGL, perhaps with a library like PixiJS.
However, if you still plan on using canvas, user Blindman67 gave some good tips regarding performance in a different question. In short, for your case: avoid using save/restore and use setTransform instead, and draw images with dimensions that are powers of 2 (2, 4, 8, 16, 32, etc.).
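As a rough sketch of the setTransform approach: setTransform overwrites the whole transform matrix, so no save/restore pair is needed per object. The helper name spriteTransform and the parameter list are my own, not an existing API.

```javascript
// Build the matrix arguments for ctx.setTransform(a, b, c, d, e, f),
// equivalent to translate(x, y) then rotate(angle) then scale(scale).
function spriteTransform(x, y, angle, scale) {
  var cos = Math.cos(angle) * scale;
  var sin = Math.sin(angle) * scale;
  return { a: cos, b: sin, c: -sin, d: cos, e: x, f: y };
}

// Per-object draw: one setTransform call replaces save/translate/rotate/restore.
function drawSprite(ctx, image, drawX, drawY, alfa, scale, width, height) {
  var m = spriteTransform(drawX, drawY, alfa, scale);
  ctx.setTransform(m.a, m.b, m.c, m.d, m.e, m.f);
  ctx.drawImage(image, -width / 2, -height / 2, width, height);
}
// After the render loop, reset the matrix once:
// ctx.setTransform(1, 0, 0, 1, 0, 0);
```

This avoids the bookkeeping cost of the save/restore stack on every object, which adds up when thousands of sprites are drawn per frame.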
Related
I'm creating a 2d game in HTML5 canvas. It's an isometric world, so actually it's also 3d.
You always see that isometric games use tiles, and I think the reason is just for the depth logic.
My goal is to create the game without using a tile system. Each item can be placed by the user, so item locations like walls, trees, etc., have variable positions. The positions are isometric x, y, z coordinates.
If the game were just tiled, you could determine a fixed tile area for each item (I mean: a one-tile item, or a wall 10 tiles long).
But in my game I use an areaX and areaY for the space an item occupies on the ground, and a height to store the item's height, which is the z value. (The z axis in my world is the y axis on screen.)
The problem is hard to explain. It's about depth sorting.
See the following image:
The brown bar on top of the other bar should be after the gray pole.
I'm now using the simplest form of the painter's algorithm, which only compares the x, y, z coords of each item.
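A minimal sketch of that kind of comparison (the summed-coordinate depth key is one common variant of this simple painter's sort; the item fields here are placeholders):

```javascript
// Simplest painter's-algorithm sort: one scalar depth key per item,
// derived only from its isometric x, y, z position.
function depthKey(item) {
  return item.x + item.y + item.z;
}

var items = [
  { x: 2, y: 0, z: 0 },
  { x: 0, y: 1, z: 0 },
];

// Sort back-to-front, then draw in order. This is exactly what breaks
// for long items that span several "rows" of the iso world.
items.sort(function (a, b) { return depthKey(a) - depthKey(b); });
```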
I know this incorrect rendering is a famous problem of the painter's algorithm.
If this was a tiled game, the bars could be divided into 2 tiles next to each other. Then the tiles could be drawn in the order of their depth.
But since I'm trying to create it without tiles, I'm facing some really challenging logic.
The items should be rendered as if they were 3D objects. I would even like to have the following behavior: If multiple items would intersect, then the visible pixels of each item should be drawn, like in this image:
The main problem is that there is no information to determine what parts of an image should be visible, and how they must be cut.
I could create a depth mask for each image, like:
It works a little bit like a z-buffer.
But this is not possible given canvas performance, because you would have to iterate over literally every pixel of every image in the map.
And the second big disadvantage is that you have to load twice as many resources from the server...
Another solution might be cutting all images into vertical strips 1 pixel wide, then handling each strip as if it were a 1x1-pixel tile. Then I'm still creating a tiled game, but the tiles would be so small that I would still reach my goal. But this solution also has a performance disadvantage, since each image would be split into hundreds of strips, which are new separate images.
So I'm looking for a challenging solution. Who can help me find a way to define the depths (or depth areas) of images so that correct rendering is possible within the performance limits of canvas?
This question was effectively asked again and answered over here
The short answer is that you can use depth sprites with WebGL.
So I found out that texturing planets can be really hard. I created a 4096x4096 image and wrapped it around a high-poly sphere. Apart from the possible memory-management performance issue that comes with a 3-4 MB image, the texture looks bad / pixelated in a close-up (orbital) view.
I was thinking that I could maybe increase the resolution significantly by splitting up the picture, then creating a low, medium and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if far away, remove it from memory and apply the low or medium version.
To be honest I am not sure what strategy to use to render high quality planets. Should I maybe avoid textures and just use height maps and color the planet with Javascript? Same thing for the clouds. Should I create a sphere with an alpha map or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL / three.js has significantly improved over time, but since this is all done within the browser, I assume thinking about the right solution is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it generally means switching from high-polygon to low-polygon models, but more broadly it means doing anything that switches high detail to low detail.
Because you can't make textures 1000000x100000, which is pretty much what you'd need to get the results you want, you'll need to build a sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet; a sphere made of, say, 100 pieces when slightly closer; a sphere made of 1000 pieces when closer still; a sphere made of 10000 pieces when even closer, etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere when you're all the way zoomed out, and it transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
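The selection step can be sketched as a simple distance-to-level mapping. The thresholds and piece counts below are invented for illustration, not anything Google Maps actually uses:

```javascript
// Pick an LOD level from camera distance. Closer camera -> more pieces,
// each individually textured; farther camera -> fewer, coarser pieces.
function pickLOD(cameraDistance) {
  if (cameraDistance > 1000) return { pieces: 1 };     // single whole-planet sphere
  if (cameraDistance > 100)  return { pieces: 100 };
  if (cameraDistance > 10)   return { pieces: 1000 };
  return { pieces: 10000 };
}
```

Each frame you would call this, add only the visible pieces of the chosen level to the scene, and crossfade when the level changes.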
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with different topology.
Say you create 6 square planes arranged so that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works similarly to cube mapping; each will hold one cubemap face.
Then you loop through all the vertices, take each position vector and normalize it. This will yield a sphere.
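That normalization step can be sketched like this, assuming the vertex positions are in a flat [x, y, z, x, y, z, ...] array (as in, for example, a three.js BufferGeometry position attribute):

```javascript
// Push every vertex of the tessellated cube onto a sphere of the given
// radius by normalizing its position vector and rescaling it.
function spherifyPositions(positions, radius) {
  for (var i = 0; i < positions.length; i += 3) {
    var x = positions[i], y = positions[i + 1], z = positions[i + 2];
    var len = Math.sqrt(x * x + y * y + z * z) || 1; // guard against zero-length
    positions[i]     = (x / len) * radius;
    positions[i + 1] = (y / len) * radius;
    positions[i + 2] = (z / len) * radius;
  }
  return positions;
}
```

The UVs are left untouched, so each former cube face still maps cleanly to one cubemap face.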
You can convert an equirectangular panorama image into a cubemap. I think it will allow you to get more resolution and less stretching for cheap.
For starters, the 4096 x 4096 should be 4096 x 2048 on the default sphere with an equirectangular texture, but the newly mapped sphere can hold 6 x 4096 x 4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
Say we are coding something in JavaScript and we have a body, say an apple, and we want to detect the collision of a rock being thrown at it: it's easy, because we can simply treat the apple as a circle.
But what if we have, for example, a "very complex" fractal? Then there is no polygon similar to it, and we also cannot break it into smaller polygons without a herculean amount of effort. Is there any way to detect perfect collision in this case, as opposed to making something that "kind of" works, like treating the fractal as a polygon (not perfect, because collisions will be detected even in blank spaces)?
You can use a physics editor
https://www.codeandweb.com/physicseditor
It'll work with most game engines; you'll have to figure out how to make it work in JS.
Here's a tutorial from the site using TypeScript, which is related to JS:
http://www.gamefromscratch.com/post/2014/11/27/Adventures-in-Phaser-with-TypeScript-Physics-using-P2-Physics-Engine.aspx
If you have coordinates of the polygons, you can make an intersection of subject and clip polygons using Javascript Clipper
The question doesn't provide much information about the collision objects, but usually anything can be represented as polygon(s) to a certain precision.
EDIT:
It should be fast enough for real-time rendering (depending on the complexity of the polygons). If the polygons are complex (many self-intersections and/or many points), there are several methods to speed up the intersection detection:
reduce the point count using ClipperLib.JS.Lighten(). It removes points that have no effect on the outline (e.g. duplicate points and points on an edge).
first get the bounding rectangles of the polygons using ClipperLib.JS.BoundsOfPath() or ClipperLib.JS.BoundsOfPaths(). If the bounding rectangles don't collide, there is no need to run the intersection operation. This function is very fast, because it just gets the min/max of x and y.
If the polygons are static (i.e. their geometry/point data doesn't change during the animation), you can lighten them and get the bounds of the paths before the animation starts, and add the polygons to Clipper then. During each frame, only minimal effort is needed to get the actual intersections.
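The bounding-rectangle pre-check is just an axis-aligned overlap test. A sketch in plain JS, assuming bounds objects with {left, top, right, bottom} fields (as Clipper's bounds functions return, though the exact field names may vary by version):

```javascript
// Cheap rejection test: two axis-aligned rectangles overlap only if they
// overlap on both the x axis and the y axis. Run this before the (much
// more expensive) polygon intersection operation.
function boundsOverlap(a, b) {
  return a.left <= b.right && b.left <= a.right &&
         a.top  <= b.bottom && b.top <= a.bottom;
}
```

Only the pairs that pass this test need to go through the full Clipper intersection.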
EDIT2:
If you are worried about the frame rate, you could consider using an experimental floating-point (double) Clipper, which is 4.15x faster than the IntPoint version; when big integers are needed in the IntPoint version, the float version is 8.37x faster. The final speedup is actually a bit higher, because the IntPoint Clipper requires coordinates to be scaled up (to integers) and then scaled down (to floats), and this scaling time is not included in the above measurements. However, the float version is not fully tested and should be used with care in production environments.
The code of experimental float version: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/clipper_unminified_6.1.3.4b_fpoint.js
Demo: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/main_demo3.html
Playground: http://jsbin.com/sisefo/1/edit?html,javascript,output
EDIT3:
If you don't have polygon point coordinates for your objects and the objects are bitmaps (e.g. png/canvas), you first have to trace the bitmaps, e.g. using the Marching Squares algorithm. One implementation is at
https://github.com/sakri/MarchingSquaresJS.
That gives you an array of outline points, but because the array contains a huge number of unneeded points (e.g. straight lines can easily be represented by just a start and end point), you can reduce the point count using e.g. ClipperLib.JS.Lighten() or http://mourner.github.io/simplify-js/.
After these steps you have very light polygonal representations of your bitmap objects, which are fast to run through intersection algorithm.
You can create bitmaps that indicate the area occupied by your objects in pixels. If there is an intersection between the bitmaps, then there is a collision.
I have a world made up of randomly generated blocks (black being on, white being off). When zoomed out, it essentially looks like white noise. However, instead of each block being 1 pixel, they are 40 pixels and drawn as an image texture.
My game works in a camera basis, so you can only see a fraction of the map at a time and you must move the character around to explore the rest.
Currently, I have my game simply render each image (block texture) that is in range of the canvas. This results in drawing 80-100 images every single frame. While it works fine on a desktop computer, it doesn't do very well on mobile.
Considering the map's look doesn't change throughout the game, I wanted to try a different approach. I created a canvas the size of the world, which ended up being 1600x24000 pixels, and drew all the textures onto this external, hidden canvas once, upon initialization. Then I used the clipping arguments of drawImage to take the subsection I needed. While it worked, it was extremely laggy and made things much worse than before. In addition, the image quality dropped to a more blurred look, which is undesirable.
Now I'm looking for ways to better go about this. So my question is, how should I go about this? Thank you.
When you're using a huge canvas, you can't be sure the renderer won't load the whole texture to render even a part of it. Since you see a huge performance drop, that may well be what's happening.
A few things I would do:
• try only with fillRect to see how much drawImage is to blame.
• try to set up the context once and for all, then only use drawImage in its simplest flavor:
var topLeft = { col: 12, row: 6 }; // index of the left-most visible tile
context.save();
context.scale(scale, scale);
for (var column = 0; column < columnSeenCount; column++) {
  for (var row = 0; row < rowSeenCount; row++) {
    // look up the tile image for this cell (lookup function is yours)
    var image = imageAt(topLeft.col + column, topLeft.row + row);
    context.drawImage(image, column, row);
  }
}
context.restore();
this way you avoid re-computing a transform matrix for every drawImage. Much less math for the renderer.
• if you do the drawImage calls yourself, use only rounded coordinates: it will be faster.
• You must also round the scale to prevent artifacts. You can round to 1, but for the scale that might be too much of a limit: you can easily 'round' to 0.5 or 0.25 or ... by doing:
var precision = 2 ; // 0 => floor ; 1 => at 0.5 ; 2 => 0.25 ; ....
var factor = 1 << precision ;
var roundedFigure = Math.floor( figure * factor) / factor ;
• if the way your application is built makes it easy to draw tile type by tile type, do it and you might win some time (you'll benefit from the image staying in cache).
• After that your only resort will be to use WebGL or a WebGL-based renderer...
Two more ideas that could increase your performance:
Check whether your whole world is rendered, or just the visible images (on the stage). For example, double the world size and see if it impacts performance. It shouldn't, if you only render the relevant images.
Use CocoonJS to compile your application. It promises to speed up your application by 10 times on mobile devices. But be aware that it imposes some serious restrictions on the HTML around your canvas.
Obsolete answer, which assumed that the problem was caused by zooming out too far:
In 3D graphics, mipmaps can be used to avoid this problem. Essentially, smaller images (i.e. fewer pixels) are used when the object is more distant from the camera.
Maybe you can find something appropriate if you google something like "html5 canvas 2D mipmaps", or you could build a simple mipmapping algorithm yourself.
But before investing the work, check how performant this approach is by simply replacing all block images with 1x1-pixel images. Maybe your performance problem is not caused by slow rendering, as you assume. Learn to use a profiler if that doesn't settle the question.
A couple of questions and thoughts:
I would ditto #GameAlchemist's tip that using the clipping version of drawImage is slower than "blitting" a separate tile image onto the canvas. Use separate images instead when you have such an overly large map image.
24000 pixels is too much width to contain in any 1 image.
It looks like you're panning horizontally. You could slice your 24000 pixel wide image into individual images of a more reasonable size. Each image might be 3X the screen width. Exchange the image when the user pans beyond the edge of the current image.
How many unique block image tiles are you using?
Perhaps reduce the number of unique tiles when you detect a mobile user. Then put each unique tile on a separate image or canvas.
Is your map largely 1 tile type (eg. white/off)?
If so, you could make 1 single image of a grid of enough white tiles to fill the entire canvas. Then add black tiles where necessary. This reduces your drawing to 1 white grid image plus any required black images.
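A sketch of that last idea: draw the pre-rendered white grid once, then only the black cells on top. The blackCells array and tileSize parameter are placeholders for however your world data is stored:

```javascript
// One drawImage for the whole white background, then one fillRect per
// black tile. For a mostly-white map this is far fewer draw calls than
// one drawImage per visible tile.
function drawWorld(ctx, whiteGridImage, blackCells, tileSize) {
  ctx.drawImage(whiteGridImage, 0, 0);
  ctx.fillStyle = "#000";
  for (var i = 0; i < blackCells.length; i++) {
    var c = blackCells[i];
    ctx.fillRect(c.col * tileSize, c.row * tileSize, tileSize, tileSize);
  }
}
```

If the black tiles are textured rather than solid, swap the fillRect for a drawImage of the black tile image at the same coordinates.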
I've been working on a WebGL project that runs on top of the Three.js library. I am rendering several semi-transparent meshes, and I notice that depending on the angle you tilt the camera, a different object is on top.
To illustrate the problem, I made a quick demo using three semi-transparent cubes. When you rotate the image past perpendicular to the screen, the second half of the smallest cube "jumps" and is no longer visible. However, shouldn't it still be visible? I tried adjusting some of the blending equations, but that didn't seem to make a difference.
What I'm wondering is whether or not this is a bug in WebGL/Three, or something I can fix. Any insight would be much appreciated :)
Well, that's something they weren't able to solve when they invented all this hardware-accelerated graphics business, and it sounds like we'll have to deal with it for a long while.
The issue here is that graphics cards do not sort polygons or objects. The graphics card is "dumb": you tell it to draw an object and it draws the pixels that represent it; it also draws, into another non-visible "image" called the z-buffer (or depth buffer), not the color but the distance to the camera for each pixel. For any other objects that you draw afterwards, the graphics card checks the distance to the camera for each pixel, and if it's farther away, it won't draw the pixel (unless you disable the check, that is).
This speeds things up a lot and gives you nice intersections between solid objects. But it doesn't play well with transparency. Say you have 2 transparent objects and you want A to be drawn behind B: you need to tell the graphics card to draw A first and then B. This works fine as long as they're not intersecting. In order to draw 2 intersecting transparent objects, the graphics card would have to sort all the polygons, and as the graphics card doesn't do that, you'll have to do it.
It's one of these things that you need to understand and specifically tweak for your case.
In three.js, if you set material.transparent = true, we'll sort that object so it's drawn before (earlier than) other objects that are in front. But we can't really help you if you want to intersect them.
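If you need more control than that, a common move is to sort the transparent objects back-to-front yourself each frame before drawing. A plain-JS sketch of that sort (the mesh and camera objects here are placeholder {x, y, z} positions, not three.js types):

```javascript
// Return a new array of meshes ordered farthest-first, so that drawing
// them in order blends each transparent object over everything behind it.
function sortBackToFront(meshes, camera) {
  function dist2(p) {
    var dx = p.x - camera.x, dy = p.y - camera.y, dz = p.z - camera.z;
    return dx * dx + dy * dy + dz * dz; // squared distance is enough for ordering
  }
  return meshes.slice().sort(function (a, b) {
    return dist2(b.position) - dist2(a.position); // farthest first
  });
}
```

Per-object sorting like this still breaks down for intersecting geometry, as the answer explains: only per-polygon (or per-pixel) sorting solves that case.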