So I found out that texturing planets can be really hard. I created a 4096px image and wrapped it around a high-poly sphere. Apart from the possible memory-management performance issue that comes with a 3-4 MB image, the texture looks bad/pixelated in a close-up (orbital) view.
I was thinking I could increase the resolution significantly by splitting up the picture, then creating a low, medium, and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if it is far away, remove that image from memory and apply the low or medium version.
To be honest, I am not sure what strategy to use to render high-quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL/three.js has improved significantly over time, but since this all runs in the browser, I assume picking the right approach up front is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it usually means switching from high-polygon to low-polygon models, but more generally it means doing anything to swap high detail for low detail.
Because you can't make textures 1000000x100000 (which is pretty much what you'd need to get the results you want), you'll need to build a sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet; a sphere made of, say, 100 pieces when slightly closer; a sphere made of 1000 pieces when closer still; a sphere made of 10000 pieces when closer again, etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing that people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere all the way zoomed out, and it transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
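For what it's worth, three.js ships a THREE.LOD helper that handles the distance-based switching (the multi-piece tiling described above still has to be hand-rolled). A minimal sketch, assuming a material and scene already exist:

var lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 32), material), 0);   // close: dense sphere
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 16, 8),  material), 50);  // medium distance
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 8, 4),   material), 200); // far: coarse sphere
scene.add(lod);
// the WebGLRenderer picks the level each frame based on camera distance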
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with a different topology.
Say you create 6 square planes, arranged so that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works much like cube mapping: each holds one cubemap face.
Then you loop through all the vertices, take each position vector, and normalize it. This yields a sphere.
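A minimal sketch of that normalization step with a recent three.js (the face rotation argument and the makeCubeSphereFace name are illustrative, not an existing API):

function makeCubeSphereFace(rotation, segments) {
  var geo = new THREE.PlaneGeometry(2, 2, segments, segments);
  geo.applyMatrix4(new THREE.Matrix4().makeTranslation(0, 0, 1)); // push the face out to z = 1
  geo.applyMatrix4(rotation);                                     // orient it as one box side
  var pos = geo.attributes.position;
  var v = new THREE.Vector3();
  for (var i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i).normalize(); // project each vertex onto the unit sphere
    pos.setXYZ(i, v.x, v.y, v.z);
  }
  geo.computeVertexNormals();
  return geo;
}
// build all six faces by passing the six box-side rotations, e.g.
// new THREE.Matrix4().makeRotationY(Math.PI / 2) for the +X face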
You can convert an equirectangular panorama image into a cubemap. I think it will let you get more resolution and less stretching for cheap.
For starters, the 4096x4096 texture should really be 4096x2048 on the default sphere with an equirectangular mapping, but the newly mapped sphere can hold 6 x 4096x4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
I need some help to solve a problem :)
I use Three.js to display very high quality equirectangular images (20000x10000 pixels) on a sphere. Quality is very important for my webapp, and bandwidth is not a concern here.
My problem is that Three.js resizes the images because the WebGL max texture size limit is exceeded.
Is there a way to get around this limit? Maybe by cutting the texture into several parts? What is the best way to do this?
Thank you, have a good day!
Alex
You cannot get past this limit. That's the whole idea behind a limit. What you can do is visit http://webglreport.com/ to see what the maximum capabilities are on your target devices (under Textures > Max Texture Size), and then chop your texture down to fit within those dimensions.
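You can also query the limit at runtime instead of checking per device by hand; a small sketch (the renderer variable in the second line is assumed to be your existing three.js WebGLRenderer):

var gl = document.createElement('canvas').getContext('webgl');
console.log(gl.getParameter(gl.MAX_TEXTURE_SIZE)); // e.g. 4096 on an iPhone 6
// or, if you already have a three.js renderer:
console.log(renderer.capabilities.maxTextureSize);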
For instance, the iPhone 6 has a limit of 4096x4096, so you'd need to tile 5 meshes side by side to reach 20480 pixels of width. It all depends on your device's graphics card limitations, so how you divide it will vary from one user to the next.
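One way to do that tiling in three.js is to build one sphere wedge per strip using the phiStart/phiLength parameters of SphereGeometry and give each wedge its own slice of the panorama. A minimal sketch, assuming the panorama has been pre-sliced into files pano_0.jpg through pano_4.jpg and that a scene variable exists:

var STRIPS = 5;
var loader = new THREE.TextureLoader();
var group = new THREE.Group();
for (var i = 0; i < STRIPS; i++) {
  var geo = new THREE.SphereGeometry(
    500, 32, 32,
    i * 2 * Math.PI / STRIPS, // phiStart: where this wedge begins
    2 * Math.PI / STRIPS      // phiLength: one fifth of the sphere
  );
  var mat = new THREE.MeshBasicMaterial({
    map: loader.load('pano_' + i + '.jpg'),
    side: THREE.BackSide // the panorama is viewed from inside the sphere
  });
  group.add(new THREE.Mesh(geo, mat));
}
scene.add(group);

Each wedge's UVs run 0-1 across that wedge, so each strip texture maps cleanly onto its own section.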
Another thing you could do: instead of using one huge equirectangular image mapped onto a sphere, use 6 smaller images and map them onto a cube or a CubeTexture. You can do this with any cubemap converter tool, such as this one: HDRI to CubeMap converter. That way you can load the 6 images and let Three.js take the wheel; if your device can handle it, it'll show higher resolutions, and if it can't, it'll scale the textures down as necessary.
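A minimal sketch of the CubeTexture route (the six file names are assumptions):

var cubeTexture = new THREE.CubeTextureLoader().load([
  'px.jpg', 'nx.jpg', // +X, -X
  'py.jpg', 'ny.jpg', // +Y, -Y
  'pz.jpg', 'nz.jpg'  // +Z, -Z
]);
scene.background = cubeTexture; // three.js renders it as the panorama backdrop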
Look at this:
http://glayve.com/relief/verdon.html
It's better to divide; you can do more with individual faces.
You just have to set .position(x,y,z) and .rotation(x,y,z) on the individual faces according to the sizes of the 24 planes.
I have a world made up of randomly generated blocks (black being on, white being off). When zoomed out, it essentially looks like white noise. However, instead of each block being 1 pixel, each is 40 pixels and drawn as an image texture.
My game is camera-based, so you can only see a fraction of the map at a time, and you must move the character around to explore the rest.
Currently, my game simply renders each image (block texture) that is in range of the canvas. This results in drawing 80-100 images every single frame. While it works fine on a desktop computer, it doesn't do very well on mobile.
Since the map's look doesn't change throughout the game, I wanted to try a different approach. I created a canvas the size of the world, which ended up being 1600x24000 pixels, and drew all the textures onto this external, hidden canvas once, upon initialization. Then I used the clipping arguments of drawImage to take the subsection I needed. While it worked, it was extremely laggy and made things much worse than before. In addition, image quality dropped to a more blurred look, which is undesirable.
Now I'm looking for better ways to go about this. So my question is: how should I approach this? Thank you.
When you use a huge canvas, you can't be sure the renderer won't load the whole texture just to render a part of it. Since you see a huge performance drop, that may well be what's happening.
A few things I would do:
• try drawing only with fillRect to see how much drawImage is to blame.
• try to set up the context once and for all, then use only the simplest flavor of drawImage:
var tileSize = 40;                  // source tile size in pixels
var topLeft = { col: 12, row: 6 };  // index of the left-most visible tile
context.save();
context.scale(scale, scale);        // apply the zoom once, up front
for (var column = 0; column < columnSeenCount; column++) {
  for (var row = 0; row < rowSeenCount; row++) {
    // getTileImage is a hypothetical lookup returning the tile's Image
    var image = getTileImage(topLeft.col + column, topLeft.row + row);
    context.drawImage(image, column * tileSize, row * tileSize);
  }
}
context.restore();
This way you avoid recomputing a transform matrix for every drawImage: much less math for the renderer.
• if you position the drawImage calls yourself, try to use only rounded coordinates; it will be faster.
• You should also round the scale to prevent artifacts. Rounding it to whole numbers might be too strict a limit, but you can easily "round" to 0.5, 0.25, or finer by doing:
var precision = 2; // 0 => floor ; 1 => round to 0.5 ; 2 => round to 0.25 ; ...
var factor = 1 << precision;
var roundedFigure = Math.floor(figure * factor) / factor;
• if the way your application is built makes it easy to draw one tile type at a time, do it; you might win some time (you'll benefit from the image staying in cache).
• After that, your only resort will be to use WebGL or a WebGL-based renderer...
Two more ideas that could increase your performance:
Check whether your whole world is rendered, or just the visible images (on the stage). For example, double the world size and see if it impacts performance. It shouldn't, if you only render the relevant images.
Use CocoonJS to compile your application. It promises to speed up your application by up to 10 times on mobile devices. But be aware that it imposes some serious restrictions on the HTML around your canvas.
Obsolete answer, which assumed the problem was caused by zooming out too far:
In 3D graphics, mipmaps are used to avoid this problem. Essentially, smaller images (i.e. with fewer pixels) are used when the object is more distant from the camera.
Maybe you can find something appropriate if you google something like html5 canvas 2D mipmaps, or you could build a simple mipmapping algorithm yourself.
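A minimal sketch of hand-rolling such mip levels for a tile image (buildMipLevels is just an illustrative name; each level halves the previous one):

function buildMipLevels(image, levels) {
  var mips = [image];
  for (var i = 1; i < levels; i++) {
    var prev = mips[i - 1];
    var c = document.createElement('canvas');
    c.width = Math.max(1, prev.width >> 1);
    c.height = Math.max(1, prev.height >> 1);
    c.getContext('2d').drawImage(prev, 0, 0, c.width, c.height); // downscale by half
    mips.push(c);
  }
  return mips; // pick mips[k] by zoom level when drawing
}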
But before investing the work, test how performant this approach is by simply swapping all block images for 1x1-pixel images. Maybe your performance problem is not caused by slow rendering, as you assume. Learn to use a profiler if that doesn't settle the question.
A couple of questions and thoughts:
I would ditto @GameAlchemist's tip that the clipping version of drawImage is slower than "blitting" a separate tile image onto the canvas. Use separate images instead when you have such an overly large map image.
24000 pixels is too much width to contain in any 1 image.
It looks like you're panning horizontally. You could slice your 24000-pixel-wide image into individual images of a more reasonable size: each might be 3X the screen width. Exchange the image when the user pans beyond the edge of the current one.
How many unique block image tiles are you using?
Perhaps reduce the number of unique tiles when you detect a mobile user. Then put each unique tile on a separate image or canvas.
Is your map largely 1 tile type (e.g. white/off)?
If so, you could make one single image of a grid of enough white tiles to fill the entire canvas, then add black tiles where necessary. This reduces your drawing to one white grid image plus any required black images.
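A minimal sketch of that idea, assuming 40-pixel tiles and hypothetical whiteGridImage, blackTileImage, and blackTiles variables:

ctx.drawImage(whiteGridImage, 0, 0);          // one blit covers the whole background
for (var i = 0; i < blackTiles.length; i++) { // only the "on" tiles are left to draw
  var t = blackTiles[i];
  ctx.drawImage(blackTileImage, t.col * 40, t.row * 40);
}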
I have a WebGL application I've written using three.js, but the FPS is not good enough on some of my test machines. I've tried to profile the application using Chrome's about:tracing, with help from this article: http://www.html5rocks.com/en/tutorials/games/abouttracing/
It appears that the GPU is being overloaded. I also found that my FPS falls drastically when my entire scene is in the camera's view. The scene contains about 17 meshes and a single directional light source. It's not really a heavy scene; I've seen much heavier scenes get rendered flawlessly on the same GPU.
So, what changes can I make to the scene to make it less heavy, without completely changing it? I've already tried removing the textures, but that doesn't seem to fix the problem.
Is there a way to figure out what computation three.js is pushing onto the GPU? Or would this break the basic abstraction three.js provides?
What are general tips for profiling GPU-bound WebGL/three.js apps?
There are various things to try.
Are you draw bound?
Change your canvas to 1x1 pixels. Does your framerate go way up? If so, you're drawing too many pixels or your fragment shaders are too complex.
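A quick way to run that test, assuming your three.js WebGLRenderer is called renderer:

renderer.setSize(1, 1); // if the framerate jumps way up, you're fill-rate bound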
To see if simplifying your fragment shader would help, use a simpler shader. I don't know three.js that well; maybe the Basic Material?
Do you have shadows? Turn them off. Does it go faster? Can you use simpler shadows? For example, the shadows in this sample are fake: they are just planes with a circle texture.
Are you using any postprocessing effects? Postprocessing effects are expensive, especially on mobile GPUs.
Are you drawing lots of opaque stuff? If so, can you sort your draw order so you draw front to back (close to far)? I'm not sure whether three.js has an option for this, but I know it can sort transparent stuff back to front, so it should be simple to reverse the test. This will make rendering go quicker, assuming you're drawing with the depth test on, because pixels in the back will be rejected by the DEPTH_TEST and the fragment shader won't run for them.
Another thing you can do to save bandwidth is draw to a smaller canvas and stretch it with CSS to cover the area where you want it to appear. Lots of games do this.
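A minimal sketch of the CSS-stretch trick in three.js (the half-resolution factor is just an example):

// render at half resolution; the third argument stops three.js from
// overwriting the CSS size we set below
renderer.setSize(window.innerWidth / 2, window.innerHeight / 2, false);
renderer.domElement.style.width = '100%';
renderer.domElement.style.height = '100%';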
Are you geometry bound?
You say you're only drawing 17 meshes, but how big are those meshes? 17 twelve-triangle cubes, or 17 one-million-triangle meshes?
If you're geometry bound, can you simplify? If the geometry extends far into the distance, can you split it and use LODs? See the LOD sample.
I am currently working on a JavaScript-based map generator application, and I have written more than 400 lines of code that create a hexagonal map, add coordinates to tiles, add textures to tiles like grass and ocean, and add elements like castles, units, etc.
I have added quite a few useful functions to this offline map editor, like zoom in and zoom out, turning the grid on/off, dragging the map, and a few others, and I'm currently studying how to add save and load functionality to it.
It looks rather like a paint application, except that instead of drawing pixels, you use it to draw a map with hex tiles. You simply click Generate a new map, give your desired map size (e.g. 64 tiles wide by 64 tiles high), and the map is drawn for you. The tiles are simple divs that have the relevant background image as a texture, drawn one by one in a simple for loop. But as the code grows in size, so do my worries.
Because the map I create in my own map editor will be used in an online multiplayer game, it will be huge! For example, to support at least 20000 users in the upcoming game there should be at least 20000 tiles just for the users to occupy, not to mention the territory they will own, mountains, jungles, barbarian tribes, and so on.
I did the calculations and found that a 512 by 512 map (about 262000 tiles) would sufficiently answer the needs of that many users. However, such a map is huge, so I decided to test how long it takes to build it with my code doing the least processing possible, and I found that it takes nearly a minute or more, which is not acceptable from a gamer's perspective.
Zooming in on such a huge map, for example, means looping through all 262000 tiles to change their size. Although that takes less time than drawing/loading the map from scratch, it is still slow.
I was thinking: with a map that huge, which won't even fit in a browser window, why should I draw the entire map? Why not instead load only the part the user is currently looking at? Loading/drawing only the part that is needed would reduce load time and increase performance. But this is proving to be a real challenge, and there are very limited resources online about implementing such functionality. Where do I start? How should I approach the problem and its solution?
I would start by separating your concerns a little more. You're able to view WxH pixels, and the top left of the user's screen sits at (x,y) coordinates in the world.
Loading the entire map, as you have pointed out, is crazy. But by knowing how large the game world is, and by knowing the user's coordinates in that world, you can easily select the subset of items that are in view.
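A minimal sketch of that selection, assuming square tiles of tileSize pixels and a viewport whose top-left corner is at (camX, camY) in world pixels:

function visibleTileRange(camX, camY, viewW, viewH, tileSize, mapCols, mapRows) {
  return {
    firstCol: Math.max(0, Math.floor(camX / tileSize)),
    firstRow: Math.max(0, Math.floor(camY / tileSize)),
    lastCol: Math.min(mapCols - 1, Math.ceil((camX + viewW) / tileSize)),
    lastRow: Math.min(mapRows - 1, Math.ceil((camY + viewH) / tileSize))
  };
}
// only tiles in this range need divs/draw calls; a 512x512 map then costs
// roughly (viewW / tileSize) * (viewH / tileSize) tiles per frame, not 262000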
Keep in mind that at a zoomed-out resolution you shouldn't be using the full-sized images. Loading 262000 images (for just the map!) is going to be too heavy and will probably crash. You should have different images for different zoom levels. This is a much bigger question, and you should buy a book and do more research on Google. But at least thinking about "where the user is" vs "where the items in the world are" is the place I would start.
Hope that helps.
I've been working on a WebGL project that runs on top of the Three.js library. I am rendering several semi-transparent meshes, and I notice that depending on the angle you tilt the camera, a different object is on top.
To illustrate the problem, I made a quick demo using three semi-transparent cubes. When you rotate the image past perpendicular to the screen, the second half of the smallest cube "jumps" and is no longer visible. However, shouldn't it still be visible? I tried adjusting some of the blending equations, but that didn't seem to make a difference.
What I'm wondering is whether or not this is a bug in WebGL/Three, or something I can fix. Any insight would be much appreciated :)
Well, that's something they weren't able to solve when they invented all this hardware-accelerated graphics business, and it sounds like we'll have to deal with it for a long while.
The issue here is that graphics cards do not sort polygons, nor objects. The graphics card is "dumb": you tell it to draw an object, and it draws the pixels that represent it; in another, non-visible "image" called the z-buffer (or depth buffer), it also draws, for each pixel, not the color but the distance to the camera. For any object you draw afterwards, the graphics card checks each pixel's distance to the camera, and if it's farther away, it won't draw it (unless you disable the check, that is).
This speeds things up a lot and gives you nice intersections between solid objects, but it doesn't play well with transparency. Say you have 2 transparent objects and you want A to be drawn behind B: you'll need to tell the graphics card to draw A first and then B. This works fine as long as they're not intersecting. To draw 2 intersecting transparent objects, the graphics card would have to sort all the polygons, and since it doesn't do that, you'll have to.
It's one of these things that you need to understand and specifically tweak for your case.
In three.js, if you set material.transparent = true, we'll sort that object so it's drawn before (earlier than) other objects that are in front. But we can't really help you if you want to intersect them.
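For what it's worth, a couple of per-material knobs people commonly reach for; this is a partial mitigation, not a fix for intersecting geometry, and all values here are examples:

var mat = new THREE.MeshBasicMaterial({
  color: 0x00ff00,
  transparent: true,
  opacity: 0.5,
  depthWrite: false // transparent faces no longer occlude each other via the depth buffer
});
mesh.renderOrder = 1; // draws after meshes with a lower renderOrder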